Thursday, July 5, 2007

Application Development: What's new in ADO.NET version 2.0

With the first public alpha version of the coming release of Visual Studio .NET (christened "Whidbey") now in the hands of developers, it's time to start thinking about your applications and how they might be affected as you move to this new version. Although the move from version 1.0 to 1.1 of the .NET Framework was trivial and involved mostly bug fixes, performance enhancements, and the integration of previously separate technologies such as the ODBC and Oracle .NET Data Providers, version 2.0 changes the story for data access. It includes a host of new features, some of which may cause you to rethink how data is accessed and manipulated in your applications.

In this article, I'll give you a brief rundown of what I see as the most significant new features in ADO.NET and how you might take advantage of them in your implementations.

Providing a wider view of data
Before delving into the specific features of ADO.NET v2.0, let me preface the discussion by noting that one of the overall design goals of this version was to allow a higher degree of interoperability between data accessed relationally, accessed as XML, and accessed as custom objects. Since these three make up the "ruling triumvirate" in representing data in an application, ADO.NET v2.0 was designed to make it easier for developers to use the appropriate model when desired within and across applications.

For example, in applications built using a service-oriented architecture (SOA), persistent business data will often be manipulated relationally; data that represents the process and encapsulates business rules will be manipulated as objects; and message and lookup data that must be serialized for transport will be handled as XML. To present the new features, I've factored them into two broad buckets: the new features that provide this wider view of data and the features that enhance or extend the relational paradigm.

Widening the .NET
There are two primary new features you'll want to explore in the area of extending your ability to handle data. Let's take a look at each.

ObjectSpaces
This technology was previewed several years ago at PDC and will now be released in Whidbey. Simply put, ObjectSpaces provides an object-relational mapping layer in the System.Data.ObjectSpaces namespace, which instantiates and populates custom objects from a relational database. This works through XML metadata stored in a mappings file that is passed to the constructor of the ObjectSpace class, which maps relational objects to .NET objects and relational types to .NET types.

The programming model supports queries (ObjectQuery), sets of objects maintained in memory (ObjectSet), access to streams of objects (ObjectReader), and even lazy loading of objects to improve performance (ObjectList and ObjectHolder). Following is an example of how the programming model looks:

// Create the mappings (the mappings file and connection are placeholders)
ObjectSpace oa = new ObjectSpace("mappings-file", connection);
// Query the data for Product objects in the Equipment category
ObjectQuery oq = new ObjectQuery(typeof(Product), "category='Equipment'");
ObjectReader or = oa.GetObjectReader(oq);
// Traverse the data
while (or.Read())
{
   Product p = (Product)or.Current;
   Console.WriteLine(p.Name);
}

Although in the current release, ObjectSpaces works only with SQL Server 2000 and SQL Server "Yukon" (the release of SQL Server more or less synchronized with the release of Whidbey), this technology will be extended to access other relational stores in the future. ObjectSpaces is ideal when you want to represent your data using a domain model and encapsulate business logic as methods on your custom objects, since it will save you from writing the tiresome code needed to load from and persist your objects to a relational data store.

SQLXML and XmlAdapter
Although the ADO.NET DataSet has always included the ability to load data as XML and serialize its contents as XML, the translation between the two ways of representing data always included some tension. For example, in order for the XML to load into a DataSet, its schema couldn't be overly complex, and it needed to map well into the relational DataTables of the DataSet.

Although DataSet support of XML has been enhanced in version 2 to allow loading of XML with multiple in-line schemas, loading schemas with repeated element names in different namespaces, and loading/serializing directly from DataTable objects, the data must still be relational in nature to work with the DataSet. To overcome this, version 2 includes the System.Xml.XmlAdapter class. This class is analogous to the DataAdapter classes in that it is a liaison between a data source and a representation of the data, but is used to query and load XML from an XML View into an XPathDocument object (called XPathDocument2 in the alpha; however, that will be renamed to XPathDocument before release).

XML Views allow relational tables (in SQL Server only) to be mapped to an XML schema via mappings files; they are the core component of the SQLXML 3.0 technology, once provided separately from the .NET Framework but now migrated into ADO.NET v2 (including the ability to bulk-load XML into SQL Server) in the System.Data.SqlXml namespace. Using this approach, you can provide a set of XML Views for your SQL Server data, query the data with the XQuery language using the new XQueryProcessor class and the Fill method of the XmlAdapter, and manipulate the data using the XPathDocument, XPathEditor, XPathNavigator, and XPathChangeNavigator classes.

The changes are written to SQL Server by calling the Update method of the XmlAdapter, which relies on the XML View to write the SQL statements to execute using a mappings file. The advantage of this approach is that you can treat your SQL Server data no differently than other XML data stores and can take advantage of the full fidelity of the XML when making changes. Here is a simple example of the programming model:

// Set up the connection and query
SqlConnection con = new SqlConnection("connection-string");
XQueryProcessor xq = new XQueryProcessor();
xq.XmlViewSchemaDictionary.Add("name", new XmlTextReader("mappings-file"));
xq.Compile(…);
// Set up the datasource
XmlDataSourceResolver xd = new XmlDataSourceResolver();
xd.Add("MyDB", con);
// Configure the XmlAdapter
XmlAdapter xa = new XmlAdapter(xd);
XPathDocument xp = new XPathDocument();
// Execute the query and populate the document
xa.Fill(xp, xq.XmlCommand);
// Navigate the document…
XPathNavigator xn = xp.CreateXPathNavigator();
// Or edit the document and change the data
XPathEditor xe = xp.CreateXPathEditor();
// Set the schema and update the database
MappingSchema ms = new MappingSchema("mappings-file");
xa.Update(xp, ms);

Of course, XML Views simply provide the mapping of the data to and from SQL Server. If you're not using SQL Server, you can still take advantage of the substantial changes to XPathDocument (that will supersede and make obsolete the XmlDocument class) and its related classes to more easily query, navigate, and edit XML that you load from other sources.

For example, you can use a new XmlFactory class to create a related set of XmlReader, XmlWriter, and XPathNavigator classes for an XML document. These classes now support the ability to read and write .NET types to and from XML documents. And, of course, performance has improved for reading and writing with XmlTextReader and XmlTextWriter, and when using XSLT.

Extending the relational paradigm
The second broad set of changes relates to those made in ADO.NET v2.0 to enhance relational database access. I've organized these into changes that all developers can take advantage of regardless of the underlying database, and changes that require SQL Server 2000 or the next version of SQL Server, Yukon.

Provider factory
Although the design of .NET Data Providers is based on a common set of interfaces and base classes, Microsoft did not ship factory classes in v1.0 or v1.1 to help developers write polymorphic data access code. As a result, developers had to write their own.

In version 2, ADO.NET includes factory classes inherited from System.Data.Common.DbProviderFactory to create the standard connection, command, data reader, table, parameter, permissions, and data adapter classes; these help you write code that targets multiple databases. A factory is accessed using the GetFactory method of the DbProviderFactories class and can be configured in the application's configuration file using the DbProviderConfigurationHandler.
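
For example, here is a minimal sketch of provider-agnostic data access through a factory; the provider invariant name would normally come from configuration, and the connection string and table name are placeholders:

using System.Data.Common;

// Resolve the factory for a provider by its invariant name
DbProviderFactory factory = DbProviderFactories.GetFactory("System.Data.SqlClient");

using (DbConnection con = factory.CreateConnection())
{
   con.ConnectionString = "connection-string";
   DbCommand cm = factory.CreateCommand();
   cm.Connection = con;
   cm.CommandText = "SELECT COUNT(*) FROM Products";   // assumed table
   con.Open();
   // The same code works against any provider that supplies a factory
   Console.WriteLine(cm.ExecuteScalar());
}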

Asynchronous data access
Commands executed by ADO.NET in version 1.0 using the ExecuteNonQuery, ExecuteReader, and ExecuteXmlReader methods of SqlCommand were synchronous and would block the current thread until the results were returned by the server. In v2.0, each of these methods includes both Begin and End versions to support asynchronous execution from the client's perspective.

This technique employs the familiar asynchronous programming model using the AsyncCallback delegate in .NET, and so includes the SqlAsyncResult class to implement the IAsyncResult interface. While this feature works only for SqlClient at the moment, look for it to perhaps be extended to other providers before the release. Following is an example of setting up an asynchronous command. (Note that the SqlAsyncResult class is not included in the alpha at this time, so the code will not execute.)

// Set up the connection and command
SqlConnection con = new SqlConnection("connection-string");
SqlCommand cm = new SqlCommand("SQL statement", con);
con.Open();
cm.BeginExecuteNonQuery(new AsyncCallback(DoneExecuting), null);
// Thread is free, do other things
// Callback method
private void DoneExecuting(SqlAsyncResult ar)
{
   int numRows = ar.EndExecuteNonQuery(ar);
   // print the number of rows affected
}

Batch updates
In version 1.0, a DataAdapter always sent changes to rows one at a time to the server. In version 2.0, the DataAdapter exposes an UpdateBatchSize property that, if supported by the data provider, allows changed rows to be sent to the server in groups. This cuts down on the number of round-trips to the server and therefore increases performance.
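
For example, with an existing SqlDataAdapter (da) whose update commands are already configured and a DataSet (ds) containing changed rows, enabling batching is a one-line change (the batch size of 100 and the "Products" table name are arbitrary):

// Send changed rows to the server in groups of 100 instead of one at a time
da.UpdateBatchSize = 100;
da.Update(ds, "Products");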

Data paging
In both SqlClient and OracleClient, the command object now exposes an ExecutePageReader method that allows you to pass in the starting row and the number of rows to return from the server. This allows for more efficient data access by retrieving only the rows you need to display. However, this feature pages through the rows as they currently exist in the table, so subsequent calls may repeat rows from a previous page (because of inserts) or skip ahead to rows from later pages (because of deletes). It therefore works best with relatively static data.
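
A hypothetical sketch based on the description above; ExecutePageReader exists only in the alpha bits, so its exact signature may differ:

// Return one "page" of rows: skip the first 20, return the next 10 (alpha API)
SqlConnection con = new SqlConnection("connection-string");
SqlCommand cm = new SqlCommand("SELECT ProductID, Name FROM Products", con);
con.Open();
SqlDataReader dr = cm.ExecutePageReader(CommandBehavior.Default, 20, 10);
while (dr.Read())
{
   Console.WriteLine(dr["Name"]);
}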

Binary DataSet remoting
Version 2.0 now allows DataSets to be serialized using a binary format when employing .NET remoting. This both increases the performance of remoting data between .NET applications and reduces the number of bytes transferred.
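
Enabling the binary format is a one-line change on an existing DataSet (ds); here is a quick sketch that serializes it with the binary formatter to show the effect:

using System.IO;
using System.Runtime.Serialization.Formatters.Binary;

// Ask the DataSet to serialize itself in binary rather than XML
ds.RemotingFormat = SerializationFormat.Binary;

BinaryFormatter bf = new BinaryFormatter();
using (MemoryStream ms = new MemoryStream())
{
   bf.Serialize(ms, ds);
   // For large DataSets this is noticeably smaller than the XML form
   Console.WriteLine(ms.Length);
}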

DataSet and DataReader transfer
In version 1.1, you could only load a DataSet from a DataAdapter. But in version 2.0, you can also load one directly using a DataReader and the Load method. Conversely, you can now generate a DataTableReader (inherited from DbDataReader) with the GetDataReader method in order to traverse the contents of a DataSet. This feature makes it easy to load a DataSet and view its data.
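
Here is a minimal sketch of the round trip, assuming a Products table; note that GetDataReader is the method name described for the alpha and may change before release:

// Load a DataSet directly from a DataReader
SqlConnection con = new SqlConnection("connection-string");
SqlCommand cm = new SqlCommand("SELECT * FROM Products", con);
con.Open();
SqlDataReader dr = cm.ExecuteReader();
DataSet ds = new DataSet();
ds.Load(dr, LoadOption.OverwriteChanges, "Products");
// Traverse the DataSet's contents as a reader again
DataTableReader dtr = ds.GetDataReader();   // alpha name; may change
while (dtr.Read())
{
   Console.WriteLine(dtr[0]);
}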

Climbing Yukon
In this category are the new features of ADO.NET v2.0 that relate directly to the new release of SQL Server, code-named Yukon, due out in the same time frame:

MARS
Multiple active result sets (MARS) allows you to work with more than one concurrent result set on a single connection to Yukon. This can be efficient if you need to open a SqlDataReader and, during the traversal, execute a command against a particular row. MARS allows both commands to share the same SqlConnection object so that a second connection to SQL Server is not required.
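
Here is a rough sketch of the idea, assuming a connection string with the MultipleActiveResultSets keyword enabled and assuming Products table and column names:

// One connection, two active commands (MARS)
SqlConnection con = new SqlConnection(
   "server=(local);database=Northwind;integrated security=SSPI;" +
   "MultipleActiveResultSets=True");
con.Open();
SqlCommand select = new SqlCommand("SELECT ProductID, Name FROM Products", con);
SqlCommand update = new SqlCommand(
   "UPDATE Products SET Discontinued = 1 WHERE ProductID = @id", con);
update.Parameters.Add("@id", SqlDbType.Int);
using (SqlDataReader dr = select.ExecuteReader())
{
   while (dr.Read())
   {
      // Execute a second command on the same open connection during traversal
      update.Parameters["@id"].Value = dr.GetInt32(0);
      update.ExecuteNonQuery();
   }
}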

Change notification
One of the most interesting new features of Yukon is its ability to support notifications. ADO.NET v2.0 includes programmatic support for this feature by including a SqlNotificationRequest object that can be bound to a SqlCommand.

When data returned from the command changes in the database, a message is sent to the specified notification queue. ADO.NET code can then query the queue either by using an asynchronous query that blocks until a message is sent or by periodically checking the queue using new Transact-SQL syntax.

To make this feature even easier to work with, a SqlDependency class that sets up an asynchronous delegate is included. This will be called when the data changes, and it can be used like other dependencies in conjunction with the ASP.NET caching engine. Here is an example of using a SqlDependency object:

// Set up the connection and command
SqlConnection con = new SqlConnection("connection-string");
SqlCommand cm = new SqlCommand("SELECT statement", con);
SqlDependency dep = new SqlDependency(cm);
dep.OnChanged += new OnChangedEventHandler(HandleChange);
SqlDataReader dr = cm.ExecuteReader();
// Process the data
private void HandleChange (object sender, SqlNotificationEventArgs e)
{
  // A change has been made to the data
  // Inspect the type of change using e.Type
}

Yukon types
ADO.NET v2.0 supports the full set of Yukon data types, including XML and User Defined Types (UDTs). This means that columns in Yukon defined as XML can be retrieved as XmlReader objects, and that UDTs can be passed to stored procedures and returned from queries as standard .NET types. This allows your applications to work with data as fully formed objects while interacting with the database using those same objects. This feature can be used profitably when writing managed code that runs in-process in SQL Server, allowing both the managed stored procedure and the client code to use the same .NET type.
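
For example, an XML column can be pulled out through an XmlReader using the SqlXml type; in this sketch the table and column names are assumptions:

// Read a Yukon XML column as an XmlReader
SqlConnection con = new SqlConnection("connection-string");
con.Open();
SqlCommand cm = new SqlCommand("SELECT Spec FROM Products WHERE ProductID = 1", con);
using (SqlDataReader dr = cm.ExecuteReader())
{
   if (dr.Read())
   {
      SqlXml spec = dr.GetSqlXml(0);          // System.Data.SqlTypes.SqlXml
      XmlReader xr = spec.CreateReader();
      while (xr.Read())
      {
         // process the XML
      }
   }
}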

Server-side cursors
Because they often caused applications to perform poorly, the server-side cursors of ADO 2.x were dropped from ADO.NET v1.0 and v1.1. ADO.NET v2.0 now reintroduces the concept for Yukon using the ExecuteResultset and ExecuteRow methods of the SqlCommand object and the SqlResultset class.

The SqlResultset class offers a fully scrollable and updateable cursor that can be useful for applications that need to traverse a large amount of data and update only a few rows. Although this feature can be used from client applications such as ASP.NET, it is mainly intended for use when writing managed code that runs in-process with Yukon in the form of stored procedures.

Bulk copy
Although not restricted to Yukon, ADO.NET v2.0 now allows programmatic access to the BCP or bulk copy API exposed by SQL Server. This is done using the SqlBulkCopyOperation and SqlBulkCopyColumnAssociator classes in the System.Data.SqlClient namespace.
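
Here is a minimal sketch of the idea. Note that the class shown (SqlBulkCopy) reflects a simpler shape than the alpha names mentioned above, and the destination table and source DataTable are assumptions:

// Bulk copy the rows of an existing DataTable into a SQL Server table
SqlConnection con = new SqlConnection("connection-string");
con.Open();
using (SqlBulkCopy bcp = new SqlBulkCopy(con))
{
   bcp.DestinationTableName = "ProductsArchive";   // assumed table
   bcp.WriteToServer(productsTable);               // productsTable is an existing DataTable
}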

What's New in ASP.NET 2.0?

ASP.NET is a programming framework built on the common language runtime that can be used on a server to build powerful Web applications. The first version of ASP.NET offered several important advantages over previous Web development models. ASP.NET 2.0 improves upon that foundation by adding support for several new and exciting features in the areas of developer productivity, administration and management, extensibility, and performance:

Developer Productivity

ASP.NET 2.0 encapsulates common Web tasks into application services and controls that can be easily reused across web sites. With these basic building blocks, many scenarios can now be implemented with far less custom code than was required in previous versions. With ASP.NET 2.0 it is possible to significantly reduce the amount of code and concepts necessary to build common scenarios on the web.
  • New Server Controls. ASP.NET 2.0 introduces many new server controls that enable powerful declarative support for data access, login security, wizard navigation, menus, treeviews, portals, and more. Many of these controls take advantage of core application services in ASP.NET for scenarios like data access, membership and roles, and personalization. Some of the new families of controls in ASP.NET 2.0 are described below.

    • Data Controls. Data access in ASP.NET 2.0 can be accomplished completely declaratively (no code) using the new data-bound and data source controls. There are new data source controls to represent different data backends such as SQL database, business objects, and XML, and there are new data-bound controls for rendering common UI for data, such as gridview, detailsview, and formview.

    • Navigation Controls. The navigation controls provide common UI for navigating between pages in your site, such as treeview, menu, and sitemappath. These controls use the site navigation service in ASP.NET 2.0 to retrieve the custom structure you have defined for your site.

    • Login Controls. The new login controls provide the building blocks to add authentication and authorization-based UI to your site, such as login forms, create user forms, password retrieval, and custom UI for logged in users or roles. These controls use the built-in membership and role services in ASP.NET 2.0 to interact with the user and role information defined for your site.

    • Web Part Controls. Web parts are an exciting new family of controls that enable you to add rich, personalized content and layout to your site, as well as the ability to edit that content and layout directly from your application pages. These controls rely on the personalization services in ASP.NET 2.0 to provide a unique experience for each user in your application.

  • Master Pages. This feature provides the ability to define common structure and interface elements for your site, such as a page header, footer, or navigation bar, in a common location called a "master page", to be shared by many pages in your site. In one simple place you can control the look, feel, and much of the functionality for an entire Web site. This improves the maintainability of your site and avoids unnecessary duplication of code for shared site structure or behavior.

  • Themes and Skins. The themes and skins features in ASP.NET 2.0 allow for easy customization of your site's look-and-feel. You can define style information in a common location called a "theme", and apply that style information globally to pages or controls in your site. Like Master Pages, this improves the maintainability of your site and avoids unnecessary duplication of code for shared styles.

  • Personalization. Using the new personalization services in ASP.NET 2.0 you can easily create customized experiences within Web applications. The Profile object enables developers to easily build strongly typed, sticky data stores for user accounts and build highly customized, relationship-based experiences. At the same time, a developer can leverage Web Parts and the personalization service to enable Web site visitors to completely control the layout and behavior of the site, with the knowledge that the site is completely customized for them. Personalization scenarios are now easier to build than ever before and require significantly less code and effort to implement.

  • Localization. Enabling globalization and localization in Web sites today is difficult, requiring large amounts of custom code and resources. ASP.NET 2.0 and Visual Studio 2005 provide tools and infrastructure to easily build localizable sites, including the ability to auto-detect the incoming locale and display the appropriate locale-based UI. Visual Studio 2005 includes built-in tools to dynamically generate resource files and localization references. Together, building localized applications becomes a simple and integrated part of the development experience.

Administration and Management

ASP.NET 2.0 is designed with administration and manageability in mind. We recognize that while simplifying the development experience is important, deployment and maintenance in a production environment is also a key component of an application's lifetime. ASP.NET 2.0 introduces several new features that further enhance the deployment, management, and operations of ASP.NET servers.
  • Configuration API. ASP.NET 2.0 contains new configuration management APIs, enabling developers and administrators to programmatically build programs or scripts that create, read, and update Web.config and machine.config configuration files (a short sketch appears after this list).

  • ASP.NET MMC Admin Tool. ASP.NET 2.0 provides a new comprehensive admin tool that plugs into the existing IIS Administration MMC, enabling an administrator to graphically read or change common settings within our XML configuration files.

  • Pre-compilation Tool. ASP.NET 2.0 delivers a new application deployment utility that enables both developers and administrators to precompile a dynamic ASP.NET application prior to deployment. This precompilation automatically identifies any compilation issues anywhere within the site, as well as enables ASP.NET applications to be deployed without any source being stored on the server (one can optionally remove the content of .aspx files as part of the compile phase), further protecting your intellectual property.

  • Health Monitoring and Tracing. ASP.NET 2.0 also provides new health-monitoring support to enable administrators to be automatically notified when an application on a server starts to experience problems. New tracing features will enable administrators to capture run-time and request data from a production server to better diagnose issues. ASP.NET 2.0 is delivering features that will enable developers and administrators to simplify the day-to-day management and maintenance of their Web applications.
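
As referenced in the Configuration API bullet above, the new configuration classes can be driven entirely from code. A minimal sketch using the Web configuration API (the appSettings key and value are assumptions):

using System.Configuration;
using System.Web.Configuration;

// Open the root Web.config of the current site, add a setting, and save it
Configuration config = WebConfigurationManager.OpenWebConfiguration("~");
config.AppSettings.Settings.Add("SupportEmail", "support@example.com");   // assumed key/value
config.Save();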

Flexible Extensibility

ASP.NET 2.0 is a well-factored and open system, where any component can be easily replaced with a custom implementation. Whether it is server controls, page handlers, compilation, or core application services, you'll find that all are easily customizable and replaceable to tailor to your needs. Developers can plug in custom code anywhere in the page lifecycle to further customize ASP.NET 2.0 to their needs.
  • Provider-driven Application Services. ASP.NET 2.0 now includes built-in support for membership (user name/password credential storage) and role management services out of the box. The new personalization service enables quick storage/retrieval of user settings and preferences, facilitating rich customization with minimal code. The new site navigation system enables developers to quickly build link structures consistently across a site. As all of these services are provider-driven, they can be easily swapped out and replaced with your own custom implementation. With this extensibility option, you have complete control over the data store and schema that drives these rich application services.

  • Server Control Extensibility. ASP.NET 2.0 includes improved support for control extensibility, such as more base classes that encapsulate common behaviors, improved designer support, more APIs for interacting with client-side script, metadata-driven support for new features like themes and accessibility verification, better state management, and more.

  • Data Source Controls. Data access in ASP.NET 2.0 is now performed declaratively using data source controls on a page. In this model, support for new data backend storage providers can be easily added by implementing custom data source controls. Additionally, the SqlDataSource control that ships in the box has built-in support for any ADO.NET managed provider that implements the new provider factory model in ADO.NET.

  • Compilation Build Providers. Dynamic compilation in ASP.NET 2.0 is now handled by extensible compilation build providers, which associate a particular file extension with a handler that knows how to compile that extension dynamically at runtime. For example, .resx files can be dynamically compiled to resources, .wsdl files to web service proxies, and .xsd files to typed DataSet objects. In addition to the built-in support, it is easy to add support for additional extensions by implementing a custom build provider and registering it in Web.config.

  • Expression Builders. ASP.NET 2.0 introduces a declarative new syntax for referencing code to substitute values into the page, called Expression Builders. ASP.NET 2.0 includes expression builders for referencing string resources for localization, connection strings, application settings, and profile values. You can also write your own expression builders to create your own custom syntax to substitute values in a page rendering.

Performance and Scalability

ASP.NET is built to perform, using a compiled execution model for handling page requests and running on the world's fastest web server, Internet Information Services. ASP.NET 2.0 also introduces key performance benefits over previous versions.
  • 64-Bit Support. ASP.NET 2.0 is now 64-bit enabled, meaning it can take advantage of the full memory address space of new 64-bit processors and servers. Developers can simply copy existing 32-bit ASP.NET applications onto a 64-bit ASP.NET 2.0 server and have them automatically be JIT compiled and executed as native 64-bit applications (no source code changes or manual re-compile are required).

  • Caching Improvements. ASP.NET 2.0 also now includes automatic database server cache invalidation. This powerful and easy-to-use feature allows developers to aggressively output cache database-driven page and partial page content within a site and have ASP.NET automatically invalidate these cache entries and refresh the content whenever the back-end database changes. Developers can now safely cache time-critical content for long periods without worrying about serving visitors stale data.
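
For example, cached data can be tied to a SQL Server table so the cache entry is evicted when the table changes. A minimal sketch of the programmatic form, assuming the Northwind database and Products table have been enabled for notifications (for example, with the aspnet_regsql.exe tool) and that GetProductData is a helper you supply:

// "Northwind" is the database entry configured under <sqlCacheDependency> in Web.config
SqlCacheDependency dep = new SqlCacheDependency("Northwind", "Products");
// Cache the data until the underlying Products table changes
Cache.Insert("ProductData", GetProductData(), dep);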

Events and Life Cycle of a Page (ASP.Net 1.x VS ASP.Net 2.0):

In the ASP.NET runtime, the life cycle of a page is marked by a series of events. In ASP.NET 1.x, a page request is sent to the Web server based on user interaction. The event that is initiated by the page request is Init. After the Init event, the Load event is raised. Following the Load event, the PreRender event is raised. Finally, the Unload event is raised and an output page is returned to the client.

ASP.NET 2.0 adds a few new events to allow you to follow the request processing more closely and precisely. These new events are listed below.

New Events in ASP.NET 2.0:

  • PreInit. This occurs before the page begins initialization. This is the first event in the life of an ASP.NET 2.0 page.

  • InitComplete. This occurs when the page initialization is completed.

  • PreLoad. This occurs immediately after initialization and before the page begins loading the state information.

  • LoadComplete. This occurs at the end of the load stage of the page's life cycle.

  • PreRenderComplete. This occurs when the pre-rendering phase is complete and all child controls have been created. After this event, the personalization data and the view state are saved and the page renders to HTML.

These new events enable developers to dynamically modify the page output and the state of constituent controls by handling them.
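
For example, PreInit is the right place for changes that must happen before initialization begins, such as assigning a theme dynamically (the theme name below is an assumption):

// Handle the new PreInit event to choose a theme before the page initializes
protected void Page_PreInit(object sender, EventArgs e)
{
   Page.Theme = "SmokeAndGlass";   // assumed theme name
}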

What's New in the .NET Framework 2.0

The first version of the .NET Framework (1.0) was released in 2002 to much enthusiasm. The latest version, the .NET Framework 2.0, was introduced in 2005 and is considered a major release of the framework.

With each release of the framework, Microsoft has always tried to ensure that there were minimal breaking changes to existing code. Thus far, they have been very successful at this goal.

Make sure that you create a staging server to completely test the upgrade of your applications to the .NET Framework 2.0 as opposed to just upgrading a live application.

The following details some of the changes that are new to the .NET Framework 2.0 as well as new additions to Visual Studio 2005 — the development environment for the .NET Framework 2.0.

SQL Server integration

After a long wait, the latest version of SQL Server has finally been released. This version, SQL Server 2005, is special in many ways. Most important for the .NET developer is that SQL Server 2005 now hosts the CLR. Microsoft has tied its .NET offering together so that the .NET Framework 2.0, Visual Studio 2005, and SQL Server 2005 are all released in unison. This is important because most applications use all three of these pieces, and they need to be upgraded together so that they work with each other seamlessly.

Because SQL Server 2005 now hosts the CLR, you no longer have to build the database portions of your application using only the T-SQL programming language. Instead, you can build items such as stored procedures, triggers, and even data types in any of the .NET-compliant languages, such as C#.
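
As a rough sketch of what this looks like, here is a simple stored procedure written in C#; it uses the in-process context connection and streams its results back through the SQL pipe (the Products table is an assumption):

using System.Data.SqlClient;
using Microsoft.SqlServer.Server;

public class StoredProcedures
{
   // The attribute marks this method as a SQL Server stored procedure
   [SqlProcedure]
   public static void GetProductCount()
   {
      // "context connection" runs the command on the hosting SQL Server connection
      using (SqlConnection con = new SqlConnection("context connection=true"))
      {
         SqlCommand cm = new SqlCommand("SELECT COUNT(*) FROM Products", con);
         con.Open();
         // Send the result set back to the caller
         SqlContext.Pipe.ExecuteAndSend(cm);
      }
   }
}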

SQL Server Express is the 2005 version of SQL Server that replaces MSDE. This version doesn't have the strict limitations MSDE had.

64-Bit support

Most programming today is done on 32-bit machines. It was a monumental leap forward in application development when computers went from 16-bit to 32-bit. More and more enterprises are moving to the latest 64-bit servers from companies such as Intel (Itanium chips) and AMD (x64 chips), and the .NET Framework 2.0 has now been 64-bit enabled for this migration.

Microsoft has been working hard to make sure that everything you build in the 32-bit world of .NET will run in the 64-bit world. This means that everything you do with SQL Server 2005 or ASP.NET will not be affected by moving to 64-bit. Microsoft itself made a lot of changes to the CLR in order to get a 64-bit version of .NET to work. Changes were made to things such as garbage collection (to handle larger amounts of data), the JIT compilation process, exception handling, and more.

Moving to 64-bit gives you some powerful additions. The most important (and most obvious) is that 64-bit servers give you a larger address space. Going to 64-bit also allows for larger primitive types. For instance, a 32-bit integer can represent 2^32 (4,294,967,296) distinct values, while a 64-bit integer can represent 2^64 (18,446,744,073,709,551,616). This comes in quite handy for applications that need to work with very large numbers, such as the U.S. debt.

Companies such as Microsoft and IBM are pushing their customers to take a look at 64-bit. One of the main areas of focus is database and virtual storage capabilities, as this is seen as an area in which moving to 64-bit makes a lot of sense.

Visual Studio 2005 can install and run on a 64-bit computer. This IDE has both 32-bit and 64-bit compilers on it. One final caveat is that the 64-bit .NET Framework is meant only for Windows Server 2003 SP1 or better as well as other 64-bit Microsoft operating systems that might come our way.

When you build your applications in Visual Studio 2005, you can change the build properties of your application so that it compiles specifically for 64-bit computers. To find this setting, you will need to pull up your application's properties and click on the Build tab from within the Properties page. On the Build page, click on the Advanced button and this will pull up the Advanced Compiler Setting dialog. From this dialog, you can change the target CPU from the bottom of the dialog. From here, you can establish your application to be built for either an Intel 64-bit computer or an AMD 64-bit computer. This is shown here in Figure 1.

Figure 1: Building your application for 64-bit

Generics

In order to make collections a more powerful feature and also increase their efficiency and usability, generics were introduced to the .NET Framework 2.0. This introduction to the underlying framework means that languages such as C# and Visual Basic 2005 can now build applications that use generic types. The idea of generics is nothing new. They look similar to C++ templates but are a bit different. You can also find generics in other languages, such as Java. Their introduction into the .NET Framework 2.0 languages is a huge benefit for the user.

Generics enable you to make a generic collection that is still strongly typed, providing fewer chances for errors (because type mismatches are caught at compile time rather than at runtime), increasing performance, and giving you IntelliSense features when you are working with the collections.

To utilize generics in your code, you will need to make reference to the System.Collections.Generic namespace. This will give you access to generic versions of the Stack, Dictionary, SortedDictionary, List and Queue classes. The following demonstrates the use of a generic version of the Stack class:

void Page_Load(object sender, EventArgs e)
{
   // Create a Stack that can hold only strings
   System.Collections.Generic.Stack<string> myStack =
      new System.Collections.Generic.Stack<string>();
   myStack.Push("St. Louis Rams");
   myStack.Push("Indianapolis Colts");
   myStack.Push("Minnesota Vikings");

   // Copy the stack's contents into a strongly typed array
   string[] myArray = myStack.ToArray();

   foreach (string item in myArray)
   {
      Label1.Text += item + "<br />";
   }
}

In the above example, the Stack class is explicitly declared to hold items of type string. You specify the collection type with the use of angle brackets; this example declares a Stack of type string using Stack<string>. If you wanted something other than a Stack of type string (for instance, int), you would specify Stack<int>.

Because the collection of items in the Stack class is typed as string as soon as the Stack is created, the Stack no longer converts everything to type object and then back (in the foreach loop) to type string. With value types, that round trip through object is called boxing and unboxing, and it is expensive; even with reference types it requires runtime casts. Because this code specifies the type up front, the performance of working with the collection is increased.

In addition to just working with various collection types, you can also use generics with classes, delegates, methods and more.

Anonymous methods

Anonymous methods enable you to put programming steps within a delegate that you can then later execute instead of creating an entirely new method. For instance, if you were not using anonymous methods, you would use delegates in a manner similar to the following:

public partial class Default_aspx
{
   void Page_Load(object sender, EventArgs e)
   {
      this.Button1.Click += ButtonWork;
   }

   void ButtonWork(object sender, EventArgs e)
   {
      Label1.Text = "You clicked the button!";
   }
}

But using anonymous methods, you can now put these actions directly in the delegate as shown here in the following example:

public partial class Default_aspx
{
   void Page_Load(object sender, EventArgs e)
   {
      // The event handler's body is supplied inline as an anonymous method
      this.Button1.Click += delegate(object myDelSender, EventArgs myDelEventArgs)
      {
         Label1.Text = "You clicked the button!";
      };
   }
}

When using anonymous methods, there is no need to create a separate method. Instead you place the necessary code directly after the delegate declaration. The statements and steps to be executed by the delegate are placed between curly braces and closed with a semicolon.

Nullable types

Because generics have been introduced into the underlying .NET Framework 2.0, it is now possible to create nullable value types using System.Nullable<T>. This is ideal for situations such as creating sets of nullable items of type int. Before this, it was always difficult to create an int with a null value from the get-go or to later assign null values to an int.

To create a nullable type of type int, you would use the following syntax:

System.Nullable<int> x = new System.Nullable<int>();

There is also a new type modifier, ?, that you can use to declare a type as nullable. This is shown in the following example:

int? salary = 800000;

This ability to create nullable types is not a C#-only feature, as it was built into the .NET Framework itself and, as stated, exists because of the new generics support in .NET. For this reason, you will also find nullable types in Visual Basic 2005.
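
In use, a nullable int exposes HasValue and Value, and C# 2005 also adds the ?? operator to supply a default when the value is null:

int? salary = null;            // no value yet
salary = 800000;

if (salary.HasValue)
{
   Console.WriteLine(salary.Value);
}

int actual = salary ?? 0;      // ?? substitutes 0 when salary is null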

Iterators

Iterators enable you to use foreach loops on your own custom types. To accomplish this, you need to have your class implement the IEnumerable interface as shown here:

using System;
using System.Collections;

public class myList : IEnumerable
{
   internal object[] elements;
   internal int count;

   // The yield return statements turn this method into an iterator
   public IEnumerator GetEnumerator()
   {
      yield return "St. Louis Rams";
      yield return "Indianapolis Colts";
      yield return "Minnesota Vikings";
   }
}

In order to use the IEnumerator interface, you will need to make a reference to the System.Collections namespace. With this all in place, you can then iterate through the custom class as shown here:

void Page_Load(object sender, EventArgs e)
{
   myList IteratorList = new myList();

   foreach (string item in IteratorList)
   {
      Response.Write(item + "<br />");
   }
}

Partial Classes

Partial classes are a new feature to the .NET Framework 2.0 and again C# takes advantage of this addition. Partial classes allow you to divide up a single class into multiple class files, which are later combined into a single class when compiled.

To create a partial class, you simply use the partial keyword on every class definition that is to be combined; the partial keyword precedes the class keyword in each file. For instance, you might have a simple class called Calculator as shown here:

public partial class Calculator
{
   public int Add(int a, int b)
   {
      return a + b;
   }
}

From here, you can create a second class that attaches itself to this first class as shown here in the following example:

public partial class Calculator
{
   public int Subtract(int a, int b)
   {
      return a - b;
   }
}

When compiled, these classes will be brought together into a single Calculator class, as if they had been written together to begin with.
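
Calling code sees only the single combined class:

Calculator calc = new Calculator();
// Both methods are available, even though they were defined in separate files
int sum = calc.Add(2, 3);               // 5
int difference = calc.Subtract(5, 2);   // 3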

Where C# Fits In

In one sense, C# can be seen as being the same thing to programming languages as .NET is to the Windows environment. Just as Microsoft has been adding more and more features to Windows and the Windows API over the past decade, Visual Basic 2005 and C++ have undergone expansion. Although Visual Basic and C++ have ended up as hugely powerful languages as a result of this, both languages also suffer from problems due to the legacies of how they have evolved.

In the case of Visual Basic 6 and earlier versions, the main strength of the language was the fact that it was simple to understand and made many programming tasks easy, largely hiding the details of the Windows API and the COM component infrastructure from the developer. The downside to this was that Visual Basic was never truly object-oriented, so that large applications quickly became disorganized and hard to maintain. As well, because Visual Basic's syntax was inherited from early versions of BASIC (which, in turn, was designed to be intuitively simple for beginning programmers to understand, rather than to write large commercial applications), it didn't really lend itself to well-structured or object-oriented programs.

C++, on the other hand, has its roots in the ANSI C++ language definition. It isn't completely ANSI-compliant for the simple reason that Microsoft first wrote its C++ compiler before the ANSI definition had become official, but it comes close. Unfortunately, this has led to two problems. First, ANSI C++ has its roots in a decade-old state of technology, and this shows up in a lack of support for modern concepts (such as Unicode strings and generating XML documentation), and for some archaic syntax structures designed for the compilers of yesteryear (such as the separation of declaration from definition of member functions). Second, Microsoft has been simultaneously trying to evolve C++ into a language that is designed for high-performance tasks on Windows, and in order to achieve that they've been forced to add a huge number of Microsoft-specific keywords as well as various libraries to the language. The result is that on Windows, the language has become a complete mess. Just ask C++ developers how many definitions for a string they can think of: char*, LPTSTR, string, CString (MFC version), CString (WTL version), wchar_t*, OLECHAR*, and so on.

Now enter .NET, a completely new environment that is going to involve new extensions to both languages. Microsoft has gotten around this by adding yet more Microsoft-specific keywords to C++, and by completely revamping Visual Basic into Visual Basic .NET and now Visual Basic 2005, a language that retains some of the basic VB syntax but that is so different in design that we can consider it, for all practical purposes, a new language.

It's in this context that Microsoft has decided to give developers an alternative — a language designed specifically for .NET, and designed with a clean slate. Visual C# 2005 is the result. Officially, Microsoft describes C# as a "simple, modern, object-oriented, and type-safe programming language derived from C and C++." Most independent observers would probably change that to "derived from C, C++, and Java." Such descriptions are technically accurate but do little to convey the beauty or elegance of the language. Syntactically, C# is very similar to both C++ and Java, to such an extent that many keywords are the same, and C# also shares the same block structure with braces ({}) to mark blocks of code, and semicolons to separate statements. The first impression of a piece of C# code is that it looks quite like C++ or Java code. Beyond that initial similarity, however, C# is a lot easier to learn than C++, and of comparable difficulty to Java. Its design is more in tune with modern developer tools than both of those other languages, and it has been designed to give us, simultaneously, the ease of use of Visual Basic, and the high-performance, low-level memory access of C++ if required. Some of the features of C# are:

  • Full support for classes and object-oriented programming, including both interface and implementation inheritance, virtual functions, and operator overloading.
  • A consistent and well-defined set of basic types.
  • Built-in support for automatic generation of XML documentation.
  • Automatic cleanup of dynamically allocated memory.
  • The facility to mark classes or methods with user-defined attributes. This can be useful for documentation and can have some effects on compilation (for example, marking methods to be compiled only in debug builds).
  • Full access to the .NET base class library, as well as easy access to the Windows API (if you really need it, which won't be all that often).
  • Pointers and direct memory access are available if required, but the language has been designed in such a way that you can work without them in almost all cases.
  • Support for properties and events in the style of Visual Basic.
  • Just by changing the compiler options, you can compile either to an executable or to a library of .NET components that can be called up by other code in the same way as ActiveX controls (COM components).
  • C# can be used to write ASP.NET dynamic Web pages and XML Web services.

Most of the above statements, it should be pointed out, do also apply to Visual Basic 2005 and Managed C++. The fact that C# is designed from the start to work with .NET, however, means that its support for the features of .NET is both more complete and offered within the context of a more suitable syntax than for those other languages. While the C# language itself is very similar to Java, there are some improvements; in particular, Java is not designed to work with the .NET environment.

Before we leave the subject, we should point out a couple of limitations of C#. The one area the language is not designed for is time-critical or extremely high performance code, the kind where you really are worried about whether a loop takes 1,000 or 1,050 machine cycles to run through, and where you need to clean up your resources the millisecond they are no longer needed. C++ is likely to continue to reign supreme among low-level languages in this area. C# lacks certain key facilities needed for extremely high performance apps, including the ability to specify inline functions and destructors that are guaranteed to run at particular points in the code. However, the proportion of applications that falls into this category is very low.