Monday, December 17, 2007

MVC Framework - The bird’s eye view

During the weekend I came across Dino Esposito’s blog and found a few interesting lines. Here they go…
“It seems that the bar of software preview has been moved one more tick down. There was a time when any software was top secret until publicly released. Next, we assisted to pre-release announcements and demos. Next, we were allowed to play ourselves with beta versions for longer and longer times. And our feedback was first appreciated and then requested. Next, it was the season of CTPs, progressive alpha versions that precede the beta stage – the teen-age of software. With the MVC framework we reached the notable point of publicly discussing a piece of software that is not even a CTP. I wonder what would be the next step. Discussing ideas, I bet. And after that?”
It made me curious enough about the MVC Framework. Maybe not because I am very aggressive about new things, but because in this industry you must stay updated every minute or just get ready to die, and that I don’t want…
There is not much material available on this yet, as it is in its infancy, but I found some interesting things that I want to share with you… The first thing is ScottGu’s presentation on this at Alt.Net.
See the video. And next is ScottGu’s blog on the same. These two are first-hand sources, and apart from them I went through several blogs from attendees of the Alt.Net presentation. I tried to assemble all the main content and some of my findings in this article, and I hope it is helpful.
Starting with some issues… Being an ASP.NET developer I often run into a problem – how to write a unit test in a tightly coupled system. You may ask why I would develop a tightly coupled application if I want to follow test-driven development. So what can I do? I can use the MVC (Model View Controller) pattern to develop my application. In fact, ASP.NET itself supports MVC through the code-behind model, where you can see each aspx.vb class as a controller and the aspx as a view. I will say more about MVC as we go a little further. For now I am stating another problem… ASP.NET is based on two primary things: postback and viewstate. And both of them have several problems associated with them. Trust me. Just google the phrase “Viewstate Problem” and you will find millions of entries.
So, as the name suggests, the MVC Framework is primarily based on the MVC design pattern and tries to remove the complexity of implementing it. What is the MVC pattern all about?


Model-View-Controller (MVC) Pattern
The Model-View-Controller (MVC) pattern separates the modeling of the domain, the presentation, and the actions based on user input into three separate classes.
Model. The model manages the behavior and data of the application domain, responds to requests for information about its state (usually from the view), and responds to instructions to change state (usually from the controller).
View. The view manages the display of information.
Controller. The controller interprets the mouse and keyboard inputs from the user, informing the model and/or the view to change as appropriate.


It is important to note that both the view and the controller depend on the model. However, the model depends on neither the view nor the controller. This is one of the key benefits of the separation. This separation allows the model to be built and tested independent of the visual presentation. The separation between view and controller is secondary in many rich-client applications, and, in fact, many user interface frameworks implement the roles as one object. In Web applications, on the other hand, the separation between view (the browser) and controller (the server-side components handling the HTTP request) is very well defined.
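To make the separation concrete, here is a minimal C# sketch of the three roles (all names are mine, invented for illustration; no particular framework is assumed):

// Model: owns the domain data and state; knows nothing about views or controllers.
public class Product
{
    public string Name;
    public decimal Price;
}

// View: only displays what it is given.
public interface IProductView
{
    void Render(Product product);
}

// Controller: interprets user input and tells the model/view what to do.
public class ProductsController
{
    private readonly IProductView view;

    public ProductsController(IProductView view) { this.view = view; }

    public void Show(Product product)
    {
        view.Render(product);
    }
}

Note the dependency direction: the Product model references neither of the other two classes, which is exactly the property that lets it be built and tested in isolation.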

Now, in ScottGu’s words, some of the key points of the MVC Framework:
It enables clean separation of concerns, testability, and TDD (test-driven development) by default. All core contracts within the MVC framework are interface based and easily mockable (it includes interface based IHttpRequest/IHttpResponse intrinsics). You can unit test the application without having to run the Controllers within an ASP.NET process (making unit testing fast). You can use any unit testing framework you want to do this testing (including NUnit, MBUnit, MS Test, etc).
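Here is a hedged sketch of what that buys you in practice (the controller, repository and test names below are hypothetical, not the framework’s API): because the controller depends only on an interface, an NUnit test can drive it against a hand-rolled fake, with no web server and no database.

using NUnit.Framework;

public interface IProductRepository
{
    string GetNameById(int id);
}

public class EditController
{
    private readonly IProductRepository repository;

    public EditController(IProductRepository repository) { this.repository = repository; }

    public string EditTitle(int id)
    {
        return "Editing " + repository.GetNameById(id);
    }
}

// A hand-rolled fake; a mocking library would serve the same purpose.
class FakeRepository : IProductRepository
{
    public string GetNameById(int id) { return "Product " + id; }
}

[TestFixture]
public class EditControllerTests
{
    [Test]
    public void EditTitle_UsesTheRepository()
    {
        EditController controller = new EditController(new FakeRepository());
        Assert.AreEqual("Editing Product 4", controller.EditTitle(4));
    }
}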

It is highly extensible and pluggable. Everything in the MVC framework is designed so that it can be easily replaced/customized (for example: you can optionally plug-in your own view engine, routing policy, parameter serialization, etc). It also supports using existing dependency injection and IOC container models (Windsor, Spring.Net, NHibernate, etc).

It includes a very powerful URL mapping component that enables you to build applications with clean URLs. URLs do not need to have extensions within them, and are designed to easily support SEO and REST-friendly naming patterns. For example, I could easily map the /products/edit/4 URL to the “Edit” action of the ProductsController class in my project above, or map the /Blogs/scottgu/10-10-2007/SomeTopic/ URL to a “DisplayPost” action of a BlogEngineController class.
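The framework was pre-CTP when this was written, so take this only as a sketch; the route-registration API that appeared in later previews looks roughly like this:

using System.Web.Mvc;
using System.Web.Routing;

public static class RouteConfig
{
    public static void RegisterRoutes(RouteCollection routes)
    {
        // "/products/edit/4" ends up invoking the Edit action of
        // ProductsController with id = 4 -- no file extension anywhere.
        routes.MapRoute(
            "Products",                  // route name
            "products/{action}/{id}",    // URL pattern
            new { controller = "Products", action = "Index", id = "" });
    }
}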

The MVC framework supports using the existing ASP.NET .ASPX, .ASCX, and .Master markup files as “view templates” (meaning you can easily use existing ASP.NET features like nested master pages, <%= %> snippets, declarative server controls, templates, data-binding, localization, etc). It does not, however, use the existing post-back model for interactions back to the server. Instead, you’ll route all end-user interactions to a Controller class, which helps ensure clean separation of concerns and testability (it also means no viewstate or page lifecycle with MVC based views).

The ASP.NET MVC framework fully supports existing ASP.NET features like forms/windows authentication, URL authorization, membership/roles, output and data caching, session/profile state management, health monitoring, configuration system, the provider architecture, etc.

There are some good implementation examples available for this framework.
I guess you are now much more familiar with the MVC framework, but this is the time to elaborate on a few of the points that ScottGu has made in his blog. The first point is Dependency Injection. What is this??? Nowadays the focus is on reusing existing components and wiring together disparate components to form a cohesive architecture. But this wiring can quickly become a scary task, because as application size and complexity increase, so do dependencies. One way to mitigate the proliferation of dependencies is by using Dependency Injection (DI), which allows you to inject objects into a class, rather than relying on the class to create the object itself. Wanna go into detail? Wait for my next post, in which I am planning to elaborate on DI and Spring.Net and describe the limitations of Factory, Abstract Factory, Builder and Container.
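In the meantime, here is a minimal constructor-injection sketch in C# (all names are invented for illustration):

// The service no longer creates its own dependency; anything that
// implements the interface can be injected -- by hand, by a container
// such as Spring.Net or Windsor, or by a unit test.
public interface IMessageSender
{
    void Send(string to, string body);
}

public class SmtpSender : IMessageSender
{
    public void Send(string to, string body) { /* send via SMTP */ }
}

public class OrderService
{
    private readonly IMessageSender sender;

    // The dependency is injected through the constructor.
    public OrderService(IMessageSender sender) { this.sender = sender; }

    public void PlaceOrder(string customerEmail)
    {
        // ...business logic...
        sender.Send(customerEmail, "Your order has been placed.");
    }
}

Wiring it up by hand is just new OrderService(new SmtpSender()); a container does the same thing from configuration instead.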

The next thing that ScottGu has talked about is REST (Representational State Transfer)…
Well, REST is an architectural pattern that defines how network resources should be defined and addressed in order to gain shorter response times and a clear separation of concerns between the front end and the back end of a networked system. REST is based on the following three principles:
  • An application expresses its state and implements its functionality by acting on logical resources.
  • Each resource is addressed using a specific URL syntax.
  • All addressable resources feature a contracted set of operations.
As you can see, the MVC Framework fulfills these principles entirely.

The MVC Framework doesn’t support classic postbacks and viewstate and doesn’t consider any URL as the endpoint to a physical server file to parse and compile to a class. In ASP.NET, you have a 1:1 correspondence between a URL and a resource. The only exception to this rule is when you use completely custom HTTP handlers bound to a particular path.

In the MVC Framework, a URL is seen as the means to address a logical server resource, not necessarily an ASPX file to parse. So the URLs employed by the pages of an MVC Framework application have a custom format that the application itself mandates. In the end, the MVC Framework employs a centralized HTTP handler that recognizes an application-specific syntax for links. In addition, each addressable resource exposes a well-known set of operations and a uniform interface for executing operations.

So here’s an alternate way of looking at the MVC Framework. It is an ASP.NET framework that performs data exchange by using a REST model versus the postback model of classic ASP.NET. Each page is split into two distinct components - controller and view - that operate over the same model of data. This is opposed to the classic code-behind model where no barrier is set that forces you to think in terms of separation of concerns and controllers and views. However, by keeping the code-behind class as thin as possible, and designing the business layer appropriately, a good developer could achieve separation of concerns even without adopting MVC and its overhead. MVC, however, is a model superior to a properly-done code-behind for its inherent support for test-driven development.

Wednesday, July 25, 2007

Visual Studio Orcas features

Visual Studio Orcas, due for release at the end of 2007, promises numerous improvements for Visual Basic, a data-query feature called Language Integrated Query (LINQ), a new Entity Framework for ADO.NET and updated tools for ASP.NET AJAX and Office 2007 development.

Multi-targeting
Visual Studio Orcas is billed as the design surface for the .NET Framework 3.5, which itself is the merger of the .NET 3.0 toolset introduced earlier this year with updated versions of ASP.NET, ADO.NET, Visual Basic, C# and the CLR.
At the same time, though, Orcas allows developers to work backwards and develop specifically for .NET 2.0 or 3.0. In the words of Jeff King, program manager on the Visual Studio team, the tool and the framework are decoupled: "It really buys you freedom."
Once a framework version has been selected, Visual Studio Orcas will enable the reference features that are appropriate for that version of the framework. (In other words, don't try using LINQ in a .NET 2.0 application.)

N-tier application development
An n-tier application is spread among any number of different systems, typically a service layer, an access layer and a business logic layer. With such a model, it is easy to share validation logic between entities, said Young Joo, a Visual Basic program manager.
Unfortunately, developing such applications in Visual Studio 2005 is "pretty much impossible," Joo said, because a dataset and the code that connects it to a database are in the same file. The change in Visual Studio Orcas is subtle but important, as the table adapter and the dataset now reside in different layers.

An improved designer
King described the Visual Studio 2005 designer as little more than the Internet Explorer renderer turned into an editor. To improve upon this, the Visual Studio group turned to Expression, the Microsoft product suite for Web designers.
The new designer allows developers to apply styles inline, with an existing class, with a new class or with Internet Explorer. "We default manually nowadays," King said. In addition, changes such as margins and paddings around images can be applied to rules and not just individually. This also makes for cleaner CSS files.
Finally, the designer offers a split view, so developers can look at source code and design simultaneously. This is a response to the growing trend of development using two monitors, King said.

ASP.NET AJAX and VSTO for Office 2007
Right now, developers aiming to build cutting edge Web applications have to download the ASP.NET AJAX framework, and those who want to develop for Office 2007 have to download Visual Studio 2005 Tools for Office Second Edition.
Both ASP.NET AJAX and VSTO 2005 SE will be directly incorporated into Visual Studio Orcas. VSTO will come with a new runtime, which will run both Office 2007 and Office 2003 add-ins, while ASP.NET AJAX will come with a variety of JavaScript tools, such as IntelliSense and robust debugging.

The ADO.NET Entity Framework
The biggest changes to ADO.NET revolve around its Entity Framework, which, unfortunately, is now slated to be released quite a while after Visual Studio 2008. This framework consists of a conceptual layer, which fits between an application's logical database layer and its object layer, and the Entity Data Model, said Julia Lerman, consultant and owner of The Data Farm.
Run the Entity Data Model Wizard in Visual Studio Orcas and the output is three files -- a conceptual model that talks to object classes, a logical model that the relational database talks to, and a map between the conceptual and logical models.
Within the conceptual layer, one finds entity types bundled into sets, associations that define the relationship between entities, and sets of associations. The information in this layer will handle the back and forth with SQL Server without touching data access code, Lerman said.
Once entities have been created, developers can use either CreateQuery or the new LINQ to Entities to retrieve entity objects, data records or anonymous types, Lerman said.

LINQ: The Language Integrated Query
In Visual Studio 2005, querying a dataset typically involves a stored procedure, a newly created view and a bit of ADO.NET filtering. This is the case because data exists in rows, while .NET code deals with objects. "They are always two different worlds," said Jeff King, a program manager on the Visual Studio team. "LINQ puts queries inside the languages and merges the worlds together."
At its most basic level, the Language Integrated Query, a feature of both Visual Basic and C#, uses the concept of the SQL Select statement to make its queries. However, there are two important differences, said Markus Egger, president and chief software architect at EPS Software Corp. and publisher of CoDe Magazine.
First, LINQ statements begin with a From statement instead of the Select statement. By listing the data source first, IntelliSense is triggered, Egger said.
Second, since C# and Visual Basic are object-oriented languages, he said, "whatever you can express in C# or VB you can make part of LINQ queries." This encompasses anything that is IEnumerable -- entities, collections and even XML. Moreover, since the queries are being made in an object-oriented environment, Egger said, "you can do very, very complex things that result in a completely different result set," such as calling up an instance of objects that did not exist in the source at all.
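For instance, here is a small C# 3.0 query over an ordinary in-memory list; nothing about it is database-specific:

using System;
using System.Collections.Generic;
using System.Linq;

class LinqDemo
{
    static void Main()
    {
        List<string> teams = new List<string> { "Rams", "Colts", "Vikings" };

        // The query starts with 'from', so the compiler (and IntelliSense)
        // knows the element type before the 'select' clause is even typed.
        var shortNames = from t in teams
                         where t.Length <= 5
                         orderby t
                         select t.ToUpper();

        foreach (string name in shortNames)
            Console.WriteLine(name);   // COLTS, RAMS
    }
}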
LINQ also brings about the introduction of several new language concepts for Visual Basic and C#. The expression tree, for example, is a data representation of the LINQ expression that bridges the gap between the .NET code and the SQL Server. In addition, property initialization makes it possible to create an object and set its properties in a single line of code.
Other new language concepts, which will be discussed below, include implicit types and extension methods.

VB 9: Implicit types
In a nutshell, author and consultant Billy Hollis said, implicit types provide strong typing without forcing developers to figure out the type they need. The compiler does the work for them, since the type is inferred from the initializer expression.
Implicit types work well when looping through a collection, Hollis said, since in such a scenario a developer is likely to be looking only for a key and a value and will not know, or care, what the index type is.
In addition, inferring types makes it possible for extensions to bind to data types such as XML. This is fundamental to making LINQ work, Hollis said.
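The examples in this post are C#, where the same feature is spelled var; the compiler fixes the type at compile time, so the code is still early-bound and strongly typed:

using System;
using System.Collections.Generic;

class ImplicitTypingDemo
{
    static void Main()
    {
        var total = 42;   // inferred as int; assigning a string here would not compile
        var prices = new Dictionary<string, decimal>();
        prices["widget"] = 9.99m;

        // Handy in loops: no need to spell out KeyValuePair<string, decimal>.
        foreach (var entry in prices)
            Console.WriteLine(entry.Key + ": " + entry.Value + " (" + total + ")");
    }
}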

VB 9: Extension Methods
Extension methods, which Hollis described as "syntactic sugar," are another Visual Basic 9 feature coming straight from LINQ, since all LINQ query operators are extension methods. These methods are marked with custom attributes and are then added to other objects automatically (so long as they are not already there).
Hollis said extension methods can be used just about anywhere a developer would use a normal function. However, they cannot contain optional parameters, parameter arrays or generic arguments that have not been typed. Also, late binding cannot be done with extension methods, he said.
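In C#, the marker is the this modifier on the first parameter of a static method (VB uses an attribute instead); a small invented example:

using System;

public static class StringExtensions
{
    // Extension methods must live in a static class; the 'this' modifier
    // makes Reversed() callable as if it were an instance method of string.
    public static string Reversed(this string s)
    {
        char[] chars = s.ToCharArray();
        Array.Reverse(chars);
        return new string(chars);
    }
}

class Demo
{
    static void Main()
    {
        Console.WriteLine("LINQ".Reversed());   // prints QNIL
    }
}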

VB 9: IntelliSense
IntelliSense, already referred to as "Intellicrack" in some development circles, is set in Visual Basic 9 to encompass keywords, member variables and anything in scope. "Anything that would make sense there, IntelliSense will put it in," Hollis said.
Moreover, IntelliSense will work with implicit types, once the compiler has figured out what the type is, Egger said.
In addition, LINQ, as stated, is set up to take advantage of IntelliSense. In SQL syntax, the Select query comes first, but in LINQ, the From statement comes first. With the data source listed first, IntelliSense has a chance to kick in.
VB 9: Relaxed delegates, initializers and more

For additional information on what's new in Visual Basic 9, including relaxed delegates and initializers, check out the MSDN article Overview of Visual Basic 9.0. The emphasis there is on productivity gains developers can expect to enjoy when building data-oriented applications with an increasingly dynamic language.

Thursday, July 5, 2007

Application Development: What's new in ADO.NET version 2.0

With the first, public alpha version of the coming release of Visual Studio .NET—christened "Whidbey"—now in the hands of developers, it's time to start thinking about your applications and how they might be affected as you move to this new version. Although the move from version 1.0 to 1.1 of the .NET Framework was trivial and involved mostly bug fixes, performance enhancements, and the integration of previously separate technologies such as the ODBC and Oracle .NET Data Providers, version 2.0 changes the story for data access. It includes a host of new features, some of which may cause you to rethink how data is accessed and manipulated in your applications.

In this article, I'll give you a brief rundown of what I see as the most significant new features in ADO.NET and how you might take advantage of them in your implementations.

Providing a wider view of data
Before delving into the specific features of ADO.NET v2.0, let me preface the discussion by noting that one of the overall design goals of this version was to allow a higher degree of interoperability between data accessed relationally, accessed as XML, and accessed as custom objects. Since these three make up the "ruling triumvirate" in representing data in an application, ADO.NET v2.0 was designed to make it easier for developers to use the appropriate model when desired within and across applications.

For example, in applications built using a service-oriented architecture (SOA), persistent business data will often be manipulated relationally; data that represents the process and encapsulates business rules will be manipulated as objects; and message and lookup data that must be serialized for transport will be handled as XML. To present the new features, I've factored them into two broad buckets: the new features that provide this wider view of data and the features that enhance or extend the relational paradigm.

Widening the .NET
There are two primary new features you'll want to explore in the area of extending your ability to handle data. Let's take a look at each.

ObjectSpaces
This technology was previewed several years ago at PDC and will now be released in Whidbey. Simply put, ObjectSpaces provides an object-relational mapping layer in the System.Data.ObjectSpaces namespace, which instantiates and populates custom objects from a relational database. This works through XML metadata stored in a mappings file that is passed to the constructor of the ObjectSpace class, which maps relational objects to .NET objects and relational types to .NET types.

The programming model supports queries (ObjectQuery), sets of objects maintained in memory (ObjectSet), access to streams of objects (ObjectReader), and even lazy loading of objects to improve performance (ObjectList and ObjectHolder). Following is an example of how the programming model looks:

// Create the mappings
ObjectSpace oa = new ObjectSpace(mappings-file, connection);
// Query the data
ObjectQuery oq = new ObjectQuery(typeof(Product), "category='Equipment'");
ObjectReader or = oa.GetObjectReader(oq);
// Traverse the data
while (or.Read())
{
   Product p = (Product)or.Current;
   Console.WriteLine(p.Name);
}

Although in the current release, ObjectSpaces works only with SQL Server 2000 and SQL Server "Yukon" (the release of SQL Server more or less synchronized with the release of Whidbey), this technology will be extended to access other relational stores in the future. ObjectSpaces is ideal when you want to represent your data using a domain model and encapsulate business logic as methods on your custom objects, since it will save you from writing the tiresome code needed to load from and persist your objects to a relational data store.

SQLXML and XmlAdapter
Although the ADO.NET DataSet has always included the ability to load data as XML and serialize its contents as XML, the translation between the two ways of representing data always included some tension. For example, in order for the XML to load into a DataSet, its schema couldn't be overly complex, and it needed to map well into the relational DataTables of the DataSet.

Although DataSet support of XML has been enhanced in version 2 to allow loading of XML with multiple in-line schemas, loading schemas with repeated element names in different namespaces, and loading/serializing directly from DataTable objects, the data must still be relational in nature to work with the DataSet. To overcome this, version 2 includes the System.Xml.XmlAdapter class. This class is analogous to the DataAdapter classes in that it is a liaison between a data source and a representation of the data, but is used to query and load XML from an XML View into an XPathDocument object (called XPathDocument2 in the alpha; however, that will be renamed to XPathDocument before release).

XML Views allow relational tables (in SQL Server only) to be mapped to an XML schema via mappings files; they are the core component of the SQLXML 3.0 technology, once provided separately from the .NET Framework but now migrated into ADO.NET v2 (including the ability to bulk-load XML into SQL Server) in the System.Data.SqlXml namespace. Using this approach, you can provide a set of XML Views for your SQL Server data, query the data with the XQuery language using the new XQueryProcessor class and the Fill method of the XmlAdapter, and manipulate the data using the XPathDocument, XPathEditor, XPathNavigator, and XPathChangeNavigator classes.

The changes are written to SQL Server by calling the Update method of the XmlAdapter, which relies on the XML View to write the SQL statements to execute using a mappings file. The advantage of this approach is that you can treat your SQL Server data no differently than other XML data stores and can take advantage of the full fidelity of the XML when making changes. See the simple example of the programming model below.

// Set up the connection and query
SqlConnection con = new SqlConnection(connection-string);
XQueryProcessor xq = new XQueryProcessor();
xq.XmlViewSchemaDictionary.Add("name", new XmlTextReader("mappings-file"));
xq.Compile(…);
// Set up the datasource
XmlDataSourceResolver xd = new XmlDataSourceResolver();
xd.Add("MyDB", con);
// Configure the XmlAdapter
XmlAdapter xa = new XmlAdapter(xd);
XPathDocument xp = new XPathDocument();
// Execute the query and populate the document
xa.Fill(xp, xq.XmlCommand);
// Navigate the document…
XPathNavigator xn = xp.CreateXPathNavigator();
// Or edit the document and change the data
XPathEditor xe = xp.CreateXPathEditor();
// Set the schema and update the database
MappingSchema ms = new MappingSchema("mappings-file");
xa.Update(xp, ms);

Of course, XML Views simply provide the mapping of the data to and from SQL Server. If you're not using SQL Server, you can still take advantage of the substantial changes to XPathDocument (that will supersede and make obsolete the XmlDocument class) and its related classes to more easily query, navigate, and edit XML that you load from other sources.

For example, you can use a new XmlFactory class to create a related set of XmlReader, XmlWriter, and XPathNavigator classes for an XML document. These classes now support the ability to read and write .NET types to and from XML documents. And, of course, performance has improved for reading and writing with XmlTextReader and XmlTextWriter, and when using XSLT.

Extending the relational paradigm
The second broad set of changes relates to those made in ADO.NET v2.0 to enhance relational database access. I've organized these into changes that all developers can take advantage of, regardless of the underlying database you write to, and those that will require SQL Server 2000 or the next version of SQL Server, Yukon.

Provider factory
Although the design of .NET Data Providers is based on a common set of interfaces and base classes, in v1.0 or v1.1, Microsoft did not ship factory classes to help developers write polymorphic data access code. As a result, developers did so on their own.

In version 2, ADO.NET includes factory classes inherited from System.Data.Common.DbProviderFactory to create the standard connection, command, data reader, table, parameter, permissions, and data adapter classes; these help you write code that targets multiple databases. A factory is accessed using the GetFactory method of the DbProviderFactories class and can be configured in the application's configuration file using the DbProviderConfigurationHandler.
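A short sketch of the factory model; only the provider name and connection string tie this code to SQL Server, so swapping them re-targets another database:

using System.Data.Common;

class FactoryDemo
{
    static void Main()
    {
        DbProviderFactory factory =
            DbProviderFactories.GetFactory("System.Data.SqlClient");

        using (DbConnection con = factory.CreateConnection())
        {
            con.ConnectionString = "connection-string";   // supply your own
            DbCommand cmd = factory.CreateCommand();
            cmd.Connection = con;
            cmd.CommandText = "SELECT COUNT(*) FROM Products";
            con.Open();
            System.Console.WriteLine(cmd.ExecuteScalar());
        }
    }
}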

Asynchronous data access
Commands executed by ADO.NET in version 1.0 using the ExecuteNonQuery, ExecuteReader, and ExecuteXmlReader methods of SqlCommand were synchronous and would block the current thread until the results were returned by the server. In v2.0, each of these methods includes both Begin and End versions to support asynchronous execution from the client's perspective.

This technique employs the familiar asynchronous programming model using the AsyncCallback delegate in .NET, and so includes the SqlAsyncResult class to implement the IAsyncResult interface. While this feature works only for SqlClient at the moment, look for it to perhaps be extended to other providers before the release. Following is an example of setting up an asynchronous command. (Note that the SqlAsyncResult class is not included in the alpha at this time, so the code will not execute.)

// Set up the connection and command
SqlConnection con = new SqlConnection(connection-string);
SqlCommand cm = new SqlCommand(SQL statement, con);
con.Open();
cm.BeginExecuteNonQuery(new AsyncCallback(DoneExecuting), null);
// Thread is free, do other things
// Callback method
private void DoneExecuting(SqlAsyncResult ar)
{
   int numRows = ar.EndExecuteNonQuery(ar);
   // print the number of rows affected
}

Batch updates
In version 1.0, a DataAdapter always sent changes to rows one at a time to the server. In version 2.0, the DataAdapter exposes an UpdateBatchSize property that, if supported by the data provider, allows changed rows to be sent to the server in groups. This cuts down on the number of round-trips to the server and therefore increases performance.
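A sketch, assuming an adapter whose update command is already configured; note that batching requires the command not to expect per-row output:

using System.Data;
using System.Data.SqlClient;

class BatchUpdateDemo
{
    static void SaveChanges(DataSet ds, SqlDataAdapter adapter)
    {
        // Batching requires that no per-row results are expected back.
        adapter.UpdateCommand.UpdatedRowSource = UpdateRowSource.None;

        // Send changed rows in groups of 50 instead of one per round-trip.
        adapter.UpdateBatchSize = 50;
        adapter.Update(ds, "Products");
    }
}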

Data paging
In both SqlClient and OracleClient, the command object now exposes an ExecutePageReader method that allows you to pass in the starting row and the number of rows to return from the server. This allows for more efficient data access by retrieving only the rows you need to display. However, this feature reads the rows currently in the table, so subsequent calls may contain rows from the previous page because of inserts, or from the latter pages because of deletes. It therefore works best with relatively static data.

Binary DataSet remoting
Version 2.0 now allows DataSets to be serialized using a binary format when employing .NET remoting. This both increases the performance of remoting data between .NET applications and reduces the number of bytes transferred.
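Opting in is a single property on the DataSet; the default stays XML for backward compatibility:

using System.Data;

class RemotingFormatDemo
{
    static void PrepareForRemoting(DataSet ds)
    {
        // Serialize as binary when this DataSet crosses a remoting boundary.
        ds.RemotingFormat = SerializationFormat.Binary;
    }
}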

DataSet and DataReader transfer
In version 1.1, you could only load a DataSet from a DataAdapter. But in version 2.0, you can also load one directly using a DataReader and the Load method. Conversely, you can now generate a DataTableReader (inherited from DbDataReader) with the GetDataReader method in order to traverse the contents of a DataSet. This feature makes it easy to load a DataSet and view its data.
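A quick sketch of both directions (the alpha's GetDataReader surfaced at release under the name CreateDataReader):

using System;
using System.Data;
using System.Data.SqlClient;

class LoadDemo
{
    static void LoadAndDump(SqlConnection con)
    {
        SqlCommand cmd = new SqlCommand("SELECT * FROM Products", con);
        con.Open();

        // Fill a DataSet directly from a DataReader...
        DataSet ds = new DataSet();
        using (SqlDataReader reader = cmd.ExecuteReader())
        {
            ds.Load(reader, LoadOption.OverwriteChanges, "Products");
        }

        // ...and traverse its contents again as a flat reader.
        DataTableReader tableReader = ds.CreateDataReader();
        while (tableReader.Read())
            Console.WriteLine(tableReader[0]);
    }
}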

Climbing Yukon
In this category are the new features of ADO.NET v2.0 that relate directly to the new release of the SQL Server code named Yukon, due out in the same time frame:

MARS
Multiple active result sets (MARS) allows you to work with more than one concurrent result set on a single connection to Yukon. This can be efficient if you need to open a SqlDataReader and, during the traversal, execute a command against a particular row. MARS allows both commands to share the same SqlConnection object so that a second connection to SQL Server is not required.
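MARS is opted into through the connection string. In the sketch below, a second command executes on the same open connection while the reader is still traversing (the Hits column is invented for the example):

using System.Data.SqlClient;

class MarsDemo
{
    static void Run()
    {
        string cs = "Data Source=.;Initial Catalog=Northwind;" +
                    "Integrated Security=SSPI;MultipleActiveResultSets=True";

        using (SqlConnection con = new SqlConnection(cs))
        {
            con.Open();
            SqlCommand select = new SqlCommand("SELECT ProductID FROM Products", con);
            using (SqlDataReader dr = select.ExecuteReader())
            {
                while (dr.Read())
                {
                    // A second command on the SAME connection, mid-traversal.
                    SqlCommand update = new SqlCommand(
                        "UPDATE Products SET Hits = Hits + 1 WHERE ProductID = @id", con);
                    update.Parameters.AddWithValue("@id", dr.GetInt32(0));
                    update.ExecuteNonQuery();
                }
            }
        }
    }
}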

Change notification
One of the most interesting new features of Yukon is its ability to support notifications. ADO.NET v2.0 includes programmatic support for this feature by including a SqlNotificationRequest object that can be bound to a SqlCommand.

When data returned from the command changes in the database, a message is sent to the specified notification queue. ADO.NET code can then query the queue either by using an asynchronous query that blocks until a message is sent or by periodically checking the queue using new Transact-SQL syntax.

To make this feature even easier to work with, a SqlDependency class that sets up an asynchronous delegate is included. This will be called when the data changes, and it can be used like other dependencies in conjunction with the ASP.NET caching engine. An example of using a SqlDependency object:

// Set up the connection and command
SqlConnection con = new SqlConnection(connection-string);
SqlCommand cm = new SqlCommand(SELECT statement, con);
SqlDependency dep = new SqlDependency(cm);
dep.OnChanged += new OnChangedEventHandler (HandleChange);
SqlDataReader dr = cm.ExecuteReader();
// Process the data
private void HandleChange (object sender, SqlNotificationEventArgs e)
{
  // A change has been made to the data
  // Inspect the type of change using e.Type
}

Yukon types
ADO.NET v2.0 supports the full set of Yukon data types, including XML and User Defined Types (UDTs). This means that columns in Yukon defined as XML can be retrieved as XmlReader objects, and that UDTs can be passed to stored procedures and returned from queries as standard .NET types. This allows your applications to work with data as fully formed objects while interacting with the database using the objects. This feature can be used profitably when writing managed code that runs in-process in SQL Server, allowing both the managed stored procedure and the client code to use the same .NET type.

Server-side cursors
Because it often caused applications to perform poorly, ADO.NET v1.0 and v1.1 did away with the server-side cursors of classic ADO 2.x. ADO.NET v2.0 now reintroduces the concept in Yukon using the ExecuteResultset and ExecuteRow methods of the SqlCommand object and the SqlResultset class.

The SqlResultset class offers a fully scrollable and updateable cursor that can be useful for applications that need to traverse a large amount of data and update only a few rows. Although this feature can be used from client applications such as ASP.NET, it is mainly intended for use when writing managed code that runs in-process with Yukon in the form of stored procedures.

Bulk copy
Although not restricted to Yukon, ADO.NET v2.0 now allows programmatic access to the BCP or bulk copy API exposed by SQL Server. This is done using the SqlBulkCopyOperation and SqlBulkCopyColumnAssociator classes in the System.Data.SqlClient namespace.
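These class names changed before release; the API that shipped is SqlBulkCopy, shown here as a sketch:

using System.Data;
using System.Data.SqlClient;

class BulkCopyDemo
{
    static void Copy(DataTable source, string connectionString)
    {
        using (SqlBulkCopy bcp = new SqlBulkCopy(connectionString))
        {
            bcp.DestinationTableName = "Products";
            bcp.WriteToServer(source);   // streams all rows through the BCP API
        }
    }
}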

What's New in ASP.NET 2.0?

ASP.NET is a programming framework built on the common language runtime that can be used on a server to build powerful Web applications. The first version of ASP.NET offered several important advantages over previous Web development models. ASP.NET 2.0 improves upon that foundation by adding support for several new and exciting features in the areas of developer productivity, administration and management, extensibility, and performance:

Developer Productivity

ASP.NET 2.0 encapsulates common Web tasks into application services and controls that can be easily reused across web sites. With these basic building blocks, many scenarios can now be implemented with far less custom code than was required in previous versions. With ASP.NET 2.0 it is possible to significantly reduce the amount of code and concepts necessary to build common scenarios on the web.
  • New Server Controls. ASP.NET 2.0 introduces many new server controls that enable powerful declarative support for data access, login security, wizard navigation, menus, treeviews, portals, and more. Many of these controls take advantage of core application services in ASP.NET for scenarios like data access, membership and roles, and personalization. Some of the new families of controls in ASP.NET 2.0 are described below.

    • Data Controls. Data access in ASP.NET 2.0 can be accomplished completely declaratively (no code) using the new data-bound and data source controls. There are new data source controls to represent different data backends such as SQL database, business objects, and XML, and there are new data-bound controls for rendering common UI for data, such as gridview, detailsview, and formview.

    • Navigation Controls. The navigation controls provide common UI for navigating between pages in your site, such as treeview, menu, and sitemappath. These controls use the site navigation service in ASP.NET 2.0 to retrieve the custom structure you have defined for your site.

    • Login Controls. The new login controls provide the building blocks to add authentication and authorization-based UI to your site, such as login forms, create user forms, password retrieval, and custom UI for logged in users or roles. These controls use the built-in membership and role services in ASP.NET 2.0 to interact with the user and role information defined for your site.

    • Web Part Controls. Web parts are an exciting new family of controls that enable you to add rich, personalized content and layout to your site, as well as the ability to edit that content and layout directly from your application pages. These controls rely on the personalization services in ASP.NET 2.0 to provide a unique experience for each user in your application.

  • Master Pages. This feature provides the ability to define common structure and interface elements for your site, such as a page header, footer, or navigation bar, in a common location called a "master page", to be shared by many pages in your site. In one simple place you can control the look, feel, and much of the functionality for an entire Web site. This improves the maintainability of your site and avoids unnecessary duplication of code for shared site structure or behavior.

  • Themes and Skins. The themes and skins features in ASP.NET 2.0 allow for easy customization of your site's look-and-feel. You can define style information in a common location called a "theme", and apply that style information globally to pages or controls in your site. Like Master Pages, this improves the maintainability of your site and avoids unnecessary duplication of code for shared styles.

  • Personalization. Using the new personalization services in ASP.NET 2.0 you can easily create customized experiences within Web applications. The Profile object enables developers to easily build strongly-typed, sticky data stores for user accounts and build highly customized, relationship-based experiences. At the same time, a developer can leverage Web Parts and the personalization service to enable Web site visitors to completely control the layout and behavior of the site, with the knowledge that the site is completely customized for them. Personalization scenarios are now easier to build than ever before and require significantly less code and effort to implement.

  • Localization. Enabling globalization and localization in Web sites today is difficult, requiring large amounts of custom code and resources. ASP.NET 2.0 and Visual Studio 2005 provide tools and infrastructure to easily build localizable sites, including the ability to auto-detect incoming locales and display the appropriate locale-based UI. Visual Studio 2005 includes built-in tools to dynamically generate resource files and localization references. Together, these make building localized applications a simple and integrated part of the development experience.

Administration and Management

ASP.NET 2.0 is designed with administration and manageability in mind. We recognize that while simplifying the development experience is important, deployment and maintenance in a production environment is also a key component of an application's lifetime. ASP.NET 2.0 introduces several new features that further enhance the deployment, management, and operations of ASP.NET servers.
  • Configuration API. ASP.NET 2.0 contains new configuration management APIs, enabling users to programmatically build programs or scripts that create, read, and update Web.config and machine.config configuration files. (A short sketch of this API follows this list.)

  • ASP.NET MMC Admin Tool. ASP.NET 2.0 provides a new comprehensive admin tool that plugs into the existing IIS Administration MMC, enabling an administrator to graphically read or change common settings within our XML configuration files.

  • Pre-compilation Tool. ASP.NET 2.0 delivers a new application deployment utility that enables both developers and administrators to precompile a dynamic ASP.NET application prior to deployment. This precompilation automatically identifies any compilation issues anywhere within the site, as well as enables ASP.NET applications to be deployed without any source being stored on the server (one can optionally remove the content of .aspx files as part of the compile phase), further protecting your intellectual property.

  • Health Monitoring and Tracing. ASP.NET 2.0 also provides new health-monitoring support to enable administrators to be automatically notified when an application on a server starts to experience problems. New tracing features will enable administrators to capture run-time and request data from a production server to better diagnose issues. ASP.NET 2.0 is delivering features that will enable developers and administrators to simplify the day-to-day management and maintenance of their Web applications.
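As promised above, a short sketch of the configuration API (the setting name is invented; WebConfigurationManager lives in System.Web.Configuration):

using System.Configuration;
using System.Web.Configuration;

class ConfigDemo
{
    static void AddAppSetting()
    {
        // Open the site's root Web.config, change it in code, write it back.
        Configuration config = WebConfigurationManager.OpenWebConfiguration("~");
        config.AppSettings.Settings.Add("greeting", "hello");
        config.Save();
    }
}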

Flexible Extensibility

ASP.NET 2.0 is a well-factored and open system, where any component can be easily replaced with a custom implementation. Whether it is server controls, page handlers, compilation, or core application services, you'll find that all are easily customizable and replaceable to tailor to your needs. Developers can plug in custom code anywhere in the page lifecycle to further customize ASP.NET 2.0 to their needs.
  • Provider-driven Application Services. ASP.NET 2.0 now includes built-in support for membership (user name/password credential storage) and role management services out of the box. The new personalization service enables quick storage/retrieval of user settings and preferences, facilitating rich customization with minimal code. The new site navigation system enables developers to quickly build link structures consistently across a site. As all of these services are provider-driven, they can be easily swapped out and replaced with your own custom implementation. With this extensibility option, you have complete control over the data store and schema that drives these rich application services. (See the membership sketch after this list.)

  • Server Control Extensibility. ASP.NET 2.0 includes improved support for control extensibility, such as more base classes that encapsulate common behaviors, improved designer support, more APIs for interacting with client-side script, metadata-driven support for new features like themes and accessibility verification, better state management, and more.

  • Data Source Controls. Data access in ASP.NET 2.0 is now performed declaratively using data source controls on a page. In this model, support for new data backend storage providers can be easily added by implementing custom data source controls. Additionally, the SqlDataSource control that ships in the box has built-in support for any ADO.NET managed provider that implements the new provider factory model in ADO.NET.

  • Compilation Build Providers. Dynamic compilation in ASP.NET 2.0 is now handled by extensible compilation build providers, which associate a particular file extension with a handler that knows how to compile that extension dynamically at runtime. For example, .resx files can be dynamically compiled to resources, .wsdl files to web service proxies, and .xsd files to typed DataSet objects. In addition to the built-in support, it is easy to add support for additional extensions by implementing a custom build provider and registering it in Web.config.

  • Expression Builders. ASP.NET 2.0 introduces a declarative new syntax for referencing code to substitute values into the page, called Expression Builders. ASP.NET 2.0 includes expression builders for referencing string resources for localization, connection strings, application settings, and profile values. You can also write your own expression builders to create your own custom syntax to substitute values in a page rendering.
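As a taste of the provider-driven services mentioned above, here is the membership API in a few lines; whichever provider Web.config names does the actual credential storage:

using System.Web.Security;

class LoginDemo
{
    static bool SignIn(string user, string password)
    {
        // Validates against the configured membership provider
        // (SQL Server by default) -- no hand-written credential code.
        if (Membership.ValidateUser(user, password))
        {
            FormsAuthentication.SetAuthCookie(user, false);
            return true;
        }
        return false;
    }
}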

Performance and Scalability

ASP.NET is built to perform, using a compiled execution model for handling page requests and running on the world's fastest web server, Internet Information Services. ASP.NET 2.0 also introduces key performance benefits over previous versions.
  • 64-Bit Support. ASP.NET 2.0 is now 64-bit enabled, meaning it can take advantage of the full memory address space of new 64-bit processors and servers. Developers can simply copy existing 32-bit ASP.NET applications onto a 64-bit ASP.NET 2.0 server and have them automatically be JIT compiled and executed as native 64-bit applications (no source code changes or manual re-compile are required).

  • Caching Improvements. ASP.NET 2.0 also now includes automatic database server cache invalidation. This powerful and easy-to-use feature allows developers to aggressively output cache database-driven page and partial page content within a site and have ASP.NET automatically invalidate these cache entries and refresh the content whenever the back-end database changes. Developers can now safely cache time-critical content for long periods without worrying about serving visitors stale data.
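A sketch of the same invalidation driven from code; it assumes the Northwind database and its Products table have already been enabled for notifications in Web.config:

using System.Web;
using System.Web.Caching;

class CacheDemo
{
    static void CacheProducts(HttpContext context, object products)
    {
        // The entry is evicted automatically when the Products table changes.
        SqlCacheDependency dep = new SqlCacheDependency("Northwind", "Products");
        context.Cache.Insert("products", products, dep);
    }
}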

Events and Life Cycle of a Page (ASP.NET 1.x vs. ASP.NET 2.0):

In the ASP.NET runtime the life cycle of a page is marked by a series of events. In ASP.NET 1.x, a page request is sent to the Web server based on user interaction. The event that is initiated by the page request is Init. After the Init event, the Load event is raised. Following the Load event, the PreRender event is raised. Finally, the Unload event is raised and an output page is returned to the client.

ASP.NET 2.0 adds a few new events to allow you to follow the request processing more closely and precisely.
These new events are discussed in the following list.

New Events in ASP.NET 2.0:

  • PreInit. This occurs before the page begins initialization. This is the first event in the life of an ASP.NET 2.0 page.

  • InitComplete. This occurs when the page initialization is completed.

  • PreLoad. This occurs immediately after initialization and before the page begins loading the state information.

  • LoadComplete. This occurs at the end of the load stage of the page's life cycle.

  • PreRenderComplete. This occurs when the pre-rendering phase is complete and all child controls have been created. After this event, the personalization data and the view state are saved and the page renders to HTML.

These new events enable developers to dynamically modify the page output and the state of constituent controls by handling them.
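A short C# sketch (the theme name is invented; the Page_ handlers are wired up automatically by AutoEventWireup):

using System;

public partial class Default_aspx : System.Web.UI.Page
{
    // PreInit is the one safe place to set a theme or master page,
    // because both must be known before initialization starts.
    protected void Page_PreInit(object sender, EventArgs e)
    {
        this.Theme = "Smokey";   // hypothetical theme name
    }

    protected void Page_LoadComplete(object sender, EventArgs e)
    {
        // Every control has loaded its state by this point.
    }
}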

What's New in the .NET Framework 2.0

The first version of the .NET Framework (1.0) was released in 2002 to much enthusiasm. The latest version, the .NET Framework 2.0, was introduced in 2005 and is considered a major release of the framework.

With each release of the framework, Microsoft has always tried to ensure that there were minimal breaking changes to code developed. Thus far, they have been very successful at this goal.

Make sure that you create a staging server to completely test the upgrade of your applications to the .NET Framework 2.0 as opposed to just upgrading a live application.

The following details some of the changes that are new to the .NET Framework 2.0 as well as new additions to Visual Studio 2005 — the development environment for the .NET Framework 2.0.

SQL Server integration

After a long wait, the latest version of SQL Server has finally been released. This version, SQL Server 2005, is quite special in so many ways. Most importantly for the .NET developer, SQL Server 2005 now hosts the CLR. Microsoft has developed their .NET offering for developers so that the .NET Framework 2.0, Visual Studio 2005 and SQL Server 2005 are all now tied together — meaning that these three products are now released in unison. This is quite important as it is rather well known that most applications built use all three of these pieces and that they all need to be upgraded together in such a way that they work with each other in a seamless manner.

Due to the fact that SQL Server 2005 now hosts the CLR, this means that you can now avoid building database aspects of your application using the T-SQL programming language. Instead, you can now build items such as your stored procedures, triggers and even data types in any of the .NET-compliant languages, such as C#.
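For instance, a stored procedure written in C# and hosted in-process by SQL Server 2005 looks roughly like this (deployment through CREATE ASSEMBLY and CREATE PROCEDURE is omitted):

using System.Data.SqlClient;
using Microsoft.SqlServer.Server;

public class StoredProcedures
{
    [SqlProcedure]
    public static void GetProductCount()
    {
        // "context connection" reuses the caller's in-process connection.
        using (SqlConnection con = new SqlConnection("context connection=true"))
        {
            SqlCommand cmd = new SqlCommand("SELECT COUNT(*) FROM Products", con);
            con.Open();
            SqlContext.Pipe.ExecuteAndSend(cmd);   // stream the results back
        }
    }
}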

SQL Server Express is the 2005 version of SQL Server that replaces MSDE. This version doesn't have the strict limitations MSDE had.

64-Bit support

Most programming today is done on 32-bit machines. It was a monumental leap forward in application development when computers went from 16-bit to 32-bit. More and more enterprises are moving to the latest and greatest 64-bit servers from companies such as Intel (Itanium chips) and AMD (x64 chips) and the .NET Framework 2.0 has now been 64-bit enabled for this great migration.

Microsoft has been working hard to make sure that everything you build in the 32-bit world of .NET will run in the 64-bit world. This means that everything you do with SQL Server 2005 or ASP.NET will not be affected by moving to 64-bit. Microsoft themselves made a lot of changes to the CLR in order to get a 64-bit version of .NET to work. Changes were made to things such as garbage collection (to handle larger amounts of data), the JIT compilation process, exception handling, and more.

Moving to 64-bit gives you some powerful additions. The most important (and most obvious) reason is that 64-bit servers give you a larger address space. Going to 64-bit also allows for larger primitive types. For instance, a 32-bit integer tops out at 2^32, or 4,294,967,296 — while a 64-bit integer goes up to 2^64, or 18,446,744,073,709,551,616. This comes in quite handy for applications that need to calculate things such as the U.S. debt or other high numbers.

Companies such as Microsoft and IBM are pushing their customers to take a look at 64-bit. One of the main areas of focus is database and virtual storage capabilities, as this is seen as an area in which it makes a lot of sense to move to 64-bit.

Visual Studio 2005 can install and run on a 64-bit computer. This IDE has both 32-bit and 64-bit compilers on it. One final caveat is that the 64-bit .NET Framework is meant only for Windows Server 2003 SP1 or better as well as other 64-bit Microsoft operating systems that might come our way.

When you build your applications in Visual Studio 2005, you can change the build properties of your application so that it compiles specifically for 64-bit computers. To find this setting, you will need to pull up your application's properties and click on the Build tab from within the Properties page. On the Build page, click on the Advanced button and this will pull up the Advanced Compiler Setting dialog. From this dialog, you can change the target CPU from the bottom of the dialog. From here, you can establish your application to be built for either an Intel 64-bit computer or an AMD 64-bit computer. This is shown here in Figure 1.

Figure 1: Building your application for 64-bit

Generics

In order to make collections a more powerful feature and also increase their efficiency and usability, generics were introduced to the .NET Framework 2.0. This introduction to the underlying framework means that languages such as C# and Visual Basic 2005 can now build applications that use generic types. The idea of generics is nothing new. They look similar to C++ templates but are a bit different. You can also find generics in other languages, such as Java. Their introduction into the .NET Framework 2.0 languages is a huge benefit for the user.

Generics enable you to make a generic collection that is still strongly typed — providing fewer chances for errors (because type mismatches are caught at compile time rather than at runtime), increasing performance, and giving you IntelliSense features when you are working with the collections.

To utilize generics in your code, you will need to make reference to the System.Collections.Generic namespace. This will give you access to generic versions of the Stack, Dictionary, SortedDictionary, List and Queue classes. The following demonstrates the use of a generic version of the Stack class:

void Page_Load(object sender, EventArgs e)
{
    System.Collections.Generic.Stack<string> myStack =
        new System.Collections.Generic.Stack<string>();
    myStack.Push("St. Louis Rams");
    myStack.Push("Indianapolis Colts");
    myStack.Push("Minnesota Vikings");

    string[] myArray = myStack.ToArray();

    foreach (string item in myArray)
    {
        Label1.Text += item + "<br />";
    }
}

In the above example, the Stack class is explicitly typed to hold strings. Here, you specify the collection type with the use of angle brackets. This example declares the Stack class of type string using Stack<string>. If you wanted something other than a Stack of type string (for instance, int), then you would specify Stack<int>.

Because the collection of items in the Stack class is given a specific type as soon as the Stack class is created, the Stack class no longer stores everything as type object and then converts it back (in the foreach loop) to type string. For value types, that round trip through object is called boxing, and it is expensive; for reference types it is a cast that must be checked at runtime. Because this code specifies the types up front, performance is increased when working with the collection.

In addition to just working with various collection types, you can also use generics with classes, delegates, methods and more.
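A quick sketch of both, using a generic class and a generic method with a constraint:

using System;

public class Pair<T>
{
    public T First;
    public T Second;

    public void Swap()
    {
        T tmp = First;
        First = Second;
        Second = tmp;
    }
}

class GenericsDemo
{
    // The constraint guarantees CompareTo exists, checked at compile time.
    static T Max<T>(T a, T b) where T : IComparable<T>
    {
        return a.CompareTo(b) >= 0 ? a : b;
    }

    static void Main()
    {
        Pair<int> p = new Pair<int>();
        p.First = 1;
        p.Second = 2;
        p.Swap();
        Console.WriteLine(p.First);                 // 2
        Console.WriteLine(Max("apple", "pear"));    // pear
    }
}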

Anonymous methods

Anonymous methods enable you to put programming steps within a delegate that you can then later execute instead of creating an entirely new method. For instance, if you were not using anonymous methods, you would use delegates in a manner similar to the following:

public partial class Default_aspx
{
    void Page_Load(object sender, EventArgs e)
    {
        this.Button1.Click += ButtonWork;
    }

    void ButtonWork(object sender, EventArgs e)
    {
        Label1.Text = "You clicked the button!";
    }
}

But using anonymous methods, you can now put these actions directly in the delegate as shown here in the following example:

public partial class Default_aspx
{
    void Page_Load(object sender, EventArgs e)
    {
        this.Button1.Click += delegate(object myDelSender, EventArgs myDelEventArgs)
        {
            Label1.Text = "You clicked the button!";
        };
    }
}

When using anonymous methods, there is no need to create a separate method. Instead you place the necessary code directly after the delegate declaration. The statements and steps to be executed by the delegate are placed between curly braces and closed with a semicolon.

Nullable types

Due to the fact that generics have been introduced into the underlying .NET Framework 2.0, it is now possible to create nullable value types — using System.Nullable<T>. This is ideal for situations such as creating sets of nullable items of type int. Before this, it was always difficult to create an int with a null value from the get-go or to later assign null values to an int.

To create a nullable type of type int, you would use the following syntax:

System.Nullable<int> x = new System.Nullable<int>();

There is a new type modifier that you can also use to create a type as nullable. This is shown in the following example:

int? salary = 800000;

This ability to create nullable types is not a C#-only item, as it was built into the .NET Framework itself and, as stated, exists thanks to the new generics feature in .NET. For this reason, you will find nullable types in Visual Basic 2005 as well.
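Consuming the value afterwards is explicit, so the null case can no longer be ignored by accident:

using System;

class NullableDemo
{
    static void Main()
    {
        int? salary = null;
        Console.WriteLine(salary.HasValue);            // False

        salary = 800000;
        if (salary.HasValue)
            Console.WriteLine(salary.Value);           // 800000

        int? bonus = null;
        Console.WriteLine(bonus.GetValueOrDefault());  // 0 (the type's default)
    }
}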

Iterators

Iterators enable you to use foreach loops on your own custom types. To accomplish this, you need to have your class implement the IEnumerable interface as shown here:

using System;
using System.Collections;

public class myList : IEnumerable
{
    internal object[] elements;
    internal int count;

    public IEnumerator GetEnumerator()
    {
        yield return "St. Louis Rams";
        yield return "Indianapolis Colts";
        yield return "Minnesota Vikings";
    }
}

In order to use the IEnumerator interface, you will need to make a reference to the System.Collections namespace. With this all in place, you can then iterate through the custom class as shown here:

void Page_Load(object sender, EventArgs e)
{
    myList IteratorList = new myList();

    foreach (string item in IteratorList)
    {
        Response.Write(item + "<br />");
    }
}

Partial Classes

Partial classes are a new feature to the .NET Framework 2.0 and again C# takes advantage of this addition. Partial classes allow you to divide up a single class into multiple class files, which are later combined into a single class when compiled.

To create a partial class, you simply use the partial keyword, which precedes the class keyword in each file that holds a piece of the class (in C#, every part of a combined class must be marked partial). For instance, you might have a simple class called Calculator as shown here:

public partial class Calculator
{
    public int Add(int a, int b)
    {
        return a + b;
    }
}

From here, you can create a second class that attaches itself to this first class as shown here in the following example:

public partial class Calculator
{
    public int Subtract(int a, int b)
    {
        return a - b;
    }
}

When compiled, these pieces will be brought together into a single Calculator class, as if they had been built together to begin with.

Where C# Fits In

In one sense, C# can be seen as being the same thing to programming languages as .NET is to the Windows environment. Just as Microsoft has been adding more and more features to Windows and the Windows API over the past decade, Visual Basic 2005 and C++ have undergone expansion. Although Visual Basic and C++ have ended up as hugely powerful languages as a result of this, both languages also suffer from problems due to the legacies of how they have evolved.

In the case of Visual Basic 6 and earlier versions, the main strength of the language was the fact that it was simple to understand and made many programming tasks easy, largely hiding the details of the Windows API and the COM component infrastructure from the developer. The downside to this was that Visual Basic was never truly object-oriented, so that large applications quickly became disorganized and hard to maintain. As well, because Visual Basic's syntax was inherited from early versions of BASIC (which, in turn, was designed to be intuitively simple for beginning programmers to understand, rather than to write large commercial applications), it didn't really lend itself to well-structured or object-oriented programs.

C++, on the other hand, has its roots in the ANSI C++ language definition. It isn't completely ANSI-compliant for the simple reason that Microsoft first wrote its C++ compiler before the ANSI definition had become official, but it comes close. Unfortunately, this has led to two problems. First, ANSI C++ has its roots in a decade-old state of technology, and this shows up in a lack of support for modern concepts (such as Unicode strings and generating XML documentation), and for some archaic syntax structures designed for the compilers of yesteryear (such as the separation of declaration from definition of member functions). Second, Microsoft has been simultaneously trying to evolve C++ into a language that is designed for high-performance tasks on Windows, and in order to achieve that they've been forced to add a huge number of Microsoft-specific keywords as well as various libraries to the language. The result is that on Windows, the language has become a complete mess. Just ask C++ developers how many definitions for a string they can think of: char*, LPTSTR, string, CString (MFC version), CString (WTL version), wchar_t*, OLECHAR*, and so on.

Now enter .NET — a completely new environment that involves new extensions to both languages. Microsoft has gotten around this by adding yet more Microsoft-specific keywords to C++, and by completely revamping Visual Basic into Visual Basic .NET (and now Visual Basic 2005), a language that retains some of the basic VB syntax but that is so different in design that we can consider it to be, for all practical purposes, a new language.

It's in this context that Microsoft has decided to give developers an alternative — a language designed specifically for .NET, and designed with a clean slate. Visual C# 2005 is the result. Officially, Microsoft describes C# as a "simple, modern, object-oriented, and type-safe programming language derived from C and C++." Most independent observers would probably change that to "derived from C, C++, and Java." Such descriptions are technically accurate but do little to convey the beauty or elegance of the language. Syntactically, C# is very similar to both C++ and Java, to such an extent that many keywords are the same, and C# also shares the same block structure with braces ({}) to mark blocks of code, and semicolons to separate statements. The first impression of a piece of C# code is that it looks quite like C++ or Java code. Beyond that initial similarity, however, C# is a lot easier to learn than C++, and of comparable difficulty to Java. Its design is more in tune with modern developer tools than both of those other languages, and it has been designed to give us, simultaneously, the ease of use of Visual Basic, and the high-performance, low-level memory access of C++ if required. Some of the features of C# are:

  • Full support for classes and object-oriented programming, including both interface and implementation inheritance, virtual functions, and operator overloading.
  • A consistent and well-defined set of basic types.
  • Built-in support for automatic generation of XML documentation (see the short sketch after this list).
  • Automatic cleanup of dynamically allocated memory.
  • The facility to mark classes or methods with user-defined attributes. This can be useful for documentation and can have some effects on compilation (for example, marking methods to be compiled only in debug builds).
  • Full access to the .NET base class library, as well as easy access to the Windows API (if you really need it, which won't be all that often).
  • Pointers and direct memory access are available if required, but the language has been designed in such a way that you can work without them in almost all cases.
  • Support for properties and events in the style of Visual Basic.
  • Just by changing the compiler options, you can compile either to an executable or to a library of .NET components that can be called up by other code in the same way as ActiveX controls (COM components).
  • C# can be used to write ASP.NET dynamic Web pages and XML Web services.
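
As a short sketch of two of those points: the /// comments below are what the compiler turns into an XML documentation file (compile with csc /doc:Readings.xml), and Celsius is a Visual Basic-style property. The type and member names here are my own, not from any library.

/// <summary>
/// Holds a temperature reading.
/// </summary>
public class Reading
{
    private double celsius;

    /// <summary>The temperature in degrees Celsius.</summary>
    public double Celsius
    {
        get { return celsius; }
        set { celsius = value; }
    }
}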

Most of the above statements, it should be pointed out, do also apply to Visual Basic 2005 and Managed C++. The fact that C# is designed from the start to work with .NET, however, means that its support for the features of .NET is both more complete, and offered within the context of a more suitable syntax than for those other languages. While the C# language itself is very similar to Java, there are some improvements; in particular, Java is not designed to work with the .NET environment.

Before we leave the subject, we should point out a couple of limitations of C#. The one area the language is not designed for is time-critical or extremely high-performance code — the kind where you really are worried about whether a loop takes 1,000 or 1,050 machine cycles to run through, and you need to clean up your resources the millisecond they are no longer needed. C++ is likely to continue to reign supreme among low-level languages in this area. C# lacks certain key facilities needed for extremely high-performance apps, including the ability to specify inline functions and destructors that are guaranteed to run at particular points in the code. However, the proportion of applications that fall into this category is very low.

Wednesday, July 4, 2007

Introduction to .NET 3.0 for Architects

Posted by Mohammad Akif

The latest version of Microsoft's .NET, .NET Framework 3.0, opens up new possibilities for developing the next generation of business solution software. It is designed to enhance productivity, reduce infrastructure plumbing, provide an identity meta-system, and facilitate development of enterprise-class services, workflow solutions and immersive user experiences.

During my discussions with a wide variety of architects I have learned that Solution Architects care very much about security, open standards, interoperability, service-oriented architecture, the relationship between key technologies (for example, Workflow Foundation and BizTalk) and productivity. In this article, I have attempted to provide a description of .NET 3.0 in terms of areas that are of significant interest for the architect community.

.NET versions

It has been almost six years since Microsoft released the first version of the .NET Framework. The 3.0 incarnation is the first version distributed with the operating system (it ships with every Windows Vista installation) and is also supported on Windows XP SP2 and Windows Server 2003. Until .NET 3.0, each version of the .NET Framework was accompanied by a new Common Language Runtime, hereafter referred to as the CLR. Microsoft has not modified the CLR for the 3.0 version of the .NET Framework, and this is an important point to understand.

As .NET 2.0 and 3.0 share the same CLR, everything written in .NET 2.0 works in .NET 3.0, which is an important and significant departure from previous versions. In terms of change, for those who love algebraic equations, the relationship can be summed up anecdotally as:

.NET 3.0 = .NET 2.0 + WCF + WPF + WCS + WF

I will provide a definition of each of the acronyms, but anytime you get confused about the relationship between .NET 2.0 and 3.0, keep the above equation foremost in mind. One of the key philosophies behind .NET 3.0 is to provide functionality that can be considered ‘infrastructure plumbing' as part of the framework. This allows you to focus on your key business problem.
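
A quick way to see that the CLR itself is unchanged is to check Environment.Version, which reports the version of the runtime rather than of the installed framework libraries. The sketch below prints the same 2.0-era CLR version (2.0.50727.x) whether the machine has .NET 2.0 or .NET 3.0 installed:

using System;

class RuntimeCheck
{
    static void Main()
    {
        // .NET 3.0 adds libraries (WCF, WPF, WCS, WF) on top of the
        // existing 2.0 CLR, so the reported runtime version is unchanged.
        Console.WriteLine(Environment.Version);
    }
}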

The .NET Framework 3.0 achieves this objective through four key standards-based pillars corresponding to areas identified and requested by our customers. It also introduces a key new language called XAML, a declarative XML-based language that defines objects and their properties, allowing customers to develop workflows (WF) and immersive user experiences (WPF) declaratively. Let us explore the key pillars of the .NET 3.0 framework in greater detail.

Windows Communication Foundation (WCF)

WCF allows you to architect services by offering a standards- based framework and a composable architecture. The three key design philosophies for WCF are interoperability, productivity and service-oriented development.

Microsoft provides a number of messaging-layer channels and service-model-layer behaviors that can be added and removed easily. It also allows you to define your own custom instances; for example, you can write or buy a custom encoder for ASCII and insert it as a reusable channel in the messaging layer that can be used across various systems. WCF interoperates with existing investments and combines and extends existing Microsoft distributed systems technologies like Enterprise Services, System.Messaging, Microsoft .NET Remoting, ASMX and Web Services Extensions (WSE). This means you can use a single model for different types of application behaviour, which significantly reduces the complexity of application development. WCF also offers interoperability with non-Microsoft applications through support of the WS-I Basic Profile and a number of additional WS-* standards.

Finally, in terms of productivity, WCF can make an order-of-magnitude difference when developing secure, transactional web services. Think of WCF as tens of thousands of lines of code that you would otherwise need to develop, generate and maintain, but that are now provided as part of the base framework. WCF is one of the first core programming frameworks designed from the ground up for service-oriented development.
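
To give a flavor of the programming model, here is a minimal sketch of a self-hosted WCF service. The contract name, address and binding are illustrative, not taken from the article; the point is that the contract stays the same while bindings are swapped underneath.

using System;
using System.ServiceModel;

[ServiceContract]
public interface IGreeter
{
    [OperationContract]
    string Greet(string name);
}

public class Greeter : IGreeter
{
    public string Greet(string name) { return "Hello, " + name; }
}

class HostProgram
{
    static void Main()
    {
        using (ServiceHost host = new ServiceHost(typeof(Greeter),
            new Uri("http://localhost:8000/greeter")))
        {
            // Swap BasicHttpBinding for NetTcpBinding, WSHttpBinding, etc.
            // without touching the service contract or its implementation.
            host.AddServiceEndpoint(typeof(IGreeter),
                new BasicHttpBinding(), string.Empty);
            host.Open();
            Console.WriteLine("Service running; press Enter to stop.");
            Console.ReadLine();
        }
    }
}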

Windows Workflow (WF)

Workflow Foundation is an enterprise-class workflow development framework and engine that has taken declarative workflow mainstream for the first time. WF supports human, system, sequential and state-machine workflows. It provides runtime infrastructure, flexible flow control, long-running and stateful workflows, and runtime and design-time transparency and auditing capabilities for compliance and records management.

Workflow Foundation allows you to define a workflow as a set of activities. Activities are the unit of execution and allow for easy re-use and composition. Basic activities are steps within a workflow while composite activities contain other activities. You can add and remove activities even when a workflow is already in progress allowing you significant flexibility in terms of change. Workflow Foundation provides a base activity library out-of-the-box and a framework for partners and customers to easily author custom activities.

In terms of authoring options, you can use XAML markup only, markup plus code, or code only. The Visual Studio 2005 Designer for Workflow Foundation is available as a downloadable plug-in and provides a drag-and-drop design surface, intuitive graphical tools, integration with the Properties window, debugging, and graphical commenting capabilities.
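
As a feel for the code-only authoring option, here is a minimal sketch of a sequential workflow using the WF 3.0 types; the workflow and host names are my own:

using System;
using System.Threading;
using System.Workflow.Activities;
using System.Workflow.Runtime;

public class HelloWorkflow : SequentialWorkflowActivity
{
    public HelloWorkflow()
    {
        // Activities are the unit of execution; this workflow has one step.
        CodeActivity step = new CodeActivity("SayHello");
        step.ExecuteCode += delegate { Console.WriteLine("Hello from WF"); };
        this.Activities.Add(step);
    }
}

class WorkflowHost
{
    static void Main()
    {
        using (WorkflowRuntime runtime = new WorkflowRuntime())
        {
            AutoResetEvent done = new AutoResetEvent(false);
            runtime.WorkflowCompleted += delegate { done.Set(); };

            WorkflowInstance instance =
                runtime.CreateWorkflow(typeof(HelloWorkflow));
            instance.Start();
            done.WaitOne(); // workflows run asynchronously on the runtime
        }
    }
}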

A number of architects have asked me about the relationship between Workflow Foundation, BizTalk, Microsoft Office SharePoint Server 2007 (MOSS 2007) and Windows SharePoint Services (WSS).

Workflow Foundation (WF) was developed by the same team at Microsoft that developed the BizTalk workflow engine, and is intended to be utilized by BizTalk Server in future versions. WF provides the foundation for implementing workflow scenarios mostly within an application, and between applications in certain cases. BizTalk allows you to automate your business processes, orchestrate processes comprising systems implemented in different technologies through adapters, and provides advanced business activity monitoring capabilities.

In terms of MOSS 2007 and WSS, MOSS 2007 is built on top of WF and provides additional functionalities and features using WF for the base functionality. Windows SharePoint Services provides a subset of the functionality of MOSS 2007 as an add-in to Windows Server. Simply put, WSS provides simple document management capabilities and workflow.

Windows Presentation Foundation (WPF)

Windows Presentation Foundation attempts to bridge the gap between the immersive user experience typically found in the gaming and entertainment industry and the static and hard-to-use world of business. WPF uses XAML extensively to allow you to develop the next generation of interfaces without becoming a graphic designer yourself.

I would encourage you to see a demonstration of a WPF application to understand what I mean by the next-generation user interface. You can, for example, view fifteen of the most precious books held at the British Library (http://www.bl.uk/ttp2/ttp1.html), including Mozart's and da Vinci's handwritten notebooks. This reader is a WPF-based application hosted in an Internet Explorer browser session and is referred to as an XBAP, the technology meant to replace ActiveX browser functionality. The key differentiator for WPF is not the end product, a wonderfully rich interface, but how easy it is to develop and maintain the application code.

From an architectural perspective, WPF maintains a very clear separation between the graphic element and business logic. A designer will use the Expression product line and XAML to build the view, while the developer will use Visual Studio and write code using VB.NET or C#.
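
To keep to a single language here, the following sketch builds a trivial WPF window entirely in C#; in a real project the visual tree below would normally be XAML produced by a designer, with only the event handler living in code:

using System;
using System.Windows;
using System.Windows.Controls;

public class WpfSketch
{
    [STAThread]
    public static void Main()
    {
        Button button = new Button();
        button.Content = "Click me";
        button.Click += delegate { MessageBox.Show("Hello from WPF"); };

        Window window = new Window();
        window.Title = "WPF sketch";
        window.Content = button;

        // Application.Run starts the dispatcher loop and shows the window.
        new Application().Run(window);
    }
}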

Another technology that has received a lot of attention lately is WPF Everywhere (WPF/E), which is now officially called Silverlight. Please note that Silverlight is not part of the .NET 3.0 framework. Silverlight is a cross-browser, cross-platform plug-in with its own runtime for delivering the next generation of Microsoft .NET-based media experiences and rich interactive applications for the Web. You can find more information about Silverlight and view the demonstrations at http://www.microsoft.com/silverlight.

Windows Card Spaces (WCS)

In today's world, everyone carries a number of self-asserted and third-party-issued identities. Examples of these identities include driving licences, credit cards, cinema cards and others. We have control over the information we use to prove our identity to the requesting party. Windows Card Spaces extends the same concept of user control to the digital world. WCS creates an identity meta-system that can significantly improve the way enterprises manage identities within their organization and between different organizations. To understand its potential, consider that it has been praised by one of Microsoft's prominent critics as "one of the most significant contributions to computer security since cryptology."

In the digital world, identity can be expressed as a subject (who), identity claims, and a security token (a digital representation of the subject and claims). WCS uses the concept of self-asserted and managed identities: a self-asserted digital identity card could be used for signing up with a service like Hotmail, whereas a managed identity could be a credit card issued by a bank. The following picture illustrates the protocol that is used to exchange information between the various entities. Please note that in this case the identity provider could use Kerberos, X.509 or a custom mechanism. Similarly, the relying party could use SAML or send the security token using an HTTPS post.

WCS provides an overarching framework for various implementations of identity management technologies to work together. At JavaOne (one of the largest Java conferences in the world), Sun and Microsoft did a joint keynote demonstrating interoperability mechanisms based on WS-* standards. I have posted the link to the demonstration and the toolkit on my blog, mentioned at the end of this article.

Conclusion

The .NET 3.0 Framework opens up a new world of possibilities for both architects and developers. It is intended to make it significantly easier for you to develop, integrate and maintain application-based systems. Microsoft plans to continue this same philosophy of reducing complexity and infrastructure plumbing while concurrently increasing interoperability and standards support in future versions of the .NET Framework.

CLR Garbage Collector

Justin Smith presents the internals of the .NET Garbage Collector and how you should design your types to play nicely with it. Learn about heap allocations, the GC Algorithms, Multiprocessor considerations, and the IDisposable pattern.
Join the conference

Friday, June 15, 2007

Silverlight Architecture - Part1

Silverlight has a few basic properties:
  • It integrates with various browsers on Windows and on the Macintosh.

  • It enables rendering of richer user experiences that are defined by XAML.

  • It renders media (music and video).

  • It enables programming that is consistent with the Web programming model.

  • It is small.


Silverlight was designed to address these properties:

  • Lightweight browser plug-in—Silverlight has Windows and Macintosh modules that are designed to enhance Internet Explorer (versions 6.0 and 7.0), Firefox 2.0, and Safari browsers. The December 2006 CTP for Windows is 1.1 MB in size.

  • Native presentation runtime— Software-based browser enhancement that allows rendering of XAML-based interactive 2-D graphics, text, and media, in addition to the browser native rendering of HTML. XAML can be used inline, in a file, or in a package.

  • Interactive video and audio—Cross-platform independent media runtime that can render Windows Media content (WMV and WMA) in addition to MP3 (will be available after the December 2006 CTP). Video and audio are handled as a media element in XAML, enabling flexibility in their presentation. Furthermore, the media support leverages the huge infrastructure and ecosystem around Windows Media, enabling cost-effective delivery of top-quality media.

  • Programming layer—In consistency with the Web architecture, Silverlight XAML is exposed using a DOM model to JavaScript. That way, AJAX programs can utilize the extended markup rendering capability using the same programming paradigms and practices (on the client and on the server). After the December 2006 CTP, we will also enable a managed code programming model using a subset of full CLR that will enhance the programmability side of the browsers to enable more performant and more scalable Web applications.

Tuesday, June 12, 2007

Google's Application Fabric

Application Fabrics: An Overview
Two major IT trends – commoditization of computing hardware and ubiquity of high-performance networking – have made a new kind of application infrastructure software possible: application fabrics. Application fabrics deliver on the promise of “real-time” grid computing, virtualization and utility computing, and are applicable to the most demanding CPU- and data-intensive applications in the enterprise. An application fabric provides a software-based environment that simultaneously delivers scalability, dependability, manageability and affordability for time-critical applications.

Compared to the traditional grid computing approach, application fabrics create a self-managing, self-healing application environment out of standards-based commodity hardware and operating systems, rather than relying on a set of separately-managed, heterogeneous resources to provide additional computing power. This difference is very important for several reasons. Most importantly, this architectural approach allows application fabrics to provide support for time-sensitive applications that are unable to withstand the latency and unpredictability associated with batch-oriented technologies like traditional grids. Further, this approach allows application fabrics to virtualize the commodity hardware nodes into a single system, enabling developers and administrators to view and manage the hardware as if it were a single computer. Finally, this architecture provides application-level fault-tolerance, rather than depending on the reliability of a collection of heterogeneous infrastructure resources under distributed control.

Putting Application Fabrics to Use
At some point in the future, application fabrics will become a standard deployment option for all applications. At this point in the maturity of the market, application fabrics are being used most frequently to benefit time-critical analytics, high-performance computing and data-processing applications, deployed either as stand-alone applications or as Web services within an SOA environment. These applications can be characterized as CPU- and/or data-intensive in their efforts to provide timely business insights and capabilities that are essential to the ongoing operations of an enterprise.

For time-critical applications that are CPU-intensive, application fabrics provide effortless scaling across a “fabric” of hundreds or even thousands of commodity-grade computers. At the scale often required for these applications, the automated management offered by application fabrics becomes a key consideration, greatly reducing application total cost-of-ownership (TCO).

For time-critical applications that process high volumes of data, reliability is paramount. Organizations cannot afford failure midway through a process or transaction, which might cause errors and inconsistencies throughout multiple systems. For these applications, application fabrics provide the reliability of expensive, high-end systems at the cost of commodity hardware.

For both CPU- and data-intensive time-critical applications, the effortless scaling of application fabric environments enables organizations to bring online only the computing power they need today, with the knowledge they can easily add additional computers later, as needed.

CPU/data-intensive time-critical applications may be deployed as Web services within an SOA environment, rather than as stand-alone applications. In services-oriented environments, business logic is not rigidly associated with a single application, but rather available as Web services to be accessed and assembled into a variety of composite applications, for a variety of audiences. Time-critical applications composed of a large number of services can become very brittle, since the failure of one basic service can cause a chain reaction that brings each of the composite applications that consume that service to a halt. Deploying an application fabric as a key component of a services-oriented environment secures the dependability of individual services and thus ensures the performance of time-critical applications that rely on those services, without the need for additional hardware or software infrastructure.

Google Puts Its Application Fabric To Highly Productive Use
Google’s application fabric underlies its powerful core applications, including its search engine. For each search request, the search application queries a 40+ terabyte index of over 4 billion Web pages to produce search results, which are delivered to end users often at sub-second rates. Google’s applications run on 60,000+ famously inexpensive commodity computers running Linux, and its application fabric manages these tens of thousands of computers as a self-managing and self-healing network that is both extremely scalable and inexpensive considering its capability. The fabric facilitates bringing new machines online to expand capacity and allows dead machines to be swapped out at the system administrators’ convenience, all without interruption of service.

The result of Google’s underlying application fabric is that the company’s executives can work to grow the business, enhance existing services and create new ones, all without concern for the ability of its applications and infrastructure to keep up. And not only can Google’s application fabric keep up, but it can do so with linear cost increases to add capacity, rather than periodic massive overhauls to re-architect for new requirements.

Benefits of Application Fabric Software to Key Stakeholders
Overall, application fabrics benefit enterprises by enabling applications to be simultaneously scalable, dependable, manageable and affordable. These applications can create new capabilities and insights for the business, which drive greater business agility and competitive advantage.

Specifically, application fabrics benefit numerous key stakeholders within an organization:

Application architects and developers: Application architects and developers are in the business of translating business requirements into technology-based solutions. Traditional application deployment approaches require architects to design to the limitations of the infrastructure. Application fabric software frees architects from having to trade off among scalability, dependability, manageability and affordability, thus allowing them to focus on creating maximum business value. With application fabric software, developers are also freed from infrastructure limitations, in particular the need to worry about complicated distributed computing concepts. Instead, developers can write code as if the applications were going to be deployed on a single computer.

Systems administrators: Systems administrators are responsible for deploying and managing applications and their infrastructure, including adding capacity to the infrastructure as the demands on a given application grow. With application fabrics, administrators can treat a network of commodity machines as a virtualized single system, easing deployment and management challenges as all changes to any hardware, software, or applications running within the fabric happen dynamically. Further, application fabric software can detect when new “bare metal” has been added to the fabric’s network, automatically installing the appropriate operating system, fabric software and applications.

Technology executives: IT and engineering executives are responsible for enabling competitive advantage through technology-related initiatives, while minimizing the cost of doing so. Application fabrics provide a dependable environment that IT executives can count on to make strategic applications scalable, dependable, manageable and affordable. As a result, technology executives can bring new capabilities and insights to market faster, driving forward the organization’s ability to outpace the competition. And because they run on commodity-grade hardware and industry-standard operating systems, application fabrics also minimize the cost of deploying and scaling these applications. Another large component of a technology executive’s job is to manage the talent within a technology organization. In the past, these executives were forced to deploy senior development staff to manually build scalability and reliability into application environments. With application fabric software, executives can rely on the fabric layer to provide these qualities, rather than expensive and hard-to-find development talent.

Business executives: Business executives are concerned with the overall success of the business, which requires the agility to stay ahead of the competition. Appistry EAF supports competitive agility by decoupling strategic applications from the limitations of their physical infrastructure. Confident that their fabric-based applications will keep pace, business executives are freed to imagine new capabilities and drive for new insights, thus improving decision-making, providing better value and service to consumers, operating more efficiently, and, ultimately, staying ahead of the competition.

Interviews

Interview: Erik Saltwell on Expression Web


Interview: Jezz Santos about Software Factories

Understanding .NET Code Access Security

My favorite Article on CodeProject by UB

Other Suggested Readings-

A full length article on .NET Code Access Security by UB

Introduction

.NET has two kinds of security:

  1. Role Based Security (not being discussed in this article)
  2. Code Access Security

The Common Language Runtime (CLR) allows code to perform only those operations that the code has permission to perform. CAS is the CLR's security system: it enforces security policy by preventing unauthorized access to protected resources and operations. Using Code Access Security, you can do the following:

  • Restrict what your code can do
  • Restrict which code can call your code
  • Identify code

We'll be discussing these things throughout this article. Before that, you should get familiar with the following key elements of CAS.

  • permissions
  • permission sets
  • code groups
  • evidence
  • policy

Permissions

Permissions represent access to a protected resource or the ability to perform a protected operation. The .NET Framework provides several permission classes, like FileIOPermission (for working with files), UIPermission (permission to use a user interface), SecurityPermission (this is needed to execute the code and can even be used to bypass security), etc.

Permission sets

A permission set is a collection of permissions. You can put FileIOPermission and UIPermission into your own permission set and call it "My_PermissionSet". A permission set can include any number of permissions. FullTrust, LocalIntranet, Internet, Execution and Nothing are some of the built in permission sets in .NET Framework. FullTrust has all the permissions in the world, while Nothing has no permissions at all, not even the right to execute.
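
We'll build such a set with the configuration tool in a moment, but the same grouping can also be sketched in code with the PermissionSet class. The method name below is my own, and the set roughly mirrors the one we create later in the walkthrough:

using System.Security;
using System.Security.Permissions;

static PermissionSet BuildMyPermissionSet()
{
    // Start from an empty set and add individual permissions to it.
    PermissionSet set = new PermissionSet(PermissionState.None);
    set.AddPermission(new FileIOPermission(FileIOPermissionAccess.Read, "C:\\"));
    set.AddPermission(new UIPermission(PermissionState.Unrestricted));
    set.AddPermission(new SecurityPermission(PermissionState.Unrestricted));
    return set;
}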

Code groups

A code group is a logical grouping of code that has a specified condition for membership. Code from http://www.somewebsite.com/ can belong to one code group, code containing a specific strong name can belong to another code group, and code from a specific assembly can belong to yet another. There are built-in code groups like My_Computer_Zone, LocalIntranet_Zone, Internet_Zone etc. Like permission sets, we can create code groups to meet our requirements, based on the evidence provided by the .NET Framework. Site, Strong Name, Zone and URL are some of the types of evidence.

Policy

Security policy is the configurable set of rules that the CLR follows when determining the permissions to grant to code. There are four policy levels - Enterprise, Machine, User and Application Domain - each operating independently of the others. Each level has its own code groups and permission sets. They follow the hierarchy given below.

Figure 1

Okay, enough with the theory, it's time to put the theory into practice.

Quick Example

Let's create a new Windows application. Add two buttons to the existing form. We are going to work with the file system, so add the System.IO namespace.

using System.IO;

Figure 2

Write the following code:

private void btnWrite_Click(object sender, System.EventArgs e)
{
    StreamWriter myFile = new StreamWriter("c:\\Security.txt");
    myFile.WriteLine("Trust No One");
    myFile.Close();
}

private void btnRead_Click(object sender, System.EventArgs e)
{
    StreamReader myFile = new StreamReader("c:\\Security.txt");
    MessageBox.Show(myFile.ReadLine());
    myFile.Close();
}

For our example to work, the assembly's version number must stay the same the whole time. Make sure you set it to a fixed value; otherwise the default wildcard version gets incremented every time you compile the code. We're going to sign this assembly with a strong name, which is used as evidence to identify our code, and the version number is part of that strong name.

[assembly: AssemblyVersion("1.0.0.0")]

That's it ... nothing fancy. This will write to a file named Security.txt on the C: drive. Now run the code: it should create the file and write the line, and everything should be fine ... unless of course you don't have a C: drive. Now what we are going to do is put our assembly into a code group and set some permissions. Don't delete the Security.txt file yet, we are going to need it later. Here we go.

.NET Configuration Tool

We can do this in two ways: from the .NET Configuration Tool, or from the command prompt using caspol.exe. First we'll do it using the .NET Configuration Tool. Go to Control Panel --> Administrative Tools --> Microsoft .NET Framework Configuration. You can also type "mscorcfg.msc" at the .NET command prompt. You can do cool things with this tool ... but right now we are only interested in setting code access security.

Figure 3

Creating a new permission set

Expand the Runtime Security Policy node. You can see the security policy levels - Enterprise, Machine and User. We are going to change the security settings in Machine policy. First we are going to create our own custom permission set. Right click the Permission Sets node and choose New. Since I couldn't think of a catchy name, I'm going to name it MyPermissionSet.

Figure 4

In the next screen, we can add permissions to our permission set. In the left panel, we can see all the permissions supported by the .NET Framework. Now get the properties of the File IO permission. Set the File Path to C:\ and check Read only; don't check the others. So we didn't give write permission, we only gave read permission. Please note that there is another option saying "Grant assemblies unrestricted access to the file system." If this is selected, anything can be done without any restrictions for that particular resource, in this case the file system.

Figure 5

Now we have to add two more permissions - Security and User Interface. Just add them and remember to set the "Grant assemblies unrestricted access". I'll explain these properties soon. Without the Security permission, we don't have the right to execute our code, and without the User Interface permission, we won't be able to show a UI. If you're done adding these three permissions, you can see there is a new permission set created, named MyPermissionSet.

Creating a new code group

Now we will create a code group and set some conditions, so our assembly will be a member of that code group. Notice that in the code groups node, All_Code is the parent node. Right-click the All_Code node and choose New. You'll be presented with the Create Code Group wizard. I'm going to name it MyCodeGroup.

Figure 6

In the next screen, you have to provide a condition type for the code group. These condition types are the evidence I mentioned earlier. For this example, we are going to use the Strong Name condition type. First, sign your assembly with a strong name and build the project. Now press the Import button and select your assembly. The Public Key, Name and Version will be extracted from the assembly, so we don't have to worry about them. Now move on to the next screen. We have to specify a permission set for our code group. Since we have already created one - MyPermissionSet - select it from the list box.

Figure 7

Exclusive and LevelFinal

If you haven't messed around with the default .NET configuration security settings, your assembly already belongs to another built-in code group - My_Computer_Zone. When permissions are calculated, if a particular assembly falls into more than one code group within the same policy level, the final permissions for that assembly will be the union of all the permissions in those code groups. I'll explain how to calculate permissions later; for the time being we need to run our assembly only with our permission set, and that is MyPermissionSet, associated with MyCodeGroup. So we have to set another property to do just that. Right-click the newly created MyCodeGroup node and select Properties. Check the check box saying "This policy level will only have the permissions from the permission set associated with this code group." This is called the Exclusive attribute. If this is checked, the runtime will never grant more permissions than the permissions associated with this code group. The other option is called LevelFinal. These two properties come into action when calculating permissions and they are explained below in detail.

Figure 8

I know we have set lots of properties, but it'll all make sense at the end (hopefully).

Okay .. it's time to run the code. What we have done so far is, we have put our code into a code group and given permissions only to read from C: drive. Run the code and try both buttons. Read should work fine, but when you press Write, an exception will be thrown because we didn't set permission to write to C: drive. Below is the error message that you get.

Figure 9

So thanks to Code Access Security, this kind of restriction to a resource is possible. There's a whole lot more that you can do with Code Access Security, which we're going to discuss in the rest of this article.

Functions of Code Access Security

According to the documentation, Code Access Security performs the following functions (straight from the documentation):

  • Defines permissions and permission sets that represent the right to access various system resources.
  • Enables administrators to configure security policy by associating sets of permissions with groups of code (code groups).
  • Enables code to request the permissions it requires in order to run, as well as the permissions that would be useful to have, and specifies which permissions the code must never have.
  • Grants permissions to each assembly that is loaded, based on the permissions requested by the code and on the operations permitted by security policy.
  • Enables code to demand that its callers have specific permissions. Enables code to demand that its callers possess a digital signature, thus allowing only callers from a particular organization or site to call the protected code.
  • Enforces restrictions on code at run time by comparing the granted permissions of every caller on the call stack to the permissions that callers must have.

We have already done the top two, and that is the administrative part. There's a separate namespace that we haven't looked at yet - System.Security, which is dedicated to implementing security.

Security Namespace

These are the main classes in System.Security namespace:

  • CodeAccessPermission - Defines the underlying structure of all code access permissions.
  • PermissionSet - Represents a collection that can contain many different types of permissions.
  • SecurityException - The exception that is thrown when a security error is detected.

These are the main classes in System.Security.Permissions namespace:

  • EnvironmentPermission - Controls access to system and user environment variables.
  • FileDialogPermission - Controls the ability to access files or folders through a file dialog.
  • FileIOPermission - Controls the ability to access files and folders.
  • IsolatedStorageFilePermission - Specifies the allowed usage of a private virtual file system.
  • IsolatedStoragePermission - Represents access to generic isolated storage capabilities.
  • ReflectionPermission - Controls access to metadata through the System.Reflection APIs.
  • RegistryPermission - Controls the ability to access registry variables.
  • SecurityPermission - Describes a set of security permissions applied to code.
  • UIPermission - Controls the permissions related to user interfaces and the clipboard.

You can find more permission classes in other namespaces. For example, SocketPermission and WebPermission in System.Net namespace, SqlClientPermission in System.Data.SqlClient namespace, PerformanceCounterPermission in System.Diagnostics namespace etc. All these classes represent a protected resource.

Next, we'll see how we can use these classes.

Declarative vs. Imperative

You can use two different kinds of syntax when coding, declarative and imperative.

Declarative syntax

Declarative syntax uses attributes to mark the method, class or the assembly with the necessary security information. So when compiled, these are placed in the metadata section of the assembly.

[FileIOPermission(SecurityAction.Demand, Unrestricted = true)]
public class MyClass
{
    public MyClass() { ... }         // all these methods demand
    public void MyMethod_A() { ... } // unrestricted access to
    public void MyMethod_B() { ... } // the file system
}

Imperative syntax

Imperative syntax uses runtime method calls to create new instances of security classes.

public class MyClass
{
    public MyClass() { }

    public void Method_A()
    {
        // Do Something

        FileIOPermission myPerm =
            new FileIOPermission(PermissionState.Unrestricted);
        myPerm.Demand();
        // rest of the code won't get executed if this fails

        // Do Something
    }

    // No demands
    public void Method_B()
    {
        // Do Something
    }
}

The main difference between the two is that declarative calls are evaluated at compile time, while imperative calls are evaluated at runtime. (Here, compile time means during JIT compilation, from IL to native code.)

There are several actions that can be taken against permissions.

First, let's see how we can use the declarative syntax. Take the UIPermission class. Declarative syntax means using attributes. So we are actually using the UIPermissionAttribute class. When you refer to the MSDN documentation, you can see these public properties:

  • Action - one of the values in SecurityAction enum (common)
  • Unrestricted - unrestricted access to the resource (common)
  • Clipboard - type of access to the clipboard, one of the values in UIPermissionClipboard enum (UIPermission specific)
  • Window - type of access to the window, one of the values in UIPermissionWindow enum (UIPermission specific).

Action and Unrestricted properties are common to all permission classes. Clipboard and Window properties are specific to UIPermission class. You have to provide the action that you are taking and the other properties that are specific to the permission class you are using. So in this case, you can write like the following:

[UIPermission(SecurityAction.Demand,
Clipboard=UIPermissionClipboard.AllClipboard)]

or with both Clipboard and Window properties:

[UIPermission(SecurityAction.Demand,
Clipboard=UIPermissionClipboard.AllClipboard,
Window=UIPermissionWindow.AllWindows)]

If you want to declare a permission with unrestricted access, you can do it as the following:

[UIPermission(SecurityAction.Demand, Unrestricted=true)]

When using imperative syntax, you can use the constructor to pass the values and later call the appropriate action. We'll take the RegistryPermission class.

RegistryPermission myRegPerm =
new RegistryPermission(RegistryPermissionAccess.AllAccess,
"HKEY_LOCAL_MACHINE\\Software");
myRegPerm.Demand();

If you want unrestricted access to the resource, you can use PermissionState enum in the following way:

RegistryPermission myRegPerm = new
RegistryPermission(PermissionState.Unrestricted);
myRegPerm.Demand();

This is all you need to know to use any permission class in the .NET Framework; the same pattern works with the permission classes from other namespaces, as the short sketch below shows. Then we'll discuss the actions in detail.
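
Here is that cross-namespace sketch; the host and port are illustrative:

using System.Net;

SocketPermission sockPerm = new SocketPermission(
    NetworkAccess.Connect, TransportType.Tcp, "www.somewebsite.com", 80);
sockPerm.Demand(); // throws SecurityException if a caller lacks connect rights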

Security Demands

Demands are used to ensure that every caller who calls your code (directly or indirectly) has been granted the demanded permission. This is accomplished by performing a stack walk. What .. a cat walk? No, that's what your girlfriend does. I mean a stack walk. When a permission is demanded, the runtime's security system walks the call stack, comparing the granted permissions of each caller to the permission being demanded. If any caller in the call stack is found without the demanded permission, then a SecurityException is thrown. Please look at the following figure, which is taken from the MSDN documentation.

Figure 10

Different assemblies as well as different methods in the same assembly are checked by the stack walk.

Now back to demands. These are the three types of demands.

  • Demand
  • Link Demand
  • Inheritance Demand

Demand

Try this sample code. We didn't use the security namespaces before, but we are going to use them now.

using System.Security;
using System.Security.Permissions;

Add another button to the existing form.

private void btnFileRead_Click(object sender, System.EventArgs e)
{
try
{
InitUI(1);
}
catch (SecurityException err)
{
MessageBox.Show(err.Message,"Security Error");
}
catch (Exception err)
{
MessageBox.Show(err.Message,"Error");
}
}

InitUI just calls the ShowUI function. Note that it has been denied permission to read the C: drive.

// Access is denied for this function to read from C: drive
// Note: Using declarative syntax
[FileIOPermission(SecurityAction.Deny,Read="C:\\")]
private void InitUI(int uino)
{
// Do some initializations
ShowUI(uino); // call ShowUI
}

The ShowUI function takes uino and shows the appropriate UI.

private void ShowUI(int uino)
{
switch (uino)
{
case 1: // That's our FileRead UI
ShowFileReadUI();
break;
case 2:
// Show someother UI
break;
}
}

ShowFileReadUI shows the UI related to reading files.

private void ShowFileReadUI()
{
MessageBox.Show("Before calling demand");
FileIOPermission myPerm = new
FileIOPermission(FileIOPermissionAccess.Read, "C:\\");
myPerm.Demand();
// All callers must have read permission to C: drive
// Note: Using imperative syntax

// code to show UI
MessageBox.Show("Showing FileRead UI");
// This is executed only if the Demand is successful.
}

I know that this is a silly example, but it's enough to do the job.

Now run the code. You should get the "Before calling demand" message, and right after that the custom error message - "Security Error". What went wrong? Look at the following figure:

Figure 11

We have denied read permission for the InitUI method. So when ShowFileReadUI demands read permission to the C: drive, it causes a stack walk, finds out that not every caller has been granted the demanded permission, and throws an exception. Just comment out the Deny statement in the InitUI method and it should work fine, because then all the callers have the demanded permission.

Note that according to the documentation, most classes in .NET Framework already have demands associated with them. For example, take the StreamReader class. StreamReader automatically demands FileIOPermission. So placing another demand just before it causes an unnecessary stack walk.

Link Demand

A link demand only checks the immediate caller (direct caller) of your code. That means it doesn't perform a stack walk. Linking occurs when your code is bound to a type reference, including function pointer references and method calls. A link demand can only be applied declaratively.

[FileIOPermission(SecurityAction.LinkDemand,Read="C:\\")]
private void MyMethod()
{
// Do Something
}

Inheritance Demand

Inheritance demands can be applied to classes or methods. If it is applied to a class, then all the classes that derive from this class must have the specified permission.

[SecurityPermission(SecurityAction.InheritanceDemand)]
private class MyClass
{
    // whatever
}

If it is applied to a method, then all the classes that derive from this class must have the specified permission to override that method.

private class MyClass
{
    public MyClass() { }

    [SecurityPermission(SecurityAction.InheritanceDemand)]
    public virtual void MyMethod()
    {
        // Do something
    }
}

Like link demands, inheritance demands are also applied using declarative syntax only.

Requesting Permissions

Imagine a situation like this. You have given the user a nice form with 20+ fields to enter, and at the end all the information is saved to a text file. The user fills in all the necessary fields, and when he tries to save, he gets a nice message saying the application doesn't have the necessary permission to create a text file! Of course you can try to calm him down by explaining that all this happened because of a thing called a stack walk .. caused by a demand .. and if you are really lucky you can even get away with blaming Microsoft (believe me ... sometimes it works!).

Wouldn't it be easier if you could request the permissions prior to loading the assembly? Yes, you can. There are three ways to do that in Code Access Security.

  • RequestMinimum
  • RequestOptional
  • RequestRefuse

Note that these can only be applied using declarative syntax at the assembly level, not to methods or classes. The best thing about requesting permissions is that the administrator can view the requested permissions after the assembly has been deployed, using permview.exe (the Permissions View tool), so whatever permissions are needed can be granted.

RequestMinimum

You can use RequestMinimum to specify the permissions your code must have in order to run. The code will only be allowed to run if all the required permissions are granted by the security policy. In the following code fragment, a request has been made for permission to write to a key in the registry. If this is not granted by the security policy, the assembly won't even get loaded. As mentioned above, this kind of request can only be made at the assembly level, declaratively.

using System;
using System.Windows.Forms;
using System.IO;

using System.Security;
using System.Security.Permissions;

// placed in assembly level
// using declarative syntax
[assembly:RegistryPermission(SecurityAction.RequestMinimum,
Write="HKEY_LOCAL_MACHINE\\Software")]

namespace SecurityApp
{
// Rest of the implementation
}

RequestOptional

Using RequestOptional, you can specify the permissions your code can use, but does not require in order to run. If your code has not been granted the optional permissions, you must handle any exceptions that are thrown while code segments that need those optional permissions are executing. There are certain things to keep in mind when working with RequestOptional.

If you use RequestOptional with RequestMinimum, no other permissions will be granted except these two, if allowed by the security policy. Even if the security policy allows additional permissions to your assembly, they won't be granted. Look at this code segment:

[assembly:FileIOPermission(SecurityAction.RequestMinimum, Read="C:\\")]
[assembly:FileIOPermission(SecurityAction.RequestOptional, Write="C:\\")]

The only permissions that this assembly will have are read and write permissions to the file system. What if it needs to show a UI? Then the assembly still gets loaded but an exception will be thrown when the line that shows the UI is executing, because even though the security policy allows UIPermission, it is not granted to this assembly.

Note that, unlike RequestMinimum, RequestOptional doesn't prevent the assembly from being loaded; instead, an exception is thrown at run time if an optional permission that the code needs has not been granted.

RequestRefuse

You can use RequestRefuse to specify the permissions that you want to ensure will never be granted to your code, even if they are granted by the security policy. If your code only wants to read files, then refusing write permission would ensure that your code cannot be misused by a malicious attack or a bug to alter files.

[assembly:FileIOPermission(SecurityAction.RequestRefuse, Write="C:\\")]

Overriding Security

Sometimes you need to override certain security checks. You can do this by altering the behavior of a permission stack walk using these three methods. They are referred to as stack walk modifiers.

  • Assert
  • Deny
  • PermitOnly

Assert

You can call the Assert method to stop the stack walk from going beyond the current stack frame. So the callers above the method that has used Assert are not checked. If you can trust the upstream callers, then using Assert would do no harm. You can use the previous example to test this. Modify the code in the ShowUI method by adding the new lines shown below:

private void ShowUI(int uino)
{
// using imperative syntax to create an instance of FileIOPermission
FileIOPermission myPerm = new
FileIOPermission(FileIOPermissionAccess.Read, "C:\\");
myPerm.Assert(); // don't check above stack frames.

switch (uino)
{
case 1: // That's our FileRead UI
ShowFileReadUI();
break;
case 2:
// Show someother UI
break;
}

CodeAccessPermission.RevertAssert(); // cancel assert
}

Make sure that the Deny statement is still there in InitUI method. Now run the code. It should be working fine without giving any exceptions. Look at the following figure:

Figure 12

Even though InitUI doesn't have the demanded permission, it is never checked, because the stack walk stops at ShowUI. Look at the last line. RevertAssert is a static method of CodeAccessPermission. It is used after an Assert to cancel the Assert statement. So if the code below RevertAssert accesses some protected resource, a normal stack walk is performed and all callers are checked. If there's no Assert for the current stack frame, then RevertAssert has no effect. It is a good practice to place the RevertAssert in a finally block, so it will always get called.
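
As a minimal sketch, that pattern looks like this (the resource access is a placeholder):

FileIOPermission myPerm =
    new FileIOPermission(FileIOPermissionAccess.Read, "C:\\");
myPerm.Assert();
try
{
    // access the protected resource here
}
finally
{
    CodeAccessPermission.RevertAssert(); // always cancels the Assert
}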

Note that to use Assert, the Assertion flag of the SecurityPermission should be set.

Warning from Microsoft!: If asserts are not handled carefully, they may lead to luring attacks, where malicious code can call our code through trusted code.

Deny

We have used this method already in the previous example. The following code sample shows how to deny permission to connect to a restricted website using imperative syntax:

WebPermission myWebPermission =
new WebPermission(NetworkAccess.Connect,
"http://www.somewebsite.com");
myWebPermission.Deny();

// Do some work

CodeAccessPermission.RevertDeny(); // cancel Deny

RevertDeny is used to remove a previous Deny statement from the current stack frame.

PermitOnly

You can use PermitOnly when you need to restrict the permissions granted by security policy. The following code fragment shows how to use it imperatively. When PermitOnly is used, it means only the resources you specify can be accessed.

WebPermission myWebPermission =
new WebPermission(NetworkAccess.Connect,
"http://www.somewebsite.com");
myWebPermission.PermitOnly();

// Do some work

CodeAccessPermission.RevertPermitOnly(); // cancel PermitOnly

You can use PermitOnly instead of Deny when it is more convenient to describe resources that can be accessed instead of resources that cannot be accessed.

Calculating Permissions

In the first example, we configured the machine policy level to set permissions for our code. Now we'll see how those permissions are calculated and granted by the runtime when your code belongs to more than one code group in the same policy level or in different policy levels.

The CLR computes the allowed permission set for an assembly in the following way:

  1. Starting from the All_Code code group, all the child groups are searched to determine which groups the code belongs to, using identity information provided by the evidence. (If the parent group doesn't match, then that group's child groups are not checked.)
  2. When all matches are identified for a particular policy level, the permissions associated with those groups are combined in an additive manner (union).
  3. This is repeated for each policy level and permissions associated with each policy level are intersected with each other.

So all the permissions associated with matching code groups in one policy level are added together (union), and the result for each policy level is intersected with one another. An intersection is used to ensure that policy lower down in the hierarchy cannot add permissions that were not granted by a higher level. For example, if the matching code groups at the Machine level grant {FileIO, UI} and those at the User level grant {UI, Registry}, the assembly ends up with just {UI}.

Look at the following figure, taken from an MSDN article, to get a better understanding:

Figure 13

Have a quick look at the All_Code code group's associated permission set in Machine policy level. Hope it makes sense by now.

Figure 14

The runtime computes the allowed permission set differently if the Exclusive or LevelFinal attribute is applied to the code group. If you are not suffering from short-term memory loss, you should remember that we set the Exclusive attribute for our code group - MyCodeGroup - in the earlier example.

Here's what happens if these attributes are set.

  • Exclusive - The permissions associated with the code group marked as Exclusive are taken as the only permissions for that policy level. Permissions associated with other code groups are not considered when computing permissions.
  • LevelFinal - Policy levels (except the application domain level) below the one containing this code group are not considered when checking code group membership and granting permissions.

Now you should have a clear understanding why we set the Exclusive attribute earlier.

Nice Features in .NET Configuration Tool

There are some nice features in the .NET Configuration Tool. Just right-click the Runtime Security Policy node and you'll see what I'm talking about.

Figure 15

Among other options there are two important ones.

  • Evaluate Assembly - This can be used to find out which code group(s) a particular assembly belongs to, or which permissions it has.
  • Create Deployment Package - This wizard will create a policy deployment package. Just choose the policy level and the wizard will wrap it into a Windows Installer package (.msi file), so whatever code groups and permissions exist on your development PC can be quickly transferred to any other machine without any headache.

Tools

Permissions View Tool - permview.exe

The Permissions View tool is used to view the minimal, optional, and refused permission sets requested by an assembly. Optionally, you can use permview.exe to view all declarative security used by an assembly. Please refer to the MSDN documentation for additional information.

Examples:

  • permview SecurityApp.exe - Displays the permissions requested by the assembly SecurityApp.exe.

Code Access Security Policy Tool - caspol.exe

The Code Access Security Policy tool enables users and administrators to modify security policy for the machine policy level, the user policy level and the enterprise policy level. Please refer to the MSDN documentation for additional information.

Examples:

Here's the output when you run "caspol -listgroups"; this lists the code groups that belong to the default policy level, the Machine level.

Figure 16

Note that the label "1." is for the All_Code node because it is the parent node. Its child nodes are labeled "1.x", and their child nodes are labeled "1.x.x" ... get the picture?

  • caspol -listgroups - Displays the code groups
  • caspol -machine -addgroup 1. -zone Internet Execution - Adds a child code group to the root of the machine policy code group hierarchy. The new code group is a member of the Internet zone and is associated with the Execution permission set.
  • caspol -user -chggroup 1.2. Execution - Changes the permission set in the user policy of the code group labeled 1.2. to the Execution permission set.
  • caspol -security on - Turns code access security on.
  • caspol -security off - Turns code access security off.

Summary

  • Using .NET Code Access Security, you can restrict what your code can do, restrict which code can call your code and identify code.
  • There are four policy levels - Enterprise, Machine, User and Application Domain - each containing code groups with associated permissions.
  • Can use declarative syntax or imperative syntax.
  • Demands can be used to ensure that every caller has the demanded permission.
  • Requests can be used to request (or refuse) permissions at grant time.
  • Granted permissions can be overridden.

That's it.


I suggest that you go through the whole article one more time, just to make sure you didn't miss anything. If it's still not clear, don't worry, it's not your fault, it's my fault. :)

Happy Coding !!!