Linked List (How to develop with C#)

AN OBJECT-ORIENTED LINKED LIST DESIGN
Our design of a linked list will involve at least two classes. We'll create a Node class and instantiate a Node object each time we add a node to the list. The nodes in the list are connected via references to other nodes. These references are set using methods created in a separate LinkedList class. Let's start by looking at the design of the Node class.
The Node Class
A node is made up of two data members: Element, which stores the node's data; and Link, which stores a reference to the next node in the list. We'll use Object for the data type of Element, just so we don't have to worry about what kind of data we store in the list. The data type for Link is Node, which seems strange but actually makes perfect sense. Since we want the link to point to the next node, and we use a reference to make the link, we have to assign a Node type to the link member.
To finish up the definition of the Node class, we need at least two constructor methods. We definitely want a default constructor that creates an empty Node, with both the Element and Link members set to null. We also need a parametrized constructor that assigns data to the Element member and sets the Link member to null.
Here’s the code for the Node class:

public class Node
{
    public Object Element;
    public Node Link;

    public Node()
    {
        Element = null;
        Link = null;
    }

    public Node(Object theElement)
    {
        Element = theElement;
        Link = null;
    }
}

The LinkedList Class
The LinkedList class is used to create the linkage for the nodes of our linked list. The class includes several methods for adding nodes to the list, removing nodes from the list, traversing the list, and finding a node in the list. We also need a constructor method that instantiates a list. The only data member in the class is the header node.

public class LinkedList
{
    protected Node header;

    public LinkedList()
    {
        header = new Node("header");
    }
    . . .
}

The header node starts out with its Link field set to null. When we add the first node to the list, the header node’s Link field is assigned a reference to the new node, and the new node’s Link field is assigned the null value.

To insert a new node after an existing node

To do this, we first need to locate the "after" node. We create a private method, Find, that searches through the Element field of each node until a match is found.

private Node Find(Object item)
{
    Node current = header;
    while (current.Element != item)
        current = current.Link;
    return current;
}

Once we’ve found the “after” node, the next step is to set the new node’s Link field to the Link field of the “after” node, and then set the “after” node’s Link field to a reference to the new node. Here’s how it’s done:

public void Insert(Object newItem, Object after)
{
    Node current = Find(after);
    Node newNode = new Node(newItem);
    newNode.Link = current.Link;
    current.Link = newNode;
}


Removing a node from a linked list

To remove a node, we simply change the link of the node that points to the removed node so that it points to the node after the removed node. To find the node that precedes the one being removed, we write another private method, FindPrevious:

private Node FindPrevious(Object n)
{
    Node current = header;
    while (current.Link != null && current.Link.Element != n)
        current = current.Link;
    return current;
}

Now we’re ready to see how the code for the Remove method looks:

public void Remove(Object n)
{
    Node p = FindPrevious(n);
    if (p.Link != null)
        p.Link = p.Link.Link;
}

PrintList

public void PrintList()
{
    Node current = header;
    while (current.Link != null)
    {
        Console.WriteLine(current.Link.Element);
        current = current.Link;
    }
}
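Putting the pieces together, here is a short driver that exercises the class. So the sample compiles on its own, it repeats the Node and LinkedList code from above (with Find comparing the Element field); the Contents helper is an addition of mine for easy checking, not part of the original design.

```csharp
using System;

public class Node
{
    public Object Element;
    public Node Link;
    public Node() { Element = null; Link = null; }
    public Node(Object theElement) { Element = theElement; Link = null; }
}

public class LinkedList
{
    protected Node header;
    public LinkedList() { header = new Node("header"); }

    private Node Find(Object item)
    {
        Node current = header;
        while (current.Element != item)
            current = current.Link;
        return current;
    }

    public void Insert(Object newItem, Object after)
    {
        Node current = Find(after);
        Node newNode = new Node(newItem);
        newNode.Link = current.Link;
        current.Link = newNode;
    }

    private Node FindPrevious(Object n)
    {
        Node current = header;
        while (current.Link != null && current.Link.Element != n)
            current = current.Link;
        return current;
    }

    public void Remove(Object n)
    {
        Node p = FindPrevious(n);
        if (p.Link != null)
            p.Link = p.Link.Link;
    }

    // Demo-only helper: joins the elements into one string for easy checking.
    public string Contents()
    {
        string s = "";
        Node current = header;
        while (current.Link != null)
        {
            s += current.Link.Element + " ";
            current = current.Link;
        }
        return s.Trim();
    }
}

class ListDemo
{
    static void Main()
    {
        LinkedList list = new LinkedList();
        list.Insert("Milk", "header");  // the first node goes after the header
        list.Insert("Bread", "Milk");
        list.Insert("Eggs", "Bread");
        list.Remove("Bread");
        Console.WriteLine(list.Contents());  // Milk Eggs
    }
}
```

Note that Find compares Object references, so this works with interned string literals; a production version would compare with Equals.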

Linked List (How it works)

A linked list is a collection of class objects called nodes. Each node is linked to its successor node in the list using a reference to that successor. A node is made up of a field for storing data and a field for the node reference. The reference to another node is called a link. The example list used below is a grocery list in which "Milk" links to "Bread", which links to "Eggs", and so on.

A major difference between an array and a linked list is that whereas the elements in an array are referenced by position (the index), the elements of a linked list are referenced by their relationship to the other elements of the list. From the example above we can say that "Bread" follows "Milk", not that "Bread" is in the second position. Moving through a linked list involves following the links from the beginning node to the ending node.
Another thing to notice is that we mark the end of a linked list by pointing to the value null. Since we are working with class objects in memory, we use the null object to denote the end of the list.
Marking the beginning of a list can be a problem in some cases. It is common in many linked list implementations to include a special node, called the "header", to denote the beginning of a linked list. The linked list in the figure below is redrawn with a header node:
Insertion and Removal of an Item from a Linked List
Insertion becomes a very efficient task when using a linked list. All that is involved is changing the link of the node previous to the inserted node to point to the inserted node, and setting the link of the new node to point to the node the previous node pointed to before the insertion. In the figure below, the item "Cookies" is added to the linked list after "Eggs".
Removing an item from a linked list is just as easy. We simply redirect the link of the node before the deleted node to point to the node the deleted node points to, and set the deleted node's link to null. In the diagram below, we remove "Bacon" from the linked list.


Pointers and unsafe code in c#

Introduction

We will see that C# allows suspending the verification of code by the CLR to allow developers to directly access memory using pointers. Hence with C#, you can complete, in a standard way, certain optimizations which were only possible within unmanaged development environments such as C++. These optimizations concern, for example, the processing of large amounts of data in memory such as bitmaps.

Pointers and unsafe code

C++ does not know the notion of code management. This is one of the advantages of C++ as it allows the use of pointers and thus allows developers to write optimized code which is closer to the target machine.

This is also a disadvantage of C++ since the use of pointers is cumbersome and potentially dangerous, significantly increasing the development effort and maintenance required.

Before the .NET platform, 100% of the code executed on the Windows operating system was unmanaged. This means the executable contains the code directly in machine instructions which are compatible with the type of processor (i.e. machine language code). The introduction of the managed execution mode with the .NET platform is revolutionary. The main sources of hard to track bugs are detected and resolved by the CLR. Amongst these:

  • Array access overflows (Now dynamically managed by the CLR).
  • Memory leaks (Now mostly managed by the garbage collector).
  • The use of an invalid pointer. This problem is solved in a radical way, as the manipulation of pointers is forbidden in managed mode.

The CLR knows how to manipulate three kinds of pointers:

  • Managed pointers. These pointers can point to data contained in the object heap managed by the garbage collector. These pointers are not used explicitly by the C# code; they are, however, used implicitly by the C# compiler when it compiles methods with out and ref arguments.

  • Unmanaged function pointers. The pointers are conceptually close to the notion of delegate. We will discuss them at the end of this article.

  • Unmanaged pointers. These pointers can point to any data contained in the user address space of the process. The C# language allows this type of pointer to be used in zones of code considered unsafe. The IL code emitted by the C# compiler for these zones makes use of specialized IL instructions, and their effect on the memory of the process cannot be verified by the JIT compiler of the CLR. Consequently, a malicious user could take advantage of unsafe code regions to accomplish malicious actions. To counter this weakness, the CLR will only allow such code to execute at run-time if it has the SkipVerification CAS meta-permission.

Since it allows the memory of a process to be manipulated directly through an unmanaged pointer, unsafe code is particularly useful for optimizing certain processing of large amounts of data stored in structures.
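As a sketch of the kind of optimization meant here (the class and method names are my own), an unsafe loop can walk a buffer with a raw pointer instead of paying an array bounds check on every access. This must be compiled with the /unsafe option:

```csharp
using System;

class UnsafeSum
{
    // Walks the buffer with a raw pointer instead of indexed access.
    // Requires compiling with the /unsafe option.
    public static unsafe int Sum(int[] data)
    {
        int total = 0;
        fixed (int* p = data)        // pin the array so the GC cannot move it
        {
            int* current = p;
            for (int i = 0; i < data.Length; i++)
                total += *current++; // no per-access bounds check
        }
        return total;
    }

    static void Main()
    {
        int[] data = new int[1000];
        for (int i = 0; i < data.Length; i++)
            data[i] = i;
        Console.WriteLine(Sum(data)); // 499500
    }
}
```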

Compilation options to allow unsafe code

Unsafe code must be used on purpose, and you must provide the /unsafe option to the csc.exe compiler to tell it that you are aware that the code you wish to compile contains zones which will be seen as unverifiable by the JIT compiler. Visual Studio offers the "Allow unsafe code" project property (on the Build page) to indicate that you wish to use this compiler option.

Declaring unsafe code in C#

In C#, the unsafe keyword lets the compiler know when you will use unsafe code. It can be used in three situations:

  • Before the declaration of a class or structure. In this case, all the methods of the type can use pointers.

  • Before the declaration of a method. In this case, the pointers can be used within the body of this method and in its signature.

  • Within the body of a method (static or not). In this case, pointers are only allowed within the marked block of code. For example:

    unsafe
    {
    ...
    }

Let us mention that if a method accepts at least one pointer as an argument or as a return value, the method (or its class) must be marked as unsafe, and all regions of code calling this method must also be marked as unsafe.


Using pointers in C#

Each object, whether it is a value or reference type instance, has a memory address at which it is physically located in the process. This address is not necessarily constant during the lifetime of the object, as the garbage collector can physically move objects stored in the heap.

.NET types that support pointers

For certain types, there is a dual type, the unmanaged pointer type which corresponds to the managed type. A pointer variable is in fact the address of an instance of the concerned type. The set of types which authorizes the use of pointers limits itself to all value types, with the exception of structures with at least one reference type field. Consequently, only instances of the following types can be used through pointers: primitive types; enumerations; structures with no reference type fields; pointers.
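A short illustration of this rule (the struct names are invented for the example): a struct containing only value-type fields can be pointed to, while one containing a string field cannot. Compile with /unsafe:

```csharp
using System;

// Only value types with no reference-type fields can be pointed to.
struct Coord  { public int X; public int Y; }        // pointers allowed
struct Tagged { public int X; public string Name; }  // string field: no pointers

class PointerTypes
{
    public static unsafe int SumViaPointer()
    {
        Coord c;
        c.X = 3;
        c.Y = 4;
        Coord* pc = &c;   // fine: Coord contains only value-type fields
        // Tagged* pt;    // would not compile: CS0208 (managed type)
        return pc->X + pc->Y;
    }

    static void Main()
    {
        Console.WriteLine(SumViaPointer()); // 7
    }
}
```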

Declaring pointers

A pointer might point to nothing. In this case, it is extremely important that its value be set to null. In fact, the majority of bugs due to pointers come from pointers which are not null but which point to invalid data. The declaration of a pointer to the type FooType is done as follows:

FooType * pointer;

For example:

long * pAnInteger = null;

Note that the declaration...

int * p1,p2;

... makes both p1 and p2 pointers to an integer, because in C# the * is part of the type. This differs from C and C++, where the * binds to the variable name and p2 would be a plain int.
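A quick sketch to confirm this behavior (the class and method names are illustrative; compile with /unsafe):

```csharp
using System;

class DeclDemo
{
    public static unsafe int Demo()
    {
        int a = 1, b = 2;
        int* p1, p2;   // in C#, both p1 and p2 are pointers to int
        p1 = &a;
        p2 = &b;
        return *p1 + *p2;
    }

    static void Main()
    {
        Console.WriteLine(Demo()); // 3
    }
}
```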


Now let's see the first program

Program 1


using System;

class MyClass {
public static void Main() {
int iData = 10;
int* pData = &iData;
Console.WriteLine("Data is " + iData);
Console.WriteLine("Address is " + (int)pData );
}
}

Here I use a pointer in this program. Now compile this program: the compiler gives an error, because pointers may only be used in an unsafe context.

Now let's change the program a little bit and add unsafe modifier with the function.

Program 2

using System;

class MyClass {
public unsafe static void Main() {
int iData = 10;
int* pData = &iData;
Console.WriteLine("Data is " + iData);
Console.WriteLine("Address is " + (int)pData );
}
}

The output (the address will vary from run to run) is:

Data is 10
Address is 1244316

It is not necessary to put the unsafe modifier on the function; we can instead define a block of unsafe code. Let's change the program a little bit more.

Program 3


using System;

class MyClass {
public static void Main() {
unsafe {
int iData = 10;
int* pData = &iData;
Console.WriteLine("Data is " + iData);
Console.WriteLine("Address is " + (int)pData );
}
}
}

In this program a block is defined with the unsafe modifier, so we can use pointers in that code. The output of this program is the same as the previous one.

Now let's change the program a little bit to get a value from the pointer.

Program 4

using System;

class MyClass {
public static void Main() {
unsafe {
int iData = 10;
int* pData = &iData;
Console.WriteLine("Data is " + iData);
Console.WriteLine("Data is " + pData->ToString() );
Console.WriteLine("Address is " + (int)pData );
}
}
}
Program 5
using System;

class MyClass {
public static void Main() {
testFun();
}

public static unsafe void testFun() {
int iData = 10;
int* pData = &iData;
Console.WriteLine("Data is " + iData);
Console.WriteLine("Address is " + (int)pData );
}
}

In this program a function with the unsafe modifier is called from a normal (safe) function. This shows that safe code can call unsafe functions; both are still managed code. The output of the program is the same as the previous program.

Now change the program a little bit and make an unsafe function in another class.

Program 6


using System;

class MyClass {
public static void Main() {
TestClass Obj = new TestClass();
Obj.testFun();
}
}

class TestClass {
public unsafe void testFun() {
int iData = 10;
int* pData = &iData;
Console.WriteLine("Data is " + iData);
Console.WriteLine("Address is " + (int)pData );
}
}

The output of the program is the same as the previous one.

Now try to pass a pointer as a parameter. Let's see this program.

Program 7


using System;

class MyClass {
public static void Main() {
TestClass Obj = new TestClass();
Obj.testFun();
}
}

class TestClass {
public unsafe void testFun() {
int x = 10;
int y = 20;
Console.WriteLine("Before swap x = " + x + " y= " + y);
swap(&x, &y);
Console.WriteLine("After swap x = " + x + " y= " + y);
}

public unsafe void swap(int* p_x, int *p_y) {
int temp = *p_x;
*p_x = *p_y;
*p_y = temp;
}
}

In this program the unsafe function testFun() calls the classic swap() function to interchange the values of two variables by passing their addresses. Now change the program a little bit.

Program 8


using System;

class MyClass {
public static void Main() {
TestClass Obj = new TestClass();
unsafe {
int x = 10;
int y = 20;
Console.WriteLine("Before swap x = " + x + " y= " + y);
Obj.swap(&x, &y);
Console.WriteLine("After swap x = " + x + " y= " + y);
}
}
}

class TestClass {
public unsafe void swap(int* p_x, int* p_y) {
int temp = *p_x;
*p_x = *p_y;
*p_y = temp;
}
}

This program does the same job as the previous one, but here we write only one unsafe function and call it from the unsafe block in Main.

Now let's see another program, which shows the usage of arrays in C#.

Program 9


using System;

class MyClass {
public static void Main() {
TestClass Obj = new TestClass();
Obj.fun();
}
}

class TestClass {
public unsafe void fun() {
int [] iArray = new int[10];

// store value in array
for (int iIndex = 0; iIndex < 10; iIndex++) {
iArray[iIndex] = iIndex * iIndex;
}

// get value from array
for (int iIndex = 0; iIndex < 10; iIndex++) {
Console.WriteLine(iArray[iIndex]);
}
}
}

This program displays the squares of the numbers from zero to 9.

Let's change the program a little bit and pass the array as a parameter to a function.

Program 10


using System;

class MyClass {
public static void Main() {
TestClass Obj = new TestClass();
Obj.fun();
}
}

class TestClass {
public unsafe void fun() {
int [] iArray = new int[10];

// store value in array
for (int iIndex = 0; iIndex < 10; iIndex++) {
iArray[iIndex] = iIndex * iIndex;
}

testFun(iArray);
}

public unsafe void testFun(int [] p_iArray) {

// get value from array
for (int iIndex = 0; iIndex < 10; iIndex++) {
Console.WriteLine(p_iArray[iIndex]);
}
}
}

The output of the program is the same as the previous one.

Now let's change the program a little bit and try to get the value of the array from a pointer rather than an index.

Program 11


using System;

class MyClass {
public static void Main() {
TestClass Obj = new TestClass();
Obj.fun();
}
}

class TestClass {
public unsafe void fun() {
int [] iArray = new int[10];

// store value in array
for (int iIndex = 0; iIndex < 10; iIndex++) {
iArray[iIndex] = iIndex * iIndex;
}

// get value from array
for (int iIndex = 0; iIndex < 10; iIndex++) {
Console.WriteLine(*(iArray + iIndex) );
}
}
}

Here in this program we try to access the values of the array with *(iArray + iIndex) rather than iArray[iIndex], but the program gives the following error.

Microsoft (R) Visual C# Compiler Version 7.00.9030 [CLR version 1.00.2204.21]
Copyright (C) Microsoft Corp 2000. All rights reserved.

um11.cs(21,24): error CS0019: Operator '+' cannot be applied to operands of type 'int[]' and 'int'

In C#, int* and int[] are not treated the same. To understand this better, let's see one more program.

Program 12

using System;

class MyClass {
public static void Main() {
TestClass Obj = new TestClass();
Obj.fun();
}
}

class TestClass {
public unsafe void fun() {
int [] iArray = new int[10];
iArray++;

int* iPointer = (int*)0;
iPointer++;

}
}

There are two different types of variable in this program: iArray is declared as an array, and iPointer is a pointer variable. Now I try to increment both. The pointer variable can be incremented, but iArray cannot: iArray holds the starting address of the array, and if we were allowed to increment it we would lose the starting address of the array.

The output of the program is an error.

Microsoft (R) Visual C# Compiler Version 7.00.9030 [CLR version 1.00.2204.21]
Copyright (C) Microsoft Corp 2000. All rights reserved.

um12.cs(13,3): error CS0187: No such operator '++' defined for type 'int[]'

To access the elements of the array via a pointer, we have to pin the array in memory so the garbage collector cannot move it while we use the pointer. C# uses the fixed reserved word to do this; the pointer declared in a fixed statement is itself read-only and cannot be incremented.

Program 13


using System;

class MyClass {
public static void Main() {
TestClass Obj = new TestClass();
Obj.fun();
}
}

class TestClass {
public unsafe void fun() {
int [] iArray = new int[10];

// store value in array
for (int iIndex = 0; iIndex < 10; iIndex++) {
iArray[iIndex] = iIndex * iIndex;
}

// get value from array
fixed(int* pInt = iArray)
for (int iIndex = 0; iIndex < 10; iIndex++) {
Console.WriteLine(*(pInt + iIndex) );
}
}
}

We can use the same technique to pass the array to a function which receives the pointer as a parameter.

Program 14


using System;

class MyClass {
public static void Main() {
TestClass Obj = new TestClass();
Obj.fun();
}
}

class TestClass {
public unsafe void fun() {
int [] iArray = new int[10];

// store value in array
for (int iIndex = 0; iIndex < 10; iIndex++) {
iArray[iIndex] = iIndex * iIndex;
}

// get value from array
fixed(int* pInt = iArray)
testFun(pInt);
}

public unsafe void testFun(int* p_pInt) {

for (int iIndex = 0; iIndex < 10; iIndex++) {
Console.WriteLine(*(p_pInt + iIndex) );
}
}
}

The output of the program is the same as the previous one. If we try to access beyond the array limit, it will print garbage.

Program 15
using System;

class MyClass {
public static void Main() {
TestClass Obj = new TestClass();
Obj.fun();
}
}

class TestClass {
public unsafe void fun() {
int [] iArray = new int[10];

// store value in array
for (int iIndex = 0; iIndex < 10; iIndex++) {
iArray[iIndex] = iIndex * iIndex;
}

// get value from array
fixed(int* pInt = iArray)
testFun(pInt);
}

public unsafe void testFun(int* p_pInt) {

for (int iIndex = 0; iIndex < 20; iIndex++) {
Console.WriteLine(*(p_pInt + iIndex) );
}
}
}

Here we try to read 20 elements from the array, but there are only 10 elements, so it prints garbage after printing the real elements of the array.

Program 16


using System;

struct Point {
public int iX;
public int iY;
}

class MyClass {
public unsafe static void Main() {

// reference of point
Point refPoint = new Point();
refPoint.iX = 10;
refPoint.iY = 20;

// Pointer of point
Point* pPoint = &refPoint;

Console.WriteLine("X = " + pPoint->iX);
Console.WriteLine("Y = " + pPoint->iY);

Console.WriteLine("X = " + (*pPoint).iX);
Console.WriteLine("Y = " + (*pPoint).iY);

}
}

Here pPoint is a pointer to the Point struct instance. We can access its members by using the -> operator.

Change in Beta 2

When you compile a program using the command line, you type the program name after the compiler name; for example, if your program name is prog1.cs then you compile it like this:

csc prog1.cs

This works fine for unsafe code while you are programming in Beta 1. In Beta 2, Microsoft added one more switch to the C# command-line compiler for writing unsafe code: if you want to write unsafe code, you now have to specify the /unsafe command-line switch, otherwise the compiler gives an error. In Beta 2 you compile your program as follows:

csc /unsafe prog1.cs


Cryptography in .NET

This article gives a brief overview of Cryptography and the Cryptography support provided by the .NET Framework. I begin by introducing Cryptography and then proceed to examine the various types of it. In particular, I review and analyze the various cryptography algorithms and objects supported by .NET. I conclude after proposing and briefly discussing the algorithmic technique that would work best for you.
Cryptography
I remember as kids, we would often play a game called 'jumble the word', wherein we would replace a letter of a word with another. This way, A would be replaced with C, B with D, and so on. Only someone who understood this algorithm (in this case, shift by 2) could decipher these messages and tell the word. Well, in fact, this is 'cryptography'. Surprisingly, we often use cryptography without consciously knowing it. For example, you may have tried to pass on a secret message to your friend using signals that only the two of you understood, or scribbled some text whose meaning was known only to you. We have all done that. Well... so we begin.
Cryptography is the science of scrambling meaningful characters into non-meaningful characters so that people who do not have access to the data cannot read it. The science of cryptography has been around for many years, even long before computers. Cryptography, over the ages, has been an art practiced by many who have devised different techniques to meet some of the information security requirements. The last twenty years have been a period of transition as the discipline moved from an art to a science. With the advent of computers, however, the science was able to produce almost unbreakable codes.
Cryptography has been considered one of the more complex areas a developer can work in. Using cryptographic algorithms and techniques is not child's play, as it requires a high level of mathematical knowledge. Fortunately, with Microsoft .NET, newly created classes wrap up these sophisticated algorithms into fairly easy-to-use properties and methods. This article gives you an overview of the cryptography support that is provided by the .NET Framework.
First, however, let's look at a few terms to make you familiar with cryptography:
1. Data that can be read and understood without any special measures is called 'plaintext' or 'cleartext'.
2. The method of disguising plaintext in such a way as to hide its meaning is called 'encryption'.
3. Encrypting plaintext results in unreadable chunks of data called 'ciphertext'. You use encryption to make sure that information is hidden from anyone for whom it is not intended, even those who can see the encrypted data.
4. The process of reverting ciphertext to its original plaintext is called 'decryption'.
5. Finally, a 'key' is a string of bits used for encrypting and decrypting information to be transmitted. It is a randomly generated set of numbers/characters that is used to encrypt/decrypt information.
Types of Cryptography
After getting familiar with the terminology, let's delve into the types of cryptography. There are two types: private-key encryption and public-key encryption.
Private key Encryption
Private-key encryption, also referred to as conventional, symmetric, or single-key encryption, was the only available option prior to the advent of public-key encryption in 1976. This form of encryption was used by emperors like Julius Caesar and by military organizations to convey secret messages. It requires all communicating parties to share a common key. With private-key encryption, you encrypt a secret message using a key that only you know. To decrypt the message, you need to use the same key. Private-key cryptography is effective only if the key can be kept secret. Despite this potential weakness, private-key encryption is very easy to implement and computationally doesn't consume excessive resources.
Let's see an example. Imagine Julius is trying to send a secret message to his army chief, using a private key. In order for the chief to decrypt the secret message, he must know the private key, so Julius needs to send the key to him. But if his opponents somehow learn the key, the message is no longer secure. Moreover, if the chief tells a subordinate the private key, that subordinate can also decrypt the message.
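The 'shift by 2' game from the introduction is exactly this kind of toy private-key scheme: the shift amount is the shared secret. A minimal sketch (the class and method names are mine, and this is for illustration only, not real security):

```csharp
using System;

class ShiftCipher
{
    // Toy private-key cipher: the shift amount is the shared secret key.
    public static string Encrypt(string plaintext, int key)
    {
        char[] output = plaintext.ToCharArray();
        for (int i = 0; i < output.Length; i++)
            if (output[i] >= 'A' && output[i] <= 'Z')
                output[i] = (char)('A' + (output[i] - 'A' + key) % 26);
        return new string(output);
    }

    public static string Decrypt(string ciphertext, int key)
    {
        return Encrypt(ciphertext, 26 - key);  // shifting back around the alphabet
    }

    static void Main()
    {
        string secret = Encrypt("ATTACK AT DAWN", 2);
        Console.WriteLine(secret);              // CVVCEM CV FCYP
        Console.WriteLine(Decrypt(secret, 2));  // ATTACK AT DAWN
    }
}
```

Anyone who learns the key (here, 2) can read every message, which is precisely the key-distribution problem described above.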
Public-key encryption
Public key encryption algorithms are based on the premise that each sender and recipient has a private key, known only to him/her and a public key, which can be known by anyone. Each encryption/decryption process requires at least one public key and one private key. Each is related to the other mathematically, such that messages encrypted with the public key can only be decrypted with the corresponding private key.
Let's see an example. Before Julius sends a message to his chief, the chief needs to generate the key pair containing the private key and the public key. The chief then freely distributes the public key to his subordinates but keeps the private key to himself. When Julius wants to send a message to his chief, he uses the chief's public key to encrypt the message and then sends it to him. Upon receiving the encrypted message, the chief decrypts it with his private key. He's the only one who can decrypt the message, since the key pair works in such a way that only messages encrypted with the public key can be decrypted with the private key. Also, there's no need to exchange secret keys, thus eliminating the risk of compromising the secrecy of the key.

The reverse can happen as well. Suppose the chief sends a message encrypted with his private key to Julius. To decrypt the message, Julius needs the chief's public key. But what's the point of that? The public key isn't a secret; everyone knows it. However, using this method guarantees that the message hasn't been tampered with and is indeed from the chief and not his opponents. If the message had been modified, Julius wouldn't be able to decrypt it.
I wish he was here to read all this!!!
.NET and Cryptography
.NET provides a set of cryptographic objects, supporting well-known algorithms and common uses including hashing, encryption, and generating digital signatures. These objects are designed in a manner that facilitates the incorporation of these basic capabilities into more complex operations, such as signing and encrypting a document. Cryptographic objects are used by .NET to support internal services, but are also available to developers who need cryptographic support. The .NET Framework provides implementations of many such standard cryptographic algorithms and objects. Similar to the ready availability of simple authentication features within the .NET Framework, cryptographic primitives are also easily accessible to developers via stream-based managed code libraries for encryption, digital signatures, hashing, and random number generation. The System.Security.Cryptography namespace in the .NET Framework provides these cryptographic services. The Algorithm support includes:
RSA and DSA public key (asymmetric) encryption - Asymmetric algorithms operate on fixed-size buffers. They use a public-key algorithm for encryption/decryption. An example of an asymmetric algorithm is RSA, named after its three inventors: Rivest, Shamir, and Adleman. It is a popular public-key algorithm - the de facto standard - for digital signatures and can be used for encryption as well. DSA_CSP is an implementation of the Digital Signature Algorithm (DSA), a public-key algorithm that can be used to create and verify a digital signature.
DES, TripleDES, and RC2 private key (symmetric) encryption - Symmetric algorithms are used to modify variable-length buffers and perform one operation for periodical data input. They use a single secret key to encrypt and decrypt data. The Data Encryption Standard (DES) is a world-wide standard for data encryption, published in the early 1970s, and is the most popular encryption algorithm. It is implemented by the DES_CSP class, which represents a stream where you pour in data that is encrypted/decrypted using a single key. The Triple DES encryption algorithm operates on a block of data three times, applying the DES cipher each time. RC2 stands for Rivest Cipher or "Ron's Code", after the name of its inventor. RC2 is a symmetric encryption algorithm that works with a variable key size. It is a block cipher, like many other .NET cryptography algorithms, operating on groups of bits, in contrast to stream cipher algorithms.
MD5 and SHA1 hashing - MD5 (Message Digest 5) is a one-way hash algorithm. Given variable-length data as input, it always produces a 128-bit hash value. The Secure Hash Algorithm (SHA) is also a one-way hash algorithm; it produces a 160-bit hash value, which is longer than the MD5-produced hash value.
(You must have observed the word CSP. Well CSP is a Cryptographic Service Provider. It is the entity that performs the cryptographic computations. The CSP classes are derived from the corresponding base classes and implement a solution for a specific algorithm. For example, the DESCryptoServiceProvider class is derived from the DES class and implements the digital encryption standard. You can use the provided classes or implement your own solution. )

Here is a general guideline to help you decide when to use which method
Symmetric, or secret key, algorithms are extremely fast and are well suited for encrypting large streams of data. The same key both encrypts and decrypts the data. While these algorithms are fairly secure, they do have the potential to be broken given enough time, since someone could search every possible key value. Because each of these algorithms uses a fixed key length, it is feasible that a computer program could try every possible combination of keys and eventually stumble onto the right one. A common use of these types of algorithms is for storing and retrieving connection strings to databases.
Asymmetric, or public key, algorithms are not as fast as symmetric ones, but are much harder to break. These algorithms rely on two keys: one private and one public. The public key is used to encrypt a message; the private key is the only one that can decrypt it. The public and private keys are mathematically linked, and thus both are needed for this cryptographic exchange to occur successfully. Asymmetric algorithms are not well suited to large amounts of data due to performance. One common use of asymmetric algorithms is to encrypt and transfer to another party a symmetric key and initialization vector; the symmetric algorithm is then used for all messages being sent back and forth.
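A sketch of an asymmetric round trip using the RSACryptoServiceProvider class from System.Security.Cryptography (the RsaDemo class and its method are mine; in a real exchange only the public half of the key would leave the provider):

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

class RsaDemo
{
    public static string RoundTrip(string message)
    {
        // The provider generates a fresh key pair; Encrypt uses the public
        // half and Decrypt uses the private half of that same pair.
        using (RSACryptoServiceProvider rsa = new RSACryptoServiceProvider())
        {
            byte[] cipher = rsa.Encrypt(Encoding.UTF8.GetBytes(message), false);
            byte[] plain = rsa.Decrypt(cipher, false);
            return Encoding.UTF8.GetString(plain);
        }
    }

    static void Main()
    {
        Console.WriteLine(RoundTrip("To the chief: advance at dawn"));
    }
}
```

Note that RSA can only encrypt a short buffer at a time, which is why, as described above, it is usually used to transport a symmetric key rather than the data itself.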
Hash values are used when you do not wish to ever recover the original value and you especially wish for no one else to discover the original value as well. Hashes will take any arbitrary string length and hash it to a fixed set of bytes. This operation is one-way, and thus is typically used for small amounts of data, like a password. If a user inputs a user password into a secure entry screen, the program can hash this value and store the hashed value into a database. Even if the database were compromised, no one would be able to read the password since it was hashed. When the user then logs into the system to gain entry, the password typed in is hashed using the same algorithm, and if the two hashed values match, then the system knows the input value was the same as the saved value from before.
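The password scenario above can be sketched with the SHA1 class mentioned earlier (the PasswordStore class is mine; a real system would also salt the password and use a deliberately slow hash):

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

class PasswordStore
{
    // Hash the password; only the hash is ever stored, never the password.
    public static string HashPassword(string password)
    {
        using (SHA1 sha = SHA1.Create())
        {
            byte[] digest = sha.ComputeHash(Encoding.UTF8.GetBytes(password));
            return BitConverter.ToString(digest);
        }
    }

    // At login, hash the attempt and compare it against the stored hash.
    public static bool Verify(string attempt, string storedHash)
    {
        return HashPassword(attempt) == storedHash;
    }

    static void Main()
    {
        string stored = HashPassword("s3cret");
        Console.WriteLine(Verify("s3cret", stored));  // True
        Console.WriteLine(Verify("guess", stored));   // False
    }
}
```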
Everyone Loves an Example
Everyone needs and loves a good example. Having read about the various algorithms available, let's see an example of encrypting and decrypting files using the System.Security.Cryptography namespace. I have used the RijndaelManaged encryption class, which provides the managed implementation of the Rijndael algorithm. This class cannot be inherited. The Rijndael class is the base class from which all implementations of the Rijndael symmetric encryption algorithm must inherit.
The hierarchy is as follows:
System.Object
System.Security.Cryptography.SymmetricAlgorithm
System.Security.Cryptography.Rijndael
System.Security.Cryptography.RijndaelManaged
// Encrypting and decrypting files using the RijndaelManaged encryption method.
using System;
using System.IO;
using System.Security.Cryptography;
class CryptoEx
{
public static void Main(string[] args)
{
if (args.Length != 1)
{
Console.WriteLine("FileName Not Entered. Specify a filename to encrypt.");
return;
}
string file = args[0];
string tempfile = Path.GetTempFileName();
// Open the input file for reading and the temp file for writing.
FileStream fsIn = File.Open(file, FileMode.Open, FileAccess.Read);
FileStream fsOut = File.Open(tempfile, FileMode.Open, FileAccess.Write);
// Create an instance of the algorithm and call CreateEncryptor,
// which creates a symmetric encryptor object.
SymmetricAlgorithm symm = new RijndaelManaged();
ICryptoTransform transform = symm.CreateEncryptor();
CryptoStream cstream = new CryptoStream(fsOut, transform, CryptoStreamMode.Write);
BinaryReader br = new BinaryReader(fsIn);
cstream.Write(br.ReadBytes((int)fsIn.Length), 0, (int)fsIn.Length);
cstream.FlushFinalBlock();
cstream.Close();
fsIn.Close();
fsOut.Close();
Console.WriteLine("Created Encrypted File {0}", tempfile);
// Reopen the encrypted file and decrypt it with the same key and IV.
fsIn = File.Open(tempfile, FileMode.Open, FileAccess.Read);
transform = symm.CreateDecryptor();
cstream = new CryptoStream(fsIn, transform, CryptoStreamMode.Read);
StreamReader sr = new StreamReader(cstream);
Console.WriteLine("Decrypted the File: " + sr.ReadToEnd());
sr.Close();
fsIn.Close();
}
}
Summary:
We saw that the .NET Framework supports encryption by means of cryptographic streaming objects based on the primitives. It also supports digital signatures, message authentication codes (MACs)/keyed hashes, pseudo-random number generators (PRNGs), and authentication mechanisms. New or pre-standard primitives such as SHA-256 and XMLDSIG are already supported. The ready availability of such libraries will hopefully drive more widespread reliance on cryptography to fortify the security of everyday applications. Based on our own experience, we can confidently state that well-implemented cryptography dramatically increases the security of many aspects of a given application.

The HTML <link> tag

You must be aware of HTML tags; several tags define the elements and linked resources of a web page. But have you ever used the HTML link tag? It may sound like a trivial question, but there is a reason for asking it: the HTML link tag is very useful.

It has three main attributes:

1. rel
2. type
3. href

The rel attribute defines the relationship between the page and the linked web resource, which could be a stylesheet, bookmark, alternate URL, and so on.


The type attribute defines the MIME (Multipurpose Internet Mail Extensions) type of the web resource.


The href attribute defines the URL of the web resource to be linked with the page.



Syntax:

<link rel="Relationship" type="MIME Type" href="URL" />

Here is the list of possible values of the rel attribute:

* alternate - an alternate version of the document (print page, translation, mirror)
* stylesheet - an external style sheet for the document
* shortcut icon - the favicon of the document
* start - the first document in a selection
* next - the next document in the current selection
* prev - the previous document in the current selection
* contents - a table of contents for the document
* index - an index for the document
* glossary - a glossary (explanation) of words used in the document
* copyright - a document containing copyright information
* chapter - a chapter of a selection of documents
* section - a section of a selection of documents
* subsection - a subsection of a selection of documents
* appendix - an appendix of a selection of documents
* help - a help document
* bookmark - a related document

Note: Most browsers do not use this attribute in any way. However, search engines and some browsers may use this attribute to get more information about the link.

Linking a stylesheet to a page is the most common use of the HTML link tag. Here I'm going to show you some more interesting uses of this tag.

Example 1:
This example will show a small icon in the browser's address bar before the URL

<link rel="shortcut icon" type="image/x-icon" href="favicon.ico" />

Example 2:
This example will show an RSS link in the browser's address bar after the URL

<link rel="alternate" type="application/rss+xml" title="RSS" href="http://www.mysite.com/rss.xml">


Example 3:
This example will show company's copyright information

<link rel="copyright" href="/copyright.shtml" title="Copyright">


Example 4:
This will link Author information with the page

<link rel="author" href="/about/index.shtml" title="About">


Example 5:
We can use link tags for many purposes at once:

<HEAD>
<TITLE>Document Tags</TITLE>
<LINK REL="HOME" TITLE="Home Page" HREF="http://www.mydomain.com">
<LINK REL="PREVIOUS" TITLE="URLs" HREF="../urls/">
<LINK REL="NEXT" TITLE="Lines and Paragraphs" HREF="../linepar/">
<LINK REV="MADE" TITLE="Ravindra Patel" HREF="mailto:ravipatel.write@gmail.com">
<LINK REL="COPYRIGHT" TITLE="copyright info" HREF="copyright.html">
<LINK REL="STYLESHEET" TITLE="style sheet" HREF="stdstyles.css">
</HEAD>


There is much more to the link tag. Post your comments if you want to know more about HTML tags.

ASP.NET From PHP Perspective

You can also follow the URL:
http://in.youtube.com/watch?v=aEIwEGyK_Ns

Using LINQ

http://in.youtube.com/watch?v=B0gD0NqbGHk

An Introduction to Microsoft Silverlight

You can also follow the URLs
http://in.youtube.com/watch?v=Carpbkwxem0
http://in.youtube.com/watch?v=m47xvHbdzLY&feature=related
http://in.youtube.com/watch?v=dUWIOBMNhbI&feature=related

Basics of jQuery

You can also use the following link for the video
http://in.youtube.com/watch?v=CNw2wVDWYaU&feature=related

ASP.NET 2.0 Membership: LoginStatus, LoginName

How to: Create an ASP.NET Login Page

You can create a login page by using the ASP.NET Login control. The control consists of text boxes, a check box, and a button control. The text boxes collect username and password information from the user, the check box is used to remember the user's credentials, and the button is used to submit the information the user provides. The Login control simplifies creating login pages for forms authentication: it takes a user name and password and uses ASP.NET membership and forms authentication to verify the user's credentials and create an authentication ticket. If a user selects the "Remember me next time" check box, the control creates a persistent cookie that remembers the user's credentials.


The login control functions as expected when you create an ASP.NET Web site with membership, configure membership, and create a membership user.

To create a login page

  1. Create an ASP.NET Web application that uses ASP.NET membership.

  2. Create an ASP.NET Web page named Login.aspx in your application. (By default, ASP.NET forms authentication is configured to use a page named Login.aspx. You can change the default login page name in the Web.config file for your application using the LoginUrl property.)
  3. Add a Login control to the page.
  4. Set the control's DestinationPageUrl property to the name of the page that the user will be redirected to after logging in. For example, you can set DestinationPageUrl="~/Membership/MembersHome.aspx", a members-only page. If you do not specify a value for the DestinationPageUrl property, the user will be redirected to the page originally requested after successfully logging in.
  5. The following example shows the markup for a Login control:
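The markup itself is missing from this copy; a minimal sketch, assuming the MembersHome.aspx destination used in step 4 (the control ID and title text are illustrative), might be:

```aspx
<asp:Login ID="Login1" runat="server"
    DestinationPageUrl="~/Membership/MembersHome.aspx"
    TitleText="Please log in" />
```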

About IIS (Internet Information Services)

Overview
Over the years, it has become simple for advanced Web users to create their own Web site. Once an end-user has registered their domain name, there are various types of hosters to choose from. Web hosters are available throughout the world to meet the demands of end customers. But regardless of which hoster is chosen, the scenario of signing up and creating a web site is similar in all cases.
Initially, a new user account needs to be established for the user. Once the account has been set up, the end user decides what features and options the site should incorporate: for instance, how much disk space, FTP capability, creation of virtual directories, whether or not databases are needed, etc. Hosters build control panels or dashboard applications that allow end users to create and manage these features.
There are a number of ways that these capabilities can be implemented into the control panel. In this section, we look at implementing the provisioning aspect of these features through managed code. An outline of the features is as follows:
Provision new user account
Create content storage
Create Application Pool
Create Web site
Create binding
Create root application
Create Virtual directory
Create FTP site
Provision New User Account
A user account is needed for the site owner who manages and maintains the site. The account can either be an Active Directory Account, or a local user account. To simplify the scenario, we only illustrate creating a local user account.
The following code fragments demonstrate the creation of a local account.
The System.DirectoryServices namespace must be specified.

public static bool CreateLocalUserAccount(string userName, string password)
{
try
{
if (string.IsNullOrEmpty(userName))
throw new ArgumentNullException("userName", "Invalid User Name.");
if (string.IsNullOrEmpty(password))
throw new ArgumentNullException("password", "Invalid Password.");
DirectoryEntry directoryEntry = new DirectoryEntry("WinNT://" + Environment.MachineName + ",computer");
bool userFound = false;
try
{
if (directoryEntry.Children.Find(userName, "user") != null)
userFound = true;
}
catch
{
userFound = false;
}
if (!userFound)
{
DirectoryEntry newUser = directoryEntry.Children.Add(userName, "user");
newUser.Invoke("SetPassword", new object[] { password });
newUser.Invoke("Put", new object[] { "Description", "Application Pool User Account" });
newUser.CommitChanges();
newUser.Close();
}
}
catch (Exception ex)
{
throw new Exception(ex.Message, ex);
}
return true;
}
Create Content Storage
The content storage requires specific permissions configured for users to manage their own contents. The following code fragment demonstrates how to set directory permissions using managed code in C#:

// Note: the opening line of this fragment was missing; the signature below is
// reconstructed from the parameters used in the body.
public static bool SetDirectoryAccess(string directoryPath, string userAccount, FileSystemRights rights,
InheritanceFlags inheritanceFlags, PropagationFlags propagationFlags, AccessControlType controlType)
{
try
{
// Create a new DirectoryInfo object.
DirectoryInfo dInfo = new DirectoryInfo(directoryPath);
// Get a DirectorySecurity object that represents the
// current security settings.
DirectorySecurity dSecurity = dInfo.GetAccessControl();
// Add the FileSystemAccessRule to the security settings.
dSecurity.AddAccessRule(new FileSystemAccessRule(userAccount, rights, inheritanceFlags, propagationFlags, controlType));
// Set the new access settings.
dInfo.SetAccessControl(dSecurity);
}
catch (Exception ex)
{
throw new Exception(ex.Message, ex);
}
return true;
}
Furthermore, if disk quotas are restricted, the following code fragment demonstrates how to set a disk quota using managed code. To use disk quota management, you must add a reference to the Windows Disk Quota management component, which resides at windows\system32\dskquota.dll.

public static bool AddUserDiskQuota(string userName, double quota, double quotaThreshold, string diskVolume)
{
try
{
DiskQuotaControlClass diskQuotaCtrl = GetDiskQuotaControl(diskVolume);
diskQuotaCtrl.UserNameResolution = UserNameResolutionConstants.dqResolveNone;
DIDiskQuotaUser diskUser = diskQuotaCtrl.AddUser(userName);
diskUser.QuotaLimit = quota;
diskUser.QuotaThreshold = quotaThreshold;
}
catch (Exception ex)
{
throw new Exception(ex.Message, ex);
}
return true;
}

Create Application Pool
An application pool defines the settings for a worker process that hosts one or more IIS 7.0 applications, carrying out their request processing. The application pool is a unit of process isolation, since all request processing for an application runs within its application pool's worker processes.
It is also a unit of isolation from a security perspective, since the application pool can run with a different identity, and ACL all needed resources exclusively for itself to prevent applications in other application pools from being able to access its resources. The following code fragment demonstrates the creation of an application pool, setting the identity, and setting properties.

public static bool CreateApplicationPool(string applicationPoolName, ProcessModelIdentityType identityType, string applicationPoolIdentity, string password, string managedRuntimeVersion, bool autoStart, bool enable32BitAppOnWin64,ManagedPipelineMode managedPipelineMode, long queueLength, TimeSpan idleTimeout, long periodicRestartPrivateMemory, TimeSpan periodicRestartTime)
{
try
{
if (identityType == ProcessModelIdentityType.SpecificUser)
{
if (string.IsNullOrEmpty(applicationPoolName))
throw new ArgumentNullException("applicationPoolName", "CreateApplicationPool: applicationPoolName is null or empty.");
if (string.IsNullOrEmpty(applicationPoolIdentity))
throw new ArgumentNullException("applicationPoolIdentity", "CreateApplicationPool: applicationPoolIdentity is null or empty.");
if (string.IsNullOrEmpty(password))
throw new ArgumentNullException("password", "CreateApplicationPool: password is null or empty.");
}
using (ServerManager mgr = new ServerManager())
{
ApplicationPool newAppPool = mgr.ApplicationPools.Add(applicationPoolName);
if (identityType == ProcessModelIdentityType.SpecificUser)
{
newAppPool.ProcessModel.IdentityType = ProcessModelIdentityType.SpecificUser;
newAppPool.ProcessModel.UserName = applicationPoolIdentity;
newAppPool.ProcessModel.Password = password;
}
else
{
newAppPool.ProcessModel.IdentityType = identityType;
}
if (!string.IsNullOrEmpty(managedRuntimeVersion))
newAppPool.ManagedRuntimeVersion = managedRuntimeVersion;
newAppPool.AutoStart = autoStart;
newAppPool.Enable32BitAppOnWin64 = enable32BitAppOnWin64;
newAppPool.ManagedPipelineMode = managedPipelineMode;
if (queueLength > 0)
newAppPool.QueueLength = queueLength;
if (idleTimeout != TimeSpan.MinValue)
newAppPool.ProcessModel.IdleTimeout = idleTimeout;
if (periodicRestartPrivateMemory > 0)
newAppPool.Recycling.PeriodicRestart.PrivateMemory = periodicRestartPrivateMemory;
if (periodicRestartTime != TimeSpan.MinValue)
newAppPool.Recycling.PeriodicRestart.Time = periodicRestartTime;
mgr.CommitChanges();
}
}
catch (Exception ex)
{
throw new Exception(ex.Message, ex);
}
return true;
}

Using LINQ to Create a Pager for SQL Data in C#

Using LINQ to SQL, we can use built-in methods to page database data much more easily than with SQL alone. LINQ to SQL makes it extremely easy to create pages from our data source using just two methods: Skip and Take. Skip allows us to skip a certain number of records, and Take allows us to select a certain number of records.
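The two methods compose naturally. As a quick illustration on an in-memory sequence (the sample values and page size here are made up for the demo):

```csharp
using System;
using System.Linq;

class PagingDemo
{
    static void Main()
    {
        int[] ids = { 1, 2, 3, 4, 5, 6, 7 };
        int pageSize = 3;
        // Second page: skip one full page, then take a page's worth of records.
        var page2 = ids.Skip(1 * pageSize).Take(pageSize);
        foreach (int id in page2)
            Console.Write(id + " "); // prints the second page of ids
    }
}
```

Against a LINQ to SQL data source, the same Skip/Take calls are translated into paging SQL on the server rather than filtering in memory.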
In this tutorial, we will create a SQL database and add a LINQ to SQL class that Visual Studio creates to represent the database. We will then extend the class to support paging of the data, using the methods mentioned above. Let's start by creating our database. In this example, we will use one table named tblEmployees with three columns: id, name, and position. Once the database is set up, we will add some sample data; we will need at least 5 records to make use of the paging feature.
Once we have our database set up and have added data to it, we need to create a representation of it using a LINQ to SQL class. Right-click your project in the Solution Explorer, and go to Add ASP.NET Folder > App_Code. Now right-click the App_Code folder and choose Add New Item > LINQ to SQL Classes. This will bring up the Object Relational Designer. All we need to do here is drag the tables we will be working with into the Designer from the Server Explorer, and then save. This will allow Visual Studio to create a representation of our database. For this example, we will name it Employees.dbml.
Now we will create an extension of this class by again right-clicking the App_Code folder and choosing Add New Item > Class. We will also name this Employees and change public class to public partial class. We may need to add extra assembly references; we will be using System.Collections.Generic, System.Data.Linq, and System.Linq in particular. We are going to extend this class by providing methods to select the data in pages. Our first method will select all the data:
public static IEnumerable Select()
{
EmployeesDataContext db = new EmployeesDataContext();

return db.tblEmployees;
}
Notice that EmployeesDataContext refers to our LINQ to SQL class. Next, we add a method to move between the pages of the data:
public static IEnumerable SelectPage(int startRowIndex, int maximumRows)
{
return Select().Skip(startRowIndex).Take(maximumRows);

}
This method will be called when a new page is requested through a postback; the GridView's paging links will provide the variables required for this method. Finally, we create a method that will get the number of records in the database:
public static int SelectCount()
{
return Select().Count();

}
The entire class extension will look something like this:
using System;
using System.Data;
using System.Configuration;
using System.Linq;
using System.Data.Linq;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.HtmlControls;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Collections.Generic;
/// <summary>
/// Extension class for Employees.dbml
/// </summary>
public partial class Employees{
public static IEnumerable Select()
{
EmployeesDataContext db = new EmployeesDataContext();
return db.tblEmployees;
}
public static IEnumerable SelectPage(int startRowIndex, int maximumRows)
{
return Select().Skip(startRowIndex).Take(maximumRows);
}
public static int SelectCount()
{
return Select().Count();
}
}
Now we are done with the class and can implement the functionality in our ASPX page. To make this work, we will need a GridView control and an ObjectDataSource:

Because we are using Visual Studio 2008, we can simply add AJAX functionality to our web application using a ScriptManager and UpdatePanel. In order to implement paging, we need to set the EnablePaging attributes on both of our controls. We also set the method attributes of the ObjectDataSource to reflect those we created in our partial class; note that the TypeName refers to our class name.
NOTE: We can change the PageSize to set the number of items on each page.
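The control markup did not survive in this copy; a sketch of what it might look like, wiring the ObjectDataSource to the method names from the partial class above (the control IDs and PageSize value are illustrative), would be:

```aspx
<asp:ScriptManager ID="ScriptManager1" runat="server" />
<asp:UpdatePanel ID="UpdatePanel1" runat="server">
  <ContentTemplate>
    <asp:GridView ID="GridView1" runat="server"
        AllowPaging="true" PageSize="5"
        DataSourceID="ObjectDataSource1" />
    <asp:ObjectDataSource ID="ObjectDataSource1" runat="server"
        TypeName="Employees"
        SelectMethod="SelectPage"
        SelectCountMethod="SelectCount"
        EnablePaging="true"
        StartRowIndexParameterName="startRowIndex"
        MaximumRowsParameterName="maximumRows" />
  </ContentTemplate>
</asp:UpdatePanel>
```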

Adding to XML File using LINQ and C#

This tutorial was created with Visual Studio 2008, but it can be replicated in Visual Studio 2005 if you download and install Microsoft's LINQ Community Technology Preview release.
LINQ has the amazing ability to communicate and interact with many different data sources: databases, XML, and various objects. In this tutorial, we will use a Windows Form to look at how we can use LINQ to add data to an XML file for storage. This is a quick and easy alternative to using a database; an XML file is a good fit for smaller amounts of non-sensitive data.

The XML file we will be using in this example is as follows:
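The file contents were lost in this copy; judging from the element names used in the code below, People.xml presumably looks something like this (the sample person is illustrative):

```xml
<?xml version="1.0" encoding="utf-8"?>
<Persons>
  <Person>
    <Name>John</Name>
    <City>London</City>
    <Age>28</Age>
  </Person>
</Persons>
```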



We will create a Windows Form to view the XML file, and also to add new data to it. We will add three textboxes to the form - for name, city and age; two buttons - for adding and viewing XML data; and a Rich Text Box for viewing the XML data. We can also add labels to the text boxes.
Once our form is ready, we can move to the code-behind and add the namespaces we are going to use. The majority of them are added automatically; we may need to add System.Xml.Linq, however. It will look something like this:
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;
using System.Xml.Linq;
Once we have added our references, we will create an on-click handler for the view button. We will add the following code-behind:
private void button1_Click(object sender, EventArgs e){
XDocument xmlDoc = XDocument.Load("People.xml");

var persons = from person in xmlDoc.Descendants("Person")
select new{
Name = person.Element("Name").Value,

City = person.Element("City").Value,
Age = person.Element("Age").Value,
};
txtResults.Text = "";
foreach (var person in persons){
txtResults.Text = txtResults.Text + "Name: " + person.Name + "\n";

txtResults.Text = txtResults.Text + "City: " + person.City + "\n";
txtResults.Text = txtResults.Text + "Age: " + person.Age + "\n\n";
}
if (txtResults.Text == "")
txtResults.Text = "No Results.";

}
This code block utilizes LINQ, first loading the XML file into memory and then selecting all the data from within it. The LINQ query looks similar to a SQL statement. We select all data from the XML file and then loop through it, outputting all results into the text box. Finally, we check whether the XML file was empty.
Next, we will add the code for the button to add new data to the XML file. We create another handler for the other button:
private void button2_Click(object sender, EventArgs e){
XDocument xmlDoc = XDocument.Load("People.xml");

xmlDoc.Element("Persons").Add(new XElement("Person", new XElement("Name", txtName.Text),
new XElement("City", txtCity.Text), new XElement("Age", txtAge.Text)));

xmlDoc.Save("People.xml");
}
Similar to the other button, this one also opens the XML file to enable us to edit it. We then simply add a new Person element inside the Persons element. Now our Windows Form can both read and write to the XML file. The entire code-behind looks something like this:
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;
using System.Xml.Linq;

namespace LINQtoXML_cs{
public partial class Form1 : Form{
public Form1(){
InitializeComponent();

}
private void button1_Click(object sender, EventArgs e){
XDocument xmlDoc = XDocument.Load("People.xml");

var persons = from person in xmlDoc.Descendants("Person")
select new{
Name = person.Element("Name").Value,

City = person.Element("City").Value,
Age = person.Element("Age").Value,
};
txtResults.Text = "";
foreach (var person in persons){
txtResults.Text = txtResults.Text + "Name: " + person.Name + "\n";

txtResults.Text = txtResults.Text + "City: " + person.City + "\n";
txtResults.Text = txtResults.Text + "Age: " + person.Age + "\n\n";
}
if (txtResults.Text == "")
txtResults.Text = "No Results.";

}
private void button2_Click(object sender, EventArgs e)
{
XDocument xmlDoc = XDocument.Load("People.xml");
xmlDoc.Element("Persons").Add(new XElement("Person", new XElement("Name", txtName.Text),
new XElement("City", txtCity.Text), new XElement("Age", txtAge.Text)));
xmlDoc.Save("People.xml");

}
}
}

Performance Tips and Tricks in .NET Applications


Summary: This article is for developers who want to tweak their applications for optimal performance in the managed world. Sample code, explanations and design guidelines are addressed for Database, Windows Forms and ASP applications, as well as language-specific tips for Microsoft Visual Basic and Managed C++. (25 printed pages)
Contents
Overview
Performance Tips for All Applications
Tips for Database Access
Performance Tips for ASP.NET Applications
Tips for Porting and Developing in Visual Basic
Tips for Porting and Developing in Managed C++
Additional Resources
Appendix: Cost of Virtual Calls and Allocations
Overview
This white paper is designed as a reference for developers writing applications for .NET and looking for various ways to improve performance. If you are a developer who is new to .NET, you should be familiar with both the platform and your language of choice. This paper strictly builds on that knowledge, and assumes that the programmer already knows enough to get the program running. If you are porting an existing application to .NET, it's worth reading this document before you begin the port. Some of the tips here are helpful in the design phase, and provide information you should be aware of before you begin the port.
This paper is divided into segments, with tips organized by project and developer type. The first set of tips is a must-read for writing in any language, and contains advice that will help you with any target language on the Common Language Runtime (CLR). A related section follows with ASP-specific tips. The second set of tips is organized by language, dealing with specific tips about using Managed C++ and Microsoft® Visual Basic®.
Due to schedule limitations, the version 1 (v1) run time had to target the broadest functionality first and deal with special-case optimizations later. This results in a few pigeonhole cases where performance becomes an issue, so this paper covers several tips designed to avoid them. These tips will not be relevant in the next version (vNext), as those cases are systematically identified and optimized. I'll point them out as we go, and it is up to you to decide whether the effort is worth it.
Performance Tips for All Applications
There are a few tips to remember when working on the CLR in any language. These are relevant to everyone, and should be the first line of defense when dealing with performance issues.
Throw Fewer Exceptions
Throwing exceptions can be very expensive, so make sure that you don't throw a lot of them. Use Perfmon to see how many exceptions your application is throwing. It may surprise you to find that certain areas of your application throw more exceptions than you expected. For better granularity, you can also check the exception number programmatically by using Performance Counters.
Finding and designing away exception-heavy code can result in a decent perf win. Bear in mind that this has nothing to do with try/catch blocks: you only incur the cost when the actual exception is thrown. You can use as many try/catch blocks as you want. Using exceptions gratuitously is where you lose performance. For example, you should stay away from things like using exceptions for control flow.
Here's a simple example of how expensive exceptions can be: we'll simply run through a For loop, generating thousands of exceptions and then terminating. Try commenting out the throw statement to see the difference in speed: those exceptions result in tremendous overhead.

public static void Main(string[] args){
int j = 0;
for(int i = 0; i < 10000; i++){
try{
j = i;
throw new System.Exception();
}
catch {}
}
System.Console.Write(j);
return;
}

The gritty details of marshalling can be explored further on the MSDN Library. I recommend reading it carefully if you spend a lot of your time marshalling.
Design with ValueTypes
Use simple structs when you can, and when you don't do a lot of boxing and unboxing. Here's a simple example to demonstrate the speed difference:
using System;
namespace ConsoleApplication{
public struct foo{
public foo(double arg){ this.y = arg; }
public double y;
}
public class bar{
public bar(double arg){ this.y = arg; }
public double y;
}
class Class1{
static void Main(string[] args){
System.Console.WriteLine("starting struct loop...");
for(int i = 0; i < 50000000; i++){
foo test = new foo(3.14);
}
System.Console.WriteLine("struct loop complete. starting object loop...");
for(int i = 0; i < 50000000; i++){
bar test2 = new bar(3.14);
}
System.Console.WriteLine("object loop complete.");
}
}
}

For more information, see Performance Considerations of Run-Time Technologies in the .NET Framework.
Use AddRange to Add Groups
Use AddRange to add a whole collection, rather than adding each item in the collection iteratively. Nearly all Windows controls and collections have both Add and AddRange methods, each optimized for a different purpose. Add is useful for adding a single item, whereas AddRange has some extra overhead but wins out when adding multiple items. Here are just a few of the classes that support Add and AddRange:
StringCollection, TraceCollection, etc.
HttpWebRequest
UserControl
ColumnHeader
Trim Your Working Set
Minimize the number of assemblies you use to keep your working set small. If you load an entire assembly just to use one method, you're paying a tremendous cost for very little benefit. See if you can duplicate that method's functionality using code that you already have loaded.
Keeping track of your working set is difficult, and could probably be the subject of an entire paper. Here are some tips to help you out:
Use vadump.exe to track your working set. This is discussed in another white paper covering various tools for the managed environment.
Look at Perfmon or Performance Counters. They can give you detailed feedback about the number of classes that you load, or the number of methods that get JITed. You can get readouts for how much time you spend in the loader, or what percent of your execution time is spent paging.
Use For Loops for String Iteration—version 1
In C#, the foreach keyword allows you to walk across items in a list, string, etc. and perform operations on each item. This is a very powerful tool, since it acts as a general-purpose enumerator over many types. The tradeoff for this generalization is speed, and if you rely heavily on string iteration you should use a For loop instead. Since strings are simple character arrays, they can be walked using much less overhead than other structures. The JIT is smart enough (in many cases) to optimize away bounds-checking and other things inside a For loop, but is prohibited from doing this on foreach walks. The end result is that in version 1, a For loop on strings is up to five times faster than using foreach. This will change in future versions, but for version 1 this is a definite way to increase performance.
Here's a simple test method to demonstrate the difference in speed. Try running it, then removing the For loop and uncommenting the foreach statement. On my machine, the For loop took about a second, with about 3 seconds for the foreach statement.

public static void Main(string[] args) {
string s = "monkeys!";
int dummy = 0;

System.Text.StringBuilder sb = new System.Text.StringBuilder(s);
for(int i = 0; i < 1000000; i++){
sb.Append(s);
}
s = sb.ToString();
//foreach (char c in s) dummy++;
for(int i = 0; i < s.Length; i++){
dummy++;
}
return;
}

Use StringBuilder for Complex String Manipulation
When a string is modified, the run time will create a new string and return it, leaving the original to be garbage collected. Most of the time this is a fast and simple way to do it, but when a string is being modified repeatedly it begins to be a burden on performance: all of those allocations eventually get expensive. Here's a simple example of a program that appends to a string 50,000 times, followed by one that uses a StringBuilder object to modify the string in place. The StringBuilder code is much faster, and if you run them it becomes immediately obvious.

namespace ConsoleApplication1.Feedback{

using System;

public class Feedback{
public Feedback(){
text = "You have ordered: \n";
}

public string text;

public static int Main(string[] args) {
Feedback test = new Feedback();
String str = test.text;
for(int i=0;i<50000;i++){
str = str + "blue_toothbrush";
}
System.Console.Out.WriteLine(str);
return 0;
}
}
}


namespace ConsoleApplication1.Feedback
{
    using System;

    public class Feedback
    {
        public Feedback()
        {
            text = "You have ordered: \n";
        }

        public string text;

        public static int Main(string[] args)
        {
            Feedback test = new Feedback();
            System.Text.StringBuilder SB =
                new System.Text.StringBuilder(test.text);
            for (int i = 0; i < 50000; i++)
            {
                // Modifies the buffer in place; no intermediate strings.
                SB.Append("item ");   // appended text is illustrative
            }
            Console.Out.WriteLine("done");
            return 0;
        }
    }
}

Tips for Database Access
The philosophy of tuning for database access is to use only the functionality you need, and to design around a 'disconnected' approach: make several short-lived connections in sequence rather than holding a single connection open for a long time. Take this model into account when designing your application.
Microsoft recommends an N-tier strategy for maximum performance, as opposed to a direct client-to-database connection. Consider this as part of your design philosophy, as many of the technologies in place are optimized to take advantage of a multi-tiered scenario.
Use the Optimal Managed Provider
Make the correct choice of managed provider, rather than relying on a generic accessor. There are managed providers written specifically for many different databases, such as SQL Server (System.Data.SqlClient). If you use a more generic interface such as System.Data.Odbc when you could be using a specialized component, you will lose performance dealing with the added level of indirection. Using the optimal provider can also have you speaking a different language: the managed SQL client speaks TDS to a SQL Server database, providing a dramatic improvement over the generic OleDb protocol.
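As a sketch (the server name is hypothetical), the choice comes down to which provider classes you instantiate; the specialized one talks TDS directly:

```csharp
using System.Data.Odbc;        // generic bridge: extra indirection
using System.Data.SqlClient;   // specialized: speaks TDS to SQL Server

public class ProviderChoice {
    public static SqlConnection Preferred() {
        // Prefer the provider written for your database.
        return new SqlConnection("Server=mysrv01;Integrated Security=true");
    }
    public static OdbcConnection Generic() {
        // Works against anything with an ODBC driver, but costs a layer.
        return new OdbcConnection("DSN=mysrv01");
    }
}
```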
Pick Data Reader Over Data Set When You Can
Use a data reader whenever you don't need to keep the data lying around. This allows a fast read of the data, which can be cached if the user desires. A reader is simply a stateless stream that allows you to read data as it arrives, and then drop it without storing it to a dataset for more navigation. The stream approach is faster and has less overhead, since you can start using data immediately. You should evaluate how often you need the same data to decide if the caching for navigation makes sense for you. Here's a small table demonstrating the difference between DataReader and DataSet on both ODBC and SQL providers when pulling data from a server (higher numbers are better):

            ADO     SQL
DataSet     801     2507
DataReader  1083    4585
As you can see, the highest performance is achieved when using the optimal managed provider along with a data reader. When you don't need to cache your data, using a data reader can provide you with an enormous performance boost.
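A minimal sketch of the reader pattern, assuming a hypothetical Orders table with a Name column:

```csharp
using System;
using System.Data.SqlClient;

public class ReaderSketch {
    public static void DumpNames(string connString) {
        SqlConnection conn = new SqlConnection(connString);
        SqlCommand cmd = new SqlCommand("SELECT Name FROM Orders", conn);
        conn.Open();
        SqlDataReader reader = cmd.ExecuteReader();
        while (reader.Read()) {
            // Consume each row as it streams in; nothing is retained.
            Console.WriteLine(reader.GetString(0));
        }
        reader.Close();
        conn.Close();
    }
}
```

If you later decide you do need to navigate the same rows repeatedly, that is the signal to switch to a DataSet.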
Use Mscorsvr.dll for MP Machines
For stand-alone middle-tier and server applications, make sure mscorsvr is being used for multiprocessor machines. Mscorwks is not optimized for scaling or throughput, while the server version has several optimizations that allow it to scale well when more than one processor is available.
Use Stored Procedures Whenever Possible
Stored procedures are highly optimized tools that result in excellent performance when used effectively. Set up stored procedures to handle inserts, updates, and deletes with the data adapter. Stored procedures do not have to be interpreted, compiled or even transmitted from the client, and cut down on both network traffic and server overhead. Be sure to use CommandType.StoredProcedure instead of CommandType.Text.
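A short sketch of invoking a stored procedure through the SQL managed provider (the procedure and parameter names are hypothetical):

```csharp
using System.Data;
using System.Data.SqlClient;

public class StoredProcSketch {
    public static void InsertOrder(SqlConnection conn, int productId) {
        // Name the procedure rather than sending SQL text.
        SqlCommand cmd = new SqlCommand("InsertOrder", conn);
        cmd.CommandType = CommandType.StoredProcedure;  // not CommandType.Text
        cmd.Parameters.Add("@ProductId", SqlDbType.Int).Value = productId;
        cmd.ExecuteNonQuery();
    }
}
```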
Be Careful About Dynamic Connection Strings
Connection pooling is a useful way to reuse connections for multiple requests, rather than paying the overhead of opening and closing a connection for each request. Pooling is done implicitly, but you get one pool per unique connection string. If you're generating connection strings dynamically, make sure the strings are identical each time so pooling occurs. Also be aware that if delegation is occurring, you'll get one pool per user. There are a lot of options that you can set for the connection pool, and you can track the performance of the pool by using Perfmon to keep track of things like response time, transactions/sec, etc.
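Pooling is keyed on the literal connection string, so dynamically built strings must come out character-for-character identical. A small self-contained sketch (server name hypothetical):

```csharp
public class PoolKeys {
    // Two connections share a pool only when their strings match exactly.
    public static bool SamePool(string a, string b) {
        return a == b;
    }
}
```

For example, "Server=mysrv01;Integrated Security=true" and "Integrated Security=true;Server=mysrv01" describe the same server, but because the text differs they land in two separate pools.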
Turn Off Features You Don't Use
Turn off automatic transaction enlistment if it's not needed. For the SQL Managed Provider, it's done via the connection string:

SqlConnection conn = new SqlConnection(
    "Server=mysrv01;Integrated Security=true;Enlist=false");
When filling a dataset with the data adapter, don't get primary key information if you don't have to (e.g. don't set MissingSchemaAction.AddWithKey unless you need it). The following method shows the key-retrieving configuration:

public DataSet SelectSqlSrvRows(DataSet dataset, string connection, string query) {
    SqlConnection conn = new SqlConnection(connection);
    SqlDataAdapter adapter = new SqlDataAdapter();
    adapter.SelectCommand = new SqlCommand(query, conn);
    adapter.MissingSchemaAction = MissingSchemaAction.AddWithKey;
    adapter.Fill(dataset);
    return dataset;
}
Avoid Auto-Generated Commands
When using a data adapter, avoid auto-generated commands. These require additional trips to the server to retrieve meta data, and give you a lower level of interaction control. While using auto-generated commands is convenient, it's worth the effort to do it yourself in performance-critical applications.
Beware ADO Legacy Design
Be aware that when you execute a command or call fill on the adapter, every record specified by your query is returned.
If server cursors are absolutely required, they can be implemented through a stored procedure in T-SQL. Avoid them where possible, because server cursor-based implementations don't scale very well.
If needed, implement paging in a stateless and connectionless manner. You can add additional records to the dataset by:
Making sure PK info is present
Changing the data adapter's select command as appropriate, and
Calling Fill
Keep Your Datasets Lean
Only put the records you need into the dataset. Remember that the dataset stores all of its data in memory, and that the more data you request, the longer it will take to transmit across the wire.
Use Sequential Access as Often as Possible
With a data reader, use CommandBehavior.SequentialAccess. This is essential for dealing with blob data types since it allows data to be read off of the wire in small chunks. While you can only work with one piece of the data at a time, the latency for loading a large data type disappears. If you don't need to work the whole object at once, using Sequential Access will give you much better performance.
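A sketch of chunked blob reading with sequential access (the command is assumed to select a single blob column):

```csharp
using System.Data;
using System.Data.SqlClient;

public class BlobSketch {
    public static void CopyBlob(SqlCommand cmd, System.IO.Stream target) {
        // SequentialAccess streams column data in order, so a large blob
        // never has to be buffered in memory all at once.
        SqlDataReader reader =
            cmd.ExecuteReader(CommandBehavior.SequentialAccess);
        byte[] buffer = new byte[4096];
        while (reader.Read()) {
            long offset = 0;
            long read;
            // GetBytes returns the number of bytes copied; 0 means done.
            while ((read = reader.GetBytes(0, offset, buffer, 0,
                                           buffer.Length)) > 0) {
                target.Write(buffer, 0, (int)read);
                offset += read;
            }
        }
        reader.Close();
    }
}
```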
Performance Tips for ASP.NET Applications
Cache Aggressively
When designing an app using ASP.NET, make sure you design with an eye on caching. On server versions of the OS, you have a lot of options for tweaking the use of caches on the server and client side. There are several features and tools in ASP that you can make use of to gain performance.
Output Caching—Stores the static result of an ASP.NET request. Specified using the <%@ OutputCache %> directive:
Duration—Time item exists in the cache
VaryByParam—Varies cache entries by Get/Post params
VaryByHeader—Varies cache entries by Http header
VaryByCustom—Varies cache entries by browser type by default; override it to vary by whatever you want
Fragment Caching—When it is not possible to store an entire page (privacy, personalization, dynamic content), you can use fragment caching to store parts of it for quicker retrieval later.
a) VaryByControl—Varies the cached items by values of a control
Cache API—Provides extremely fine granularity for caching by keeping a hashtable of cached objects in memory (System.Web.Caching). It also:
a) Includes Dependencies (key, file, time)
b) Automatically expires unused items
c) Supports Callbacks
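Tying the Output Caching options above together, a page-level directive might look like this (attribute values are hypothetical):

```aspx
<%@ OutputCache Duration="60" VaryByParam="ProductId" %>
```

Duration is in seconds, and VaryByParam="ProductId" keeps one cached copy of the page per distinct ProductId value in the request.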
Caching intelligently can give you excellent performance, and it's important to think about what kind of caching you need. Imagine a complex e-commerce site with several static pages for login, and then a slew of dynamically-generated pages containing images and text. You might want to use Output Caching for those login pages, and then Fragment Caching for the dynamic pages. A toolbar, for example, could be cached as a fragment. For even better performance, you could cache commonly used images and boilerplate text that appear frequently on the site using the Cache API. For detailed information on caching (with sample code), check out the ASP.NET Web site.
Use Session State Only If You Need To
One extremely powerful feature of ASP.NET is its ability to store session state for users, such as a shopping cart on an e-commerce site or a browser history. Since this is on by default, you pay the cost in memory even if you don't use it. If you're not using session state, turn it off and save yourself the overhead by adding <%@ Page EnableSessionState="false" %> to your page. This comes with several other options, which are explained at the ASP.NET Web site.
For pages that only read session state, you can choose EnableSessionState="ReadOnly". This carries less overhead than full read/write session state, and is useful when you need only part of the functionality and don't want to pay for the write capabilities.
Use View State Only If You Need To
An example of View State might be a long form that users must fill out: if they click Back in their browser and then return, the form will remain filled. When this functionality isn't used, this state eats up memory and performance. Perhaps the largest performance drain here is that a round-trip signal must be sent across the network each time the page is loaded to update and verify the cache. Since it is on by default, you will need to specify that you do not want to use View State with <%@ Page EnableViewState="false" %>. You should read more about View State on the ASP.NET Web site to learn about some of the other options and settings to which you have access.
Avoid STA COM
Apartment COM is designed to deal with threading in unmanaged environments. There are two kinds of Apartment COM: single-threaded and multithreaded. MTA COM is designed to handle multithreading, whereas STA COM relies on the messaging system to serialize thread requests. The managed world is free-threaded, and using Single Threaded Apartment COM requires that all unmanaged threads essentially share a single thread for interop. This results in a massive performance hit, and should be avoided whenever possible. If you can't port the Apartment COM object to the managed world, use <%@ Page AspCompat="true" %> for pages that use them. For a more detailed explanation of STA COM, see the MSDN Library.
Batch Compile
Always batch compile before deploying a large page into the Web. This can be initiated by doing one request to a page per directory and waiting until the CPU idles again. This prevents the Web server from being bogged down with compilations while also trying to serve out pages.
Remove Unnecessary Http Modules
Depending on the features used, remove unused or unnecessary http modules from the pipeline. Reclaiming the added memory and wasted cycles can provide you with a small speed boost.
Avoid the Autoeventwireup Feature
Instead of relying on AutoEventWireup, override the events from Page. For example, instead of writing a Page_Load() method, override the protected OnLoad(EventArgs) method. This saves the run time from having to do a CreateDelegate() for every page.
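A sketch of the wiring (the class name is hypothetical); note that Page.OnLoad is protected and takes an EventArgs:

```csharp
using System;
using System.Web.UI;

public class FastPage : Page {
    // Overriding OnLoad directly means the run time never has to
    // reflect over the page and CreateDelegate() a Page_Load handler.
    protected override void OnLoad(EventArgs e) {
        base.OnLoad(e);   // still raises the Load event for any subscribers
        // ... initialization that would have lived in Page_Load ...
    }
}
```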
Encode Using ASCII When You Don't Need UTF
By default, ASP.NET comes configured to encode requests and responses as UTF-8. If ASCII is all your application needs, eliminating the UTF-8 overhead can give you back a few cycles. Note that this can only be done on a per-application basis.
Use the Optimal Authentication Procedure
There are several different ways to authenticate a user, and some are more expensive than others (in order of increasing cost: None, Windows, Forms, Passport). Make sure you use the cheapest one that best fits your needs.
Tips for Porting and Developing in Visual Basic
A lot has changed under the hood from Microsoft® Visual Basic® 6 to Microsoft® Visual Basic® 7, and the performance map has changed with it. Due to the added functionality and security restrictions of the CLR, some functions are simply unable to run as quickly as they did in Visual Basic 6. In fact, there are several areas where Visual Basic 7 gets trounced by its predecessor. Fortunately, there are two pieces of good news:
Most of the worst slowdowns occur during one-time functions, such as loading a control for the first time. The cost is there, but you only pay it once.
There are a lot of areas where Visual Basic 7 is faster, and these areas tend to lie in functions that are repeated during run time. This means that the benefit grows over time, and in several cases will outweigh the one-time costs.
The majority of the performance issues come from areas where the run time does not support a feature of Visual Basic 6, and it has to be added to preserve the feature in Visual Basic 7. Working outside of the run time is slower, making some features far more expensive to use. The bright side is that you can avoid these problems with a little effort. There are two main areas that require work to optimize for performance, and few simple tweaks you can do here and there. Taken together, these can help you step around performance drains, and take advantage of the functions that are much faster in Visual Basic 7.
Error Handling
The first concern is error handling. This has changed a lot in Visual Basic 7, and there are performance issues related to the change. Essentially, the logic required to implement OnErrorGoto and Resume is extremely expensive. I suggest taking a quick look at your code, and highlighting all the areas where you use the Err object, or any error-handling mechanism. Now look at each of these instances, and see if you can rewrite them to use try/catch. A lot of developers will find that they can convert to try/catch easily for most of these cases, and they should see a good performance improvement in their program. The rule of thumb is "if you can easily see the translation, do it."
Here's an example of a simple Visual Basic program that uses On Error Goto compared with the try/catch version.

Sub SubWithError()
    On Error Goto SWETrap
    Dim x As Integer
    Dim y As Integer
    x = x / y
SWETrap:
    Exit Sub
End Sub


Sub SubWithErrorResumeLabel()
    On Error Goto SWERLTrap
    Dim x As Integer
    Dim y As Integer
    x = x / y
    Exit Sub
SWERLTrap:
    Resume SWERLExit
SWERLExit:
    Exit Sub
End Sub

Sub SubWithError()
    Dim x As Integer
    Dim y As Integer
    Try
        x = x / y
    Catch
        Return
    End Try
End Sub

Sub SubWithErrorResumeLabel()
    Dim x As Integer
    Dim y As Integer
    Try
        x = x / y
    Catch
        GoTo SWERLExit
    End Try

SWERLExit:
    Return
End Sub

The speed increase is noticeable. SubWithError() takes 244 milliseconds using OnErrorGoto, and only 169 milliseconds using try/catch. The second function takes 179 milliseconds compared to 164 milliseconds for the optimized version.
Use Early Binding
The second concern deals with objects and typecasting. Visual Basic 6 does a lot of work under the hood to support casting of objects, and many programmers aren't even aware of it. In Visual Basic 7, this is an area out of which you can squeeze a lot of performance. When you compile, use early binding: the compiler then performs type coercion only when it is explicitly requested. This has two major effects:
Strange errors become easier to track down.
Unneeded coercions are eliminated, leading to substantial performance improvements.
When you use an object as if it were of a different type, Visual Basic will coerce the object for you if you don't specify. This is handy, since the programmer has to worry about less code. The downside is that these coercions can do unexpected things, and the programmer has no control over them.
There are instances when you have to use late binding, but most of the time you can get away with early binding. For Visual Basic 6 programmers, this can be a bit awkward at first, since you have to worry about types more than in the past. This should be easy for new programmers, and people familiar with Visual Basic 6 will pick it up in no time.
Turn On Option Strict and Explicit
With Option Strict on, you protect yourself from inadvertent late binding and enforce a higher level of coding discipline. For a list of the restrictions present with Option Strict, see the MSDN Library. The caveat to this is that all narrowing type coercions must be explicitly specified. However, this in itself may uncover other sections of your code that are doing more work than you had previously thought, and it may help you stomp some bugs in the process.
Option Explicit is less restrictive than Option Strict, but it still forces programmers to provide more information in their code. Specifically, you must declare a variable before using it. This moves type inference from run time to compile time, and the eliminated run-time check translates into added performance for you.
I recommend that you start with Option Explicit, and then turn on Option Strict. This will protect you from a deluge of compiler errors, and allow you to gradually start working in the stricter environment. When both of these options are used, you ensure maximum performance for your application.
Use Binary Compare for Text
When comparing text, use binary compare instead of text compare. At run time, the overhead is much lighter for binary.
Minimize the Use of Format()
When you can, use toString() instead of format(). In most cases, it will provide you with the functionality you need, with much less overhead.
Use Charw
Use charw instead of char. The CLR uses Unicode internally, and char must be translated at run time if it is used. This can result in a substantial performance loss, and specifying that your characters are a full word long (using charw) eliminates this conversion.
Optimize Assignments
Use exp += val instead of exp = exp + val. Since exp can be arbitrarily complex, this can result in lots of unnecessary work. This forces the JIT to evaluate both copies of exp, and many times this is not needed. The first statement can be optimized far better than the second, since the JIT can avoid evaluating the exp twice.
Avoid Unnecessary Indirection
When you use byRef, you pass pointers instead of the actual object. Many times this makes sense (side-effecting functions, for example), but you don't always need it. Passing pointers results in more indirection, which is slower than accessing a value that is on the stack. When you don't need to go through the heap, it is best to avoid it.
Put Concatenations in One Expression
If you have multiple concatenations on multiple lines, try to stick them all on one expression. The compiler can optimize by modifying the string in place, providing a speed and memory boost. If the statements are split into multiple lines, the Visual Basic compiler will not generate the Microsoft Intermediate Language (MSIL) to allow in-place concatenation. See the StringBuilder example discussed earlier.
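The same principle shown here in C# for brevity (the Visual Basic compiler behaves analogously); both helper names are hypothetical:

```csharp
public class ConcatStyles {
    // Spread over several statements: each step can allocate an
    // intermediate string that is immediately thrown away.
    public static string PieceByPiece(string a, string b, string c) {
        string s = a;
        s = s + b;
        s = s + c;
        return s;
    }

    // One expression: the compiler can emit a single concatenation
    // (String.Concat(a, b, c)) with no intermediates.
    public static string OneExpression(string a, string b, string c) {
        return a + b + c;
    }
}
```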
Include Return Statements
Visual Basic allows a function to return a value without using the return statement. While Visual Basic 7 supports this, explicitly using return allows the JIT to perform slightly more optimizations. Without a return statement, each function is given several local variables on stack to transparently support returning values without the keyword. Keeping these around makes it harder for the JIT to optimize, and can impact the performance of your code. Look through your functions and insert return as needed. It doesn't change the semantics of the code at all, and it can help you get more speed from your application.
Tips for Porting and Developing in Managed C++
Microsoft is targeting Managed C++ (MC++) at a specific set of developers. MC++ is not the best tool for every job. After reading this document, you may decide that C++ is not the best tool, and that the tradeoff costs are not worth the benefits. If you aren't sure about MC++, there are many good resources to help you make your decision. This section is targeted at developers who have already decided that they want to use MC++ in some way, and want to know about the performance aspects of it.
For C++ developers, working in Managed C++ requires that several decisions be made. Are you porting some old code? If so, do you want to move the entire thing to managed space, or are you instead planning to implement a wrapper? For the purposes of this discussion, I'm going to focus on the 'port-everything' option and on writing MC++ from scratch, since those are the scenarios where the programmer will notice a performance difference.
Benefits of the Managed World
The most powerful feature of Managed C++ is the ability to mix and match managed and unmanaged code at the expression level. No other language allows you to do this, and there are some powerful benefits you can get from it if used properly. I'll walk through some examples of this later on.
The managed world also gives you huge design wins, in that a lot of common problems are taken care of for you. Memory management, thread scheduling and type coercions can be left to the run time if you desire, allowing you to focus your energies on the parts of the program that need it. With MC++, you can choose exactly how much control you want to keep.
MC++ programmers have the luxury of being able to use the Microsoft Visual C++® 7 (VC7) backend when compiling to IL, and then using the JIT on top of that. Programmers who are used to working with the Microsoft C++ compiler are used to things being lightning-fast. The JIT was designed with different goals, and has a different set of strengths and weaknesses. The VC7 compiler, not bound by the time restrictions of the JIT, can perform certain optimizations that the JIT cannot, such as whole-program analysis, more aggressive inlining and enregistration. There are also some optimizations that can be performed only in typesafe environments, leaving more room for speed than C++ allows.
Because of the different priorities in the JIT, some operations are faster than before while others are slower. There are tradeoffs you make for safety and language flexibility, and some of them aren't cheap. Fortunately, there are things a programmer can do to minimize the costs.
Porting: All C++ Code Can Compile to MSIL
Before we go any further, it's important to note that you can compile any C++ code into MSIL. Everything will work, but there's no guarantee of type-safety and you pay the marshalling penalty if you do a lot of interop. Why is it helpful to compile to MSIL if you don't get any of the benefits? In situations where you are porting a large code base, this allows you to gradually port your code in pieces. You can spend your time porting more code, rather than writing special wrappers to glue the ported and not-yet-ported code together if you use MC++, and that can result in a big win. It makes porting applications a very clean process. To learn more about compiling C++ to MSIL, take a look at the /clr compiler option.
However, simply compiling your C++ code to MSIL doesn't give you the security or flexibility of the managed world. You need to write in MC++, and in v1 that means giving up a few features. The list below is not supported in the current version of the CLR, but may be in the future. Microsoft chose to support the most common features first, and had to cut some others in order to ship. There is nothing that prevents them from being added later, but in the meantime you will need to do without them:
Multiple Inheritance
Templates
Deterministic Finalization
You can always interoperate with unsafe code if you need those features, but you will pay the performance penalty of marshalling data back and forth. And bear in mind that those features can only be used inside the unmanaged code. The managed space has no knowledge of their existence. If you are deciding to port your code, think about how much you rely on those features in your design. In a few cases, the redesign is too expensive and you will want to stick with unmanaged code. This is the first decision you should make, before you start hacking.
Advantages of MC++ Over C# or Visual Basic
Coming from an unmanaged background, MC++ preserves a lot of the ability to handle unsafe code. MC++'s ability to mix managed and unmanaged code smoothly provides the developer with a lot of power, and you can choose where on the gradient you want to sit when writing your code. On one extreme, you can write everything in straight, unadulterated C++ and just compile with /clr. On the other, you can write everything as managed objects and deal with the language limitations and performance problems mentioned above.
But the real power of MC++ comes when you choose somewhere in between. MC++ allows you to tweak some of the performance hits inherent in managed code, by giving you precise control over when to use unsafe features. C# has some of this functionality in the unsafe keyword, but it's not an integral part of the language and it is far less useful than MC++. Let's step through some examples showing the finer granularity available in MC++, and we'll talk about the situations where it comes in handy.
Generalized "byref" pointers
In C# you can only take the address of some member of a class by passing it to a ref parameter. In MC++, a byref pointer is a first-class construct. You can take the address of an item in the middle of an array and return that address from a function:

Byte* AddrInArray( Byte b[] ) {
    return &b[5];
}
We exploit this feature for returning a pointer to the "characters" in a System.String via our helper routine, and we can even loop through arrays using these pointers:

System::Char* PtrToStringChars(System::String*);

for( Char* pC = PtrToStringChars(S"boo");
     pC != NULL;
     pC++ )
{
    ... *pC ...
}
You can also do a linked-list traversal with injection in MC++ by taking the address of the "next" field (which you cannot do in C#):

Node **w = &Head;
while(true) {
    if( *w == 0 || val < (*w)->val ) {
        Node *t = new Node(val, *w);
        *w = t;
        break;
    }
    w = &(*w)->next;
}
In C#, you can't point to "Head", or take the address of the "next" field, so you have to special-case inserting at the first location, or when "Head" is null. Moreover, you have to look one node ahead all the time in the code. Compare this to what a good C# programmer would produce:

if( Head == null || val < Head.val ) {
    Node t = new Node(val, Head);
    Head = t;
} else {
    Node w = Head;
    while( w.next != null && w.next.val < val ) {
        w = w.next;
    }
    Node t = new Node(val, w.next);
    w.next = t;
}

Boxed Value Types
MC++ also lets you update the value inside a boxed object in place, with no unboxing:
__value struct V {
    int i;
};

int main() {
    V v = {10};
    __box V* pbV = __box(v);
    pbV->i += 10; // update without casting
}
In C# you have to unbox to a "v", then update the value and re-box back to an Object:

struct B { public int i; }

static void Main() {
    B b = new B();
    b.i = 5;
    object o = b;  // implicit box
    B b2 = (B)o;   // explicit unbox
    b2.i++;        // update
    o = b2;        // implicit re-box
}
STL Collections vs. Managed Collections—v1
The bad news: In C++, using the STL Collections was often just as fast as writing that functionality by hand. The CLR frameworks are very fast, but they suffer from boxing and unboxing issues: everything is an object, and without template or generic support, all actions have to be checked at run time.
The good news: In the long term, you can bet that this problem will go away as generics are added to the run time. Code you deploy today will experience the speed boost without any changes. In the short term, you can use static casting to prevent the check, but this is no longer safe. I recommend using this method in tight code where performance is absolutely critical, and you've identified two or three hot spots.
Use Stack Managed Objects
In C++, you specify whether an object is allocated on the stack or the heap. You can still do this in MC++, but there are restrictions you should be aware of. The CLR uses ValueTypes for all stack-managed objects, and there are limitations to what ValueTypes can do (no inheritance, for example). More information is available on the MSDN Library.
Corner Case: Beware Indirect Calls Within Managed Code—v1
In the v1 run time, all indirect function calls are made natively, and therefore require a transition into unmanaged space. Any indirect function call can only be made from native mode, which means that all indirect calls from managed code need a managed-to-unmanaged transition. This is a serious problem when the target is a managed function, since a second transition must then be made to execute it. When compared to the cost of a single call instruction, the total is fifty to one hundred times slower than in C++!
Fortunately, when you are calling a method that resides within a garbage-collected class, optimization removes the transitions. However, in the specific case of a regular C++ class whose methods are compiled with /clr, the method bodies are managed code. Since the optimization cannot be applied there, you are hit with the full double-transition cost. Below is an example of such a case.

//////////////////////// a.h: //////////////////////////
class X {
public:
void mf1();
void mf2();
};

typedef void (X::*pMFunc_t)();


////////////// a.cpp: compiled with /clr /////////////////
#include "a.h"

int main(){
pMFunc_t pmf1 = &X::mf1;
pMFunc_t pmf2 = &X::mf2;

X *pX = new X();
(pX->*pmf1)();
(pX->*pmf2)();

return 0;
}


////////////// b.cpp: compiled without /clr /////////////////
#include "a.h"

void X::mf1(){}


////////////// c.cpp: compiled with /clr ////////////////////
#include "a.h"
void X::mf2(){}
There are several ways to avoid this:
Make the class into a managed class ("__gc")
Remove the indirect call, if possible
Leave the class compiled as unmanaged code (e.g. do not use /clr)
Minimize Performance Hits—version 1
There are several operations or features that are simply more expensive in MC++ under version 1 JIT. I'll list them and give some explanation, and then we'll talk about what you can do about them.
Abstractions—This is an area where the beefy, slow C++ backend compiler wins heavily over the JIT. If you wrap an int inside a class for abstraction purposes, and you access it strictly as an int, the C++ compiler can reduce the overhead of the wrapper to practically nothing. You can add many levels of abstraction to the wrapper, without increasing the cost. The JIT is unable to take the time necessary to eliminate this cost, making deep abstractions more expensive in MC++.
Floating Point—The v1 JIT does not currently perform all the FP-specific optimizations that the VC++ backend does, making floating point operations more expensive for now.
Multidimensional Arrays—The JIT is better at handling jagged arrays than multidimensional ones, so use jagged arrays instead.
64 bit Arithmetic—In future versions, 64-bit optimizations will be added to the JIT.
What You Can Do
At every phase of development, there are several things you can do. With MC++, the design phase is perhaps the most important area, as it will determine how much work you end up doing and how much performance you get in return. When you sit down to write or port an application, you should consider the following things:
Identify areas where you use multiple inheritance, templates, or deterministic finalization. You will have to get rid of these, or else leave that part of your code in unmanaged space. Think about the cost of redesigning, and identify areas that can be ported.
Locate performance hot spots, such as deep abstractions or virtual function calls across managed space. These will also require a design decision.
Look for objects that have been specified as stack-managed. Make sure they can be converted into ValueTypes. Mark the others for conversion to heap-managed objects.
During the coding stage, you should be aware of the operations that are more expensive and the options you have in dealing with them. One of the nicest things about MC++ is that you come to grips with all the performance issues up front, before you start coding: this is helpful in paring down work later on. However, there are still some tweaks you can perform while you code and debug.
Determine which areas make heavy use of floating point arithmetic, multidimensional arrays, or library functions. Which of these areas are performance critical? Use profilers to identify the fragments where the overhead is costing you most, then choose whichever option seems best:
Keep the whole fragment in unmanaged space.
Use static casts on the library accesses.
Try tweaking boxing/unboxing behavior (explained later).
Code your own structure.
Finally, work to minimize the number of transitions you make. If you have some unmanaged code or an interop call sitting in a loop, make the entire loop unmanaged. That way you'll only pay the transition cost twice, rather than for each iteration of the loop.
Additional Resources
Related topics on performance in the .NET Framework include:
Performance Considerations of Run-Time Technologies in the .NET Framework
Watch for future articles currently under development, including an overview of design, architectural and coding philosophies, a walkthrough of performance analysis tools in the managed world, and a performance comparison of .NET to other enterprise applications available today.
Appendix: Cost of Virtual Calls and Allocations
Call Type                                        # Calls/sec
ValueType Non-Virtual Call                       809,971,805.600
Class Non-Virtual Call                           268,478,412.546
Class Virtual Call                               109,117,738.369
ValueType Virtual (Obj Method) Call                3,004,286.205
ValueType Virtual (Overridden Obj Method) Call     2,917,140.844
Load Type by Newing (Static)                           1,434.720
Load Type by Newing (Virtual Methods)                  1,369.863
Note: The test machine is a PIII 733 MHz, running Windows 2000 Professional with Service Pack 2.
This chart compares the cost associated with different types of method calls, as well as the cost of instantiating a type that contains virtual methods. The higher the number, the more calls/instantiations-per-second can be performed. While these numbers will certainly vary on different machines and configurations, the relative cost of performing one call over another remains significant.
ValueType Non-Virtual Call: This test calls an empty non-virtual method contained within a ValueType.
Class Non-Virtual Call: This test calls an empty non-virtual method contained within a class.
Class Virtual Call: This test calls an empty virtual method contained within a class.
ValueType Virtual (Obj Method) Call: This test calls ToString() (a virtual method) on a ValueType, which resorts to the default object method.
ValueType Virtual (Overridden Obj Method) Call: This test calls ToString() (a virtual method) on a ValueType that has overridden the default.
Load Type by Newing (Static): This test allocates space for a class with only static methods.
Load Type by Newing (Virtual Methods): This test allocates space for a class with virtual methods.
One conclusion you can draw is that virtual function calls are roughly two and a half times as expensive as regular calls when you're calling a method in a class. Bear in mind that calls are cheap to begin with, so I wouldn't remove all virtual calls. You should always use virtual methods when it makes sense to do so.
The JIT cannot inline virtual methods, so you lose a potential optimization if you make everything virtual.
Allocating space for an object that has virtual methods is slightly slower than the allocation for an object without them, since extra work must be done to find space for the virtual tables.
Notice that calling a non-virtual method within a ValueType is more than three times as fast as in a class, but once you treat it as a class you lose terribly. This is characteristic of ValueTypes: treat them like structs and they're lightning fast; treat them like classes and they're painfully slow. ToString() is a virtual method, so before it can be called, the struct must be converted to an object on the heap. Instead of being merely twice as slow, calling a virtual method on a ValueType is, per the numbers above, more than thirty times as slow as a virtual call on a class! The moral of the story? Don't treat ValueTypes as classes.