To fully understand what a thread is, and how and when to use one, we should first look at how the Windows operating system works. Windows is a preemptive multitasking operating system.
In a preemptive multitasking operating system, the operating system decides how much time a task gets to execute. In a cooperative multitasking operating system, the task itself decides how long it will execute. So the difference between preemptive and cooperative multitasking lies in who decides how long a task runs.
A system with only one CPU can work on only one task at a time. The memory space in which a given application executes is called a process: a process is the memory set aside for an application to run in. Within this process, the thing that is actually executed is the thread; the operating system allocates CPU time to threads. A process contains at least one thread, but it can contain many threads, which are executed "simultaneously" by sharing the CPU. The illusion of simultaneous execution arises because the operating system divides the CPU's time among all the running threads.
So a thread is the basic unit of a process to which the operating system allocates CPU time. A process always has at least one running thread, but it can also have many running threads that share the CPU's time.
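The process/thread relationship can be observed from code. Here is a minimal sketch (the class name ProcessThreadsDemo is my own) that asks the operating system about the current process:

```csharp
using System;
using System.Diagnostics;

public class ProcessThreadsDemo
{
    public static void Main()
    {
        // The running application is a process; it always contains
        // at least one thread (the one executing Main right now).
        Process current = Process.GetCurrentProcess();
        Console.WriteLine("Process: " + current.ProcessName);
        Console.WriteLine("Threads in this process: " + current.Threads.Count);
    }
}
```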
The operating system decides how to allocate CPU time to processes according to each process's priority level. In Windows 2000 there are four priority levels that can be assigned to processes:
Real time: the highest priority. Processes with real-time priority can interrupt all processes with lower priority. This priority is reserved for processes that must not be interrupted in order to work correctly, such as streaming video or applications with complex graphics.
High priority: for processes that should respond immediately, such as the Windows Task List, which should appear immediately after the user requests it. High-priority processes can be interrupted only by real-time processes, and can preempt all processes with lower priority.
Normal priority: for ordinary applications that need no special CPU time allocation.
Idle priority: for processes that run only when the CPU is idle, such as a screen saver. The priority levels above exist in Windows 2000; in .NET the levels are different, but the logic behind them is the same. The priority levels in .NET will be discussed later in this article.
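These process priority classes can be inspected and changed from .NET through the Process class. A small sketch, assuming the process has the rights to raise its own priority (the class name is mine):

```csharp
using System;
using System.Diagnostics;

public class PriorityClassDemo
{
    public static void Main()
    {
        Process p = Process.GetCurrentProcess();
        Console.WriteLine("Default priority class: " + p.PriorityClass);
        // Raise the whole process above Normal; the scheduler will now
        // favor its threads over those of Normal-priority processes.
        p.PriorityClass = ProcessPriorityClass.High;
        Console.WriteLine("New priority class: " + p.PriorityClass);
        // Put it back so we do not starve other processes.
        p.PriorityClass = ProcessPriorityClass.Normal;
    }
}
```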
Types of Threading in Windows
There are three basic types of threading in the Win32 environment:
Single threading: there is only one thread in the application, and it has to do all the work. That thread owns all the space allocated for the process.
Apartment threading: there are multiple threads within the application, and the application defines when and for how long each thread executes. Each thread has an assigned space within the space allocated for the application, and the threads do not share that resource.
Free threading: there are multiple threads within the application, and these threads share resources among themselves. Different threads can call the same method or property at the same time. The apartment model is more efficient than single threading because the work is divided among multiple objects, while the free model is the fastest and most efficient. But the free model hides many risks at the same time: because resources are shared among multiple objects, much more attention must be paid to synchronizing these activities and preventing conflicting changes to the same data.
Single threaded versus multi threaded applications
Suppose an application consists of only one thread, and there is an operation that can take a long time, such as requesting data from a remote server. The CPU will send the request to the server and wait for the response. The request travels to the server, which, let's say, is in Australia. The server receives the request, prepares the data, and sends it back. For all the time the request and the data are traveling, the application will simply wait, and only after that can it perform any other operation. This can result in a lot of idle time, and the user has to wait without being able to do anything else. On the other hand, if the application uses more than one thread, the operation of requesting data from a remote computer can run on a new thread while the original thread keeps the focus, allowing the user to perform whatever action he wants while the request and the data travel.
Windows itself is multi-threaded: we can download a file from the Internet while writing a document in MS Word and listening to a music CD. What the operating system does is divide the CPU's time into small time slices for each of these processes. Because the CPU is very fast and the time slices are really small, everything seems to happen simultaneously.
Let's assume again that we have two identical applications, but one of them consists of one thread and the other has many threads. Let's also assume that the application performs operations which leave the CPU idle for a relatively long time, such as connecting to a remote computer or requesting data from a remote server. If we subtract the time during which the CPU has been idle from the total time the application has run, we get the time the CPU has actually worked for the single-threaded application. For the multi-threaded application, we can find the total time the CPU has worked by adding up the time the CPU has worked in each thread.
If we compare both times, we will find that the CPU is used more efficiently by the multi-threaded application.
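This difference can be measured. Here is a sketch that simulates the idle wait with Thread.Sleep (the 500 ms delay and the class name are my assumptions): two simulated requests run back to back on one thread, then at the same time on two threads.

```csharp
using System;
using System.Threading;

public class IdleTimeDemo
{
    // Simulates a request whose answer takes about half a second
    // to arrive; the CPU is idle while the thread sleeps.
    public static void RemoteRequest() { Thread.Sleep(500); }

    public static double RunSequential()
    {
        DateTime start = DateTime.Now;
        RemoteRequest();
        RemoteRequest();
        return (DateTime.Now - start).TotalMilliseconds;   // roughly 1000 ms
    }

    public static double RunParallel()
    {
        DateTime start = DateTime.Now;
        Thread t1 = new Thread(new ThreadStart(RemoteRequest));
        Thread t2 = new Thread(new ThreadStart(RemoteRequest));
        t1.Start(); t2.Start();
        t1.Join(); t2.Join();    // both requests waited at the same time
        return (DateTime.Now - start).TotalMilliseconds;   // roughly 500 ms
    }

    public static void Main()
    {
        Console.WriteLine("One thread:  " + RunSequential() + " ms");
        Console.WriteLine("Two threads: " + RunParallel() + " ms");
    }
}
```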
When to use Multithreading
- There are complex computations that are going to take a long time
- Some of the operations can be performed in parallel
- The CPU has to wait for a response from a remote computer or server
- There are long I/O operations
- The application has to wait for user input.
When not to use Multithreading
- Just because it is cool: you must have at least one of the above conditions in order to employ multithreading in your application
- Before you have proven that the single-threaded application is unacceptably slow, and that using multiple threads will significantly improve its performance.
Advantages of Multithreading
- Better responsiveness to the user: if there are operations that can take a long time, they can be put in a separate thread, and the application will preserve its responsiveness to user input
- A faster application: the work is done by more objects.
Disadvantages of Multithreading
- Overhead for the processor: each time the CPU switches away from a thread, it must save in memory the point that thread has reached, because the next time the processor runs that thread it has to know where it stopped and where to start from
- The code becomes more complex: using threads makes the code more difficult to read and debug
- Sharing resources among threads can lead to deadlocks or other unexpected problems.
Methods for creating threads in .NET
There are two methods for creating threads in a .NET application:
Using the Thread class to create and manage threads
Start a thread
To declare a thread we use the following code:
Thread myThread=new Thread(myThreadStart);
Here we pass to the thread constructor, as a parameter, the entry point where the thread is going to start working; that is, the method the thread is going to execute. Because we pass a method, we have to do this through a delegate. There is a predefined delegate in the System.Threading namespace:
public delegate void ThreadStart();
The parameter passed to the constructor should be of this type too.
ThreadStart myThreadStart=new ThreadStart(myThreadClass.hello);
Here I have a class myThreadClass with a method called "hello". This is going to be the starting point for myThreadStart. But merely declaring the delegate and the thread does not start the thread. We use the following code to start it:
myThread.Start();
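Put together, a complete minimal program looks like this (here hello is a static method rather than the instance method of the article's myThreadClass, so the sketch stays self-contained; the ran flag is my addition to show that the thread really executed):

```csharp
using System;
using System.Threading;

public class HelloThread
{
    public static bool ran = false;

    // The entry point the new thread is going to execute.
    public static void hello()
    {
        ran = true;
        Console.WriteLine("Hello from the new thread");
    }

    public static void Main()
    {
        ThreadStart myThreadStart = new ThreadStart(hello);
        Thread myThread = new Thread(myThreadStart);
        myThread.Start();   // the new thread begins executing hello()
        myThread.Join();    // wait here until it finishes
    }
}
```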
Passing a parameter to a thread:
Sometimes you may need to create two or more threads with the same entry point but with different tasks. That means you will have to pass different parameters to the method. One easy and direct way of doing this is to create a class that holds the thread, fields that hold the parameters, and the method to be executed.
This is my class:
public class multiply
{
    //This is the thread I am going to start:
    public Thread myThread;
    //These two are the parameters I want to pass:
    private string name;
    private int counter;
    //In the constructor of the class I assign the passed values to the parameters:
    public multiply(string pname, int pcounter)
    {
        name=pname;
        counter=pcounter;
        //and start the thread:
        myThread=new Thread(new ThreadStart(start));
        myThread.Start();
    }
    //This is the method the thread is going to execute:
    private void start()
    {
        Console.WriteLine("Now thread " + name + " has started");
        for(int i=1; i<=8*counter; i++)
            Console.WriteLine(name + ": count has reached " + i);
        Console.WriteLine("Thread " + name + " has finished");
    }
}
public class TestClass
{
    static int counter=4;   //some value for the parameter
    static void Main()
    {
        //Here I create an instance of the multiply class and pass
        //two unique parameters to the constructor:
        multiply m1=new multiply("First", counter);
        //Here I create another instance of the multiply class and
        //pass two other parameters:
        multiply m2=new multiply("Second", counter*2);
    }
}
Because each instance of the class keeps its own copy of the fields, the two parameters will retain their unique values even if the threads are executed simultaneously.
You can assign different priorities to the threads you create in your application. That means you can tell the system which thread should run first, which thread can preempt others, and which thread yields to everything else. The possible values of the ThreadPriority enumeration are: Lowest, BelowNormal, Normal, AboveNormal and Highest.
A thread with a higher priority can preempt a thread with a lower priority level. Threads with Highest priority are scheduled first and preempt all lower-priority threads. You can assign a priority level to a thread using the following code:
myThread.Priority=ThreadPriority.Highest;
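A short sketch of setting priorities on two threads (the class name, the busy-work loop, and the chosen priority values are my own):

```csharp
using System;
using System.Threading;

public class PriorityDemo
{
    public static void Work()
    {
        // Some CPU-bound busy work.
        for (int i = 0; i < 100000; i++) { }
    }

    public static void Main()
    {
        Thread low = new Thread(new ThreadStart(Work));
        Thread high = new Thread(new ThreadStart(Work));
        // When both threads are ready to run, the scheduler
        // favors the one with the higher priority.
        low.Priority = ThreadPriority.BelowNormal;
        high.Priority = ThreadPriority.AboveNormal;
        Console.WriteLine("low: " + low.Priority + ", high: " + high.Priority);
        low.Start(); high.Start();
        low.Join(); high.Join();
    }
}
```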
Stop a thread
Once a thread is started, it can be stopped for a given period of time, resumed after that, or even killed. You can make a thread "sleep" for a given interval with the following call:
Thread.Sleep(1000);
This means that the current thread will be stopped for 1000 milliseconds and resumed by the system after that. Sleep() is a static method of the Thread class and cannot be called on a thread instance you have created. You can suspend a thread instance with the Suspend() method:
myThread.Suspend();
The thread will stay dormant until you call it back into action. You can resume it with the Resume() method:
myThread.Resume();
The thread will be resumed at the point where it was suspended. You can also kill a thread outright with the Abort() method:
myThread.Abort();
This stops the thread, and the system destroys all the data related to it. Suspending and aborting a thread does not take place immediately: the system may allow the thread to perform a few more actions before it is suspended or killed, in order to stop it at a point that is safe for the running application. The Abort() method actually throws a ThreadAbortException in the affected thread. ThreadAbortException is a special exception that cannot be swallowed: even if the thread is inside a try...catch block, the exception is automatically re-thrown after the catch block, and the code in the finally block is executed. This ensures that some cleanup can take place before the thread shuts down.
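The finally guarantee can be demonstrated. A sketch for the .NET Framework this article targets (note that Abort() is not supported on the newer .NET Core runtimes; the class name, the delays, and the cleanedUp flag are my own):

```csharp
using System;
using System.Threading;

public class AbortDemo
{
    public static bool cleanedUp = false;

    public static void Worker()
    {
        try
        {
            while (true) Thread.Sleep(100);   // pretend to work forever
        }
        finally
        {
            // Runs even when the thread is aborted.
            cleanedUp = true;
            Console.WriteLine("finally block executed");
        }
    }

    public static void Main()
    {
        Thread t = new Thread(new ThreadStart(Worker));
        t.Start();
        Thread.Sleep(300);   // give the worker time to get going
        t.Abort();           // throws ThreadAbortException inside Worker
        t.Join();            // wait until the thread is really gone
        Console.WriteLine("Worker alive: " + t.IsAlive);
    }
}
```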
Wait for another thread
You can make the current thread wait for another thread to finish with the Join() method:
myThread.Join(10000);
This will make the calling thread wait up to 10 seconds (10000 milliseconds) and resume execution after that. We can check whether myThread is still alive after the calling thread resumes:
if(myThread.IsAlive)
    Console.WriteLine("myThread is still running");
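A complete sketch of Join() with a timeout (SlowWork and the durations are my assumptions; Join(int) also returns true when the thread finished within the timeout):

```csharp
using System;
using System.Threading;

public class JoinDemo
{
    public static void SlowWork() { Thread.Sleep(2000); }

    public static bool Demo()
    {
        Thread myThread = new Thread(new ThreadStart(SlowWork));
        myThread.Start();
        // Wait at most 10 seconds for myThread to finish.
        bool finished = myThread.Join(10000);
        if (myThread.IsAlive)
            Console.WriteLine("myThread is still running");
        else
            Console.WriteLine("myThread has finished");
        return finished;
    }

    public static void Main() { Demo(); }
}
```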
When we employ free multithreading in an application, we expose the application to a serious risk of corrupting data, because more than one thread can access a variable at a time and try to write to the same variable. We can prevent this through synchronization. Synchronization is the process of ensuring that only one thread accesses a given variable at a time. There are many ways of synchronizing threads; two of them are the Monitor class and the lock keyword.
Monitor class
We use the Monitor class when we want to lock an object and perform some operations on it with one thread at a time. We acquire the lock with the Enter() method of the Monitor class:
Monitor.Enter(object1);
We release the object with the Exit() method of the Monitor class:
Monitor.Exit(object1);
We can have the following situation:
static object object1="Experiment";
Thread t1=new Thread(new ThreadStart(method1));
Thread t2=new Thread(new ThreadStart(method2));
public static void method1()
{
    if(Monitor.TryEnter(object1))
    {
        //Some code to be done
        Monitor.Exit(object1);
    }
    else
    {
        //object1 is locked by the other thread
    }
}
//method2 has the same body as method1
At first, one of the threads locks the object and starts doing its job. At some point the other thread starts and first tests whether the object is free. If the object is still in the possession of the first thread, the acquiring thread simply goes to the else statement. When the object becomes free, the waiting thread can take it and lock it for itself.
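A complete runnable version of this situation (the 200 ms hold time and the turnsTaken counter are my additions, there to make the hand-over visible):

```csharp
using System;
using System.Threading;

public class MonitorDemo
{
    static object object1 = "Experiment";
    public static int turnsTaken = 0;

    public static void Grab()
    {
        if (Monitor.TryEnter(object1))
        {
            // Only one thread at a time gets past TryEnter.
            turnsTaken++;
            Thread.Sleep(200);        // hold the lock for a while
            Monitor.Exit(object1);
        }
        else
        {
            Console.WriteLine("object1 was busy, giving up");
        }
    }

    public static void Main()
    {
        Thread t1 = new Thread(new ThreadStart(Grab));
        Thread t2 = new Thread(new ThreadStart(Grab));
        t1.Start(); t2.Start();
        t1.Join(); t2.Join();
        Console.WriteLine("Turns taken: " + turnsTaken);
    }
}
```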
lock() keyword
We use the lock() keyword when we want to lock an object:
public static void method()
{
    lock(object1)
    {
        //Some code to be done
    }
}
What lock() does is create a mutually exclusive lock around the object, so no other thread can acquire it while execution is inside the lock() block. Even if the thread that holds the lock loses its time slice and another thread comes into the picture, the newcomer will not get access to the locked object.
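The classic use of lock() is protecting a shared counter. A sketch (the class name, the loop count, and the sync object are mine): without the lock the two threads would lose some of each other's updates; with it the result is always exact.

```csharp
using System;
using System.Threading;

public class LockDemo
{
    static object sync = new object();
    public static int total = 0;

    public static void Add()
    {
        for (int i = 0; i < 100000; i++)
        {
            lock (sync)    // only one thread may update total at a time
            {
                total++;
            }
        }
    }

    public static void Main()
    {
        Thread t1 = new Thread(new ThreadStart(Add));
        Thread t2 = new Thread(new ThreadStart(Add));
        t1.Start(); t2.Start();
        t1.Join(); t2.Join();
        Console.WriteLine(total);   // 200000: no update was lost
    }
}
```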
Deadlocks
A deadlock is a bug in which two threads are each trying to access a resource that is locked by the other:
//One thread has reached this code and has locked a:
lock(a)
{
    lock(b) { /*Some code to be done*/ }
}
//The other thread has reached here and has locked b:
lock(b)
{
    lock(a) { /*Some code to be done*/ }
}
Neither thread is going to release its object, and both will wait forever for the resource they need. Deadlocks can be avoided by acquiring all the resources you will need at the beginning of the code, and by always locking them in the same order.
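The ordering rule can be sketched like this (the class and field names are mine): both threads take the locks in the same a-then-b order, so the circular wait that causes a deadlock can never form.

```csharp
using System;
using System.Threading;

public class OrderingDemo
{
    static object a = new object();
    static object b = new object();
    public static int finishedCount = 0;

    // Both methods acquire a before b; because the order is the same
    // everywhere, no thread can ever hold b while waiting for a.
    public static void Method1()
    {
        lock (a) { lock (b) { finishedCount++; } }
    }

    public static void Method2()
    {
        lock (a) { lock (b) { finishedCount++; } }
    }

    public static void Main()
    {
        Thread t1 = new Thread(new ThreadStart(Method1));
        Thread t2 = new Thread(new ThreadStart(Method2));
        t1.Start(); t2.Start();
        t1.Join(); t2.Join();
        Console.WriteLine("Threads finished: " + finishedCount);
    }
}
```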