Thread (computing)

From Wikipedia, the free encyclopedia
Revision as of 05:48, 16 May 2005
This article is about the computer science term. For other uses of this word, see Thread (disambiguation).

Many programming languages, operating systems, and other software development environments support what are called "threads" of execution. Threads are similar to processes, in that both represent a single sequence of instructions executed in parallel with other sequences, either by time slicing or multiprocessing. Threads are a way for a program to split itself into two or more simultaneously running tasks. (The name "thread" is by analogy with the way that a number of threads are interwoven to make a piece of fabric).

A common use of threads is having one thread pay attention to the graphical user interface while others perform a long calculation in the background. As a result, the application responds more readily to user interaction.
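As a sketch of this pattern, the long calculation below runs on a background thread while the main thread stays free to handle events. The simple loop standing in for a real GUI event loop, and all class names, are inventions of this example.

```java
// Sketch: a long calculation runs on a background thread so the
// main ("UI") thread stays responsive. The event loop here is a
// stand-in for a real GUI toolkit's dispatch loop.
public class BackgroundWork {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            long sum = 0;
            for (long i = 0; i < 50_000_000L; i++) sum += i; // the long calculation
            System.out.println("calculation done: " + sum);
        });
        worker.start();

        // Meanwhile the main thread remains free to handle "events".
        for (int event = 1; event <= 3; event++) {
            System.out.println("handled UI event " + event);
            Thread.sleep(10);
        }
        worker.join(); // wait for the background calculation to finish
    }
}
```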

An unrelated use of the term thread is threaded code, a form of code consisting entirely of subroutine calls, written without the subroutine call instruction and processed by an interpreter or the CPU. Forth and early versions of the B programming language both used threaded code.

Threads compared with processes

Threads are distinguished from traditional multi-tasking operating system processes in that processes are typically independent, carry considerable state information, have separate address spaces, and interact only through system-provided inter-process communication mechanisms. Multiple threads, on the other hand, typically share the state information of a single process, and share memory and other resources directly. Context switching between threads in the same process is typically faster than context switching between processes. Systems like Windows NT and OS/2 are said to have "cheap" threads and "expensive" processes; in other operating systems the difference is less pronounced.

An advantage of a multi-threaded program is that it can operate faster on computer systems that have multiple CPUs, or across a cluster of machines, because the threads of the program naturally lend themselves to truly concurrent execution. In such cases, the programmer must be careful to avoid race conditions and other non-intuitive behaviors. For data to be manipulated correctly, threads often need to rendezvous in time in order to process it in the correct order. Threads may also require atomic operations (often implemented using semaphores) to prevent common data from being modified simultaneously, or read while in the process of being modified. Careless use of such primitives can lead to deadlocks.
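A minimal sketch of protecting shared data with a semaphore, using Java's java.util.concurrent.Semaphore; the class and field names are invented for illustration. A binary semaphore (one permit) makes the read-modify-write on the counter atomic, so no increments are lost.

```java
import java.util.concurrent.Semaphore;

// Sketch: a binary semaphore guards the read-modify-write on `counter`,
// so two threads incrementing concurrently lose no updates.
public class AtomicCounter {
    static int counter = 0;
    static final Semaphore mutex = new Semaphore(1); // binary semaphore

    public static void main(String[] args) throws InterruptedException {
        Runnable increment = () -> {
            for (int i = 0; i < 100_000; i++) {
                mutex.acquireUninterruptibly(); // enter critical section
                try {
                    counter++;                  // protected read-modify-write
                } finally {
                    mutex.release();            // leave critical section
                }
            }
        };
        Thread a = new Thread(increment), b = new Thread(increment);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(counter); // always 200000 with the semaphore
    }
}
```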

Use of threads in programming often introduces state inconsistencies. A common anti-pattern is to set a global variable, then invoke subprograms that depend on its value; this is known as "accumulate and fire".
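The anti-pattern can be sketched as follows; the class, field, and method names are all hypothetical. The hazard is the window between setting the global and using it.

```java
// Sketch of the "accumulate and fire" anti-pattern: a subprogram reads
// a global set by its caller. With one thread this works; with two,
// another thread may overwrite the global between the set and the use.
public class AccumulateAndFire {
    static int argument;               // global "parameter" (the anti-pattern)

    static int square() {              // depends on the global's current value
        return argument * argument;
    }

    public static void main(String[] args) {
        argument = 3;                  // accumulate...
        // ...another thread could change `argument` right here...
        int result = square();         // ...then fire
        System.out.println(result);    // 9 here, but not guaranteed under threads
        // The safe form passes the value explicitly, e.g. square(3).
    }
}
```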

Operating systems generally implement threads in one of two ways: preemptive multithreading, or cooperative multithreading. Preemptive multithreading is generally considered the superior implementation, as it allows the operating system to determine when a context switch should occur. Cooperative multithreading, on the other hand, relies on the threads themselves to relinquish control once they are at a stopping point. This can create problems if a thread is waiting for a resource to become available. The disadvantage to preemptive multithreading is that the system may make a context switch at an inappropriate time, causing priority inversion or other bad effects which may be avoided by cooperative multithreading.

Traditional mainstream computing hardware had little support for multithreading, as switching between threads was generally already quicker than a full process context switch. Processors in embedded systems, which have stronger real-time requirements, may support multithreading by decreasing the thread-switch time, perhaps by allocating a dedicated register file to each thread instead of saving and restoring a common register file. In the late 1990s, the idea of executing instructions from multiple threads simultaneously became known as simultaneous multithreading. This feature was introduced in Intel's Pentium 4 processor under the name Hyper-Threading.

Processes, threads, and fibers

The concepts of process, thread, and fiber are interrelated by a sense of "ownership" and of containment.

A process is the "heaviest" unit of kernel scheduling. Processes own resources allocated by the operating system. Resources include memory, file handles, sockets, device handles, and windows. Processes do not share address spaces or file resources except through explicit methods such as inheriting file handles or shared memory segments, or mapping the same file in a shared way. Processes are typically pre-emptively multitasked. However, Windows 3.1 and older versions of Mac OS used co-operative or non-preemptive multitasking.

A thread is the "lightest" unit of kernel scheduling. At least one thread exists within each process. If multiple threads can exist within a process, then they share the same memory and file resources. Threads are pre-emptively multitasked if the operating system's process scheduler is pre-emptive. Threads do not own resources except for a stack and a copy of registers including the program counter.

In some situations, a distinction is made between "kernel threads" and "user threads": the former are managed and scheduled by the kernel, whereas the latter are managed and scheduled entirely in userspace. In this article, the term "thread" refers to kernel threads and "fiber" to user threads.

A fiber, also known as a coroutine, is a user-level thread. Fibers are co-operatively scheduled: a running fiber must explicitly "yield" to allow another fiber to run. A fiber can be scheduled to run in any thread in the same process.
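Java has no built-in fibers, so the following is only a rough userspace sketch of co-operative scheduling, with "yield" modeled as returning from a step function; every name in it is invented for illustration. A simple round-robin scheduler runs each fiber until it declares itself finished.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Minimal sketch of co-operative scheduling: each "fiber" is a step
// function that runs briefly and returns (its "yield"); a userspace
// scheduler runs the fibers round-robin until all have finished.
public class FiberSketch {
    interface Fiber { boolean step(); } // returns false when finished

    public static void main(String[] args) {
        Queue<Fiber> ready = new ArrayDeque<>();
        for (int id = 1; id <= 2; id++) {
            final int fid = id;
            final int[] count = {0};
            ready.add(() -> {
                System.out.println("fiber " + fid + " step " + (++count[0]));
                return count[0] < 3; // yield; wants to run again until 3 steps
            });
        }
        while (!ready.isEmpty()) {       // the userspace scheduler
            Fiber f = ready.poll();
            if (f.step()) ready.add(f);  // re-queue the fiber if not done
        }
    }
}
```

Because scheduling is explicit, a fiber that never returns from `step` would starve all the others, which is exactly the hazard cooperative multithreading carries.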

Thread and fiber issues

Typically fibers are implemented entirely in userspace. As a result, context switching between fibers in a process is extremely efficient: because the kernel is oblivious to the existence of fibers, a context switch does not require a system call. Instead, a context switch can be performed by saving the CPU registers used by the currently executing fiber and loading the registers required by the fiber to be executed. Since scheduling occurs in userspace, the user-level program can tailor the scheduling mechanism to its particular task.

However, the use of blocking system calls (such as are commonly used to implement synchronous I/O) in fibers can be problematic. If a fiber performs a system call that blocks (perhaps to wait for an I/O operation to complete), the other fibers in the process are unable to run until the system call returns.

A common solution to this problem is to provide an I/O API that implements a synchronous interface by using non-blocking I/O internally, scheduling another fiber while the I/O operation is in progress. Win32 supplies a fiber API. SunOS 4.x implemented "light-weight processes" or LWPs as fibers known as "green threads". SunOS 5.x and later, NetBSD 2.x, and DragonFly BSD implement LWPs as kernel threads instead.

Alternatively, a system call such as select under Unix and Unix-like operating systems can be used to check whether certain system calls will block, but this adds complexity to the runtime system.
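Java's NIO `Selector` offers an analogous readiness check. This sketch polls a freshly bound, non-blocking server socket; since no client has connected, the poll reports zero ready channels rather than blocking. The class name is invented for illustration.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;

// Sketch: readiness checking in Java NIO, analogous to Unix select().
// selectNow() reports which registered channels are ready without
// blocking, so a userspace scheduler can avoid blocking system calls.
public class ReadinessCheck {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(0));   // ephemeral local port
        server.configureBlocking(false);         // required before register()
        server.register(selector, SelectionKey.OP_ACCEPT);

        int ready = selector.selectNow();        // poll readiness, never block
        System.out.println("channels ready: " + ready); // 0: no client yet

        server.close();
        selector.close();
    }
}
```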

The use of kernel threads brings simplicity: the program does not need to manage scheduling, as the kernel handles all aspects of thread management. Nor does blocking pose a problem: if a thread blocks, the kernel can schedule another thread from the same process or from a different one, and no extra system calls are needed.

However, managing threads through the kernel has its own costs: creating or destroying a thread requires a switch between user mode and kernel mode, so programs that create many short-lived threads may suffer performance hits.

Hybrid schemes are available that strike a tradeoff between the two approaches.

Relationships between processes, threads, and fibers

The operating system creates a process for the purpose of running a program. Every process has at least one thread. On some operating systems, processes can have more than one thread. A thread can use fibers to implement cooperative multitasking to divide the thread's CPU time for multiple tasks. Generally, this is not done because threads are cheap, easy, and well implemented in modern operating systems.

Processes are used to run an instance of a program. Some programs, like word processors, are designed to have only one instance of themselves running at a time. Sometimes, such programs just open up more windows to accommodate multiple simultaneous uses. After all, you can go back and forth between five documents, but you can only edit one of them at any given instant.

Other programs like command shells maintain a state that you want to keep separate. Each time you open a command shell in Windows, the operating system creates a process for that shell window. The shell windows do not affect each other. Some operating systems support multiple users being logged in simultaneously. It is typical for dozens or even hundreds of people to be logged into some Unix systems. Other than the sluggishness of the computer, the individual users are (usually) blissfully unaware of each other. If Bob runs a program, the operating system creates a process for it. If Alice then runs the same program, the operating system creates another process to run Alice's instance of that program. So if Bob's instance of the program crashes, Alice's instance does not. In this way, processes protect users from failures being experienced by other users.

However, there are times when a single process needs to do multiple things concurrently. The quintessential example is a program with a graphical user interface (GUI). The program must repaint its GUI and respond to user interaction even if it is currently spell-checking a document or playing a song. For situations like these, threads are used.

Threads allow a program to do multiple things concurrently. Since the threads a program spawns share the same address space, one thread can modify data that is used by another thread. This is both a good and a bad thing. It is good because it facilitates easy communication between threads. It can be bad because a poorly written program may cause one thread to inadvertently overwrite data being used by another thread. The sharing of a single address space between multiple threads is one of the reasons that multithreaded programming is usually considered more difficult and error-prone than programming a single-threaded application.
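A minimal sketch of such interference: two threads increment the same field with no synchronization, so concurrent read-modify-write sequences can overwrite each other and updates are lost. The class and field names are invented for illustration.

```java
// Sketch: an unsynchronized shared counter. `counter++` is a
// read-modify-write, so two threads can read the same old value and
// one increment gets lost; the final total is usually below 200000.
public class LostUpdate {
    static int counter = 0;

    public static void main(String[] args) throws InterruptedException {
        Runnable inc = () -> {
            for (int i = 0; i < 100_000; i++) counter++; // not atomic
        };
        Thread a = new Thread(inc), b = new Thread(inc);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(counter); // often less than 200000: updates lost
    }
}
```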

There are other potential problems as well such as deadlocks, livelocks, and race conditions. However, all of these problems are concurrency issues and as such affect multi-process and multi-fiber models as well.

Threads are also used by web servers. When a user visits a web site, a web server will use a thread to serve the page to that user. If another user visits the site while the previous user is still being served, the web server can serve the second visitor by using a different thread. Thus, the second user does not have to wait for the first visitor to be served. This is very important because not all users have the same speed Internet connection. A slow user should not delay all other visitors from downloading a web page. For better performance, threads used by web servers and other Internet services are typically pooled and reused to eliminate even the small overhead associated with creating a thread.
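A pooled design can be sketched with java.util.concurrent's `ExecutorService`; the request loop below is a stand-in for accepting real network connections, and the class name is invented. The four pool threads are created once and reused across all eight requests.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Sketch: a fixed pool of worker threads serving requests, as a web
// server might. Threads are created once and reused, avoiding the
// per-request creation overhead; a slow request does not stall others.
public class PooledServer {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int request = 1; request <= 8; request++) {
            final int id = request;
            pool.submit(() -> {
                // Stand-in for serving one visitor's page.
                System.out.println("served request " + id + " on "
                        + Thread.currentThread().getName());
            });
        }
        pool.shutdown();                          // no new tasks accepted
        pool.awaitTermination(5, TimeUnit.SECONDS); // wait for all requests
    }
}
```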

Fibers were popular before threads were implemented by the kernels of operating systems. Historically, fibers can be thought of as a trial run at implementing the functionality of threads. There is little point in using fibers today because threads can do everything that fibers can do and threads are implemented well in modern operating systems. A case where fibers can still be helpful is when an operating system's limit on the number of threads per process is reached (for example, 2000-3000 threads); there, thread context switching, which involves a system call, becomes too expensive, and fibers can help.

Implementations

There are many different and incompatible implementations of threading. These can either be kernel-level or user-level implementations.

Kernel-level

  • NPTL (Native POSIX Thread Library) for Linux, from Red Hat. It is "native" in that each thread is scheduled directly by the kernel (a 1:1 model) rather than multiplexed in userspace.

User-level

  • Pthreads (POSIX threads) is the standard threading API rather than a single implementation; user-level libraries can implement it entirely in userspace.

Comparison between models

Multiprocess Multithreaded Fibers Example
No No No A program running on DOS. The program can only do one thing at a time.
No No Yes Windows 3.1 running on top of DOS. Every program is run in a single process, and so programs can corrupt each other's memory space. This happened often and caused the infamous General Protection Fault. A poorly written program could easily crash Windows 3.1 because there was only one process. Early versions of Mac OS also fall into this category.
No Yes No This case is used only in embedded systems and small real-time operating systems. Theoretically possible in a general purpose operating system, but no known examples. If a general purpose operating system supports threads, it almost certainly supports multiple processes.
No Yes Yes This case is used only in embedded systems and small real-time operating systems. Theoretically possible in a general purpose operating system, but no known examples.
Yes No No Most early implementations of Unix. The operating system could run more than one program at a time, and executing programs were protected from each other. If a program behaved badly, it could crash its process, ending that instance of the program; in most cases, the operating system and other programs were not disrupted. Sharing information between processes was nonetheless necessary, and doing so in this model is awkward and error-prone, involving techniques like shared memory. If a single program needed to perform multiple tasks asynchronously, a copy of the process had to be made using the expensive fork() function.
Yes Yes No AmigaOS and AROS. AmigaOS too could run more than one program at a time. Its microkernel, Exec, provides static priority-driven round-robin scheduling. Exec is compact, efficient, flexible, reliable, and expandable. AmigaOS was designed and marketed before the thread-versus-process dichotomy became a hot issue, and certainly long before the phrase "multi-threaded" became a household word. In fact, considering their respective timings, the introduction of AmigaOS quite likely had a substantial influence in making OS designers and application designers aware of the benefits of lightweight tasks/threads. In the Amiga scene, the terms "application" and "task" are often used interchangeably, because it is quite common in both AROS and AmigaOS to have only one task per running instance of a program. This is possible largely because of the efficient inter-process communication mechanisms used in AmigaOS/AROS. However, exec.library fully supports multi-threading, where a single application is free to have multiple tasks cooperating to achieve a common goal. Neither AmigaOS nor AROS supports the concept of a "process" as defined by Unix; the term "inter-process communications" is nonetheless used instead of "inter-task communications" for historical reasons, as the former phrase has existed since at least the mid-1960s. AmigaOS and AROS support two explicit (message passing and sigbits) and one implicit (semaphores) method of inter-process communication. AmigaOS has no fibers, though Exec is expandable and could be modified to support them. The Amiga operating system is also unusual in that it does not partition memory for applications, or even for the operating system itself. Instead, it maintains a free-memory list in which each chunk of free memory has certain attributes; there is no system-memory-in-use list.
If an application fails and has no cleanup routine, or if the programmer neglects to free all acquired memory, that memory is lost until the system is rebooted. Exec memory management supports both bank-switched memory and virtual memory. Libraries are also supported; they act like shared libraries in other OSes but do not support dynamic linking. This fact has advantages and disadvantages.
Yes No Yes Sun OS before Solaris. Sun OS is Sun Microsystems' version of Unix. Sun OS implemented "green threads" in order to allow a single process to asynchronously perform multiple tasks such as playing a sound, repainting a window, and responding to user events such as clicking the stop button. Although processes were pre-emptively scheduled, the "green threads" or fibers were co-operatively multitasked. Often this model was used before real threads were implemented. This model is still used in microcontrollers and embedded devices.
Yes Yes No This is the most common case for applications running on Windows NT, Windows 2000, Windows XP, Mac OS X, Linux, and other modern operating systems. Although each of these operating systems allows the programmer to implement fibers or use a fiber library, most programmers do not use fibers in their applications. The programs are multithreaded and run inside a multitasking operating system, but perform no user-level context switching.

On the typical home computer, most running processes have two or more threads. A few processes will have a single thread. Usually these processes are services running without user interaction. Typically there are no processes using fibers.

Yes Yes Yes Pretty much all operating systems after 1995 fall into this category. The use of threads to perform concurrent operations is the most common choice, although there are also multi-process and multi-fiber applications. Threads are used to enable a program to render its graphical user interface while waiting for input from the user or performing a task like spell checking.

Note that fibers can be implemented without operating system support, although some operating systems or libraries provide explicit support for them. For example, recent versions of Microsoft Windows support a fiber API for applications that want to gain performance improvements by managing scheduling themselves, instead of relying on the kernel scheduler (which may not be tuned for the application). Microsoft SQL Server 2000's user mode scheduler, running in fiber mode, is an example of doing this.

It is also worth noting that many software developers believe that in most cases attempts to use fibers actually decrease the performance of an application. There are several reasons for this. First, the use of fibers does not increase the percentage of CPU time that the operating system gives to a process. Second, in typical multithreaded or multifibered code there is contention for shared resources like variables. Programming constructs like semaphores and critical sections are used to solve problems that multithreading and multifibering create. Often these tools are implemented more efficiently in the operating system than in user libraries, particularly if those libraries are not written in assembly language. Third, as users install newer versions of an operating system, improvements may be made to the kernel scheduler, operating system provided semaphores, etc. User libraries do not see these benefits unless they call libraries provided by the operating system.

When fibers are used, it is typically on a computer configured as a server rather than a client.

Example of multithreaded code

This is an example of a simple multi-threaded program written in Java. The program calculates prime numbers until the user types the word "stop". Then the program prints how many prime numbers it found and exits. This example demonstrates how threads can access the same variable while working asynchronously. This example also demonstrates a simple "race condition". The thread printing prime numbers continues to do so for a short time after the user types "stop". Of course, this problem is easily corrected using standard programming techniques.

import java.io.*;

public class Example implements Runnable
{
   static Thread threadCalculate;
   static Thread threadListen;
   long totalPrimesFound = 0; // shared by both threads without synchronization (the deliberate race)
   
   public static void main(String[] args)
   {
       Example e = new Example();
       
       threadCalculate = new Thread(e);
       threadListen = new Thread(e);
       
       threadCalculate.start();
       threadListen.start();
   }
   
   public void run()
   {
       Thread currentThread = Thread.currentThread();
       
       if (currentThread == threadCalculate)
           calculatePrimes();
       else if (currentThread == threadListen)
           listenForStop();
   }
   
   public void calculatePrimes()
   {
       int n = 1;
       
       while (true)
       {
           n++;
           boolean isPrime = true;
           
           for (int i = 2; i < n; i++)
               if ((n / i) * i == n) // integer division: true exactly when i divides n
               {
                   isPrime = false;
                   break;
               }
           
           if (isPrime)
           {
               totalPrimesFound++;
               System.out.println(n);
           }
       }
   }
   
   public void listenForStop()
   {
       BufferedReader input = new BufferedReader(new InputStreamReader(System.in));
       String line = "";
       
       while (!line.equals("stop"))
       {
           try
           {
               line = input.readLine();
           }
           catch (IOException exception) {}
       }
       
       System.out.println("Found " + totalPrimesFound +
           " prime numbers before you said stop");
       System.exit(0);
   }
}

The spin lock article includes a C program using two threads that communicate through a global integer.
