What are threads (user/kernel)?
Threads are "lightweight processes" (LWPs). The idea is that a process has five fundamental parts: code ("text"), data (VM), stack, file I/O, and signal tables. "Heavy-weight processes" (HWPs) have a significant amount of overhead when switching: all the tables have to be flushed from the processor for each task switch. Also, the only way HWPs can share information is through pipes and "shared memory". If an HWP spawns a child HWP using fork(), the only part that is shared is the text.
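To illustrate that last point, here is a minimal sketch (not from the original article) showing that a child created with fork() gets its own copy of the data segment, so a change made in the child is invisible to the parent:

    /* A fork()ed child modifies its private copy of the data segment;
     * the parent never sees the change. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int counter = 0;                 /* lives in the data segment */

    int main(void)
    {
        pid_t pid = fork();

        if (pid < 0) {
            perror("fork");
            exit(1);
        }
        if (pid == 0) {              /* child: modifies its own copy */
            counter = 42;
            printf("child:  counter = %d\n", counter);
            exit(0);
        }
        wait(NULL);                  /* parent: still sees the old value */
        printf("parent: counter = %d\n", counter);
        return 0;
    }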
Threads reduce overhead by sharing the fundamental parts. By sharing these parts, switching between threads is much faster and more efficient. Also, sharing information is not so "difficult" anymore: everything can be shared. There are two types of threads: user-space and kernel-space.
User-space threads avoid the kernel and manage the tables themselves. Often this is called "cooperative multitasking", where the task defines a set of routines that get "switched to" by manipulating the stack pointer. Typically each thread gives up the CPU by calling an explicit switch, sending a signal, or doing an operation that involves the switcher. Also, a timer signal can force switches. User threads typically can switch faster than kernel threads [however, Linux kernel threads' switching is actually pretty close in performance].
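As a rough illustration of cooperative switching, here is a minimal sketch using the POSIX ucontext API (only one of several ways a user-space thread package could be built; the stack sizes and two-routine layout are illustrative choices, not part of any particular library). Each routine explicitly gives up the CPU by calling swapcontext():

    /* Two cooperative "threads" that hand the CPU back and forth. */
    #include <stdio.h>
    #include <ucontext.h>

    static ucontext_t main_ctx, ctx_a, ctx_b;

    static void thread_a(void)
    {
        for (int i = 0; i < 3; i++) {
            printf("thread A, step %d\n", i);
            swapcontext(&ctx_a, &ctx_b);    /* give up the CPU to B */
        }
    }

    static void thread_b(void)
    {
        for (int i = 0; i < 3; i++) {
            printf("thread B, step %d\n", i);
            swapcontext(&ctx_b, &ctx_a);    /* give up the CPU to A */
        }
    }

    int main(void)
    {
        static char stack_a[16384], stack_b[16384];

        getcontext(&ctx_a);
        ctx_a.uc_stack.ss_sp = stack_a;
        ctx_a.uc_stack.ss_size = sizeof(stack_a);
        ctx_a.uc_link = &main_ctx;          /* return here when A finishes */
        makecontext(&ctx_a, thread_a, 0);

        getcontext(&ctx_b);
        ctx_b.uc_stack.ss_sp = stack_b;
        ctx_b.uc_stack.ss_size = sizeof(stack_b);
        ctx_b.uc_link = &main_ctx;
        makecontext(&ctx_b, thread_b, 0);

        swapcontext(&main_ctx, &ctx_a);     /* start with thread A */
        printf("back in main\n");
        return 0;
    }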
Disadvantages. User-space threads have the problem that a single thread can monopolize the timeslice, thus starving the other threads within the task. Also, they have no way of taking advantage of SMPs (Symmetric MultiProcessor systems, e.g. dual-/quad-Pentiums). Lastly, when a thread becomes blocked on I/O, all the other threads within the task lose the timeslice as well.
Solutions/work-arounds. Some user-thread libraries have addressed these problems with several work-arounds. First, timeslice monopolization can be controlled with an external monitor that uses its own clock tick. Second, some SMPs can support user-space multithreading by firing up tasks on specified CPUs and then starting the threads from there [this form of SMP threading seems tenuous, at best]. Third, some libraries solve the I/O blocking problem with special wrappers over system calls, or the task can be written for nonblocking I/O.
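The third work-around, a wrapper over a system call, might look roughly like the following sketch. The yield_to_scheduler() helper is hypothetical and merely stands in for whatever switching primitive a particular user-thread library provides:

    /* A non-blocking read() wrapper: instead of stalling the whole
     * task, a read that would block yields to another user thread. */
    #include <errno.h>
    #include <fcntl.h>
    #include <unistd.h>

    static void yield_to_scheduler(void)
    {
        /* placeholder for the library's thread-switching primitive */
    }

    ssize_t threaded_read(int fd, void *buf, size_t count)
    {
        /* ensure a read that cannot complete returns EAGAIN
         * instead of blocking the whole task */
        fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);

        for (;;) {
            ssize_t n = read(fd, buf, count);
            if (n >= 0 || errno != EAGAIN)
                return n;             /* data, end-of-file, or a real error */
            yield_to_scheduler();     /* would block: let another thread run */
        }
    }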
Kernel-space threads are often implemented in the kernel using several tables (each task gets a table of threads). In this case, the kernel schedules each thread within the timeslice of each process. There is a little more overhead with mode switching from user->kernel->user and loading of larger contexts, but initial performance measures indicate a negligible increase in time.
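For comparison, here is a minimal pthreads sketch (assuming a Linux kernel-threads implementation such as LinuxThreads). The two threads are scheduled by the kernel, and because they share the data segment, both see the same global counter; a mutex keeps the increments consistent:

    /* Two kernel-scheduled threads updating shared data. */
    #include <pthread.h>
    #include <stdio.h>

    static int counter = 0;                      /* shared by every thread */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);
            counter++;                           /* shared, unlike fork() */
            pthread_mutex_unlock(&lock);
        }
        return arg;
    }

    int main(void)
    {
        pthread_t t1, t2;

        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);

        pthread_join(t1, NULL);
        pthread_join(t2, NULL);

        printf("counter = %d\n", counter);       /* 200000: both seen */
        return 0;
    }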
Advantages. Since the clocktick determines the switching times, a thread is less likely to hog the timeslice from the other threads within the task. Also, I/O blocking is not a problem. Lastly, if properly coded, the process can automatically take advantage of SMPs and will run incrementally faster with each added CPU.
Combination
Some implementations support both user- and kernel-space threads. This gives the advantages of each to the running task. However, since Linux's kernel-space threads perform nearly as well as user-space threads, the only advantage of using user threads would be the cooperative multitasking.