Excellent scalability, multimedia improvements, greater responsiveness, the XFS filesystem, a unified device model and lots of other untold features: this is the 2.6 kernel for you. Of these, /greater responsiveness/ is very attractive, especially for desktops. This trait is all because of the *hOt & nOvel* scheduler, which has lots of common sense (uncommon in others). That is to say, the kernel is now #TRULY# preemptible. "Truly", since older kernels had only a limited sort of preemption. Reach out to your library for a Linux kernel book. Keywords: realtime, nice and, of course, preemption.
Preemption: in a system with several tasks, it is the move made by the scheduler to force a task off the CPU due to a change in task priorities. Bookworms, please excuse and correct me if I'm wrong. Priorities are altered by the occurrence of events; the availability of a resource or a message in a queue can be an event.
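As a sketch of that idea: when an event wakes a task, the scheduler can compare priorities to decide whether to force the running task out. The struct and function below are illustrative, not the kernel's real `struct task_struct` or API; the one Linux convention kept here is that a numerically lower priority value means a more important task.

```c
#include <stdbool.h>

/* Hypothetical task descriptor -- the field name is illustrative.
 * As in Linux, a lower prio value means a more important task. */
struct task {
    int prio;
};

/* When an event (a resource becoming available, a message arriving)
 * wakes a task, decide whether to preempt the running one. */
static bool should_preempt(const struct task *running,
                           const struct task *woken)
{
    /* Preempt only if the woken task outranks the running task. */
    return woken->prio < running->prio;
}
```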
You: What is a context switch? How is it related to preemption?
Me: Can’t Say. :: Do I Blink? ::
Tasks are assigned timeslices based on priority; higher-priority tasks run for longer than lower-priority ones. Static priority quantifies how important a task is. Dynamic priority is determined by the scheduler, which monitors a task's behavior during execution.
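A minimal sketch of the priority-to-timeslice idea, assuming a simple linear mapping. The priority range and millisecond constants below are illustrative, not the kernel's exact values; the 2.6 scheduler uses a similar linear scale.

```c
/* Illustrative mapping from static priority to a base timeslice.
 * Assumed priority range and slice lengths -- not kernel constants. */
#define PRIO_MIN  100   /* highest priority (lowest value) */
#define PRIO_MAX  139   /* lowest priority (highest value) */
#define SLICE_MIN 5     /* ms, for the least important task  */
#define SLICE_MAX 800   /* ms, for the most important task   */

static int base_timeslice(int static_prio)
{
    /* Higher-priority (numerically lower) tasks get longer slices. */
    return SLICE_MIN + (PRIO_MAX - static_prio) * (SLICE_MAX - SLICE_MIN)
                           / (PRIO_MAX - PRIO_MIN);
}
```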
The crux is that the scheduler categorizes tasks as I/O-bound or CPU-bound. I/O-bound tasks get a dynamic-priority boost (their priority *value* is lowered, and lower values mean higher priority) and thus a larger timeslice. CPU-bound tasks, in contrast, get a priority penalty and so a smaller timeslice. This prevents CPU-bound tasks from monopolizing the processor and allows input and output to proceed smoothly. The scheduler constantly checks each task's behavior and readjusts the dynamic priorities accordingly.
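One way to picture the dynamic-priority adjustment, assuming the scheduler summarizes behavior as the percentage of recent time a task spent sleeping (I/O-bound tasks sleep a lot while waiting on devices). The function, names and the +/-5 bonus range are illustrative, only loosely modeled on the 2.6 scheduler's interactivity bonus.

```c
/* Sketch: derive dynamic priority from observed behavior.
 * sleep_ratio_pct (0..100) is an assumed measure: the share of its
 * recent life the task spent sleeping.  Sleepy (I/O-bound) tasks
 * earn a bonus, CPU burners earn a penalty. */
#define MAX_BONUS 5

static int dynamic_prio(int static_prio, int sleep_ratio_pct)
{
    /* Map 0..100% sleep time onto a -MAX_BONUS..+MAX_BONUS bonus. */
    int bonus = (sleep_ratio_pct * 2 * MAX_BONUS) / 100 - MAX_BONUS;
    /* Subtract the bonus: a lower value means a higher priority. */
    return static_prio - bonus;
}
```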
Not just this: the scheduler keeps track of all these timeslices with two lists of tasks. The first list contains tasks that have yet to use up their timeslice; the second contains those that have. When a task uses up its timeslice, the scheduler calculates a new one from the task's static and dynamic priorities, then inserts the task into the second list. When the first list becomes empty, the second list takes its place, and vice versa. This lets the scheduler keep recalculating timeslices with minimal computational overhead. The balance between throughput and latency is achieved by adjusting timeslices.
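The two-list trick above (the 2.6 kernel calls these the active and expired arrays) can be sketched like this. The struct here just counts runnable tasks per list instead of holding real per-priority queues; names are illustrative, not the kernel's.

```c
/* Minimal sketch of the active/expired bookkeeping.  Each "array"
 * is reduced to a count of the runnable tasks it holds. */
struct runqueue {
    int arrays[2];   /* task counts for the two lists        */
    int active;      /* index of the list with time remaining */
};

/* A task exhausts its timeslice: move it to the expired list. */
static void expire_task(struct runqueue *rq)
{
    rq->arrays[rq->active]--;
    rq->arrays[1 - rq->active]++;
}

/* When the active list drains, the expired list becomes active.
 * In the kernel this is an O(1) pointer swap, not a re-sort. */
static void maybe_swap(struct runqueue *rq)
{
    if (rq->arrays[rq->active] == 0)
        rq->active = 1 - rq->active;
}
```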
Throughput: The amount of data that can be transferred from one location to another.
Latency: The time it takes for a process to respond to input.
An I/O-bound task needs good throughput if it is to accomplish its chores quickly. That's why the scheduler gives I/O-bound tasks large timeslices: they have to make and respond to I/O requests, and shouldn't have to wait as long for other tasks to execute. Since nearly all tasks can benefit from superior throughput, why not give all tasks a large timeslice? If a scheduler were to do this, latency would suffer: because each task holds the CPU for a long time, other tasks can't respond quickly to user input.
A good balance between throughput and latency leads to a responsive user eXPerience with sufficient throughput. That's what you and I want.
It's really great to know all this stuff. That's OpenSource!