It will be noted that it takes time to save and restore a program's state and switch from one program to another (called dispatching). This action is performed by the kernel, and must execute quickly, because we want to spend most of our time running user programs, not switching between them.
The amount of time that is spent in the system state (running the kernel and performing tasks like switching between user programs) is called the system overhead, and should typically be below 10%. Too much time spent performing system tasks in preference to running user programs will result in poor performance for user programs, which will appear to run very slowly.
This switching between user programs is done by part of the kernel. To switch from one program to another requires:
- a regular timed interrupt event (provided by a clock)
- saving the interrupted program's state and data
- restoring the next program's state and data
- running that program until the next timed interrupt occurs
When the processor is switched from one process to another, the state (processor registers and associated data) must be saved, because at some later time the process will be restarted and must continue as though it had never been interrupted. Once this state has been saved, the next waiting process is activated. This involves loading the processor registers and memory with all the previously saved data and restarting the process at the instruction that was about to be executed when it was last interrupted.
The process of switching from one process to another is called context switching. A time period that a process runs for before being context switched is called a time slice or quantum period.
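The save-and-restore step can be sketched in a few lines of Python. This is only an illustration: the `Context` fields (a program counter and a small register dictionary) stand in for the real processor state, and all names are made up for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Context:
    pc: int = 0                                   # program counter
    registers: dict = field(default_factory=dict) # general registers

@dataclass
class Process:
    name: str
    context: Context = field(default_factory=Context)

def context_switch(cpu: Context, old: Process, new: Process) -> None:
    # Save the interrupted process's state so it can resume later...
    old.context = Context(cpu.pc, dict(cpu.registers))
    # ...then restore the next process's saved state into the processor.
    cpu.pc = new.context.pc
    cpu.registers = dict(new.context.registers)

cpu = Context(pc=42, registers={"r0": 7})   # process A is running
a, b = Process("A"), Process("B")
b.context = Context(pc=100, registers={"r0": 1})  # B's saved state

context_switch(cpu, a, b)
print(cpu.pc)        # → 100  (processor resumes B where it left off)
print(a.context.pc)  # → 42   (A's state is saved for a later restart)
```

When A is scheduled again, the same routine restores `a.context` into the processor and A continues as though it had never been interrupted.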
Deciding which process should run next is called scheduling, and can be done in a wide variety of ways.
Co-operative schedulers are generally very simple, as the processes are arranged in a ROUND ROBIN queue. When a running process gives up the processor, it goes to the end of the queue. The process at the front of the queue is then run, and all processes in the queue move up one place. This provides a measure of fairness, but does not prevent one process from monopolizing the system (by never giving up the processor).
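This queue behaviour can be sketched with a double-ended queue. In this illustrative sketch, processes are just names, and `yield_cpu()` models a running process voluntarily giving itself up:

```python
from collections import deque

class CooperativeScheduler:
    def __init__(self, processes):
        self.queue = deque(processes)

    def current(self):
        # The process at the front of the queue is the one running.
        return self.queue[0]

    def yield_cpu(self):
        # The running process goes to the end of the queue,
        # and every other process moves up one place.
        self.queue.rotate(-1)
        return self.current()

sched = CooperativeScheduler(["A", "B", "C"])
print(sched.current())    # → A  (A runs first)
print(sched.yield_cpu())  # → B  (A yields; B runs)
print(sched.yield_cpu())  # → C  (B yields; C runs)
print(sched.yield_cpu())  # → A  (C yields; back to A)
```

Note that if A never calls `yield_cpu()`, B and C never run: this is exactly the monopolization problem described above.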
Pre-emptive scheduling uses a real-time clock that generates interrupts at regular intervals (say every 1/100th of a second). Each time an interrupt occurs, the processor is switched to another task. Systems employing this type of scheduling generally assign priorities to each process, so that some may be executed more frequently than others.
First in First Out Scheduling
A FIFO queue is a list of available processes awaiting execution by the processor. New processes arrive and are placed at the end of the queue. The process at the start of the queue is assigned the processor when it next becomes available, and all other processes move up one slot in the queue.
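A minimal sketch of this ready queue, using made-up process names:

```python
from collections import deque

ready = deque()
for p in ["P1", "P2", "P3"]:
    ready.append(p)          # new arrivals are placed at the end

order = []
while ready:
    # The process at the start of the queue gets the processor next,
    # and runs to completion before the next one is taken.
    order.append(ready.popleft())

print(order)   # → ['P1', 'P2', 'P3']  (completion in arrival order)
```

The key property is that processes complete strictly in arrival order, which is also the scheme's weakness, as the next section explains.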
Round Robin Scheduling
One of the problems with the FIFO approach is that a process may take a very long time to complete, and thus holds up all the other waiting processes in the queue. To prevent this from happening, we employ a pre-emptive scheduler that lets each process run for a little while. When the time-slice is up, the running process is interrupted and placed at the rear of the queue. The next process at the front of the queue is then started.
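A small simulation makes the difference from FIFO visible. Here each process needs a made-up amount of CPU time, the quantum is one time unit, and a process that still has work left when its slice expires rejoins the rear of the queue:

```python
from collections import deque

def round_robin(workloads, quantum):
    # workloads: {name: total CPU time needed}; order = arrival order.
    queue = deque(workloads.items())
    schedule = []
    while queue:
        name, remaining = queue.popleft()   # front of the queue runs
        run = min(quantum, remaining)       # run for at most one slice
        schedule.append((name, run))
        remaining -= run
        if remaining > 0:
            queue.append((name, remaining)) # pre-empted: rear of queue
    return schedule

print(round_robin({"A": 3, "B": 1, "C": 2}, quantum=1))
# → [('A', 1), ('B', 1), ('C', 1), ('A', 1), ('C', 1), ('A', 1)]
```

Under FIFO, B would wait for all of A's work to finish; with round robin, the short process B completes after only two time units even though the long process A arrived first.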