Scheduling Criteria

In a multiprogramming system, several processes can reside in memory at the same time. Processes perform I/O operations in the course of their computation. Since I/O operations take much longer to complete than CPU instructions, multiprogramming systems allocate the CPU to another process whenever the running process invokes an I/O operation.

The short-term scheduler allocates a process to the CPU according to one of several scheduling algorithms. The main aim of scheduling is to improve system performance by keeping the CPU busy at all times.

Scheduling Criteria (criteria for performance evaluation of the scheduling strategy):

1.  Processor Utilization: This is the percentage of time that the processor is busy.

2.  Throughput: This is the measure of how much work is being performed. The scheduling policy should attempt to maximize the number of processes completed per unit of time.

3.  Turnaround Time: This is the interval between the submission of a process and its completion. It includes actual execution time plus time spent waiting for resources, including the processor.

4.  Response Time: This is the time from the submission of a process until the first response is produced.

5.  Waiting Time: This is the time a process waits in the ready queue to get the CPU. The sum of the periods a process spends in the ready queue is its total waiting time.

6.  Deadlines: When processes have completion deadlines, the scheduling policy should maximize the percentage of deadlines met.

7.  Fairness: Every process should get a fair share of CPU time; no process should suffer starvation.

8.  Enforcing Priorities: When processes are assigned priorities, the scheduling policy should favor higher-priority processes.

9.  Balancing Resources: The scheduling policy should keep the resources of the system busy. Processes that will underutilize stressed resources should be favored.
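As a rough illustration of the timing criteria above (the process set and FCFS policy here are illustrative assumptions, not from the text), the following sketch computes utilization, throughput, turnaround, waiting, and response times for a simple non-preemptive first-come-first-served schedule:

```python
from dataclasses import dataclass

@dataclass
class Process:
    name: str
    arrival: int   # time the process is submitted
    burst: int     # CPU time required

def fcfs_metrics(processes):
    """Run non-preemptive FCFS and compute the scheduling criteria."""
    procs = sorted(processes, key=lambda p: p.arrival)
    clock = 0
    metrics = {}
    for p in procs:
        start = max(clock, p.arrival)         # CPU may sit idle until arrival
        finish = start + p.burst
        turnaround = finish - p.arrival       # submission to completion
        waiting = turnaround - p.burst        # time spent in the ready queue
        response = start - p.arrival          # submission to first CPU slice
        metrics[p.name] = {"turnaround": turnaround,
                           "waiting": waiting,
                           "response": response}
        clock = finish
    busy = sum(p.burst for p in procs)
    utilization = busy / clock                # fraction of time CPU was busy
    throughput = len(procs) / clock           # processes completed per time unit
    return metrics, utilization, throughput

# Hypothetical workload: three processes arriving one time unit apart.
metrics, util, tput = fcfs_metrics([
    Process("P1", 0, 5),
    Process("P2", 1, 3),
    Process("P3", 2, 4),
])
```

Under FCFS, P2 waits 4 time units (it arrives at 1 but only starts at 5), and with no idle gaps the utilization works out to 100% for this workload.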

Non-Preemptive Scheduling vs Preemptive Scheduling:

  • Non-Preemptive: Non-preemptive algorithms are designed so that once the CPU is allocated to a process, the process does not release it until it completes its execution.

  • Preemptive: Preemptive algorithms allow the CPU to be taken away from a process during its execution. This ensures that the highest-priority ready process is always the one executing.
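The difference between the two can be seen in a tick-by-tick priority-scheduling sketch (the two-process workload and the scheduler itself are illustrative assumptions; lower priority number means higher priority):

```python
def simulate(processes, preemptive):
    """Tick-by-tick priority scheduler.
    processes: list of (name, arrival, burst, priority) tuples,
    where a lower priority number means higher priority.
    Returns a dict mapping each process name to its finish time."""
    remaining = {name: burst for name, arrival, burst, prio in processes}
    finish = {}
    running = None
    t = 0
    while remaining:
        ready = [p for p in processes if p[0] in remaining and p[1] <= t]
        # Preemptive: re-evaluate every tick.
        # Non-preemptive: re-evaluate only when the CPU is free.
        if ready and (preemptive or running not in remaining):
            running = min(ready, key=lambda p: p[3])[0]
        if running in remaining:
            remaining[running] -= 1
            if remaining[running] == 0:
                del remaining[running]
                finish[running] = t + 1
        t += 1
    return finish

# Hypothetical workload: P2 arrives later but has higher priority.
procs = [("P1", 0, 4, 2), ("P2", 1, 2, 1)]
non = simulate(procs, preemptive=False)
pre = simulate(procs, preemptive=True)
```

Without preemption, P1 holds the CPU to completion and the higher-priority P2 finishes at time 6; with preemption, P2 takes the CPU on arrival and finishes at time 3, at the cost of delaying P1.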