Section 1


Last updated

6 years ago

Date created

Mar 1, 2020

Cards (103)

Section 1

(50 cards)

Systems and Models

Front

A system is the part of the real world under study. It is composed of a set of entities interacting among themselves and with the environment. System behavior depends on input data and on actions from the environment. A system has:
- Structure
- Behavior
The model of a system is simpler than the real system in its structure and behavior, but it should be equivalent to the system.
MODELS
A model is an abstract representation of a system. A user can:
- Manipulate the model by supplying it with a set of inputs
- Observe its behavior or output
- Predict the behavior of the real system by analyzing the behavior of the model
Behavior depends on:
- Time
- Input data
- Events generated by the environment

Back

Process Behavior

Front

- A process requests processor and I/O servicing at various times and usually has to wait for these services.
- The OS maintains a waiting line, or queue, of processes for every one of these services.
- At some point in time, a process will be in a queue waiting for processor service. At some other point in time, the process will be in a different queue waiting for I/O service.
- When a process completes all service requests, it terminates and exits the system.

Back

Processes

Front

- Two important types of dynamic entities in a computer system are processes and threads.
- Dynamic entities only exist at execution time, and their behavior exhibits state changes.
- A process is a program in memory that is either executing or waiting, for example for the processor, I/O, or another service.
- A process is the fundamental computational unit that requests various types of services in the computer system.
- Processes and threads are handled by the process manager module of the operating system.
- Several processes can be active at a time (in an OS with multiprogramming).
- Only one process is actually running at any instant of time (in a single-processor system).
- The OS switches the processor from one process to another.
- Each process can have several threads running within it.

Back

Abstraction

Front

The most important concept in analysis and design: a high-level description of a collection of objects in which only the relevant properties and features of the objects are included.

Back

Arrival Rate vs Service Rate

Front

Arrival Rate < Service Rate
- Resource can handle requests
- Queuing time will be small
- If the arrival rate is significantly less, the resource may be under-utilized
Arrival Rate = Service Rate
- Resource fully utilized
- Resource can generally handle requests
- But requests will often be queued
Arrival Rate > Service Rate
- Resource unable to handle all requests
- Queuing time may be large
- If the difference is large, the resource will be over-utilized
Both rates will change from one instant to another. The queuing system and resource scheduling handle momentary imbalances; long-term imbalances indicate a system with a major resource-utilization problem.
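The three cases above reduce to the standard utilization ratio ρ = λ/μ. A minimal sketch (the function name and rate values are illustrative, not from the deck):

```python
# Sketch: compare arrival rate (lambda) and service rate (mu) to estimate
# the utilization of a single resource.

def utilization(arrival_rate: float, service_rate: float) -> float:
    """rho = lambda / mu; rho < 1 means the resource keeps up on average."""
    return arrival_rate / service_rate

rho = utilization(arrival_rate=1.5, service_rate=2.0)
print(rho)                           # 0.75 -> stable, some queuing
print(utilization(2.5, 2.0) > 1.0)   # True -> over-utilized, queue grows
```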

Back

Threads in a Process

Front

- The operating system manages processes and threads in a multiprogramming environment.
- Execution of threads is more efficient than the execution of processes.
- The modern view of threads represents them as the active elements of computation within a process.
- All threads that belong to a process share the state and resources of the process.

Back

Arrival Interval & Rate Service Time & Rate

Front

Arrival Interval is the time between two resource requests arriving.
Arrival Rate = 1 / Arrival Interval
- How often requests for a resource arrive
- Denoted by λ
Service Time is the time to actually perform a request
- e.g. 500 msec/request
Service Rate = 1 / Service Time
- Denoted by μ
- e.g. 2 requests/second
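The reciprocal relations above can be sketched directly, using the card's 500 msec example (the function name and the arrival interval are illustrative):

```python
# Sketch: a rate is the reciprocal of the corresponding interval or time.

def rate_per_second(time_seconds: float) -> float:
    return 1.0 / time_seconds

mu = rate_per_second(0.5)    # service time of 500 msec per request
print(mu)                    # 2.0 requests/second, matching the card
lam = rate_per_second(0.25)  # illustrative: a request every 250 msec
print(lam)                   # 4.0 arrivals/second
```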

Back

Degree of Multiprogramming

Front

The NUMBER of processes that the system supports in memory

Back


Thread Attributes

Front

- Execution state
- Context, including its own program counter within the process
- Execution stack
- Local memory block (for local variables)
- Reference to the parent process, to access the shared resources allocated to the process

Back

Performance Measures

Front

Measures that indicate how well (or how badly) the system being studied carries out its functions, with respect to some aspect. In studying a system, usually several performance measures are necessary.
Studying performance:
- Measurements on the real system
- Simulation models
- Analytical models
Performance study: define a set of relevant objectives and goals, then decide on:
- The performance metrics
- The system parameters
- The system factors
- The system workloads

Back

Deterministic vs Stochastic

Front

Deterministic models display completely predictable behavior (with 100% certainty).
Stochastic models display some level of uncertainty in their behavior. This random behavior is implemented with random variables:
- One or more (random) attributes change value according to some probability distribution.
- e.g. random variables in the simple batch OS model: inter-arrival time of jobs, CPU service time of jobs.
Random variables: most workload parameters change randomly and are represented by a probability distribution. For example, in a model of a computer system, the inter-arrival intervals of jobs usually follow an exponential distribution. Other probability distributions used in the models discussed in this book are: normal, uniform, and Poisson. In the simulation models considered, most workload parameters are implemented as random variables.
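As a sketch of a stochastic workload parameter, the exponentially distributed inter-arrival times described above can be generated with Python's `random.expovariate` (the mean and the seed are illustrative values, not from the deck):

```python
# Sketch: exponential inter-arrival times for a stochastic model.
import random

random.seed(42)                      # fixed seed so the run is repeatable
mean_interarrival = 2.0              # seconds; rate lambda = 1 / mean
arrivals = [random.expovariate(1.0 / mean_interarrival) for _ in range(5)]
print([round(t, 3) for t in arrivals])
print(all(t >= 0 for t in arrivals))  # exponential samples are non-negative
```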

Back

Features Needed for Multiprogramming

Front

- I/O routines supplied by the system
- Memory management
- Processor scheduling
- Allocation of devices

Back

Service Requests

Front

- A process will require (demand) several processor and I/O requests.
- The total processor request of a process is divided into shorter service periods: each one is known as a CPU burst.
- The total I/O request is divided into shorter service periods called I/O bursts.
- In its normal behavior, a process alternates between a CPU burst and an I/O burst.

Back

Continuous and Discrete Models

Front

Continuous models change their state continuously with time; mathematical notation is used to completely define the behavior (for example, a free-falling object). Discrete models change their state only at discrete instants (for example, the arrival of a job in the simple batch OS model).

Back

Workload on a System

Front

Performance measures depend on the workload of the system. The workload for a system can be characterized by another series of measures, which are made on the input to the system. Errors in characterizing the workload may have serious consequences.
WORKLOAD PARAMETERS
- Inter-arrival time
- Task size
- I/O request rate
- I/O service rate
- Memory request

Back

Interrupts and Context Switches

Front

A context switch occurs when:
- The executing process completes its current CPU burst and needs an I/O operation, such as reading data from a file.
- The executing process is interrupted by a timer under OS control. The process is returned to its ready state in the wait line for CPU service, the READY queue.
- A HIGH-PRIORITY process arrives into the ready queue; the OS interrupts the running process if it has a lower priority.

Back

Categories of Operating Systems

Front

- BATCH systems, in which a set of jobs is submitted in sequence for processing.
- INTERACTIVE systems, which support computing for on-line users. The most common type of operating system that supports interactive computing is TIME-SHARING, a multi-user system.
- REAL-TIME systems, which support application programs with very tight timing constraints.
- HYBRID systems, which support batch and interactive computing.
[Figure: a time-sharing system]

Back

System Resources Types

Front

Active resources: CPU, I/O devices (they provide a service)
Passive resources: memory (they do not provide a service)

Back

Solutions to Mathematical Models

Front

- Analytic: the solutions are expressions that define the behavior of the model.
- Numeric: mathematical techniques derive values for the model behavior within given intervals. Deterministic and stochastic models are solved with numerical techniques.

Back

Process Descriptor

Front

- Also called the process control block (PCB)
- A data structure in which the OS maintains data about a process; each process is represented by its own PCB
Typical fields:
- Process name
- Process state
- Program counter
- CPU registers
- I/O status
- Memory information
- Accounting information
- Scheduler information (priority, queues, ...)
- List of resources

Back

Thread Descriptor

Front

A data structure used by the OS to store all the relevant data of a thread. It contains the following fields:
- Thread identifier
- Execution state of the thread
- Process owner of the thread
- List of related threads
- Execution stack
- Thread priority
- Thread-specific resources
Some of the fields of a thread descriptor also appear as part of the corresponding process descriptor.

Back

Abstract Views

Front

- The overall structure of the operating system is divided into various software components using a top-down (layered) approach.
- The top layer provides the easiest interface to the human operators and users interacting with the system.
- Any layer uses the services or functions provided by the next lower layer.
External views:
- As a user interface to the computer system
- As a layer of software on top of the hardware
Internal view:
- Resource manager: it controls and manages the CPU, memory, I/O devices, etc.

Back

Performance: External/Internal Goals

Front

EXTERNAL: goals that can be measured without looking at the internals of a system. Examples:
- Maximize work performed (throughput)
- Minimize response time
- Fairness in scheduling processes
- Goals often conflict
INTERNAL: performance goals for sub-systems that are internal to the computer(s). Examples:
- Maximize CPU utilization (% of time the CPU is busy)
- Maximize disk utilization (% of time the disk is busy)
- Minimize disk access time (time it takes to perform a disk request)

Back

Operating Systems Interfaces

Front

1. Graphics GUI (window oriented)
2. Command level (shell); at login time, the shell starts executing
3. System call interface, invoked from user programs
Layered structure:
- Users (top layer)
- Application User Interface (AUI): shell, commands, application programs
- Application Program Interface (API): libraries, system calls
- OS kernel

Back

Hardware Interrupts

Front

- A hardware component (e.g. an I/O device) sends an interrupt request signal to the CPU.
- This causes a temporary stop of the normal execution of a program. The CPU then starts to execute a special function called an interrupt service routine (ISR) that handles the interrupt.
- When the ISR is complete, the CPU can resume execution of the program that was interrupted.

Back

Internal View of an Operating System

Front

- The system call interface separates the kernel from the application layer; the kernel is located directly above the hardware.
- The kernel is the core, and most critical part, of the operating system and needs to be always resident in memory.
- Detailed knowledge of the different components, including these lower-level components of the operating system, corresponds to an internal view of the system.

Back

Types of Threads

Front

User-level threads (ULT)
- Thread management is carried out at the application level without kernel intervention.
- The application carries out thread management using a thread library, such as the POSIX Pthreads library.
- Using this library, the application invokes the appropriate functions for the various thread management tasks, such as creating, suspending, and terminating threads.
Kernel-level threads (KLT)
- The thread management tasks are carried out by the kernel.
- A process that needs these thread-handling services has to use the API to the kernel thread facility.
- One advantage of kernel-level threads is that the process will not be blocked if one of its threads becomes blocked. Another advantage is the possibility of scheduling multiple threads on multiple processors.

Back

System Parameters

Front

- System memory
- Processor speed
- Number and type of processors
- Degree of multiprogramming
- Length of time slice
- Number and type of I/O ports

Back

Process Creation

Front

- On a typical operating system, a process is created:
-- In response to a new job arriving
-- When a new user carries out a login
-- When a currently executing process creates a new process
- A process is created if there is sufficient memory available; creation of the process waits if other resources it needs are not available.

Back

Processes and Multiprogramming

Front

- With multiprogramming, several processes can be active at a time.
- Only one process is actually running at any instant of time (if there is only a single CPU).
- The OS rapidly switches the CPU from one process to another (context switching).
- The selection of which process to run next is made by the scheduler.

Back

Process States

Front

State diagram:
- Represents the various states of a process and the possible transitions in the behavior of a process.
- Each state is indicated by a rounded oval, and the arrows connecting the states are the transitions from one state to the next.
- The small black circle denotes that the state it points to is the initial state.
- Similarly, the state before the small hollow circle is the last state in the behavior of a process.
- Normally a process spends a short finite time in every state.
States:
- Created
- Ready (waiting in the ready queue for a processor)
- Running (executing)
- Blocked (suspended and waiting for I/O or another event)
- I/O processing
- Waiting for passive resources
- Interrupted by the OS
- Terminated

Back

What is an OS?

Front

- A large and complex software component for the operation and control of the computer system
- Acts as an intermediary between a user and the computer system
- Provider of services to user programs
- Resource manager

Back

System Call Interface

Front

- Also known as the Programming Interface
- Programs use the PI to request that the OS perform some operation

Back

Functional Components of an OS

Front

The most important components of an operating system are:
- Process manager
- Memory manager
- Resource manager
- File manager
- Device manager

Back

Two general types of software components

Front

SYSTEM SOFTWARE (operating system, assemblers, loaders, linkers, compilers, editors, ...)
- Controls the activities and functions of various hardware components, and provides programming tools, abstractions, and other utilities to monitor the state of the computer system.
- Forms the environment for programmers to develop and execute programs (application software).
- Three types of users: system programmers, application programmers, and end-users.
- System program examples: web browser, email, utilities.
APPLICATION SOFTWARE (all user-oriented programs)
- Programs that solve specific problems for the users and execute under control of the operating system.

Back

A Thread is:

Front

- A lightweight process
- A single execution sequence of a process
- A component of a process
- Shares code and the resources allocated to the process with other threads
- Modern operating systems support computational units known as threads, or lightweight processes.
- A thread is a dynamic component of a process.
- Several threads are usually created within a single process; these threads share part of the program code and the resources of the process.
- Most modern operating systems support multiple processes and multiple threads of execution within a single process; this feature is called multi-threading.

Back

Jobs vs Processes

Front

A job is a unit of work submitted by a user to the operating system. A typical job consists of:
- A sequence of commands to the operating system
- A program, either in a source language or in binary form
- A set of input data used by the program when it executes
A process, by contrast, is an execution instance of a program.

Back

Multiprogramming

Front

- Process management is one of the major functions of the operating system; it involves creating processes and controlling their execution.
- In most operating systems, several processes are stored in memory at the same time and the operating system (OS) manages the sharing of the CPU and other resources among the various processes.
- This facility of the operating system is called multiprogramming.
- One requirement of multiprogramming is that the OS must allocate the processor(s) and other resources to the various processes in such a way that these ACTIVE RESOURCES are kept BUSY for the longest period possible.
- If there is only one processor, then only one process can be in execution at any given time.
- The other processes are ready, waiting for processor service. Processes also request access to passive resources, such as memory.

Back

Types of Models

Front

- Physical models (scale models)
- Graphical models
- Mathematical models: the most flexible, and the ones studied here
Modeling and simulation are used to study the different components and various aspects of an operating system.

Back

History of Operating Systems

Front

First generation
- No operating system; bare hardware, machine language
Second generation
- Batch systems; assemblers, linkers, loaders, compilers
- Batch systems with automatic job sequencing
Third generation
- OS for complete families of computers (OS/360)
- Batch with multiprogramming
- Spooling
- Time-sharing (MULTICS, UNIX, ...)
Fourth generation
- Network and distributed operating systems

Back

The Shell

Front

A text-oriented program that handles user interaction with the system:
- A command-line interpreter
- The most common shell on Linux is 'bash'
- On Windows, the command window

Back

Multi-Programming (2)

Front

- In most computer systems, the I/O controllers enable the system to overlap device I/O operation with processor (CPU) operation.
- This results in a higher utilization of the system facilities.
- Several processes can be in memory at a time: one receiving CPU service, another receiving I/O service, and the others waiting.

Back

Thread States

Front

- Threads have their own execution state, context, execution stack, and local memory (for local variables), in addition to sharing the resources allocated to the process.
- When a process is created, only one thread is created, called the base thread; other threads of the process may then be created by the process or by the base thread.
- SIMPLE PROCESS: a process with a single thread; it has only one thread of execution.

Back

Context Switch

Front

- When the OS switches the processor to another process, it must save the context of the current process and load the (saved) context of the new process.
- Context switch time is short, and it is overhead time.
- Context switch time depends on hardware support.
A context switch occurs at any of these times:
- The executing process completes its current CPU BURST. This is the normal case in simple batch systems.
- The process is INTERRUPTED because its time slice has expired. This is a normal case in time-sharing systems.
- The process is INTERRUPTED for a higher-priority process requesting CPU service.

Back

Services Provided by the OS

Front

- Process control: execution, scheduling, etc.
- Communication between processes
- File manipulation
- Device manipulation
- Information maintenance
- Memory management

Back

Simulation Models and Results

Front

A simulation model is a mathematical model implemented with a general-purpose programming language or a simulation language. A simulation run is an experiment carried out with the simulation model over some observation period to study the behavior of the model.
RESULTS
Every simulation run produces two types of output:
- Trace: the sequence of events that occur during the simulation period
- Performance measures: summary statistics about the simulation

Back

Interrupt

Front

- An application requests that the operating system perform an operation by using a software interrupt.
- Interrupts are requests to the operating system to perform some action.
- An operating system is interrupt driven.

Back

Total Processor Demand (CPU Service) Total I/O Service Demand

Front

- The total processor service demand for a process is the sum of all its CPU bursts.
- Each CPU burst has a different duration, "t".
- The total CPU service time (bursts) requested by process Pi is: ti = t1 + t2 + ... + tm
- The total I/O service demand for a process is the sum of all its I/O bursts.
- Each I/O burst has a different duration, "s".
- The total I/O service time requested by process Pi is: si = s1 + s2 + ... + sn
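The summations above can be sketched directly (the burst values are illustrative, not from the deck):

```python
# Sketch: total CPU and I/O service demand as the sum of a process's bursts.

cpu_bursts = [12.0, 8.5, 20.0]   # durations of each CPU burst of process Pi
io_bursts = [30.0, 25.5]         # durations of each I/O burst of process Pi

total_cpu_demand = sum(cpu_bursts)   # ti = t1 + t2 + ... + tm
total_io_demand = sum(io_bursts)     # si = s1 + s2 + ... + sn
print(total_cpu_demand)  # 40.5
print(total_io_demand)   # 55.5
```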

Back

Service Time

Front

Service time for a given request can vary:
- A request to read data from a certain part of a disk will not always take the same amount of time.
- For some resources, a request's service time may be affected by the previous requests that have been executed.

Back

Section 2

(50 cards)

Relevant Performance Measures

Front

Measures: Throughput, Capacity, Response time, Utilization, Reliability, Speedup, Backlog
MEANING (typical summary statistics):
- The average number of jobs in the system
- The average number of jobs in the queue(s) (i.e., that are waiting)
- The average time that a job spends in the system
- The average time that a job spends in the queue(s)
- The CPU utilization
- Throughput: the total number of jobs serviced
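Two of the measures above are linked by Little's law, a standard queueing result not stated on the card: the average number of jobs in the system equals the arrival rate times the average time a job spends in the system. A minimal sketch with illustrative values:

```python
# Sketch: Little's law, N = lambda * T.

arrival_rate = 4.0        # jobs per second (lambda)
avg_time_in_system = 2.5  # seconds (T, the average time in the system)
avg_jobs_in_system = arrival_rate * avg_time_in_system  # N = lambda * T
print(avg_jobs_in_system)  # 10.0
```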

Back

Multilevel Queues

Front

Multi-class systems have several classes of processes, and each class of process is assigned a DIFFERENT PRIORITY. Multilevel queues are needed when the system has different categories of processes, each requiring different scheduling; e.g., one category requires interactive processing and another requires batch processing. For each category there may be more than one priority used.

Back

Priority Scheduling policy

Front

- A priority is associated with every process type or class.
- The next process selected from the ready queue is the one with the highest priority.
- A high-priority process will be placed at the head of the ready queue.
- The priority is implemented as an integer attribute of a process.

Back

Multiprogramming

Front

In most computer systems, the I/O controllers enable the system to overlap device I/O operation with processor operation. This results in more efficient utilization of the system facilities: both processor utilization and I/O device utilization are higher.
Review: an operating system can support several processes in memory. While one process receives service from a processor, another process receives service from an I/O device, and the other processes wait in queues. The number of processes in memory that the operating system supports is known as the degree of multiprogramming.

Back

Multiple Processors

Front

There are several ways a multiprocessor system can be configured (single queue or multiple queues). Allowing the scheduling of processes on any of the processors is the simplest approach to use. This assumes that the processors are tightly coupled, that is, they share memory and other important hardware/software resources.

Back

Model of a System with Multiprogramming

Front

The model of a batch system studied here includes one CPU and one I/O device. Three queues are used:
- The first queue is for processes that have arrived and are waiting for memory allocation
- The ready queue is used for the CPU server
- The I/O queue is used for the I/O server
Each server provides service independently of the other servers (overlapping).

Back

Service Demand

Front

The total CPU and I/O service demand of a process is divided into CPU and I/O bursts. A process will require several CPU and I/O bursts:
- Each CPU request is called a CPU burst
- Each I/O request is called an I/O burst
A typical process alternates between a CPU burst and an I/O burst.

Back

Real-Time Scheduling Policies

Front

These policies try to keep the CPU allocated to high-priority real-time processes. The goal is to guarantee fast response for the real-time processes. Two widely known scheduling policies:
- Rate monotonic scheduling (RMS)
- Earliest deadline first scheduling (EDFS)

Back

Efficiency and Reliability

Front

The efficiency of a system is the ratio of usable capacity to nominal capacity. The reliability of a system is measured by the probability of errors, and is also defined as the mean time between errors. Availability is the fraction of time that the system is available for user requests, also called the system uptime.

Back

Components of a Simulation Model

Front

- The ACTIVE RESOURCES of the system are modeled as active objects.
- The ENVIRONMENT is modeled as an active object because the environment generates the jobs that arrive into the system/model.
- The PROCESSES are also modeled as active objects: they arrive requesting resources and services.
- The other system resources are modeled as passive objects (memory, queues, etc.).
- The DISK, which represents an I/O device that provides I/O service to the processes according to their demands, is an active object of class Disk.
- The MEMORY, which represents a major (passive) resource of the system, is modeled as a passive object of class Res.
- The QUEUES are modeled as passive objects of class Squeue.

Back

Semaphore

Front

The "traffic light" of a critical section:
- An abstract data type that functions as a software synchronization tool
- An object whose methods are invoked by processes that need to share a resource
- A binary semaphore takes only the values 0 or 1 and allows mutual exclusion
- A counting semaphore takes any non-negative integer value
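As a hedged illustration, a binary semaphore protecting a critical section can be sketched with Python's `threading.Semaphore` (the shared counter, thread count, and loop count are my own, not from the deck):

```python
# Sketch: a binary semaphore guarding a critical section.
import threading

sem = threading.Semaphore(1)   # binary semaphore: at most one thread inside
counter = 0

def worker() -> None:
    global counter
    for _ in range(10_000):
        sem.acquire()          # "wait" / P operation: enter critical section
        counter += 1           # critical section: update shared data
        sem.release()          # "signal" / V operation: leave critical section

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000: mutual exclusion kept the updates consistent
```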

Back

Dynamic Priority Scheduling

Front

Processes that request I/O service typically start with CPU service for a short time, request another I/O operation, and release the CPU. If such processes are given higher priority, they can keep the I/O devices busy without using a significant amount of CPU time. This tends to maximize I/O utilization while using a relatively small amount of CPU time; the remaining CPU capacity is available for processes requesting CPU bursts. The CPU scheduler dynamically adjusts the priority of a process as the process executes, based on the level of expectation that the process will carry out a system call (typically an I/O request). However, this requires the CPU scheduler to predict future process requests; the OS can estimate these (heuristically) based on observed program behavior.

Back

Synchronization

Front

Coordinating the activities of two or more processes that usually need to:
- Compete for resources in a mutually exclusive manner
- Cooperate in sequencing or ordering specific events in their individual activities
Without synchronization:
- Race condition: e.g., two processes manipulate the same global variable (an example of a critical section)
- The solution is mutual exclusion: one process at a time
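The "cooperate in sequencing" case above can be sketched with a `threading.Event` that forces one step to happen before another (the producer/consumer names and the shared list are illustrative):

```python
# Sketch: ordering events between two threads with threading.Event.
import threading

data_ready = threading.Event()
results = []

def producer() -> None:
    results.append("produced")   # step that must happen first
    data_ready.set()             # signal the consumer

def consumer() -> None:
    data_ready.wait()            # block until the producer has finished
    results.append("consumed")   # guaranteed to run second

c = threading.Thread(target=consumer)
c.start()
p = threading.Thread(target=producer)
p.start()
p.join()
c.join()
print(results)  # ['produced', 'consumed'] -- order enforced, no race
```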

Back

Random Variables

Front

(MODELS/SIMULATIONS) Random variables are used to represent several parameters, and each is derived from a specific probability distribution:
- The inter-arrival periods are generated from an exponential distribution.
- The CPU and I/O service periods (service demands) are generated from an exponential distribution.
- The memory demands of the processes are generated from a uniform distribution.

Back

CPU Scheduling Policies

Front

Categories of scheduling policies:
- Non-preemptive: no interrupts are allowed; a process completes execution of its CPU burst.
- Preemptive: a process can be interrupted before it completes its CPU burst.

Back

Types of Schedulers

Front

Long-term scheduler (memory allocation)
- Determines which processes are loaded into memory
- Controls the degree of multiprogramming
Medium-term scheduler ("swapping")
- Suspends (swaps out) and resumes (swaps in) processes
- Some processes can be removed from memory (to disk), which reduces the degree of multiprogramming
Short-term scheduler (processor scheduling)
- Selects one of the ready processes and allocates the processor to it

Back

System Capacity

Front

A performance measure:
- The capacity of a system is determined by its maximum performance.
- The nominal capacity of a system is the maximum achievable throughput under ideal conditions.
- The usable capacity is the maximum throughput achievable under specified constraints.
BOTTLENECK
The computer system reaches capacity when one or more of its servers or resources reach a utilization close to 100%. The bottleneck of the system is localized in the server or resource with a utilization close to 100%, while the other servers and resources each have utilization significantly below 100%. In the model, the bottleneck can be localized at the processor, the queue, or the memory: the queue may become full (reach capacity) very often as processor utilization increases, and the memory may also be used at capacity (100%). In any of the three cases, the processor, the queue, or the memory will need to be replaced or increased in capacity.
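The bottleneck rule above, i.e. the resource whose utilization is closest to 100%, can be sketched as follows (the utilization figures are made up for illustration):

```python
# Sketch: locating the bottleneck as the most-utilized resource.

utilizations = {"processor": 0.98, "disk": 0.55, "memory": 0.72}

bottleneck = max(utilizations, key=utilizations.get)
print(bottleneck)                       # processor
print(utilizations[bottleneck] > 0.95)  # True -> system is near capacity
```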

Back

Real-time Processes

Front

Processes compete for the CPU, each with its own service demand, priority, and DEADLINE. Processes must complete their service before their deadlines expire. The second general goal is to guarantee that the processes can be scheduled to meet their individual deadlines; the performance of the system is based on this guarantee.
Processes can be:
- Periodic: executed every specific interval (i.e., period)
- Sporadic: started by external random events
The operating system uses a real-time scheduling policy based on priorities and preemption.

Back

CPU-IO Bursts of Processes

Front

An important property of a process is its CPU-I/O burst cycle. An I/O-bound process has many short CPU bursts; a CPU-bound process has few long CPU bursts. The OS tries to maintain a balance of these two types of processes.

Back

Solutions to critical section problem.

Front

Three categories:
- Busy waiting: the process executes a loop, continuously examining variables, thus consuming CPU cycles.
- Hardware support: directly accessing the hardware facilities of the computer system.
-- Disabling interrupts is the simplest of these techniques.
-- Test-and-set lock (TSL): special machine instructions.
- Operating system support: mechanisms and tools (semaphores and monitors) provided by the operating system are used to implement the needed synchronization.

Back

Processor Scheduling

Front

Decides WHEN and how to carry out a context switch to the selected process, i.e., the de-allocation and allocation of the CPU.
- The scheduling policy defines the ORDER in which processes are SELECTED from the ready queue for CPU processing.
- The SCHEDULER selects the next process to execute from among the processes waiting in the ready queue.
- The DISPATCHER allocates the CPU to the selected process at the appropriate time.

Back

Classes of Processes

Front

There are several classes or types of processes. For every class there is a set of workload parameters:
- Mean inter-arrival period
- Mean service period
Every class normally has an associated priority.

Back

Process Service Requests

Front

A process requests CPU and I/O services at various times and often waits for services. The OS maintains a queue of processes waiting for every one of these services. At some point in time, a process will be in a queue waiting for CPU service. At some other point in time, the process will be in a different queue waiting for I/O service. When the process completes all service requests, it terminates and exits the system.

Back

Process Activities (Behavior)

Front

Every process that requests CPU service carries out the following sequence of activities:
1. Join the ready queue and wait for CPU service.
2. Execute for the duration of the current CPU burst or for the duration of the time slice (timeout).
3. Join the I/O queue to wait for I/O service, or return to the ready queue to wait for more CPU service.
4. Get I/O service for the duration of an I/O burst.
5. Terminate and exit if service is completed, i.e., there are no more CPU or I/O bursts; if more service is required, return to the ready queue to wait for more CPU service.
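The sequence above can be sketched as a loop over a process's alternating burst list (the function name and burst data are illustrative, not from the deck):

```python
# Sketch: a process's life as alternating CPU and I/O bursts, ending in
# termination when no bursts remain.

def run_process(bursts: list[tuple[str, float]]) -> list[str]:
    """bursts alternates ('cpu', t) and ('io', s); returns the activity trace."""
    trace = []
    for kind, duration in bursts:
        if kind == "cpu":
            trace.append(f"ready-queue -> CPU for {duration}")
        else:
            trace.append(f"io-queue -> I/O for {duration}")
    trace.append("terminate")  # no more bursts: the process exits the system
    return trace

print(run_process([("cpu", 10), ("io", 25), ("cpu", 5)]))
```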

Back

Scheduling Algorithm Evaluation

Front

Criteria to evaluate and measure the performance of the computer system: --Maximize CPU utilization under certain constraints --Minimize response time --Maximize throughput under certain constraints. Analytic evaluation - uses the algorithm and the system workload: --Deterministic modeling --Queueing models. Simulation - involves programming a model of the computer system.

Back

Heuristic Algorithm

Front

1. Allocate the CPU to the highest-priority process. 2. When a process is selected for execution, assign it a time slice. 3. If it requests an I/O operation before the time slice expires, raise its priority (i.e., assume it will make another I/O request soon). 4. If the time slice expires, lower its priority (i.e., assume it is now in a CPU burst) and allocate the CPU to the highest-priority ready process.
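The priority-adjustment rule in steps 3 and 4 can be sketched in a few lines. This is an assumption-laden illustration (the card does not specify a priority scale or step size; a +/-1 adjustment on a higher-is-better scale is assumed here):

```python
# Hedged sketch of the heuristic: raise priority when a process blocks
# for I/O before its time slice expires, lower it on a timeout.

def adjust_priority(priority, blocked_for_io_before_timeout):
    """Return the new priority after a dispatch (higher number = higher priority)."""
    if blocked_for_io_before_timeout:
        return priority + 1   # I/O-bound: likely to request I/O again soon
    return priority - 1       # time slice expired: assume a CPU burst

p = 5
p = adjust_priority(p, blocked_for_io_before_timeout=True)   # I/O before slice end
p = adjust_priority(p, blocked_for_io_before_timeout=False)  # timeout
```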

Back

Process  Arrival time  CPU burst
P1       0             135
P2       0             102
P3       0             56
P4       0             148
P5       0             125
P6       200           65

Front

Process  Start  Completion  Wait  Turnaround  Ntat
P1       348    483         348   483         3.577
P2       56     158         56    158         1.549
P3       0      56          0     56          1.0
P4       483    631         483   631         4.263
P5       158    348         223   348         2.784
P6       200    265         0     65          1.0
The average wait period is 185.0 microsec. The average turnaround time is 290.16 microsec. (Ntat = normalized turnaround time = turnaround / CPU burst.)

Back

Priorities and Scheduling

Front

Priorities can be used with preemptive or non-preemptive scheduling. Depending on the goals of an operating system, one or more of various scheduling policies can be used; each will result in a different system performance. The criteria are based on relevant performance measures and the various scheduling policies are evaluated based on the criteria.

Back

Classes and Objects in the Simulation Model

Front

The processes, each one representing a computational unit to be serviced in the computer system under control of the operating system; the processes are implemented as ACTIVE objects of class Job. The environment, which generates the arriving processes; this is an ACTIVE object of class Arrivals, which creates instances of class Job according to the inter-arrival period. The CPU, which represents the processor that provides CPU service (execution) to the processes according to their service demands; the CPU is an ACTIVE object of class Processor. A class for active objects is a class that inherits class Process, which is a Psim library class. A class for passive objects can be any other class.

Back

Criteria for Scheduling Algorithms

Front

CPU utilization Throughput Turnaround time Waiting time Response time Fairness

Back

Round Robin Scheduling

Front

A PREEMPTIVE scheduling policy designed for time-sharing systems. The ready queue is treated as a circular queue. A small execution time interval is defined as the time quantum, or TIME SLICE. Interrupts in Round Robin: when the execution interval of a process reaches the time quantum, a SYSTEM TIMER causes the OS to interrupt the process, and the OS carries out a CONTEXT SWITCH to the next process selected from the ready queue. Length of the time quantum: a quantum that is too short generates many context switches and results in lower CPU efficiency; a quantum that is too long may result in poor response times.

Back

Scheduling Policies which are pre-emptive/non-preemptive

Front

First-come-first-served (FCFS) - non-preemptive. Shortest process next (SPN) - non-preemptive --- selection from the queue is based on the length of the next CPU burst. Longest job first (LJF). Priority scheduling (Pr) - non-preemptive - an explicit priority is assigned to each process; a multi-class system. Round robin (RR) - preemptive. Shortest remaining time (SRT), also known as shortest remaining time first (SRTF) - preemptive. Priority preemptive scheduling (PPS) - preemptive.

Back

Non-Fair Scheduling

Front

The scheduling of multiple classes of processes is not fair; the system gives preference to processes with higher priority. Scheduling of multi-class systems is NOT FAIR - processes are not treated alike; preference is given to higher-priority processes. A major problem is STARVATION --indefinite waiting: low-priority processes may never execute --the solution is AGING.

Back

Preemptive Scheduling

Front

An executing process can be interrupted when its time slice expires (Round robin) An executing process can be interrupted when its remaining time is longer than the CPU burst of an arriving process (SRTF) Priority preemptive scheduling - A currently running process will be preempted if a higher-priority process arrives (PPS)

Back

Mutual Exclusion Necessary Conditions at critical sections:

Front

• Mutual exclusion: One process at a time • Absence of starvation: Processes wait finite time to enter critical sections. • Absence of deadlock: Processes don't block each other indefinitely. • Progress: A process will take a finite time interval to execute its critical section.

Back

Real-Time Systems

Front

A system that maintains ON-GOING INTERACTION with environment One of the requirements is strict TIMING CONSTRAINTS System depends on priorities and preemption

Back

SPN Scheduling policy

Front

SPN is optimal - it gives the minimum average waiting time for a given set of processes. The scheduler selects next the process with the shortest CPU burst. A non-preemptive policy; SRTF is its preemptive counterpart.

Back

Requirements for Multi-programming

Front

The OS allocates the CPU and other resources so that the CPU and other active resources are kept busy for as long as possible. If there is only one processor, only one process can be in execution at any given time; other processes are ready, waiting for CPU service. Processes also request access to passive resources, such as memory locations.

Back

Round Robin Consider the following workload: Five processes arrive at time 0, in the order: P1, P2, P3, P4, P5; with CPU burst times: 135, 102, 56, 148, 125 msec., respectively. The execution chart is:

Front

The chart for Round-Robin, with quantum = 40 msec., gives an average waiting time of 365.6 msec.
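The card's 365.6 msec figure can be reproduced with a short simulation. This is a sketch in Python (not the card's own code), assuming no context-switch overhead and no new arrivals:

```python
from collections import deque

# Round-robin over the card's workload: bursts 135, 102, 56, 148, 125
# msec, all arriving at time 0, with a 40 msec quantum.

def rr_average_wait(bursts, quantum):
    remaining = list(bursts)
    ready = deque(range(len(bursts)))      # circular ready queue
    clock, completion = 0, [0] * len(bursts)
    while ready:
        i = ready.popleft()
        run = min(quantum, remaining[i])   # run one quantum or to completion
        clock += run
        remaining[i] -= run
        if remaining[i] > 0:
            ready.append(i)                # timeout: back to the ready queue
        else:
            completion[i] = clock          # process terminates and exits
    waits = [completion[i] - bursts[i] for i in range(len(bursts))]
    return sum(waits) / len(waits)

avg = rr_average_wait([135, 102, 56, 148, 125], quantum=40)   # → 365.6
```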

Back

Goals of Processor Scheduling

Front

To share CPU among processes in the ready queue The critical activities are: -- the ordering of the allocation and de-allocation of the CPU to processes and threads, one at a time -- Deciding when to de-allocate and allocate the CPU from a process to another process To meet the performance objectives of the system

Back

Shortest Process Next (SPN) Consider the following workload: Five processes arrive at time 0, in the order: P1, P2, P3, P4, P5; with CPU burst times: 135, 102, 56, 148, 125 msec., respectively. The execution chart is:

Front

The average waiting time is: (283+56+0+418+158) / 5 = 183 msec.
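The SPN waiting times above can be checked with a short calculation. A hedged Python sketch (the function name is an assumption, not the card's code):

```python
# SPN with all processes arriving at time 0: serve in order of
# increasing CPU burst and accumulate each process's waiting time.

def spn_waits(bursts):
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    waits, clock = [0] * len(bursts), 0
    for i in order:
        waits[i] = clock          # time spent waiting before dispatch
        clock += bursts[i]        # non-preemptive: run the whole burst
    return waits

waits = spn_waits([135, 102, 56, 148, 125])   # P1..P5 from the card
avg = sum(waits) / len(waits)                 # → 183.0
```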

Back

Scheduler and Dispatcher

Front

SCHEDULER: inserts processes that request CPU service into the ready queue, usually a data structure such as a FIFO list, a set of simple lists, or a priority list; this insertion is provided by the enqueuer, a component of the scheduler. The context switcher saves the context of the current process, de-allocates the CPU from that process, and loads the context of the selected process. The selection of the next process from the ready queue and the loading of its context can be carried out by the DISPATCHER, which then allocates the CPU to the newly selected process. The dispatcher module gives control of the CPU to the process selected by the short-term scheduler: --context switching --switching to user mode --jumping to the proper location in the user program to restart (resume) that program. Dispatch latency - the time that elapses from the stopping of one process to the starting of the newly selected process.

Back

Length of the Time Quantum (Time Slice)

Front

A time quantum that is too short will generate many context switches and result in lower CPU efficiency. A time quantum that is too long may result in poor response times.

Back

Assume the following processes P1, P2, P3, P4 and P5 arrive at times 1, 2, 4, 5, 5 respectively. The CPU burst and the priority assigned to each process are: P1: 45, 3; P2: 5, 5; P3: 15, 2; P4: 20, 1; P5: 25, 4. For FCFS, RR, SJF and PR scheduling, determine a) the turnaround time for every process, b) the waiting time for every process and the average waiting time, and c) the throughput of the system. Use a time quantum of 10 time units and negligible context-switch time.

Front

???
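The card's answer is missing, so it is left blank; as an illustration only, this hedged Python sketch computes the FCFS part of the question for the stated workload (arrivals 1, 2, 4, 5, 5; bursts 45, 5, 15, 20, 25). RR, SJF, and PR would each need their own loop; the function name is hypothetical.

```python
# FCFS: serve in arrival order (ties kept in listed order, since
# Python's sort is stable). Context-switch time is taken as negligible.

def fcfs(arrivals, bursts):
    clock, results = 0, []
    for a, b in sorted(zip(arrivals, bursts), key=lambda p: p[0]):
        start = max(clock, a)          # CPU may be idle before an arrival
        clock = start + b
        results.append({"wait": start - a, "turnaround": clock - a})
    return results

res = fcfs([1, 2, 4, 5, 5], [45, 5, 15, 20, 25])
avg_wait = sum(r["wait"] for r in res) / len(res)   # 46.6 time units
```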

Back

Multi - Queue Scheduling

Front

Ready queue partitioned into separate queues, based on some property of the processes. Each queue has its own scheduling algorithm, e.g., one queue for each of the following types of processes: --System processes --Interactive processes --Editing processes --Batch processes --Other low-priority processes

Back

A System with no Multiprogramming

Front

Only one process is allowed in memory at a time; the degree of multiprogramming is 1. All processes request a certain amount of memory (a passive resource), a CPU burst, an I/O burst, and finally another CPU burst.

Back

Process Queues

Front

The processes that are ready and waiting to execute are kept on a list called the ready queue. A similar list exists for each I/O device, called the device queue. Process queues: --Job queue (all processes) --Ready queue --Device queues --Arrivals queue (external)

Back

Shortest Remaining Time First

Front

Shortest remaining time first (SRTF) is a PREEMPTIVE version of SPN scheduling. A new arrival will cause the scheduler to interrupt the currently executing process if the CPU burst of the new process is less than the REMAINING SERVICE PERIOD of the current process; a context switch then starts the new process immediately. When there are no arrivals, the scheduler selects the process with the shortest CPU service period (burst). Considered multi-class, since the scheduler gives preference to processes with the shortest remaining service time and processes with the shortest CPU burst (as with SPN).
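The preemption rule can be shown with a unit-time simulation: at every tick, the arrived process with the shortest remaining time runs, so a short new arrival preempts a longer running process. This is a Python sketch (not from the card) over a hypothetical two-process workload:

```python
# Minimal SRTF sketch: one time unit per step, re-selecting the
# shortest-remaining-time process among those that have arrived.

def srtf_completion_times(arrivals, bursts):
    n = len(bursts)
    remaining, clock, done = list(bursts), 0, [None] * n
    while any(r > 0 for r in remaining):
        ready = [i for i in range(n) if arrivals[i] <= clock and remaining[i] > 0]
        if not ready:
            clock += 1                                 # CPU idle until next arrival
            continue
        i = min(ready, key=lambda j: remaining[j])     # shortest remaining time
        remaining[i] -= 1                              # run one unit, then re-check
        clock += 1
        if remaining[i] == 0:
            done[i] = clock                            # completion time
    return done

# P2 (burst 4) arrives at t=1 and preempts P1 (burst 8, arrived at t=0)
done = srtf_completion_times(arrivals=[0, 1], bursts=[8, 4])
```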

Back

FCFS Scheduling which are pre-emptive/non-preemptive Calc turnaround time

Front

First-come-first-served (FCFS) scheduling algorithm, a non-preemptive policy. The order of service is the same as the order of arrivals. Managed with a FIFO ready queue. Simple to understand and implement. Scheduling is FAIR. The performance of this scheme is relatively poor. E.g., three CPU-bound processes arrive: P1, P2, and P3, with CPU bursts 24 msec, 3 msec, and 3 msec respectively. The average turnaround time is: (24 + 27 + 30) / 3 = 27 msec.

Back

FCFS Consider the following workload: Five processes arrive at time 0, in the order: P1, P2, P3, P4, P5; with CPU burst times: 135, 102, 56, 148, 125 msec., respectively. The execution chart is:

Front

The average waiting time is: (0 + 135 + 237 + 293 + 441) / 5 = 221.2 msec

Back

Section 3

(3 cards)

Linux Commands • ls • mkdir • cd • cp • more • rm • pwd • exit • man " " • chmod • passwd • / • . • .. • script "___.txt" • exit • mv dir ___.txt

Front

• ls - list files/subdirectories in the current directory or a specified directory • mkdir - make a directory under the current directory • cd - change directory (returns to the home directory if used alone) • cp - copy a file • more - display a file one screen at a time • rm - remove (delete) a file • pwd - display the current working directory • exit - exit the shell • man - display the online manual page for a command • chmod - change file access permissions • passwd - change the user's password • / - system root • . - current directory • .. - parent directory • script - start recording the session to a log file • exit - terminate the session (stops the script recording) • mv - move a file (or change its name)

Back

Bounded-buffer problem/ Producer-consumer problem

Front

Solution requires three semaphores: • Counting semaphore, full, for counting full slots. • Counting semaphore, empty, for counting empty slots. • Binary semaphore, mutex, for mutual exclusion. PRODUCER: 1. Produces a data item. 2. Invokes wait on semaphore empty; it will be blocked (suspended) if no empty slots are available. 3. Reactivated after the consumer removes an item. 4. Invokes wait on semaphore mutex to gain exclusive access to the buffer; the producer is suspended if the consumer has access to the buffer/critical section. 5. Reactivated when the consumer releases mutual exclusion. 6. Executes the critical section (inserts the item). 7. Releases mutually exclusive access by invoking signal on semaphore mutex. 8. Invokes signal on semaphore full; this increments the value of semaphore full and reactivates the consumer if it is suspended. CONSUMER: 1. Checks if there are any full slots by invoking wait on semaphore full. 2. If there is at least one full slot, it proceeds to the next statement; otherwise, it is blocked. 3. Attempts to gain exclusive access by invoking wait on semaphore mutex. 4. Executes its critical section (removes the item). 5. Releases mutual exclusion by invoking signal on semaphore mutex. 6. Increments the number of empty slots by invoking signal on semaphore empty. 7. Consumes the data item.
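The producer and consumer sequences above can be made runnable with Python's `threading` module, where `Semaphore.acquire`/`release` play the roles of wait/signal and a `Lock` serves as the binary mutex. The buffer size and item count are assumptions for the sketch:

```python
import threading
from collections import deque

SLOTS, ITEMS = 3, 10
buffer = deque()
empty = threading.Semaphore(SLOTS)   # counts empty slots
full = threading.Semaphore(0)        # counts full slots
mutex = threading.Lock()             # mutual exclusion on the buffer
consumed = []

def producer():
    for item in range(ITEMS):        # 1. produce a data item
        empty.acquire()              # 2. wait(empty): block if no empty slot
        with mutex:                  # 4./7. wait(mutex) ... signal(mutex)
            buffer.append(item)      # 6. critical section: insert item
        full.release()               # 8. signal(full): wake a waiting consumer

def consumer():
    for _ in range(ITEMS):
        full.acquire()               # 1./2. wait(full): block if no full slot
        with mutex:                  # 3./5. wait(mutex) ... signal(mutex)
            item = buffer.popleft()  # 4. critical section: remove item
        empty.release()              # 6. signal(empty): one more empty slot
        consumed.append(item)        # 7. consume the data item

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start(); p.join(); c.join()
```

Because the buffer is FIFO and there is a single producer and a single consumer, items come out in production order regardless of thread interleaving.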

Back

Implement a semaphore

Front

Semaphore mutex; // declaration
mutex = new Semaphore(1); // create the semaphore object
mutex.wait(); // invoke the wait method
mutex.signal(); // invoke the signal method

Back