Operating Systems, OS, OS Midterm 1, OS quiz 2, OS Midterm, OS 2, OS 3, Operating Systems, OS Final



Last updated

6 years ago

Date created

Mar 1, 2020

Cards (507)

Section 1

(50 cards)

As a process executes, it changes state. What are the states

Front

new, running, waiting, ready, terminated.

Back

context switch

Front

When CPU switches to another process, the system must save the state of the old process via a context switch

Back

Process identifier (pid)

Front

Generally, a process is identified and managed via a process identifier (pid)

Back

Time-shared Systems

Front

user programs or tasks

Back

CPU-bound process

Front

spends more time doing computations; few very long CPU bursts.

Back

Heap

Front

containing memory dynamically allocated during run time.

Back

A process memory is divided into four sections

Front

The text section, stack, data section, heap

Back

A new process is initially put in

Front

ready queue

Back

wait

Front

Output (status) data is passed from the child to the waiting parent

Back

fork system call

Front

creates new process

Back

short-term scheduler (or CPU scheduler)

Front

selects which process should be executed next and allocates CPU

Back

Resource sharing

Front

Options: parent and children share all resources; children share a subset of the parent's resources; parent and child share no resources.

Back

abort

Front

Parent may terminate execution of child processes

Back

Current activity of process include

Front

program counter, processor registers

Back

the function fork returns

Front

an integer equal to 0 to the child process and one different from 0 to the parent process.

Back
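The fork return values on this card can be seen in a short C sketch (POSIX is assumed; `fork_and_collect` is a made-up helper name): the child sees 0, while the parent sees the child's pid and can collect the child's exit status with waitpid().

```c
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Fork a child that immediately exits with `code`. The parent
 * waits and returns the child's exit status. Shows that fork()
 * returns 0 in the child and the child's pid (> 0) in the parent. */
int fork_and_collect(int code) {
    pid_t pid = fork();
    if (pid < 0)
        return -1;             /* fork failed */
    if (pid == 0)
        _exit(code & 0xFF);    /* child: fork() returned 0 */
    int status;                /* parent: fork() returned the child's pid */
    if (waitpid(pid, &status, 0) < 0)
        return -1;
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

Calling `fork_and_collect(7)` should return 7 in the parent, since exit statuses are 8-bit values passed back through wait.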

Parent process create

Front

new processes (children) which in turn create other processes forming a tree of processes

Back

Address Space

Front

Child is a duplicate of the parent, or the child has a new program loaded into it

Back

Process scheduling maintains scheduling queues of processes

Front

Job Queue, Ready Queue, Device Queue

Back

exec call

Front

used after a fork to replace the process memory space with a new program

Back
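A minimal fork-then-exec sketch, assuming POSIX and using execvp() in place of the slides' execlp() (`run_program` is a hypothetical helper): the exec call replaces the child's memory image with a new program and only returns on failure.

```c
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Fork, then replace the child's memory space with a new program
 * via execvp(). argv is a NULL-terminated argument list, with
 * argv[0] naming the program. Returns the child's exit status. */
int run_program(char *const argv[]) {
    pid_t pid = fork();
    if (pid < 0)
        return -1;
    if (pid == 0) {
        execvp(argv[0], argv);  /* on success, never returns */
        _exit(127);             /* reached only if exec failed */
    }
    int status;                 /* parent waits for the child */
    if (waitpid(pid, &status, 0) < 0)
        return -1;
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

For example, running the standard `true` utility this way should yield exit status 0.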

The more complex the OS and the PCB, the ->

Front

longer the context switch

Back

Information associated with each process

Front

Process id, process state, program counter, CPU registers, CPU scheduling information, Memory-management information, Accounting information, I/O status information.

Back

the execlp system call is used

Front

after a fork system call by one of the processes to replace the process memory space with a new program

Back

Job Queue

Front

set of all processes in the system

Back

I/O-bound process

Front

spends more time doing I/O than computations; many short CPU bursts

Back

Short term scheduler is invoked very

Front

frequently

Back

Stack (containing temporary data)

Front

Function parameters, return addresses, local variables

Back

mid-term, scheduler (medium-term - swapping)

Front

Removes processes from the memory. Reduces the degree of multi-programming. (speed between long & short.)

Back

Long-term Scheduler (job scheduler)

Front

selects which processes should be brought into the ready queue

Back

The operating System at all times has a table called the

Front

process table

Back

Batch System

Front

jobs

Back

Process Control Block (PCB)

Front

Process state -> process number -> program counter -> registers -> memory limits -> list of open files

Back

Data Section

Front

containing global variables

Back

When are processes terminated?

Front

When they're done, When an error occurs, When a fatal error occurs, or When killed by another process

Back

process

Front

a program in execution, which forms the basis of all computation; process execution must progress in sequential fashion

Back

can one program have several processes?

Front

yes

Back

When are processes created?

Front

On system start-up, On system calls, On user requests

Back

Process scheduler

Front

selects among available processes for next execution on CPU

Back

text section

Front

the program code

Back

exit

Front

Process executes last statement and asks the operating system to delete it

Back

The parent waits for the child process to complete with

Front

the wait system call (example in 3.30)

Back

Execution

Front

Parent and child execute concurrently. Parent waits until children terminate.

Back

Time dependent on

Front

Memory speed, Speed of saving registers, # of registers

Back

Context- switch time is overhead

Front

the system does no useful work while switching

Back

Ready Queue

Front

set of all processes residing in main memory, ready and waiting to execute

Back

The context of a process is represented in the

Front

PCB

Back

long term scheduler controls the

Front

degree of multiprogramming

Back

long term scheduler is invoked very

Front

infrequently

Back

program is a _____ entity whereas process is ____

Front

passive, active

Back

Child process creation statement

Front

pid = fork()

Back

The OS manages processes by

Front

creating & deleting, suspending & restarting, assigning, Scheduling, Synchronizing, Providing communication between.

Back

Section 2

(50 cards)

Reasons for cooperating processes

Front

information sharing, Computation speed up, modularity, convenience

Back

Some computer systems do not provide a privileged mode of operation in hardware. Is it possible to construct a secure operating system for these computer systems? Give arguments both that it is and that it is not possible.

Front

An operating system for a machine of this type would need to remain in control (or monitor mode) at all times. This could be accomplished by two methods: (1) Software interpretation of all user programs: the software interpreter would provide, in software, what the hardware does not provide. (2) Requiring that all programs be written in high-level languages so that all object code is compiler-produced: the compiler would generate the protection checks that the hardware is missing.

Back

What is peer to peer computing?

Front

In the peer-to-peer model all nodes in the system are considered peers and thus may act as either clients or servers - or both. A node may request a service, or provide such a service to other peers in the system.

Back

OS is a resource allocator

Front

Manages all resources. Decides between conflicting requests for efficient and fair resource use

Back

How are links established?

Front

Link is established automatically between communicating processes.

Back

What is the capacity of a link?

Front

Zero capacity - the link cannot have any messages waiting in it; the sender must wait for the receiver. Bounded capacity - finite length of n messages; the sender must wait if the link is full. Unbounded capacity - infinite length; the sender never waits.

Back

context - switching

Front

save old process and load new. overhead. time dependent.

Back

Is the size of a message that the link can accommodate fixed or variable?

Front

Either - it is a design decision; fixed-size messages make the system-level implementation simpler, while variable-size messages make programming easier.

Back

Cooperating processes require

Front

an interprocess communication (IPC) mechanism

Back

What is a Client Server?

Front

The client-server model firmly distinguishes the roles of the client and server. Under this model, the client requests services that are provided by servers.

Back

Two models of IPC?

Front

Shared memory. Message passing

Back

Processes within a system may be

Front

independent or cooperating

Back

What are the process states?

Front

new: The process is being created. running: Instructions are being executed. waiting: The process is waiting for some event to occur. ready: The process is waiting to be assigned to a processor. terminated: The process has finished execution.

Back

Reasons for cooperating processes?

Front

Information sharing, Computation speedup, Modularity, Convenience

Back

What are the steps to process creation? (be able to show how processes are created.)

Front

Know how to read coding examples with forking.

Back

ready queue

Front

main memory, ready and waiting to execute

Back

Device queue

Front

set of processes waiting for an I/O device

Back

MS-DOS provided no means of concurrent processing. Discuss three major complications that concurrent processing adds to an operating system.

Front

A method of time sharing must be implemented to allow each of several processes to have access to the system. Processes and system resources must have protections and must be protected from each other. Any given process must be limited in the amount of memory it can use and the operations it can perform on devices like disks. Care must be taken in the kernel to prevent deadlocks between processes, so processes aren't waiting for each other's allocated resources.

Back

OS is a control program

Front

Controls execution of programs to prevent errors and improper use of the computer

Back

What is Dual mode operation?

Front

allows OS to protect itself and other system components (User mode and kernel mode)

Back

scheduler

Front

long term (job - ready queue - slow - infrequent), short term (CPU - executing - fast - frequent), mid-term (medium - swap)

Back

Is a link unidirectional or bi-directional?

Front

The link may be unidirectional, but is usually bi-directional.

Back

Message system

Front

processes communicate with each other without sharing the same address space

Back

multi-threading benefits

Front

simplify code, increase efficiency: responsiveness, resource sharing, economy, scalability

Back

process scheduler

Front

selects among available processes for next execution on CPU

Back

cooperating process

Front

producer consumer, shared mem, protection, need to prevent deadlocks

Back

How many links can there be between every pair of communicating processes?

Front

Exactly one link for every pair of processes.

Back

What is Timer for?

Front

Timer to prevent infinite loop / process hogging resources

Back

Operating system goals:

Front

Ease of use, Compromise between individual usability and resource utilization, optimized for usability and battery life

Back

process

Front

a program in execution

Back

Two models of IPC

Front

shared memory, Message passing

Back

IPC facility provides two operations:

Front

send(message) - message size fixed or variable. receive(message)

Back

What is a shared memory system?

Front

Shared memory requires communicating processes to establish a region of shared memory. It resides in the address space of the creating process; other processes must attach the shared memory to their own address space.

Back

process creation (fork)

Front

system start up, system call, user requests

Back

Process states

Front

new, running, waiting, ready, terminated

Back

What are some differences between client server and peer-to-peer computing?

Front

In peer-to-peer there is no central server: each workstation on the network shares its files equally with the others, and there is no central storage or authentication of users. A client/server network has separate dedicated servers and clients.

Back

process termination

Front

done, error, fatal error, killed.

Back

Criteria for CPU Scheduling

Front

Scheduling cannot affect a process's CPU time or I/O time; it can affect the time the process spends waiting in the ready queue.

Back

Parent terminating child process execution

Front

Child exceeds allocated resources, Task assigned to child is no longer required, if parent is exiting (cascading termination)

Back

multi-threading models

Front

many-to-one, one-to-one, many-to-many

Back

The services and functions provided by an operating system can be divided into two sets.

Front

To provide functions that are helpful to the user. To ensure efficient operation of the system itself.

Back

job queue

Front

set of all processes in the system

Back

Threading issues

Front

semantics of fork and exec, thread cancellation of target thread, signal handling, thread pools, thread-specific data, scheduler activations

Back

creating process using unix and C example

Front

3.34

Back

what is interprocess communication?

Front

Cooperating processes require an interprocess communication (IPC) mechanism

Back

What is Operating System?

Front

A program that acts as an intermediary between a user of a computer and the computer hardware.

Back

Can a link be associated with more than two processes

Front

Link is associated with exactly two processes. (1 pair)

Back

process control block

Front

state, number, counter, registers, memory limits, list of open files

Back

Cooperating processes can affect or be affected by other processes, including

Front

Sharing data

Back

What is a message passing system?

Front

a mechanism to allow processes to communicate and to synchronize their action

Back

Section 3

(50 cards)

What does SRTF stand for? Describe it.

Front

Shortest-remaining-time-first. After every interrupt select the process with shortest next burst time. Preemptive - if a new process arrives with CPU burst length less than remaining time of current executing process, preempt. Advantage: Can yield minimum average waiting time. Disadvantage: Increased Overhead

Back

Explain Scheduling in the threading system.

Front

Both many-to-many and two-level models require communication to maintain the appropriate number of kernel threads allocated to the application. Scheduler activations provide upcalls - a communication mechanism from the kernel to the thread library. This communication allows an application to maintain the correct number of kernel threads.

Back

What are the goals of Operating Systems for personal computers and mainframe computers?

Front

Convenience and efficiency

Back

Provide one programming example in which multithreading does provide better performance than a single-threaded solution and one example in which multithreading does not provide better performance than a single-threaded solution.

Front

(1) A Web server that services each request in a separate thread. (2) A parallelized application such as matrix multiplication where different parts of the matrix may be worked on in parallel. Any kind of sequential program is not a good candidate to be threaded, for example, calculating an individual tax return.

Back

Synchronization Hardware. What does TestANDSet() do?

Front

atomically sets a boolean lock variable to true and returns its previous value, all as one indivisible hardware instruction; if the returned value is false, the caller has acquired the lock

Back
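A sketch of a TestAndSet-style spinlock, assuming C11 atomics and POSIX threads: `atomic_exchange` stands in for the hardware instruction, and names such as `run_tas_demo` are invented for illustration.

```c
#include <pthread.h>
#include <stdatomic.h>

static atomic_int tas_lock = 0;
static long tas_counter = 0;

/* TestAndSet: atomically set the lock to 1 and return its
 * previous value (0 means the caller acquired the lock). */
static int test_and_set(atomic_int *target) {
    return atomic_exchange(target, 1);
}

static void *tas_worker(void *arg) {
    long iters = *(long *)arg;
    for (long i = 0; i < iters; i++) {
        while (test_and_set(&tas_lock))
            ;                          /* busy wait until lock was 0 */
        tas_counter++;                 /* critical section */
        atomic_store(&tas_lock, 0);    /* release the lock */
    }
    return NULL;
}

/* Run nthreads (at most 16) threads, each incrementing the shared
 * counter iters times under the lock; returns the final count. */
long run_tas_demo(int nthreads, long iters) {
    pthread_t tid[16];
    if (nthreads > 16)
        nthreads = 16;
    tas_counter = 0;
    atomic_store(&tas_lock, 0);
    for (int i = 0; i < nthreads; i++)
        pthread_create(&tid[i], NULL, tas_worker, &iters);
    for (int i = 0; i < nthreads; i++)
        pthread_join(tid[i], NULL);
    return tas_counter;
}
```

With the lock in place, no increments are lost, so the final count equals threads times iterations.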

What main services that OS provides?

Front

Program execution, I/O operations, file-system manipulation, communication, error detection, resource allocation, protection

Back

The traditional UNIX scheduler enforces an inverse relationship between priority numbers and priorities: the higher the number, the lower the priority. The scheduler recalculates process priorities once per second using the following function: Priority = (recent CPU usage / 2) + base, where base = 60 and recent CPU usage refers to a value indicating how often a process has used the CPU since priorities were last recalculated. Assume that recent CPU usage for process P1 is 40, for process P2 is 18, and for process P3 is 10. What will be the new priorities for these three processes when priorities are recalculated? Based on this information, does the traditional UNIX scheduler raise or lower the relative priority of a CPU-bound process?

Front

The priorities assigned to the processes are 80, 69, and 65 respectively. The scheduler lowers the relative priority of CPU-bound processes.

Back
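The recalculation on this card is plain arithmetic; a tiny helper (hypothetical name `unix_priority`) makes the numbers easy to check:

```c
/* Traditional UNIX priority recalculation:
 * priority = (recent CPU usage / 2) + base, with base = 60.
 * A higher number means a lower priority, so CPU-bound
 * processes drift toward lower relative priority. */
int unix_priority(int recent_cpu_usage) {
    const int base = 60;
    return recent_cpu_usage / 2 + base;
}
```

With recent usages 40, 18, and 10, this yields 80, 69, and 65, matching the card's answer.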

What role do simulations play in algorithm evaluations?

Front

Queueing models limited, Simulations more accurate. (Programmed model of computer system, Clock is a variable, Gather statistics indicating algorithm performance, Data to drive simulation gathered)

Back

What is queuing modeling?

Front

Describes the arrival of processes, and CPU and I/O bursts probabilistically

Back

What is deterministic modeling?

Front

Type of analytic evaluation, Takes a particular predetermined workload and defines the performance of each algorithm for that workload.

Back

Describe the Round Robin Algorithm.

Front

Each process gets a small unit of CPU time (time quantum q), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue. If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once; no process waits more than (n-1)q time units. A timer interrupts every quantum to schedule the next process. Performance: if q is very large, RR degenerates to FIFO; q must be large with respect to context-switch time, otherwise overhead is too high.

Back
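For the simple case where every process arrives at time 0, the Round Robin waiting-time bookkeeping can be sketched as below (`rr_total_waiting` is an invented helper; with all arrivals at 0, cycling over the processes reproduces the ready-queue order, since a preempted process rejoins at the back).

```c
/* Simulate Round Robin for processes that all arrive at time 0.
 * burst[i] is the CPU burst of process i (n <= 64); q is the
 * quantum. Fills wait[i] with each process's total time in the
 * ready queue and returns the sum of all waiting times. */
int rr_total_waiting(const int burst[], int n, int q, int wait[]) {
    int remaining[64];
    int time = 0, done = 0, total = 0;
    for (int i = 0; i < n; i++)
        remaining[i] = burst[i];
    while (done < n) {
        for (int i = 0; i < n; i++) {        /* cycle the ready queue */
            if (remaining[i] == 0)
                continue;
            int slice = remaining[i] < q ? remaining[i] : q;
            time += slice;
            remaining[i] -= slice;
            if (remaining[i] == 0) {
                wait[i] = time - burst[i];   /* completion - burst */
                done++;
            }
        }
    }
    for (int i = 0; i < n; i++)
        total += wait[i];
    return total;
}
```

For the classic example of bursts {24, 3, 3} with q = 4, the per-process waits come out to 6, 4, and 7 (total 17, average 17/3).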

User vs. Kernel level thread

Front

If the threads don't need system calls, use user-level threads because they are faster. If the threads make system calls, kernel-level threads are better.

Back

When a process creates a new process using the fork() system call, which of the following states is shared between the parent process and the child process? a.) Stack b.) Heap c.) Shared memory segments

Front

Only the shared memory segments are shared between the parent process and the newly forked child process; copies of the stack and the heap are made for the child.

Back

What are kernel level threads?

Front

Supported and managed by OS, Virtually all modern general-purpose operating systems support them Kernel sees every thread of every process. If a thread makes a system call, other threads within that process can run. Switching among threads of a process is done via interrupts. Context switching is slower than for user-level threads but faster than process context switching. Takes advantage of multiprocessor

Back

Describe the Priority Scheduling Algorithm.

Front

A priority number (integer) is associated with each process. The CPU is allocated to the process with the highest priority (smallest integer = highest priority); can be preemptive or nonpreemptive. SJF is priority scheduling where priority is the inverse of predicted next CPU burst time. Problem: starvation - low-priority processes may never execute. Solution: aging - as time progresses, increase the priority of the process.

Back

Synchronization Hardware. What are Semaphores?

Front

variable or abstract data type that is used for controlling access, by multiple processes, to a common resource in a concurrent system such as a multiprogramming operating system

Back

What is the bounded-buffer problem?

Front

a multi-process synchronization problem. The problem describes two processes, the producer and the consumer, who share a common, fixed-size buffer used as a queue: an N-cell buffer, each cell holding one item. Semaphore mutex is initialized to 1, semaphore full to 0, and semaphore empty to N.

Back
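The semaphore setup on this card can be sketched with POSIX semaphores, using threads to stand in for the producer and consumer processes (assumes Linux-style unnamed semaphores; `run_bounded_buffer` is a made-up driver):

```c
#include <pthread.h>
#include <semaphore.h>

#define BB_N 5                    /* buffer capacity (N cells) */

static int bb_buf[BB_N];
static int bb_in = 0, bb_out = 0; /* next free cell / next full cell */
static sem_t bb_empty;            /* counts free cells, starts at N */
static sem_t bb_full;             /* counts items, starts at 0 */
static sem_t bb_mutex;            /* protects the buffer, starts at 1 */
static long bb_sum = 0;

static void *bb_producer(void *arg) {
    long count = *(long *)arg;
    for (long i = 1; i <= count; i++) {
        sem_wait(&bb_empty);          /* wait for a free cell */
        sem_wait(&bb_mutex);
        bb_buf[bb_in] = (int)i;
        bb_in = (bb_in + 1) % BB_N;
        sem_post(&bb_mutex);
        sem_post(&bb_full);           /* one more item available */
    }
    return NULL;
}

static void *bb_consumer(void *arg) {
    long count = *(long *)arg;
    for (long i = 0; i < count; i++) {
        sem_wait(&bb_full);           /* wait for an item */
        sem_wait(&bb_mutex);
        bb_sum += bb_buf[bb_out];
        bb_out = (bb_out + 1) % BB_N;
        sem_post(&bb_mutex);
        sem_post(&bb_empty);          /* one more free cell */
    }
    return NULL;
}

/* Produce and consume `count` items; returns the sum of consumed
 * values, which should be count*(count+1)/2 if none were lost. */
long run_bounded_buffer(long count) {
    pthread_t p, c;
    bb_in = bb_out = 0;
    bb_sum = 0;
    sem_init(&bb_empty, 0, BB_N);
    sem_init(&bb_full, 0, 0);
    sem_init(&bb_mutex, 0, 1);
    pthread_create(&p, NULL, bb_producer, &count);
    pthread_create(&c, NULL, bb_consumer, &count);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return bb_sum;
}
```

The empty/full semaphores block the producer when the buffer is full and the consumer when it is empty, while mutex serializes buffer access - exactly the three semaphores the card lists.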

One way to evaluate an algorithm?

Front

The first step in determining which algorithm ( and what parameter settings within that algorithm ) is optimal for a particular operating environment is to determine what criteria are to be used, what goals are to be targeted, and what constraints if any must be applied. For example, one might want to "maximize CPU utilization, subject to a maximum response time of 1 second". Once criteria have been established, then different algorithms can be analyzed and a "best choice" determined. The following sections outline some different methods for determining the "best choice".

Back

What role do algorithms play in algorithm evaluations?

Front

Determine criteria, then evaluate algorithms, Deterministic modeling: Type of analytic evaluation, Takes a particular predetermined workload and defines the performance of each algorithm for that workload.

Back

What are advantages and disadvantages of using kernel-level threads?

Front

Advantages: Because the kernel has full knowledge of all threads, the scheduler may decide to give more time to a process having a large number of threads than to a process having a small number of threads. Kernel-level threads are especially good for applications that frequently block. Disadvantages: Kernel-level threads are slow and inefficient; for instance, thread operations are hundreds of times slower than those of user-level threads. Since the kernel must manage and schedule threads as well as processes, it requires a full thread control block (TCB) for each thread to maintain information about threads. As a result there is significant overhead and increased kernel complexity.

Back

Name activities are performed by OS to context switch between processes.

Front

Save state of P1. Service the interrupt. Select next user process P2. Restore state of P2 and restart the CPU.

Back

Under what circumstances is better to use kernel-level threads than user-level threads?

Front

User-level threads are threads that the OS is not aware of. They exist entirely within a process, and are scheduled to run within that process's timeslices. The OS is aware of kernel-level threads. Kernel threads are scheduled by the OS's scheduling algorithm, and require a "lightweight" context switch to switch between (that is, registers, PC, and SP must be changed, but the memory context remains the same among kernel threads in the same process). User-level threads are much faster to switch between, as there is no context switch; further, a problem-domain-dependent algorithm can be used to schedule among them. CPU-bound tasks with interdependent computations, or a task that will switch among threads often, might best be handled by user-level threads. Kernel-level threads are scheduled by the OS, and each thread can be granted its own timeslices by the scheduling algorithm. The kernel scheduler can thus make intelligent decisions among threads, and avoid scheduling processes which consist of entirely idle threads (or I/O bound threads). A task that has multiple threads that are I/O bound, or that has many threads (and thus will benefit from the additional timeslices that kernel threads will receive) might best be handled by kernel threads. Kernel-level threads require a system call for the switch to occur; user-level threads do not.

Back

Thread Basics

Front

Threads of the same process share the entire process address space as well as all open files. Each thread has its own program counter and CPU register set. Cooperation among threads is much easier, and context switching among the threads of the same process is faster. Notice the need to protect critical sections. The programmer defines the threads in a process.

Back

What does FCFS stand for? Describe it.

Front

First-Come, First-Served (FCFS) Scheduling. Non-preemptive (Processes are assigned the CPU in the same order as they arrive at the Ready Queue) Advantage: Easy to implement and fairness (no starvation). Disadvantage: Low overall system throughput

Back

What are some of the scheduling algorithms?

Front

FCFS, SJF, SRTF

Back

what are some multi-threading models?

Front

Many-to-One, One-to-One, Many-to-Many

Back

Describe the action taken by kernel to context -switch between processes.

Front

1. In response to a clock interrupt, the OS saves the PC and user stack pointer of the currently executing process, and transfers control to the kernel clock interrupt handler, 2. The clock interrupt handler saves the rest of the registers, as well as other machine state, such as the state of the floating point registers, in the process PCB. 3. The OS invokes the scheduler to determine the next process to execute, 4. The OS then retrieves the state of the next process from its PCB, and restores the registers. This restore operation takes the processor back to the state in which this process was previously interrupted, executing in user code with user mode privileges.

Back

Assume that an operating system maps user-level threads to the kernel using the many-to-many model and that the mapping is done through the use of LWPs. Furthermore, the system allows program developers to create real-time threads. Is it necessary to bind a real-time thread to an LWP?

Front

Yes. Timing is crucial to real-time applications. If a thread is marked as real-time but is not bound to an LWP, the thread may have to wait to be attached to an LWP before running. Consider if a real-time thread is running (is attached to an LWP) and then proceeds to block (i.e. must perform I/O, has been preempted by a higher-priority real-time thread, is waiting for a mutual exclusion lock, etc.) While the real-time thread is blocked, the LWP it was attached to has been assigned to another thread. When the real-time thread has been scheduled to run again, it must first wait to be attached to an LWP. By binding an LWP to a real-time thread you are ensuring the thread will be able to run with minimal delay once it is scheduled.

Back

What are some Threading issues?

Front

Semantics of fork() and exec() system calls. Thread cancellation of target thread. Thread pools. Thread-specific data. Scheduler activations.

Back

Describe possible states in which a process can be.

Front

new: The process is being created. running: Instructions are being executed. waiting: The process is waiting for some event to occur. ready: The process is waiting to be assigned to a processor. terminated: The process has finished execution.

Back

What is Peterson's solution?

Front

a concurrent programming algorithm for mutual exclusion that allows two or more processes to share a single-use resource without conflict, using only shared memory for communication.

Back
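A two-thread sketch of Peterson's solution, assuming C11 sequentially consistent atomics to stand in for the algorithm's assumption that loads and stores are not reordered (thread ids 0 and 1; `run_peterson` is an invented driver):

```c
#include <pthread.h>
#include <stdatomic.h>

/* flag[i] says thread i wants to enter; turn says who yields. */
static atomic_int pet_flag[2];
static atomic_int pet_turn;
static long pet_counter = 0;

static void enter_section(int i) {
    int other = 1 - i;
    atomic_store(&pet_flag[i], 1);    /* I want to enter */
    atomic_store(&pet_turn, other);   /* but you go first */
    while (atomic_load(&pet_flag[other]) &&
           atomic_load(&pet_turn) == other)
        ;                             /* busy wait */
}

static void exit_section(int i) {
    atomic_store(&pet_flag[i], 0);
}

static void *pet_worker(void *arg) {
    int id = (int)(long)arg;
    for (long k = 0; k < 100000; k++) {
        enter_section(id);
        pet_counter++;                /* critical section */
        exit_section(id);
    }
    return NULL;
}

/* Run both workers; returns the final counter, which is
 * 200000 when mutual exclusion holds. */
long run_peterson(void) {
    pthread_t t0, t1;
    pet_counter = 0;
    atomic_store(&pet_flag[0], 0);
    atomic_store(&pet_flag[1], 0);
    pthread_create(&t0, NULL, pet_worker, (void *)0L);
    pthread_create(&t1, NULL, pet_worker, (void *)1L);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    return pet_counter;
}
```

Note that only shared memory (two flags and a turn variable) is used for coordination, as the card states.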

What are scheduling criteria?

Front

CPU utilization - keep the CPU as busy as possible. Throughput - # of processes that complete their execution per time unit. Turnaround time - amount of time to execute a particular process. Waiting time - amount of time a process has been waiting in the ready queue. Response time - amount of time it takes from when a request was submitted until the first response is produced, not output (for time-sharing environments).

Back

What are differences between symmetric and asymmetric multiprocessing? What are three advantages and one disadvantage of multiprocessing?

Front

Symmetric processing treats all processors as equals; I/O can be processed on any of them. Asymmetric processing designates one CPU as the master, which is the only one capable of performing I/O; the master distributes computational work among the other CPUs. Advantages: multiprocessor systems can save money by sharing power supplies, housings, and peripherals; they can execute programs more quickly; and they can have increased reliability. Disadvantage: multiprocessor systems are more complex in both hardware and software, and additional CPU cycles are required to manage the cooperation, so per-CPU efficiency goes down.

Back

Synchronization Hardware. What does Swap() do?

Front

atomically exchanges the contents of two boolean words in one indivisible instruction; like TestAndSet, it can be used to build mutual-exclusion locks (each process swaps a local key with the lock until the key comes back false)

Back

Describe three different models of establishing relationship between user-level and kernel-level threads.

Front

Three common ways of establishing a relationship between user threads and kernel threads are: Many-to-One One-to-One Many-to-Many

Back

What are user level threads?

Front

Support provided at the user-level Managed above the kernel without kernel support Management is done by thread library Does not take advantage of multiprocessors

Back

Describe the differences among short term, medium term, and long term schedulers.

Front

Long-term scheduler (or job scheduler) - selects which processes should be brought into the ready queue Short-term scheduler (or CPU scheduler) - selects which process should be executed next and allocates CPU Medium-term scheduler - removes processes from memory to reduce the degree of multiprogramming.

Back

What are two differences between user-level threads and kernel-level threads? Under what circumstances is one type better than the other?

Front

Kernel sees every thread of every process. If a thread makes a system call, other threads within that process can run. Takes advantage of multiprocessors Switching among kernel-level threads of a process is done via interrupts Context switching of kernel-level threads is slower than for user-level threads but faster than process context switching. Switching among user-level threads of a process is done by calls to library modules and is done very quickly. If a user-level thread makes a system call, other threads within that process will be blocked. Does not take advantage of multiprocessors

Back

Which of the following components of program state are shared across threads in a multithreaded process? a) Register values b) Code c) Global variables d) Stack memory

Front

Threads share the code (text) section, global variables, and the heap, as well as the process's page table. They have private register values and private stack segments.

Back

DO ROUND ROBIN EXAMPLES!!!

Front

DO ROUND ROBIN EXAMPLES!!!

Back

Compare the overhead of context switch between threads of the same process and the overhead of context switch between processes.

Front

Context-switching between two threads is faster than between two processes. Switching among threads of a process is done by calls to library modules and is done very quickly

Back

What are some benefits of a multi-threading program?

Front

Responsiveness (May allow a program to continue running if part of it is blocked). Resource Sharing (Sharing code, data, memory, and the resources of process) Economy (Allocating memory and resources for process creation is costly, Context switching is faster, More efficient use of multiple CPUs, Easier cooperation among threads) Scalability (Threads may be running in parallel on different processors, A single-threaded process can only run on one processor, regardless how many are available)

Back

What does SJF stand for? Describe it.

Front

Shortest-Job-First (SJF) Scheduling. Associate with each process the length of its next CPU burst and use these lengths to schedule the process with the shortest time (the earlier the I/O begins, the more work done by the system). Nonpreemptive - once the CPU is given to the process, it cannot be preempted until it completes its CPU burst. Advantage: SJF is optimal - gives minimum average waiting time for a given set of processes. Disadvantage: its nonpreemptive nature is not good for time sharing, and the difficulty is knowing the length of the next CPU request (could ask the user).

Back
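For bursts that all arrive at time 0, the "minimum average waiting time" claim is easy to check numerically (`sjf_total_waiting` is a hypothetical helper): sort the bursts ascending, then each job waits for the sum of the jobs before it.

```c
/* Total waiting time for non-preemptive SJF with all processes
 * arriving at time 0. Sorts burst[] ascending in place, then
 * accumulates each job's start time as its waiting time. */
int sjf_total_waiting(int burst[], int n) {
    /* insertion sort: shortest job first */
    for (int i = 1; i < n; i++) {
        int key = burst[i], j = i - 1;
        while (j >= 0 && burst[j] > key) {
            burst[j + 1] = burst[j];
            j--;
        }
        burst[j + 1] = key;
    }
    int total = 0, elapsed = 0;
    for (int i = 0; i < n; i++) {
        total += elapsed;        /* time spent waiting before start */
        elapsed += burst[i];
    }
    return total;
}
```

For bursts {6, 8, 7, 3} the SJF order is 3, 6, 7, 8 with waits 0, 3, 9, 16: total 28, average 7. Any other order only moves longer bursts earlier and can never lower the average.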

Process Synchronization. What are some requirements/ solutions to the critical section?

Front

A Critical Section is a code segment that accesses shared variables and has to be executed as an atomic action. It means that in a group of cooperating processes, at a given point of time, only one process must be executing its critical section. If any other process also wants to execute its critical section, it must wait until the first one finishes.

Back

What are advantages and disadvantages of each of the following? Synchronous and asynchronous communication Fixed-sized and variable-sized messages Consider both the system level and the programmer level.

Front

A benefit of synchronous communication is that it allows a rendezvous between the sender and receiver. A disadvantage of a blocking send is that a rendezvous may not be required and the message could be delivered asynchronously. As a result, message-passing systems often provide both forms of synchronization. In fixed-size messages, a buffer with a specific size can hold a known number of messages. (Easier for designers and more complicated for users) In variable-sized messages the number of messages that can be held by such a buffer is indefinite length. (Easier for users and more complicated for designers)

Back

Distinguish between PCS and SCS scheduling

Front

PCS scheduling is done local to the process. It is how the thread library schedules threads onto available LWPs. SCS scheduling is the situation where the operating system schedules kernel threads. On systems using either many-to-one or many-to-many, the two scheduling models are fundamentally different. On systems using one-to-one, PCS and SCS are the same.

Back

What is Bounded-Waiting exclusion with TestANDSet()?

Front

Back

Process Synchronization. What is it?

Front

Process Synchronization means sharing system resources by processes in such a way that concurrent access to shared data is handled, thereby minimizing the chance of inconsistent data. Maintaining data consistency demands mechanisms to ensure synchronized execution of cooperating processes.

Back

What are two models of interprocess communication? Briefly describe each of them.

Front

Shared-memory model. Strength: 1. Shared-memory communication is faster than the message-passing model when the processes are on the same machine. Weaknesses: 1. Different processes need to ensure that they are not writing to the same location simultaneously. 2. Processes that communicate using shared memory need to address problems of memory protection and synchronization. Message-passing model. Strength: 1. Easier to implement than the shared-memory model. Weakness: 1. Communication using message passing is slower than shared memory because of the time involved in connection setup.

Back

Can multithreaded solution using multiple user-level threads achieve better performance on a multiprocessor system than on a single-processor system?

Front

We assume that user-level threads are not known to the kernel. In that case, the answer is no, because the scheduling is done at the process level. On the other hand, some OSes allow user-level threads to be assigned to different kernel-level processes for the purposes of scheduling. In this case the multithreaded solution could be faster.

Back

Section 4

(50 cards)

no pre-emption

Front

The processes must not have resources taken away while that resource is being used. Otherwise, deadlock could not occur since the operating system could simply take enough resources from running processes to enable any process to finish.

Back

What are two sets of the services and functions provided by an operating system? Briefly describe those two sets.

Front

One class of services provided by an operating system is to enforce protection between different processes running concurrently in the system. Processes are allowed to access only those memory locations that are associated with their address spaces. Also, processes are not allowed to corrupt files associated with other users. A process is also not allowed to access devices directly without operating system intervention. The second class of services provided by an operating system is to provide new functionality that is not supported directly by the underlying hardware. Virtual memory and file systems are two such examples of new services provided by an operating system.

Back

Throughput

Front

# of processes that complete their execution per time unit

Back

What are the user and system goals in operating system design?

Front

User goals include convenience, reliability, security, and speed. System goals include ease of design, implementation, maintenance, flexibility, and efficiency.

Back

Data parallelism

Front

distributes subsets of the same data across multiple cores, same operation/task on each

Back

What are advantages and disadvantages of the synchronous and asynchronous communication?

Front

A benefit of synchronous communication is that it allows a rendezvous between the sender and receiver. A disadvantage of a blocking send is that a rendezvous may not be required and the message could be delivered asynchronously. As a result, message-passing systems often provide both forms of synchronization.

Back

Deadlock detection

Front

check allocation against resource availability for all possible allocation sequences to determine if the system is in deadlocked state

Back

Under what circumstances is better to use kernel-level threads than user- level threads?

Front

User-level threads are threads that the OS is not aware of; they exist entirely within a process and are scheduled to run within that process's timeslices. The OS is aware of kernel-level threads; kernel threads are scheduled by the OS's scheduling algorithm and require a "lightweight" context switch to switch between (that is, registers, PC, and SP must be changed, but the memory context remains the same among kernel threads in the same process). Kernel-level threads are preferable when threads block frequently (e.g., on I/O or page faults), since the kernel can then schedule another thread, and on multiprocessors, where kernel threads can run in parallel on separate CPUs.

Back

Waiting time

Front

amount of time a process has been waiting in the ready queue

Back

Can multithreaded solution using multiple user-level threads achieve better performance on a multiprocessor system than on a single-processor system?

Front

We assume that user-level threads are not known to the kernel. In that case, the answer is no, because the scheduling is done at the process level. On the other hand, some OSes allow user-level threads to be assigned to different kernel-level processes for the purposes of scheduling. In this case the multithreaded solution could be faster.

Back

What are six major categories of system calls?

Front

Process Control, File Manipulation, Device Manipulation, Information Manipulation, Communication, Protection.

Back

What is Parallelism

Front

implies a system can perform more than one task simultaneously

Back

Explain the differences in the degree to which the following scheduling algorithms discriminate in favor of short processes: a. FCFS b. RR

Front

a. FCFS: discriminates against short jobs, since any short jobs arriving after long jobs will have a longer waiting time. b. RR: treats all jobs equally (giving them equal bursts of CPU time), so short jobs will be able to leave the system faster since they will finish first.

Back

What is the fundamental idea behind a virtual machine?

Front

An operating system or application environment is installed on software that imitates dedicated hardware, so the guest runs as if it had its own physical machine.

Back

Describe four criteria to be used in selecting a CPU scheduling algorithm

Front

CPU utilization: keep the CPU as busy as possible. Throughput: the number of processes that complete their execution per time unit. Turnaround time: the amount of time to execute a particular process. Waiting time: the amount of time a process has been waiting in the ready queue. (Response time, the time from submission of a request until the first response is produced, is a fifth common criterion.)

Back

Resource-Allocation graph

Front

A resource allocation graph tracks which resource is held by which process and which process is waiting for a resource of a particular type. It is very powerful and simple tool to illustrate how interacting processes can deadlock.

Back

Provide two programming examples of multithreading that would not improve performance over a single-threaded solution.

Front

(1) Any kind of sequential program is not a good candidate to be threaded. An example of this is a program that calculates an individual tax return. (2) Another example is a "shell" program such as the C-shell or Korn shell. Such a program must closely monitor its own working space such as open files, environment variables, and current working directory.

Back

What is non-preemptive scheduling?

Front

A process releases the CPU only by its own action: it terminates or switches to the waiting state (e.g., via a system call); it is not forced off by an interrupt. Once the CPU is released, it may be assigned to any process that is ready to run.

Back

Banker's algorithm

Front

When a process starts up, it must state in advance the maximum allocation of resources it may request, up to the amount available on the system. When a request is made, the scheduler determines whether granting the request would leave the system in a safe state. If not, then the process must wait until the request can be granted safely.
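
A minimal Python sketch of the safety check at the heart of the Banker's algorithm (the matrices below are illustrative, in the style of the classic textbook example; they are an assumption, not content from the card):

```python
def is_safe(available, allocation, maximum):
    """Banker's safety check: can all processes finish in some order?"""
    n, m = len(allocation), len(available)
    work = list(available)
    need = [[maximum[i][j] - allocation[i][j] for j in range(m)] for i in range(n)]
    finished = [False] * n
    progress = True
    while progress:
        progress = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j] for j in range(m)):
                # Pretend process i runs to completion and releases its resources.
                for j in range(m):
                    work[j] += allocation[i][j]
                finished[i] = True
                progress = True
    return all(finished)

# Illustrative state: 5 processes, 3 resource types.
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
maximum    = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
print(is_safe([3, 3, 2], allocation, maximum))  # True: a safe sequence exists
```

A request is granted only if the state after granting it would still pass this check.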

Back

What are advantages of providing process cooperation?

Front

Information sharing - Computation speed-up - Modularity - Convenience

Back

CPU utilization

Front

keep the CPU as busy as possible

Back

Task parallelism

Front

distributing threads across cores, each thread performing unique operation

Back

What are advantages and disadvantages of the fixed-sized and variable-sized messages?

Front

The implications of this are mostly related to buffering issues; with fixed-size messages, a buffer with a specific size can hold a known number of messages. The number of variable-sized messages that can be held by such a buffer is unknown. Consider how Windows 2000 handles this situation: with fixed-sized messages (anything < 256 bytes), the messages are copied from the address space of the sender to the address space of the receiving process. Larger messages (i.e. variable-sized messages) use shared memory to pass the message.

Back

Consider a multiprocessor system and a multithreaded program written using the many-to-many threading model. Let the user-level threads in the program be more numerous than the processors in the system. Discuss the performance implications of the following scenarios. a. The number of kernel threads allocated to the program is less than the number of processors b. The number of kernel threads allocated to the program is equal to the number of processors c. The number of kernel threads allocated to the program is greater than the number of processors but less than the number of user-level threads

Front

When the number of kernel threads is less than the number of processors, then some of the processors would remain idle since the scheduler maps only kernel threads to processors and not user-level threads to processors. When the number of kernel threads is exactly equal to the number of processors, then it is possible that all of the processors might be utilized simultaneously. However, when a kernel-thread blocks inside the kernel (due to a page fault or while invoking system calls), the corresponding processor would remain idle. When there are more kernel threads than processors, a blocked kernel thread could be swapped out in favor of another kernel thread that is ready to execute, thereby increasing the utilization of the multiprocessor system

Back

When are processes created?

Front

1. On system startup. Daemons, or background processes, are automatically started by the Operating System when the system is initialized, and they work in the background either listening for events to occur or maintaining some part of the system. 2. On system calls. A process that is already running may wish to give birth to additional processes to help it in a particular task. The latter processes are known as child processes and the former is the parent. It is common to have such a hierarchy of processes. 3. On user requests. Double clicking on a program icon or executing it via the command line automatically creates a process for that program. It is more than likely that it will then generate child processes to perform separate tasks.

Back

mutual exclusion

Front

The resources involved must be unshareable; otherwise, the processes would not be prevented from using the resource when necessary.

Back

What is Concurrency

Front

supports more than one task making progress

Back

resource waiting or circular wait

Front

A circular chain of processes exists, with each process holding resources that are currently being requested by the next process in the chain. The cycle theorem (which states that "a cycle in the resource graph is necessary for deadlock to occur") implies that without such a chain, deadlock cannot occur.

Back

What are five major activities of an operating system with regard to process management?

Front

Creation and deletion of user and system processes. Suspension and resumption of processes. A mechanism for process synchronization. A mechanism for process communication. A mechanism for deadlock handling.

Back

Deadlock prevention

Front

Deadlocks can be prevented by preventing at least one of the four required conditions

Back

Many CPU scheduling algorithms are parameterized. For example, the RR algorithm requires a parameter to indicate the time slice. Multilevel feedback queues require parameters to define the number of queues, the scheduling algorithms for each queue, the criteria used to move processes between queues, and so on. These algorithms are thus really sets of algorithms (for example, the set of RR algorithms for all time slices, and so on). One set of algorithms may include another (for example, the FCFS algorithm is the RR algorithm with an infinite time quantum). What (if any) relation holds between the following pairs of sets of algorithms? a. Priority and SJF b. Multilevel feedback queues and FCFS c. Priority and FCFS d. RR and SJF

Front

a. The shortest job has the highest priority. b. The lowest level of MLFQ is FCFS. c. FCFS gives the highest priority to the job having been in existence the longest. d. None.

Back

Turnaround time

Front

amount of time to execute a particular process

Back

Consider a system running ten I/O-bound tasks and one CPU-bound task. Assume that the I/O-bound tasks issue an I/O operation once for every millisecond of CPU computing and that each I/O operation takes 10 milliseconds to complete. Also assume that the context switching overhead is 0.1millisecond and that all processes are long-running tasks. What is the CPU utilization for a round-robin scheduler when: a. The time quantum is 1 millisecond b. The time quantum is 10 milliseconds

Front

a) The time quantum is 1 millisecond. Answer: every task uses its whole 1 ms quantum (the I/O operations for the I/O-bound tasks return in time for their next turn), so each round does 1 ms × 11 = 11 ms of useful work. The only lost time is the context switches: 11 × 0.1 ms = 1.1 ms, giving 11 ms + 1.1 ms = 12.1 ms total per round. CPU utilization = 11 / 12.1 = 0.909, or 90.9%. b) The time quantum is 10 milliseconds. Answer: each I/O-bound task still runs for only 1 ms before blocking (plus a 0.1 ms context switch), while the CPU-bound task uses its full 10 ms quantum. Total round time = (1 ms + 0.1 ms) × 10 + 10 ms + 0.1 ms = 21.1 ms. Useful CPU time = 1 ms × 10 + 10 ms = 20 ms. CPU utilization = 20 / 21.1 = 0.9478, or 94.78%.
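
The arithmetic above can be checked with a short Python sketch (a simplification that assumes the steady-state rounds described in the answer):

```python
def rr_utilization(quantum):
    """CPU utilization for 10 I/O-bound tasks (1 ms bursts) and 1 CPU-bound
    task, with a 0.1 ms context switch, under round-robin scheduling.
    Workload parameters are taken from the exercise statement."""
    switch = 0.1
    if quantum == 1:
        useful = 1 * 11                        # every task uses its full 1 ms quantum
        total = useful + 11 * switch           # plus 11 context switches
    else:  # quantum >= 10 ms
        useful = 1 * 10 + 10                   # I/O tasks run 1 ms each; CPU task 10 ms
        total = (1 + switch) * 10 + 10 + switch
    return useful / total

print(round(rr_utilization(1), 3))   # 0.909
print(round(rr_utilization(10), 3))  # 0.948
```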

Back

What is preemptive scheduling?

Front

If an interrupt causes the removal of a process from the CPU, after the interrupt is serviced the CPU may be given to a different process rather than back to the one that had it. Consider access to shared data. Consider preemption while in kernel mode. Consider interrupts occurring during crucial OS activities.

Back

hold and wait or partial allocation

Front

The processes must hold the resources they have already been allocated while waiting for other (requested) resources. If the process had to release its resources when a new resource or resources were requested, deadlock could not occur because the process would not prevent others from using resources that it controlled.

Back

MS-DOS provided no means of concurrent processing. Discuss three major complications that concurrent processing adds to an operating system.

Front

1. A method of time sharing must be implemented to allow each of several processes to have access to the system . 2. Processes and system resources must have protections and must be protected from each other. Any given process must be limited in the amount of memory it can use and the operations it can perform on devices like disks. 3. Care must be taken in the kernel to prevent deadlocks between processes, so processes aren't waiting for each other's allocated resources

Back

A CPU scheduling algorithm determines an order for the execution of its scheduled processes. Given n processes to be scheduled on one processor, how many possible different schedules are there? Give a formula in terms of n.

Front

n!

Back

What are advantages and disadvantages of using user-level threads?

Front

The most obvious advantage of this technique is that a user-level threads package can be implemented on an operating system that does not support threads. Other advantages: User-level threads do not require modification to the operating system. Simple representation: each thread is represented simply by a PC, registers, a stack, and a small control block, all stored in the user process address space. Simple management: creating a thread, switching between threads, and synchronization between threads can all be done without intervention of the kernel. Fast and efficient: thread switching is not much more expensive than a procedure call. Disadvantages: There is a lack of coordination between threads and the operating system kernel, so the process as a whole gets one time slice irrespective of whether it has one thread or 1000 threads within it. It is up to each thread to relinquish control to other threads. User-level threads require non-blocking system calls (i.e., a multithreaded kernel); otherwise, the entire process will block in the kernel even if there are runnable threads left in the process. For example, if one thread causes a page fault, the whole process blocks.

Back

Necessary conditions for deadlock

Front

mutual exclusion hold and wait no preemption circular wait.

Back

Can a link be associated with more than two processes?

Front

A link is associated with exactly one pair of communicating processes.

Back

What are three different forms of a user interface that almost all an operating system have?

Front

User interface: command-line interface, batch interface, graphical user interface.

Back

What advantage is there in having different time-quantum sizes on different levels of a multilevel queueing system?

Front

Processes which need more frequent servicing, such as interactive processes, can be in a queue with a small q. Processes that are computationally intensive can be in a queue with a larger quantum, requiring fewer context switches to complete the processing, making more efficient use of the CPU.

Back

What are two models of inter-process communication? Briefly describe each of them.

Front

1. Shared-memory model: the cooperating processes share a region of memory for sharing of information. 2. Message-passing model: data is shared between processes by sending and receiving messages between the cooperating processes.

Back

Provide two programming examples of multithreading giving improved performance over a single-threaded solution.

Front

(1) A Web server that services each request in a separate thread. (2) A parallelized application such as matrix multiplication where different parts of the matrix may be worked on in parallel. (3) An interactive GUI program such as a debugger where a thread is used to monitor user input, another thread represents the running application, and a third thread monitors performance.

Back

Deadlock avoidance

Front

The system needs to have information about all the processes to know what they may request in the future

Back

Define the difference between preemptive and nonpreemptive scheduling.

Front

(1) Preemptive scheduling allows a process to be interrupted in the midst of its execution, taking the CPU away and allocating it to another process. Nonpreemptive scheduling ensures that a process relinquishes control of the CPU only when it finishes with its current CPU burst. (2) If nonpreemptive scheduling is used in a computer center, a process is capable of keeping other processes waiting for a long time.

Back

What is the main advantage for an operating-system designer of using a virtual-machine architecture? What is the main advantage for a user?

Front

The system is easy to debug, and security problems are easy to solve. Virtual machines also provide a good platform for operating system research, since many different operating systems may run on one physical system and system bugs will not crash the machine or cause downtime. The main advantage for a user is the ability to run several different operating systems, and their applications, concurrently on the same physical machine.

Back

Which of the following components of program state are shared across threads in a multithreaded process? 1. Register values 2. Code 3. Global variables 4. Stack memory

Front

Threads share the code (text) section, global variables, the heap, and the page table; register values and stack memory are private to each thread. Of the listed components, code and global variables are shared.

Back

Response time

Front

amount of time it takes from when a request was submitted until the first response is produced, not output (for time-sharing environment)

Back

In what ways is the modular kernel approach similar to the layered approach? In what ways does it differ from the layered approach?

Front

The modular kernel approach requires subsystems to interact with each other through carefully constructed interfaces that are typically narrow (in terms of the functionality that is exposed to external modules). The layered kernel approach is similar in that respect. However, the layered kernel imposes a strict ordering of subsystems such that subsystems at the lower layers are not allowed to invoke operations corresponding to the upper-layer subsystems. There are no such restrictions in the modular-kernel approach, wherein modules are free to invoke each other without any constraints.

Back

Section 5

(50 cards)

Binding of data to physical memory

Front

memory addresses must be bound to physical memory addresses, which typically occurs in several stages: Compile Time, Load Time, and Execution Time

Back

Process Synchronization: Readers-writers problem

Front

Several reader processes may read the shared data concurrently, but a writer needs exclusive access; the processes must be synchronized so that readers and a writer never access the data at the same time.

Back

Second-Chance Algorithm

Front

Second-Chance algorithm is actually a FIFO replacement algorithm with a small modification that causes it to approximate LRU
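
A runnable Python sketch of the idea (an illustration, not from the card): pages wait in FIFO order, but a page whose reference bit is set is given a second chance instead of being evicted.

```python
from collections import deque

def second_chance_faults(refs, nframes):
    """Count page faults under the second-chance (clock) algorithm."""
    queue = deque()                  # [page, reference_bit] entries in FIFO order
    faults = 0
    for page in refs:
        for entry in queue:
            if entry[0] == page:
                entry[1] = 1         # hit: give the page a second chance
                break
        else:
            faults += 1              # miss: a page fault occurs
            if len(queue) == nframes:
                # Skip pages whose reference bit is set, clearing it as we go.
                while queue[0][1] == 1:
                    old = queue.popleft()
                    old[1] = 0
                    queue.append(old)
                queue.popleft()      # victim: first page with reference bit 0
            queue.append([page, 0])  # load the new page with its bit cleared
    return faults

# With no re-references, second chance degenerates to plain FIFO.
print(second_chance_faults([1, 2, 3, 4, 1, 2, 4, 3], 3))  # 7
```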

Back

Recovery from deadlock

Front

(1)Inform the system operator, and allow him/her to take manual intervention. (2)Terminate one or more processes involved in the deadlock (3)Preempt resources.

Back

Fragmentation in both systems

Front

There is external fragmentation in segmentation and internal in paging

Back

Wait-for graph

Front

A wait-for graph can be constructed from a resource-allocation graph by eliminating the resource nodes and collapsing the associated edges: an edge from Pi to Pj means that Pi is waiting for a resource held by Pj.

Back

Necessary conditions for deadlock

Front

mutual exclusion, hold and wait, no preemption, circular wait

Back

Deadlock avoidance

Front

Apply an algorithm that uses advance knowledge of resource needs: with a single instance of each resource type, the resource-allocation-graph algorithm; with multiple instances, the Banker's algorithm.

Back

Different structures of the page table

Front

Each page-table entry typically holds: the physical address (frame number); protection bits (read/write/execute); and status bits (valid, reference, modify, copy-on-write, age).

Back

Steps in handling a page fault

Front

Reference → trap to the OS → the page is located on the backing store → bring in the missing page → reset the page table → restart the instruction.

Back

Allocation algorithms Global vs local

Front

Global replacement: a process may select a replacement frame from the set of all frames, even one currently allocated to another process. Local replacement: each process selects only from its own allocated frames.

Back

Effective access time

Front

TLB lookup takes 5 ns and a memory access takes 100 ns. Let α be the hit ratio (the probability of finding the page number in the TLB). Effective Access Time = (5 + 100)α + (5 + 100 + 100)(1 − α). Suppose α = 80% (for example, TLB size = 16): EAT = 105 × 0.8 + 205 × 0.2 = 84 + 41 = 125 ns.
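
The calculation as a small Python function (parameter values taken from the card's example):

```python
def effective_access_time(hit_ratio, tlb=5, mem=100):
    """EAT in ns: a TLB hit costs tlb + mem; a miss adds one extra memory
    access to read the page table. Default timings are the card's example."""
    hit_cost = tlb + mem            # 105 ns on a hit
    miss_cost = tlb + mem + mem     # 205 ns on a miss
    return hit_cost * hit_ratio + miss_cost * (1 - hit_ratio)

print(effective_access_time(0.8))  # ~125 ns for an 80% hit ratio
```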

Back

Benefits of virtual memory

Front

The primary benefits of virtual memory include freeing applications from having to manage a shared memory space, increased security due to memory isolation, and being able to conceptually use more memory than might be physically available, using the technique of paging.

Back

Page replacement methods

Front

FIFO, optimal (OPT), LRU, LRU approximations such as second-chance (clock), and counting-based methods (LFU, MFU).

Back

Memory allocation Strategies of selecting the free space

Front

First fit, best fit, worst fit.

Back

Dynamic storage allocation

Front

Two basic operations in dynamic storage management: Allocate a given number of bytes Free a previously allocated block Two general approaches to dynamic storage allocation: Stack allocation (hierarchical): restricted, but simple and efficient. Heap allocation: more general, but less efficient, more difficult to implement.

Back

Effective access time (virtual memory)

Front

Effective access time = (1 − p) × memory access time + p × page-fault service time, where p is the page-fault rate.

Back

Deadlock prevention

Front

prevent one of the necessary conditions

Back

Memory protection in a paging system

Front

Memory protection implemented by associating protection bit with each frame. Valid-invalid bit attached to each entry in the page table: 1. "valid" indicates that the associated page is in the process' logical address space, and is thus a legal page. 2. "invalid" indicates that the page is not in the process' logical address space.

Back

Thrashing

Front

A process is thrashing if it spends more time paging than executing; it happens when the process does not have enough frames to hold the pages it is actively using, so it continually page-faults.

Back

Deadlock detection

Front

Deadlock detection is the process of actually determining that a deadlock exists and identifying the processes and resources involved in the deadlock. The basic idea is to check allocation against resource availability for all possible allocation sequences to determine if the system is in deadlocked state
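
With a single instance of each resource type, deadlock detection reduces to finding a cycle in the wait-for graph. A minimal Python sketch (the dictionary encoding of the graph is an assumption for illustration):

```python
def has_cycle(wait_for):
    """Detect a cycle in a wait-for graph {process: [processes it waits on]}.
    A cycle means the processes on it are deadlocked."""
    visiting, done = set(), set()

    def dfs(p):
        visiting.add(p)
        for q in wait_for.get(p, []):
            if q in visiting:               # back edge: deadlock cycle found
                return True
            if q not in done and dfs(q):
                return True
        visiting.discard(p)
        done.add(p)
        return False

    return any(p not in done and dfs(p) for p in wait_for)

print(has_cycle({1: [2], 2: [3], 3: [1]}))  # True: P1 -> P2 -> P3 -> P1
print(has_cycle({1: [2], 2: [3], 3: []}))   # False: no cycle, no deadlock
```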

Back

Advantages of the paging system over segmentation system

Front

No external fragmentation: any free frame can be allocated to any page, so no compaction is needed. Allocation and swapping are simple because all pages and frames are the same size.

Back

Copy-on-write

Front

Multiple callers ask for resources which are initially indistinguishable, you can give them pointers to the same resource. This function can be maintained until a caller tries to modify its "copy" of the resource, at which point a true private copy is created to prevent the changes becoming visible to everyone else. All of this happens transparently to the callers. The primary advantage is that if a caller never makes any modifications, no private copy need ever be created.

Back

Advantages of the segmentation system over paging system

Front

No internal fragmentation (advantage). There is external fragmentation (disadvantage). Keeps blocks of code or data as single units (advantage).

Back

The method of implementing paging system

Front

We can implement paging by breaking physical memory into frames (fixed memory blocks) and logical memory into pages (equal sized blocks)
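
A minimal Python sketch of the resulting address translation (the page size and page-table contents are hypothetical values for illustration):

```python
PAGE_SIZE = 1024  # bytes per page/frame (an assumed value)

def translate(logical_addr, page_table):
    """Split a logical address into (page, offset) and map page -> frame."""
    page, offset = divmod(logical_addr, PAGE_SIZE)
    frame = page_table[page]           # raises KeyError if the page is invalid
    return frame * PAGE_SIZE + offset

# Hypothetical page table: logical page -> physical frame
table = {0: 5, 1: 2, 2: 7}
print(translate(1050, table))  # page 1, offset 26 -> 2*1024 + 26 = 2074
```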

Back

Process Synchronization: The bounded-buffer problem

Front

Producer and consumer processes share a buffer of fixed size n; a producer must wait when the buffer is full, and a consumer must wait when it is empty.

Back

Demand paging

Front

Processes reside in secondary memory and pages are loaded only on demand, not in advance.

Back

Logical address space vs physical address space

Front

The address generated by the CPU is a logical address, whereas the address actually seen by the memory hardware is a physical address.

Back

Memory protection in segmentation systems

Front

Segmentation is one of the most common ways to achieve memory protection. In a computer system using segmentation, an instruction operand that refers to a memory location includes a value that identifies a segment and an offset within that segment. A segment has a set of permissions, and a length, associated with it. If the currently running process is allowed by the permissions to make the type of reference to memory that it is attempting to make, and the offset within the segment is within the range specified by the length of the segment, the reference is permitted; otherwise, a hardware exception is raised.

Back

Process Synchronization: Monitors

Front

In concurrent programming, a monitor is a synchronization construct that allows threads to have both mutual exclusion and the ability to wait (block) for a certain condition to become true. Monitors also have a mechanism for signalling other threads that their condition has been met
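
Python's threading.Condition bundles a lock with wait/notify, so it can express a monitor-style construct. A minimal sketch (the BoundedCounter class is a hypothetical example, not from the card):

```python
import threading

class BoundedCounter:
    """Monitor-style object: the condition's lock gives mutual exclusion,
    and wait/notify let threads block until 'value < limit' may hold."""
    def __init__(self, limit):
        self.limit = limit
        self.value = 0
        self.cond = threading.Condition()   # lock + wait/signal in one object

    def increment(self):
        with self.cond:                     # only one thread active inside
            while self.value >= self.limit:
                self.cond.wait()            # block until the condition may hold
            self.value += 1

    def decrement(self):
        with self.cond:
            self.value -= 1
            self.cond.notify()              # signal one waiting thread

c = BoundedCounter(limit=2)
c.increment(); c.increment()
c.decrement(); c.increment()
print(c.value)  # 2
```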

Back

Section 6

(50 cards)

If a graph contains no cycles, is there a potential for Deadlock?

Front

No

Back

Different approaches to multi-processing and main memory sharing

Front

Partition memory into fixed-size chunks; partition memory into variable-size chunks; keep a process in the same memory space as it executes; allow a process to change memory location as it executes; require that a process be placed in one uninterrupted block; allow a process to be placed into separate portions of memory.

Back

Program structure

Front

How a program is structured affects its paging behavior: for example, traversing a two-dimensional array in the order it is laid out in memory (row by row for row-major storage) touches one page at a time, while traversing it column by column can cause a page fault on nearly every access.

Back

Describe memory partitions of a Single User System

Front

One for the OS, one for the user.

Back

Deadlock avoidance - addition priori info

Front

Each process declares the max number of each type of resource it may need. The avoidance algorithm examines resource-allocation state to ensure no circular-wait condition

Back

Main Memory Placement Strategy First-Fit

Front

Allocate the first hole that is big enough

Back

Describe a multi-partiton system

Front

Many fixed partitions of memory

Back

Name two Avoidance Algorithms

Front

Single instance of resource type: Resource-allocation graph Multiple instances of resource type: Use the Banker's Algorithm

Back

Describe Deadlock

Front

When a process is holding a resource and waiting to acquire a second resource held by another process. Example: Process 1 holds A and wants B; Process 2 holds B and wants A.

Back

Deadlock Recovery Method Process Termination

Front

Abort all processes, or abort one at a time until the deadlock is eliminated. Order of choosing: priority, how long the process has been computing and how much time is left, resources the process has used, resources it still needs, how many processes will be terminated, and whether the process is interactive or batch.

Back

Describe the Readers-Writers problem

Front

Multiple readers can read at the same time but only one Writer can access at a time

Back

Define "Mutual Exclusion"

Front

Only one process at a time

Back

Describe the Bounded-Buffer Problem

Front

A fixed number of buffers, each able to hold one item.

Back

Deadlock prevention: Hold and wait

Front

- guarantee that when a process requests a resource, it does not hold any other resource

Back

Describe Readers and Writers

Front

With a shared dataset, Readers can only read data Writers can read and write

Back

Define "Hold and Wait"

Front

A process is holding at least one resource and waiting to acquire more from other processes

Back

Deadlock Recovery Method Resource Pre-emption

Front

Select a victim process so as to minimize cost; roll the victim back to a safe state and restart it from that state.

Back

In early multi-programming systems, processes were assigned ______ at compile time

Front

Absolute memory addresses

Back

List the Deadlock prevention methods

Front

Mutual Exclusion Hold and wait No Pre-emption Circular Wait

Back

Methods to handle Deadlock

Front

Ensure the system never enters a deadlocked state. Allow the system to enter deadlock, then detect it and recover. Ignore the problem and pretend deadlock does not exist (used by most OSes).

Back

Main Memory Placement Strategy Worst-fit

Front

Allocate the largest hole. Worst among the 3 strategies

Back

Relocatable memory addresses allowed what in a Multi-Partition System

Front

A process can run in any memory partition it fits in. Inefficient because a process might not occupy its entire partition, and a process may be forced to wait because every available partition is too small.

Back

Prepaging

Front

Back

How to reduce fragmentation?

Front

Compaction: shuffle memory contents to create contiguous blocks of free memory. Only possible if relocation is dynamic and done at execution time.

Back

What is multi-processes?

Front

Several processes sharing main memory

Back

What addresses do programs deal with?

Front

Logical addresses. Never sees the real physical address

Back

Page size

Front

Back

Describe the variations of the Readers-Writers problem

Front

1: No reader is kept waiting unless a writer has already obtained permission to use the shared object. 2: Once a writer is ready, it performs its write as soon as possible (no new readers may start).

Back

What four conditions must hold simultaneously to cause Deadlock?

Front

Mutual Exclusion, Hold and wait, No pre-emption, Circular wait

Back

Resource-Allocation graph scheme Request edge converts to assignment edge when

Front

The resource is allocated to the process

Back

Define "Circular Wait"

Front

A set of processes P0, P1, P2, ..., Pn exists such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, and so on, with Pn waiting for a resource held by P0

Back

Banker's Algorithm

Front

Each process declares the maximum resources it may use. A requesting process may have to wait until all of its needed resources can be granted, and it must return them in a finite amount of time

Back
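
The safety check at the heart of the Banker's Algorithm can be sketched in Python. This is a minimal illustration of the algorithm, not any real OS implementation; all names (is_safe, available, max_need, alloc) are hypothetical:

```python
def is_safe(available, max_need, alloc):
    """Return a safe sequence of process indices, or None if the state is unsafe."""
    # Remaining need of each process = declared maximum - current allocation
    need = [[m - a for m, a in zip(mrow, arow)]
            for mrow, arow in zip(max_need, alloc)]
    work = list(available)          # resources currently free
    finish = [False] * len(alloc)
    sequence = []
    progressed = True
    while progressed:
        progressed = False
        for i, done in enumerate(finish):
            # Pick any unfinished process whose remaining need fits in work
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # It can run to completion and return everything it holds
                work = [w + a for w, a in zip(work, alloc[i])]
                finish[i] = True
                sequence.append(i)
                progressed = True
    return sequence if all(finish) else None
```

Running it on the classic five-process, three-resource example from the textbooks yields a safe sequence; a state where no process's need fits the available vector returns None.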

Functions of the memory manager

Front

Allocating main memory to processes Protecting each process' memory space Using disk as extension of Main Mem

Back

Describe the Dining-Philosophers Problem

Front

A round table of philosophers eat from a shared bowl. A fork is placed between each pair, and a philosopher needs both adjacent forks to eat. Thus, if an adjacent person is eating, you cannot eat until they finish and you can take their fork. Each eats for a fixed amount of time, then stops so others can proceed. Illustrates how to avoid deadlock and starvation of processes, where no progress is made.

Back
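
One standard fix, imposing a global order on fork acquisition so the circular-wait condition cannot arise, can be sketched with Python threads. The function and parameter names are hypothetical, and this is an illustrative sketch rather than a canonical solution:

```python
import threading

def dine(n=5, rounds=10):
    """Dining philosophers with a resource-ordering fix: every philosopher
    picks up the lower-numbered fork first, breaking the circular wait."""
    forks = [threading.Lock() for _ in range(n)]
    meals = [0] * n

    def philosopher(i):
        # Global ordering on forks: always acquire the lower index first
        first, second = sorted((i, (i + 1) % n))
        for _ in range(rounds):
            with forks[first]:
                with forks[second]:
                    meals[i] += 1          # "eating" with both forks held
    threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(n)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return meals
```

Because every thread requests forks in increasing index order, no cycle of waiting threads can form, so the run always terminates.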

Main Memory Placement Strategy Best-fit

Front

Allocate the smallest hole that is big enough. Must search entire list unless it's ordered by size Produces smallest leftover hole

Back

Define a Safe State for the system

Front

If the resources process Pi needs are not immediately available, Pi can wait. When the holding processes finish, the waiting process can obtain its needed resources and execute fully. When that process terminates, the next process can get its resources, and so on.

Back

Describe Monitors

Front

A high-level synchronization construct that encapsulates shared data and the procedures that operate on it. Only one process may be active inside the monitor at a time, and the shared variables are accessible only through the monitor's procedures.

Back

Memory-Management Unit does what

Front

Hardware. At runtime, maps virtual addresses to physical addresses

Back

What is a way to solve the Reader-Writer problem?

Front

Reader-writer locks, which allow multiple readers to hold the lock concurrently but give a writer exclusive access

Back
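
A first-readers-writers lock can be sketched with Python's threading primitives. The class name and structure here are hypothetical illustration, not a production lock (in this variation, a steady stream of readers can starve writers):

```python
import threading

class ReadWriteLock:
    """First readers-writers sketch: readers share, a writer is exclusive."""
    def __init__(self):
        self._readers = 0
        self._mutex = threading.Lock()   # protects the _readers counter
        self._wrt = threading.Lock()     # held by a writer, or by readers as a group

    def acquire_read(self):
        with self._mutex:
            self._readers += 1
            if self._readers == 1:
                self._wrt.acquire()      # first reader locks out writers

    def release_read(self):
        with self._mutex:
            self._readers -= 1
            if self._readers == 0:
                self._wrt.release()      # last reader lets writers in

    def acquire_write(self):
        self._wrt.acquire()

    def release_write(self):
        self._wrt.release()
```

While any reader holds the lock, a writer's acquire blocks; once the last reader releases, a writer may proceed.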

How can processes exceed the total physical memory space that exists?

Front

With Swapping of memory

Back

Describe Contiguous Memory Allocation System

Front

Multiple variable-sized partitions are allocated as needed by processes, if a large enough contiguous space exists. The OS maintains a table of the allocated partitions and free partitions

Back

Resource-Allocation graph scheme When a resource is released, the assignment edge is converted to

Front

A claim edge

Back

List methods to recover from Deadlock

Front

Process termination, Resource pre-emption

Back

Deadlock prevention: No Pre-emption

Front

- If a process holding resources requests an additional resource that cannot immediately be allocated, it releases all resources it currently holds. The process is restarted only when it can regain all of its needed resources

Back

Deadlock Detection algorithm w/ Multiple instances of resource types

Front

Let Work and Finish be vectors of length m (resource types) and n (processes). Initialize Work = Available and Finish[i] = false for all i. Find an index i such that Finish[i] = false and Request_i <= Work; then set Work = Work + Allocation_i and Finish[i] = true, and repeat. If any Finish[i] is still false when no such index exists, the system is in deadlock.

Back
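
The detection loop can be sketched in Python. Work grows as each "finishable" process is assumed to return its allocation; whatever remains unfinished is deadlocked. Names are hypothetical and this is a minimal sketch of the textbook algorithm:

```python
def find_deadlocked(available, alloc, request):
    """Return indices of deadlocked processes (empty list if none)."""
    work = list(available)
    finish = [False] * len(alloc)
    progressed = True
    while progressed:
        progressed = False
        for i, done in enumerate(finish):
            # A process whose outstanding request fits in work can finish
            if not done and all(r <= w for r, w in zip(request[i], work)):
                work = [w + a for w, a in zip(work, alloc[i])]  # reclaim its allocation
                finish[i] = True
                progressed = True
    return [i for i, done in enumerate(finish) if not done]
```

With the classic example state there is no deadlock; raising one process's request by a single instance deadlocks four of the five processes.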

Resource-Allocation graph scheme Claim edge converts to request edge when

Front

A process requests a resource

Back

Working-set model

Front

The set of pages referenced in the most recent working-set window approximates a process's locality; keeping each process's working set in memory prevents thrashing

Back

Deadlock prevention: Circular Wait

Front

- Impose a total ordering on resource types and require that each process request resources in increasing order

Back

Resource-Allocation graph scheme Claim Edge

Front

Dashed line. Represents that the process may request the resource

Back

Define "No Pre-emption"

Front

A resource can only be released voluntarily by the holding process after it has completed its task.

Back

Section 7

(50 cards)

Segmentation Architecture Logical Addressing Structure

Front

Segment #, Offset

Back

Regarding Segmentation and Paging Every process has what

Front

A segment table, and each segment has its own page table, so there are multiple page tables per process

Back

Swapping Method: Backing Store

Front

Fast disk w/ copies of memory images for all users. Must provide direct access

Back

Segment-Table Base Register

Front

Points to segment table's location in Mem

Back

Calculating Effective Access Time

Front

EAT = (MA + e)a + (2MA + e)(1 - a) = 2MA + e - MA*a, where a = hit ratio, e = associative lookup time, MA = memory access time

Back
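
The formula above is easy to check numerically. A minimal sketch (function name hypothetical):

```python
def effective_access_time(ma, eps, alpha):
    """EAT = (MA + e)*a + (2*MA + e)*(1 - a).
    A TLB hit costs one memory access plus the lookup; a miss costs two."""
    return (ma + eps) * alpha + (2 * ma + eps) * (1 - alpha)
```

For example, with MA = 100 ns, lookup e = 20 ns, and hit ratio a = 0.8, both the expanded form and the simplified form 2MA + e - MA*a give 140 ns.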

T/F Pages of a Segment do not need to be contiguous in Memory

Front

True

Back

Segment Table

Front

Maps 2D logical addresses to physical addresses

Back

Advantages of FIFO Replacement

Front

Replaces the oldest page in memory. Easy to implement

Back
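
FIFO replacement can be simulated in a few lines, which also makes Belady's anomaly (more frames, more faults) easy to observe. A minimal sketch, with hypothetical names:

```python
from collections import deque

def fifo_faults(refs, frames):
    """Count page faults for a reference string under FIFO replacement."""
    mem, order, faults = set(), deque(), 0
    for page in refs:
        if page not in mem:
            faults += 1
            if len(mem) == frames:
                mem.discard(order.popleft())   # evict the oldest resident page
            mem.add(page)
            order.append(page)                 # arrival order, not usage order
    return faults
```

On the textbook reference string 7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1 with 3 frames, FIFO incurs 15 faults; the string 1,2,3,4,1,2,5,1,2,3,4,5 shows Belady's anomaly (9 faults with 3 frames, 10 with 4).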

STLR

Front

Segment-Table Length Register

Back

LRU Implementation Methods

Front

Counters: record the time of each page's last use. Stack: a doubly linked stack of page numbers, with the most recently used page on top

Back

Where is a page table implemented?

Front

In main memory. This means every data access requires two memory accesses: one for the page-table entry and one for the data itself. Resolved with the use of a translation look-aside buffer (TLB)

Back

Segmentation Table Each entry has

Front

Base - Starting Physical address where segment resides in memory Limit - Length of the segment

Back

Define: Pages

Front

Logical memory divided into blocks of the same size called pages

Back

Describe valid and invalid bit meanings

Front

Valid - indicates the page is part of the processes' logical address space and thus legal Invalid - Page is not in the processes' logical address space

Back

Define: Page Table

Front

Translates logical to physical addresses

Back

Paging: Calculate internal fragmentation

Front

# Pages = ceiling(Process Size / Page Size). Internal Fragmentation = (# Pages * Page Size) - Process Size, i.e., the unused portion of the last page

Back
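
The calculation above is a two-liner in Python. A minimal sketch (function name hypothetical):

```python
import math

def internal_fragmentation(process_size, page_size):
    """Unused space in the last page allocated to the process."""
    pages = math.ceil(process_size / page_size)   # last page may be partly empty
    return pages * page_size - process_size
```

For a 72,766-byte process with 2,048-byte pages, 36 pages are needed and the last page wastes 962 bytes; a process that is an exact multiple of the page size wastes nothing.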

What page table structure is common in systems with address spaces larger than 32 bits?

Front

Hashed Page Tables

Back

T/F: Physical address space of a process can be non-contiguous

Front

TRUE

Back

T/F Pages of a Segment do not need to be in Mem at the same time

Front

True

Back

T/F: Relocation registers are used to protect user processes from each other

Front

true

Back

Describe a Hashed Page Table

Front

Virtual Page # is hashed into page table.

Back

Describe an Inverted Page Table

Front

Instead of tracking all logical pages, track physical frames: one entry per frame, associating the frame with the pid and virtual page stored in it

Back

Hashed Paging What does each element contain?

Front

Virtual page number, value of the mapped page frame, pointer to the next element in the chain

Back

Page Replacement Strategeies

Front

Random, FIFO, Least Recently Used, Optimal, Least Frequently Used, Most Frequently Used

Back

Segment-Table Length Register

Front

Number of segments used by a program

Back

Page-Replacement Algorithm

Front

Chooses which frame to replace, aiming for the lowest page-fault rate on both first access and re-access

Back

Working Set

Front

Collection of Pages a process is actively referencing

Back

Describe the logical -> physical address in a Pentium

Front

CPU -> Logical Address -> [Segmentation Unit] -> Linear Address -> [Paging Unit] -> Physical Address -> [Physical Memory]

Back

The Ready Queue contains

Front

Ready-to-run processes with memory stored on disk

Back

Forward-Mapped Page Table

Front

Another name for two-level paging: P1 is an index into an outer page table, whose entry points to the inner page table indexed by P2

Back

Virtual Address =

Front

= (S, P, D), where S = virtual segment #, P = page # within the segment, D = displacement from the start of the page

Back

How to determine page number and offset of a logical address

Front

Page number = Address / Page Size (integer division). Offset = Address mod Page Size

Back
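
This split is integer division and remainder. A minimal sketch (function name hypothetical):

```python
def split_address(addr, page_size):
    """Return (page number, offset) for a logical address."""
    return addr // page_size, addr % page_size
```

For example, logical address 2049 with 1,024-byte pages falls at offset 1 of page 2, since 2 * 1024 = 2048.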

What are the major concerns of swapping?

Front

Transfer time and amount of memory being swapped

Back

Translation look-aside buffer (TLB)

Front

Hardware Stores address-space identifiers in each entry

Back

Describe use of Shared Pages in memory

Front

Read-only. Good for storing a copy of re-entrant code shared among processes. Also used for inter-process communication

Back

Advantages of random replacement

Front

Low overhead

Back

STBR

Front

Segment-Table Base Register

Back

Swapping Method: Roll out, Roll in

Front

Used for priority-based scheduling. Lower-priority processes are swapped out so higher-priority processes can be loaded and executed

Back

Describe Optimal Algorithm

Front

Replace page that will not be referenced for the longest period of time

Back

Describe the address translation scheme

Front

Page Number (p) | Page Offset (d)

Back

PTLR

Front

Page-Table Length Register Indicates size of the page table

Back

Describe Two-Level Page Table Translation

Front

The logical address is split into (P1, P2, D). P1 indexes the outer page table, whose entry points to an inner page table; P2 indexes the inner page table to find the frame; D is the offset within that frame

Back
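
The (P1, P2, D) split is pure bit manipulation. A minimal sketch, assuming for illustration a 32-bit address with a 10/10/12 split (a common textbook example); all names are hypothetical:

```python
def split_two_level(vaddr, p1_bits, p2_bits, d_bits):
    """Split a virtual address into (outer index, inner index, offset)."""
    d  = vaddr & ((1 << d_bits) - 1)            # low bits: page offset
    p2 = (vaddr >> d_bits) & ((1 << p2_bits) - 1)  # middle bits: inner table index
    p1 = vaddr >> (d_bits + p2_bits)            # high bits: outer table index
    return p1, p2, d
```

An address assembled as (3 << 22) | (5 << 12) | 7 splits back into P1 = 3, P2 = 5, D = 7.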

Describe a method for protecting pages

Front

Using protection bits to indicate whether read-only, read-write, or execute access is allowed

Back

Translation look-aside buffer, AKA

Front

Associative Memory

Back

Define: Frames

Front

Physical memory divided into fixed-sized blocks

Back

Least Recently Used (LRU) Algorithm

Front

Replace the page that has not been used for the longest period of time

Back
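
LRU can be simulated with an ordered dictionary playing the role of the doubly linked stack: the front is the least recently used page, the back the most recently used. A minimal sketch with hypothetical names:

```python
from collections import OrderedDict

def lru_faults(refs, frames):
    """Count page faults for a reference string under LRU replacement."""
    mem = OrderedDict()                  # insertion order == recency order
    faults = 0
    for page in refs:
        if page in mem:
            mem.move_to_end(page)        # hit: mark as most recently used
        else:
            faults += 1
            if len(mem) == frames:
                mem.popitem(last=False)  # evict least recently used (front)
            mem[page] = True
    return faults
```

On the textbook reference string 7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1 with 3 frames, LRU incurs 12 faults (vs. 15 for FIFO and 9 for Optimal).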

Frame-Allocation algorithm determines

Front

How many frames to give each process

Back

Disadvantage of random replacement

Front

Unpredictability

Back

PTBR

Front

Page-Table Base register

Back

Disadvantages of FIFO Replacement

Front

You may replace a heavily used page

Back

Section 8

(50 cards)

Considerations on logically organizing directory files to improve efficiency

Front

Naming; grouping files with similar properties

Back

File Allocation Table (FAT)

Front

A table contains an entry for every block. Each block's position in the table stores the number of the next block in the file

Back
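
Following a file's chain through the FAT is a simple pointer walk. A minimal sketch, using -1 as a stand-in for the real end-of-chain marker (names hypothetical):

```python
def file_blocks(fat, start):
    """Follow the FAT chain from a file's first block; -1 marks end-of-file."""
    blocks = []
    block = start
    while block != -1:
        blocks.append(block)
        block = fat[block]   # each entry names the file's next block
    return blocks
```

So a file starting at block 2, with FAT entries 2 -> 5, 5 -> 9, 9 -> end, occupies blocks 2, 5, 9 in order, even though they are scattered on disk.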

Mandatory File Locking

Front

File access is denied depending on locks held and requested

Back

Memory Allocation First-fit

Front

Allocate the first hole big enough

Back

i node or index node

Front

Properties for each file managed by the OS

Back

Stateless file service

Front

Avoids state information by making each request self-contained. Server does not need to keep a table of open files. No need to establish and terminate a connection.

Back

Advantages of Indexed Disk Allocation

Front

The file size can change. External fragmentation is avoided, since any free block can be used. Direct access is efficient.

Back

Free-Space Management Free-Space table can be implemented as

Front

Bit vector or bit map of n size

Back
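
A free-space bit map is easy to model with a list of booleans. A minimal sketch (names hypothetical):

```python
def first_free_block(bitmap):
    """bitmap[i] is True when block i is free; return the first free index or -1."""
    for i, free in enumerate(bitmap):
        if free:
            return i
    return -1
```

Real implementations scan a word at a time, looking for the first word that is not all zeros, rather than bit by bit.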

Thrashing

Front

When CPU utilization decreases significantly due to an increase in page faults The process is very busy swapping pages in and out

Back

How to guarantee no cycles with a graph directory structure?

Front

Allow links only to files, not to sub-directories. Use garbage collection. When a new link is added, run a cycle-detection algorithm to verify no cycle was created

Back

Unix File System (UFS)

Front

Writes to an open file are visible immediately to other users of the same file. Sharing the file pointer allows multiple users to read/write concurrently

Back

Sequential file access

Front

Records are accessed one after another

Back

Benefits of Acyclic-Graph Directories

Front

Like tree directory but with shared subdirectories and files

Back

Explain "Buddy System" allocation

Front

Round the request up to the next power of 2, then satisfy it by repeatedly splitting a free block in half until a chunk of the desired size is obtained.

Back
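
The rounding step of the buddy system can be sketched in a few lines (names hypothetical; real allocators also track the split blocks so buddies can be coalesced later):

```python
def buddy_alloc_size(request, min_block=1):
    """Round a request up to the next power of two: the block size actually used."""
    size = min_block
    while size < request:
        size *= 2            # block sizes come only in powers of two
    return size
```

A 21 KB request is served from a 32 KB block, so 11 KB is lost to internal fragmentation, which is the main disadvantage of the scheme.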

What do inodes (index nodes) store?

Front

Name, Owner, Type, Location, Size, Time and date, User ID, Protection

Back

File Locking types

Front

Shared Lock Exclusive Lock

Back

Free-Space Management Advantages of Bit Map

Front

Simple and efficient to find the first free block. The map shows how the disk is allocated, making it easier to allocate groups of physically close blocks. The table can be kept in main memory if it is small enough

Back

Disadvantage to the Buddy System?

Front

Fragmentation

Back

Advisory File Locking

Front

Processes can find the lock status and decide what to do

Back

Free-Space Management Disadvantages of Linked List (free list)

Front

Not efficient to traverse

Back

How to prevent thrashing?

Front

Provide a process with as many frames as it needs

Back

Describe the working-set strategy

Front

Look at how many frames a process is actually using and keep track of the working set of frames

Back

Problems with Single-Level Directories for all Users

Front

Naming and Grouping

Back

Free-Space Management Linked List (free list)

Front

Linked list of free disk blocks

Back

Memory Allocation Best-fit

Front

Allocate the smallest hole that is big enough.

Back

Disadvantages of Contiguous allocation

Front

Finding contiguous space for a new file may be impossible. The programmer must specify the file size beforehand. External fragmentation occurs

Back

NFS

Front

Set of remote file operation calls. Stateless

Back

Memory Allocation Worst-fit

Front

Allocate the largest hole

Back
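
The three placement strategies differ only in which sufficiently large hole they select. A minimal sketch contrasting them (names hypothetical; holes are (start, size) pairs):

```python
def pick_hole(holes, request, strategy):
    """holes: list of (start, size) free partitions; return chosen index or None."""
    candidates = [(i, size) for i, (_, size) in enumerate(holes) if size >= request]
    if not candidates:
        return None
    if strategy == "first":
        return candidates[0][0]                        # first hole big enough
    if strategy == "best":
        return min(candidates, key=lambda c: c[1])[0]  # smallest sufficient hole
    if strategy == "worst":
        return max(candidates, key=lambda c: c[1])[0]  # largest hole
    raise ValueError(strategy)
```

Given holes of sizes 100, 500, 200, and 300 and a 212-unit request, first-fit and worst-fit both choose the 500-unit hole, while best-fit chooses the 300-unit hole, leaving the smallest leftover.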

Free-Space Management Advantages of Linked List (free list)

Front

No waste of space

Back

Linked Disk Allocation

Front

File blocks are linked in a linked list and do not need to be physically next to each other

Back

Disadvantages of Link Disk Allocation

Front

Only sequential access to a file is efficient. The pointers to the next block use storage space in every block

Back

Free-Space Management Disadvantages of Bit Map

Front

Requires extra space

Back

Advantages of Contiguous allocation

Front

No space is required to determine the location of the next block. Sequential access requires minimum head movement. File descriptors are easy to implement

Back

Disadvantages of Indexed Disk Allocation

Front

Wasted disk space used for index block of every file

Back

Indexed Disk Allocation

Front

Each file has an index block that stores the pointers to all of its data blocks; the data blocks themselves can be anywhere on disk

Back

Stateful file service

Front

The client must open() and close() files. The server fetches information about the file, stores it in memory, and gives the client a unique connection identifier. Higher performance, but the state is volatile in a crash

Back

Benefits / Disadvantages Of Two-level Directory

Front

A separate directory for each user. Path naming; different users can have files with the same name; efficient search. But no grouping capability within a user's directory

Back

Consistency Semantics

Front

Specify how multiple users access a shared file simultaneously

Back

Describe Slab Allocation

Front

Cache is mapped to one or more contiguous slabs of physical memory

Back

File System Access Methods

Front

Sequential, Direct (relative), Indexed

Back

Indexed file access

Front

Built on top of the direct access method. An index containing a key and block number is kept in main memory

Back

Benefits to the Buddy System?

Front

Ability to quickly merge unused chunks into larger chunks

Back

Contiguous allocation

Front

Files are stored in blocks physically next to each other

Back

Andrew File System (AFS)

Front

Writes only visible to sessions started after the file is closed. Once a file is closed, changes are visible only in later sessions

Back

What is preemption?

Front

While a process is waiting for resources and holding others, the held resources are implicitly released and added to the list of resources it is waiting for. The process is restarted only when all required resources are available

Back

Direct (relative) file access

Front

Records are accessed directly by reading/writing a given sector. Data must be accessed sequentially once the sector is obtained

Back

Advantages of Link Disk Allocation

Front

The size of the file can change. External fragmentation is avoided, since any free block can be used

Back

NFS Operations

Front

Searching, Reading, Manipulating links and directories, Accessing file attributes, Read/Write files

Back

Benefits to Tree Directories

Front

Efficient Search Grouping ability Current working directory capability

Back

How do you ensure hold and wait condition never occurs?

Front

Require a process to request resources only when it holds none, or to request all of its resources before it begins execution

Back

Section 9

(50 cards)

The processor execution mode that user programs typically execute in is referred to as:

Front

User Mode

Back

Types of file access

Front

Read, Write, Execute, Append, Delete, List

Back

One of the disadvantages of User-Level Threads (ULTs) compared to Kernel-Level Threads (KLTs) is:

Front

When a ULT executes a system call, all threads in the process are blocked.

Back

It is a facility that allows programs to address memory from a logical point of view, without regard to the physical amount of main memory.

Front

Virtual Memory

Back

A computer hardware feature that is vital to the effective operation of a multiprogramming operating system is:

Front

I/O interrupts and DMA

Back

It is the portion of the operating system that selects the next process to run.

Front

What is the Dispatcher?

Back

T / F - The Process Image refers to the binary form of the program code.

Front

False; it refers to the process elements, such as the program, data, stack, and attributes

Back

T / F - An SMP O/S manages processor and other resources so that the user may view the system in the same fashion as a multiprogramming uniprocessor system.

Front

True

Back

When an external device becomes ready to be serviced by the processor, the device sends this type of signal to the processor:

Front

Interrupt Signal

Back

T / F - An operating system controls the execution of applications and acts as an interface between applications and the computer hardware.

Front

True

Back

The basic thread operation related to the change in thread state that occurs when a thread needs to wait for an event is referred to as the:

Front

Block Operation

Back

T / F - In a pure User-Level Thread (ULT) facility, all of the work of thread management is done by the application, but the kernel is aware of the existence of threads.

Front

False; The kernel is not aware of the existence of threads.

Back

The behavior of an individual process can be characterized by examining:

Front

A Single Process Trace.

Back

The instruction execution consists of what two steps?

Front

Fetch, Execute

Back

In a Linux system, it is a process that has been terminated but, for some reason, still must have its task structure in the process table.

Front

What is a zombie?

Back

T / F - The portion of the Process Control Block that consists of the contents of the processor registers is called the Process Control Information.

Front

False; processor state information portion of the PCB contains registers.

Back

The Process Identification, Processor State Information and the Process Control Information are the general categories that collectively make up what is referred to as the ______________________.

Front

Process Control Block.

Back

Two accepted methods of dealing with multiple interrupts is to:

Front

Define priorities for the interrupts, Disable interrupts while an interrupt is being processed

Back

Where does the processor hold the address of the next instruction?

Front

Program Counter (PC)

Back

Read-Ahead technique

Front

Requested page and several subsequent pages are read and cached

Back

Buffer Cache

Front

Separate section of Main Mem for frequently used blocks

Back

When a new block of data is written into cache memory, the following determines which cache location the block will occupy:

Front

Mapping Function

Back

The behavior of a processor can be characterized by examining:

Front

The interleaving of the process traces.

Back

What is faster, Main Memory or Registers?

Front

Registers are faster than main memory.

Back

A Computer's Basic Elements. List the 4 high level components of a computer:

Front

Processor, Bus (Control), I/O, Memory

Back

When to use Synchronous writes

Front

When writes are occasionally requested

Back

Efficiency is dependent on

Front

Disk allocation and directory algorithms; the types of data kept; pre-allocation or as-needed allocation of structures; fixed or varying data structure sizes

Back

T / F - In a symmetric multiprocessing (SMP) system, each processor has access only to a private main memory area.

Front

False

Back

What is true regarding the relationship between processes and threads:

Front

It takes less time to create a new thread in an existing process than to create a new process. Less time to terminate a thread. Switching between two threads takes less time. Threads can communicate with each other without invoking the kernel.

Back

T / F - The concept of thread synchronization is required in multithreaded systems because threads of a single process share the process's process control block (PCB).

Front

False

Back

Direct Memory Access (DMA) operations require the following information from the processor:

Front

Address of I/O device, Starting memory location to read/write, Number of words to be read/written

Back

The User Data, User Program, System Stack and Process Control Block elements collectively make up what is referred to as the __________________.

Front

Process Image

Back

A common class of interrupts is:

Front

Program, Timer, I/O

Back

-Exploits the hardware resources of one or more processors -Provides a set of services to system users -Manages secondary memory and I/O devices

Front

What is an Operating System?

Back

In the operating system scheduling system, it contains processes that are in main memory.

Front

Short term queue

Back

What is smaller, Main Memory or Registers?

Front

Registers are smaller than main memory.

Back

Where does the processor fetch an instruction from?

Front

Cache / Main Memory

Back

It is the listing of a sequence of instructions that execute for a particular process.

Front

What is a Trace?

Back

When to use asynchronous writes

Front

When writes are more frequent

Back

Free-Behind technique

Front

Remove a page from the buffer as soon as the next page is requested

Back

It includes all information needed by the operating system and processor to manage and execute the process. It characterizes the process.

Front

Process State

Back

Where does the processor hold the instruction fetched?

Front

Instruction Register (IR)

Back

3 Types of registers typically available are:

Front

Data Registers (storing data), Address Registers (stack pointer, frame pointer), Condition Codes (addl, mult)

Back

T / F - The philosophy underlying the microkernel is that only absolutely essential core operating system functions should be in the kernel.

Front

True

Back

T / F - The operating system typically runs in parallel with application programs, on its own special O/S processor.

Front

False

Back

...contains the address of an instruction to be fetched.

Front

What is the Program Counter?

Back

T / F - The primary advantage of the basic microkernel design over layered kernel designs involves increased performance.

Front

False; performance is a disadvantage

Back

...contains the instruction most recently fetched.

Front

What is the Instruction Register (IR)?

Back

A process is comprised of what three parts:

Front

Set of data, State of program, Code (data functions) / Control

Back

The processor typically maintains the current operating mode (i.e., user or kernel) in the _________________.

Front

Program Status Word (PSW)

Back

Section 10

(50 cards)

One step in the procedure for creating a new process involves:

Front

Initializing the process control block, Allocating space for the process, Assigning a unique identifier

Back

It is the reclaiming of resources from a process before the process is finished using them. The resource reclaimed can be the processor itself.

Front

What is Preemption?

Back

Reasons for Process Suspension:

Front

Swapping, Other OS Reason, Interactive User Request, Timing, Parent Process Request

Back

What are the 4 actions that a machine instruction can specify?

Front

Processor-memory, Processor-I/O, Data Processing, Control

Back

Message Passing Synchronous

Front

Reduces overhead, has error detection, efficient; but the receiver needs to sample at the correct intervals

Back

What is multithreading?

Front

A technique in which a process is divided into threads that can run concurrently.

Back

A technique used to overcome the problem of blocking threads. The purpose is to convert a blocking system call into a non-blocking system call.

Front

What is Jacketing

Back

What is temporal locality?

Front

Temporal locality: the processor accesses memory locations that have been used recently (e.g., in loops). This method keeps recently used instructions and data in cache memory.

Back

Describe process creation

Front

Parent processes create child processes.

Back

Purpose of Interprocess communications?

Front

Information Sharing Increase computation throughput or overall speed Modularity Convenience

Back

What are the strategies for exploiting temporal locality?

Front

Temporal = exploits the cache by keeping recently used instructions & data values in cache memory and by exploiting the cache hierarchy.

Back

What resources are shared by all threads of a process?

Front

The state and resources of the process (including address space, file resources, and execution privileges)

Back

What is Programmed I/O?

Front

The processor issues the I/O command and must then poll (busy-wait) to determine when the operation is complete

Back

Process States

Front

New Ready Waiting Running Terminated

Back

What is multiprogramming?

Front

An approach at logic flow that will allow the processor to switch between jobs in an effective fashion.

Back

Describe a shared memory system

Front

Multiple processes can share a section of memory and communicate or share data. But can have protection/security issues

Back

What is spatial locality?

Front

Spatial locality: multiple clustered memory locations are accessed; they will be fetched sequentially.

Back

What is a Thread?

Front

Threads are a dispatchable unit of work which includes a processor context and its own data area for a stack.

Back

T / F - The principal function of the processor is to execute machine instructions residing in main memory.

Front

True

Back

What 3 states exist for threads?

Front

Running, Ready, Blocked. (The basic thread operations are Spawn, Block, Unblock, Finish; the Suspended status is a process-level concept, and a thread can spawn another thread.)

Back

If we have a 32 bit instruction and the first 8 bits contain opcode...how large must the IR be?

Front

either 8 bits (for opcode) or 32 bits (to contain the whole instruction)

Back

What are the advantages of KLT?

Front

-The kernel can simultaneously schedule multiple threads from the same process on multiple processors. -If one thread in a process is blocked, the kernel can schedule another thread of the same process. -Kernel routines themselves can be multithreaded.

Back

Define the two main categories of processor registers.

Front

User-Visible registers, Control and Status registers

Back

T / F - A process trace is a listing of the sequence of instructions that execute for that process.

Front

True

Back

The scheduling strategy where each process in the queue is given a certain amount of time, in turn, to execute and then returned to the queue, unless blocked is referred to as:

Front

Round-Robin

Back

It is the portion of the operating system that selects the next process to run.

Front

What is the Dispatcher?

Back

List 2 separate characteristics embodied in the concept of a process.

Front

Resource ownership, Scheduling/execution

Back

What characteristics distinguish the various elements of a memory hierarchy?

Front

Speed(top=fast ->bottom=slow), Size(top=small->bottom=large), Cost(top=high->bottom=low), Access Frequency(top=often->bottom=infrequent)

Back

It is a process state in which a process cannot execute until some event occurs.

Front

What is a Blocked State?

Back

T / F - The principal responsibility of the operating system is to control the execution of processes.

Front

True

Back

If we have a 32 bit instruction and the first 8 bits contain opcode...how large must the program counter be?

Front

32-8 = 24 bits (at least)

Back

What is a KLT?

Front

Kernel Level Thread: Kernel maintains context information for the process and the threads (No thread management done by application) Scheduling is done on a thread basis.

Back

What is Interrupt-driven I/O?

Front

The I/O module will interrupt the processor to request service when it is ready to exchange data with the processor

Back

What is the kernel of an OS?

Front

Kernel: located in main memory; contains the most frequently used functions in the OS. The link between programs and processes.

Back

What are the benefits of a Microkernel Organization?

Front

Uniform interfaces: all service requests are made by message passing. Extensibility: add new services. Flexibility: remove existing services. Portability across processors. Reliability: rigorous testing and a limited API. Distributed system support. Support for object-oriented operating systems.

Back

What are the strategies for exploiting spatial locality?

Front

Spatial = exploits the cache by using larger cache blocks and incorporating pre-fetching mechanisms into cache control logic.

Back

A process switch may occur when the system encounters an interrupt condition, such as that generated by a:

Front

Memory fault, Supervisor call, Trap

Back

It is the action that refers to when the OS moves one of the blocked processes out onto disk into a suspended queue. Then the OS brings in another process from the suspended queue.

Front

What is Swapping?

Back

Describe: Message Passing System

Front

Processes share a common channel to send messages

Back

4 General examples of the use of threads in a single-user multiprocessing system.

Front

Foreground/Background work, Asynchronous processing, Speed of execution, Modular program structure

Back

What is Cache

Front

When processor attempts to read byte/word, a check is made to determine if the byte/word is in cache. If not, a block of main memory is read into the cache, then the byte/word is delivered to the processor

Back

What is a process switch?

Front

In a process switch, the processor context (program counter and registers) must be saved before the process control block of the process is moved to another queue. When the process is later executed again, the saved context is switched back in.

Back

The 4 types of entities that the OS maintains tables for include:

Front

I/O, Memory, File, Process

Back

3 Techniques for I/O operations

Front

Interrupt-driven I/O, DMA, Programmed I/O

Back

Cache Design Issues Include:

Front

- Cache size - Block size - Mapping function - Replacement algorithm - Write policy

Back

Message Passing Asynchronous

Front

Self-contained, but higher overhead and no error detection

Back

What is the disadvantage of KLT?

Front

The transfer of control from one thread to another within the same process requires a mode switch to the kernel.

Back

What is the Key Responsibility of the OS? What is its fundamental Task?

Front

Key responsibility of an OS is to "manage system resources." Its fundamental task is "Process Management" -taufer

Back

What is a mode switch?

Front

A mode switch is when the processor simply sets the PC to the starting address of an interrupt handler and switches from user mode to kernel mode.

Back

List the 5 storage management responsibilities of a typical OS.

Front

Process Isolation, Automatic allocation and management, support of modular programming, protection and access control, long-term storage

Back

Section 11

(7 cards)

Scheduling Algorithm Shortest Job First

Front

Non-Preemptive: the process must complete its CPU burst. Advantage: minimum average waiting time for a given set of processes. Disadvantages: not good for time sharing; difficult to know the length of the next CPU request

Back
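
The "minimum average waiting time" claim is easy to demonstrate. A minimal sketch, assuming all jobs arrive at t=0 (names hypothetical):

```python
def avg_wait(bursts):
    """Average waiting time when jobs run in the given order from t=0.
    Each job waits for the sum of the bursts scheduled before it."""
    waits, elapsed = [], 0
    for b in bursts:
        waits.append(elapsed)
        elapsed += b
    return sum(waits) / len(waits)

# FCFS runs jobs in arrival order; SJF runs the shortest bursts first.
fcfs = avg_wait([24, 3, 3])          # waits 0, 24, 27
sjf = avg_wait(sorted([24, 3, 3]))   # waits 0, 3, 6
```

With bursts of 24, 3, and 3 ms, FCFS averages 17 ms of waiting while SJF averages 3 ms, the classic textbook comparison.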

Scheduling Algorithm Shortest Remaining Time First

Front

Preemptive: if a new process arrives with a burst length less than the remaining time of the current process, preempt and execute the new process. Advantage: can yield minimum average waiting time. Disadvantage: increased overhead

Back

List the multi-threading models

Front

One-to-one Many-to-one Many-to-many

Back

Scheduling Algorithm First Come First Serve

Front

Non-Preemptive: Processes are assigned in the same order they arrive at the Ready Queue Advantages: Easy to implement and fairness Disadvantage: Low overall throughput

Back

CPU Scheduling Criteria

Front

CPU Utilization: keep the CPU as busy as possible. Throughput: # of processes that complete execution per time unit. Turnaround time: amount of time to execute a particular process. Waiting time: total time a process spends waiting in the ready queue. Response time: time from when a request was submitted until the first response is produced (not the final output)

Back

Benefits of Multi-threading

Front

Increased responsiveness Sharing of resources Economical Scalable

Back

Scheduling Algorithm Priority

Front

The CPU is allocated to the process with the highest priority. Disadvantage: potential starvation of low-priority processes. Solution (aging): over time, increase the priority of low-priority processes

Back