
Last updated

1 year ago

Date created

Mar 1, 2020

Cards (208)

Section 1

(50 cards)

Types of Memory

Front

registers, cache, main memory, SSDs, magnetic disks

Back

virtualization

Front

running a computer on a computer; all modern computers have specialized software support for this

Back

kernel

Front

loads and starts other programs; programs become processes and execute from main memory

Back

ready queue

Front

contains the processes that are in main memory and aren't waiting

Back

Register Set

Front

what the process thinks the state of all other registers currently is

Back

mobile computing

Front

laptops and tablets; have many characteristics of desktop operating systems; tend to be optimized for information consumption

Back

interrupt handler

Front

executes then returns control to the program that was running

Back

real-time computers/OS

Front

when things have to be done now; popular for control systems (X-rays); can show up as subsets of other systems

Back

multiprogramming

Front

scheduling more than one program at once

Back

State

Front

generally new, ready, waiting, running, or terminated

Back

Job/Process

Front

program in execution; a program becomes this when it's loaded into memory and started

Back

Privileged Instructions

Front

can only be executed in kernel mode; include all hardware access, so the OS must provide system function calls

Back

multitasking (interactive time sharing)

Front

multiprogramming with switches frequent enough that users can interact with running programs; often requires a time-shared OS

Back

Long-Term Scheduler

Front

loads entire batch-scheduled processes from disk; gets a mix of CPU-bound and I/O-bound processes going; useless for a fully interactive OS

Back

Scheduling Information

Front

the process's priority, and anything else the OS uses to decide how to schedule it

Back

I/O

Front

everything except the CPU and main memory is this; performed by the CPU via controllers connected to a bus; disks are also this

Back

cloud computing

Front

client-server systems running over the commodity internet or large private networks; can include file servers, computation servers, virtual machines, etc.; not a new technology so much as radically scaled existing technology

Back

User Mode

Front

only allowed specific instructions and regions of memory; programs make requests of the OS by raising interrupts; the OS, in kernel mode, fulfills them and then returns control to the programs

Back

device queue

Front

each I/O device has one; contains the processes that are waiting on that device

Back

I/O Status information

Front

what files and devices this process has open, what it's waiting on, etc

Back

Iron Law of the Memory Hierarchy

Front

fast memory is expensive; cheap memory is slow

Back

Context Switching

Front

interrupt to return control to the OS; copy the frozen process's current state into its PCB; change the frozen process's state from running to ready; select a new process; copy the new process's state from its PCB into the registers; change the new process's state from ready to running; return control to the new process at its current program counter; the running process has control of the program counter and registers, but the others have their own copies ready to be loaded back in

Back

Kernel Mode

Front

aka privileged mode, system mode, monitor mode; allowed to do anything; only the OS is allowed this

Back

Multiprocessor Systems

Front

include smartphones; higher throughput, better economy; symmetric multiprocessing

Back

child process

Front

created by a parent process; needs resources (constrained by the parent process or requested from the OS); can receive parameters from the parent process; may execute concurrently with the parent, or the parent can wait; may be a duplicate of the parent, or run a new program

Back

emulation

Front

running code on another kind of computer

Back

Starting a Process The OS must:

Front

create and initialize a PCB for the process (state: new); load the program into main memory; set the PCB program counter to the entry point of the program; set the PCB state to ready

Back

desktop/laptop computers

Front

interrupt-driven multitaskers; support multiple users, though not necessarily multiple at once

Back

Von Neumann Machine

Front

Stored Programs CPU with: - Control Unit - Arithmetic/Logic Unit Memory Input and Output

Back

large systems

Front

initially required to centralize very scarce computing resources; built around time-sharing; typically noninteractive; now uncommon

Back

Accounting information

Front

who's responsible for this process, how much time it's taken, etc

Back

interrupts

Front

stop execution of the running program and transfer control to an interrupt handler; can be generated from: - hardware events - timer events - requests for system services

Back

job queue

Front

contains all processes in a system

Back

CPU Scheduler

Front

selects ready processes and decides which one to run

Back

clustered systems

Front

multiprocessor systems over a network; synchronization issues

Back

client-server systems

Front

networked systems that allow a client to ask a server to do something; web servers, database servers, game servers

Back

Starting the Computer

Front

- boot program is run when the computer turns on - loads and starts the kernel

Back

Scheduling

Front

deciding which processes to run from the processes that can be run; all about queues

Back

I/O Methods

Front

polled, interrupt-driven, direct memory access

Back

Main Memory

Front

largest storage device the CPU can directly access; programs have to be here to run

Back

Process Management

Front

Processes need memory, I/O, and CPU (and other resources)

Back

peer-to-peer systems

Front

any computer can be a client, a server, or both; have to be able to discover each other; famous for piracy but have other applications

Back

Medium-Term Scheduler

Front

kicks processes right back out of main memory to disk (swapping)

Back

Program Counter

Front

register in the CPU; points at where the next instruction to be executed is; does not point at the current instruction

Back

interpretation

Front

running code for a theoretical computer

Back

Single-Processor Systems

Front

systems with one general-purpose CPU

Back

Modern Operating Systems Need:

Front

process management; memory management; I/O management; security

Back

Dual-Mode Operation

Front

normal processes shouldn't be able to access the hardware or mess with other processes, but they have to be allowed to run and do normal things; User Mode and Kernel Mode

Back

Process Control Block

Front

where the OS stores all information about a process - centrally located, easily accessible

Back

files

Front

named sequential arrays of bytes; make up almost all data on disks

Back

Section 2

(50 cards)

I/O Management

Front

the OS: communicates with device controllers; provides output to devices; receives input from devices

Back

Cooperating Process

Front

Affects or is affected by other processes; any process that needs to share data with other processes; requires an IPC mechanism

Back

Identifying Tasks (Threads)

Front

Issue Dividing up the work in terms of operations

Back

CPU Processes

Front

Have relatively long CPU bursts and less I/O waiting

Back

Thread Scalability

Front

Benefit Exploitation of multiple processors

Back

Response Time

Front

The time between the submission of a request by a user and the first output to that user; irrelevant to overall scheduler performance; most important measure in terms of user satisfaction

Back

Data Parallelism

Front

Divides up data, using multiple processors to run the same code on different data slices (image processing)

Back

Multiprogramming

Front

Processes run until they: have to wait; are terminated; yield; are preempted; other processes can then take a turn

Back

Java Thread Implementation

Front

Any class can be made a thread by implementing Runnable interface

Back

Creating Processes on Windows

Front

CreateProcess is similar to UNIX, but accepts ten different parameters to account for every possible permutation in one function call; when a process is terminated, its children may be too

Back

Linux Thread Implementation

Front

Doesn't have threads as a distinct concept; has different levels of sharing instead; clone() flags decide what to clone and what not to clone

Back

Thread Dependency

Front

Issue: managing dependent work (think about the producer/consumer problem)

Back

CPU Scheduler

Front

Selects a process from the ready queue to run

Back

Authorization

Front

process of determining, given who you are, what you may do

Back

Security Management

Front

The OS: deflects passive and active attempts to defeat its management mechanisms; uses authentication, authorization, access control, and cryptography

Back

Creating Processes on UNIX

Front

fork creates a new copy of the current process as a child; if you're the child, you get back 0; otherwise, you get back the child's process ID; exec can then be used to run a different program; as the parent, either wait for a child or keep going

Back

Task Parallelism

Front

Divides up code, using multiple processors to run different parts of the program (sockets)

Back

Access Control

Front

process of enforcing authorization

Back

Parallelism

Front

Doing different parts of the same job on different processors

Back

Symmetric Multiprocessor

Front

Allow each processor to be an equal participant

Back

Turnaround Time

Front

Process-individualistic version of throughput; how long it takes to execute a given process, from start to termination; minimizing it gives the best experience for users waiting on (whole) processes

Back

Cooperative Multitasking Process Stop

Front

Have to wait (on I/O) Are terminated (by themselves or by OS) Yield (give another process a turn)

Back

Dispatcher

Front

Part of the CPU scheduler that actually does the business of context switching once the new process is selected; its performance is just as important as the performance of the scheduling algorithm

Back

Files I/O Management

Front

The OS: creates, writes, and deletes files and directories; provides functions for processes to work with files; maps files onto disks (partitions)

Back

cryptography

Front

process of making it difficult to effectively interpret private communications or impersonate users

Back

Preemptive Scheduling

Front

Processes are not allowed to run as long as they like; processes stop if preempted by a timer interrupt

Back

Independent Process

Front

Apart from being managed by the OS, it doesn't affect and isn't affected by any other processes

Back

Waiting Time

Front

Amount of time a process is ready without running; the only component of a process's total runtime that the CPU scheduler directly affects; considered a good overall measure of the efficiency of the CPU scheduler specifically

Back

Thread Data Splitting

Front

Issue Dividing up work in terms of data

Back

Scheduling Fundamentals

Front

A system can only run as many processes at once as it has processors; CPU time is wasted if a process just idles while waiting; we only want processes to hold the CPU until they have to wait

Back

Thread Resource Sharing

Front

Benefit Can implement parallelism without the need for message passing or shared memory

Back

OS and Memory Management

Front

The OS: keeps track of what memory is being used, and by which processes; decides what to move in and out of memory; allocates and de-allocates memory as necessary; makes sure processes stay within their bounds of memory

Back

OS and Processes Management

Front

The OS: loads programs into memory to turn them into processes; creates, deletes, suspends, and resumes processes; schedules processes and threads to the CPU; assigns resources; provides synchronization and communication to processes

Back

Thread Debugging

Front

Issue Fixing multithreaded processes is very hard

Back

Under-loaded processors

Front

Pull processes from other processors

Back

Windows Thread Implementation

Front

Implements threads directly CreateThread() analogous to CreateProcess()

Back

Dispatch Latency

Front

Time the dispatcher takes to stop one process and start another; the total time taken every time the CPU scheduler runs is this plus the time taken by the actual scheduling algorithm

Back

CPU Utilization

Front

Want to keep the CPU as busy as possible (0-100%); 40-90% for an optimally loaded system not considering direct user responsiveness; avoids wasting hardware time via lack of use or via overload and its corresponding overhead

Back

Authentication

Front

process of determining that you are who you say you are

Back

Burst Cycle

Front

Processes alternate bursts of CPU activity and I/O waits; types of processes differ in their burst patterns; what we are scheduling is not a process, but the next CPU burst for that process; the kinds of processes you have affect the best scheduling algorithm to use

Back

Multiple-Processor Scheduling

Front

Symmetric multiprocessing: the CPU scheduler is run by, and for, each processor; each processor selects its own process from the ready queue to execute; avoid multiple processors choosing the same process to run; avoid moving processes between processors; ready queue per processor

Back

Throughput

Front

Number of processes completed per time unit; originally a figure used for batch-processing systems; maximizing it results in the highest number of complete processes getting done in a given time period

Back

Overloaded processors

Front

Push processes to other processors

Back

Ready Queue

Front

Conceptually a queue, but not a FIFO queue

Back

CPU Scheduler Runs when

Front

Running process waits (I/O call); running process becomes ready (yield or preemption); waiting process becomes ready (I/O completion); process terminates

Back

Thread Balancing

Front

Issue: making sure every thread gets enough work

Back

Thread Responsiveness

Front

Benefit Can allow for an application to be more responsive without having to logically interrupt its own work

Back

I/O Bound Processes

Front

have relatively short CPU bursts interspersed with I/O waits

Back

Disks I/O Management

Front

the OS: organizes files; manages free space; allocates space to files; schedules disk requests from processes

Back

Threads

Front

Processes are allowed to have any number of program counters; each thread needs its own program counter, register set, and call stack (in kernel or user space); threads are a simple form of parallelism

Back

Section 3

(50 cards)

Shortest Job First Weakness

Front

Not actually possible, since future burst lengths can't be known exactly; longer processes may never get run

Back

Interactive User Processes (Multilevel)

Front

Queue 2 Traditional applications

Back

EDF Advantages vs Rate-Monotonic

Front

Processes do not need to be periodic; can take different amounts of time each period; can achieve higher processor utilization

Back

First-Come First-Served (FCFS)

Front

Simplest of all reasonable schedulers; processes are allocated the CPU in the order in which they arrive; run until completion and termination; cannot be sensibly made preemptive

Back

System Process (Multilevel)

Front

Queue 0 OS Processes, device drivers

Back

FCFS Weakness

Front

Unpredictable wait times; non-preemptive nature makes it unusable for modern OSs

Back

Relative Priority Disadvantage

Front

Higher-priority processes more likely to be disrupted by lower-priority processes

Back

Processing time

Front

How long it takes to run each period

Back

FCFS Strengths

Front

Simple Easy to write and understand

Back

Relative Priority Advantages

Front

Avoids starvation

Back

Deadline

Front

How quickly the burst must be completed when it becomes ready

Back

Round-Robin

Front

Algorithm explicitly designed for time-sharing systems; redesigned from FCFS to have preemption; a timer interrupt stops the running process if its burst goes longer than the time quantum (10ms-100ms); when a process becomes ready for whatever reason, put it on the back of the queue; each process is guaranteed to wait at most (n-1)q time units until it gets the processor back

Back

Earliest Deadline First

Front

Priority-based, strictly preemptive; a process declares its deadline every time it becomes ready; each process is assigned a priority based on deadline - priority goes to the process that needs to finish soonest; the process with the earliest deadline is immediately assigned the processor

Back

Multi-Multi-Level Queue scheduling

Front

Use a combination of absolute and relative approaches; e.g., give the system and quasi-real-time queues absolute priority, and schedule the remaining queues relative to each other

Back

Relative Priority

Front

Give each queue its own share of time slices to allocate

Back

Multilevel Queue Disadvantages

Front

Starvation, as with anything with absolute priority

Back

True Background Process (Multilevel)

Front

Queue 4 Virus scanners, search indexers

Back

Period/rate

Front

How often it needs to run

Back

Windows Scheduling

Front

Explicit multilevel queue with internal priorities mapped to linear numeric values that in turn provide an implicit multilevel queue; higher-priority threads immediately preempt lower-priority threads - soft real-time threads always preempt other threads

Back

Thread Scheduling

Front

Like scheduling processes Difference is contention scope

Back

Priority Weakness

Front

Dependent on priority being set reasonably Starvation

Back

EDF Disadvantages vs Rate-Monotonic

Front

Each process must declare its deadline each period Less predictable - may not know we will fail until a process becomes ready

Back

Real Time Scheduling

Front

Used for processes that need to respond to events as quickly as possible; non-preemptive multitasking is a non-starter here

Back

Rate Monotonic

Front

The process with the faster rate always goes first; priority-based; strictly preemptive; each process must declare its period and its deadline; priority is based on period; the deadline is assumed to be the beginning of the next period; throws away about 30% of CPU cycles

Back

Process Contention Scope

Front

Schedule processes and then schedule threads within them

Back

System contention scope

Front

Schedule threads just as we would schedule processes

Back

Shortest Job First

Front

Scheduled to the CPU giving priority to the process that will have the next-shortest burst; can be cooperative or preemptive

Back

Multilevel Feedback Queue

Front

- processes can move between queues - defined by: - number of queues - scheduling algorithm for each queue - algorithm used to decide when to move processes between queues - algorithm used to decide which queue a process starts in

Back

Admission control algorithms

Front

Determine whether it is possible to accept a given process given the three characteristics of hard real-time scheduler processes

Back

Active Array

Front

When this runs out of processes, the scheduler swaps it with the expired array

Back

Change in Priority for Windows Scheduling

Front

When a thread returns from an I/O wait, its priority is raised - Higher amount for user interaction - Lower for disk I/O When a thread uses its entire quantum, its priority is lowered - Never lowered below base priority The process owning the foreground window gets a priority bump - Usually about 3

Back

Priority Strength

Front

Flexible

Back

Hard real-time

Front

Need to ensure that we will either service a process's latency requirements or refuse it

Back

Quasi-Real-Time User Processes (Multilevel)

Front

Queue 1 Media players, games

Back

Processor-Level Threading

Front

Multicore processors present one physical processor as multiple logical processors, allowing multiple threads to be "running" on it The processor then decides internally how to schedule those threads, running a second-level, simpler CPU scheduling algorithm of its own

Back

Multilevel Queue

Front

Partition the ready queue into multiple separate queues; permanently assign each process to a given queue based on input or an inherent property of the process; queues are given a priority order

Back

Multilevel Queue Advantages

Front

Higher importance processes less likely to be stalled by lower-importance processes

Back

Linux Scheduling Advantages

Front

An O(1) scheduler: implements priority with a guaranteed constant upper bound on execution time; a constant number of priority levels, and a queue for each of them

Back

Priority

Front

General version of Shortest Job First; each process has a priority that determines how soon it is scheduled compared to other processes (determined by a number or function); cooperative or preemptive

Back

Three Characteristics of a Hard Real-Time Scheduler Process

Front

- Period/rate - Deadline - Processing Time

Back

Absolute Priority

Front

Causes starvation In general, a bad idea

Back

Linux Scheduling Disadvantages

Front

Heuristics were pretty bad at figuring out which processes were interactive

Back

Linux Scheduling

Front

Has a constant number of priority levels, and a queue for each of them -> O(1) scheduling; 140 priority levels; each processor is given two arrays of doubly linked lists - an expired array and an active array; when a process needs to be chosen, the active process with the best priority is chosen; guaranteed to be doable in 140 compares or fewer

Back

Hard Real-Time Scheduling

Front

Attempts to guarantee that a task will be serviced by its deadline; failure to do so is considered complete failure; such systems run the gamut from radios to life-safety-critical systems

Back

Expired Array

Front

A process is put here when its quantum expires, re-inserted into the corresponding queue Priority can change at this point

Back

Round Robin: q

Front

needs to be large with respect to the context-switch time; needs to be small enough for the system to be continually responsive

Back

Soft real-time

Front

Enough to ensure that real-time processes run with absolute priority over other processes

Back

Soft Real-Time Scheduling

Front

Guarantees only that real-time processes get priority over non-real-time processes

Back

Non-Interactive User Processes (Multilevel)

Front

Queue 3 Compilers, media encoders

Back

Shortest Job First Strength

Front

Guaranteed to produce an optimal schedule in terms of waiting time

Back

Section 4

(50 cards)

Completely Fair Scheduler (CFS)

Front

implements the weighted fair queuing concept; each process is given a virtual runtime; the scheduler selects whichever process currently has the lowest virtual runtime; for normal-priority processes, virtual runtime = physical runtime; processes and runtimes are held in a red-black tree; takes O(log n) to insert a process

Back

Disabling Interrupts Issues

Front

Performance - if the timer is controlled by interrupts, it's going to mess them up; potentially unpredictable consequences - disables fundamental bits of the operating system; an absolute non-starter on multiprocessor systems

Back

Hold and Wait

Front

At least one process is waiting on a resource while holding a different one

Back

Mutual Exclusion

Front

Resources are held by processes and cannot be shared

Back

Circular Wait

Front

A set of processes P = {P0, P1, ..., Pn} must exist so that: - for each i with 0 <= i < n, Pi is waiting on a resource that Pi+1 holds - Pn is waiting on a resource that P0 holds

Back

Deal With Deadlocks

Front

Prevent by eliminating one of the conditions that allow them to occur; avoid by monitoring resource allocation with an algorithm; recover by detecting them when they occur and taking action; ignore and hope they don't happen very often

Back

Solaris Priority

Front

real-time threads always have priority; system threads are second; threads that return from I/O have their priority increased; threads that use up their time quanta have their priority decreased; lower priority = longer time quanta

Back

priority inheritance

Front

once a process waits for a lock, the process that holds that lock receives the waiting process's priority; prevents medium-priority processes from preempting a lower-priority one that a higher-priority process is waiting on

Back

Deadlock Recovery Plan A

Front

Terminate every process involved in the deadlock; immediately takes care of the deadlock without further computation; however, every process involved in the deadlock is terminated, and all of them need to be restarted from the beginning

Back

Deadlocked

Front

If every process in a set is waiting on an event that can only be caused by another process in the same set If every process P is requesting some resource from R, and all resources in R are already held by processes in P, every process in P will wait indefinitely

Back

Solaris Scheduling

Front

each of the Solaris thread classes uses a different scheduling method; the scheduler maps each class-specific output to a global priority; uses dynamic priority

Back

Counting Semaphores

Front

like mutex locks, but integer instead of boolean; signaling increments, waiting decrements; processes are blocked when they wait on zero

Back

No Resource Preemption

Front

A resource cannot be forcibly removed from the process holding it

Back

Avoid Spin Waits

Front

turn the mutex into a queue: when a process waits on a mutex that isn't available, it's inserted into a queue for that mutex; the queuing itself is a critical section

Back

Monitors

Front

an object with extra properties only one function can execute at the same time

Back

Mutual Exclusion Fix

Front

Dead end: resources that are inherently sharable are already marked as sharable for performance purposes, so they won't deadlock; resources that are inherently un-sharable (e.g., mutex locks) will cause undefined behavior if they're shared

Back

Solaris Classes of Thread

Front

Real-Time System Time-Sharing Interactive Fair Share Fixed Priority

Back

Hold and Wait Fix

Front

Set up a resource request method so you cannot request resources when you already hold resources; has performance and starvation issues

Back

Mutex

Front

Primitive function provided by the operating system Implemented in such a way that wait cannot be interrupted between testing that m is true and setting it to false

Back

Mutex Locks and test-and-Set

Front

mutex is free if m is false; releasing it sets m to false, and acquiring the lock sets it to true; returns false if m was false and true if m was true; either way, m is true afterward; does not disable interrupts

Back

Multipliers affecting Virtual Runtime

Front

- Priority (direct multiplier) - Long-running processes get deprioritized - CPU-bound processes get deprioritized - I/O bound processes get prioritized all done preemptively

Back

Issues with Semaphores

Front

useful, but low-level constructs - like malloc and free, they can cause resource leaks if misused

Back

Signal and Wait

Front

signaling process is blocked until the resumed process leaves the monitor

Back

How Processes Use Resources

Front

A process requests a resource, waiting until the resource is available, and once it is, obtains exclusive access to it; the process then uses the resource; the process then releases the resource, freeing it for use by other processes

Back

Examples of Deadlock Handling

Front

Only databases actually acknowledge them; proper handling is hard; deadlocks don't happen very often; user-level programmers have an incentive to avoid them

Back

Producer

Front

Creates chunks of data; must be delayed as appropriate so as not to overrun the buffer

Back

Livelock: Priority Inversion

Front

Less severe than a deadlock, but still a problem: if a lower-priority process holds a lock needed by a higher-priority process, that higher-priority process's priority is effectively lowered to that of the lower-priority one

Back

methods

Front

functions and procedures that operate upon the object

Back

Signal and Continue

Front

the signaling process continues, and the resumed process waits until it can enter the monitor normally

Back

Concurrent Pascal Method

Front

signaling leaves the monitor don't signal until critical section is done

Back

No Resource Preemption Fix

Front

take away all the resources it currently has and make it wait on all of them at once; can also allow a process to steal a resource from another process if that other process is waiting; not useful for mutex locks or semaphores

Back

Victim Selection Algorithm

Front

A cost function determines which process in the set would cause the least mischief to terminate; some factors: - process priority - how close the process is to completion - how many resources the process has - how many more resources the process needs - how likely terminating the process is to help solve the problem - whether or not the process is interactive

Back

Deadlock Recovery Plan C

Front

Preempt specific resources from a process; use victim selection to determine the process we're going to preempt them from; requires the victim process to be able to roll back to the state before it got the resources it's having taken away

Back

Deadlock Recovery Plan B

Front

Terminate processes involved in the deadlock until the deadlock goes away; define a victim selection algorithm; requires re-running the deadlock detection algorithm after every process is killed

Back

Shared Memory

Front

convenient for large amounts of data

Back

Circular Wait Fix

Front

Impose a total order on resources: each resource has a number; a process cannot request a resource if it holds any resource with a higher number than that resource; mostly prevents circular waits because the circle can't be completed; still limits resource allocation

Back

object

Front

abstract data type that contains: public and private data public and private methods

Back

condition variables

Front

like a queued mutex, except a condition cannot be free: waiting on this always waits; signaling this can only free a process already waiting on it; a thread that is waiting on this does not occupy the monitor; provided by monitors internally

Back

Message Passing

Front

avoids conflict and works better across networks

Back

Test and Set Instruction

Front

defines the minimum possible critical section in hardware; if called on true, it stays true and returns true; if called on false, it becomes true and returns false

Back

Critical Section

Front

Segment of code In a cooperating process Modifies shared information in a way that either: - Can destructively interfere with other critical sections - Can be destructively interfered with by other critical sections

Back

Mutex Locks with Disabling Interrupts

Front

Wait is never interrupted between determining that m is true and setting it to false In one version, wait yields, in another version, wait just keeps checking whenever it can get the CPU

Back

Dining Philosophers Problem

Front

A bunch of philosophers sit around a table with one fork between each pair of philosophers; each philosopher: - thinks until they get hungry - tries to pick up both forks and eat - eats until they're full - puts down their forks - repeats; to eat, each philosopher needs both the left and right fork; to prevent deadlock, number the forks and always pick up the lower-numbered fork first - every philosopher grabs their left fork first except the last one

Back

Deadlock Detection

Front

Look at the set of waiting processes; if there's a circular wait, there's a deadlock; an O(n^2) problem - detecting a cycle in a graph

Back

Windows Synchronization

Front

uses simple spinlocks and mutex locks in the kernel; adapts spinlocks to interrupt-disabling for single-processor systems; provides thread synchronization with dispatcher objects: - mutex locks - semaphores - timers - conditions (events); lock-guarded shared memory

Back

Inter-Process Communication

Front

Uses Shared Memory and Message Passing

Back

Conditions for Deadlock

Front

occur when all four conditions are met: mutual exclusion; hold and wait; no resource preemption; circular wait

Back

The Producer-Consumer Problem

Front

example of inter-process sharing; the producer places chunks of data in a shared memory location where the consumer can read them

Back

Monitors in Practical Use

Front

puts all synchronized behavior in one place; no longer need to synchronize the main code; the threads' code contains only the logic particular to each thread; the monitor's code contains all the logic that lets the threads interact

Back

Consumer

Front

operates on chunks of data needs to be delayed as appropriate to wait for more generated data

Back

Section 5

(8 cards)

Monitors in C#

Front

no monitors proper; provides a type of lock variable called a Monitor, associated with a specific object, to simulate the actions of monitors manually; gives one condition variable for free with Wait() and Pulse(); also uses the WaitHandle class

Back

Turnstiles

Front

Solaris's method of queuing for adaptive mutexes dynamically allocated queues that implement priority inheritance

Back

Adaptive Mutexes

Front

shift themselves from spinlocks to yielding if the kernel determines they're going to have to wait any real length of time for the lock

Back

Solaris Synchronization

Front

provides standard semaphores and conditions; adaptive mutexes and turnstiles; every synchronization mechanism available to the kernel is available to users (except priority inheritance)

Back

Monitors in Java

Front

every object is a monitor; declare a method as synchronized; can also declare synchronized blocks; wait() and notify() implement a single condition variable per object for free

Back

Wait Equation

Front

Turnaround - Burst

Back

Critical Section Objects

Front

provide mutexes at the user level that only allocate kernel mutexes if they have to used by Windows

Back

Turnaround Equation

Front

Completion - Arrival

Back