Section 1


Last updated

6 years ago

Date created

Mar 1, 2020

Cards (59)

Section 1

(50 cards)

lock(&lock)

Front

If no thread holds the lock when a thread calls lock, it acquires the lock and proceeds. If another thread holds the lock, the caller waits until that thread unlocks it.

Back
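
A minimal sketch of this behavior with POSIX threads, assuming the pthread mutex API stands in for the generic lock(&lock) used in these cards:

#include <pthread.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
int counter = 0;   // shared data protected by the lock

void *worker(void *arg) {
    pthread_mutex_lock(&lock);     // blocks if another thread holds the lock
    counter++;                     // critical section
    pthread_mutex_unlock(&lock);   // lets a waiting thread proceed
    return NULL;
}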

Turnaround time

Front

completion time - arrival time

Back
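
A quick worked example with made-up numbers: a job that arrives at t = 10 and completes at t = 50 has turnaround time 50 - 10 = 40 time units.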

Persistence

Front

Achieving long-term storage for important information

Back

What is a process?

Front

A running instance of a program

Back

Are all programs threaded?

Front

Yes, all programs have at least 1 thread.

Back

Process States

Front

Running, Ready, Blocked

Back

Wait() overview

Front

Wait until a child finishes: wait() won't return until one of the process's children finishes. The waitpid() variant takes a child PID as an argument to wait for a specific child.

Back

Fork() overview

Front

Makes an exact copy of the parent process, called the child. The child has its own copy of the address space, registers, and PC. Which of the two runs first is non-deterministic.

Back

timedlock(&lock, time)

Front

returns after a timeout or after acquiring the lock, whichever happens first

Back

Stride Scheduling

Front

Each job in the system has a stride, which is inversely proportional to the number of tickets it has. When a process runs, a counter for it (its pass value) is incremented by its stride. Choose the process with the lowest pass value to run next. The quantum changes depending on the number of processes scheduled.

Back
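
A rough sketch of the pass/stride bookkeeping described above; the struct layout and STRIDE1 constant are illustrative (real implementations keep jobs in a sorted queue):

#define STRIDE1 10000   // large constant; stride = STRIDE1 / tickets

struct proc {
    int tickets;
    int stride;     // STRIDE1 / tickets
    int pass;       // accumulated pass value
};

// Pick the runnable process with the lowest pass value, run it for a
// quantum, then charge it by adding its stride to its pass value.
struct proc *schedule_once(struct proc *procs, int n) {
    struct proc *best = &procs[0];
    for (int i = 1; i < n; i++)
        if (procs[i].pass < best->pass)
            best = &procs[i];
    /* ... run 'best' for one quantum ... */
    best->pass += best->stride;
    return best;
}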

fork() return values

Front

< 0 - error
0 - child
> 0 - parent; value is the PID of the child

Back
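
A small sketch showing how the three return values are typically handled with standard POSIX calls:

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t rc = fork();
    if (rc < 0) {                 // error: fork failed
        perror("fork");
        exit(1);
    } else if (rc == 0) {         // child: fork returned 0
        printf("child pid %d\n", (int) getpid());
    } else {                      // parent: rc is the child's PID
        waitpid(rc, NULL, 0);     // wait for that specific child
        printf("parent of child %d\n", (int) rc);
    }
    return 0;
}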

trylock(&lock)

Front

returns failure if the lock is already held

Back
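
A sketch of both non-blocking variants using pthreads (pthread_mutex_trylock and pthread_mutex_timedlock); the generic trylock/timedlock names on these cards follow OSTEP's lock API:

#include <pthread.h>
#include <time.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void example(void) {
    // trylock: returns immediately with an error code if the lock is held
    if (pthread_mutex_trylock(&lock) == 0) {
        /* ... critical section ... */
        pthread_mutex_unlock(&lock);
    }

    // timedlock: gives up after ~1 second if the lock is not acquired
    struct timespec deadline;
    clock_gettime(CLOCK_REALTIME, &deadline);
    deadline.tv_sec += 1;
    if (pthread_mutex_timedlock(&lock, &deadline) == 0) {
        /* ... critical section ... */
        pthread_mutex_unlock(&lock);
    } else {
        /* timed out */
    }
}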

Virtualization

Front

Transforms physical resources into more general, powerful, easy-to-use virtual forms

Back

Three key ideas of OS

Front

Virtualization, Concurrency, Persistence

Back

Exec() overview

Front

Runs a different program in place of the currently running one: exec() replaces the process's code and data and, on success, does not return.

Back
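
A minimal sketch of the usual fork-then-exec pattern; execvp is one of several exec variants:

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t rc = fork();
    if (rc == 0) {
        // child: replace this process image with /bin/ls
        char *args[] = { "ls", "-l", NULL };
        execvp(args[0], args);
        perror("execvp");     // only reached if exec fails
        return 1;
    }
    wait(NULL);               // parent waits for the child
    return 0;
}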

Ready Process State

Front

a process is waiting to be assigned to a CPU

Back

function to wait for condition variable

Front

wait()

Back

Rules for MLFQ

Front

1. Higher priority jobs are run first.
2. If jobs have equal priority, round-robin (RR) between them.
3. When a job first arrives in the system, give it the highest priority.
4. Once a job uses up its time allotment at a given level (regardless of how many times it has given up the CPU), its priority is reduced (it moves down a queue).
5. Periodically boost all jobs.

Back
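
A toy sketch of rules 1 and 2 (highest non-empty queue wins, round-robin within a level); the queue layout and names here are hypothetical:

#define NUM_LEVELS 3              // 0 = highest priority

struct job { int id; struct job *next; };
struct job *queues[NUM_LEVELS];   // one FIFO run queue per priority level

// Dequeue the head of the highest-priority non-empty queue.
// Re-enqueueing it at the tail after its time slice gives round-robin
// behavior among jobs at the same level.
struct job *pick_next(void) {
    for (int lvl = 0; lvl < NUM_LEVELS; lvl++) {
        if (queues[lvl] != NULL) {
            struct job *j = queues[lvl];
            queues[lvl] = j->next;
            return j;
        }
    }
    return NULL;   // nothing runnable
}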

Effects of a small quantum

Front

Better response time, but the cost of context switching will dominate overall performance.

Back

Blocked Process State

Front

a process has made an I/O request and is waiting for it to complete.

Back

Components of a process

Front

Memory and Machine state

Back

Limited Direct Execution Protocol

Front

The way to run processes on the CPU. Phase 1: initialize the trap table at boot. Phase 2: switch to user mode when it's time to run a process.

Back

Single-queue multiprocessor scheduling (SQMS)

Front

Reuses the basic single-processor scheduling framework for multiprocessors: keep a single queue, and if you have 2 CPUs, choose the best 2 jobs to run. Has problems with locking, scaling, and keeping cache affinity.

Back

Process vs Program

Front

Program: dead, static code/data on disk. Process: dynamic instance of code and data.

Back

Four components of computer system

Front

hardware, operating system, application programs, users

Back

TestAndSet code

Front

int TestAndSet(int *old_ptr, int new) {
    int old = *old_ptr;  // fetch old value at old_ptr
    *old_ptr = new;      // store 'new' into old_ptr
    return old;          // return the old value
}

Back
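
The C function above is pseudocode for an atomic hardware instruction; as plain C its three statements are not actually atomic. A sketch of the spin lock it enables, using a GCC/Clang atomic builtin as the real test-and-set:

typedef struct { volatile int flag; } lock_t;   // 0 = free, 1 = held

void lock(lock_t *l) {
    // Atomically set flag to 1 and get the old value; if the old value
    // was 1, someone else holds the lock, so keep spinning.
    while (__sync_lock_test_and_set(&l->flag, 1) == 1)
        ;   // spin-wait
}

void unlock(lock_t *l) {
    __sync_lock_release(&l->flag);   // atomically store 0
}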

Trap instruction

Front

instruction that jumps into the kernel - used in system calls

Back

MLFQ Scheduler

Front

Multi-Level Feedback Queue. Uses multiple queues at different priorities to determine which job to run; each queue level is given a time slice.

Back

PCB

Front

(Process Control Block) Unique to every process. Serves as the repository for any information that may vary from process to process:
- Process state
- Program counter
- CPU registers
- Scheduling info
- Memory management info
- Accounting info
- I/O info

Back
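
A hypothetical sketch of what such a structure might hold; the field names and sizes are invented for illustration (real kernels, e.g. xv6's struct proc or Linux's task_struct, differ in detail):

enum proc_state { RUNNING, READY, BLOCKED };

struct pcb {
    int pid;                     // process identifier
    enum proc_state state;       // Running / Ready / Blocked
    unsigned long pc;            // saved program counter
    unsigned long regs[32];      // saved CPU registers (hypothetical count)
    int priority;                // scheduling info
    void *page_table;            // memory-management info
    long cpu_time_used;          // accounting info
    int open_files[16];          // I/O info: open file descriptors
};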

throughput

Front

amount of jobs completed per unit of time

Back

CFS sched latency

Front

Determines how long one process should run before the scheduler considers a switch (effectively setting its time slice, but dynamically): each process runs for sched_latency divided by the number of runnable processes.

Back
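
A worked example with hypothetical numbers: with sched_latency = 48 ms and 4 runnable processes, each process gets a time slice of 48 / 4 = 12 ms (but never less than the min granularity, covered on a later card).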

Proportional Share scheduler

Front

(aka Fair-Share Scheduler) Ensures each process gets a certain percentage of CPU time; not optimized for response or turnaround time.

Back

Completely Fair Scheduler

Front

(CFS) A type of stride scheduler: processes accumulate vruntime (virtual runtime) and the OS runs the process with the least vruntime. Uses 'niceness' to weight how quickly vruntime accumulates.

Back

Concurrency

Front

Handling many simultaneously executing processes/threads: locking out concurrent access to shared data to avoid collisions and conflicts.

Back

CFS Min granularity

Front

time slice never gets smaller than this value

Back

Multi-queue multiprocessor scheduling (MQMS)

Front

Have multiple queues. When a job arrives, it gets placed on one queue (randomly or algorithmically determined). The problem is imbalance; migration fixes this: a queue can 'work steal' from another queue if it doesn't have enough work and the other queue has a lot.

Back

Effects of a large quantum

Front

Amortizes (makes worthwhile) the cost of context switching, but gives worse response time.

Back

FIFO scheduler

Front

First in, first out. Problem: the convoy effect - turnaround time can suffer if short jobs are 'stuck behind' a long job. Realistic use: real-time systems.

Back

function to send condition variable

Front

signal()

Back
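
A minimal sketch of the wait()/signal() pairing from the two condition-variable cards, using pthreads (pthread_cond_wait must be called with the associated mutex held, inside a while loop that rechecks the condition):

#include <pthread.h>

pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t  c = PTHREAD_COND_INITIALIZER;
int ready = 0;   // the condition being waited on

void *waiter(void *arg) {
    pthread_mutex_lock(&m);
    while (!ready)                   // recheck: wakeups can be spurious
        pthread_cond_wait(&c, &m);   // releases m while sleeping, reacquires on wakeup
    pthread_mutex_unlock(&m);
    return NULL;
}

void *signaler(void *arg) {
    pthread_mutex_lock(&m);
    ready = 1;
    pthread_cond_signal(&c);         // wake one waiter
    pthread_mutex_unlock(&m);
    return NULL;
}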

Niceness (CFS)

Front

How CFS prioritizes processes. Ranges from -20 to +19; the default is 0. Higher niceness means lower priority.

Back

Thread

Front

A thread is a lightweight process: an execution stream that shares an address space with the other threads in its process. Each thread has its own stack.

Back
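
A small sketch of creating and joining threads with pthreads; both threads share the global counter (its updates would need a lock, as in the lock cards above):

#include <pthread.h>
#include <stdio.h>

int counter = 0;   // shared: visible to all threads in the process

void *worker(void *arg) {
    counter++;     // unsynchronized update, shown only for illustration
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);   // each thread gets its own stack
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", counter);
    return 0;
}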

Round Robin Scheduler

Front

Cycles through jobs running a time slice of each job. Great response time, poor turnaround time. Good for fairness.

Back

What is different between threads?

Front

Stack, registers, TCBs (for context switching)

Back

STCF scheduler

Front

Shortest Time-to-Completion First: introduces preemption to SJF. Bad for response time.

Back

Running process state

Front

instructions of a process are being executed

Back

Response time

Front

initial schedule time - arrival time

Back
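
A worked example with made-up numbers: a job arrives at t = 0 but is first scheduled at t = 10, so its response time is 10 - 0 = 10 time units.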

Problems with basic MLFQ

Front

You can game the scheduler (relinquish the CPU after using 99% of a time slice to stay at high priority). If there are too many 'interactive' jobs in the system, long-running jobs will be starved. Programs could change behavior over time. (The last two problems are countered by the periodic boost.)

Back

REMEMBER TO STUDY THREAD CODE

Front

--- THREAD CODE ---

Back

SJF Scheduler

Front

Shortest Job First. Problem: if you start running a job and a shorter one arrives, you cannot switch.

Back

What is the time slice in Round Robin called?

Front

scheduling quantum

Back

Section 2

(9 cards)

CompareandSwap code

Front

int CompareAndSwap(int *ptr, int expected, int new) {
    int actual = *ptr;
    if (actual == expected) {
        *ptr = new;
    }
    return actual;
}

Back
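
As with test-and-set, the C above is pseudocode for an atomic instruction. A sketch of a spin lock built on a real compare-and-swap (GCC/Clang builtin):

int flag = 0;   // 0 = free, 1 = held

void lock(void) {
    // Atomically: if flag == 0, set it to 1; the builtin returns the old value.
    while (__sync_val_compare_and_swap(&flag, 0, 1) != 0)
        ;   // spin until we were the one to change 0 -> 1
}

void unlock(void) {
    __sync_lock_release(&flag);   // atomically store 0
}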

Deadlock conditions

Front

1. Mutual exclusion
2. Hold and wait
3. No preemption
4. Circular wait

Back

Deadlock Prevention: Mutual Exclusion

Front

Avoid needing mutual exclusion at all, e.g., by building lock-free data structures with atomic hardware instructions such as compare-and-swap.

Back

LDE

Front

limited direct execution

Back

semaphore post

Front

acquire s->lock
s->value++
signal s->cond
free s->lock

Back

Deadlock prevention: Hold and Wait

Front

Prevent other threads from interleaving their acquisitions by acquiring all needed locks at once, atomically (e.g., guarded by a global prevention lock). Alternatively, release held locks over time, but a thread cannot regain locks until all are released.

Back
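
A sketch of the 'acquire all at once' idea: a global prevention lock guarantees no other thread can interleave its lock acquisitions between ours (the lock names are illustrative):

#include <pthread.h>

pthread_mutex_t prevention = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t L1 = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t L2 = PTHREAD_MUTEX_INITIALIZER;

void acquire_both(void) {
    pthread_mutex_lock(&prevention);   // begin atomic lock acquisition
    pthread_mutex_lock(&L1);
    pthread_mutex_lock(&L2);
    pthread_mutex_unlock(&prevention); // all needed locks are now held
}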

semaphore wait

Front

acquire s->lock
s->value--
while (s->value < 0) { wait }
free s->lock

Back
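
A runnable sketch combining the two semaphore cards into a semaphore built from a lock and a condition variable. This follows OSTEP's 'Zemaphore' variant, in which the value never goes negative: wait sleeps while the value is zero or less and decrements afterwards, while post increments and signals, matching the cards' structure.

#include <pthread.h>

typedef struct {
    int value;
    pthread_mutex_t lock;
    pthread_cond_t  cond;
} zem_t;   // name borrowed from OSTEP's "Zemaphore" example

void zem_init(zem_t *s, int value) {
    s->value = value;
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->cond, NULL);
}

void zem_wait(zem_t *s) {
    pthread_mutex_lock(&s->lock);
    while (s->value <= 0)                     // sleep until a post arrives
        pthread_cond_wait(&s->cond, &s->lock);
    s->value--;                               // consume one unit
    pthread_mutex_unlock(&s->lock);
}

void zem_post(zem_t *s) {
    pthread_mutex_lock(&s->lock);
    s->value++;                               // produce one unit
    pthread_cond_signal(&s->cond);            // wake one waiter, if any
    pthread_mutex_unlock(&s->lock);
}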

Deadlock prevention: No preemption

Front

Use trylock() and release held locks on failure; this potentially leads to livelock, though.

Back
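
A sketch of the trylock-based approach: grab L1, then try L2; if L2 is busy, release L1 and start over rather than waiting while holding a lock. This avoids deadlock, but two threads can livelock by retrying in lockstep (the lock names are illustrative):

#include <pthread.h>

pthread_mutex_t L1 = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t L2 = PTHREAD_MUTEX_INITIALIZER;

void acquire_both(void) {
    for (;;) {
        pthread_mutex_lock(&L1);
        if (pthread_mutex_trylock(&L2) == 0)
            return;                     // got both locks
        pthread_mutex_unlock(&L1);      // back off and retry from the top
        /* optionally sleep a random amount here to reduce livelock */
    }
}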

two phase lock

Front

Spin for a while, hoping to acquire the lock soon; if not acquired, enter the second phase. Second phase: sleep until the lock is available.

Back
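
A rough sketch of the two phases built on pthreads primitives; real two-phase locks (e.g. Linux futex-based locks) spin on an atomic flag and then sleep in the kernel, and the spin count here is arbitrary:

#include <pthread.h>

#define SPIN_TRIES 1000   // arbitrary: how long to spin before sleeping

void two_phase_lock(pthread_mutex_t *m) {
    // Phase 1: spin briefly, hoping the lock frees up soon.
    for (int i = 0; i < SPIN_TRIES; i++)
        if (pthread_mutex_trylock(m) == 0)
            return;                      // got it without sleeping
    // Phase 2: give up spinning and block until the lock is available.
    pthread_mutex_lock(m);
}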