Section 1


Last updated

1 year ago

Date created

Mar 14, 2020

Cards (34)


Lottery Scheduling

Front

Hold a lottery to determine which process should get to run next; processes that should run more often should be given more chances to win the lottery

Back
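The winner-picking loop at the heart of lottery scheduling is tiny. Below is a minimal Python sketch; the `lottery_pick` name and the dict-of-tickets representation are illustrative, not from any particular kernel:

```python
import random

def lottery_pick(tickets, rng):
    """tickets: dict mapping job name -> ticket count.
    Draw a winning ticket number, then walk the jobs,
    accumulating a counter until it passes the winner."""
    total = sum(tickets.values())
    winner = rng.randrange(total)      # winning ticket in [0, total)
    counter = 0
    for job, count in tickets.items():
        counter += count
        if counter > winner:
            return job
```

Over many draws, a job holding 75 of 100 tickets should win roughly three times as often as a job holding 25.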

Time sharing

Front

Run one process for a little while, then run another one, and so forth

Back

Temporal Locality

Front

Hit once, probably hit again

Back

Context Switch

Front

The OS switches the CPU from one process to another: 1. Save the registers of the currently running process 2. Jump into kernel mode, onto the kernel stack 3. Restore the registers of the next process and jump back out to user mode

Back

States of a Process

Front

Running: the process is executing instructions
Ready: the process is ready to run, but the OS has chosen not to run it
Blocked: the process is not ready to run until some other event takes place

Back

Elements of a Process

Front

Program counter
Stack pointer

Back

SJF Scheduler

Front

Shortest Job First
Runs the shortest job first, then the next shortest, and so on; the decision is made when jobs arrive.
What if a huge job arrives, and shortly after, two small ones come in too? You get the convoy effect again.

Back

Cache Affinity

Front

When a process has been running on a particular CPU, it builds up state in that CPU's caches, so it is often advantageous to keep running it on the same CPU

Back

FIFO Scheduler

Front

First In, First Out
The first process to arrive is the first one to run to completion, and so on down the line.
Convoy effect: a number of relatively short potential consumers of a resource get queued behind a heavyweight resource consumer.

Back

Cache Coherence

Front

Keeping the caches of multiple CPUs consistent with each other and with main memory, so that no processor sees stale data

Back

Turnaround Time

Front

The time at which the job completes minus the time at which the job arrived in the system:
T(turnaround) = T(completion) - T(arrival)

Back
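The turnaround formula can be applied to a whole FIFO schedule in a few lines. This is a sketch under the assumption of one CPU and no preemption; `turnaround_times` is an illustrative name:

```python
def turnaround_times(jobs):
    """jobs: list of (arrival, runtime) pairs, run FIFO in list order.
    Returns T(turnaround) = T(completion) - T(arrival) for each job."""
    t = 0
    out = []
    for arrival, runtime in jobs:
        t = max(t, arrival) + runtime   # the job completes here
        out.append(t - arrival)
    return out
```

Running a 100-unit job ahead of two 10-unit jobs shows the convoy effect: the short jobs inherit the long job's runtime in their turnaround times.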

Lazily loading a process

Front

Loading pieces of code or data only as they are needed during program execution; performed by modern operating systems

Back

Direct Execution

Front

Run a program directly on a CPU

Back

STCF Scheduler

Front

Shortest Time-to-Completion First
Any time a new job enters the system, STCF determines which of the remaining jobs (including the current one) has the least time left, and schedules that one

Back

Spatial Locality

Front

Hit once, something around it will probably be accessed

Back

User mode

Front

Code that runs in user mode is restricted in what it can do

Back

Linux Completely Fair Scheduler

Front

As each process runs, it accumulates virtual runtime (vruntime). When a scheduling decision occurs, CFS picks the process with the lowest virtual runtime to run next.
sched_latency: how long one process should run before a switch is considered
min_granularity: the minimum time slice a process is given

Back
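CFS's core decision reduces to "lowest vruntime wins" (the real kernel keeps processes in a red-black tree for this; a dict is a stand-in here). A one-line sketch with an assumed `cfs_pick` name:

```python
def cfs_pick(vruntime):
    """vruntime: dict mapping process name -> accumulated virtual runtime.
    Pick the process with the lowest virtual runtime to run next."""
    return min(vruntime, key=vruntime.get)
```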

Response Time

Front

The time from when the job arrives in the system to the first time it is scheduled:
T(response) = T(first run) - T(arrival)

Back
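Response time for a FIFO schedule can be computed the same way as turnaround, just measuring to the job's first run instead of its completion. A sketch (one CPU, no preemption, `response_times` is an illustrative name):

```python
def response_times(jobs):
    """jobs: list of (arrival, runtime) pairs, run FIFO in list order.
    Returns T(response) = T(first run) - T(arrival) for each job."""
    t = 0
    out = []
    for arrival, runtime in jobs:
        start = max(t, arrival)        # first time this job gets the CPU
        out.append(start - arrival)
        t = start + runtime
    return out
```

With a 100-unit job at the head of the queue, the jobs behind it wait the full 100 units before their first run, which is exactly why time slicing helps response time.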

Round Robin Scheduler

Front

Instead of running jobs to completion, run each job for a time slice and then switch to the next job in the run queue; repeat until all the jobs are finished. It is fair!

Back
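The RR loop above maps directly onto a queue: pop a job, run it for at most one quantum, and re-append it if work remains. A minimal sketch assuming all jobs arrive at t = 0 (`round_robin` is an illustrative name):

```python
from collections import deque

def round_robin(runtimes, quantum):
    """runtimes: dict mapping job name -> total runtime.
    Returns each job's completion time under round-robin."""
    queue = deque(runtimes.items())
    t = 0
    finished = {}
    while queue:
        name, left = queue.popleft()
        ran = min(quantum, left)       # run for at most one time slice
        t += ran
        if left == ran:
            finished[name] = t         # job is done
        else:
            queue.append((name, left - ran))   # back of the run queue
    return finished
```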

Workload

Front

The set of processes running in the system

Back

Multi-level feedback queue

Front

Goals: optimize turnaround time and make the system feel responsive to interactive users.
Has distinct queues, each assigned a different priority level; MLFQ uses priorities to decide which job to run.
Possible issues: starvation and gaming the scheduler.

Back

Return from Trap

Front

Jumps back into the user program and reduces the privilege level back to user mode

Back

Process: wait()

Front

The parent waits for the child process to finish executing; when the child is done, wait() returns to the parent

Back

Trap Instruction

Front

Jumps into the kernel and raises the privilege level to kernel mode

Back

Kernel Mode

Front

The mode the operating system runs in -> full access to the machine, baby

Back

MLFQ Rules

Front

1) If Priority(A) > Priority(B), A runs and B doesn't
2) If Priority(A) = Priority(B), A and B run in round-robin
3) When a job enters the system, it is placed at the highest priority
4a) If a job uses up an entire time slice while running, its priority is reduced
4b) If a job gives up the CPU before the time slice is up, it stays at the same priority level
5) After some time period S, move all the jobs in the system to the topmost queue
Revised rule 4: once a job uses up its time allotment at a given level, its priority is reduced

Back
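Rules 1 and 2 together say: serve the highest non-empty queue, round-robin within it. A minimal sketch of just that picking step, with an assumed `mlfq_pick` name and one deque per priority level:

```python
from collections import deque

def mlfq_pick(queues):
    """queues: list of deques, highest priority first.
    Rules 1 & 2: pick a job from the highest non-empty queue,
    rotating it to the back for round-robin within that level."""
    for q in queues:
        if q:
            job = q.popleft()
            q.append(job)          # RR: back of its own queue
            return job
    return None                    # nothing to run
```

Demotion (rule 4) and the periodic priority boost (rule 5) would move jobs between the deques; they are omitted here.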

Ticket Mechanisms

Front

Ticket currency: allows a user with a set of tickets to allocate tickets among their own jobs in whatever currency they would like
Ticket transfer: a process can temporarily hand off its tickets to another process
Ticket inflation: a process can temporarily raise or lower the number of tickets it has

Back
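Ticket currency only works because the system converts each user's local tickets back into the global currency before holding the lottery. A sketch of that conversion (the `to_global_currency` name and integer division are assumptions of this example):

```python
def to_global_currency(user_global, user_jobs):
    """user_global: the user's ticket total in the global currency.
    user_jobs: dict mapping job -> tickets in the user's own currency.
    Scale each job's local share into global tickets."""
    local_total = sum(user_jobs.values())
    return {job: user_global * t // local_total
            for job, t in user_jobs.items()}
```

A user holding 100 global tickets who splits 1000 local tickets evenly between two jobs ends up with 50 global tickets per job.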

Cache

Front

Small, fast memories that hold copies of popular data that is found in the main memory of the system (main memory holds all the data)

Back

Eagerly loading a process

Front

Loading is done all at once before running the program

Back

Process: Signals

Front

Deliver external events to a process; a process can catch signals and change what it is doing

Back
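Catching a signal looks the same in most environments: install a handler, then deliver the signal. A Unix-only Python sketch that sends a signal to itself (the `caught` list is just a way to observe delivery):

```python
import os
import signal

caught = []

def handler(signum, frame):
    # runs when the signal is delivered to the process
    caught.append(signum)

signal.signal(signal.SIGUSR1, handler)   # catch SIGUSR1
os.kill(os.getpid(), signal.SIGUSR1)     # deliver it to ourselves
```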

Tickets

Front

Represent the share of a resource that a process should receive

Back

Process: fork()

Front

Creates an (almost) exact copy of the calling process; the child has a different PID.
The parent is the process calling fork(); the child is the one being created.
Which one runs first is picked by the scheduler -> non-deterministic

Back
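fork() and wait() are easiest to see together: the child takes the `pid == 0` branch and exits, while the parent blocks in waitpid() and recovers the child's exit status. A Unix-only Python sketch (the status value 7 is arbitrary):

```python
import os

def fork_and_wait():
    """Parent forks a child, waits for it, and returns its exit status."""
    pid = os.fork()
    if pid == 0:
        # child: fork() returned 0 here
        os._exit(7)                # exit with a distinctive status
    # parent: fork() returned the child's PID
    _, status = os.waitpid(pid, 0) # blocks until the child is done
    return os.WEXITSTATUS(status)
```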

Limited Direct Execution

Front

Run the program you want directly on the CPU, but first set up the hardware so as to limit what the process can do without OS assistance

Back

Process

Front

A running program

Back