Section 1


Cards (71)

Section 1

(50 cards)

Many-to-Many Multithreading Model

Front

Allows many user-level threads to be mapped to many kernel threads. The operating system can create a sufficient number of kernel threads to handle the user threads.

Back

Describe the reason why the Operating System has Zombie processes instead of directly freeing them.

Front

Zombie processes stay in memory to hold their exit-status information so that the parent process can retrieve it later with wait(), when the parent is ready for it.

Back

In a system with non-preemptive scheduling, the CPU scheduler is invoked when:

Front

A process switches from the running state to the waiting state or the process terminates

Back

fork() and threads

Front

Creates a new process containing only one thread: a copy of the thread that called fork().

Back

Many to One Multithreading Model

Front

Many user-level threads mapped to a single kernel thread. One thread blocking causes all to block. Multiple threads may not run in parallel on a multicore system because only one may be in the kernel at a time.

Back

Atomic operation means

Front

that it completes in its entirety without worrying about interruption by any other potentially conflict-causing process.

Back

Arbitrary Speed

Front

No assumption can be made about the relative speed of different processes (though all processes have a non-zero speed).

Back

t/f In Symmetric Multiprocessing, all the other CPUs are coordinated by a Master CPU.

Front

False

Back

A race condition exists when

Front

the order in which the threads execute changes the final result. Solution: make sure that at most one thread is in its critical section at a time; these critical sections must be mutually exclusive in time.

Back
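
A minimal sketch of such a race, assuming POSIX threads: two threads increment a shared counter with no mutual exclusion, so the final value depends on how their read-modify-write sequences interleave.

```c
#include <pthread.h>
#include <stdio.h>

static long counter = 0;          /* shared data touched by the critical section */

static void *worker(void *arg) {
    for (int i = 0; i < 1000000; i++)
        counter++;                /* read-modify-write: not atomic, so updates can be lost */
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    /* Expected 2000000, but the printed value varies from run to run. */
    printf("counter = %ld\n", counter);
    return 0;
}
```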

One to One Multithreading Model

Front

Each user-level thread maps to one kernel thread. If one user-level thread blocks, the scheduler will just switch to another thread in the same process. Most used: Windows and Linux

Back

Processes have

Front

Their own address space:
• Text section (text segment): contains the executable code
• Data section (data segment): contains the global variables
• Stack: contains temporary data (local variables, return addresses, ...)
• A process may also contain a heap, which holds memory that is dynamically allocated at run time

Back

On a single-processor system, which of the following would appear in a correct solution to the critical-section problem for user-level processes?

Front

A process that does not execute code related to the critical section should not prevent others from entering the critical section

Back

User threads

Front

Supported above the kernel and implemented by a thread library at the user level. Since the kernel is not involved, thread switching can be very fast and the thread scheduler can be implemented very efficiently in user space. Blocking system calls are highly problematic: all the threads in the process risk being blocked!

Back

Describe the key steps that occur when an Interrupt is generated.

Front

1. Interrupt is issued
2. Finish the current instruction
3. Switch to kernel mode
4. Save the process state
5. Execute the Interrupt Service Routine (ISR)
6. Restore the process state
7. Switch to user mode

Back

critical section

Front

code segment in which the shared data is accessed.

Back

If a process P executes the wait operation on condition variable C within a monitor:

Front

It is immediately suspended

Back

what were batch systems

Front

Operators batched together jobs to run as a group.
• First (rudimentary) operating system
• Main memory allocated to the kernel and a single user job at a time
• Jobs with similar needs were often grouped
• No user interaction; programs ran until they finished ("uniprogramming")

Back

Which of the following causes a Process to leave the Running state?

Front

Timer expires

Back

Progress:

Front

No process executing a code segment unrelated to a given critical section can block another process trying to enter the same critical section

Back

What is a real-time system and what is the difference between hard and soft real-time?

Front

A real-time system is one with well-defined, rigid time constraints. Hard real-time has critical tasks that must be completed on time. Soft real-time makes no absolute timing guarantees but handles critical tasks on a best-effort basis.

Back

Assume that in Linux, a process P creates a child process Q. Process Q loads a program that will require many hours to run. Immediately after Q is created, P calls exit(0) and exits without ever calling wait. Which of the following best describes one of the results of this scenario?

Front

Q will be adopted by init

Back

Von Neumann Cycle of Instruction Execution

Front

• Fetch the next instruction from memory
• Decode it to determine its type and operands
• Execute it

Back

Task parallelism

Front

Distributing threads across cores, each thread performing unique operation

Back

Data parallelism

Front

Distributes subsets of the same data across multiple cores, same operation on each core

Back

Multiple threads share and don't share:

Front

Share: the address space, open files. Don't share: registers, stack.

Back

What does multiprogramming refer to?

Front

Multiprogramming allows multiple processes/jobs to be in memory at a time, ready to run. It also requires a scheduler that can bring jobs into memory and a scheduler that can switch between ready jobs.

Back

Kernel threads

Front

Supported directly by the OS: the kernel performs thread creation, scheduling, and management. All calls that might block a thread are implemented as system calls. When a thread blocks, the kernel may choose another thread from the same process, or a thread from a different process.

Back

Two-level Multithreading Model

Front

Similar to Many-to-Many, except that it also allows a user thread to be bound directly to a kernel thread.

Back

Which of these hardware concepts stores the address of the next instruction to be executed?

Front

Program Counter

Back

busy waiting

Front

If the critical section is being used, waiting processes loop continuously at the entry point

Back

Select the best description of what happens when the following code is executed in the parent and child processes: pid_t id = fork();

Front

id is set to the child's PID for the parent and 0 for the child.

Back
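
A minimal sketch, assuming POSIX, combining this card with the wait()/zombie cards above: fork() returns the child's PID in the parent and 0 in the child, and waitpid() reaps the child's exit status.

```c
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t id = fork();            /* one process becomes two */
    if (id == 0) {
        /* child: fork() returned 0 */
        printf("child: pid=%d\n", getpid());
        _exit(0);                 /* exit status kept by the kernel until the parent reaps it */
    } else if (id > 0) {
        /* parent: fork() returned the child's PID */
        int status;
        waitpid(id, &status, 0);  /* reap the child, clearing its zombie entry */
        printf("parent: child %d exited with %d\n", id, WEXITSTATUS(status));
    }
    return 0;
}
```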

Asymmetric Multiprocessing

Front

"Boss/worker" structure • The kernel runs on a particular processor

Back

t/f After handling an Interrupt, the OS will set the dual mode bit to 0 before returning to the process

Front

False

Back

Describe the function of a short-term scheduler

Front

The short-term scheduler selects the next Process to run on the CPU from the Ready Processes.

Back

Amdahl's Law

Front

Identifies the performance gain from adding cores to an application that has both serial and parallel components: speedup ≤ 1 / (S + (1 − S)/N), where S is the serial fraction and N is the number of cores.

Back
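
A small worked example, assuming 25% of the program is serial (S = 0.25) and N = 4 cores; the helper name amdahl_speedup is illustrative.

```c
#include <stdio.h>

/* Amdahl's Law: speedup <= 1 / (S + (1 - S) / N),
 * where S is the serial fraction and N is the number of cores. */
static double amdahl_speedup(double serial_fraction, int cores) {
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores);
}

int main(void) {
    /* 25% serial code on 4 cores -> 1 / (0.25 + 0.75/4) ~= 2.29x */
    printf("speedup = %.2f\n", amdahl_speedup(0.25, 4));
    return 0;
}
```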

Which mode is switched to when an Interrupt is received?

Front

Kernel Mode

Back

Bounded Waiting:

Front

No process should have to wait forever to enter a critical section.

Back

What does the Process Control Block save

Front

The context of the process being switched out (i.e. general-purpose registers, stack pointer, Program Status Word).

Back

What does the Interrupt Vector (Interrupt Service Vector) store?

Front

The address of each procedure to handle the interrupt.

Back

If a process P executes the signal operation on semaphore S:

Front

it will move another process blocked on S to the ready state

Back

Which of the following is stored in the Process Control Block during a context switch?

Front

Program Status Word

Back

Dual Mode

Front

User mode
• Execution done on behalf of a user; dual-mode bit is 1
Kernel mode
• Execution done on behalf of the operating system; dual-mode bit is 0

Back

What can cause an interrupt?

Front

I/O Device Instruction, System Call, Timer Triggers, Page Fault

Back

Mutual Exclusion

Front

No two processes may be simultaneously inside the same critical section

Back

Shared Memory v Message Passing

Front

Shared Memory is faster but is prone to race conditions

Back

what is a time sharing system

Front

Multiprogramming, but also allowed user interaction.
• Each user is provided with an on-line terminal
• Response time for each user should be short
• All active users must have a fair share of the CPU time

Back

Symmetric Multiprocessing

Front

All processors are peers.
• Kernel routines can execute on different CPUs, in parallel
• Tightly coupled, running independently

Back

TestAndSet (TAS) instruction

Front

Tests and modifies the content of a memory word atomically:
1. Read a variable from memory into a register
2. Set that variable to a new non-zero value (e.g. 1)
Both steps happen in hardware as a single operation.

Back
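
A minimal spin-lock sketch assuming C11 atomics; atomic_flag_test_and_set plays the role of the TAS instruction described on this card.

```c
#include <stdatomic.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;   /* clear = free, set = held */

void acquire(void) {
    /* atomic_flag_test_and_set reads the old value and sets the flag
     * in one indivisible operation; keep spinning while it was already held. */
    while (atomic_flag_test_and_set(&lock))
        ;                                     /* busy waiting at the entry point */
}

void release(void) {
    atomic_flag_clear(&lock);                 /* set the flag back to free */
}
```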

exec() and threads

Front

The new program replaces the entire process.
• "All threads other than the calling thread are destroyed during an exec()"

Back
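
A minimal sketch, assuming POSIX, of the usual fork-then-exec pattern; on success execlp() never returns, because the new program has replaced the process image.

```c
#include <stdio.h>
#include <unistd.h>

int main(void) {
    if (fork() == 0) {
        /* child: replace this process image with the "ls" program */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");   /* only reached if exec failed */
        _exit(1);
    }
    /* parent continues running the original program */
    return 0;
}
```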

Describe the function of a medium-term scheduler. Mention how it will improve system performance

Front

The medium-term scheduler selects a process to remove from main memory, decreasing the degree of multiprogramming. Processes selected for removal are typically ones in a waiting state, chosen when more resources are required. Removing them frees resources (memory) and allows the long-term scheduler to add more processes to run on the CPU, which increases resource utilization.

Back

Section 2

(21 cards)

Dispatcher module

Front

Gives control of the CPU to the process selected by the short-term scheduler:
• switching context
• switching to user mode
• jumping to the proper location in the user program to restart that program

Back

condition variable

Front

Can only be used with the operations wait and signal: x.wait(); x.signal(); (condition variables hold no values).

Back
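
A minimal sketch of x.wait()/x.signal() using POSIX condition variables paired with a mutex, an approximation of monitor behavior rather than the course's exact construct.

```c
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;  /* the monitor's lock */
static pthread_cond_t  x = PTHREAD_COND_INITIALIZER;   /* condition variable x */
static bool ready = false;

void consumer(void) {
    pthread_mutex_lock(&m);
    while (!ready)                    /* x.wait(): suspend and release the lock */
        pthread_cond_wait(&x, &m);
    /* ... use the shared state here ... */
    pthread_mutex_unlock(&m);
}

void producer(void) {
    pthread_mutex_lock(&m);
    ready = true;
    pthread_cond_signal(&x);          /* x.signal(): wake one waiting thread */
    pthread_mutex_unlock(&m);
}
```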

CPU-I/O Burst Cycle

Front

Process execution consists of a cycle of CPU execution and I/O waits

Back

Turnaround Time for Processes

Front

Time from when the process is submitted to when it ends (completion time minus arrival time).

Back

I/O-bound process

Front

Spends more time doing I/O than computations, many short CPU bursts

Back

Semaphores

Front

Avoid busy waiting by blocking a process's execution instead of spinning.

Back

wait(s)

Front

Acquire the lock: blocks if the semaphore is 0; otherwise decrements the semaphore.

Back

A set of processes are said to be in a deadlock state when

Front

every process in the set is waiting for an event that can be caused only by another process in the set.

Back

Waiting Times for Processes

Front

Total time the process spends waiting in the ready queue.

Back

A binary lock variable that uses busy waiting is called a

Front

spin lock

Back

Shortest-Job-First (SJF) Scheduling

Front

SJF is optimal: it gives the minimum average waiting time for a given set of processes.

Back

First-Come, First-Served (FCFS) Scheduling

Front

Suffers from the convoy effect: short processes stuck behind a long process.

Back

CPU scheduling decisions may take place when a process (preemptive v non-preemptive):

Front

1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready
4. Switches from new to ready
5. Terminates
Scheduling under 1 and 5 only is non-preemptive (a process runs until it no longer needs the CPU); scheduling under all five is preemptive (a process runs until the scheduler decides to change it).

Back

signal(s)

Front

Release the lock: increment the semaphore (waking a blocked process, if any).

Back
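
A minimal sketch of wait(s) and signal(s) using POSIX unnamed semaphores (sem_wait/sem_post), initialized to 1 so the semaphore acts as a mutual-exclusion lock.

```c
#include <semaphore.h>
#include <pthread.h>

static sem_t s;
static int shared = 0;

static void *worker(void *arg) {
    sem_wait(&s);       /* wait(s): blocks while the semaphore is 0, then decrements it */
    shared++;           /* critical section */
    sem_post(&s);       /* signal(s): increments the semaphore, waking a blocked thread */
    return NULL;
}

int main(void) {
    sem_init(&s, 0, 1); /* initial value 1: the semaphore behaves as a lock */
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    sem_destroy(&s);
    return 0;
}
```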

Dispatch latency

Front

The time it takes for the dispatcher to stop one process and start another running.

Back

CPU-bound process

Front

Spends more time doing computations; few very long CPU bursts.

Back

Cache coherence

Front

The system behaves as if there is one copy of the data.
• If data is only being read, any number of caches can have a copy
• If data is being modified, at most one cached copy

Back

Multiprocessor Locks

Front

Disabling interrupts on one processor doesn't help: other threads are still running. How about atomic memory operations?
• Each entry in a processor cache is in either a read-only state or an exclusive state
• To read or write data held exclusive in some other cache, the processor must fetch that value
• An atomic operation works by obtaining an exclusive copy and removing the copies from all other processors
• After the access, other processors can reobtain a copy

Back

MCS Lock

Front

Maintains a list of threads waiting for the lock
• The thread at the front of the list holds the lock
• MCSLock::tail is the last thread in the list
• A new thread uses CompareAndSwap to add itself to the tail

Back
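
A minimal sketch of those steps, assuming C11 atomics; the names mcs_acquire/mcs_release and the node layout are illustrative rather than a standard API.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

typedef struct mcs_node {
    _Atomic(struct mcs_node *) next;   /* next waiter in the list, NULL if none */
    atomic_bool locked;                /* true while this thread must keep waiting */
} mcs_node;

typedef struct {
    _Atomic(mcs_node *) tail;          /* last thread in the list, NULL if the lock is free */
} mcs_lock;

void mcs_acquire(mcs_lock *L, mcs_node *me) {
    atomic_store(&me->next, NULL);
    atomic_store(&me->locked, true);
    /* Atomically append ourselves at the tail of the waiting list. */
    mcs_node *prev = atomic_exchange(&L->tail, me);
    if (prev != NULL) {
        atomic_store(&prev->next, me);       /* link in behind the previous waiter */
        while (atomic_load(&me->locked))     /* spin only on our own node */
            ;
    }
    /* prev == NULL means the list was empty: we hold the lock immediately. */
}

void mcs_release(mcs_lock *L, mcs_node *me) {
    if (atomic_load(&me->next) == NULL) {
        mcs_node *expected = me;
        /* No visible successor: try to swing the tail back to NULL (lock becomes free). */
        if (atomic_compare_exchange_strong(&L->tail, &expected, NULL))
            return;
        /* A new waiter swapped the tail but has not linked itself in yet. */
        while (atomic_load(&me->next) == NULL)
            ;
    }
    atomic_store(&atomic_load(&me->next)->locked, false);  /* pass the lock to the next waiter */
}
```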

Monitors

Front

High-level synchronization construct.
• Allows the safe sharing of an abstract data type among concurrent processes
• This is an object with procedures
• Only one thread may be anywhere in the monitor at any given time!

Back

A process that is forced to wait indefinitely in a synchronization program is said to be subject to

Front

starvation

Back