
  1. If a kernel thread and an interrupt service routine access the same variable, how do you synchronize access to it? Disable interrupts on the local CPU while the kernel thread touches the variable (on SMP, combine this with a spinlock, e.g. `spin_lock_irqsave()`), or use atomic operations together with memory barriers.

  2. Say there is an operation like `a = b;` and you want to protect that statement from concurrency (SMP, preemption, or whatever) without using locks (spinlock, semaphore, mutex, etc.). How would you do it? Use atomic operations.

  3. What happens if a non-recursive mutex is locked more than once? A mutex is a shared variable that can be in one of two states: unlocked or locked. A non-recursive mutex is an ordinary mutex: any attempt to lock a mutex that is already locked either fails or blocks. So if a thread locks a mutex it already holds, it deadlocks: the thread is waiting for itself to unlock the mutex. Additional info: a reentrant (recursive) mutex may be locked multiple times by the same thread without causing a deadlock; the lock operation succeeds if and only if the locking thread is the one that already holds the lock.
  4. In a multi-threaded application, many threads are trying to access the same resource, say a global count, g. Threads are synchronized by the following code (assume lock is a static int variable, initialized to 0 (unlocked state)):

```c
if (lock)
    wait();   /* already locked, so sleep until someone wakes me up */
else
    lock = 1; /* I locked it */

/* Critical Section - increment g */

lock = 0;     /* lock released; wake up one of the waiting threads, if any */
```

What is the issue with this synchronization? Pick one of the choices:

- No issues – will work correctly
- Works only on a uniprocessor system and would not work on a multiprocessor system
- Will not work on any system
- Cannot say – need more data

Answer: will not work on any system. The test of `lock` and the setting of `lock` are two separate steps, not one atomic operation.

Uniprocessor: suppose thread 1 executes the first line, finds `lock == 0`, and enters the else branch. Before it executes `lock = 1`, a task switch occurs and thread 2 also finds `lock == 0`. Both threads end up in the critical section at the same time.

Multiprocessor: two threads executing the same code in parallel can both read `lock == 0` simultaneously, so both enter the critical section.

  1. What is a thread? A process is an active program and also a resource container, while threads are the entities scheduled for execution on the CPU. In `task_struct`, the pid and tid are the same for a single-threaded process, so with `ps` we can compare the PID and LWP (thread) columns to tell whether a task is a process or a thread within a process.

Threads let one application do many things at once: parallel entities that share an address space and data. With threads we get both parallelism and the convenience of blocking system calls, since parallelism improves performance and blocking calls keep programming simple.

Threads share:

- Address space (text, data and heap)
- Global variables
- Open files
- Pending alarms (set by the `alarm()` syscall)
- Child processes
- Signals and signal handlers
- Accounting information: user and system CPU time used by the process, limits on the amount of CPU time it may use, the maximum size of its stack, the number of page frames it may consume, etc.

Threads do not share a stack: each thread generally calls different procedures, and the stack stores each procedure's local variables and the return address to use when the call finishes. Although each thread has its own stack, every thread can access any memory address within the process' address space, so one thread can read, write, or even wipe out another thread's stack. Threads have no protection or access-safety mechanism against other threads of the same process.

Each thread also has its own registers: Program Counter (PC), Stack Pointer (SP), Instruction Register (IR), Program Status Word (PSW).

  1. User-level threads vs. kernel-level threads

User space:

- Implemented by a library.
- Each process needs its own private thread table to keep track of each thread's properties: program counter, stack pointer, registers, state, etc. The thread table is managed by the run-time system (in user space). When a thread moves to the ready or blocked state, the information needed to restart it is stored in the thread table.
- Thread switching involves only a call to a local run-time system procedure, so performance is better.
- Benefits: threads can be implemented on an OS that does not support them, and each process can have its own customized scheduling algorithm.
- Problems:
  - Implementing blocking calls is not easy.
  - Once a thread starts running, no other thread in that process will ever run unless the first thread voluntarily gives up the CPU, since there are no clock interrupts within a process.
  - If a thread causes a page fault, the kernel, unaware that threads even exist, blocks the entire process until the disk I/O completes, even though other threads might be runnable.
  - Programmers generally want threads precisely in applications where threads block often, e.g. a multithreaded web server whose threads are constantly making system calls.

Kernel space:

- The kernel has a single table that keeps track of all threads in the system.
- Blocking calls are possible via system calls, but at greater cost.
- Problems:
  - What happens when a multi-threaded process forks? Does the new process have as many threads, or just one?
  - Signals are sent to processes, not threads; when a signal comes in, which thread should handle it? In POSIX, a signal generated for a process is delivered to only one thread, so if more than one thread is eligible to receive it, one must be chosen; the choice is left entirely up to the implementation.

How can function pointers be shared across different processes, and using which IPC? Two processes cannot share function pointers (though they can share variables via shared memory). Different processes have different virtual address spaces: if you send the address of a function from process A to process B, that address makes no sense in B, because the memory mapping in A differs from the mapping in B. If you want to use the same functions in two processes, build them into a library and link that library into both processes.

  1. What are the synchronization solutions? Kernel preemption disabling, interrupt disabling, semaphore and mutex, monitor, spinlock, atomic operations, and memory barriers.

Kernel preemption disabling: disable kernel preemption before entering a critical region and re-enable it right after leaving. Non-preemptability is not enough on multiprocessor systems, because two kernel control paths running on different CPUs can still concurrently access the same data structure.

Interrupt disabling: disable all hardware interrupts before entering a critical region and re-enable them right after leaving it. If the critical region is large, interrupts remain disabled for a relatively long time, potentially freezing all hardware activity. Moreover, on a multiprocessor system, disabling interrupts on the local CPU is not sufficient.

Semaphore: a variable that counts the number of wakeups saved for future use.

A semaphore could have the value 0 to indicate that no wakeups were saved, or some positive value if one or more wakeups were pending.

A semaphore has two operations: down and up. Semaphores are useful when N instances of a resource are to be shared among a set of users: as soon as all N resources are acquired, any new requester has to wait. Since there is no single lock to hold, there is no notion of ownership of a semaphore.

A mutex is a simplified version of a semaphore with only two states: locked or unlocked. A mutex is a means to achieve mutual exclusion, meaning a resource can be accessed by at most one process at a time.

Monitors are a high-level synchronization primitive that achieves mutual exclusion: only one process can be active inside a monitor at any instant. A monitor is a collection of procedures, variables, and data structures grouped together in a module or package of a specific language. When a process calls a monitor procedure, the monitor first checks whether another process is currently active inside it; if so, the caller is blocked until that process leaves the monitor.

A spinlock is a lock that causes a thread trying to acquire it to simply wait in a loop ("spin"), repeatedly checking whether the lock is available; this is called busy-waiting. Threads that do not wish to block or switch context use spinlocks and spinning read/write locks. Busy-waiting can lead to a priority-inversion problem: the scheduler keeps running a high-priority process spinning outside the critical region, so the low-priority process inside the region never gets a chance to finish and release the lock.

Atomic operation: an operation performed without interruption, i.e. while one thread writes a variable, no other thread can write it until the first has finished. Safe access to a single global variable can be ensured with atomic operations. However, kernels contain many data structures that cannot be accessed with a single operation. For example, it usually isn't possible to remove an element from a linked list atomically, because the kernel needs to update at least two pointers at once.

Memory barrier/fence

A memory fence/barrier is a class of instructions that ensures memory reads and writes occur in the order you expect. For example, a "full fence" means all reads/writes issued before the fence are committed before any issued after it. Memory fences are a hardware-level concept; in higher-level languages we usually deal with mutexes and semaphores, which may well be implemented using memory fences underneath, so explicit memory barriers are rarely necessary.

  1. Is it possible to have a deadlock involving only one single-threaded process? Circular wait is not possible with only one process, so a necessary condition for deadlock fails in theory. In practice, however, a single process can deadlock itself: if it tries to lock a mutex it already holds, and the mutex implementation blocks the caller, the process waits forever (although implementing a mutex for a single-threaded program makes little sense).

