Saturday, July 30, 2016

Concurrency, Parallelism, and Barrier Synchronization - Multiprocess and Multithreaded Programming

Concurrency, parallelism, threads, and processes are often misunderstood concepts.

On a preemptive, time sliced UNIX or Linux operating system (Solaris, AIX, Linux, BSD, OS X), sequences of program code from different software applications are executed over time on a single processor.  A UNIX process is a schedulable entity.  On a UNIX system, program code from one process executes on the processor for a time quantum, after which program code from another process executes for a time quantum.  The first process relinquishes the processor either voluntarily or involuntarily so that another process can execute its program code.  This is known as context switching.  When a process context switch occurs, the state of the running process is saved to its process control block and another process resumes execution on the processor.  Finally, a UNIX process is heavyweight because it has its own virtual memory space, file descriptors, register state, scheduling information, memory management information, and so on.  All of this information has to be saved and restored when a process context switch occurs, which makes it a computationally expensive operation.
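To see the private virtual memory space in action, here is a minimal C sketch: the child created by fork() writes to its own copy of a variable, and the parent's copy is unaffected.

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int counter = 0;

        pid_t pid = fork();          /* create a second schedulable entity */
        if (pid == -1) {
            perror("fork");
            exit(EXIT_FAILURE);
        }

        if (pid == 0) {
            /* Child: writes to its own copy of counter, in its own
             * virtual memory space. */
            counter = 42;
            printf("child:  counter = %d\n", counter);
            _exit(0);
        }

        wait(NULL);                  /* parent waits for the child */
        /* The child's write is invisible here, because each process
         * has a private address space. */
        printf("parent: counter = %d\n", counter);
        return 0;
    }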

Concurrency refers to the interleaved execution of schedulable entities on a single processor.  Context switching facilitates interleaved execution.  The execution time quantum is so small that the interleaved execution of independent, schedulable entities, often performing unrelated tasks, gives the appearance that multiple software applications are running in parallel.

Concurrency applies to both threads and processes.  A thread is also a schedulable entity and is defined as an independent sequence of execution within a UNIX process.  UNIX processes often have multiple threads of execution that share the memory space of the process.  When multiple threads of execution are running inside a process, they are typically performing related tasks.
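By contrast with the fork() sketch above, here is a minimal pthreads sketch in which two threads update the same variable in the shared memory space of the process (compile with -pthread):

    #include <pthread.h>
    #include <stdio.h>

    static int shared = 0;                 /* one copy, visible to all threads */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        (void)arg;
        /* Unlike a forked child, this thread updates the very memory
         * that the main thread will read. */
        pthread_mutex_lock(&lock);
        shared++;
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;

        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("shared = %d\n", shared);   /* prints 2 */
        return 0;
    }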

While threads are typically lighter weight than processes, there have been different implementations of both across UNIX and Linux operating systems over the years.  Three models describe the implementations found on preemptive, time sliced, multi-user UNIX and Linux operating systems: 1:1, where each user space thread maps to one kernel thread; 1:N, where multiple user space threads map to a single kernel thread; and M:N, where M user space threads are multiplexed onto N kernel threads.

In summary, both threads and processes are schedulable entities that are scheduled for execution on a single processor, and thread context switching is lighter in weight than process context switching.  Concurrency is the interleaved execution over time of schedulable entities on a single processor.

The Linux user space APIs for process and thread management abstract away a lot of the details, but you can set the level of concurrency and directly influence the time quantum, so that system throughput is affected by shorter and longer durations of schedulable entity execution time.
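As one concrete, Linux-specific illustration (a sketch only, and it assumes sufficient privilege, e.g. CAP_SYS_NICE, to change scheduling policy): sched_setscheduler() switches the calling process to the round-robin policy, and sched_rr_get_interval() reports the time quantum it will receive.

    #include <sched.h>
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        struct sched_param sp = { .sched_priority = 10 };

        /* Request the round-robin policy; fails without privilege. */
        if (sched_setscheduler(0, SCHED_RR, &sp) == -1)
            perror("sched_setscheduler");

        /* Query the SCHED_RR time quantum for this process (pid 0 = self). */
        struct timespec ts;
        if (sched_rr_get_interval(0, &ts) == 0)
            printf("SCHED_RR quantum: %ld.%09ld seconds\n",
                   (long)ts.tv_sec, ts.tv_nsec);
        return 0;
    }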

Conversely, parallelism refers to the simultaneous execution of multiple schedulable entities during the same time quantum.  Both processes and threads can execute in parallel across multiple cores or multiple processors.  On a multi-user system with preemptive time slicing and multiple processor cores, both concurrency and parallelism are often at play.  Affinity scheduling refers to the binding of processes and threads to specific cores so that cache locality is preserved and their concurrent, and often parallel, execution is close to optimal.
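On Linux, the (non-portable) sched_setaffinity() call expresses this directly; a minimal sketch that pins the calling process to core 0:

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        cpu_set_t set;

        CPU_ZERO(&set);
        CPU_SET(0, &set);                  /* allow core 0 only */

        /* Pin the calling process (pid 0 = self) to the chosen core. */
        if (sched_setaffinity(0, sizeof(set), &set) == -1)
            perror("sched_setaffinity");

        printf("now running on CPU %d\n", sched_getcpu());
        return 0;
    }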

Software applications are often designed to solve computationally complex problems.  If the algorithm for solving such a problem can be parallelized, then multiple threads or processes can all run at the same time across multiple cores.  Each process or thread executes on its own part of the problem and does not contend for resources with the other threads or processes working on the other parts.  When each thread or process reaches the point where it can no longer contribute any more work to the solution, it waits at the barrier.  When all threads or processes reach the barrier, the output of their work is synchronized, and often aggregated by the master process.  Complex test frameworks often implement barrier synchronization when certain types of tests can be run in parallel.
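POSIX threads provide this primitive directly as pthread_barrier_t.  In the following minimal sketch (the per-thread "work" is a stand-in), every worker blocks in pthread_barrier_wait() until all have arrived, and the single thread that gets PTHREAD_BARRIER_SERIAL_THREAD back aggregates the partial results; compile with -pthread.

    #include <pthread.h>
    #include <stdio.h>

    #define NTHREADS 4

    static pthread_barrier_t barrier;
    static long partial[NTHREADS];        /* per-thread partial results */

    static void *worker(void *arg)
    {
        long id = (long)arg;

        partial[id] = id * 100;           /* stand-in for real work */

        /* Block until every worker has finished its portion. */
        int rc = pthread_barrier_wait(&barrier);

        /* Exactly one waiter is designated to aggregate the output. */
        if (rc == PTHREAD_BARRIER_SERIAL_THREAD) {
            long sum = 0;
            for (int i = 0; i < NTHREADS; i++)
                sum += partial[i];
            printf("aggregated result: %ld\n", sum);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t tid[NTHREADS];

        pthread_barrier_init(&barrier, NULL, NTHREADS);
        for (long i = 0; i < NTHREADS; i++)
            pthread_create(&tid[i], NULL, worker, (void *)i);
        for (int i = 0; i < NTHREADS; i++)
            pthread_join(tid[i], NULL);
        pthread_barrier_destroy(&barrier);
        return 0;
    }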

Most individual software applications running on preemptive, time sliced, multi-user Linux and UNIX operating systems are not designed with heavy parallel-thread or parallel multi-process execution in mind.  Expensive, parallel algorithms often require multiple, dedicated processor cores with hard real time scheduling constraints.  The following paper describes the solution to a popular parallel problem: flight scheduling.

Last, when designing multithreaded and multiprocess software programs, minimizing lock granularity, that is, holding each lock for the shortest time over the smallest shared resource, greatly increases concurrency, throughput, and execution efficiency.  Multithreaded and multiprocess programs that rely on coarse-grained synchronization strategies do not run efficiently and often require countless hours of debugging.  The use of semaphores, mutex locks, and other synchronization primitives should be minimized to the maximum extent possible in computer programs that share resources between multiple threads or processes.  Proper program design allows schedulable entities to run in parallel or concurrently with high throughput and minimum resource contention, and this is optimal for solving computationally complex problems on preemptive, time sliced, multi-user operating systems without requiring hard real time scheduling.
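As an illustration of fine-grained locking, here is a hypothetical hash table sketch (the names are mine, not from any particular library) that guards each bucket with its own mutex rather than one global lock, so threads inserting into different buckets never contend:

    #include <pthread.h>

    #define NBUCKETS 64

    struct entry {
        int key;
        struct entry *next;
    };

    struct bucket {
        pthread_mutex_t lock;    /* one lock per bucket, not one global lock */
        struct entry *head;
    };

    static struct bucket table[NBUCKETS];

    void table_init(void)
    {
        for (int i = 0; i < NBUCKETS; i++)
            pthread_mutex_init(&table[i].lock, NULL);
    }

    void table_insert(struct entry *e)
    {
        struct bucket *b = &table[(unsigned)e->key % NBUCKETS];

        /* The critical section covers only the touched bucket, and only
         * for the duration of the pointer updates. */
        pthread_mutex_lock(&b->lock);
        e->next = b->head;
        b->head = e;
        pthread_mutex_unlock(&b->lock);
    }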

After a fair amount of research in the above areas, I used these design techniques in several successful multithreaded and multiprocess software programs.
