CSC241 Concurrent Programming

Mind Map of the key structures of Concurrent Programming (Adrian's Lectures)
by

Christopher Winstanley

on 2 May 2011

Transcript of CSC241 Concurrent Programming

Definitions

- Processor: the hardware that executes machine instructions
- Program: a sequence of instructions
- Process: a program in execution on a processor; a processor can have multiple processes, and a process can run one program and then another
- Parallelism: n processes underway simultaneously, all of them executing; exploits multiprocessor machines to make programs run faster
- Concurrency: n processes underway simultaneously, not all of them executing
- Atomic: a process that is not subject to a context switch during execution
- Race condition: multiple processes use the same data, and the outcome depends on the ordering
- Critical section: a section of code where race conditions can occur

Unix processes

A process contains: data, environment variables, open devices, and instructions.

- fork() creates an identical copy of the parent process
- exec() loads a new program into the process
- Every process in Unix has a parent, and a process can create multiple children
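
A minimal C sketch (not from the lecture) of this pattern: fork() duplicates the parent, and the child then calls a member of the exec() family to load a new program. The choice of ls here is purely illustrative.

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        pid_t pid = fork();              /* duplicate the calling process */
        if (pid < 0) {
            perror("fork");
            exit(1);
        }
        if (pid == 0) {
            /* Child: replace this process image with a new program. */
            execlp("ls", "ls", "-l", (char *)NULL);
            perror("execlp");            /* reached only if exec fails */
            exit(1);
        }
        /* Parent: wait for its child to terminate. */
        waitpid(pid, NULL, 0);
        printf("child %d finished\n", (int)pid);
        return 0;
    }
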
Threads

- A thread does not maintain a list of the threads it has created
- Threads share the same address space
- Each thread has its own: ID, registers, stack, priority, return value, pointers, local variables, and signal mask

How concurrency works

Unix processes:
- Private address space
- User processes are protected from one another
- Heavyweight

Threads:
- Common address space
- Fine-grained concurrency

User threads:
- Supported in a user-level library (threads live in the language runtime environment)
- The kernel knows only about processes
- Scheduled by a per-process user-level scheduler
- + No OS support required; cheap context switches
- - If a thread makes a blocking system call, all other threads block

Kernel threads:
- Supported by the OS; the OS itself is multithreaded
- Conceptually the best fit for concurrent applications

Basic thread operations:
- spawn(): creates and executes a new thread
- exit(): the calling thread dies
- yield(): the calling thread re-enters the scheduler, so other threads can run
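
These names are the lecture's abstract operations; a rough POSIX equivalent (pthread_create, pthread_exit, sched_yield standing in for spawn, exit, yield) looks like this illustrative sketch:

    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>

    /* Each thread yields after every step, giving the other a turn. */
    static void *worker(void *arg) {
        for (int i = 0; i < 3; i++) {
            printf("thread %ld: step %d\n", (long)arg, i);
            sched_yield();               /* re-enter the scheduler */
        }
        pthread_exit(NULL);              /* the thread dies */
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, (void *)1L);   /* spawn */
        pthread_create(&t2, NULL, worker, (void *)2L);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }
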
A thread consists of:
- The program it is executing
- A stack for the sequence of function calls it has made (uses a stack pointer)
- The place in the program where it is executing (uses a program counter)

Context switching:
1. Deschedule the currently running thread
2. Ask the scheduler to pick the best ready thread to run next
3. Reschedule that thread
Context-switch code must be written in assembler.

Scheduling

Three states:
- Running: currently running on the CPU
- Ready: eligible to run but not running, awaiting its turn on the processor
- Blocked: ineligible to run, awaiting input/output

Scheduling policies:
- Response time
- Fairness: all processes get their expected portion of the CPU
- Turnaround time: minimise the time between the start and end of a process
- Throughput: the average number of terminating processes per time unit
- Real-time deadlines
- High CPU utilisation

Scheduling methods

Round robin:
- Form the ready processes into a queue
- Take the next process from the queue
- Let it run for one quantum
- Put it at the back of the queue

The length of the quantum matters: it trades throughput against the number of context switches.
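
A toy simulation of this policy in C (illustrative only: the process names, burst times, and quantum are invented):

    #include <stdio.h>

    typedef struct { const char *name; int remaining; } proc_t;

    int main(void) {
        proc_t procs[] = { {"A", 5}, {"B", 2}, {"C", 4} };
        int quantum = 2;
        int queue[16], head = 0, tail = 0;   /* circular ready queue */

        for (int i = 0; i < 3; i++) queue[tail++ % 16] = i;

        while (head != tail) {
            int i = queue[head++ % 16];      /* take next process */
            int run = procs[i].remaining < quantum ? procs[i].remaining
                                                   : quantum;
            procs[i].remaining -= run;       /* let it run for a quantum */
            printf("%s runs %d units (%d left)\n",
                   procs[i].name, run, procs[i].remaining);
            if (procs[i].remaining > 0)
                queue[tail++ % 16] = i;      /* back of the queue */
        }
        return 0;
    }
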
Priority-based scheduling:
- Each process is given a priority
- The runnable process with the highest priority uses the processor
- Can lead to process starvation
- Static priority-based scheduling: priorities are fixed
- Dynamic priority-based scheduling: priorities can be altered at runtime; if a process uses its entire quantum it moves down a priority class, and if it does not it moves up a class

Shortest Job First scheduling:
- Priority is given to the process with the shortest next CPU phase
- Optimal for minimising wait time

Earliest Deadline First scheduling:
- Always choose the ready process with the earliest deadline
- The scheduler maintains a deadline for each process
- Best for real-time systems
Concurrency management: mutual exclusion

The problem with threads: they do not run sequentially, so the result is non-deterministic, and execution is non-atomic. The fix is to let only one thread into a critical section at a time, using a lock to prevent threads coming in at the same time.

In Java this is implemented with the synchronized keyword; in C, with a pthread mutex:

    pthread_mutex_lock(&mut);
    /* critical section here */
    pthread_mutex_unlock(&mut);
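
A complete, runnable version of that pattern (the shared counter and iteration count are illustrative): without the lock the two threads race on counter; with it, the final value is always 200000.

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t mut = PTHREAD_MUTEX_INITIALIZER;
    static long counter = 0;                 /* shared data */

    static void *increment(void *arg) {
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&mut);        /* enter critical section */
            counter++;                       /* one thread at a time */
            pthread_mutex_unlock(&mut);      /* leave critical section */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, increment, NULL);
        pthread_create(&t2, NULL, increment, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", counter);
        return 0;
    }
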
Locks:
- Spinning locks: spin until the lock is available
- Blocking locks: the process is blocked until the lock is available

Ways to implement synchronisation:
- Disable interrupts: prevents context switches during critical sections, but can miss I/O events, and in effect there is only one critical section for all locks in the system
- Use a special machine instruction (e.g. test-and-set) that acts atomically
- Peterson's algorithm: works only for two threads
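
A sketch of Peterson's algorithm for two threads (ids 0 and 1). Note this is the textbook form: on modern out-of-order hardware it additionally needs memory barriers, which are omitted here.

    #include <stdbool.h>

    static volatile bool flag[2] = { false, false };
    static volatile int turn = 0;

    void peterson_lock(int self) {
        int other = 1 - self;
        flag[self] = true;            /* I want to enter */
        turn = other;                 /* but you go first if you want to */
        while (flag[other] && turn == other)
            ;                         /* busy-wait (spin) */
    }

    void peterson_unlock(int self) {
        flag[self] = false;           /* I have left the critical section */
    }
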
Common pitfalls: non-correctness
- Subtle programming errors
- Incorrectly protected critical sections
- Code may work correctly sometimes but not at other times

Deadlock

Two processes become locked when attempting to share resources. Deadlock occurs only when all four of these conditions hold:
- Mutual exclusion: processes need exclusive access to a resource
- Hold and wait: processes are allowed to hold a resource while waiting for a new one
- Non-preemption: a resource cannot be taken away from a process
- Circular wait: each process is waiting for a resource held by the next

Dealing with deadlock:
- Eliminate one of the four conditions
- Prove that a particular program is deadlock-free
- Detect deadlock and try to recover automatically
- Recover manually (reboot)

Livelock

A process cannot make progress, but this differs from deadlock. Examples:
- The scheduler starves a process whenever a higher-priority process is available (priority-based scheduling)
- Two processes are too polite ("after you")
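
A minimal C sketch of deadlock via circular wait: each thread holds one mutex and waits for the other's. The sleep() only makes the bad interleaving likely; this program is expected to hang.

    #include <pthread.h>
    #include <unistd.h>

    static pthread_mutex_t a = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t b = PTHREAD_MUTEX_INITIALIZER;

    static void *one(void *arg) {
        pthread_mutex_lock(&a);       /* hold a ... */
        sleep(1);
        pthread_mutex_lock(&b);       /* ... and wait for b */
        pthread_mutex_unlock(&b);
        pthread_mutex_unlock(&a);
        return NULL;
    }

    static void *two(void *arg) {
        pthread_mutex_lock(&b);       /* opposite order: hold b ... */
        sleep(1);
        pthread_mutex_lock(&a);       /* ... and wait for a */
        pthread_mutex_unlock(&a);
        pthread_mutex_unlock(&b);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, one, NULL);
        pthread_create(&t2, NULL, two, NULL);
        pthread_join(t1, NULL);       /* never returns */
        pthread_join(t2, NULL);
        return 0;
    }
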
Semaphores (Lecture 9)

A semaphore is a non-negative integer that can be accessed only through two operations:
- P() or down(): wait until sem > 0, then decrement sem
- V() or up(): increment sem
Binary semaphores

- Can only be 1 or 0
- Used to allow only one thread into a critical section
- Initialise sem to 1; P() decrements it to 0 and locks the critical section; V() at the end of the critical section releases the lock, setting it back to 1
- If sem is 0, a thread calling P() will block

Counting semaphores

- Can take any non-negative integer value, not just 0 and 1
- Can allow up to n threads into a critical section
- Can represent the amount of an available resource
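
An illustrative sketch with POSIX unnamed semaphores (as on Linux); the value 2 bounds how many threads may be inside at once and is arbitrary here:

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    static sem_t slots;                    /* counting semaphore */

    static void *worker(void *arg) {
        sem_wait(&slots);                  /* P(): take a slot or block */
        printf("thread %ld inside\n", (long)arg);
        sem_post(&slots);                  /* V(): release the slot */
        return NULL;
    }

    int main(void) {
        pthread_t t[5];
        sem_init(&slots, 0, 2);            /* at most 2 threads at once */
        for (long i = 0; i < 5; i++)
            pthread_create(&t[i], NULL, worker, (void *)i);
        for (int i = 0; i < 5; i++)
            pthread_join(t[i], NULL);
        sem_destroy(&slots);
        return 0;
    }
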
Rendezvous semaphores

- Initialise sem = 0
- Thread 1 calls P() and waits on sem
- Thread 2 calls V() to signal sem when it is ready
- Used when thread 1 needs to know when thread 2 has finished something
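
A rendezvous sketch, again assuming POSIX unnamed semaphores: main (thread 1) blocks on P() until the worker (thread 2) signals with V().

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    static sem_t done;                     /* rendezvous semaphore */

    static void *worker(void *arg) {
        printf("worker: task finished\n");
        sem_post(&done);                   /* V(): signal readiness */
        return NULL;
    }

    int main(void) {
        pthread_t t;
        sem_init(&done, 0, 0);             /* initial value 0 */
        pthread_create(&t, NULL, worker, NULL);
        sem_wait(&done);                   /* P(): block until signalled */
        printf("main: worker is done\n");
        pthread_join(t, NULL);
        sem_destroy(&done);
        return 0;
    }
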
Scheduler queues

The scheduler maintains a ready queue and a blocked queue. It takes processes from the blocked queue and places them on the ready queue, and runs one process from the ready queue.

Condition synchronisation

Depending on appropriate conditions, a thread may be blocked on entering an object's method, or may unblock other threads. In Java:
- wait(): suspend the calling thread, place it on the object's wait queue, and release the lock on the target object (any other locks are retained)
- notify(): take one thread from the object's wait queue and make it runnable
- notifyAll(): carry out the same steps for all waiting threads

Condition variables

Allow threads to synchronise on the value of a shared resource:

- pthread_cond_wait(): block the thread until the specified condition is signalled

- pthread_cond_signal(): signal one waiting thread

- pthread_cond_broadcast(): signal all waiting threads

More examples in Lecture 11.
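
A minimal sketch of these three routines in use; the data_available flag is an invented shared condition. Note the while loop: the condition is re-tested because wakeups can be spurious.

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t mut = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t ready = PTHREAD_COND_INITIALIZER;
    static int data_available = 0;

    static void *consumer(void *arg) {
        pthread_mutex_lock(&mut);
        while (!data_available)
            pthread_cond_wait(&ready, &mut);  /* releases mut while blocked */
        printf("consumer: data is ready\n");
        pthread_mutex_unlock(&mut);
        return NULL;
    }

    static void *producer(void *arg) {
        pthread_mutex_lock(&mut);
        data_available = 1;
        pthread_cond_signal(&ready);          /* wake one waiting thread */
        pthread_mutex_unlock(&mut);
        return NULL;
    }

    int main(void) {
        pthread_t c, p;
        pthread_create(&c, NULL, consumer, NULL);
        pthread_create(&p, NULL, producer, NULL);
        pthread_join(c, NULL);
        pthread_join(p, NULL);
        return 0;
    }
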
Monitors and common problems (Lecture 14)

- Internal data is encapsulated: it can only be accessed through the monitor's procedures
- Ensures mutual exclusion for the whole package
- Threads occupy the monitor only one at a time
- A monitor can have more than one condition variable
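
C has no monitor construct, but the idea can be approximated (an illustrative sketch, not from the lecture): a struct whose data is touched only inside procedures that first take the struct's own lock.

    #include <pthread.h>

    typedef struct {
        pthread_mutex_t lock;        /* one thread in the monitor at a time */
        pthread_cond_t  nonzero;     /* a condition variable of the monitor */
        int value;                   /* encapsulated data */
    } counter_monitor;
    /* Initialise as: counter_monitor m =
       { PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, 0 }; */

    void cm_increment(counter_monitor *m) {
        pthread_mutex_lock(&m->lock);
        m->value++;
        pthread_cond_signal(&m->nonzero);
        pthread_mutex_unlock(&m->lock);
    }

    void cm_decrement_when_positive(counter_monitor *m) {
        pthread_mutex_lock(&m->lock);
        while (m->value == 0)        /* condition synchronisation */
            pthread_cond_wait(&m->nonzero, &m->lock);
        m->value--;
        pthread_mutex_unlock(&m->lock);
    }
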
Nested monitor problem

A thread calls monitor M1, which calls monitor M2. If the thread blocks on a condition variable in M2, do we:
- Hold M1: reduces the potential for concurrency, and can deadlock if M1 then calls M2
- Release M1: we must re-acquire it when we return from the nested call, but there is more concurrency

More examples in Lecture 12.

Pros and cons of monitors:
+ High level of abstraction
+ Fits well into an object-oriented environment
+ Richer behaviour than simpler primitives
+ Java programming-language support
- Messy semantics for nested calls

Path expressions

A tool for synchronising concurrent processes, specified as an extension of the type declaration:
- Restricts the allowable sequences of operations on an object
- High-level abstraction
- The concurrency conditions for a whole class are defined in a single regular-expression-like declaration at the top of the class, called a path expression

Example path expressions:
- PATH add, remove END: add and remove calls may run in any order
- PATH add; remove END: calls must run sequentially, add then remove
Pros and cons of path expressions:
+ Abstract and declarative
+ Concurrency is considered in only one place
+ Explicitly documents concurrency
- Granularity is at the method level
- The description is static: no conditions can be changed at runtime

Examples of path expressions can be found in Lecture 12.
Barriers

A barrier waits for a set of tasks to complete before the overall task can proceed. Implemented using a counter plus condition variables, or semaphores.
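
A sketch using POSIX barriers (available on Linux; the count of 3 threads is arbitrary): no thread starts phase 2 until all three have finished phase 1.

    #include <pthread.h>
    #include <stdio.h>

    static pthread_barrier_t barrier;

    static void *phase_worker(void *arg) {
        printf("thread %ld: phase 1 done\n", (long)arg);
        pthread_barrier_wait(&barrier);     /* block until all 3 arrive */
        printf("thread %ld: phase 2 starts\n", (long)arg);
        return NULL;
    }

    int main(void) {
        pthread_t t[3];
        pthread_barrier_init(&barrier, NULL, 3);
        for (long i = 0; i < 3; i++)
            pthread_create(&t[i], NULL, phase_worker, (void *)i);
        for (int i = 0; i < 3; i++)
            pthread_join(t[i], NULL);
        pthread_barrier_destroy(&barrier);
        return 0;
    }
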
Message passing

- Concurrent components communicate via messages
- Data is sent on channels
- Channels may be asynchronous or rendezvous, reliable or unreliable
- A process can supervise a critical section and execute it on behalf of requests:

    Recipient.send(data);
    data = sender.receive();

Hoare's Communicating Sequential Processes (CSP)

A concurrent language based on message passing: sender and receiver both block until the other is ready to exchange the message.