Thursday, August 27, 2009 at 2:35 AM | 0 comments  
A resource-allocation graph consists of a set of vertices V and a set of edges E.

· V is partitioned into two types:
» P = {P1, P2, …, Pn}, the set consisting of all the processes in the system.
» R = {R1, R2, …, Rm}, the set consisting of all resource types in the system.
· Request Edge – directed edge Pi -> Rj
· Assignment Edge – directed edge Rj -> Pi
· Process

· Resource type with 4 instances

· Pi requests an instance of Rj

· Pi holds an instance of Rj
How Would You Know If There's a Deadlock Based on the Resource Allocation Graph?

· If the graph contains no cycles – no deadlock.

· If the graph contains a cycle:
» if there is only one instance per resource type, then there is a deadlock.
» if there are several instances per resource type, there is only a possibility of deadlock. (A cycle check is sketched below.)
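
To make the cycle test concrete, here is a minimal sketch (not from the original post; the adjacency-map representation and node names such as "P1" and "R1" are illustrative assumptions). Request edges point from processes to resources, assignment edges point from resources to processes, and a depth-first search reports whether the combined graph contains a cycle:

```python
def has_cycle(graph):
    """Return True if the directed graph (adjacency map) contains a cycle."""
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / in progress / finished
    color = {node: WHITE for node in graph}

    def dfs(node):
        color[node] = GRAY
        for succ in graph.get(node, []):
            if color.get(succ, WHITE) == GRAY:        # back edge -> cycle
                return True
            if color.get(succ, WHITE) == WHITE and dfs(succ):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in graph)

# Request edge P1 -> R1, assignment edge R1 -> P2, plus P2 -> R2 -> P1
# form a cycle; with single-instance resources this is a deadlock.
rag = {
    "P1": ["R1"],   # P1 requests R1
    "R1": ["P2"],   # R1 is assigned to P2
    "P2": ["R2"],   # P2 requests R2
    "R2": ["P1"],   # R2 is assigned to P1
}
print(has_cycle(rag))   # True
```

With single-instance resource types a reported cycle means the processes on it are deadlocked; with multiple instances it only flags a possible deadlock.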

Posted by Roger Labels:
Figure: Example of a resource-allocation graph

Figure: Resource-allocation graph with a deadlock

Figure: Resource-allocation graph with a cycle but no deadlock

Figure: Resource-allocation graph for deadlock avoidance

Figure: Unsafe state in a resource-allocation graph

Posted by Roger Labels:
Thursday, August 20, 2009 at 4:07 AM | 0 comments  
  • Allow the system to enter a deadlock state
  • Detection algorithm
  • Recovery scheme

Single Instance of Each Resource Type

• Maintain a wait-for graph
– Nodes are processes.
– Pi → Pj if Pi is waiting for Pj.
• Periodically invoke an algorithm that searches for a cycle in the graph.
• An algorithm to detect a cycle in a graph requires an order of n² operations, where n is the number of vertices in the graph.
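
A small sketch of how the wait-for graph can be derived when each resource type has a single instance (the dictionaries below are assumed inputs, not any particular OS data structure): collapse the resource nodes so that Pi → Pj whenever Pi requests a resource currently held by Pj, then run an ordinary cycle check (such as the DFS shown earlier) on the result.

```python
def build_wait_for(requests, holders):
    """requests: {process: [resources it is waiting for]}
       holders:  {resource: process currently holding it}
       Returns the wait-for graph as {process: set of processes it waits on}."""
    wait_for = {p: set() for p in requests}
    for proc, wanted in requests.items():
        for res in wanted:
            owner = holders.get(res)
            if owner is not None and owner != proc:
                wait_for[proc].add(owner)
    return wait_for

# Example: P1 waits for R1 held by P2; P2 waits for R2 held by P1.
requests = {"P1": ["R1"], "P2": ["R2"]}
holders = {"R1": "P2", "R2": "P1"}
print(build_wait_for(requests, holders))
# {'P1': {'P2'}, 'P2': {'P1'}}  -> cycle P1 -> P2 -> P1, so deadlock
```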

Detection Algorithm
1. Let Work and Finish be vectors of length m and n, respectively. Initialize:
(a) Work := Available
(b) For i = 1, 2, …, n, if Allocationi ≠ 0, then Finish[i] := false; otherwise, Finish[i] := true.
2. Find an index i such that both:
(a) Finish[i] = false
(b) Requesti ≤ Work
If no such i exists, go to step 4.
3. Work := Work + Allocationi
Finish[i] := true
Go to step 2.
4. If Finish[i] = false for some i, 1 ≤ i ≤ n, then the system is in a deadlock state. Moreover, if Finish[i] = false, then Pi is deadlocked.

The algorithm requires an order of m × n² operations to detect whether the system is in a deadlocked state.
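
The steps above translate fairly directly into code. This is a sketch under the assumption that Available is a length-m vector and Allocation and Request are n × m matrices of non-negative integers; the example data at the bottom is made up:

```python
def detect_deadlock(available, allocation, request):
    """available: length-m vector; allocation, request: n x m matrices.
       Returns the list of deadlocked process indices (empty if none)."""
    n, m = len(allocation), len(available)
    work = list(available)                                    # Work := Available
    # Finish[i] := true only if Pi currently holds no resources
    finish = [all(a == 0 for a in allocation[i]) for i in range(n)]

    progressed = True
    while progressed:
        progressed = False
        for i in range(n):
            if not finish[i] and all(request[i][j] <= work[j] for j in range(m)):
                # Pi's outstanding request can be satisfied; assume it finishes
                for j in range(m):
                    work[j] += allocation[i][j]               # Work := Work + Allocationi
                finish[i] = True
                progressed = True

    return [i for i in range(n) if not finish[i]]

# Example (made up): two processes each hold one unit of the only resource
# type's two instances and request the unit held by the other.
available  = [0]
allocation = [[1], [1]]
request    = [[1], [1]]
print(detect_deadlock(available, allocation, request))   # [0, 1] -> both deadlocked
```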
Posted by Roger Labels:

Recovery from Deadlock: Process Termination

  • Abort all deadlocked processes.
  • Abort one process at a time until the deadlock cycle is eliminated.
  • In which order should we choose to abort?
    – Priority of the process.
    – How long the process has computed, and how much longer until completion.
    – Resources the process has used.
    – Resources the process needs to complete.
    – How many processes will need to be terminated.
    – Is the process interactive or batch?

Recovery from Deadlock: Resource Preemption

  • Selecting a victim – minimize cost.
  • Rollback – return to some safe state, restart the process from that state.
  • Starvation – the same process may always be picked as the victim; include the number of rollbacks in the cost factor (see the sketch below).
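
One way to combine these factors is a simple cost function. The sketch below is purely illustrative (the field names and weights are assumptions, not a prescribed policy); it folds the rollback count into the cost so the same process is not preempted over and over:

```python
def victim_cost(proc):
    """Lower cost = better victim to preempt/abort."""
    return (
        2.0 * proc["priority"]            # high-priority work is costly to lose
        + 1.0 * proc["cpu_time_used"]     # computation that would have to be redone
        + 0.5 * proc["resources_held"]
        + 3.0 * proc["rollbacks"]         # penalize repeat victims (anti-starvation)
    )

def choose_victim(deadlocked):
    return min(deadlocked, key=victim_cost)

# Example: P7 has already been rolled back twice, so P3 is chosen instead.
procs = [
    {"pid": 3, "priority": 1, "cpu_time_used": 2, "resources_held": 1, "rollbacks": 0},
    {"pid": 7, "priority": 1, "cpu_time_used": 1, "resources_held": 1, "rollbacks": 2},
]
print(choose_victim(procs)["pid"])   # 3
```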
Posted by Roger Labels:
Restrain the ways requests can be made:

  • Mutual Exclusion – not required for sharable resources; must hold for nonsharable resources.
  • Hold and Wait – must guarantee that whenever a process requests a resource, it does not hold any other resources.
    – Require a process to request and be allocated all its resources before it begins execution, or allow a process to request resources only when it holds none.
    – Low resource utilization; starvation is possible.
  • No Preemption:
    – If a process that is holding some resources requests another resource that cannot be immediately allocated to it, then all resources it currently holds are released.
    – Preempted resources are added to the list of resources for which the process is waiting.
    – The process will be restarted only when it can regain its old resources as well as the new ones it is requesting.
  • Circular Wait – impose a total ordering of all resource types, and require that each process requests resources in an increasing order of enumeration (see the sketch below).
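
The circular-wait rule is easy to enforce in code by assigning every resource type a rank and always locking in ascending rank. The sketch below uses Python threading locks and made-up resource names purely for illustration:

```python
import threading

RANK = {"disk": 1, "printer": 2, "tape": 3}          # total order on resource types
LOCKS = {name: threading.Lock() for name in RANK}

def acquire_in_order(*names):
    """Acquire the named resources in ascending rank, never out of order."""
    for name in sorted(names, key=RANK.get):
        LOCKS[name].acquire()

def release_all(*names):
    for name in names:
        LOCKS[name].release()

# Every caller ends up locking "disk" before "printer", regardless of the
# order in which it names them, so no cycle of waiters can form.
acquire_in_order("printer", "disk")
release_all("printer", "disk")
```

Because every process acquires resources in the same global order, a cycle in the wait-for relation cannot arise.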
Posted by Roger Labels:
  • Ensure that the system will never enter a deadlock state.
  • Allow the system to enter a deadlock state and then recover.
  • Ignore the problem and pretend that deadlocks never occur in the system; used by most operating systems, including UNIX.
Posted by Roger Labels:
  • Mutual exclusion: only one process at a time can use a resource.
  • Hold and wait: a process holding at least one resource is waiting to acquire additional resources held by other processes.
  • No preemption: a resource can be released only voluntarily by the process holding it, after that process has completed its task.
  • Circular wait: there exists a set {P0, P1, …, Pn} of waiting processes such that P0 is waiting for a resource that is held by P1, P1 is waiting for a resource that is held by P2, …, Pn–1 is waiting for a resource that is held by Pn, and Pn is waiting for a resource that is held by P0.

Deadlock can arise only if all four of these conditions hold simultaneously.

Posted by Roger Labels:
Thursday, August 13, 2009 at 4:07 AM | 0 comments  
We introduced the thread scheduler, the part of the OS (usually) that is responsible for sharing the available CPUs out between the various threads. How exactly the scheduler works depends on the individual platform, but various modern operating systems (notably Windows and Linux) use largely similar techniques that we'll describe here. We'll also mention some key variations between the platforms.
Note that we'll continue to talk about a single thread scheduler. On multiprocessor systems, there is generally some kind of scheduler per processor, and these per-processor schedulers then need to be coordinated in some way. (On some systems, switching on different processors is staggered to avoid contention on shared scheduling tables.) Unless otherwise specified, we'll use the term thread scheduler to refer to this overall system of coordinated per-CPU schedulers.


Across platforms, thread scheduling tends to be based on at least the following criteria:

  • a priority, or in fact usually multiple "priority" settings that we'll discuss below;
  • a quantum, or number of allocated timeslices of CPU, which essentially determines the amount of CPU time a thread is allotted before it is forced to yield the CPU to another thread of the same or lower priority (the system will keep track of the remaining quantum at any given time, plus its default quantum, which could depend on thread type and/or system configuration);
  • a state, notably "runnable" vs "waiting";
  • metrics about the behaviour of threads, such as recent CPU usage or the time since it last ran (i.e. had a share of CPU), or the fact that it has "just received an event it was waiting for".

Most systems use what we might dub priority-based round-robin scheduling to some extent. The general principles are:

  • a thread of higher priority (which is a function of base and local priorities) will preempt a thread of lower priority;
  • otherwise, threads of equal priority will essentially take turns at getting an allocated slice or quantum of CPU;
  • there are a few extra "tweaks" to make things work (a simplified selection rule is sketched below).
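
As a toy illustration of priority-based round-robin selection (the Thread fields and tie-breaking rule here are assumptions, not how Windows or Linux actually store things), the picker below always prefers the highest-priority runnable thread and, among equals, the one that has gone longest without running:

```python
from dataclasses import dataclass

@dataclass
class Thread:
    name: str
    priority: int        # larger = more important
    runnable: bool       # "runnable" vs "waiting" state
    last_ran: int        # logical timestamp of its last timeslice

def pick_next(threads):
    candidates = [t for t in threads if t.runnable]
    # Highest priority first; within a priority level, least-recently-run first,
    # which gives round-robin behaviour among equal-priority threads.
    return min(candidates, key=lambda t: (-t.priority, t.last_ran))

threads = [
    Thread("ui",     priority=5, runnable=True, last_ran=10),
    Thread("worker", priority=5, runnable=True, last_ran=7),
    Thread("idle",   priority=1, runnable=True, last_ran=1),
]
print(pick_next(threads).name)   # "worker": same priority as "ui" but ran less recently
```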
Posted by Roger Labels: