Concepts, Techniques, and Models of Computer Programming - Chapter 4 pps

Chapter 4  Declarative Concurrency

“Twenty years ago, parallel skiing was thought to be a skill attainable only after many years of training and practice. Today, it is routinely achieved during the course of a single skiing season. [...] All the goals of the parents are achieved by the children: [...] But the movements they make in order to produce these results are quite different.”
– Mindstorms: Children, Computers, and Powerful Ideas [141], Seymour Papert (1980)

The declarative model of Chapter 2 lets us write many programs and use powerful reasoning techniques on them. But, as Section 4.7 explains, there exist useful programs that cannot be written easily or efficiently in it. For example, some programs are best written as a set of activities that execute independently. Such programs are called concurrent. Concurrency is essential for programs that interact with their environment, e.g., for agents, GUI programming, OS interaction, and so forth. Concurrency also lets a program be organized into parts that execute independently and interact only when needed, i.e., client/server and producer/consumer programs. This is an important software engineering property.

Concurrency can be simple

This chapter extends the declarative model of Chapter 2 with concurrency while still being declarative. That is, all the programming and reasoning techniques for declarative programming still apply. This is a remarkable property that deserves to be more widely known. We will explore it throughout this chapter. The intuition underlying it is quite simple. It is based on the fact that a dataflow variable can be bound to only one value. This gives the following two consequences:

• What stays the same: The result of a program is the same whether or not it is concurrent. Putting any part of the program in a thread does not change the result.

Copyright © 2001-3 by P. Van Roy and S. Haridi. All rights reserved.
• What is new: The result of a program can be calculated incrementally. If the input to a concurrent program is given incrementally, then the program will calculate its output incrementally as well.

Let us give an example to fix this intuition. Consider the following sequential program that calculates a list of successive squares by generating a list of successive integers and then mapping each to its square:

   fun {Gen L H}
      {Delay 100}
      if L>H then nil else L|{Gen L+1 H} end
   end

   Xs={Gen 1 10}
   Ys={Map Xs fun {$ X} X*X end}
   {Browse Ys}

(The {Delay 100} call waits for 100 milliseconds before continuing.) We can make this concurrent by doing the generation and mapping in their own threads:

   thread Xs={Gen 1 10} end
   thread Ys={Map Xs fun {$ X} X*X end} end
   {Browse Ys}

This uses the thread <s> end statement, which executes <s> concurrently. What is the difference between the concurrent and the sequential versions? The result of the calculation is the same in both cases, namely [1 4 9 16 25 36 49 64 81 100]. In the sequential version, Gen calculates the whole list before Map starts. The final result is displayed all at once when the calculation is complete, after one second. In the concurrent version, Gen and Map both execute simultaneously. Whenever Gen adds an element to its list, Map will immediately calculate its square. The result is displayed incrementally, as the elements are generated, one element each tenth of a second. We will see that the deep reason why this form of concurrency is so simple is that programs have no observable nondeterminism. A program in the declarative concurrent model always has this property, if the program does not try to bind the same variable to incompatible values. This is explained in Section 4.1. Another way to say it is that there are no race conditions in a declarative concurrent program. A race condition is just an observable nondeterministic behavior.
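The book's examples are in Oz, whose dataflow streams have no direct Python counterpart. Still, the concurrent Gen/Map pipeline can be sketched roughly in Python, with a thread as the producer and a queue standing in for the stream's unbound tail (the names gen, consume_squares and the None sentinel are our illustrative choices, not from the book):

```python
import queue
import threading

def gen(lo, hi, out):
    # Producer: emit the integers lo..hi, then a sentinel marking end of stream.
    for i in range(lo, hi + 1):
        out.put(i)
    out.put(None)  # None plays the role of Oz's nil

def consume_squares(inp):
    # Consumer: square each element as soon as it arrives (incremental output).
    result = []
    while True:
        x = inp.get()  # blocks until the producer supplies the next element
        if x is None:
            return result
        result.append(x * x)

stream = queue.Queue()
producer = threading.Thread(target=gen, args=(1, 10, stream))
producer.start()
squares = consume_squares(stream)  # runs concurrently with the producer
producer.join()
print(squares)  # [1, 4, 9, 16, 25, 36, 49, 64, 81, 100]
```

As in the Oz version, the final result does not depend on how the two activities interleave; only the timing of the incremental output differs.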
Structure of the chapter

The chapter can be divided into six parts:

• Programming with threads. This part explains the first form of declarative concurrency, namely data-driven concurrency, also known as supply-driven concurrency. There are four sections. Section 4.1 defines the data-driven concurrent model, which extends the declarative model with threads. This section also explains what declarative concurrency means. Section 4.2 gives the basics of programming with threads. Section 4.3 explains the most popular technique, stream communication. Section 4.4 gives some other techniques, namely order-determining concurrency, coroutines, and concurrent composition.

• Lazy execution. This part explains the second form of declarative concurrency, namely demand-driven concurrency, also known as lazy execution. Section 4.5 introduces the lazy concurrent model and gives some of the most important programming techniques, including lazy streams and list comprehensions.

• Soft real-time programming. Section 4.6 explains how to program with time in the concurrent model.

• Limitations and extensions of declarative programming. How far can declarative programming go? Section 4.7 explores the limitations of declarative programming and how to overcome them. This section gives the primary motivations for explicit state, which is the topic of the next three chapters.

• The Haskell language. Section 4.8 gives an introduction to Haskell, a purely functional programming language based on lazy evaluation.

• Advanced topics and history. Section 4.9 shows how to extend the declarative concurrent model with exceptions. It also goes deeper into various topics including the different kinds of nondeterminism, lazy execution, dataflow variables, and synchronization (both explicit and implicit).
Finally, Section 4.10 concludes by giving some historical notes on the roots of declarative concurrency.

Concurrency is also a key part of three other chapters. Chapter 5 extends the eager model of the present chapter with a simple kind of communication channel. Chapter 8 explains how to use concurrency together with state, e.g., for concurrent object-oriented programming. Chapter 11 shows how to do distributed programming, i.e., programming a set of computers that are connected by a network. All four chapters taken together give a comprehensive introduction to practical concurrent programming.

4.1 The data-driven concurrent model

In Chapter 2 we presented the declarative computation model. This model is sequential, i.e., there is just one statement that executes over a single-assignment store. Let us extend the model in two steps, adding just one concept in each step:

• The first step is the most important. We add threads and the single instruction thread <s> end. A thread is simply an executing statement, i.e., a semantic stack. This is all we need to start programming with declarative concurrency. As we will see, adding threads to the declarative model keeps all the good properties of the model. We call the resulting model the data-driven concurrent model.

[Figure 4.1: The declarative concurrent model — multiple semantic stacks ST1 ... STn (“threads”) over one single-assignment store]

   <s> ::= skip                                        Empty statement
         | <s>1 <s>2                                   Statement sequence
         | local <x> in <s> end                        Variable creation
         | <x>1=<x>2                                   Variable-variable binding
         | <x>=<v>                                     Value creation
         | if <x> then <s>1 else <s>2 end              Conditional
         | case <x> of <pattern> then <s>1 else <s>2 end   Pattern matching
         | {<x> <y>1 ... <y>n}                         Procedure application
         | thread <s> end                              Thread creation

   Table 4.1: The data-driven concurrent kernel language
• The second step extends the model with another execution order. We add triggers and the single instruction {ByNeed P X}. This adds the possibility to do demand-driven computation, which is also known as lazy execution. This second extension also keeps the good properties of the declarative model. We call the resulting model the demand-driven concurrent model or the lazy concurrent model. We put off explaining lazy execution until Section 4.5.

For most of this chapter, we leave out exceptions from the model. This is because with exceptions the model is no longer declarative. Section 4.9.1 looks closer at the interaction of concurrency and exceptions.

4.1.1 Basic concepts

Our approach to concurrency is a simple extension to the declarative model that allows more than one executing statement to reference the store. Roughly, all these statements are executing “at the same time”. This gives the model illustrated in Figure 4.1, whose kernel language is in Table 4.1. The kernel language extends Figure 2.1 with just one new instruction, the thread statement.

Interleaving

Let us pause to consider precisely what “at the same time” means. There are two ways to look at the issue, which we call the language viewpoint and the implementation viewpoint:

• The language viewpoint is the semantics of the language, as seen by the programmer. From this viewpoint, the simplest assumption is to let the threads do an interleaving execution: in the actual execution, threads take turns doing computation steps. Computation steps do not overlap; in other words, each computation step is atomic. This makes reasoning about programs easier.

• The implementation viewpoint is how the multiple threads are actually implemented on a real machine. If the system is implemented on a single processor, then the implementation could also do interleaving.
However, the system might be implemented on multiple processors, so that threads can do several computation steps simultaneously. This takes advantage of parallelism to improve performance.

We will use the interleaving semantics throughout the book. Whatever the parallel execution is, there is always at least one interleaving that is observationally equivalent to it. That is, if we observe the store during the execution, we can always find an interleaving execution that makes the store evolve in the same way.

Causal order

Another way to see the difference between sequential and concurrent execution is in terms of an order defined among all execution states of a given program:

   Causal order of computation steps
   For a given program, all computation steps form a partial order, called the causal order. A computation step occurs before another step if, in all possible executions of the program, it happens before the other step. Similarly for a computation step that occurs after another step. Sometimes a step is neither before nor after another step. In that case, we say that the two steps are concurrent.

[Figure 4.2: Causal orders of sequential and concurrent executions — a sequential execution is a total order; a concurrent execution is a partial order, with order within a thread and order between threads]

[Figure 4.3: Relationship between causal order and interleaving executions]

In a sequential program, all computation steps are totally ordered. There are no concurrent steps. In a concurrent program, all computation steps of a given thread are totally ordered. The computation steps of the whole program form a partial order.
Two steps in this partial order are causally ordered if the first binds a dataflow variable X and the second needs the value of X.

Figure 4.2 shows the difference between sequential and concurrent execution. Figure 4.3 gives an example that shows some of the possible executions corresponding to a particular causal order. Here the causal order has two threads T1 and T2, where T1 has two operations (I1 and I2) and T2 has three operations (Ia, Ib, and Ic). Four possible executions are shown. Each execution respects the causal order, i.e., all instructions that are related in the causal order are related in the same way in the execution. How many executions are possible in all? (Hint: there are not so many in this example.)

Nondeterminism

An execution is nondeterministic if there is an execution state in which there is a choice of what to do next, i.e., a choice of which thread to reduce. Nondeterminism appears naturally when there are concurrent states. If there are several threads, then in each execution state the system has to choose which thread to execute next. For example, in Figure 4.3, after the first step, which always does Ia, there is a choice of either I1 or Ib for the next step.

In a declarative concurrent model, the nondeterminism is not visible to the programmer.¹ There are two reasons for this. First, dataflow variables can be bound to only one value. The nondeterminism affects only the exact moment when each binding takes place; it does not affect the plain fact that the binding does take place. Second, any operation that needs the value of a variable has no choice but to wait until the variable is bound. If we allowed operations that could choose whether to wait or not, then the nondeterminism would become visible. As a consequence, a declarative concurrent model keeps the good properties of the declarative model of Chapter 2.
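The counting question about Figure 4.3 can be attacked mechanically: the possible executions are exactly the linear extensions (topological orderings) of the causal order. A small Python sketch, assuming for illustration the within-thread edges I1<I2 and Ia<Ib<Ic plus one cross-thread edge Ia<I1 (consistent with the remark that every execution begins with Ia; we do not claim these are exactly the figure's edges):

```python
from functools import lru_cache

def count_linear_extensions(nodes, edges):
    """Count the interleavings (topological orderings) of a causal order.

    nodes: sequence of step names; edges: set of (before, after) pairs.
    """
    nodes = tuple(nodes)
    preds = {n: {a for (a, b) in edges if b == n} for n in nodes}

    @lru_cache(maxsize=None)
    def count(done):
        remaining = [n for n in nodes if n not in done]
        if not remaining:
            return 1
        total = 0
        for n in remaining:
            # n may execute next only if all its causal predecessors have run.
            if preds[n] <= set(done):
                total += count(done + (n,))
        return total

    return count(())

# Two threads: T1 = I1;I2 and T2 = Ia;Ib;Ic, with Ia causally before I1.
steps = ["I1", "I2", "Ia", "Ib", "Ic"]
causal = {("I1", "I2"), ("Ia", "Ib"), ("Ib", "Ic"), ("Ia", "I1")}
print(count_linear_extensions(steps, causal))  # 6
```

With these assumed edges, Ia is the only step with no predecessor, so every execution indeed starts with Ia, and only six interleavings respect the causal order.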
The concurrent model removes some but not all of the limitations of the declarative model, as we will see in this chapter.

Scheduling

The choice of which thread to execute next is done by a part of the system called the scheduler. At each computation step, the scheduler picks one among all the ready threads to execute next. We say a thread is ready, also called runnable, if its statement has all the information it needs to execute at least one computation step. Once a thread is ready, it stays ready indefinitely. We say that thread reduction in the declarative concurrent model is monotonic. A ready thread can be executed at any time.

A thread that is not ready is called suspended. Its first statement cannot continue because it does not have all the information it needs. We say the first statement is blocked. Blocking is an important concept that we will come across again in the book.

We say the system is fair if it does not let any ready thread “starve”, i.e., all ready threads will eventually execute. This is an important property to make program behavior predictable and to simplify reasoning about programs. It is related to modularity: fairness implies that a thread’s execution does not depend on that of any other thread, unless the dependency is programmed explicitly. In the rest of the book, we will assume that threads are scheduled fairly.

4.1.2 Semantics of threads

We extend the abstract machine of Section 2.4 by letting it execute with several semantic stacks instead of just one. Each semantic stack corresponds to the intuitive concept “thread”. All semantic stacks access the same store. Threads communicate through this shared store.

¹ If there are no unification failures, i.e., attempts to bind the same variable to incompatible partial values. Usually we consider a unification failure as a consequence of a programmer error.
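The ready/suspended distinction hinges on dataflow synchronization: a thread blocks on an unbound variable and becomes ready, permanently, once the variable is bound. A rough Python analogue models a single-assignment variable as a write-once cell built on threading.Event (DataflowVar is our own illustrative helper, not a standard API, and its check-then-set in bind is not made atomic here):

```python
import threading

class DataflowVar:
    """A write-once cell: readers block until the single binding happens."""
    def __init__(self):
        self._bound = threading.Event()
        self._value = None

    def bind(self, value):
        # Sketch only: a real implementation would make this check atomic.
        if self._bound.is_set():
            raise ValueError("already bound")  # rebinding is an error
        self._value = value
        self._bound.set()  # monotonic: once set, the Event stays set

    def wait(self):
        self._bound.wait()  # suspend until the variable is determined
        return self._value

displayed = []
b = DataflowVar()
t = threading.Thread(target=b.bind, args=(True,))
t.start()
if b.wait():            # blocks until the other thread binds b
    displayed.append("yes")
t.join()
print(displayed)  # ['yes']
```

Because the Event can only go from unset to set, a blocked reader becomes ready exactly once and stays ready, mirroring the monotonicity of thread reduction described above.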
Concepts

We keep the concepts of single-assignment store σ, environment E, semantic statement (<s>, E), and semantic stack ST. We extend the concepts of execution state and computation to take into account multiple semantic stacks:

• An execution state is a pair (MST, σ) where MST is a multiset of semantic stacks and σ is a single-assignment store. A multiset is a set in which the same element can occur more than once. MST has to be a multiset because we might have two different semantic stacks with identical contents, e.g., two threads that execute the same statements.

• A computation is a sequence of execution states starting from an initial state: (MST0, σ0) → (MST1, σ1) → (MST2, σ2) → ...

Program execution

As before, a program is simply a statement <s>. Here is how to execute the program:

• The initial execution state is ({[(<s>, φ)]}, φ). That is, the initial store is empty (no variables, empty set φ) and the initial execution state has one semantic stack that contains just one semantic statement (<s>, φ). The only difference with Chapter 2 is that the semantic stack is in a multiset.

• At each step, one runnable semantic stack ST is selected from MST, leaving MST′. We can say MST = {ST} ⊎ MST′. (The operator ⊎ denotes multiset union.) One computation step is then done in ST according to the semantics of Chapter 2, giving (ST, σ) → (ST′, σ′). The computation step of the full computation is then:

   ({ST} ⊎ MST′, σ) → ({ST′} ⊎ MST′, σ′)

We call this an interleaving semantics because there is one global sequence of computation steps. The threads take turns, each doing a little bit of work.
[Figure 4.4: Execution of the thread statement — the stack [(thread <s> end, E)] + ST becomes two stacks, [(<s>, E)] and ST, over the same single-assignment store]

• The choice of which ST to select is done by the scheduler according to a well-defined set of rules called the scheduling algorithm. This algorithm is careful to make sure that good properties, e.g., fairness, hold of any computation. A real scheduler has to take much more than just fairness into account. Section 4.2.4 discusses many of these issues and explains how the Mozart scheduler works.

• If there are no runnable semantic stacks in MST then the computation cannot continue:
   – If all ST in MST are terminated, then we say the computation terminates.
   – If there exists at least one suspended ST in MST that cannot be reclaimed (see below), then we say the computation blocks.

The thread statement

The semantics of the thread statement is defined in terms of how it alters the multiset MST. A thread statement never blocks. If the selected ST is of the form [(thread <s> end, E)] + ST′, then the new multiset is {[(<s>, E)]} ⊎ {ST′} ⊎ MST′. In other words, we add a new semantic stack [(<s>, E)] that corresponds to the new thread. Figure 4.4 illustrates this. We can summarize this in the following computation step:

   ({[(thread <s> end, E)] + ST′} ⊎ MST′, σ) → ({[(<s>, E)]} ⊎ {ST′} ⊎ MST′, σ)

Memory management

Memory management is extended to the multiset as follows:

• A terminated semantic stack can be deallocated.

• A blocked semantic stack can be reclaimed if its activation condition depends on an unreachable variable. In that case, the semantic stack would never become runnable again, so removing it changes nothing during the execution.
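The interleaving semantics above can be made concrete in a toy interpreter. The Python sketch below uses a deliberately tiny statement set and a naive round-robin scheduler; all names and the tuple encoding are our own, not Mozart's. Each thread is a list of statements (a semantic stack), a conditional on an unbound variable suspends, and thread <s> end adds a fresh stack to the multiset:

```python
from collections import deque

UNBOUND = object()

def run(program):
    """Interpret a tiny kernel. Statements are tuples:

    ('bind', x, v)         bind variable x to value v
    ('if', x, then, els)   suspend until x is bound, then take a branch
    ('thread', stmts)      create a new semantic stack (thread)
    ('browse', v)          record v as output
    """
    store, output = {}, []
    threads = deque([list(program)])     # multiset MST of semantic stacks
    while threads:                       # (sketch: would spin if all blocked)
        stack = threads.popleft()        # scheduler picks a stack, round-robin
        if not stack:
            continue                     # terminated stack: deallocate it
        stmt = stack[0]
        if stmt[0] == 'bind':
            _, x, v = stmt
            store[x] = v
            stack.pop(0)
        elif stmt[0] == 'if':
            _, x, then, els = stmt
            if store.get(x, UNBOUND) is UNBOUND:
                threads.append(stack)    # activation condition false: suspend
                continue
            stack[0:1] = list(then if store[x] else els)
        elif stmt[0] == 'thread':
            threads.append(list(stmt[1]))  # new stack for the new thread
            stack.pop(0)
        elif stmt[0] == 'browse':
            output.append(stmt[1])
            stack.pop(0)
        threads.append(stack)            # re-queue the stack we stepped
    return store, output

# Roughly: local B in thread B=true end if B then {Browse yes} end end
store, output = run([
    ('thread', [('bind', 'B', True)]),
    ('if', 'B', [('browse', 'yes')], []),
])
print(output)  # ['yes']
```

Whatever order the round-robin scheduler happens to step the two stacks in, the conditional can only fire after the binding, so the output is the same for every interleaving.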
This means that the simple intuition of Chapter 2, that “control structures are deallocated and data structures are reclaimed”, is no longer completely true in the concurrent model.

4.1.3 Example execution

The first example shows how threads are created and how they communicate through dataflow synchronization. Consider the following statement:

   local B in
      thread B=true end
      if B then {Browse yes} end
   end

For simplicity, we will use the substitution-based abstract machine introduced in Section 3.3.

• We skip the initial computation steps and go directly to the situation when the thread and if statements are each on the semantic stack. This gives:
   ( {[thread b=true end, if b then {Browse yes} end]}, {b} ∪ σ )
where b is a variable in the store. There is just one semantic stack, which contains two statements.

• After executing the thread statement, we get:
   ( {[b=true], [if b then {Browse yes} end]}, {b} ∪ σ )
There are now two semantic stacks (“threads”). The first, containing b=true, is ready. The second, containing the if statement, is suspended because the activation condition (b determined) is false.

• The scheduler picks the ready thread. After executing one step, we get:
   ( {[], [if b then {Browse yes} end]}, {b = true} ∪ σ )
The first thread has terminated (empty semantic stack). The second thread is now ready, since b is determined.

• We remove the empty semantic stack and execute the if statement. This gives:
   ( {[{Browse yes}]}, {b = true} ∪ σ )
One ready thread remains. Further calculation will display yes.

[...]

4.1.4 What is declarative concurrency?
Let us see why we can consider the data-driven concurrent model as a form of declarative programming. The basic principle of declarative programming is that the output of a declarative program should be a mathematical function of its input. In functional programming, it is clear what this means: [...]

[...] is given, and then it gives the complete result.

A concurrent Fibonacci function

Here is a concurrent divide-and-conquer program to calculate the Fibonacci function:

   fun {Fib X}
      if X=<2 then 1
      else thread {Fib X-1} end + {Fib X-2} end
   end

[Figure 4.6: The Oz Panel showing thread creation in {Fib 26 X}]
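The concurrent Fib spawns one new thread per recursive call for one operand while the current thread computes the other. A rough Python analogue (one threading.Thread per left recursion; the result dictionary and helper are our own devices, since Python lacks dataflow variables, and joining the thread stands in for dataflow synchronization):

```python
import threading

def fib(x):
    # Concurrent divide-and-conquer, mirroring the structure of the Oz version.
    if x <= 2:
        return 1
    result = {}

    def left():
        result['y'] = fib(x - 1)   # computed in its own thread

    t = threading.Thread(target=left)
    t.start()
    z = fib(x - 2)                 # current thread computes the other operand
    t.join()                       # wait for the thread's result
    return result['y'] + z

print(fib(10))  # 55
```

Spawning a thread per call is wasteful in Python (and only practical here for small arguments); the point is the shape of the computation: the two recursive calls are causally independent, so they may run concurrently without changing the result.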

Date posted: 14/08/2014, 10:22
