Grid Scheduling and Resource Management


6 Grid Scheduling and Resource Management

LEARNING OBJECTIVES

In this chapter, we will study Grid scheduling and resource management, which play a critical role in building an effective and efficient Grid environment. From this chapter, you will learn:

• What a scheduling system is about and how it works.
• Scheduling paradigms.
• Condor, SGE, PBS and LSF.
• Grid scheduling with quality-of-service (QoS) support, e.g. AppLeS, Nimrod/G, Grid rescheduling.
• Grid scheduling optimization with heuristics.

CHAPTER OUTLINE

6.1 Introduction
6.2 Scheduling Paradigms
6.3 How Scheduling Works
6.4 A Review of Condor, SGE, PBS and LSF
6.5 Grid Scheduling with QoS
6.6 Chapter Summary
6.7 Further Reading and Testing

The Grid: Core Technologies. Maozhen Li and Mark Baker. © 2005 John Wiley & Sons, Ltd

6.1 INTRODUCTION

The Grid is emerging as a new paradigm for solving problems in science, engineering, industry and commerce. Increasing numbers of applications are utilizing the Grid infrastructure to meet their computational, storage and other needs. A single site can simply no longer meet all the resource needs of today's demanding applications, and using distributed resources can bring many benefits to application users. The deployment of Grid systems involves the efficient management of heterogeneous, geographically distributed and dynamically available resources. However, the effectiveness of a Grid environment is largely dependent on the effectiveness and efficiency of its schedulers, which act as localized resource brokers. Figure 6.1 shows that user tasks, for example, can be submitted via Globus to a range of resource management and job scheduling systems, such as Condor [1], the Sun Grid Engine (SGE) [2], the Portable Batch System (PBS) [3] and the Load Sharing Facility (LSF) [4]. Grid scheduling is defined as the process of mapping Grid jobs to resources over multiple administrative domains.
A Grid job can be split into many small tasks. The scheduler has the responsibility of selecting resources and scheduling jobs in such a way that the user and application requirements are met, in terms of overall execution time (throughput) and cost of the resources utilized.

Figure 6.1 Jobs, via Globus, can be submitted to systems managed by Condor, SGE, PBS and LSF

This chapter is organized as follows. In Section 6.2, we present three scheduling paradigms – centralized, hierarchical and distributed. In Section 6.3, we describe the steps involved in the scheduling process. In Section 6.4, we review currently widely used resource management and job scheduling systems, such as Condor and SGE. In Section 6.5, we discuss some issues related to scheduling with QoS. In Section 6.6, we conclude the chapter, and in Section 6.7 we provide references for further reading and testing.

6.2 SCHEDULING PARADIGMS

Hamscher et al. [5] present three scheduling paradigms – centralized, hierarchical and distributed. In this section, we give a brief review of these paradigms; a performance evaluation of all three can also be found in Hamscher et al. [5].

6.2.1 Centralized scheduling

In a centralized scheduling environment, a central machine (node) acts as a resource manager, scheduling jobs to all the surrounding nodes that are part of the environment. This paradigm is often used in situations such as a computing centre, where resources have similar characteristics and usage policies. Figure 6.2 shows the architecture of centralized scheduling. In this scenario, jobs are first submitted to the central scheduler, which then dispatches them to the appropriate nodes. Jobs that cannot be started on a node immediately are normally stored in a central job queue for a later start.
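The dispatch-and-queue behaviour of a centralized scheduler can be sketched as follows. This is an illustrative toy model (all class, node and job names are invented), not code from any of the systems discussed in this chapter.

```python
from collections import deque

class CentralScheduler:
    """Minimal sketch of centralized scheduling: one scheduler,
    one central job queue, many worker nodes."""

    def __init__(self, free_slots_per_node):
        # free_slots_per_node: {node_name: number of free CPU slots}
        self.free = dict(free_slots_per_node)
        self.queue = deque()   # central queue for jobs that cannot start yet
        self.placement = {}    # job -> node it was dispatched to

    def submit(self, job, slots_needed=1):
        self.queue.append((job, slots_needed))
        self.dispatch()

    def dispatch(self):
        # Try to start queued jobs; jobs that do not fit anywhere stay
        # in the central queue for a later start.
        still_waiting = deque()
        while self.queue:
            job, need = self.queue.popleft()
            node = next((n for n, s in self.free.items() if s >= need), None)
            if node is None:
                still_waiting.append((job, need))
            else:
                self.free[node] -= need
                self.placement[job] = node
        self.queue = still_waiting

sched = CentralScheduler({"node-a": 2, "node-b": 1})
for j in ["job1", "job2", "job3", "job4"]:
    sched.submit(j)
print(sched.placement)    # job1..job3 placed across the two nodes
print(list(sched.queue))  # job4 waits in the central queue
```

The single scheduler sees the state of every node, which is exactly why it can place jobs well, and exactly why it becomes a bottleneck and single point of failure as the environment grows.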
One advantage of a centralized scheduling system is that the scheduler may produce better scheduling decisions, because it has all the necessary and up-to-date information about the available resources. However, centralized scheduling obviously does not scale well with the increasing size of the environment that it manages. The scheduler itself may well become a bottleneck, and a hardware or software failure of the scheduler's server presents a single point of failure for the whole environment.

Figure 6.2 Centralized scheduling

6.2.2 Distributed scheduling

In this paradigm, there is no central scheduler responsible for managing all the jobs. Instead, distributed scheduling involves multiple localized schedulers, which interact with each other in order to dispatch jobs to the participating nodes. There are two mechanisms for a scheduler to communicate with other schedulers – direct or indirect communication.

Distributed scheduling overcomes the scalability problems incurred by the centralized paradigm; in addition, it can offer better fault tolerance and reliability. However, the lack of a global scheduler with all the necessary information on available resources usually leads to sub-optimal scheduling decisions.

Direct communication

In this scenario, each local scheduler can communicate directly with other schedulers for job dispatching. Each scheduler has a list of remote schedulers that it can interact with, or there may exist a central directory that maintains all the information related to each scheduler. Figure 6.3 shows the architecture of direct communication in the distributed scheduling paradigm.
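Direct communication between local schedulers can be sketched like this: each scheduler keeps a list of peers and forwards a job it cannot place locally. All names and the simple first-fit policy are illustrative; real systems exchange richer state over a network.

```python
# Toy sketch of direct scheduler-to-scheduler communication in the
# distributed paradigm. Each local scheduler holds a peer list and
# forwards jobs it cannot place locally. Names are illustrative.

class PeerScheduler:
    def __init__(self, name, free_slots):
        self.name = name
        self.free = free_slots
        self.peers = []      # remote schedulers this one can talk to
        self.running = []

    def submit(self, job, need=1):
        if need <= self.free:
            self.free -= need
            self.running.append(job)
            return self.name
        # Local resources exhausted: ask each known peer in turn.
        for peer in self.peers:
            if need <= peer.free:
                peer.free -= need
                peer.running.append(job)
                return peer.name
        return None          # nobody can run the job right now

s1 = PeerScheduler("site-1", free_slots=0)
s2 = PeerScheduler("site-2", free_slots=2)
s1.peers.append(s2)
print(s1.submit("jobA"))   # dispatched to the remote peer 'site-2'
```

Because each scheduler only sees its own peers, decisions are made on partial information, which is the source of the sub-optimality noted above.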
If a job cannot be dispatched to its local resources, its scheduler will communicate with other remote schedulers to find resources appropriate and available for executing the job. Each scheduler may maintain one or more local job queues for job management.

Figure 6.3 Direct communication in distributed scheduling

Communication via a central job pool

In this scenario, jobs that cannot be executed immediately are sent to a central job pool. Compared with direct communication, the local schedulers can potentially choose suitable jobs to schedule on their own resources. Policies are required to ensure that all the jobs in the pool are eventually executed. Figure 6.4 shows the architecture of distributed scheduling using a job pool.

Figure 6.4 Distributed scheduling with a job pool

6.2.3 Hierarchical scheduling

In hierarchical scheduling, a centralized scheduler interacts with local schedulers for job submission. The centralized scheduler is a kind of meta-scheduler that dispatches submitted jobs to local schedulers. Figure 6.5 shows the architecture of this paradigm.

Figure 6.5 Hierarchical scheduling

Similar to the centralized scheduling paradigm, hierarchical scheduling can have scalability and communication bottlenecks. However, compared with centralized scheduling, one advantage of hierarchical scheduling is that the global scheduler and the local schedulers can apply different policies in scheduling jobs.

6.3 HOW SCHEDULING WORKS

Grid scheduling involves four main stages: resource discovery, resource selection, schedule generation and job execution.

6.3.1 Resource discovery

The goal of resource discovery is to identify a list of authenticated resources that are available for job submission.
In order to cope with the dynamic nature of the Grid, a scheduler needs some way of incorporating dynamic state information about the available resources into its decision-making process. This process is somewhat analogous to that of an ordinary compiler for a single-processor machine. The compiler needs to know how many registers and functional units exist and whether or not they are available or "busy". It should also be aware of how much memory it has to work with, what kind of cache configuration has been implemented and the various communication latencies involved in accessing these resources. It is through this information that a compiler can effectively schedule instructions to minimize resource idle time. Similarly, a scheduler should always know what resources it can access, how busy they are, how long it takes to communicate with them and how long it takes for them to communicate with each other. With this information, the scheduler can optimize the scheduling of jobs to make more efficient and effective use of the available resources.

A Grid environment typically uses a pull model, a push model or a push–pull model for resource discovery. The outcome of the resource discovery process is the identity of the resources available (R_available) in a Grid environment for job submission and execution.

The pull model

In this model, a single daemon associated with the scheduler queries Grid resources and collects state information such as CPU load or available memory. The pull model incurs relatively small communication overhead, but unless it requests resource information frequently, it tends to provide stale information that is likely to be out-of-date and potentially misleading.
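A single polling round of such a pull daemon might look like the following sketch. The resource states are hard-coded stand-ins for what would be remote queries, and all names are illustrative.

```python
import time

# Sketch of the pull model: one scheduler-side daemon polls every
# resource for its current state and caches the latest snapshot.
# query_resource() is a local stand-in for a remote call.

def query_resource(name):
    # Stand-in for a remote state query (CPU load, free memory, ...).
    fake_state = {"node-a": (0.35, 2048), "node-b": (0.80, 512)}
    load, mem = fake_state[name]
    return {"cpu_load": load, "free_mem_mb": mem}

def pull_once(resource_names):
    """One polling round. A real daemon repeats this on a timer:
    polling rarely keeps overhead low but lets the snapshot go stale."""
    snapshot = {}
    for name in resource_names:
        info = query_resource(name)
        info["collected_at"] = time.time()  # timestamp to judge staleness
        snapshot[name] = info
    return snapshot

snapshot = pull_once(["node-a", "node-b"])
print(snapshot["node-b"]["cpu_load"])   # 0.8
```

The `collected_at` timestamp is the key design point: a scheduler consuming this snapshot can weigh how much to trust a reading by its age, which is precisely the staleness trade-off described above.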
In centralized scheduling, the resource discovery/query process can be rather intrusive, and it can begin to take a significant amount of time as the environment being monitored grows larger. Figure 6.6 shows the architecture of the pull model.

Figure 6.6 The pull model for resource discovery

The push model

In this model, each resource in the environment runs a daemon that gathers local state information and sends it to a centralized scheduler, which maintains a database recording each resource's activity. If the updates are frequent, an accurate view of the system state can be maintained over time; obviously, frequent updates to the database are intrusive and consume network bandwidth. Figure 6.7 shows the architecture of the push model.

Figure 6.7 The push model for resource discovery

The push–pull model

The push–pull model lies somewhere between the pull model and the push model. Each resource in the environment runs a daemon that collects state information. Instead of sending this information directly to a central scheduler, intermediate nodes run daemons that aggregate state information from different sub-resources and respond to queries from the scheduler. A challenge of this model is to determine what information is most useful, how often it should be collected and how long it should be kept. Figure 6.8 shows the architecture of the push–pull model.

Figure 6.8 The push–pull model for resource discovery

6.3.2 Resource selection

Once the list of possible target resources is known, the second phase of the scheduling process is to select those resources that best suit the constraints and conditions imposed by the user, such as CPU usage, RAM available or disk storage. The result of resource selection is a resource list R_selected in which all resources meet the minimum requirements for a submitted job or job list.
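The selection step just described can be sketched as a simple filter over the discovered resources. The field names and threshold values below are invented for illustration.

```python
# Sketch of resource selection: filter the discovered resources
# (R_available) down to those meeting a job's minimum requirements
# (R_selected). Field names and values are illustrative.

R_available = [
    {"name": "r1", "cpu_ghz": 1.8, "ram_mb": 256, "disk_gb": 40},
    {"name": "r2", "cpu_ghz": 2.6, "ram_mb": 512, "disk_gb": 10},
    {"name": "r3", "cpu_ghz": 0.8, "ram_mb": 512, "disk_gb": 80},
]

job_requirements = {"cpu_ghz": 1.0, "ram_mb": 256, "disk_gb": 20}

def select(resources, req):
    # A resource is selected only if it meets every minimum requirement,
    # so the result is always a subset of the input list.
    return [r for r in resources
            if all(r[k] >= v for k, v in req.items())]

R_selected = select(R_available, job_requirements)
print([r["name"] for r in R_selected])   # ['r1']: r2 lacks disk, r3 lacks CPU
```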
The relationship between the resources available, R_available, and the resources selected, R_selected, is:

    R_selected ⊆ R_available

6.3.3 Schedule generation

The generation of schedules involves two steps: selecting jobs and producing resource selection strategies.

Job selection

The resource selection process is used to choose resource(s) from the list R_selected for a given job. Since all resources in R_selected meet the minimum requirements imposed by the job, an algorithm is needed to choose the best resource(s) to execute the job. Although random selection is an option, it is not an ideal resource selection policy. The resource selection algorithm should take into account the current state of the resources and choose the best one based on a quantitative evaluation. A resource selection algorithm that takes only CPU and RAM into account could be designed as follows:

    Evaluation_resource = (Evaluation_CPU + Evaluation_RAM) / (W_CPU + W_RAM)    (6.1)
    Evaluation_CPU = W_CPU × (1 − CPU_load) × CPU_speed / CPU_min                (6.2)
    Evaluation_RAM = W_RAM × (1 − RAM_usage) × RAM_size / RAM_min                (6.3)

where W_CPU is the weight allocated to CPU speed; CPU_load the current CPU load; CPU_speed the real CPU speed; CPU_min the minimum required CPU speed; W_RAM the weight allocated to RAM; RAM_usage the current RAM usage; RAM_size the original RAM size; and RAM_min the minimum required RAM size.

Now we give an example to explain how the algorithm chooses one resource from three possible candidates. The assumed parameters associated with each resource are given in Table 6.1. Suppose that the total weighting used in the algorithm is 10, where the CPU weight is 6 and the RAM weight is 4. The minimum CPU speed is 1 GHz and the minimum RAM size is 256 MB.

Table 6.1 The resource information matrix

                  CPU speed (GHz)   CPU load (%)   RAM size (MB)   RAM usage (%)
    Resource 1         1.8               50             256              50
    Resource 2         2.6               70             512              60
    Resource 3         1.2               40             512              30

[...]
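As a worked sketch, the evaluation can be applied to the three candidates in Table 6.1. The formulas are assumed here to combine weighted, normalized CPU and RAM terms, Evaluation = (W_CPU × (1 − CPU_load) × CPU_speed/CPU_min + W_RAM × (1 − RAM_usage) × RAM_size/RAM_min) / (W_CPU + W_RAM); that reading of equations (6.1)–(6.3) is an assumption, while the weights and minima come from the example in the text.

```python
# Worked sketch of the CPU + RAM evaluation applied to Table 6.1.
# The combination of the terms is an assumed reading of (6.1)-(6.3);
# W_CPU = 6, W_RAM = 4, CPU_min = 1 GHz and RAM_min = 256 MB are from
# the example in the text.

W_CPU, W_RAM = 6, 4
CPU_MIN_GHZ, RAM_MIN_MB = 1.0, 256

# (cpu_speed_ghz, cpu_load, ram_size_mb, ram_usage) per Table 6.1
resources = {
    "Resource 1": (1.8, 0.50, 256, 0.50),
    "Resource 2": (2.6, 0.70, 512, 0.60),
    "Resource 3": (1.2, 0.40, 512, 0.30),
}

def evaluate(cpu_speed, cpu_load, ram_size, ram_usage):
    eval_cpu = W_CPU * (1 - cpu_load) * cpu_speed / CPU_MIN_GHZ  # (6.2)
    eval_ram = W_RAM * (1 - ram_usage) * ram_size / RAM_MIN_MB   # (6.3)
    return (eval_cpu + eval_ram) / (W_CPU + W_RAM)               # (6.1)

scores = {name: evaluate(*p) for name, p in resources.items()}
best = max(scores, key=scores.get)
for name, s in scores.items():
    print(f"{name}: {s:.3f}")
print("best:", best)
```

Under these assumptions Resource 3 wins: despite the slowest CPU of the three, its low CPU load and lightly used 512 MB of RAM give it the highest combined score.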
[...] management and scheduling system from Sun Microsystems that can be used to optimize the utilization of software and hardware resources in a UNIX-based computing environment. The SGE can be used to find a pool of idle resources and harness them; it can also be used for normal activities, such as managing and scheduling jobs onto the available resources. The latest version of SGE is Sun N1 Grid Engine [...]

• Submit host: Submit hosts are machines configured to submit, monitor and administer jobs, and to manage the entire cluster.
• Execution host: Execution hosts have the permission to run SGE jobs.

Table 6.2 A note of the differences between N1 Grid Engine and Sun Grid Engine: N1GE 6 differs from version 6 of the Sun Grid Engine Open Source builds in the following aspects: [...]

[...] may need more resources than are available, and may also result in a long waiting time and an inability to make good use of the available resources.

• Backfilling selection [6]: The backfilling strategy requires knowledge of the expected execution time of a job to be scheduled. If the next job in the job queue cannot be started due to a lack of available resources, backfilling [...]

[...] familiar Condor job-submission scripts with a few changes and run them on Grid resources managed by Globus, as shown in Figure 6.14. To use Condor-G, we do not need to install a Condor pool. Condor-G is only the job management part of Condor. Condor-G can be installed on just one machine within an organization and [...]

Figure 6.13 Submitting jobs to a Condor pool via [...]
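The backfilling strategy mentioned above can be sketched as follows. The job names, slot counts and runtimes are invented for illustration; real backfilling schedulers also handle reservations for more than one waiting job.

```python
# Sketch of backfilling: if the job at the head of the queue cannot
# start for lack of free slots, a smaller job further back may be
# started early, provided its expected runtime means it finishes
# before the head job's reserved start time. Values are illustrative.

def backfill(queue, free_slots, head_start_time, now=0.0):
    """queue: list of (job, slots_needed, expected_runtime).
    Returns the jobs started early without delaying the head job."""
    started = []
    _, head_need, _ = queue[0]
    if head_need <= free_slots:
        return started   # head job can run now; no backfilling needed
    for job, need, runtime in queue[1:]:
        # Start a job only if it fits now AND finishes before the
        # head job's reserved start time.
        if need <= free_slots and now + runtime <= head_start_time:
            free_slots -= need
            started.append(job)
    return started

queue = [("big", 8, 10.0), ("small1", 2, 3.0), ("small2", 2, 9.0)]
print(backfill(queue, free_slots=4, head_start_time=5.0))   # ['small1']
```

Note how the expected execution time is essential: `small2` fits in the free slots, but its 9-unit runtime would delay the head job's reserved start, so it is not backfilled.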
6.4 A REVIEW OF CONDOR, SGE, PBS AND LSF

In this section, we give a review of Condor/Condor-G, the SGE, PBS and LSF. These four systems have been widely used for Grid-based resource management and job scheduling.

6.4.1 Condor

Condor is a resource management and job scheduling system, a research project from the University of Wisconsin–Madison. In this section we study Condor based on its latest version, Condor 6.6.3. [...]

[...] means higher priority, so a user with priority 5 will get more resources than a user with priority 50.

Job scheduling policies in Condor

Job scheduling in a Condor pool is not strictly based on a first-come-first-served selection policy. Rather, to keep large jobs from draining the pool of resources, Condor uses a unique up-down algorithm [8] that prioritizes [...]

[...] following policies in scheduling jobs:

• First come first served: This is the default scheduling policy.
• Preemptive scheduling: The preemptive policy lets a pending high-priority job take resources away from a running job of lower priority.
• Dedicated scheduling: Dedicated scheduling means that jobs scheduled to dedicated resources cannot be preempted.

Resource matching in Condor

Resource matching [9] [...]

[...] combined with resource reservation [11]. Also, the equal-share scheduling mentioned above has been replaced in N1GE 6 by a combination of other, more advanced scheduling facilities.

6.4.3 The Portable Batch System (PBS)

The PBS is a resource management and scheduling system. It accepts batch jobs (shell scripts with control attributes), preserves and protects each job until it runs, executes the job, and delivers [...]
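Returning briefly to Condor's resource matching mentioned above: in matchmaking, each side advertises attributes and requirements, and a match requires that both advertisements be satisfied. The following is a much-simplified illustration of that idea in Python, not Condor's actual ClassAd language; all attribute names and values are invented.

```python
# Much-simplified sketch of ClassAd-style matchmaking: a job advertises
# requirements, a machine advertises attributes, and the matchmaker
# pairs them only when each side's requirements hold against the other.
# This illustrates the idea; it is not Condor's ClassAd syntax.

machines = [
    {"Name": "m1", "Memory": 512, "Arch": "X86_64",
     "requirements": lambda job: job.get("ImageSize", 0) <= 512},
    {"Name": "m2", "Memory": 128, "Arch": "X86_64",
     "requirements": lambda job: job.get("ImageSize", 0) <= 128},
]

job = {
    "ImageSize": 256,
    "requirements": lambda m: m["Arch"] == "X86_64" and m["Memory"] >= 256,
}

def match(job, machines):
    # Both advertisements must be satisfied symmetrically for a match.
    return [m["Name"] for m in machines
            if job["requirements"](m) and m["requirements"](job)]

print(match(job, machines))   # ['m1']: m2 has too little memory
```

The symmetry is the design point: machines can refuse jobs (e.g. by owner or size) just as jobs constrain machines, which is what lets resource owners keep control of their machines in a shared pool.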
[...] dynamic and qualitative resource discovery. Furthermore, system administrators have to anticipate the services that will be requested by clients and set up queues to provide these services.

Figure 6.20 Jobs can be submitted to or from a PBS cluster to Globus

Additional PBS Pro services include:

• Cycle harvesting: PBS Pro can run jobs on idle workstations and suspend [...]

[...] The actual priority value can be displayed using the qstat command (the priority value is contained in the last column of the pending-jobs display, titled "P"). The default priority value assigned to a job at submission time is 0. Priority values are positive and negative integers, and the pending job list is sorted correspondingly in order of [...]