System Analysis, Design, and Development: Concepts, Principles, and Practices (Part 9)

We exercise the simulations over a variety of OPERATING ENVIRONMENT scenarios and conditions. The results are analyzed, compiled, and documented in an Architecture Trade Study, which rank orders the candidate architectures as part of its recommendations. Based on a review of the Architecture Trade Study, SEs select an architecture. Once the architecture is selected, the simulation serves as the framework for evaluating and refining each simulated architectural entity at lower levels of abstraction.

Application 2: Simulation-Based Architectural Performance Allocations

Modeling and simulation are also employed to perform simulation-based performance allocations as illustrated in Figure 51.2. Consider the following example:

EXAMPLE 51.9

Suppose that Requirement A describes and bounds Capability A. Our initial analysis derives three subordinate capabilities, A1 through A3, that are specified and bounded by Requirements A1 through A3. The challenge is: How do SEs allocate Capability A's performance to Capabilities A1 through A3?

Let's assume that basic analysis provides us with an initial set of performance allocations that is "in the ballpark." However, the interactions among entities are complex and require modeling and simulation to support performance allocation decision making. We construct a model of Capability A's architecture to investigate the performance relationships and interactions of Entities A1 through A3. Next, we construct the Capability A simulation consisting of Models A1 through A3, representing subordinate Capabilities A1 through A3. Each supporting capability, A1 through A3, is modeled using the System Entity Capability Construct shown in Figure 22.1. The simulation is exercised for a variety of stimuli, cues, or excitations using Monte Carlo methods to understand the behavior of the interactions over a range of OPERATING ENVIRONMENT scenarios and conditions.
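The Monte Carlo step above can be pictured with a small sketch. This is an illustrative approximation, not the book's tooling: the 250 ms latency budget, the serial A1-A3 chain, and the Gaussian allocations are all hypothetical assumptions introduced here.

```python
import random

# Hypothetical: Requirement A bounds Capability A's end-to-end response
# time at 250 ms, allocated across subordinate Capabilities A1-A3.
REQUIREMENT_A_MS = 250.0

# Candidate allocations (mean, std dev in ms) -- the "in the ballpark"
# starting point that basic analysis might provide.
ALLOCATIONS = {"A1": (90.0, 10.0), "A2": (70.0, 8.0), "A3": (60.0, 6.0)}

def simulate_capability_a(trials=10_000, seed=42):
    """Monte Carlo run: draw each subordinate capability's response time
    and estimate how often the serial chain A1 -> A2 -> A3 violates
    Requirement A over the sampled scenarios."""
    rng = random.Random(seed)  # fixed seed so the run is reproducible
    violations = 0
    for _ in range(trials):
        total = sum(max(0.0, rng.gauss(mu, sigma))
                    for mu, sigma in ALLOCATIONS.values())
        if total > REQUIREMENT_A_MS:
            violations += 1
    return violations / trials

print(f"P(violate Requirement A) ~ {simulate_capability_a():.4f}")
```

If the violation probability is unacceptable, the allocations are adjusted and the simulation rerun, which is the iteration loop the text describes next.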
The results of the interactions are captured in the system behavioral response characteristics.

Figure 51.1 Simulation-Based Architecture Selection

After several iterations to optimize the interactions, SEs arrive at a final set of performance allocations that become the basis for requirements specifications for Capability A. Is this perfect? No! Remember, this is a human approximation or estimate. Due to variations in physical components and the OPERATING ENVIRONMENT, the final simulations may still have to be calibrated, aligned, and tweaked for field operations based on actual field data. However, we initiated this process to reduce the complexity of the solution space into more manageable pieces. Thus, we arrive at a very close approximation to support requirements allocations without having to incur the expense of developing the actual working hardware and software.

Application 3: Simulation-Based Acquisition (SBA)

Traditionally, when an Acquirer acquired a system or product, they had to wait until the System Developer delivered the final system for Operational Test and Evaluation (OT&E) or final acceptance. During OT&E the User or an Independent Test Agency (ITA) conducts field exercises to evaluate system or product performance under actual OPERATING ENVIRONMENT conditions. Theoretically there should be no surprises. Why?

1. The System Performance Specification (SPS) perfectly described and bounded the well-defined solution space.
2.
The System Developer created the ideal physical solution that perfectly complies with the SPS.

In REALITY every system design solution has compromises due to the constraints imposed. Acquirers and Users of a system need a level of confidence "up front" that the system will perform as intended. Why? The cost of developing large, complex systems, and of ensuring that they meet User validated operational needs, is challenging. One method for improving the chances of delivery success is simulation-based acquisition (SBA).

What is SBA? In general, when the Acquirer releases a formal Request for Proposal (RFP) solicitation for a system or product, a requirement is included for each Offeror to deliver a working simulation model along with their technical proposal. The RFP stipulates criteria for meeting a prescribed set of functionality, interface, and performance requirements. To illustrate how SBA is applied, refer to Figure 51.3.

Figure 51.2 Simulation-Based Performance Allocations

EXAMPLE 51.10

Let's suppose a User has an existing system and decides there is a need to replace a SUBSYSTEM such as a propulsion system. Additionally, an Existing System Simulation is presently used to investigate system performance issues.
The User selects an Acquirer to procure the SUBSYSTEM replacement. The Acquirer releases an RFP to a qualified set of Offerors, Competitors A through n. In response to RFP requirements, each Offeror delivers a simulation of their proposed system or product to support the evaluation of their technical proposal. On delivery, the Acquirer Source Selection Team evaluates each technical proposal using predefined proposal evaluation criteria. The Team also integrates the SUBSYSTEM simulation into the Existing System Simulation for further technical evaluation. During source selection, the Offerors' technical proposals and simulations are evaluated. Results of the evaluations are documented in a Product Acquisition Trade Study Report (TSR). The TSR provides a set of Acquisition Recommendations to the Source Selection Team (SST), which in turn makes Acquisition Recommendations to a Source Selection Decision Authority (SSDA).

Application 4: Test Environment Stimuli

System Integration, Test, and Evaluation (SITE) can be a very expensive element of system development, not only from its labor intensiveness but also from the creation of the test environment interfaces to the unit under test (UUT). There are several approaches SEs can employ to test a UUT. The usual SITE options include: 1) stimulation, 2) emulation, and 3) simulation. The simulations in this context are designed to reproduce external system interfaces to the UUT. Refer to Figure 51.4.

Figure 51.3 Simulation-Based Acquisition (SBA)

Application 5: Simulation-Based Failure Investigations

Large, complex systems often require simulations that enable decision makers to explore different aspects of performance in employing the system or product in a prescribed OPERATING ENVIRONMENT. Occasionally, these systems encounter an unanticipated failure mode that requires in-depth investigation. The question for SEs is: What set of system/operator actions or conditions and use case scenarios contributed to the failure? Was the root cause due to: 1) latent defects, design flaws, or errors; 2) reliability of components; 3) operational fatigue; 4) lack of proper maintenance; 5) misuse, abuse, or misapplication of the system from its intended application; or 6) an anomaly?

Due to safety and other issues, it may be advantageous to explore the root cause of the FAILURE using the existing simulation. The challenge for SEs is being able to:

1. Construct the chain of events leading to the failure.
2. Reliably replicate the problem on a predictable basis.

A decision could be made to use the simulation to explore the probable cause of the failure mode. Figure 51.5 illustrates how you might investigate the cause of failure. Let's assume that a System Failure Report (1) documents the OPERATING ENVIRONMENT scenarios and conditions leading to a failure event. It includes a maintenance history record among the documents. Members of the failure analysis team extract the Operating Conditions and Data (2) from the report and incorporate the actual data into the Existing System Simulation (3). SEs perform analyses using Validated Field Data (4), among which are the instrument data and a metallurgical analysis of components/residues, and they derive additional inputs and make valid assumptions as necessary.
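One way to picture feeding recorded operating conditions and candidate assumptions into the Existing System Simulation is the hypothetical sketch below. The toy thermal model, the failure threshold, and the candidate causes are all invented for illustration; a real investigation would use the validated system simulation and field data.

```python
# Illustrative sketch (not the book's tooling): replay recorded operating
# conditions through a simplified system model under each candidate root
# cause, and keep only causes that reproduce the reported failure.

def system_model(load, wear_factor=1.0, maintained=True):
    """Toy stand-in for the Existing System Simulation: returns an output
    temperature; the (hypothetical) failure signature is overheating."""
    cooling = 1.0 if maintained else 0.6
    return 20.0 + load * wear_factor / cooling

FAILURE_THRESHOLD = 95.0             # observed failure: temperature > 95
recorded_loads = [40.0, 55.0, 62.0]  # from the System Failure Report

candidate_causes = {
    "component_wear":     {"wear_factor": 1.4, "maintained": True},
    "missed_maintenance": {"wear_factor": 1.0, "maintained": False},
    "nominal":            {"wear_factor": 1.0, "maintained": True},
}

suspects = []
for cause, params in candidate_causes.items():
    # A cause remains suspect only if it reproduces the failure
    # signature under the recorded conditions.
    if any(system_model(load, **params) > FAILURE_THRESHOLD
           for load in recorded_loads):
        suspects.append(cause)

print("Causes not yet ruled out:", suspects)
```

This mirrors the fact-based elimination premise: every scenario stays suspect until the simulation shows it cannot reproduce the failure.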
Figure 51.4 Stimulation, Emulation, and Simulation Testing Options

The failure analysis team explores all possible actions and rules out probable causes using Monte Carlo simulations and other methods. As with any failure mode investigation, the approach is based on the premise that all scenarios and conditions are suspect until they are ruled out by a process of fact-based elimination. Simulation Results (7) serve as inputs to a Failure Modes and Effects Analysis (FMEA) (8) that compares the results with the scenarios and conditions identified in the System Failure Report (1). If the results are not predictable (9), the SEs continue to Refine the Model/Operations (10) until they are successful in duplicating the root cause on a predictable basis.

Application 6: Simulation-Based Training

Although simulations are used as analytical tools for technical decision making, they are also used to train system operators. Simulators are commonly used for air and ground vehicle training. Figure 51.6 provides an illustrative example. For these applications, simulators are developed as deliverable instructional training devices to provide the look and feel of actual systems such as aircraft. As instructional training devices, these systems support all phases of training including: 1) briefing, 2) mission training, and 3) post-mission debriefing. From an SE perspective, these systems provide a Human-in-the-Loop (HITL) training environment that includes:

1.
Briefing Stations (3) support trainee briefs concerning the planned missions and mission scenarios.
2. Instructor/Operator Stations (IOS) (5) control the training scenario and environment.
3. The Target System Simulation (1) simulates the physical system the trainee is being trained to operate.
4. Visual Systems (8) generate and display (9) (10) simulated OPERATING ENVIRONMENTS.
5. Databases (7) support the visual system environments.
6. Debrief Stations (3) provide an instructional replay of the training mission and results.

Figure 51.5 Simulation-Based Failure Mode Investigations

Training Simulator Implementation. In general, there are several types of training simulators:

• Fixed Platform Simulators Provide a static implementation and use only visual system motion and cues to represent the dynamic motion of the trainee.
• Motion System Simulators Employ one-, two-, or three-axis or six-degree-of-freedom (6 DOF) motion platforms to provide enhanced realism in a simulated training session.

One of the challenges of training simulation development is the cost related to hardware and software. Technology advances sometimes outpace the time required to develop and deliver new systems. Additionally, the capability to create an immersive training environment that transcends the synthetic and physical worlds is challenging.
One approach to these challenges is to develop a virtual reality simulator. What is a virtual reality simulation?

• Virtual Reality Simulation The employment of physical elements such as helmet visors and sensory gloves to psychologically immerse a subject in an audio, visual, and haptic feedback environment that creates the perception and sensation of physical reality.

Application 7: Test Bed Environments for Technical Decision Support

When we develop systems, we need early feedback on the downstream impacts of technical decisions. While methods such as breadboards, brassboards, rapid prototyping, and technical demonstrations enable us to reduce risk, the reality is that the effects of these decisions may not be known until the System Integration, Test, and Evaluation (SITE) Phase. Even worse, the cost to correct any design flaws or errors in these decisions or physical implementations increases significantly as a function of time after Contract Award.

Figure 51.6 Simulation-Based Training
From an engineering perspective, it would be desirable to evolve and mature models, or prototypes of a laboratory "working system," directly into the deliverable system. An approach such as this provides continuity of:

1. The evolving system design solution and its element interfaces.
2. Verification of those elements.

The question is: HOW can we implement this approach? One method is to create a test bed. So, WHAT is a test bed and WHY do you need one?

Test Bed Development Environments. A test bed is an architectural framework and ENVIRONMENT that allows simulated, emulated, or physical components to be integrated as "working" representations of a physical SYSTEM or configuration item (CI) and to be replaced by actual components as they become available. IEEE 610.12 (1990) describes a test bed as "an environment containing the hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test." Test beds may reside in environmentally controlled laboratories and facilities, or they may be implemented on mobile platforms such as aircraft, ships, and ground vehicles. In general, a test bed serves as a mechanism that enables the virtual world of modeling and simulation to transition to the physical world over time.

Test Bed Implementation. A test bed is implemented with a central framework that integrates the system elements and controls the interactions as illustrated in Figure 51.7. Here, we have a Test Bed Executive Backbone (1) framework that consists of Interface Adapters (2), (5), (10) that serve as interfaces to simulated or actual physical elements, PRODUCTS A through C.
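The executive-backbone-plus-adapter arrangement can be sketched in code. All class and method names below are illustrative assumptions, not the book's design; the point is that a simulated and a physical PRODUCT satisfy the same interface contract, so either can sit behind an adapter slot.

```python
from abc import ABC, abstractmethod

class InterfaceAdapter(ABC):
    """Common contract that every PRODUCT representation must satisfy."""
    @abstractmethod
    def handle(self, message: dict) -> dict: ...

class SimulatedProductA(InterfaceAdapter):
    def handle(self, message):
        # Low-fidelity model: returns a canned behavioral response.
        return {"source": "Simulation A", "status": "ok", "echo": message}

class PhysicalProductA(InterfaceAdapter):
    def handle(self, message):
        # Would drive the real hardware; shown here as a stub.
        return {"source": "Physical Device A", "status": "ok", "echo": message}

class TestBedExecutive:
    """Central backbone: integrates elements and controls interactions."""
    def __init__(self):
        self._adapters = {}

    def attach(self, product_id, adapter: InterfaceAdapter):
        self._adapters[product_id] = adapter   # plug-and-play slot

    def send(self, product_id, message):
        return self._adapters[product_id].handle(message)

executive = TestBedExecutive()
executive.attach("PRODUCT A", SimulatedProductA())
print(executive.send("PRODUCT A", {"cmd": "self_test"})["source"])  # prints "Simulation A"
```

Swapping in `PhysicalProductA()` under the same `"PRODUCT A"` slot requires no change to the executive or to any other element, which is exactly the plug-and-play property the text describes.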
During the early stages of system development, PRODUCTS A, B, and C are MODELED and incorporated into simulations: Simulation A (4); Simulations B1 (7), B2 (9), and B3 (8); and Simulation C (12). The objective is to investigate critical operational or technical issues (COIs/CTIs) and facilitate technical decision making. These initial simulations may be of LOW to MEDIUM fidelity. As the system design solution evolves, HIGHER fidelity models may be developed to replace the lower fidelity models, depending on specific requirements. As PRODUCTS A, B, and C or their subelements are physically implemented as prototypes, breadboards, brassboards, and the like, the physical entities may replace Simulations A through C as plug-and-play modules. Consider the following example:

Figure 51.7 Simulation Testbed Approach to System Development

EXAMPLE 51.11

During the development of PRODUCT B, SUBSYSTEMS B1 through B3 may be implemented as Simulations B1, B2, and B3. At some point in time SUBSYSTEM B2 is physically prototyped in the laboratory. Once the SUBSYSTEM B2 physical prototype reaches an acceptable level of maturity, Simulation B2 is removed and replaced by the SUBSYSTEM B2 prototype.
Later, when the SUBSYSTEM B2 developer delivers the verified physical item, the SUBSYSTEM B2 prototype is replaced with the deliverable item.

In summary, a test bed provides a controlled framework with interface "stubs" that enable developers to integrate, in plug-and-play fashion, functional models, simulations, or emulations. As physical hardware configuration items (HWCIs) and computer software configuration items (CSCIs) are verified, they replace the models, simulations, or emulations. Thus, over time the test bed evolves from an initial set of functional and physical models and simulation representations to a fully integrated and verified system.

Reasons That Drive the Need for a Test Bed. Throughout the System Development and the Operations and Support (O&S) phases of the system/product life cycle, SEs are confronted with several challenges that drive the need for using a test bed. Throughout this decision-making process, a mechanism is required that enables SEs to incrementally build a level of confidence in the evolving system architecture and design solution as well as to support field upgrades after deployment.

Under conventional system development, breadboards, brassboards, rapid prototypes, and technology demonstrations are used to investigate COIs/CTIs. Data collected from these decision aids are translated into design requirements, for example, as mechanical drawings, electrical assembly drawings and schematics, and software designs. The translation process is prone to human errors. Integrated tool environments minimize the human translation errors but often suffer from format compatibility problems. Due to discontinuities in the design and component development workflow, the success of these decisions and implementations may not be known until the System Integration, Test, and Evaluation (SITE) Phase. So, how can a test bed overcome these problems? There are several reasons why test beds can facilitate system development.
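A practical wrinkle of the replacement step in Example 51.11 is deciding when a physical item may safely take its simulation's place. One hedged sketch, with invented response functions and tolerance, is a simple regression gate: the replacement is accepted only if it agrees with the current representation across a set of test stimuli.

```python
# Hypothetical sketch: gate the plug-and-play swap of SUBSYSTEM B2 on
# response agreement between the outgoing and incoming representations.

def simulation_b2(stimulus):
    return 2.0 * stimulus + 0.5      # low-fidelity model (invented)

def prototype_b2(stimulus):
    return 2.02 * stimulus + 0.48    # lab prototype measurement stub (invented)

def safe_to_swap(old, new, stimuli, tolerance=0.1):
    """Accept the replacement only if responses agree within tolerance
    across all test stimuli -- a simple regression gate for the test bed."""
    return all(abs(old(s) - new(s)) <= tolerance for s in stimuli)

stimuli = [0.0, 1.0, 5.0]
if safe_to_swap(simulation_b2, prototype_b2, stimuli):
    active_b2 = prototype_b2         # Simulation B2 removed, prototype in
else:
    active_b2 = simulation_b2        # keep the model; investigate the deltas

print("Active B2:", active_b2.__name__)
```

The same gate applies again when the verified deliverable item replaces the prototype, giving continuity of verification across each substitution.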
Reason 1: Performance allocation-based decision making. When we engineer and develop systems, recursive application of the SE Process Model requires informed, fact-based decision making at each level of abstraction using the most current data available. Models and simulations provide a means to investigate and analyze performance and system responses to OPERATING ENVIRONMENT scenarios for a given set of WHAT IF assumptions. The challenge is that models and simulations are ONLY as GOOD as the algorithmic representations used, as validated against actual field data measurements.

Reason 2: Prototype development expense. Working prototypes and demonstrations provide mechanisms to investigate a system's behavior and performance. However, full prototypes for some systems may be too risky due to the MATURITY of the technology involved and to expense, schedule, and security issues. The question is: Do you have to incur the expense of creating a prototype of an entire system just to study a part of it? Consider the following example:

EXAMPLE 51.12

To study an aerodynamic problem, you may not need to physically build an entire aircraft. Model a "piece" of the problem for a given set of boundary conditions.

Reason 3: System component delivery problems. Despite insightful planning, programs often encounter late vendor deliveries. When this occurs, SITE activities may severely impact contract schedules unless you have a good risk mitigation plan in place. SITE activities may become bottlenecked until a critical component is delivered. Risk mitigation activities might include some form of representation, by simulation, emulation, or stimulation, of the missing component to enable SITE to continue and to avoid interrupting the overall program schedule.

Reason 4: New technologies. Technology drives many decisions. The challenges SEs must answer are:

1.
Is a technology as mature as its literature suggests?
2. Is this the RIGHT technology for this User's application and longer term needs?
3. Can the technology be seamlessly integrated with the other system components with minimal schedule impact?

A test bed enables the integration, analysis, and evaluation of new technologies, such as new engines for aircraft, without exposing an existing system to unnecessary risk.

Reason 5: Post-deployment field support. Some contracts require field support for a specific time frame following system delivery during the System Operations and Support (O&S) Phase. If the Users are planning a series of upgrades via builds, they have a choice:

1. Bear the cost of operating and maintaining test article(s) of a fielded system for assessing incremental upgrades to a fielded configuration.
2. Maintain a test bed that allows the evaluation of configuration upgrades.

Depending on the type of system and its complexity, test beds can provide a lower cost solution.

Synthesizing the Challenges. In general, a test bed provides for plug-and-play simulations of configuration items (CIs) or the actual physical components. Test beds are also useful for workarounds because they can minimize SITE schedule problems. They can be used to:

• Integrate early versions of an architectural configuration that is populated with simulated model representations (functional, physical, etc.) of configuration items (CIs).
• Establish a plug-and-play working test environment with prototype system components before an entire system is developed.
• Evaluate systems or configuration items (CIs) represented by simulated or emulated models that can be replaced by higher fidelity models and ultimately by the actual physical configuration item (PCI).
• Apply various technologies and alternative architectural and design solutions for configuration items (CIs).
• Assess incremental capability and performance upgrades to system field configurations.

Evolution of the Test Bed. Test beds evolve in a number of different ways. Test beds may be operated and maintained until the final deliverable system completes SITE. At that point actual systems serve as the basis for incremental or evolutionary development. Every system is different, so assess the cost-benefits of maintaining the test bed. All or portions of the test bed may be dismantled, depending on the development needs as well as the utility and expense of maintenance.

For some large, complex systems, it may be impractical to conduct WHAT IF experiments on the ACTUAL systems in enclosed facilities due to:

1. Physical space requirements.
2. Environmental considerations.
3. Geographically dispersed development organizations.

In these cases it may be practical to keep a test bed intact. This, in combination with the capabilities of high-speed Internet access, may allow geographically dispersed development organizations to conduct work with a test bed without having to be physically colocated with the actual system.

51.6 MODELING AND SIMULATION CHALLENGES AND ISSUES

Although modeling and simulation offer great opportunities for SEs to exploit technology to understand the problem and solution spaces, there are also a number of challenges and issues. Let's explore some of these.

Challenge 1: Failure to Record Assumptions and Scenarios

Modeling and simulation require establishing a base set of assumptions, scenarios, and operating conditions. Reporting modeling and simulation results without recording and noting this information in technical reports and briefings diminishes the integrity and credibility of the results.
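A minimal way to operationalize Challenge 1 is to persist the assumptions, scenario, and operating conditions alongside every set of results. The field names and values below are hypothetical, introduced only to show the idea:

```python
import json
import time

def record_run(results, *, scenario, assumptions, conditions, seed):
    """Bundle simulation results with the context needed to trust and
    reproduce them: scenario, assumptions, conditions, and random seed."""
    record = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
        "scenario": scenario,
        "assumptions": assumptions,       # the base set behind the numbers
        "operating_conditions": conditions,
        "random_seed": seed,              # needed to reproduce Monte Carlo runs
        "results": results,
    }
    return json.dumps(record, indent=2)  # ready to file with the report

report = record_run(
    {"mean_response_ms": 221.4, "p_violation": 0.017},
    scenario="desert_ops_high_temp",
    assumptions=["steady-state loads", "no operator error"],
    conditions={"ambient_temp_C": 49, "duty_cycle": 0.8},
    seed=42,
)
print(report)
```

Numbers reported without such a record cannot be defended when a reviewer asks which scenario and assumptions produced them.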
Challenge 2: Improper Application of the Model

Before applying a model to a specific type of decision support task, the intended application of the model should be verified. There may be instances where models do not exist for the application. You may even be confronted with a model that has only a degree of relevance to the application. If this happens, you should take the relevancy into account and apply the results cautiously. The best approach may be to adapt the current model.

Challenge 3: Poor Understanding of Model Deficiencies and Flaws

Models and simulations generally evolve because an organization has an operational need to satisfy or resolve. Where the need to resolve critical operational or technical issues (COIs/CTIs) is immediate, the investigator may only model a segment of an application, a "piece of the problem." Other Users with different needs may want to modify the model to satisfy their own "segment" needs. Before long, the model evolves through a series of undocumented "patches," and documentation accuracy and configuration control become critical issues.
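One hedged way to make undocumented "patches" visible, assuming a simple versioning discipline not prescribed by the book, is to fingerprint each model configuration so every result can be traced to an exact model identity and parameter set:

```python
import hashlib
import json

def fingerprint(model_name, version, params):
    """Hash the model's identity and parameters into a short configuration
    ID; any change to the parameters yields a different ID, so a silent
    tweak shows up in the audit trail attached to the results."""
    blob = json.dumps({"model": model_name, "version": version,
                       "params": params}, sort_keys=True)
    return hashlib.sha256(blob.encode()).hexdigest()[:12]

baseline = fingerprint("capability_a_model", "1.0", {"gain": 2.0})
patched = fingerprint("capability_a_model", "1.0", {"gain": 2.5})  # undocumented tweak

print("baseline config:", baseline)
print("patched config: ", patched)
```

Tagging every simulation output with such an ID ties results back to a controlled configuration, which is the core of the configuration control problem Challenge 3 raises.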
REFERENCES

DoD 5000.59-M. 1998. DoD Modeling and Simulation (M&S) Glossary. Washington, DC: Department of Defense (DoD).

DSMC. 1998. Simulation Based Acquisition: A New Approach. Ft. Belvoir, VA: Defense Systems Management College (DSMC) Press.

IEEE Std 610.12-1990. 1990. IEEE Standard Glossary of Software Engineering Terminology. New York: Institute of Electrical and Electronics Engineers (IEEE).

Kossiakoff, Alexander, and Sweet, William N. 2003. Systems Engineering Principles and Practice. New York: Wiley-InterScience.

MIL-STD-499B. 1994 (canceled draft). Systems Engineering. Washington, DC: Department of Defense (DoD).

National Aeronautics and Space Administration (NASA). 1994. Systems Engineering "Toolbox" for Design-Oriented Engineers. NASA Reference Publication 1358. Washington, DC.

ADDITIONAL READING

US Federal Aviation Administration (FAA), ASD-100 Architecture and System Engineering. 2003. National Airspace System Systems ... Washington, DC.
