Automation

1 Automation

What is Automation
Automated testing is automating the manual testing process currently in use.

Why Automate the Testing Process?
Today, rigorous application testing is a critical part of virtually all software development projects. As more organizations develop mission-critical systems to support their business activities, the need grows for testing methods that support business objectives. It is necessary to ensure that these systems are reliable, built according to specification, and able to support business processes. Many internal and external factors are forcing organizations to ensure a high level of software quality and reliability.

In the past, most software tests were performed using manual methods. This required a large staff of test personnel to perform expensive and time-consuming manual test procedures. Owing to the size and complexity of today's advanced software applications, manual testing is no longer a viable option for most testing situations. Every organization has unique reasons for automating software quality activities, but several reasons are common across industries.

Using Testing Effectively
By definition, testing is a repetitive activity. The very nature of application software development dictates that no matter which methods are employed to carry out testing (manual or automated), testing remains repetitious throughout the development lifecycle. Automating the testing process allows machines to complete the tedious, repetitive work while human personnel perform other tasks. Automation also reduces or eliminates the "think time" or "read time" a tester needs to interpret when or where to click the mouse or press the Enter key. An automated test executes the next operation in the test hierarchy at machine speed, allowing tests to be completed many times faster than the fastest individual could complete them. Furthermore, some types of testing, such as load/stress testing, are virtually impossible to perform manually.

Reducing Testing Costs
The cost of performing manual testing is prohibitive when compared to automated methods, because computers can execute instructions many times faster, and with fewer errors, than individuals. Many automated testing tools can replicate the activity of a large number of users (and their associated transactions) using a single computer. Load/stress testing with automated methods therefore requires only a fraction of the computer hardware that would be necessary to complete a manual test. Imagine performing a load test on a typical distributed client/server application for which 50 concurrent users are planned. Testing manually would require 50 application users employing 50 PCs with associated software, an available network, and a cadre of coordinators to relay instructions to the users. With an automated scenario, the entire test operation could be created on a single machine with the ability to run and rerun the test as necessary, at night or on weekends, without having to assemble an army of end users. As another example, imagine the same application used by hundreds or thousands of users; it is easy to see why manual load/stress testing is an expensive and logistical nightmare.
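
To make the single-machine scenario concrete, the following minimal Python sketch simulates a number of concurrent virtual users with threads. The target URL and login payload are hypothetical placeholders, and the third-party requests library is assumed; a production load test would normally rely on a dedicated load-testing tool.

    # Minimal sketch: one machine simulating many concurrent "virtual users".
    # The URL and payload are hypothetical; requires the "requests" package.
    import threading
    import time
    import requests

    TARGET_URL = "http://app.example.com/login"   # hypothetical endpoint
    CONCURRENT_USERS = 50                         # the 50 planned users from the text

    results = []            # (user_id, status_code, elapsed_seconds)
    results_lock = threading.Lock()

    def virtual_user(user_id: int) -> None:
        """Play one user's transaction and record the outcome."""
        start = time.time()
        response = requests.post(TARGET_URL, data={"user": f"tester{user_id}"}, timeout=30)
        elapsed = time.time() - start
        with results_lock:
            results.append((user_id, response.status_code, elapsed))

    threads = [threading.Thread(target=virtual_user, args=(i,)) for i in range(CONCURRENT_USERS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    slowest = max(elapsed for _, _, elapsed in results)
    print(f"{len(results)} virtual users finished; slowest transaction took {slowest:.2f}s")
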
Replicating Testing Across Different Platforms
Automation allows the testing organization to perform consistent and repeatable tests. When applications need to be deployed across different hardware or software platforms, standard or benchmark tests can be created and repeated on the target platforms to ensure that new platforms operate consistently.

Repeatability and Control
By using automated techniques, the tester has a very high degree of control over which types of tests are performed and how they are executed. Automated tests enforce consistent procedures that allow developers to evaluate the effect of various application modifications as well as the effect of various user actions. For example, automated tests can be built that extract variable data from external files or applications and then run a test using that data as input values. Most importantly, automated tests can be executed as many times as necessary without requiring a user to recreate a test script each time the test is run.
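
As one possible illustration of driving a test from an external data file, the minimal Python sketch below reads input values and expected results from a CSV file and replays the same check for every row. The file name, column names, and calculate_discount function are hypothetical stand-ins for the application under test, not part of any particular testing tool.

    # Minimal data-driven sketch: the same test logic replayed once per row of an
    # external data file. "test_data.csv", its columns, and calculate_discount()
    # are hypothetical placeholders standing in for the application under test.
    import csv

    def calculate_discount(order_total: float) -> float:
        """Stand-in for the application function being tested."""
        return order_total * 0.10 if order_total >= 100 else 0.0

    def run_data_driven_test(path: str) -> None:
        with open(path, newline="") as handle:
            for row in csv.DictReader(handle):   # columns: order_total, expected_discount
                actual = calculate_discount(float(row["order_total"]))
                expected = float(row["expected_discount"])
                status = "PASS" if abs(actual - expected) < 0.01 else "FAIL"
                print(f"{status}: order_total={row['order_total']} "
                      f"expected={expected} actual={actual}")

    run_data_driven_test("test_data.csv")
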
Greater Application Coverage
The productivity gains delivered by automated testing allow and encourage organizations to test more often and more completely. Greater application test coverage also reduces the risk of exposing users to malfunctioning or non-compliant software. In some industries, such as healthcare and pharmaceuticals, organizations are required to comply with strict quality regulations and to document their quality assurance efforts for all parts of their systems.

2 Automation Life Cycle

Identifying Tests Requiring Automation
Most, but not all, types of tests can be automated. Certain types of tests, such as user comprehension tests, tests that run only once, and tests that require constant human intervention, are usually not worth the investment to automate. The following criteria can be used to identify tests that are prime candidates for automation.

High Path Frequency - Automated testing can be used to verify the performance of application paths that are used with a high degree of frequency when the software is running in full production. Examples include creating customer records, invoicing, and other high-volume activities where software failures would occur frequently.

Critical Business Processes - In many situations, software applications can literally define or control the core of a company's business. If the application fails, the company can face extreme disruptions in critical operations. Mission-critical processes are prime candidates for automated testing. Examples include financial month-end closings, production planning, sales order entry, and other core activities. Any application with a high degree of risk associated with failure is a good candidate for test automation.

Repetitive Testing - If a testing procedure can be reused many times, it is also a prime candidate for automation. For example, common outline files can be created to establish a testing session, close a testing session, and apply testing values. These automated modules can be used again and again without having to rebuild the test scripts. This modular approach saves time and money when compared to creating a new end-to-end script for each and every test.

Applications with a Long Life Span - The longer an application is planned to be in production, the greater the benefits from automation.

What to Look For in a Testing Tool
Choosing an automated software testing tool is an important step, and one which often has enterprise-wide implications. Several key issues should be addressed when selecting an application testing solution.

Test Planning and Management
A robust testing tool should have the capability to manage the testing process, provide organization for testing components, and create meaningful end-user and management reports. It should also allow users to include non-automated testing procedures within automated test plans and test results, and to integrate existing test results into an automated test plan. Finally, it should be able to link business requirements to test results, allowing users to evaluate application readiness based upon the application's ability to support the business requirements.

Testing Product Integration
Testing tools should provide tightly integrated modules that support test component reusability. Test components built for performing functional tests should also support other types of testing, including regression and load/stress testing. All products within the testing environment should be based upon a common, easy-to-understand language, so that user training and experience gained in performing one testing task are transferable to other testing tasks. The architecture of the testing tool environment should also be open, to support interaction with other technologies such as defect or bug tracking packages.

Internet/Intranet Testing
A good tool will support testing within the scope of a web browser. Tests created for Internet- or intranet-based applications should be portable across browsers and should automatically adjust for different load times and performance levels.

Ease of Use
Testing tools should be engineered to be usable by non-programmers and application end users. With much of the testing responsibility shifting from the development staff to the departmental level, a testing tool that requires programming skills is unusable by most organizations. Even if programmers are responsible for testing, the testing tool itself should have a short learning curve.

GUI and Client/Server Testing
A robust testing tool should support testing with a variety of user interfaces and create simple-to-manage, easy-to-modify tests. Test component reusability should be a cornerstone of the product architecture.

Load and Performance Testing
The selected testing solution should allow users to perform meaningful load and performance tests that accurately measure system performance, and should provide test results in an easy-to-understand reporting format.

3 Preparing the Test Environment
Once the test cases have been created, the test environment can be prepared. The test environment is defined as the complete set of steps necessary to execute the test as described in the test plan. It includes the initial set-up and description of the environment, and the procedures needed for installation and restoration of the environment.

Description - Document the technical environment needed to execute the tests.
Test Schedule - Identify the times during which your testing facilities will be used for a given test. Make sure that other groups that might share these resources are informed of this schedule.
Operational Support - Identify any support needed from other parts of your organization.
Installation Procedures - Outline the procedures necessary to install the application software to be tested.
Restoration Procedures - Finally, outline the procedures needed to restore the test environment to its original state, so that you are ready to re-execute tests or prepare for a different set of tests.
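
The installation and restoration procedures above map naturally onto automated set-up and tear-down hooks. The sketch below expresses them as a pytest fixture, assuming Python and pytest are available; the install, data-load, and restore helpers are hypothetical placeholders for your own procedures.

    # Sketch of automated environment set-up and restoration using a pytest fixture.
    # install_application(), load_baseline_data() and restore_environment() are
    # hypothetical placeholders for the procedures described in the test plan.
    import pytest

    def install_application() -> None:
        print("installing application build under test...")

    def load_baseline_data() -> None:
        print("loading known baseline test data...")

    def restore_environment() -> None:
        print("restoring environment to its original state...")

    @pytest.fixture
    def test_environment():
        # Installation procedures: run before every test that uses this fixture.
        install_application()
        load_baseline_data()
        yield
        # Restoration procedures: run afterwards, so tests can be re-executed.
        restore_environment()

    def test_login_screen_loads(test_environment):
        # Placeholder assertion standing in for a real test case.
        assert True
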
Inputs to the Test Environment Preparation Process
Technical Environment Descriptions
Approved Test Plan
Test Execution Schedules
Resource Allocation Schedule
Application Software to be installed

Test Planning
Careful planning is the key to any successful process. To guarantee the best possible result from an automated testing program, those evaluating test automation should consider the following fundamental planning steps. The time invested in detailed planning significantly improves the benefits gained from test automation.

Evaluating Business Requirements
Begin the automated testing process by defining exactly what tasks your application software should accomplish in terms of the actual business activities of the end user. The definition of these tasks, or business requirements, constitutes the high-level, functional requirements of the software system in question. These business requirements should be defined in such a way as to make it abundantly clear whether the software system correctly performs the necessary business functions. For example, a business requirement for a payroll application might be to calculate a salary or to print a salary check.

Creating a Test Plan
For the greatest return on automated testing, a testing plan should be created at the same time the software application requirements are defined. This enables the testing team to define the tests, locate and configure test-related hardware and software products, and coordinate the human resources required to complete all testing. This plan is very much a "living document" that should evolve as the application functions become more clearly defined. A good testing plan should be reviewed and approved by the test team, the software development team, all user groups, and the organization's management. The following items detail the input components of the test planning process.

Inputs to the Test Planning Process
Application Requirements - What is the application intended to do? These should be stated in terms of the business requirements of the end users.
Application Implementation Schedules - When is the scheduled release? When are updates or enhancements planned? Are there any specific events or actions that depend upon the application?
Acceptance Criteria for Implementation - What critical actions must the application accomplish before it can be deployed? This information forms the basis for making informed decisions on whether or not the application is ready to deploy.

Test Design and Development
After the test components have been defined, the standardized test cases that will be used to test the application can be created. The type and number of test cases needed will be dictated by the testing plan. A test case identifies the specific input values that will be sent to the application, the procedures for applying those inputs, and the expected application values for the procedure being tested. A proper test case includes the following key components:

Test Case Name(s) - Each test case must have a unique name, so that the results of these test elements can be traced and analyzed.
Test Case Prerequisites - Identify set-up or testing criteria that must be established before a test can be successfully executed.
Test Case Execution Order - Specify any relationships, run orders, and dependencies that might exist between test cases.
Test Procedures - Identify the application steps necessary to complete the test case.
Input Values - Identify the values to be supplied to the application as input, including, where necessary, the action to be completed.
Expected Results - Document all screen identifiers and expected values that must be verified as part of the test. These expected results will be used to measure the acceptance criteria, and therefore the ultimate success of the test.
Test Data Sources - Note the sources for extracting test data if it is not included in the test case.
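
These key components can be recorded in a simple structure so that every test case is documented consistently. The following Python sketch is one possible representation; the field names mirror the list above and the sample payroll values are hypothetical.

    # One possible way to record the key components of a test case in code.
    # Field names mirror the components listed above; sample values are hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class TestCase:
        name: str                         # unique Test Case Name
        prerequisites: list[str]          # set-up criteria that must hold first
        execution_order: int              # run order / dependencies between cases
        procedures: list[str]             # application steps to complete the case
        input_values: dict[str, str]      # values supplied to the application
        expected_results: dict[str, str]  # screen identifiers and expected values
        data_sources: list[str] = field(default_factory=list)  # external test data, if any

    payroll_case = TestCase(
        name="TC_PAYROLL_001_calculate_salary",
        prerequisites=["Payroll database restored to baseline"],
        execution_order=1,
        procedures=["Open payroll screen", "Enter employee id", "Press Calculate"],
        input_values={"employee_id": "E1001", "hours_worked": "160"},
        expected_results={"salary_field": "4800.00"},
        data_sources=["payroll_test_data.csv"],
    )
    print(payroll_case.name)
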
Inputs to the Test Design and Construction Process
Test Case Documentation Standards
Test Case Naming Standards
Approved Test Plan
Business Process Documentation
Business Process Flow
Test Data Sources

Outputs from the Test Design and Construction Process
Revised Test Plan
Test Procedures for each Test Case
Test Case(s) for each application function described in the test plan
Procedures for test set-up, test execution, and restoration

Executing the Test
The test is now ready to be run. This step applies the test cases identified by the test plan, documents the results, and validates those results against expected performance. Specific performance measurements of the test execution phase include:

Application of Test Cases - The test cases previously created are applied to the target software application as described in the testing environment.
Documentation - Activities within the test execution are logged and analyzed as follows: actual results achieved during test execution are compared to expected application behavior from the test cases; test case completion status (pass/fail); actual results of the behavior of the technical test environment; and deviations from the test plan or test process.

Inputs to the Test Execution Process
Approved Test Plan
Documented Test Cases
Stabilized, repeatable test execution environment
Standardized Test Logging Procedures

Outputs from the Test Execution Process
Test Execution Log(s)
Restored test environment

The test execution phase of your software test process controls how the test gets applied to the application. This step of the process can range from very chaotic to very simple and schedule-driven. The problems experienced in test execution are usually attributed to not properly performing steps earlier in the process. Additionally, several test execution cycles may be necessary to complete all the types of testing required for your application. For example, one execution cycle may be required for the functional testing of an application, and a separate cycle may be required for the stress/volume testing of the same application. A complete and thorough test plan will identify this need, and many of the test cases can be used for both cycles. The secret to a controlled test execution is comprehensive planning. Without an adequate test plan in place to control your entire test process, you may inadvertently cause problems for subsequent testing.

Measuring the Results
This step evaluates the results of the test against the acceptance criteria set down in the test plan. Specific elements to be measured and analyzed include:

Test Execution Log Review - The log review compiles a listing of the activities of all test cases, noting those that passed, failed, or were not executed.
Determine Application Status - This step identifies the overall status of the application after testing, for example: ready for release, needs more testing, etc.
Test Execution Statistics - This summary identifies the total number of tests that were executed, the type of test, and the completion status.
Application Defects - This final and very important report identifies potential defects in the software, including application processes that need to be analyzed further.

4 Automation Methods

Capture/Playback Approach
Capture/playback tools capture, in a test script, the sequence of manual operations entered by the test engineer. These sequences are then played back during test execution. The benefit of this approach is that the captured session can be re-run at some later point in time to ensure that the system still performs the required behavior. The shortcoming of capture/playback is that, in many cases, if the system functionality changes, the session must be completely re-recorded to capture the new sequence of user interactions. Tools like WinRunner provide a scripting language, and it is possible for engineers to edit and maintain such scripts. This sometimes reduces the effort compared with a completely manual approach, but the overall savings is usually minimal.
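
To make the idea concrete, the sketch below shows what a recorded-and-edited playback script might look like, using the open-source Selenium WebDriver library rather than WinRunner's proprietary scripting language. The URL, element IDs, and credentials are hypothetical.

    # Sketch of a recorded-and-edited playback script, shown here with the
    # open-source Selenium WebDriver library instead of WinRunner's own language.
    # The URL, element IDs, and credentials are hypothetical.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()                 # requires a local Chrome installation
    try:
        # Played-back sequence of user operations captured earlier by the tester.
        driver.get("http://app.example.com/login")
        driver.find_element(By.ID, "username").send_keys("tester01")
        driver.find_element(By.ID, "password").send_keys("secret")
        driver.find_element(By.ID, "login-button").click()

        # Verification point: compare actual behaviour against the expected result.
        assert "Dashboard" in driver.title, "login did not reach the dashboard"
        print("PASS: login sequence played back successfully")
    finally:
        driver.quit()                           # release the browser for the next run
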
Data Driven Approach
A data-driven test plays back the same user actions but with varying input values, allowing one script to test multiple sets of data. This is applicable when large volumes of data, or several different data sets, need to be fed to the application and checked for correctness. The benefit of this approach is that it takes less time and is more accurate than testing the same cases manually, and positive and negative data can be tested in the same run.

Test Script Execution
In this phase the scripts that have already been created are executed. Scripts need to be reviewed, validated against expected results, and accepted as functioning as expected before they are used live.

Steps to be followed before executing the scripts:
1. Install the test tool on the machine.
2. Install the test environment and the application to be tested on the machine.
3. Take care of the prerequisites for running the scripts, such as tool settings, playback options, and any necessary data table or data pool updates.
4. Select the script that needs to be executed and run it.
5. Wait until execution is complete.
6. Analyze the results through the Test Manager or in the logs.

Test script execution process (flow): test tool and test application ready; tool settings and playback options configured; script execution; result analysis; defect management.
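
A minimal harness tying this flow together might look like the following Python sketch; the script names and the run_script helper are hypothetical stand-ins for whatever your test tool actually invokes, and result analysis and defect management are reduced to simple print statements.

    # Minimal sketch of the execution flow above: run each prepared script,
    # analyse the results, and hand failures to defect management.
    # The script names and run_script() are hypothetical placeholders.

    def run_script(script_name: str) -> bool:
        """Stand-in for invoking one automated test script via the test tool."""
        print(f"executing {script_name}...")
        return not script_name.endswith("_negative")   # fake outcome for illustration

    def execute_test_cycle(scripts: list[str]) -> None:
        passed, failed = [], []
        for script in scripts:                         # script execution
            (passed if run_script(script) else failed).append(script)
        # Result analysis
        print(f"{len(passed)} passed, {len(failed)} failed of {len(scripts)} scripts")
        # Defect management: failed scripts would be logged as potential defects
        for script in failed:
            print(f"raise defect report for {script}")

    execute_test_cycle(["login_smoke", "create_order", "month_end_close_negative"])
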
