Software Testing

This is known as static testing. In this type of testing you have only partial information about the product, but even that partial knowledge helps you identify such bugs. Like any other process, software testing can be divided into different phases; this sequence of phases is often known as the software testing life cycle.

Every process starts with planning. In this phase, you collect all the required details about the product and draw up a list of the tasks that have to be tested first.

Then you have to prioritize your checklist of tasks. If a whole team is involved, the division of tasks among team members can also be done in this phase.

Once you know what you have to do, you have to build the foundation for testing. This includes preparing the test environment, collecting test cases, and researching product features and test cases. Gathering tools and techniques for testing, and getting familiar with them, should also be done here.

Then comes execution. This is when you actually run tests on the product: you execute test cases and collect the results, then compare the results with the expected results to see whether the product is working as expected.

You make a note of all the successful and failed tests and test cases. This is the last phase of software testing, in which you document all your findings and submit them to the concerned personnel. Test-case failures are of most interest here, so a proper and clear explanation of the tests that were run and their outputs should be included.

For complex tests, the steps to reproduce the error, screenshots, and whatever else is helpful should be included.

As in every other domain in the current age of machines, everything that involves manual effort is slowly being automated, and the same thing is happening in testing. There are two different ways of performing software testing: manual and automated. Manual labor in any field requires a lot of time and effort.

Manual testing is a process in which testers examine the different features of an application without using any automated tools or test scripts: the tester executes the different test cases by hand and finally generates a test report. Quality assurance analysts test the software under development for bugs by writing scenarios in an Excel file or a QA tool and testing each scenario manually.

In automated testing, by contrast, testers use scripts, thus automating the process: the pre-scripted tests run automatically and compare actual outcomes with expected ones. Even though most of the process runs automatically, some manual labor is still a must, because generating the initial test scripts requires human effort.
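As a minimal sketch of such a pre-scripted test, the following pytest-style check compares an actual outcome with an expected one; the function under test, apply_discount, is a hypothetical stand-in rather than anything from a real product:

```python
# test_pricing.py -- run with `pytest`.
# `apply_discount` is a hypothetical function used only for illustration.

def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    return round(price * (1 - percent / 100), 2)

def test_apply_discount_matches_expected_outcome():
    actual = apply_discount(200.0, 15)   # exercise the code under test
    expected = 170.0                     # the outcome defined in the test plan
    assert actual == expected            # the script compares them automatically
```

Once written, a script like this can be re-run on every build at no extra cost, which is exactly where automation pays off.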

Also, in any automated process, human supervision is mandatory; automation simply makes the testing process easier. You get the best results by combining manual and automated tests, because doing so helps find more bugs in less time. By checking every single unit, automated testing also increases test coverage.

When the actual functionality deviates from the desired functionality, a defect is usually logged, and those defects are prioritized based on severity. Some defects get fixed; others are low-impact enough that they are simply noted and left in the system.

Just like the development world has many concepts and methodologies for creating software, there are many ways to think about how to test, and the field is changing all the time. Early in my career, it could be perceived as a slight or insult to call someone who worked in testing a tester; they preferred to be called QA or quality assurance professionals.

Just a year or two ago, I attended a testing conference and made the mistake of calling someone a QA person. They corrected me and said that tester was the preferred term.

When you do black-box testing, you are only concerned with inputs and outputs. Most testing is done in this fashion because it is largely unbiased. Real white-box testing is when you understand some of the internals of the system and perhaps have access to the actual source code, which you use to inform your testing and what you target.

With white-box testing, you have at least some idea of what is going on inside the software. Oftentimes, unit testing is called white-box testing, but I disagree; unit testing is not testing at all. The usual trade-offs of white-box testing look like this:

Advantages: hidden bugs are discovered more efficiently; the code can be optimized; problems and bugs are spotted quickly.

Disadvantages: the tester needs to have coding knowledge; access to the code is required; because the focus is on the existing software, missing functionality may not be discovered.

The basic idea of acceptance testing is that you have some tests which test the actual requirements or expectations of the customer, and other tests that run against the system as a whole. This kind of testing could exercise the functionality of the system, or its usability, or both. Automated testing is any testing in which the execution of the test and the verification of the results are automated. So you might automate the testing of a web application by running scripts which open up a web page, input some data, push some buttons and then check for some results on a page.
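Here is what such a script might look like as a minimal Selenium sketch in Python; the URL, element IDs, and expected text are hypothetical placeholders rather than details from any real application:

```python
# A minimal browser-automation sketch using Selenium (pip install selenium).
# The URL and element IDs below are hypothetical placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()                                   # open a browser
try:
    driver.get("https://example.com/login")                   # open the web page
    driver.find_element(By.ID, "username").send_keys("demo")  # input some data
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "submit").click()              # push a button
    message = driver.find_element(By.ID, "welcome").text      # check the result
    assert "Welcome" in message, f"unexpected page content: {message!r}"
finally:
    driver.quit()
```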

You could also automate the testing of an API by writing scripts which call out to the API with various data and then check the results that are returned. More and more testing is moving towards automation, because manually running through test cases over and over again can be tedious, error-prone and costly, especially in an Agile environment where the same set of tests may need to be run every two weeks or so to verify nothing has broken.
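A sketch of such an API check, again in Python, might look like the following; the endpoint, payload, and response fields are all hypothetical:

```python
# A minimal API-testing sketch (pip install requests).
# The endpoint and expected fields are hypothetical placeholders.
import requests

def test_create_user_returns_expected_result():
    payload = {"name": "Ada", "email": "ada@example.com"}  # various input data
    response = requests.post("https://api.example.com/users",
                             json=payload, timeout=10)
    assert response.status_code == 201                     # check the results
    assert response.json()["name"] == "Ada"
```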

This brings us to regression testing, which is basically testing done to verify that the system still works the way it did before. This is extremely important with Agile development methodologies where software is developed incrementally and there is a constant potential that adding new features could break existing ones. In fact, you could really make the argument that all automated tests are regression tests since the whole purpose of automating a test is so that it can be run multiple times.

Functional testing is another broad term used in the testing world to refer to testing activities where what is being tested is the actual functionality of the system.

A number of tools support these kinds of testing. Sauce Labs is a cloud-based Selenium solution that supports automated cross-browser testing; it can perform testing on any combination of OS, platform, and browser. Ghostlab is a Mac-based testing app that lets you test a responsive design across a variety of devices and browsers.

It is a tool for synchronized browser testing: it synchronizes scrolls, clicks, reloads, and form input across all connected clients to test the full user experience. WebLOAD is a testing tool which offers powerful scripting capabilities that are helpful for testing complex scenarios. The tool supports hundreds of technologies, from Selenium to mobile and from enterprise applications to web protocols.

It can generate load both in the cloud and on-premise. Another tool in this category is a load testing tool for Windows and Linux which allows you to test web applications efficiently; it is helpful for determining the performance and behavior of a web application under heavy load. WAPT is a load and stress testing tool that works on all Windows versions and provides an easy and cost-effective way to test all types of websites.

This testing tool also provides support for RIA applications in data-driven mode. Silk Performer is a cost-effective load testing tool that can meet critical application performance expectations and service-level requirements.

It also supports cloud integration, which means it is easy to simulate massive loads without needing to invest in a hardware setup. Apache JMeter is one of the open-source testing tools for load testing. It is a Java desktop application designed to load-test functional behavior and measure the performance of websites.

The tool was developed for load testing web applications, but it has since been extended to other test functions.

BlazeMeter is a load testing tool which helps deliver high-performance software: it quickly runs performance tests against mobile apps, websites, or APIs to check performance at every stage of development.

Load Impact is a cloud-based load testing system widely used by enterprises all over the world to test their websites, mobile applications, web-based apps, and APIs with all types of load tests.

This tool is used not only for recording and reporting but also integrates directly with the code development environment. Mantis is an open-source defect tracking tool that provides a great balance between simplicity and power.

Users can easily get started with this tool and use it to manage their teammates and clients effectively. FogBugz is a tracking tool which can be used to track the status of defects and changes in ongoing software projects, such as application development and deployment.

It is specifically helpful for organizations that need to keep track of bugs across multiple projects. Bugzilla is one of the best-known defect tracking systems.

The tool allows individuals or groups of developers to keep track of outstanding bugs in their system. It is open-source software used in the market by small-scale as well as large-scale organizations.

BugNet is an open-source bug finding tool. It is a cross-platform application written using the ASP.NET framework, and the main objective of this defect tracking tool is to keep the codebase simple and easy to deploy. It is an open-source, web-based bug tracking software.

Sometimes the specification itself becomes an intractable problem: it is not possible to specify precisely every situation that can be encountered using limited words.

And people can seldom specify clearly what they want -- they usually can only tell whether a prototype is, or is not, what they want after it has been finished.

Specification problems contribute approximately 30 percent of all bugs in software. Research in black-box testing mainly focuses on how to maximize the effectiveness of testing with minimum cost, usually measured by the number of test cases.

It is not possible to exhaust the input space, but it is possible to exhaustively test a subset of it. Partitioning is one of the common techniques: if we have partitioned the input space and assume all the input values in a partition are equivalent, then we only need to test one representative value in each partition to sufficiently cover the whole input space.

Domain testing [Beizer95] partitions the input domain into regions and considers the input values in each domain an equivalence class. Domains can be exhaustively tested and covered by selecting representative values in each domain. Boundary values are of special interest: experience shows that test cases that explore boundary conditions have a higher payoff than test cases that do not.

Boundary value analysis [Myers79] requires one or more boundary values to be selected as representative test cases. The difficulty with domain testing is that incorrect domain definitions in the specification cannot be efficiently discovered. Good partitioning requires knowledge of the software structure, so a good testing plan will contain not only black-box testing but also white-box approaches, and combinations of the two. Contrary to black-box testing, in white-box testing the software is viewed as a white box, or glass box: the structure and flow of the software under test are visible to the tester.
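To make partitioning and boundary value analysis concrete, here is a small sketch in Python; the grade function and its score partitions are invented purely for illustration:

```python
# Equivalence partitioning and boundary value analysis, sketched with pytest.
# `grade` and its score partitions are hypothetical, for illustration only.
import pytest

def grade(score: int) -> str:
    """Map a 0-100 score to pass/fail (the system under test)."""
    if not 0 <= score <= 100:
        raise ValueError("score out of range")
    return "pass" if score >= 60 else "fail"

# One representative value per equivalence partition.
@pytest.mark.parametrize("score,expected", [(30, "fail"), (80, "pass")])
def test_partition_representatives(score, expected):
    assert grade(score) == expected

# Boundary values: the edges of each partition have the highest payoff.
@pytest.mark.parametrize("score,expected", [(0, "fail"), (59, "fail"),
                                            (60, "pass"), (100, "pass")])
def test_boundary_values(score, expected):
    assert grade(score) == expected

# Values just outside the valid domain.
@pytest.mark.parametrize("score", [-1, 101])
def test_out_of_range_rejected(score):
    with pytest.raises(ValueError):
        grade(score)
```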

Testing plans are made according to the details of the software implementation, such as the programming language, logic, and style, and test cases are derived from the program structure. White-box testing is also called glass-box testing, logic-driven testing [Myers79], or design-based testing [Hetzel88].

There are many techniques available in white-box testing, because the problem of intractability is eased by specific knowledge of, and attention to, the structure of the software under test. The intention of exhausting some aspect of the software is still strong in white-box testing, and some degree of exhaustion can be achieved, such as executing each line of code at least once (statement coverage), traversing every branch statement (branch coverage), or covering all the possible combinations of true and false condition predicates (multiple condition coverage).
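The differences between these criteria show up even on a tiny function; the example below is a sketch with an invented function, not code from the original text:

```python
# Statement coverage vs. branch coverage on a hypothetical function.

def classify(x: int, y: int) -> str:
    result = "none"
    if x > 0 and y > 0:          # a compound condition with two predicates
        result = "both"
    return result

# classify(1, 1) executes every line -> 100% statement coverage, but the
# `if` never evaluates to False, so branch coverage is incomplete.
assert classify(1, 1) == "both"

# Adding a case where the condition is False covers both branches.
assert classify(-1, 1) == "none"

# Multiple condition coverage would further require every True/False
# combination of `x > 0` and `y > 0`: (T,T), (F,T), (T,F), (F,F).
assert classify(1, -1) == "none"
assert classify(-1, -1) == "none"
```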

Control-flow testing, loop testing, and data-flow testing all map the corresponding flow structure of the software onto a directed graph. Test cases are carefully selected based on the criterion that all the nodes or paths are covered or traversed at least once. By doing so we may discover unnecessary "dead" code -- code that is of no use or never gets executed at all -- which cannot be discovered by functional testing.

In mutation testing, the original program code is perturbed to create many mutated programs, each containing one fault. Each faulty version of the program is called a mutant.

Test data are selected based on their effectiveness in failing the mutants: the more mutants a test case can kill, the better the test case is considered to be. The problem with mutation testing is that it is too computationally expensive to use.
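A hand-rolled sketch of the idea follows; real mutation tools generate and run mutants automatically, and the function and mutant here are invented for illustration:

```python
# Mutation testing sketched by hand: an invented original and one mutant.

def is_adult(age: int) -> bool:
    return age >= 18                 # original program

def is_adult_mutant(age: int) -> bool:
    return age > 18                  # mutant: `>=` perturbed to `>`

def suite_passes(impl) -> bool:
    """A tiny test suite; True if every check passes for `impl`."""
    return impl(30) is True and impl(10) is False and impl(18) is True

assert suite_passes(is_adult)             # the suite passes on the original
assert not suite_passes(is_adult_mutant)  # the age=18 case kills the mutant

# A suite without the age=18 boundary case would let this mutant survive,
# revealing a gap in the test data.
```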

The boundary between the black-box approach and the white-box approach is not clear-cut: many of the testing strategies mentioned above may not be safely classified into black-box testing or white-box testing. This is also true for transaction-flow testing, syntax testing, finite-state testing, and many other testing strategies not discussed in this text.

One reason is that all the above techniques need some knowledge of the specification of the software under test. Another reason is that the idea of a specification is itself broad -- it may contain any requirement, including the structure, programming language, and programming style, as part of the specification content. We may be reluctant to consider random testing a testing technique, since the test case selection is simple and straightforward: the cases are randomly chosen. Yet the study in [Duran84] indicates that random testing is more cost-effective for many programs.

Some very subtle errors can be discovered at low cost, and random testing is not inferior in coverage to other carefully designed testing techniques. One can also obtain a reliability estimate from random testing results based on operational profiles.
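A minimal random-testing sketch in Python, assuming a trusted oracle is available to check each result (the implementation under test is hypothetical, and the built-in sorted plays the oracle):

```python
# Random testing: generate random inputs, check each against an oracle.
import random

def my_sort(xs):                 # hypothetical implementation under test
    return sorted(xs)

random.seed(42)                  # keep the random run reproducible
for _ in range(1000):
    n = random.randint(0, 20)
    xs = [random.randint(-100, 100) for _ in range(n)]
    assert my_sort(xs) == sorted(xs), f"failed on input {xs}"
print("1000 random test cases passed")
```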

Effectively combining random testing with other testing techniques may yield more powerful and cost-effective testing strategies. Not all software systems have explicit specifications on performance, but every system has implicit performance requirements: the software should not take infinite time or infinite resources to execute. Performance has always been a great concern and a driving force of computer evolution.

Performance evaluation of a software system usually includes resource usage, throughput, stimulus-response time, and queue lengths detailing the average or maximum number of tasks waiting to be serviced by selected resources. Typical resources that need to be considered include network bandwidth, CPU cycles, disk space, disk access operations, and memory usage [Smith90]. The goals of performance testing include performance bottleneck identification, performance comparison, and evaluation.
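As a tiny illustration of measuring stimulus-response time, the following Python sketch times a hypothetical operation with the standard timeit module:

```python
# Measuring average response time with the standard library.
# `handle_request` is a hypothetical stand-in for real work.
import timeit

def handle_request():
    return sum(i * i for i in range(10_000))   # stand-in workload

runs = 100
total = timeit.timeit(handle_request, number=runs)
print(f"average response time: {total / runs * 1000:.2f} ms over {runs} runs")
```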

The typical method of doing performance testing is using a benchmark -- a program, workload, or trace designed to be representative of typical system usage.

Software reliability refers to the probability of failure-free operation of a system. It is related to many aspects of software, including the testing process. Directly estimating software reliability by quantifying its related factors can be difficult, but testing is an effective sampling method for measuring it.

Guided by the operational profile, software testing (usually black-box testing) can be used to obtain failure data, and an estimation model can then be applied to analyze the data, estimate the present reliability, and predict future reliability. Based on the estimation, the developers can decide whether to release the software, and the users can decide whether to adopt and use it.
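As a crude sketch of the idea, a point estimate of reliability from test outcomes might look like this; the counts are invented, and real estimation models are far more sophisticated:

```python
# A crude reliability point estimate from operational-profile testing.
# The counts below are invented for illustration.
tests_run = 10_000          # test cases drawn from the operational profile
failures_observed = 3       # runs that deviated from expected behavior

reliability = 1 - failures_observed / tests_run
print(f"estimated probability of failure-free operation: {reliability:.4f}")
```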

Risk of using the software can also be assessed based on reliability information. There is agreement on the intuitive meaning of dependable software: it does not fail in unexpected or catastrophic ways. The robustness of a software component is the degree to which it can function correctly in the presence of exceptional inputs or stressful environmental conditions. Robustness testing only watches for robustness problems such as machine crashes, process hangs, or abnormal termination.
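A minimal robustness-testing sketch: throw exceptional inputs at a component and watch only for abnormal termination, not for correct output. The parse_config function here is hypothetical:

```python
# Robustness testing sketch: feed exceptional inputs to a component and
# watch only for crashes, hangs, or abnormal termination -- not for
# correct results. `parse_config` is a hypothetical component under test.

def parse_config(text):
    return dict(line.split("=", 1) for line in text.splitlines() if "=" in line)

exceptional_inputs = ["", "\x00\x01\x02", "=" * 10_000,
                      "a" * 1_000_000, "key=val\nbroken", None]

for data in exceptional_inputs:
    try:
        parse_config(data)          # output correctness is not checked here
    except (TypeError, ValueError):
        pass                        # a clean, documented exception is fine
    except Exception as exc:        # anything else is a robustness problem
        print(f"robustness problem on input {data!r}: {exc!r}")
```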

The oracle is relatively simple, and therefore robustness testing can be made more portable and scalable than correctness testing. This research has drawn more and more interest recently, and most of it uses commercial operating systems as its target, such as the work in [Koopman97] [Kropp98] [Ghosh98] [Devale99] [Koopman99]. Stress testing, or load testing, is often used to test the whole system rather than the software alone.

In such tests the software or system is exercised with loads at or beyond the specified limits.
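A minimal load-generation sketch in Python, using a thread pool to push a hypothetical endpoint to and beyond an assumed concurrency limit; the URL and numbers are placeholders:

```python
# A minimal load-testing sketch: hit an endpoint with concurrent requests
# at or beyond its specified limit (pip install requests).
# The URL and limits below are hypothetical placeholders.
from concurrent.futures import ThreadPoolExecutor
import requests

URL = "https://example.com/health"   # placeholder endpoint
CONCURRENT_USERS = 200               # beyond an assumed limit of, say, 100

def hit(_):
    try:
        return requests.get(URL, timeout=5).status_code
    except requests.RequestException:
        return None                  # count timeouts and errors as failures

with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    results = list(pool.map(hit, range(CONCURRENT_USERS)))

ok = sum(1 for code in results if code == 200)
print(f"{ok}/{CONCURRENT_USERS} requests succeeded under load")
```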
