Scratchpad

Performance Testing

아르비스 2010. 9. 12. 14:51

- Performance testing. The mere fact that a section of code compiles and seems free of significant errors doesn't mean that your work as a developer is done. Performance bottlenecks need to be identified, as these problems can be magnified when the code is used with the rest of the application, used in conjunction with other software, or load-tested with a large group of simultaneous users. While comprehensive performance testing needs to be done at a system level, individual developers can execute limited performance testing along the way. In doing so, design defects leading to performance bottlenecks can often be discovered much earlier.
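For instance, a developer can do this kind of limited, along-the-way performance testing by timing two candidate implementations with Python's standard timeit module. The string-building functions below are hypothetical stand-ins for the code under test, not anything from the text.

```python
import timeit

def build_string_naive(parts):
    """Concatenate with +=, which can degrade to quadratic time."""
    s = ""
    for p in parts:
        s += p
    return s

def build_string_join(parts):
    """Concatenate with str.join, which is linear in total length."""
    return "".join(parts)

parts = ["x"] * 10_000

# Time each candidate over the same workload before the code is
# integrated, so a bottleneck surfaces now rather than in system test.
naive = timeit.timeit(lambda: build_string_naive(parts), number=50)
joined = timeit.timeit(lambda: build_string_join(parts), number=50)
print(f"naive: {naive:.3f}s  join: {joined:.3f}s")
```

The absolute numbers are machine-dependent; what matters is comparing alternatives on identical inputs while the design is still easy to change.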

- Ways to find bugs. Remember, these are either bugs you created or bugs arising from code you omitted. Here are some helpful ways to "think maliciously" (a malicious attitude is what you need to cultivate when looking for defects):

  - Attempt to force all error conditions you can think of, and attempt to see all error messages that can occur.

  - Exercise code paths that interact with other components or programs. If those other programs or components don't yet exist, write some scaffolding code yourself so that you can exercise the APIs, populate shared memory or shared queues, and so on.

  - For every input field in a GUI, try various unacceptable inputs: too many characters, too few characters, numbers that are too large or too small, and many other such items. The goal is to single out the errors one at a time and then, once the simple test cases pass, try combinations of unacceptable inputs.

 

Also try the following: negative numbers (especially if you are expecting only positive numbers); cutting and pasting data and text into an input field (especially if you have written code to limit what the user can type into the field); combinations of text and numbers; uppercase-only text and lowercase-only text; repeating the same steps over and over and over and over and ...; and, for arrays and buffers, adding n data items to your array (or buffer) and then attempting to remove n + 1 items. There are obviously many more; these are offered only to whet your appetite for thinking maliciously.
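As a sketch of this style of testing, here is a hypothetical Python input validator, parse_quantity, driven with one unacceptable input at a time. The field rules (an integer quantity from 1 to 999) and the function name are invented for illustration; the point is the malicious inputs, each isolating a single error.

```python
def parse_quantity(text):
    """Hypothetical validator: accept an integer quantity from 1 to 999."""
    if not text or len(text) > 3:
        raise ValueError("wrong length")
    if not text.isdigit():
        raise ValueError("not a number")
    value = int(text)
    if not 1 <= value <= 999:
        raise ValueError("out of range")
    return value

# Single out one kind of bad input at a time, as suggested above:
# empty, too long, negative, non-numeric, mixed, padded, out of range.
bad_inputs = ["", "12345", "-7", "abc", "12a", " 42", "0", "1000"]
for text in bad_inputs:
    try:
        parse_quantity(text)
        print(f"BUG: accepted {text!r}")
    except ValueError as err:
        print(f"rejected {text!r}: {err}")
```

Once every single-error case is rejected cleanly, the next step is combinations (for example, a negative number pasted in rather than typed).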

 

This list is the result of a combination of years of experience in development and testing, extensive reading on the subject of testing (especially works by James Whittaker and Boris Beizer), and countless discussions with other developers and testers. It is by no means a comprehensive list; modify it as necessary according to your own strengths and weaknesses.

 

Scaffolding Code

Scaffolding code is the "throwaway" code you write to mimic or simulate other parts of the code that have not yet been completed (sometimes also referred to as "stubbing" your code). If you need to create it for your own use, don't throw it away; make sure you pass it on to the test team. The scaffolding code you provide may allow them to get an early jump on testing your code, or at least give them a better idea of what to expect when the other components are ready. It can also provide a solid basis for their test automation efforts.
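A scaffold for a not-yet-delivered component might look like the following minimal sketch. FakeInventoryService, its reserve API, and place_order are all hypothetical names invented for illustration; the real component's interface would come from whatever contract the teams have agreed on.

```python
class FakeInventoryService:
    """Scaffolding: stands in for the real inventory component that
    another team has not delivered yet. Throwaway in spirit, but
    worth passing on to the test team."""

    def __init__(self, stock):
        self.stock = dict(stock)  # part number -> quantity on hand

    def reserve(self, part, qty):
        """Mimic the agreed-upon API: reserve qty units of a part."""
        if self.stock.get(part, 0) < qty:
            raise LookupError(f"insufficient stock for {part}")
        self.stock[part] -= qty
        return True

def place_order(inventory, part, qty):
    """The code under test: it knows only the API, not who implements it."""
    try:
        inventory.reserve(part, qty)
        return "confirmed"
    except LookupError:
        return "backordered"

# Exercise both code paths through the scaffold.
fake = FakeInventoryService({"bolt-m4": 10})
print(place_order(fake, "bolt-m4", 3))   # confirmed
print(place_order(fake, "bolt-m4", 99))  # backordered
```

Because the scaffold is driven through the same API the real component will expose, the calling code and its tests survive the swap to the real implementation.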

 

If your product has protective security features, test those features carefully. Scaffolding code becomes especially important here: you must be able to create the very situations against which the system is attempting to protect itself.

 

Another simple example of scaffolding code is code to manipulate queues. If your product makes use of queues, imagine how much easier testing would be with a tool that allowed you to add and delete items in the queue on the fly, corrupt the data within the queue (to ensure proper error handling), and so on.
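Such a queue-manipulation tool might be sketched as follows. The QueueProber class, the dictionary job format, and the garbage payload are assumptions made for illustration; a real version would target whatever queue your product actually uses.

```python
import queue

class QueueProber:
    """Test scaffolding: add, drop, and corrupt items in a work queue
    on the fly, to exercise the consumer's error handling."""

    def __init__(self, q):
        self.q = q

    def add(self, item):
        self.q.put(item)

    def drop(self):
        """Silently remove one item, simulating a lost message."""
        return self.q.get_nowait()

    def corrupt(self):
        """Replace one item with garbage bytes; a robust consumer
        should reject it cleanly rather than crash."""
        self.q.get_nowait()
        self.q.put(b"\xff\xfe garbage")

q = queue.Queue()
prober = QueueProber(q)
for n in range(3):
    prober.add({"job": n})
prober.corrupt()
print(q.qsize())  # still 3 items, one of them now garbage
```

Running the consumer against a queue seeded this way forces the error-handling paths that rarely occur naturally in a clean test lab.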

 

Test-Driven Development

Arising from within the agile/XP development community is an important technique known as test-driven development (TDD). While still falling in the realm of defensive testing techniques, but approaching the problem from almost the opposite direction, the concept of TDD is that the developer first writes a test case and then writes code for that test case to run against. If the test case fails, the code is changed until the test case passes. Not until the test case passes is new code written (other than the code necessary to make the next test case pass). The idea behind this methodology is that when a developer has finished writing his or her code, the code has already been tested, and a full suite of automated test cases exists that can be run by test teams, change teams, and even customers, should the team so choose.

Kent Beck, the "Father of Extreme Programming," has written about TDD in Test Driven Development: By Example, which provides an excellent introduction to TDD. A newer work by Dave Astels covers TDD more thoroughly and has received much acclaim. Consider using TDD if you and your team have already implemented many of the techniques and practices discussed above and you are ready to take your improvement and development efforts to the next level.

Source-Level Debuggers

The use of source-level debuggers is one key way in which thorough individual testing can be performed. For developers, being able to use debuggers is vital; the benefits of source-level debuggers far outweigh any learning curve, and we certainly encourage readers to make the effort to learn a debugger, and to learn it thoroughly. Here are just a few ways you can use source-level debuggers to test your code:

- Set breakpoints. This allows you to stop execution of the code at a specified location and then "single-step" through the code so that you can watch what each line of code does.

- Manipulate data on the fly. You can set a breakpoint just as your code is entered and then reset the value of a parameter that is passed in, to see whether your code handles the (now) invalid parameter the way it should. Using the debugger in this way saves the time and effort of trying to get the actual error condition to occur.

- Set "watches" on variables. Putting a "watch" on a variable sets a conditional breakpoint that is hit only when the value of the specified variable changes.

- View the call stack. This allows you to see which routines called your code, which is a tremendous aid in debugging defects.

- "Trap" errors when they occur. If you don't know exactly where a defect occurs, many source-level debuggers will automatically drop you into the debugger, at exactly the right location, when a system-level application error occurs (for example, an attempt to dereference a null pointer). Simply run your application under the debugger, without trying to step through the code and without setting breakpoints.

Conclusion

As strongly implied at the outset of this practice, preventing defects from occurring is a significant step toward improving code quality and developer efficiency. Further, finding any defects that do get introduced as early as possible also significantly improves product quality and efficiency. One of the best parts of this practice is that the techniques described work independently of any development methodology (e.g., iterative, waterfall, agile/XP) and generally cost virtually nothing to adopt.

However, sometimes the use of such techniques seems counterintuitive: given the typical schedule and staff pressures cited in the introductory discussion of this practice, it is often tempting to sacrifice solid, strategic goals for tactical necessities. Projects and practitioners are often tempted to hit unrealistic or unreasonable schedules at the expense of using sound development techniques, but the implications are far-reaching and often affect many subsequent releases, not just the current one.

The ultimate benefits of considering these techniques thoroughly and implementing them intelligently and selectively in your organization will be fewer defects in your code, fewer regressions, higher quality, and lower rework costs. But none of these benefits will occur unless the project leadership actively promotes an environment in which these techniques can be applied consistently. This practice has described a range of techniques that you should consider applying, and we believe that most development organizations will adopt a mix of techniques that work for them.

 

Levels of Adoption

This practice can be adopted at three different levels:

- Basic. Coding guidelines and standards exist and are followed by developers.

The creation and use of coding guidelines and standards ensure that the development team begins to think about "defensive coding" ideas while writing code. (Note that many of the techniques listed in the Defensive Coding Techniques section above provide a good foundation for creating coding guidelines and standards.) Informal peer reviews of written code take place. Designs are thoroughly assessed in order to prevent design defects from being introduced. Developers try to understand the environment that the code will run in so that any operating system and middleware dependencies can be addressed early, thus preventing defects arising from API changes, middleware updates, and so on. The goal is defect prevention.

- Intermediate. Developers actively test their own code. Discovering and fixing defects as early as possible will work only if developers test code thoroughly before a separate test team runs the code in a test lab. Code reviews take place systematically and consistently. A thorough understanding of customer environments and product usage contributes to defect prevention. Static code analysis is performed, and the introduction of pair programming is showing positive results. Following the defensive testing techniques described above (plus any others that developers may wish to include) will help to shift defect discovery to earlier in the project lifecycle and likely shorten it.

- Advanced. More formal code inspections are performed, and static, structural, and runtime code analysis tools are used extensively. Performance testing is a standard part of the development process. While performing code inspections and setting up an environment where tools can be run against the code certainly takes more effort, doing so will help teams discover more defects earlier than would otherwise be the case. Inspection and static, structural, and runtime analysis should not be viewed as a replacement for coding standards or individual testing, but as complementary to those efforts. Test-driven development is widespread.

 

Related Practices

- Practice 1: Manage Risk discusses the key idea in risk management, which is not to wait passively until a risk materializes and becomes a problem, but rather to seek out and deal with risks.

- Practice 6: Leverage Test Automation Appropriately describes how automating appropriate portions of the test effort can also yield improved product quality and shorter product development schedules.

- Practice 7: Everyone Owns the Product! addresses how to orient the responsibilities and mindset of team members to ensure that everybody takes ownership of the final quality of the product, broadens the scope of team responsibilities, and learns to collaborate more effectively within the team.

 

Additional Information

Information in the Unified Process

 

OpenUP/Basic covers basic informal reviews (optionally replaced by pair programming) and developer testing techniques. OpenUP/Basic assumes that programming guidelines exist and requires developers to follow those guidelines. OpenUP/Basic also recommends that an architectural skeleton of the system be implemented early in development to address technical risks and identify defects.

RUP adds guidance on static, structural, and runtime code analysis, as well as defensive coding and advanced testing techniques. RUP also