It would be very confusing if the second "JavaWorld" literal above sometimes represented a different object instance than the first one (resulting in a null return from map.get()) because the first one has been garbage collected. (Well, this can't really happen above because the map implementation retains references to all keys, but pretend that it doesn't.) Thus, unlike manually interned Strings, the lifetime of all String instances derived from compile-time string literals must be scoped to the set of their parent classes. It doesn't, however, need to be scoped to the JVM's lifetime.
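To make the guarantee concrete, here is a minimal sketch (the class and map contents are mine, not the original article's) showing why a lookup keyed by a compile-time literal cannot miss while the defining class is loaded:

    import java.util.HashMap;
    import java.util.Map;

    public class LiteralIdentity {
        static String getLiteral() {
            return "JavaWorld";   // same interned instance as the literal below
        }

        public static void main(String[] args) {
            Map<String, Integer> map = new HashMap<String, Integer>();
            map.put("JavaWorld", 1);

            // Compile-time literals are interned, so both "JavaWorld" literals
            // refer to one String instance and the lookup cannot return null.
            System.out.println(map.get("JavaWorld"));        // prints 1
            System.out.println("JavaWorld" == getLiteral()); // prints true
        }
    }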
Using FitNesse, the development process could look like this: The requirements engineer writes the requirements in FitNesse (instead of Word). He tries to get the customer involved as much as possible, but that usually isn't achievable on a daily basis. The tester peeks at the document repeatedly and asks difficult questions from day one. Because the tester thinks differently, he does not ask, "What will the software do?" but "What might go wrong? How can I break it?" The developer thinks more like the requirements engineer; he wants to know, "What does the software have to do?"

For the last two months, I've described testing, explained its terminology, and shown how it works. This month, I move from the abstract to the concrete by presenting a simple framework for unit and component testing. If you're joining us for the first time, you might want to read the first two articles in the series before launching into our sample code.

Second, projects based on Java technologies now find real support and funding easily. Gone (forever, I hope) are the days of all-you-get-to-build-it-is-5,000-bucks.

We could approach unit testing this core logic in several ways. We could set up multiple test databases (one for each test case) and, between unit tests, change the database the program queries. We would have to ensure that those databases are immutable, since we want subsequent runs to produce identical results. If the system's nature is such that merely running a test causes the database to produce different results (for example, by setting the status of "pending" trades to "processed"), this approach is very problematic indeed. What's more, we would then have to somehow keep all those databases, with the precise state necessary for successful unit tests, stored with that source code version. For most revision-control tools, this requirement would present a major problem.

The figure above depicts a class loosely modeled as one such idealized box. Theoretically, we should be able to apply a set of one or more inputs to a unit, observe the outputs in each case, and thereby determine whether the unit is functioning correctly. The unit-test approach thus presents a nice, simple, logically sound model.

Neither the Unix nor the X Window efforts went far enough to provide universal portability. The Java programming environment, however, takes a significant step toward achieving this elusive goal. From the beginning, Java technology was intended to provide a programming environment that supports the write-once, run-anywhere (WORA) concept. Java technology has largely delivered on this promise by ensuring that Java programs run across all Java-enabled platforms, greatly reducing system dependencies.

This article will introduce the JDemo framework and its techniques for writing code for interactive testing. It will also show the benefits that can be gained from writing demo code.

Java Testing and Design: From Unit Testing to Automated Web Tests teaches you a fast and efficient method to build production-worthy, scalable, and well-performing Web-enabled applications. The techniques, methodology, and tools presented in this book enable developers, QA technicians, and IT managers to work together to achieve unprecedented productivity in development and test automation.

Though many Java developers are familiar with JUnit, a brief discussion follows to allow those unfamiliar with JUnit to get caught up. To facilitate the discussion of these frameworks, and to provide an apples-to-apples comparison of features, I have written a simple test case as a basis for comparison.

Bill Shannon is a Sun Distinguished Engineer and Spec Lead for the Java 2 Platform, Enterprise Edition. Karen Tegan is Director of J2EE Compatibility and Platform Services for Sun Microsystems. In this interview, Floyd Marinescu of TheServerSide.com/The Middleware Company interviews Bill and Karen about J2EE, the CTS, and recent controversial events surrounding J2EE.

This document explains how to perform scalability testing, performance testing, and optimization in a typical Java 2 Enterprise Edition (J2EE) environment.

Testing concurrent programs is also harder than testing sequential ones. This is trivially true: tests for concurrent programs are themselves concurrent programs. But it is also true for another reason: the failure modes of concurrent programs are less predictable and repeatable than those of sequential programs. Failures in sequential programs are deterministic; if a sequential program fails with a given set of inputs and initial state, it will fail every time. Failures in concurrent programs, on the other hand, tend to be rare probabilistic events.

Managing and ensuring the non-functional requirements (NFRs, or SLAs) for an enterprise application is called performance engineering. Performance engineering is a vast discipline in itself, including performance modeling, performance prototyping, performance testing, several types of analysis, and performance tuning. This article will not explain performance engineering, queuing theory, or the science behind the various laws; it covers only the basics of performance engineering and the key activities in performance testing.

Have you ever had to stress test an application only to discover that you couldn't make sense of the results? Maybe the problem isn't in the application. Maybe the problem is in the way you configured your stress test harness. If you have been in this situation, or you are about to embark on a stress testing exercise, here are a few things you need to consider.

There are many different ways that testing can play a role in agile development. Indeed, there are even different types of tests that can play different roles in agile development. To address these roles, it is important that you have a good grounding in some of the basic philosophies behind agile development.

Java Testing and Design comes in three parts. The first part describes the things we developers, QA techs, and IT folks deal with every day: tough schedules, user needs, messed-up management, and test methodologies past and present, all shown being applied to building Web applications. The second part takes on the nuts-and-bolts aspects of building networked applications, including different connectivity methods (from HTTP through XML and SOAP, and even email services), from functional unit tests to testing sequences of messages and session data. It puts a whole new light on testing from the user's perspective using a new method called user archetypes: test scripts that mimic a user's behavior. It's a cool technique that makes testing a lot simpler.

The HtmlFixture test table loads a web page, submits its form, and then asserts that the submission result displays the "Correct Answer" page. Here is the example wiki source code:
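What follows is an illustrative sketch of such a table rather than the article's actual source; the fixture's package name, the commands (Element Focus, Set Value, Submit, Has Text), and the page details vary between HtmlFixture versions, so treat them all as assumptions:

    !|com.jbergin.HtmlFixture|
    |http://localhost/quiz.html|
    |Element Focus|answer|input|
    |Set Value|42|
    |Submit|
    |Has Text|Correct Answer|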
You wouldn't sign up to compete in a triathlon as your first goal toward exercising more in the new year; you'll experience the same pain and frustration if you attempt to test legacy code as your first testing exercise. Nothing kills a resolution quicker than going overboard. That being said, unless you're on a new project, legacy code -- code already written, but without tests -- is a fact of life. And without tests, legacy code is a liability. You can't change the code for fear of breaking something, and you usually can't write tests without having to change the code. Rock meets hard place.

In object-oriented software development, the system is developed as a collaborative collection of objects. Messages are the heart of the communication between these objects. Most Java-based software development projects employ unit testing, which mainly tests the system's behavior. Yet we rarely test the object-oriented nature of the written program. Object-oriented tests consist of sequences of tests that assess the class hierarchy, inheritance graph, abstractness of classes, encapsulation, and many more object-oriented features.

Unit testing is an essential practice for anyone seeking to develop better-designed, higher-quality software. Testing the individual objects that make up your system provides you with a much higher degree of confidence in both the design and general quality of the application. However, testing objects in isolation often presents some unique challenges, as few useful objects operate independently of others. The challenges are even greater in the context of a J2EE application, where the container manages many of the collaborating objects. This article will look at the use of mock objects and the Mockrunner testing framework as a means of overcoming many of these challenges.

You can download the MP3 of this podcast episode directly, or copy and paste the java.net Community Corner mini-talks feed into a podcast client like Juice or iTunes. You can also subscribe via the iTunes Store.

ME Framework is a testing framework for the Java ME platform developed as part of the cqME open-source project. A set of plug-ins for the open-source JT Harness, ME Framework supports application and platform quality and conformance testing needs. This mini-talk covers the framework's features, Java ME application and security models, communication protocol optimization, and debugging functionality.

Recently Sun Microsystems released a set of tools called the Java Compatibility Test Tools (Java CTT). These tools are designed to help a specification lead, or other Java Community Process Program (JCP) member serving on an Expert Group, create a Technology Compatibility Kit (TCK). The JCP is an inclusive, consensus-building approach used by the international Java community to develop and evolve Java technology. If you are unfamiliar with the JCP, see the Java Community Process Program. This article gives you an overview of the Java CTT and generally describes how the tools can help JCP members create a TCK. It also focuses on one of the Java CTT tools, Spec Trac, and shows you how to use the tool. Future articles will show you how to use other Java CTT tools.
So it shouldn't come as any surprise to find Sun Microsystems, Inc. at the forefront of this sometimes neglected but all-important aspect of software development. This article, with the help of Frank Dibbell, Manager of Software Quality Assurance in Sun's Consumer and Embedded Division, explores the basic theories, principles, phases, categories, and tool types currently used in software testing at Sun, as well as testing issues specific to the world of Java technology.

Adam Shostack discusses the future of software testing with James Whittaker. Learn about the evolution of software testing methodologies and where the exploratory testing industry is heading.

Part I of "Caught in the Web: Best Practices for Effective Web App Security Assessments," featuring Shon Harris, globally recognized leader in CISSP training and best-selling author.
Hosts: Shon Harris of Logical Security, joined by Wayne Burke & Benjamin Böck of SecureIA
Sponsor: Core Security Technologies
Date: Wednesday, February 18, 2009
Time: 2pm EST / 11am PST (GMT -5:00, New York)
Register: http://www.coresecurity.com/Form/generic/campaign/caughtnon

*** A recording of the webcast will be sent to everyone who registers, so be sure to sign up even if you can't make the live session. ***

Core Security is pleased to invite you to a complimentary webcast, Part I of "Caught in the Web: Best Practices for Effective Web App Security Assessments," hosted by Shon Harris of Logical Security and Wayne Burke & Benjamin Böck of SecureIA. The webcast series will draw from SecureIA's upcoming "IA Web Penetration Testing 101" course and present tips for assessing your web infrastructure against the most prevalent online threats today. You'll see best practices for identifying critical web application vulnerabilities, getting data for efficient risk mitigation, and understanding the business implications of technical exposures.

Register for "Caught in the Web Part I": http://www.coresecurity.com/Form/generic/campaign/caughtnon

The Caught in the Web webcast series will cover topics including:
• Using practical threat analysis to identify where your organization is exposed
• Comparing web application penetration testing to "traditional" penetration testing
• In-depth assessment techniques including SQL injection, XSS, CSRF, etc.
• Filtering techniques for identifying vulnerabilities requiring immediate remediation
• Comparing manual penetration testing to automated tools
• Pitfalls to avoid when conducting web app security assessments

You'll also learn how to connect technical issues identified during testing with underlying business risks, enabling you to effectively communicate and leverage the benefits of proactive, real-world security testing throughout your organization. To see all the CISSP courses that we offer, visit our website at: http://www.logicalsecurity.com/education/education_overview.html

James Whittaker provides an overview of Exploratory Testing -- the subject of his latest book. Learn about ways to explore your application with intent, strategy, and tactics that find bugs and validate functionality.
Before we delve into the issues surrounding enterprise testing in Java, it's important to define exactly what we mean by enterprise.

In the Test-Taking Skills Clinic series, technical instructor Tim Warner teaches how to analyze IT certification exam practice questions. Understanding the subject matter is only half the battle; we must also possess a sound test-taking strategy!

"Agile" is a buzzword that will probably fall out of use someday and make this book seem obsolete. It's loaded with different meanings that apply in different circumstances. One way to define "agile development" is to look at the Agile Manifesto (see Figure 1-1).

The ability to apply the right tool for a job is among the most valuable developer skills. In this interview with Artima, Parasoft's Nada daVeiga explains that this skill is also crucial when it comes to choosing the right testing technologies (or, rather, the right combination of testing techniques and tools) for Web application testing.

Elliotte Rusty Harold: I am a fan of Extreme Programming, but since I am essentially working by myself at home, pair programming isn't an option. I do use unit tests heavily on almost all the classes. The only areas where I don't have serious unit tests, where I have some tests but not full coverage, are in serialization and in parsing. Because writing unit tests for serialization and parsing is just bloody hard, so far I haven't done it. I really need to, though, because guess where all the bugs show up? They show up in serialization and parsing, where I don't have good unit test coverage.

Here is the entire test class that tests the private method Runner.parseArgsIntoLists, which is described in the article Testing Private Methods in JUnit and SuiteRunner.
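A hedged sketch of the reflection technique such a test relies on, using a hypothetical private method in place of the real parseArgsIntoLists (whose actual signature differs):

    import java.lang.reflect.Method;
    import junit.framework.TestCase;

    public class PrivateMethodTest extends TestCase {

        static class Runner {
            // Hypothetical stand-in for a private method under test
            private boolean isVerbose(String arg) {
                return "-v".equals(arg);
            }
        }

        public void testPrivateMethodViaReflection() throws Exception {
            // Look up the private method and lift the access check so
            // the test can invoke it directly.
            Method m = Runner.class.getDeclaredMethod("isVerbose", String.class);
            m.setAccessible(true);
            Object result = m.invoke(new Runner(), "-v");
            assertEquals(Boolean.TRUE, result);
        }
    }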
Many developers rely on a unit testing tool, such as JUnit, to ensure the correctness of their code. Although unit testing is still very important in the presence of multithreaded code, concurrent applications have characteristics that are not controllable by the testing input alone, which is what you typically control when writing unit tests.

This article demonstrates using filters for Web-application testing: specifically, applying a filter that validates a Web app's HTML against the HTML DTD. The filter uses the W3C's online Markup Validation Service to perform the actual HTML code validation. Since the W3C service can return an XML description of the validation results, you can parse those results and display a formatted list of any issues in the HTML page itself.

With the open source jDefend test driver suite, you can configure all the input parameters and pass them to your test driver using a single XML-based configuration file. jDefend's parser class parses the configuration file and prepares all the test drivers listed for execution. It then executes each test driver, even on a different thread, for faster execution of the test scenarios. (Figure 1 shows the various components of the jDefend suite.)

Unlike other mock libraries, the MockLib API contains only six classes! This is a huge advantage, as you have much less to learn than with other mock libraries. One of my favorite features is its ability to test, in just nanoseconds, a TimerTask scheduled to go off in 24 hours. I haven't seen another mock library that can do that yet. Nor have I found one that helps easily simulate an API like the ones in the forthcoming examples, where you need to simulate events back into the system you're testing.

A project I once worked on established a good set of automated tests that could load test the application while it ran multiple transactions. The problem was that the tests required some manual tweaking, so the development team couldn't run them without human intervention. This limited testing to times when the tester was available (usually waking hours only). In practice, testing occurred only every couple of days, not frequently enough for timely problem detection.

Java developers have done a very good job of addressing unit testing, but integration testing doesn't generate quite as much excitement. Most Java testing frameworks, such as JUnit or TestNG, focus primarily on unit testing. One reason for the lack of integration-testing frameworks in Java programming is the lack of a centralized architecture or development philosophy. In the sections that follow, I'll continue to walk you through a Ruby on Rails example, focusing this time on functional testing and the new Rails integration testing framework. You'll see how much easier testing is when you're working with an integrated framework.

To come up with a good initial combination, we chose to run sets of tests against two Windows NT machines, one with a uniprocessor and one with a two-way multiprocessor. For our UNIX testing, we chose two RS/6000 machines, one with a uniprocessor and one with a four-way multiprocessor.

In Listing 7, you fully configured your Ant build file to wrap your system tests with Cargo's deployment voodoo. The code in Listing 7 ensures that all the system tests in the test/system directory from Listing 8 are logically repeatable. You can run these system tests on any machine at any time, which is perfect for a Continuous Integration environment. The tests make no assumptions about the container -- not about its location, or even whether it's running! (Of course, these tests still make one assumption I haven't addressed, namely that the underlying database is properly configured and running. But that's a subject for another day.)

Formal code inspections are one of the most powerful techniques available for improving code quality. Code inspections -- peers reviewing code for bugs -- complement testing because they tend to find different mistakes than testing does.

In August 2003, the Web Services Interoperability Organization (WS-I) published the Basic Profile 1.0. This profile contains implementation guidelines for the core Web services specifications: XML 1.0, XML Schema 1.0, SOAP 1.1, WSDL 1.1, and UDDI 2.0. These guidelines are a set of requirements that define how these specifications should be used to develop interoperable Web services. The WS-I test tools can be used to verify that a Web service conforms to these requirements. A draft (beta) release of the WS-I test tools is available from the WS-I Web site (see Resources), and a final release should be available later this fall.

Performance testing is often performed (no pun intended) long after developers have finished coding -- yet it's often the case that performance issues could have been found (and most likely solved) much earlier in the development cycle. Luckily, there is a way to solve this problem: continuous testing, or more specifically, continuously running JUnitPerf tests.
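As a taste of what those tests look like, here is a minimal sketch: JUnitPerf's TimedTest decorates an ordinary JUnit test and fails it if it runs too long. SearchTest and its testQuery method are placeholders, not code from the article:

    import com.clarkware.junitperf.TimedTest;
    import junit.framework.Test;
    import junit.framework.TestCase;
    import junit.framework.TestSuite;

    public class SearchPerfTest {

        // Placeholder functional test; the real one would exercise the
        // code whose speed you care about.
        public static class SearchTest extends TestCase {
            public SearchTest(String name) { super(name); }
            public void testQuery() { /* run the query under test */ }
        }

        public static Test suite() {
            TestSuite suite = new TestSuite();
            // Fail testQuery if a single run takes longer than 500 ms.
            suite.addTest(new TimedTest(new SearchTest("testQuery"), 500));
            return suite;
        }
    }

Because the result is still a plain JUnit suite, it can run on every build, which is what makes the continuous approach practical.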
"Understanding the WS-I Test Tools" presented an overview of the architecture and functions provided by the WS-I Test Tools. In this tutorial, we give you step-by-step instructions on how to use the Java version of the test tools to verify that a sample Web service conforms to the WS-I Basic Profile. The good news is that test-driven development isn't just for new code. Even programmers maintaining old systems can profitably write, run, and pass tests. Indeed tests are even more important for legacy systems already in production. Only by testing can you be confident that changes you make to one part of a system will not break another part somewhere else. Sure, you might not have the time or the budget to achieve 100 percent test coverage for a large legacy code base, but even less-than-perfect coverage reduces the risk of failure, speeds up development, and produces more robust code. One of the most important software practices is testing. Extreme Programming (XP) has pushed this logic to its limit by recommending test-first development and continuous integration, where tests are automatically run as often as possible. However, most non-XP shops practice testing in some form, whether they call it non-regression testing, blackbox testing, functional testing, or another name. A lot of projects use a relational database to store data, therefore any testing strategy needs to take into account what happens to the database during each test: If a test leaves a test database in an inconsistent state, all further tests are likely to fail! One way around this is to set up the database state to a known, coherent state before running each test. In this article, I will explain how our team achieved this using DbUnit together with JUnit and how we used Anthill to automate test report generation. Although it may seem like a costly setup, it actually isn't, and has proved a valuable tool. This was the case for the Eclipse Workbench V1.0. Subsequent Eclipse releases should approach near simultaneous domestic and international releases, since the bulk of the translation and testing carries forward from the prior release. When planning your validation test cycle, weigh the amount of time and personnel you expect to invest in proportion to the amount of material affected by translations. In general, minor changes in the translation materials are usually isolated risks, unlike functional modifications where one bad line of code can disrupt the stability of the entire system. This allows you to scale down the "version two" and subsequent translation efforts considerably, on the order of two-thirds to one-half your V1.0 investment. Java developers have a freely available tool for testing that they've written their portlets in accordance with the Portlet Specification. Apache Pluto is the reference implementation for JSR 168. It is a portlet container that implements the Portlet API. Portlet containers like Pluto and IBM WebSphere Portal Server serve as the runtime environment for portlets, much the way a servlet is powered by the runtime environment of a Web application server's servlet container. However, the portlet container is not standalone; it lives on top of a servlet container and relies on its services. In this article, we'll show you how to write a simple portlet and test it against the Pluto portlet container. As a developer, testing is so important that you should be doing it all of the time. It should not be relegated to a specific stage of the development cycle. 
This was the case for the Eclipse Workbench V1.0. Subsequent Eclipse releases should approach near-simultaneous domestic and international releases, since the bulk of the translation and testing carries forward from the prior release. When planning your validation test cycle, weigh the amount of time and personnel you expect to invest in proportion to the amount of material affected by translations. In general, minor changes in the translation materials are usually isolated risks, unlike functional modifications, where one bad line of code can disrupt the stability of the entire system. This allows you to scale down the "version two" and subsequent translation efforts considerably, on the order of one-half to two-thirds of your V1.0 investment.

Java developers have a freely available tool for testing that they've written their portlets in accordance with the Portlet Specification. Apache Pluto is the reference implementation of JSR 168. It is a portlet container that implements the Portlet API. Portlet containers like Pluto and IBM WebSphere Portal Server serve as the runtime environment for portlets, much the way a servlet is powered by the runtime environment of a Web application server's servlet container. However, the portlet container is not standalone; it lives on top of a servlet container and relies on its services. In this article, we'll show you how to write a simple portlet and test it against the Pluto portlet container.

As a developer, testing is so important that you should be doing it all of the time. It should not be relegated to a specific stage of the development cycle. It definitely shouldn't be the last thing done before giving your system to a customer. How else are you going to know when you're done? How else are you going to know if your fix for a minor bug broke a major function of the system? How else will the system be able to evolve into something more than is currently envisioned? Testing, both unit and functional, needs to be an integrated part of the development process.

TFP (also known as test-driven design or test-driven development) is actually a process for implementing the unit test practice that XP describes. Simply stated, the process is as follows: before you write a single line of code, make sure you have a test that fails. In other words, write a test that exercises the code you are about to write.
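A minimal sketch of that first step, using a hypothetical Money class; the skeleton is just enough to compile, and its deliberately wrong add() puts the test in the failing state TFP asks you to reach before writing the real code:

    import junit.framework.TestCase;

    public class MoneyTest extends TestCase {

        static class Money {
            private final int amount;
            Money(int amount) { this.amount = amount; }
            Money add(Money other) { return this; }   // deliberately wrong
            int amount() { return amount; }
        }

        public void testAdd() {
            Money two = new Money(2);
            Money three = new Money(3);
            // Fails (2 != 5) until add() is actually implemented.
            assertEquals(5, two.add(three).amount());
        }
    }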
I often take a two-fold approach to DAO testing. The first DAO I build is the "default" DAO; it goes to an in-memory collection of DTOs, then gets passed off to the team building the upper layers of the application (servlets and JSP files, for instance). I then work with a second team to build the DAOs that will actually work with the database. This approach lets both teams work simultaneously, with their interaction defined by the shared contract of the DAO interface.

Also remember that test times will vary depending on machine configurations and what's running at a given point during JUnitPerf test execution. I often find that placing JUnitPerf tests in their own category helps segregate them from normal tests. This means they aren't run every time during a test run, such as on a code check-in within a CI environment. I also end up creating specific Ant tasks to run these tests only during choreographed scenarios or in environments where performance testing is taken into account.

Once you've incorporated my patches, automating the build and test process is simply a matter of wrapping the execution of Ant inside a cron job (or an at job on Windows platforms).

Before you can run Jester, all the unit tests must pass with the unmodified source code. If they don't, Jester won't know whether its changes have broken anything. (For the demonstration, I had to fix one bug I'd written a test case for but hadn't yet tracked down and stomped.)

Mark Doliner's Cobertura (cobertura is Spanish for coverage) is a free-as-in-speech GPL tool that handles this job. Cobertura monitors tests by instrumenting the bytecode with extra statements to log which lines are and are not being reached as the test suite executes. It then produces a report in HTML or XML that shows exactly which packages, classes, methods, and individual lines of code are not being tested. You can write more tests for those specific areas to reveal any lingering bugs.

In this article and the next, you'll get a complete understanding of how testing works within the Ruby on Rails integrated development framework. Part 1 focuses on testing model objects and gives you some Rails-inspired strategies you can use to make your Java unit testing more productive. Part 2 spends more time on functional tests and integration tests. As a Java programmer, some of the ideas will be familiar to you, particularly if you test, and others will stretch your understanding.

Tools using SOAP to enable interoperable software are inexpensive, freely available, and widely supported. Your choice of tools, hardware, and network equipment will greatly determine the performance and scalability potential of your deployed SOAP-based Web services. With that in mind, I would like to discuss a scalable framework for developing Web services and strategies for avoiding performance problems, and to offer an open-source set of test objects and a scripting language called Load that can help with performance and scalability testing.

Listing 3 shows an aspect that can detect many violations of the single-thread rule. It has two parts: the list of methods that should not be called from outside the event thread, and the code to insert before each call to one of those methods. The advice -- the code to be inserted -- is quite simple: check whether the current thread is the event thread and, if it is not, throw an AssertionError. This aspect instruments all calls to methods from the Swing packages, plus any methods in classes that extend the most important Swing classes (so as to capture user-provided components and models), but it excludes methods in those classes that are known to be (or required to be) safe to call from multiple threads. The list of safe methods is not exhaustive; constructing an exhaustive list would require spending some additional time with the Swing Javadoc to find all the methods that are documented to be thread-safe.
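A simplified sketch of such an aspect in AspectJ; the pointcut here is far cruder than the listing described, and the excluded methods are only examples of the documented thread-safe ones:

    import javax.swing.SwingUtilities;

    public aspect SwingThreadRule {

        // Any call into Swing types or their subtypes, minus a couple of
        // methods documented as safe to call from any thread.
        pointcut unsafeUiCall():
            call(* javax.swing..*+.*(..))
            && !call(* *.repaint(..))
            && !call(* *.revalidate(..));

        before(): unsafeUiCall() {
            if (!SwingUtilities.isEventDispatchThread()) {
                throw new AssertionError(
                        "Swing call off the event thread: "
                        + thisJoinPoint.getSignature());
            }
        }
    }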
There are a host of tools available that claim to be able to stress test products under development. An area with fairly widespread coverage is tools aimed at Web services. However, many of these tools are simple HTML/SOAP generators, which simulate many client connections and therefore generate a high load on the Web server (useful for finding problems with the Web server, but not so good for finding problems with the Web services). These tools are useful for basic stressing, but often they merely extend the functional verification phase to repeatedly perform the same functional task. If enough time and resources are available, more effective testing can be achieved by creating custom-built stress testing systems. Since the designers of the stressing system will usually have more knowledge of the product and the Web services being tested, they will be able to ensure that the stress system targets specific areas of the code.

Part 1 of this brief series on effective testing built a FindBugs plugin to find a trivial bug pattern, that of calling System.gc(). Bug patterns identify problematic coding practices that are frequently found in the neighborhoods where bugs live. Of course, not all occurrences of bug patterns are necessarily bugs, but this doesn't keep bug pattern detectors from being tremendously useful. All that is needed for a bug pattern detector to be effective is that it turn up a high enough percentage of questionable code to make it worth the effort of using it. Creating bug pattern detectors can have very high leverage; once you've created a detector, you can run it on any code you want, now or in the future, and you might be surprised at what turns up. For example, the trivial detector in Part 1 showed that there were calls to System.gc() buried in the JPEG image I/O library in JDK 1.4.2.

Fuzz testing: This article introduces you to a technique that attempts to avert just this sort of disaster. In fuzz testing, you attack a program with random bad data (aka fuzz), then wait to see what breaks. The trick of fuzz testing is that it isn't logical: rather than attempting to guess what data is likely to provoke a crash (as a human tester might do), an automated fuzz test simply throws as much random gibberish at a program as possible. The failure modes identified by such testing usually come as a complete shock to programmers because no logical person would ever conceive of them.

In this article, I will discuss one such type of invariant and how sophisticated unit tests can be used to check it. The type of invariant I'm talking about is the proper order of invocations of a sequence of dependent methods.

Just as good programming skill involves the knowledge of many design patterns, which you can combine and apply in various contexts, good debugging skill involves knowledge of bug patterns. Bug patterns are recurring correlations between signaled errors and underlying bugs in a program. This concept is not novel to programming. Medical doctors rely on similar types of correlations when diagnosing disease. They learn to do so by working closely with senior doctors during their internships. Their very education focuses on learning to make such diagnoses. In contrast, our education as software engineers focuses on design processes and algorithmic analysis. These skills are, of course, important, but little attention is paid to teaching the process of debugging. Instead, we are expected to "pick up" the skill on our own. With the advent of extreme programming and its emphasis on unit testing, this practice is starting to change. But frequent unit testing solves just part of the problem. Once bugs are found, they must be diagnosed and corrected. Fortunately, many bugs follow one of several patterns we can identify. Once you can recognize these patterns, you will be able to diagnose the cause of a bug and correct it more quickly.

In the last installment, I talked about the need for some level of automated testing. I'll reiterate that the goal is not to automate 100% of all tests, but you do want to move in that direction. I've yet to encounter a team that has suffered from having too many tests. (I have worked with teams who have had too many poorly written tests.)

There are other interesting ways of injecting test doubles than the ones I presented here. I might, for example, consider using aspects. But these three techniques -- constructor/setter injection, factory injection, and subclass override injection -- are the ones that I consistently use. Using these different injection techniques gives me a bit more flexibility when it comes to incorporating fakes into a system. But an important thing I must remember is that the very introduction of these fakes implies that I now have a "hole" in my system -- something that I will be unable to unit test. I can't neglect my integration tests!

Some programmers do it as they code, and others wait until the end. Either way, testing is a necessary part of any software development project. Without it, one cannot determine whether the software functions correctly. In this article, I present the basics of software testing from a programmer's perspective. In a follow-up article, I will illustrate unit testing with JUnit, an open source framework for testing Java applications.

This is the first article in a series on the testing tool called FitNesse. This installment talks about the need for such a tool in an agile software development environment. Subsequent installments will dig into how to code support for various types of testing using FitNesse.

This last part is the real power of EasyMock. By verifying the EasyMock instance, you are making sure not only that the right calls were made, but also that all expected calls were made. This way, you make sure that your Invoice class really is calling the TaxRateManager, with the right argument, and calling it exactly one time. Anything else will generate an error at some point in the test: either when too many calls are made to the instance, or when it reaches the verify and there are expected calls that were never made.
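A hedged sketch of what such a test can look like; the Invoice and TaxRateManager names come from the example, but their method signatures and bodies here are my assumptions, reduced to the minimum needed to run:

    import static org.easymock.EasyMock.*;

    import java.util.ArrayList;
    import java.util.List;
    import junit.framework.TestCase;

    public class InvoiceTest extends TestCase {

        interface TaxRateManager {           // assumed shape of the interface
            double getTaxRate(int customerId);
        }

        static class Invoice {               // minimal stand-in for the class under test
            private final int customerId;
            private final TaxRateManager taxRateManager;
            private final List<Double> items = new ArrayList<Double>();

            Invoice(int customerId, TaxRateManager taxRateManager) {
                this.customerId = customerId;
                this.taxRateManager = taxRateManager;
            }

            void addLineItem(double cost) { items.add(cost); }

            double getTotal() {
                double sum = 0;
                for (double cost : items) sum += cost;
                return sum * (1 + taxRateManager.getTaxRate(customerId) / 100);
            }
        }

        public void testInvoiceAsksTaxRateManagerExactlyOnce() {
            TaxRateManager taxRateManager = createMock(TaxRateManager.class);
            // Expect exactly one lookup for customer 42, returning 7.5 percent.
            expect(taxRateManager.getTaxRate(42)).andReturn(7.5);
            replay(taxRateManager);

            Invoice invoice = new Invoice(42, taxRateManager);
            invoice.addLineItem(100.00);
            assertEquals(107.5, invoice.getTotal(), 0.001);

            // Fails if getTaxRate was never called, was called with a
            // different argument, or was called more than once.
            verify(taxRateManager);
        }
    }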
In my previous article on software testing, Software Testing for Programmers, Part 1, I presented the basics of software testing for programmers. In this article, I illustrate how to do unit testing of Java applications with JUnit, an open source framework for testing Java programs.

To illustrate the use of Guice, you need an example unit testing setup. I settled on something pretty simple: an invoice class that uses a sales tax rate manager (another object) to look up a sales tax percentage for a customer. The invoice then adds the tax to the total sum of the line-item costs in the invoice.

You want to verify application functionality by using FitNesse. The application might be a web application, a web services API, a desktop UI, or something else. For the example, you'll simplify things and verify against a simple Java API.

Range of integer types: The following table purports to show the numeric range of the integer types in terms of a power of two. Identify which lines, if any, contain errors. (The syntax 2eX means 2 raised to the X power.)

In test-after development (TAD), I use my skills as an experienced developer to write my code. I might sometimes write a test prior to development. Predominantly, however, I will first code a solution and then refactor as necessary. I might code additional tests once my solution is in place, but that's my prerogative.

Now that you have reviewed the role of the functional test plan, you can clearly see its importance in the context of the application development cycle. Aside from its main goal, which is to test whether an application meets business requirements, it promotes communication between technical and business areas and helps development teams better understand business requirements.

An interesting usage for fuzzing comes in meeting regulation requirements and supplying certified secure code. One reason for the use of fuzzing in certification is the lack of false positives. When a black-box testing tool, such as a fuzzer, finds a security vulnerability, you know the flaw is there, because it was found by actually trying to trigger the problem. After all, if it walks like a duck and quacks like a duck, it must be a duck.
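The core of a fuzzer can be surprisingly small. A naive illustration of the idea, where Parser is a stand-in for whatever component is under test:

    import java.util.Random;

    public class NaiveFuzzer {

        // Stand-in for the real component under test.
        static class Parser {
            static void parse(byte[] data) {
                // The real component would interpret the bytes here.
            }
        }

        public static void main(String[] args) {
            Random random = new Random();
            for (int run = 0; run < 100000; run++) {
                // Generate up to 1 KB of random bytes and feed them in.
                byte[] input = new byte[random.nextInt(1024)];
                random.nextBytes(input);
                try {
                    Parser.parse(input);
                } catch (RuntimeException crash) {
                    // Any unexpected exception is a finding worth recording.
                    System.err.println("Crash on run " + run + ": " + crash);
                }
            }
        }
    }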