Code « Development « Java Articles

Java is quite platform-independent and mostly upwards compatible, so it is common to compile a piece of code using a given J2SE version and expect it to work in later JVM versions. (Java syntax changes usually occur without any substantial changes to the byte code instruction set.) The question in this situation is: Can you establish some kind of base Java version supported by your compiled application, or is the default compiler behavior acceptable? I will explain my recommendation later.

Smart teams don't do code reviews manually: instead they rely on source code analyzers like Checkstyle, PMD, and JTest. Such tools come with ready-made rules that help in maintaining code standards. These rules are a good starting point, but they don't account for project-specific requirements. The trick to a successful automated code review is to combine the built-in rules with custom ones. The more refined your rules, the more truly automated your code review becomes.

I started out writing code in C back in the seventies, when pretty much everything was do-it-yourself. There weren't many third-party libraries back then. The introduction of object-oriented programming with C++ was welcome; pretty much the same syntax and low-level control, but it let developers write much more elegant code.

When making bad code good, it's best to start with easy changes, to make sure you don't break anything. A large lump of bad code breaks easily, and nobody can be expected to fix bugs in it without some preparation.

The path my team took to choosing Javadoc for code-generation purposes was somewhat long, and probably common. In early implementations, we used Perl scripts to parse custom metadata grammar in a text file. This was an ad hoc solution, and adding additional output formats was difficult. Our second, short-lived attempt was to modify an existing Java-based IDL compiler. We soon realized that additional IDL keywords would have to be introduced to send hints to the code generator. Neither extending IDL nor starting from scratch with tools such as lex and yacc (which split a source file into tokens and define code that is invoked for each recognized token) appealed to us. (See Resources for more information.)

With the current proliferation of ever-more parallelized hardware, benchmarking is a hot topic again. In Part 1 of this article, I detailed the basic precepts behind parallel hardware. In Part 2, I want to move on and examine how to measure how effectively this hardware is being utilized. This is a process fraught with difficulty—benchmarks, by their nature, are more often pilloried for what they don't measure, or what they measure inaccurately, than lauded for their impartiality. My benchmark will suffer the same fate. Nonetheless, after reading this article, you will see how I designed and implemented the framework, what I believe it measures (and doesn't), and its potential uses.

Let's take a step back and examine the factors that have precipitated the advent of parallel computing hardware in all tiers of IT, as opposed to specialized high-end niches. Why would we want hardware that can execute software in true parallel mode? For two reasons: You need an application to run more quickly on a given dataset and/or you need an application to support more end users or a larger dataset.

The preceding code swaps out a table's model for a decorator. After the swap, whenever the table accesses its model, it unknowingly accesses the sort decorator. The decorator adds sorting capabilities to the model it decorates, and delegates other functionality to the real model.
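
The decorator idea described above can be sketched as follows. This is an illustrative sketch, not the article's actual listing; the class name SortDecorator and its methods are assumptions. The decorator implements TableModel via AbstractTableModel, keeps a sorted row index for the one capability it adds, and delegates everything else to the real model it wraps.

```java
import javax.swing.table.AbstractTableModel;
import javax.swing.table.TableModel;
import java.util.Arrays;
import java.util.Comparator;

// Hypothetical sketch of the sort decorator: it adds sorting and delegates
// all other functionality to the real model.
class SortDecorator extends AbstractTableModel {
    private final TableModel real;
    private Integer[] rowIndex;   // maps view row -> model row

    SortDecorator(TableModel real) {
        this.real = real;
        rowIndex = new Integer[real.getRowCount()];
        for (int i = 0; i < rowIndex.length; i++) rowIndex[i] = i;
    }

    // The added capability: sort the index by one column's string value.
    void sortByColumn(final int col) {
        Arrays.sort(rowIndex, Comparator.comparing(
                (Integer r) -> String.valueOf(real.getValueAt(r, col))));
        fireTableDataChanged();
    }

    // Row-based accesses go through the sorted index ...
    @Override public Object getValueAt(int row, int col) {
        return real.getValueAt(rowIndex[row], col);
    }
    @Override public void setValueAt(Object v, int row, int col) {
        real.setValueAt(v, rowIndex[row], col);
    }

    // ... everything else is plain delegation to the real model.
    @Override public int getRowCount() { return real.getRowCount(); }
    @Override public int getColumnCount() { return real.getColumnCount(); }
    @Override public String getColumnName(int c) { return real.getColumnName(c); }
}
```

Because the decorator is itself a TableModel, the table never needs to know that a swap took place.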

Checkstyle makes it easy to automate code reviews once the code has been committed, but wouldn't it be better if programming errors never made it that far? In this second half of "Automated code reviews with Checkstyle," authors ShriKant Vashishtha and Abhishek Gupta show you how to be proactive about code quality. Find out how to use Checkstyle's custom rules to enforce code standards and catch mistakes -- long before code is committed to your code base. Level: Intermediate

While software development technologies continue to advance on multiple fronts, the complexity of software and its management remains a complicated, expensive problem. For instance, too many developers can become involved in a project, and those developers may not be around later to maintain the code they wrote. Plus, software requirements can change, and it can be difficult to track what, exactly, the software was supposed to do.


With this in mind, applying various software metrics to a code base can be an effective overall gauge of software quality. One such metric, cyclomatic complexity, can be helpful in ascertaining areas of code that may require additional attention to head off future maintenance issues. That attention, moreover, can take the form of unit testing and refactoring.

"Bug-free code." Well, that's a bold statement to make about one's code. In August 2004, Mozilla announced that it would offer $500 for every serious bug found by security researchers. I wouldn't dare to make such a claim about my code on a regular basis, or I'd be broke in a month. However, if we make good use of some of the basic idioms and rules of thumb of design and programming, we can take a step closer to software with fewer bugs. Any programmer worth his or her salt will agree that design patterns have lately been overused, to the point that programmers start off directly with advanced patterns while remaining completely ignorant of the basic rules.

The first step is to tokenize each source code file. CPD does this by piggybacking on PMD's JavaCC-generated tokenizer. The tokenizer reads each file and converts the characters into tokens. For example, System.out.println produces five separate tokens: System, ., out, ., and println. Along the way, the tokenizer discards whitespace and some other unneeded tokens like import statements, package statements, and semicolons. This reduces the number of tokens that need to be scanned, and it gets rid of uninteresting duplicate chunks like duplicate sequences of import statements. Our source code snippet is now tokenized and looks like this:
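
The tokenizing step described above can be sketched in a few lines. This toy version is an illustration of the idea, not PMD's actual JavaCC-generated tokenizer: it emits identifier and punctuation tokens and discards whitespace and semicolons.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative tokenizer in the spirit of CPD's first step -- NOT the real
// JavaCC-generated one. Identifiers become one token each; punctuation
// characters become single tokens; whitespace and semicolons are discarded.
class ToyTokenizer {
    static List<String> tokenize(String source) {
        List<String> tokens = new ArrayList<>();
        int i = 0;
        while (i < source.length()) {
            char c = source.charAt(i);
            if (Character.isWhitespace(c) || c == ';') {
                i++;                                      // discarded token
            } else if (Character.isJavaIdentifierStart(c)) {
                int start = i;
                while (i < source.length()
                        && Character.isJavaIdentifierPart(source.charAt(i))) i++;
                tokens.add(source.substring(start, i));   // identifier token
            } else {
                tokens.add(String.valueOf(c));            // punctuation token
                i++;
            }
        }
        return tokens;
    }
}
```

Feeding it System.out.println produces exactly the five tokens named above: System, ., out, ., and println.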

Author's note: This article presents seven techniques I've developed and used in my consulting work that are designed to improve legacy code. You can apply some of these techniques using either freely available tools or with scripts. You'll apply the others manually, but they shouldn't represent a significant investment in time. Be forewarned, however, that all of these techniques may reveal other issues in the code base, such as hidden bugs, which could take a significant amount of time to fix.


In this article we will reimplement an exercise taken from the best-selling book "The Pragmatic Programmer" by Andy Hunt and Dave Thomas [1]. In the third chapter, "The Basic Tools," the authors encourage the reader to learn a "text manipulation language" (like Perl or Ruby) in order to be able to develop little code generators. The proposed exercise is very simple and code-centric, and we will show how it can be implemented with today's generator development tools and languages.

If you ask someone if they use code generation, chances are that they swear by it. Code generation saves time and effort and can greatly improve the maintainability of a system. Letting your computer write code for you is so compelling that it's hard to imagine why any developer wouldn't embrace it. Yet, code generation remains a bit of a black art, practiced by a few and met with skepticism and distrust by the rest. If you haven't yet been bitten by the code generation bug, let me explain why I think code generation matters to Java developers.

Code generation is nothing new, especially for Java programmers, but it is still confusing to most people because of the variety of code generation models and solutions. This article will help you cut through the fog by providing a summary of the popular models and solutions in the Java world today.

In enterprise Java applications, it's considered poor form to instantiate database connections yourself. The enlightened path involves a database connection pool, in the form of a JDBC DataSource, that is provided by your container (Tomcat, Jetty, WebLogic). The container is responsible for maintaining a number of connections to the database service, and all your code has to do is look up the DataSource via JNDI to get a connection. Simple beauty.

The java.net project PatchExpert is a simple tool that makes extending and patching software easier. It allows the developer to define extension points in the target application; through these extension points, implementations can be inserted at runtime or via configuration. Figures 1 and 2 demonstrate the relationship among these elements.

A code review is an excellent checkpoint that can help flush out erroneous assumptions and gaps in reasoning. It helps minimize the impact of a problem by means of early detection. While testing is an excellent way to improve the quality, testing alone is not enough. For one, as shown by McConnell in Code Complete, it is statistically impossible to completely test a non-trivial software project. Testing must be bolstered by up-front code reviews.

Editor's note: Sometimes the most interesting discussions begin when someone says, "This may be a stupid question, but ...." If the person asking the question has taken the time to think about the problem before asking, the question is often not stupid at all. The uncertainty points out an ambiguity in the specs, holes in the docs, or a search for how more experienced programmers might address a particular problem. From time to time, we will print one of the "(Not So) Stupid Questions" we receive and invite our readers to answer the question in the feedback section.

The code generator provides a number of options for tweaking the generated code. While an exhaustive list of the options can be found on the Axis2 site, following are some interesting options that can be used on the client side.

Code generation is using one application to build code for another application. In this case, XSLT will be our generator application. Input for a code generator can come in many forms (source code, database schemas, XML models, etc.). Regardless of the source, we call the input the model because it represents (models) what is to be built. On the other side are the templates. The templates render the model into code, or other artifacts such as documentation. Figure 1 illustrates this process.
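
The model-plus-template pipeline described above can be driven from Java using the JDK's built-in JAXP transformation API. The XML model and the XSLT template below are invented for illustration; a real generator would read both from files.

```java
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;
import java.io.StringReader;
import java.io.StringWriter;

// Hypothetical model and template: the XSLT renders each <field> element of
// the model into a Java field declaration.
public class XsltCodeGen {
    static final String MODEL =
        "<class name='Person'><field type='String' name='name'/></class>";

    static final String TEMPLATE =
        "<xsl:stylesheet version='1.0'"
      + "    xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>"
      + "  <xsl:output method='text'/>"
      + "  <xsl:template match='/class'>"
      + "    <xsl:text>public class </xsl:text>"
      + "    <xsl:value-of select='@name'/>"
      + "    <xsl:text> {&#10;</xsl:text>"
      + "    <xsl:for-each select='field'>"
      + "      <xsl:text>    private </xsl:text>"
      + "      <xsl:value-of select='@type'/>"
      + "      <xsl:text> </xsl:text>"
      + "      <xsl:value-of select='@name'/>"
      + "      <xsl:text>;&#10;</xsl:text>"
      + "    </xsl:for-each>"
      + "    <xsl:text>}&#10;</xsl:text>"
      + "  </xsl:template>"
      + "</xsl:stylesheet>";

    // Apply the template to the model and return the generated source text.
    static String generate() {
        try {
            Transformer t = TransformerFactory.newInstance()
                    .newTransformer(new StreamSource(new StringReader(TEMPLATE)));
            StringWriter out = new StringWriter();
            t.transform(new StreamSource(new StringReader(MODEL)),
                        new StreamResult(out));
            return out.toString();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

Swapping in a different template (Javadoc HTML, SQL DDL) against the same model is what makes the model/template split so flexible.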


You've just built the basic program that all users have. Remember, we want to see the role-based upgrade in action. So I want the HR director to get new classes that allow access to salary and hourly rate details for staff and contractors, respectively. To demonstrate this, the code is arranged into two folders called old and upgrade. The files in the old folder are non-upgraded versions as per Figure 3. The files in the upgrade folder represent the modified files that produce the output illustrated in Figure 2.

Extensible Code Generation with Java, Part 1: Jack Herrington argues that machine-generated code not only solves problems of drudgery, but can even be preferable to potentially buggy, hand-written code. In Part 1 of his series on code generation, he shows how XSLT can be used to generate Java source from XML descriptor files.

Protecting software from reverse engineering works the same way. Java is wonderfully easy to effectively reverse engineer. (See the References section at the end of this article for tools that will let you do so literally at the click of a button.) You can decompile the code, crack the license, change the copyright to your name, or simply check out how the application hits the database. Reverse engineering based on compiled code is possible, but reverse engineering based on source is simply cake. The good news is that you can make the process harder--in fact, much harder. Protecting Java code raises the bar--it proverbially "adds a few locks on the door." If your code is worth protecting, there's no reason not to make the thief's job harder.

In recent years, few people have written more about the Java platform than has Sun Microsystems technology evangelist Brian Goetz. Since 2000, he has published some 75 articles on best practices, platform internals, and concurrent programming, and he is the principal author of the book Java Concurrency in Practice, a 2006 Jolt Award Finalist and the best-selling book at the 2006 JavaOne conference. Prior to joining Sun in August of 2006, he was a consultant for 15 years for his software firm, Quiotix, where, in addition to writing about Java technology, he spoke frequently at conferences and gave presentations on threading, the Java programming language memory model, garbage collection, Java technology performance myths, and other topics. In addition, he has consulted on kernel internals, device drivers, protocol implementations, compilers, server applications, web applications, scientific computing, data visualization, and enterprise infrastructure tools. He's participated in a number of open-source projects, including the Lucene text search and retrieval system, and the FindBugs static analysis toolkit.

The following code is instructive if you're learning how to do networking and threads in Java. It defines the ServerThread class. To see code that uses it, click here.

It is relatively simple and very effective to set up a way to notify developers about changes happening in the repository. I would even suggest that not doing so is a "communication anti-pattern". For notifications to be as effective as possible, developers should not be spammed with tons and tons of diff information. To prevent this spamming, consider two options: using RSS feeds instead of emails, and allowing developers to choose their feeds. Some feeds should obviously be more "mandatory" than others -- for example, those notifying of database schema changes, build changes, and public API changes. It's important, as always, to have a champion on the team who explains to others the benefit of these feeds and how they are best used. Otherwise their power will exist but won't be harnessed.

Martin Fowler: Yes, I think in many ways that is the case. Clarity helps you see what's in your code. You can make changes more quickly. It's harder for bugs to hide, because bugs are easier to see when the code is better designed.

Bill Venners: You were just talking about how it's not always obvious who wrote a particular line of code. That's also true of a wiki page. It's not always obvious who wrote a particular line of text. Sometimes people write on a wiki page, "I had this experience," and as a reader I don't know who "I" is. What's different about collective code ownership compared to collective text ownership that lets you end up with cleaner code than you do text?

Bill Venners: At previous JavaOnes, I have seen some visualization tools that I thought were useful. One of them analyzed your code and drew diagrams that showed the coupling between packages. I felt those diagrams could help you realize that you've got a lot of coupling going from one package to another, which you may not realize by looking at individual source files. Visualization tools like that can help, I think, but a lot of tools that draw graphical representations from source don't seem to help much. Let's say you analyze a big program and generate inheritance charts, with thousands of boxes and lines going all over the place. That often looks as confusing as the source code does.

On the extent to which code search can facilitate code reuse, given that pre-existing code you may find via search may not be exactly what you need, Merling said:

At JavaOne 2007, we asked Quail what role code reviews play in environments that favor the automation of repetitive, error-prone tasks, such as finding bugs and identifying defects in code. One question in this brief interview focused on the relationship between unit testing and code reviews:

Code generation is a time-saving technique that helps engineers do better, more creative, and useful work by reducing redundant hand-coding. In this world of increasingly code-intensive frameworks, the value of replacing laborious hand-coding with code generation is acute and, thus, its popularity is increasing.

A class-cast exception often occurs in a program that is performing a recursive descent over a data structure, usually when some part of the code is descending two levels per method call and is not dispatching appropriately on the second descent. Programmers can identify the problem by learning about the Double Descent bug pattern.
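
A minimal sketch of the pattern (illustrative only; the class and method names are invented): the data structure is a tree where each node is either an Integer leaf or a List of subtrees. The buggy traversal descends two levels per call without re-dispatching at the second level, so it class-casts when a child turns out to be a leaf.

```java
import java.util.List;

class TreeSum {
    // Buggy: descends two levels per call and does not dispatch on the
    // second descent -- throws ClassCastException when a child is a leaf.
    static int buggySum(Object tree) {
        if (tree instanceof Integer) return (Integer) tree;
        int total = 0;
        for (Object child : (List<?>) tree) {
            for (Object grandchild : (List<?>) child) { // assumes child is a List!
                total += buggySum(grandchild);
            }
        }
        return total;
    }

    // Fixed: dispatch on the runtime type at every level of the descent.
    static int sum(Object tree) {
        if (tree instanceof Integer) return (Integer) tree;
        int total = 0;
        for (Object child : (List<?>) tree) {
            total += sum(child);   // one level per call, re-dispatching each time
        }
        return total;
    }
}
```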

Last month, I showed you how to use code metrics to evaluate the quality of your code. While the cyclomatic complexity metrics introduced in that column focus on low-level details, such as the number of execution paths in a method, other types of metrics focus on more high-level aspects of code. This month, I'll show you how to use various coupling metrics to analyze and support your software architecture.

For example, as we saw back in "Improve the performance of your Java code" (May 2001), the JVM spec does not require optimization of tail-recursive calls. Tail-recursive calls are recursive method invocations that occur as the very last operation in a method. More generally, any method invocation, recursive or not, that occurs at the end of a method is a tail call. For example, consider the following simple code:
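
The original listing is not reproduced here; the following is an illustrative sketch of the distinction. In the first method, the recursive invocation is the very last operation performed, so it is a tail call; in the second, a multiplication remains to be done after the call returns, so each frame must stay on the stack.

```java
class Factorial {
    // Tail-recursive: the recursive call is the method's last action.
    static long factorial(long n, long acc) {
        if (n <= 1) return acc;
        return factorial(n - 1, n * acc);   // tail call
    }

    // Not tail-recursive: the multiplication by n happens after the
    // recursive call returns, so the frame cannot be discarded.
    static long factorial(long n) {
        if (n <= 1) return 1;
        return n * factorial(n - 1);        // not a tail call
    }
}
```

Because the JVM spec does not require tail-call optimization, even the tail-recursive form can overflow the stack for very deep recursion on most JVMs.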

This installment hopefully has identified and demonstrated a major tool to let you interactively evaluate expressions and statements in Java programs without getting bogged down in recompilation -- the "read-eval-print-loop" or repl. We have also demonstrated how repls are important when building GUIs or when you simply want to quickly examine the wealth of Java APIs that are available to you.

In the past year or so of writing this column, I have introduced many tools and techniques that you can use to improve the quality of your code. I've showed you how to apply code metrics to monitor the attributes of your code base; how to use test frameworks like TestNG, FIT, and Selenium to verify application functionality; and how to use extension frameworks like XMLUnit and StrutsTestCase (and powerful helpers like Cargo and DbUnit) to extend the reach of your testing framework.

Measuring cyclomatic complexity is, therefore, particularly valuable in situations where you're working with a legacy code base. Moreover, it can be helpful to monitor CC values with distributed development teams, or even on large teams with various skill levels. Determining the CC of class methods in a code base and continually monitoring these values will give your team a head start on addressing complexity issues as they arise.

For various reasons (mostly bad), you will often see class definitions in which the class constructors don't take enough arguments to properly initialize all the fields of the class. Such constructors require client classes to initialize instances in several steps (setting the values of the uninitialized fields) rather than with a single constructor call. Initializing an instance in this way is an error-prone process that I refer to as run-on initialization. The types of bugs that result from this process have similar symptoms and remedies, so we can group them together into a pattern called the Run-on Initializer bug pattern.
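
A minimal sketch of the pattern (class names invented for illustration): the first class forces run-on initialization on every client, while the remedy fully initializes the instance in a single constructor call.

```java
// Run-on initialization: the constructor doesn't take enough arguments,
// so clients must finish initialization themselves in several steps.
class Account {
    private String owner;
    private double balance;

    Account() { }                                    // fields left unset

    void setOwner(String owner) { this.owner = owner; }
    void setBalance(double balance) { this.balance = balance; }

    String describe() {
        // NullPointerException if a client skipped the setOwner() step
        return owner.toUpperCase() + ": " + balance;
    }
}

// The remedy: one constructor call that fully initializes the instance,
// which also lets the fields be final.
class SaferAccount {
    private final String owner;
    private final double balance;

    SaferAccount(String owner, double balance) {
        this.owner = owner;
        this.balance = balance;
    }

    String describe() { return owner.toUpperCase() + ": " + balance; }
}
```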

In this farewell article, let's have some fun and look into our crystal ball. We'll discuss some of the prevailing trends in the software industry and the impact we can expect these trends to have on the future of software development. We'll focus our discussion through the lens of effective software development we've used in this series over the past two and a half years. As always, let's pay particular attention to the crucial role that effective error prevention and diagnosis play in allowing us to manage our increasingly complex digital world.

One of the problems with code quality tools is that they tend to overwhelm developers with problems that aren't really problems -- that is, false positives. When false positives occur, developers learn to ignore the output of the tool or abandon it altogether. The creators of FindBugs, David Hovemeyer and William Pugh, were sensitive to this issue and strove to reduce the number of false positives they report. Unlike other static analysis tools, FindBugs doesn't focus on style or formatting; it specifically tries to find real bugs or potential performance problems.

It's also possible to objectively determine whether code should be refactored, however, whether it's yours or someone else's. In previous articles in this series, I've shown you how to use code metrics to objectively measure code quality. In fact, you can use code metrics to easily spot code that might be difficult to maintain. Once you've objectively determined there's a problem in the code, you can use a handy refactoring pattern to improve it.

Applying the concepts in Part 1 can open new opportunities for code reuse. We'll start by presenting ways you can use Hibernate and polymorphism to incorporate behavior in the domain model. Next, we'll build on Part 1's discussion of the generic DAO. Once you incorporate and use a generic DAO in an application, you'll encounter more potential operations that are common across applications. We'll show how you can reduce code by incorporating paging of data and querying in the generic DAO. We finish with strategies for enhancing performance with the domain model. Without these strategies, incorrectly configured associations in the domain model can cause thousands of extra queries to be executed or can waste resources by retrieving records that are not needed.

In this installment of our series on the addition of generic types to Java programming, we'll consider one of the two limitations on the use of generics that we haven't discussed, namely the addition of support for new operations on "naked" type parameters (such as new T() in a class C).
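
For context, the common workaround today (sketched below with invented names) is to pass a Class<T> object and instantiate it reflectively, since `new T()` is illegal under erasure: at runtime, T is simply gone.

```java
import java.util.ArrayList;
import java.util.List;

// `new T()` won't compile inside a generic class, because the type argument
// is erased at runtime. Passing a Class<T> token restores the ability to
// create instances, at the cost of a reflective call.
class Filler<T> {
    private final Class<T> type;

    Filler(Class<T> type) { this.type = type; }

    // Builds a list of n freshly constructed T instances.
    List<T> make(int n) {
        try {
            List<T> out = new ArrayList<>();
            for (int i = 0; i < n; i++) {
                out.add(type.getDeclaredConstructor().newInstance()); // stands in for `new T()`
            }
            return out;
        } catch (ReflectiveOperationException e) {
            throw new IllegalArgumentException(type + " needs a no-arg constructor", e);
        }
    }
}
```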

In many ways, Tiger promises to be the biggest leap forward in Java programming so far, including significant extensions to the source language syntax. The most visible change scheduled to occur in Tiger is the addition of generic types, as previewed in the JSR-14 prototype compiler (which you can download right now for free; see Resources).

My JiBX XML data binding framework is a fast and flexible tool for translating Java objects to and from XML documents. Most frameworks for XML data binding take the approach of generating Java classes from XML schemas, with framework code to implement the binding built into the generated classes. JiBX instead uses classworking techniques to enhance compiled Java class files with added methods to implement the bindings. This approach allows JiBX to work with both existing classes and generated classes, and also gives the benefits of very fast operation with a relatively small runtime.

Many Diagnosing Java code installments have been written since this developerWorks column debuted in February 2001. You can browse through all the Diagnosing Java code columns, starting with the most recent, in these continually refreshed lists:

I'll say it one more time: you can (and should) use test coverage tools as part of your testing process, but don't be fooled by the coverage report. The main thing to understand about coverage reports is that they're best used to expose code that hasn't been adequately tested. When you examine a coverage report, seek out the low values and understand why that particular code hasn't been tested fully. Knowing this, developers, managers, and QA professionals can use test coverage tools where they really count -- namely for three common scenarios:

Such behavior is common in distributed and multithreaded systems. In these cases, the non-deterministic nature of the program is often the cause. But in the case of GUIs, there is another common cause -- the Liar View bug pattern.

FIT shines by helping organizations avoid the miscommunications, misunderstandings, and misinterpretations that often occur between business clients and developers. Bringing the people who write requirements into the testing process early is an obvious way to catch problems and fix them before they become the stuff of development nightmares. What's more, FIT is completely compatible with already entrenched technologies like JUnit. In fact, as I've shown here, JUnit and FIT complement each other beautifully. Make this a stellar year in your pursuit of code quality -- by resolving to get FIT!

This type of program is highly susceptible to a crash caused by corrupt internal data. I call this bug pattern the Saboteur Data pattern because such data can stay in the system indefinitely, much like Cold War sleeper spies, causing no trouble until the particular bit of data is accessed. The corrupt data then explodes like a bomb.

Code reuse works best when the original system was designed to be extensible. Otherwise, the difficulties of reusing code can easily negate any productivity gained. But designing for extensibility adds all sorts of new considerations to software design.

All but the most trivial of programs manipulate some types of data. Static type systems provide a way to ensure that a program doesn't manipulate data of a given type inappropriately. One of the advantages of the Java language is that it is strongly typed, so that the possibility of a type error is eliminated before the program is ever run. As developers, we can use this type system to produce more robust and bug-free code. Often, though, the type system is not used to its full potential.

Before discussing any other problems, we should point out that, like the feature extensions of generic types discussed last month, support for mixins can't be added to the Java language using the simple type erasure strategy used by JSR-14 and Tiger.

I call bugs that fit this pattern split cleaners because the cleanup code for the resource is split along the various possible execution paths. Because the cleanup code along each path is likely to be identical, most split cleaners are also examples of rogue tiles. (Rogue tiles are what I call bugs resulting from first copying and pasting code, and then forgetting to appropriately modify all copies of the code when a change is made. For more on rogue tiles, see my article, "Bug patterns: An introduction.")
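
A minimal sketch of a split cleaner and its fix (the Resource class is a stand-in invented for illustration; real code would hold a stream or connection):

```java
// A toy resource so the example is self-contained.
class Resource {
    boolean closed = false;
    int read() { return 41; }
    void close() { closed = true; }
}

class Cleanup {
    // Split cleaner: the close() call is duplicated on each execution path.
    // The copies are identical -- a rogue tile in waiting -- and a new path
    // or an exception can easily miss the cleanup entirely.
    static int useBadly(Resource r, boolean doubleIt) {
        if (doubleIt) {
            int v = r.read() * 2;
            r.close();            // cleanup, copy #1
            return v;
        }
        int v = r.read() + 1;
        r.close();                // cleanup, copy #2
        return v;
    }

    // The fix: one cleanup site that runs on every path, exceptions included.
    static int use(Resource r, boolean doubleIt) {
        try {
            return doubleIt ? r.read() * 2 : r.read() + 1;
        } finally {
            r.close();            // single cleanup point
        }
    }
}
```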

This month kicks off my new series on Java classworking tools. For the first installment, I'm covering a pair of related tools named Hansel and Gretel. These both address the issue of code coverage, determining which code is actually run during an execution of your application. Even though they're designed for very different situations, both Hansel and Gretel have some unique and interesting features that set them apart from other tools of this type.

Even knowing this, we're nowhere near the critical mass that would make writing tests before writing code a standard practice. Just as TDD was an evolutionary next step extending from Extreme Programming, which pushed unit-testing frameworks into the limelight, evolutionary leaps are waiting to be made from the foundation that is TDD. This month, I invite you to join me as I make the evolutionary leap from TDD to its slightly more intuitive cousin: behavior-driven development (BDD).

Next time, I'll discuss Jam, an extension of the Java language that allows for mixin-based programming. Just as Jiazzi provides a way to decouple package dependencies, mixins provide a way to decouple class dependencies. As you might have guessed, mixins provide us with yet another powerful mechanism for testing a program.

This article gives instructions on how to collect data to debug the core dump problem. If you complete the steps in this article before contacting the support center, it will expedite a solution, since you will already have the data. Those steps include setting AIX environment variables to enable a full core dump, setting Java code to disable JVM signal handling, and collecting the core file and its associated libraries.

We don't have to search far to find many useful invariants that can help to keep bugs out of our programs. In fact, we can augment our efforts to eliminate some of the most common patterns of bugs through the use of such temporal logic assertions. In this article, we'll examine some of the bug patterns most positively affected by the use of temporal logic. We'll be using the following bug patterns as examples:

XDoclet can easily be one of the more versatile cross-technology code-generation tools in your Java programming toolbox. Unfortunately, developers often overlook XDoclet's general utility and use it only when it's bundled as a hidden element of a larger development framework or an IDE. XDoclet is often seen as difficult to apply to custom solutions. This article aims to debunk that myth, stripping XDoclet of its usual trappings of complexity and revealing how you can use this code-generation engine to your advantage.

J2SE 1.5 -- code-named "Tiger" -- is scheduled for release near the end of 2003 and will include generic types (as previewed in the JSR-14 prototype compiler, available for download right now). In Part 1, we discussed the basics of generic types and why they will be an important and much needed addition to the Java language. We also touched upon how the incarnation of generic types scheduled for Tiger includes several "kinks" that limit the contexts in which generic types can be used.

Effective debugging begins with good programming. Designing a program to be easy to maintain is one of the most difficult challenges a programmer faces, in part because programs are often maintained by programmers other than those who originated the code. To maintain such programs effectively, new programmers have to be able to quickly learn how the program works, a task that's done most easily if small parts of the program can be understood in isolation from the whole.

Accessors -- member functions that directly manipulate the value of fields -- come in two flavors: setters and getters. A setter modifies the value of a field, whereas a getter obtains its value. Although accessors add minimal overhead to your code, the loss in performance is often trivial compared to other factors (such as questionable database designs). Accessors help to hide the implementation details of your classes and, thus, increase the robustness of your code. By having, at most, two control points from which a field is accessed, one setter and one getter, you are able to increase the maintainability of your classes by minimizing the points at which changes need to be made.
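
A minimal illustration of the two control points (the class and field are invented for the example). Because the setter is the only place the field changes, validation lives in exactly one spot; because the getter is the only place it's read, the backing representation could change without touching any client.

```java
import java.math.BigDecimal;

class Employee {
    private BigDecimal salary;

    // Setter: the single point of modification, so validation (or change
    // notification) lives in one place.
    public void setSalary(BigDecimal salary) {
        if (salary.signum() < 0)
            throw new IllegalArgumentException("salary must be non-negative");
        this.salary = salary;
    }

    // Getter: the single point of access, hiding the representation.
    public BigDecimal getSalary() {
        return salary;
    }
}
```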

I'm not ashamed to admit that my gut reaction to seeing a block of complex code is fear and trembling. In fact, I'll go so far as to say that you should tremble a little upon encountering extensive methods and sprawling classes. Casting about for an exit sign in these moments is not only perfectly human, it shows good developer instinct. Overly complex code is hard to test and maintain, which means it usually has a higher incidence of defects.

This month, I'll first show you why you don't want to use String comparisons to verify the structure and content of XML documents. Then I'll introduce XMLUnit, an XML validation tool created by and for Java developers, and show you how to use it to validate XML documents.

Although no formal specification (akin to that of ML) exists for the Java language, a great deal of care was put into the development of a precise informal specification. The language is typically compiled to bytecode for the JVM, which itself is well specified (although some ambiguities in that specification have been discovered by formal analysis). Additionally, the Java APIs are all specified as part of the platform. This results in an unprecedented level of portability for Java code.

There are lots of other invariants we could specify. How do we expect a stack to handle multiple push operations? What behavior do we expect with multiple threads? It is difficult to enforce invariants such as these programmatically. We could (and should) mention them in the documentation, but a developer writing an implementation could easily ignore them. If that happens, then a client that relies on such invariants will not work with such an implementation, and we'll have a bug. I call bugs of this pattern fictitious implementations because I place the blame for them squarely on the implementation rather than the client. Like any bug that deserves its own pattern, a fictitious implementation may not be immediately apparent, but can lurk hidden until some uncommon execution path uncovers it.
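
One way to make such an invariant executable rather than merely documented is a small check like the sketch below (the checkPushPop helper is my own name, not part of any published interface): after push(x), pop() must yield x.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// A sketch of turning one stack invariant into executable code:
// after push(x), pop() must return x. A fictitious implementation
// that violates this is caught the moment the check runs.
public class StackInvariant {
    public static <T> boolean checkPushPop(Deque<T> stack, T element) {
        stack.push(element);
        return element.equals(stack.pop());
    }

    public static void main(String[] args) {
        Deque<String> stack = new ArrayDeque<String>();
        System.out.println(checkPushPop(stack, "top"));
    }
}
```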

Even if you try to avoid using null flags in your own code, you will inevitably have to deal with legacy code that uses them. In fact, many of the Java library classes themselves, such as the Hashtable class and the BufferedReader class that we used above, use null flags. When using such classes, you can avoid bugs by explicitly checking whether an operation will return null before performing it. For example, with Hashtables, I always test with containsKey before calling get. But, even with such preventative measures, this bug pattern is one of the most common patterns encountered.
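
The containsKey-before-get discipline looks like the sketch below (the lookupOrDefault helper name is mine, not from the article):

```java
import java.util.Hashtable;

// Guarding against Hashtable's null-flag convention: test with
// containsKey before calling get, so a missing key is handled
// explicitly instead of surfacing as a surprise null.
public class NullFlagGuard {
    public static int lookupOrDefault(Hashtable<String, Integer> table,
                                      String key, int fallback) {
        if (table.containsKey(key)) {
            return table.get(key); // safe: the key is known to exist
        }
        return fallback; // never dereference a null "flag" value
    }

    public static void main(String[] args) {
        Hashtable<String, Integer> table = new Hashtable<String, Integer>();
        table.put("answer", 42);
        System.out.println(lookupOrDefault(table, "answer", -1));
        System.out.println(lookupOrDefault(table, "missing", -1));
    }
}
```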

There is one caveat to this approach, however: it's really easy to encounter broken dispatches this way. Remember them? The Broken Dispatch bug pattern occurs when we accidentally overload a method rather than overriding its implementation in the parent class. With depth-first visitors, this is especially easy to do.
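
A minimal sketch of the pattern (class names are mine): Child means to override Parent's method but changes the parameter type, so it overloads instead, and calls through a Parent reference never reach it.

```java
// The Broken Dispatch bug pattern in miniature.
class Parent {
    String describe(Object o) {
        return "parent";
    }
}

class Child extends Parent {
    // Oops: a different parameter type, so this OVERLOADS describe
    // rather than overriding it. Adding @Override here would turn
    // the mistake into a compile-time error.
    String describe(String s) {
        return "child";
    }
}

public class DispatchDemo {
    public static void main(String[] args) {
        Parent p = new Child();
        // Only describe(Object) is visible through a Parent reference,
        // and Child never overrode it, so this prints "parent".
        System.out.println(p.describe("x"));
    }
}
```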

Granted, static types are not a free lunch; they can be tedious to work with at times. However, if our main concern is to keep bugs out of our code, then, taken as a whole, Java programming is better off for having and using static types. Why? Static type checking:

While a good deal of code decompilation is completely aboveboard, the fact is that a good decompiler is one of the essential tools of software piracy. As such, the existence of cheap (or free) decompilation tools for Java code is a serious problem, especially for developers working in the commercial, closed-source arena.

Listing 2. An assertion to test a stack's interface. This assertion helps catch fictitious implementations of the code in Listing 1.

So, as the example in Listing 3 shows, we cannot expect static compilers to perform transformation of tail recursion on Java code while preserving the semantics of the language. Instead, we must rely on dynamic compilation by the JIT. Depending on the JVM, the JIT may or may not do this.
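
To make the transformation concrete, here is a sketch of my own (not the article's Listing 3): a tail-recursive sum and the loop a transforming compiler would produce, written out by hand since javac will not do it for us.

```java
// sumTo is tail-recursive: the recursive call is the last action, so
// it is exactly the shape that tail-call elimination could turn into
// a loop. sumToIterative is that transformation done by hand.
public class TailCall {
    public static long sumTo(long n, long acc) {
        if (n == 0) return acc;
        return sumTo(n - 1, acc + n);
    }

    public static long sumToIterative(long n) {
        long acc = 0;
        while (n > 0) {
            acc += n;
            n--;
        }
        return acc;
    }

    public static void main(String[] args) {
        System.out.println(sumTo(100, 0));
        System.out.println(sumToIterative(100));
    }
}
```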

Automatic generation of code documentation doesn't have to be a long and arduous process. Using a free tool that all developers using the JDK have at their disposal, you can create sophisticated-looking HTML pages for all your classes. Not only will they look impressive, they're also handy during development. Javadoc-produced documentation is particularly useful if you're working with other developers who are changing and "enhancing" classes by adding new methods and member variables. This way the entire development team can understand any changes.

XDoclet is a template-driven development tool that allows developers to generate code, configuration files, or even documentation. This article will demonstrate how to download, install, configure, and use XDoclet to take the drudgery out of writing J2EE code.

While learning Java, I often wished there was a convenient way to find out the constructors, methods and fields of a given Java class. Java's documentation is very good, of course, but you have to follow many links before you finally reach your destination. So I took some time out and worked on a handy solution. The Method Finder is an application that shows you the constructors, methods and fields of a class. It greatly helped me. I hope it will do so for you, too.
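
The heart of such a tool is core reflection. This small sketch (my own, not the Method Finder's actual source) collects the constructors, methods, and fields of any class by name:

```java
import java.lang.reflect.Constructor;
import java.lang.reflect.Field;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

// Collect the constructors, methods, and fields of a class via
// java.lang.reflect, the same API a Method Finder would build on.
public class MethodFinder {
    public static List<String> describe(Class<?> cls) {
        List<String> lines = new ArrayList<String>();
        for (Constructor<?> c : cls.getDeclaredConstructors()) {
            lines.add("constructor: " + c.getName());
        }
        for (Method m : cls.getDeclaredMethods()) {
            lines.add("method: " + m.getName());
        }
        for (Field f : cls.getDeclaredFields()) {
            lines.add("field: " + f.getName());
        }
        return lines;
    }

    public static void main(String[] args) {
        for (String line : describe(Boolean.class)) {
            System.out.println(line);
        }
    }
}
```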

If you've been paying any attention at all, you've likely been inundated with PR about the features and benefits of Visual Studio 2005. There's the new high-end Visual Studio Team System that offers a variety of features for teams of architects, developers, and testers working together on huge projects. There are the new free Express editions for hobbyists and beginning developers. There are little fit-and-finish things, like on-screen arrows to show where a toolwindow will dock when you drop them. There are advances in code generation, in the underlying languages, in rapid development with ASP.NET, and much more. It would take many articles the size of this one just to list the improvements in Visual Studio 2005.

Most refactorings are this small, and should be this small. Having unit tests that run rapidly means that you can incrementally make simple changes to your code. Each incremental change is cheap. If you keep this attitude from day one, refactoring can usually remain inexpensive and just part of how you craft software. Your software slowly gets better, or at least it doesn't get any worse.

Minimizing the size of changes has several good effects. First, if you are working with a system that locks files, you can avoid locking other people out for longer than necessary. Second, by keeping your commits small, you vastly lower the chance of needing to merge two incompatible versions of code. Finally, by working in small chunks, you can keep the comments in the source code control system targeted and informative.

Some impedance mismatches intrude, however. For example, it took two years to bring Java to the AS/400, the main stumbling block being that OS/400 lacked thread support. In addition, once the engineers on the porting project decided, for performance reasons, to implement the AS/400 JVM in the lower-level SLIC layer, rather than running the JVM under OS/400, they were faced with the fact that the Javasoft JVM was written in C, while SLIC was written in C++. So the engineers had to make several enhancements to implement a SLIC-level JVM.

I've written a class that extends DefaultTableModel but allows column sorting (ascending and descending). It is based on Sun's Swing tutorial example, but improved: it is simpler and easier to use, and it uses the Collections class's sort method instead of implementing its own.

Recently, I needed some code that would quickly select all the checkboxes on a form. Using the gift of recursion, I wrote the following code. Later, I realized it could be adapted to many other functions (such as clearing text boxes, resetting the selected indexes in multiple list boxes, etc.). Here it is.
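
A sketch of the idea (the CheckAll class name is mine): walk the component tree recursively so every checkbox is selected, however deeply it is nested inside sub-panels.

```java
import java.awt.Component;
import java.awt.Container;
import javax.swing.JCheckBox;

// Recursively select every JCheckBox inside a container hierarchy.
// The same walk can be adapted to clear text boxes, reset list
// selections, and so on.
public class CheckAll {
    public static void selectAllCheckBoxes(Container container) {
        for (Component child : container.getComponents()) {
            if (child instanceof JCheckBox) {
                ((JCheckBox) child).setSelected(true);
            } else if (child instanceof Container) {
                // Recurse into nested panels, toolbars, etc.
                selectAllCheckBoxes((Container) child);
            }
        }
    }
}
```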

This user has graciously shared a little bag of tricks with us. The first class, IndexedListModel, allows you to have a user object for each element in the list. All of the add/addElement/insertElementAt/set/setElementAt methods take a user object as a parameter. You can also call setUserData(int index, Object o) to set the user's object, and retrieve it by calling getUserObject(int row) or getUserObject(Object o).

Judging from posts in discussion forums and from emails I receive, I am sure it would help. Many developers encounter problems with networking code that would be trivial to solve if they could see how well-known applications implement the protocols.

Like most developers, I have my preference regarding conventions. I want my opening brackets on the same line. My file names? I can't stand upper case letters, so give me dash-separated .jsp files. As for tabs, forget it; spaces are undoubtedly the way to go. While these are my preferences, I have one that's even more important—standardization. Standardization applies to more than just syntax layout; it applies to names of classes, methods, and files, the organization of code, and commit messages. And yes, standardization includes coding conventions as well.

Lines of code per day -- This is the classic definition of software productivity for individual programmers. Unfortunately, as other authors have noted as well, the definition makes little sense. Imagine a programmer named Fred Fastfinger who writes 5000 lines of code, on average, each workday. Now assume Fred's code is of such poor quality that, for each day of work he does, someone else must spend five days debugging the code. Is Fred highly productive? Certainly not. What we want is many lines of good code.

This is a utility for all those who hate coding JavaBeans -- especially the javadoc for get and set methods. BeanBuilder enables you to generate JavaBean source code by declaring class attributes in a descriptor file.

Configuration, rather than a quick "if...then", may be the answer. The approach focuses on distilling individual business logic into a set of values that can be stored as configuration rather than hard-coded into the logic of the program. In short, configuration converts hard-coded logic into data that the program can operate on.

Most software is poorly designed and built. This statement comes as no surprise to anyone in the software industry and is elucidated well by Charles Mann in his popular article "Why Software Is So Bad". I have proposed a framework for better software architectures in my article "Most Software Stinks", which served as one of the sources for Mann's piece. But even if everyone accepted my proposal for improving software design (which they don't), there is still a problem. How do we get software designers and programmers to raise the quality of their work? Because few people ever see their immediate product (the source code), what would motivate engineers to do better? The answer is that all source code should be open and included in every software release. This single policy change would have a profound impact on the quality of software systems worldwide.

Code reviews in most organizations are a painful experience for everyone involved. The developer often feels like it's a bashing session designed to beat down their will. The development leads are often confused about what is important to point out and what isn't. And other developers who are involved often use the review as a chance to show how much better they are by pointing out possible issues in someone else's code.

This simple routine checks the hex value of the mouse button being pressed and prints to standard output which button was pressed. It can be adapted to perform other tasks when the left or right mouse button is pressed. Note: this only works for two-button mice.
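
A sketch along those lines (class name mine): modern code can ask MouseEvent.getButton() for the button number instead of masking the raw modifier bits by hand.

```java
import java.awt.event.MouseAdapter;
import java.awt.event.MouseEvent;

// Report which mouse button was pressed. Attach an instance to any
// component with addMouseListener(new ButtonReporter()).
public class ButtonReporter extends MouseAdapter {
    public static String name(MouseEvent e) {
        switch (e.getButton()) {
            case MouseEvent.BUTTON1: return "left";
            case MouseEvent.BUTTON2: return "middle";
            case MouseEvent.BUTTON3: return "right";
            default: return "unknown";
        }
    }

    @Override
    public void mousePressed(MouseEvent e) {
        System.out.println(name(e) + " button pressed");
    }
}
```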

Sometimes I needed to send the same data to several output streams. The most trivial example is logging the data written to an output stream for debugging purposes. I thought it would be useful to have a class that does this job. Since the whole hierarchy of streams relies heavily on the decorator pattern, I decided to use a similar approach.
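
A minimal sketch of that decorator (the TeeOutputStream name is mine, chosen by analogy with the Unix tee command): every byte written is forwarded to both underlying streams, for example a file and a debugging log.

```java
import java.io.IOException;
import java.io.OutputStream;

// Decorator that duplicates everything written to it onto two
// underlying streams, in the style of FilterOutputStream.
public class TeeOutputStream extends OutputStream {
    private final OutputStream first;
    private final OutputStream second;

    public TeeOutputStream(OutputStream first, OutputStream second) {
        this.first = first;
        this.second = second;
    }

    @Override
    public void write(int b) throws IOException {
        first.write(b);
        second.write(b);
    }

    @Override
    public void flush() throws IOException {
        first.flush();
        second.flush();
    }

    @Override
    public void close() throws IOException {
        try {
            first.close();
        } finally {
            second.close(); // close the second even if the first fails
        }
    }
}
```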

Here's some code I think others might find useful. It's a JComboBox that allows the user to select the date. The code (datecombobox1.java) contains a main to offer an example of how to use it.

Writing good comments is more difficult than writing good code, and therefore identifying code whose commenting is poor is child's play. If you see nontrivial functions or methods lacking a comment at their beginning explaining what they do, you know you're in trouble. The same goes for global variables, class and structure fields, and code blocks implementing a complex algorithm: all should be accompanied by a comment. Note that I don't expect to see everything adorned with a comment: getter and setter methods, straightforward code, and many local variables are better left to explain themselves.

Comments are one of the most idiosyncratic parts of a programmer's style. Compilers will spot all sorts of code errors, but comments are completely unchecked. The programmer has presumably tested whatever code you're looking at, and it worked, at least for some limited test under some circumstances in the past. But the comments could be bald-faced lies, half-truths, or simply out of date. Many programmers choose to ignore the comments entirely in favor of reading the code.

Architecturally, the Source Editor is a collection of different types of editors, each of which contains features specific to certain kinds of files. For example, when you open a Java file, there is a syntax highlighting scheme specifically for Java files, along with code completion, refactoring, and other features specific to Java files. Likewise, when you open JSP, HTML, XML, .properties, deployment descriptor, and other types of files, you get a set of features specific to those files.

Josh: In case you don't have your Java Language Specification handy, >>>= is the assignment operator corresponding to unsigned right shift. Come back next week for the answers. Our thanks go to Ron Gabor, a reader from Herzliya, Israel, for sending us these fine puzzlers. If you want to see your name in print too, send your puzzlers to [email protected].
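
For readers reaching for the spec: a quick demonstration of the operator. Unlike >>=, the compound unsigned shift >>>= shifts zeros into the sign bit.

```java
// >>>= is the compound-assignment form of the unsigned right shift:
// it zero-fills from the left, so a negative int becomes a large
// positive one. >>= is the arithmetic shift, which keeps the sign.
public class ShiftDemo {
    public static int zeroFillHalve(int x) {
        x >>>= 1;
        return x;
    }

    public static int arithmeticHalve(int x) {
        x >>= 1;
        return x;
    }

    public static void main(String[] args) {
        System.out.println(arithmeticHalve(-8)); // -4
        System.out.println(zeroFillHalve(-8));   // 2147483644
    }
}
```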

Having looked at some of the nuts and bolts of network management technology, we now consider some of the problems of managing large networks. In many respects the large enterprise networks of today are reminiscent of the islands of automation that were common in manufacturing during the 1980s and 1990s. The challenge facing manufacturers was in linking together the islands of microprocessor-based controllers, PCs, minicomputers, and other components to allow end-to-end actions such as aggregated order entries leading to automated production runs. The hope was that the islands of automation could be joined so that the previously isolated intelligence could be leveraged to manufacture better products. Similar problems beset network operators at the beginning of the 21st century as traffic types and volumes continue to grow. In parallel with this, the range of deployed NMSs is also growing, and multiple NMSs add to operational expense.

This is a book about good programming. It is filled with code. We are going to look at code from every different direction. We'll look down at it from the top, up at it from the bottom, and through it from the inside out. By the time we are done, we're going to know a lot about code. What's more, we'll be able to tell the difference between good code and bad code. We'll know how to write good code. And we'll know how to transform bad code into good code.

Copyright 2009-2012 Demo Source and Support. All rights reserved.
All other trademarks are property of their respective owners.