BigDecimal and Unit Testing
We begin this chapter by addressing a problem with floating-point representation that is found in most languages: not every decimal fraction can be represented exactly as a binary fraction, as pointed out in Chapter 4, Language Fundamentals – Data Types and Variables. In most situations, floating point is accurate enough. But what happens when you must guarantee accuracy and precision? Then you must abandon the floating-point primitives and use the BigDecimal class.
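To make the problem concrete, here is a minimal sketch (the class name DecimalDemo is just an illustrative placeholder) that contrasts double arithmetic with BigDecimal:

    import java.math.BigDecimal;

    public class DecimalDemo {
        public static void main(String[] args) {
            // 0.1 has no exact binary representation, so adding
            // doubles accumulates a small error.
            double d = 0.1 + 0.1 + 0.1;
            System.out.println(d);   // prints 0.30000000000000004

            // A BigDecimal built from a String holds the decimal
            // value exactly, so the sum is exact as well.
            BigDecimal b = new BigDecimal("0.1")
                    .add(new BigDecimal("0.1"))
                    .add(new BigDecimal("0.1"));
            System.out.println(b);   // prints 0.3

            // Beware: new BigDecimal(0.1) inherits the double's
            // inexact value. Prefer the String constructor or
            // BigDecimal.valueOf when exactness matters.
            System.out.println(new BigDecimal(0.1));
        }
    }

Note the design choice in the sketch: the exactness comes from constructing BigDecimal values from Strings, not from doubles, because a double is already inexact by the time the constructor sees it.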
How do you know that the code you have just written works? The compiler can spot syntax errors, but an error-free compilation only tells you that the compiler is happy. Does the code actually work? How does it handle invalid input, a lost database connection, or edge cases? Always be aware that, for most projects you work on, the most unreliable component of any system is its end users. You cannot fix them, but you can design and implement your code to handle the...