Java Coding Problems

Objects, Immutability, Switch Expressions, and Pattern Matching

This chapter includes 30 problems, tackling, among others, some lesser-known features of java.util.Objects, some interesting aspects of immutability, the newest features of switch expressions, and deep coverage of the cool pattern matching capabilities of instanceof and switch expressions.

At the end of this chapter, you’ll be up to date with all these topics, which are non-optional in any Java developer’s arsenal.

Problems

Use the following problems to test your programming prowess on Objects, immutability, switch expressions, and pattern matching. I strongly encourage you to give each problem a try before you turn to the solutions and download the example programs:

  1. Explaining and exemplifying UTF-8, UTF-16, and UTF-32: Provide a detailed explanation of what UTF-8, UTF-16, and UTF-32 are. Include several snippets of code to show how these work in Java.
  2. Checking a sub-range in the range from 0 to length: Write a program that checks whether the given sub-range [given start, given start + given end) is within the bounds of the range from [0, given length). If the given sub-range is not in the [0, given length) range, then throw an IndexOutOfBoundsException.
  3. Returning an identity string: Write a program that returns a string representation of an object without calling the overridden toString() or hashCode().
  4. Hooking unnamed classes and instance main methods: Give a quick introduction to JDK 21 unnamed classes and instance main methods.
  5. Adding code snippets in Java API documentation: Provide examples of adding code snippets in Java API documentation via the new @snippet tag.
  6. Invoking default methods from Proxy instances: Write several programs that invoke interface default methods from Proxy instances in JDK 8, JDK 9, and JDK 16.
  7. Converting between bytes and hex-encoded strings: Provide several snippets of code for converting between bytes and hex-encoded strings (including byte arrays).
  8. Exemplify the initialization-on-demand holder design pattern: Write a program that implements the initialization-on-demand holder design pattern in the classical way (before JDK 16) and another program that implements this design pattern based on the fact that, from JDK 16+, Java inner classes can have static members and static initializers.
  9. Adding nested classes in anonymous classes: Write a meaningful example that uses nested classes in anonymous classes (pre-JDK 16, and JDK 16+).
  10. Exemplify erasure vs. overloading: Explain in a nutshell what type erasure in Java and polymorphic overloading are, and exemplify how they work together.
  11. Xlinting default constructors: Explain and exemplify the JDK 16+ hint for classes with default constructors, -Xlint:missing-explicit-ctor.
  12. Working with the receiver parameter: Explain the role of the Java receiver parameter and exemplify its usage in code.
  13. Implementing an immutable stack: Provide a program that creates an immutable stack implementation from zero (implement isEmpty(), push(), pop(), and peek() operations).
  14. Revealing a common mistake with Strings: Write a simple use case of strings that contain a common mistake (for instance, related to the String immutability characteristic).
  15. Using the enhanced NullPointerException: Exemplify, from your experience, the top 5 causes of NullPointerException and explain how JDK 14 improves NPE messages.
  16. Using yield in switch expressions: Explain and exemplify the usage of the yield keyword with switch expressions in JDK 13+.
  17. Tackling the case null clause in switch: Write a bunch of examples to show different approaches for handling null values in switch expressions (including JDK 17+ approaches).
  18. Taking on the hard way to discover equals(): Explain and exemplify how equals() is different from the == operator.
  19. Hooking instanceof in a nutshell: Provide a brief overview with snippets of code to highlight the main aspect of the instanceof operator.
  20. Introducing pattern matching: Provide a theoretical dissertation including the main aspects and terminology for pattern matching in Java.
  21. Introducing type pattern matching for instanceof: Provide the theoretical and practical support for using the type pattern matching for instanceof.
  22. Handling the scope of a binding variable in type patterns for instanceof: Explain in detail, including snippets of code, the scope of binding variables in type patterns for instanceof.
  23. Rewriting equals() via type patterns for instanceof: Exemplify in code the implementation of equals() (including for generic classes) before and after type patterns for instanceof have been introduced.
  24. Tackling type patterns for instanceof and generics: Provide several examples that use the combo type patterns for instanceof and generics.
  25. Tackling type patterns for instanceof and streams: Can we use type patterns for instanceof and the Stream API together? If yes, provide at least an example.
  26. Introducing type pattern matching for switch: Type patterns are available for instanceof but are also available for switch. Provide here the theoretical headlines and an example of this topic.
  27. Adding guarded pattern labels in switch: Provide a brief coverage of guarded pattern labels in switch for JDK 17 and 21.
  28. Dealing with pattern label dominance in switch: Pattern label dominance in switch is a cool feature, so exemplify it here in a comprehensive approach with plenty of examples.
  29. Dealing with completeness (type coverage) in pattern labels for switch: This is another cool topic for switch expressions. Explain and exemplify it in detail (theory and examples).
  30. Understanding the unconditional patterns and nulls in switch expressions: Explain how null values are handled by unconditional patterns of switch expressions before and after JDK 19.

The following sections describe solutions to the preceding problems. Remember that there usually isn’t a single correct way to solve a particular problem. Also remember that the explanations shown here include only the most interesting and important details needed to solve the problems. Download the example solutions to see additional details and to experiment with the programs at https://github.com/PacktPublishing/Java-Coding-Problems-Second-Edition/tree/main/Chapter02.

38. Explaining and exemplifying UTF-8, UTF-16, and UTF-32

Character encoding/decoding is important for browsers, databases, text editors, filesystems, networking, and so on, so it’s a major topic for any programmer. Check out the following figure:

Figure 2.1: Representing text with different char sets

In Figure 2.1, we see several Chinese characters represented in UTF-8, UTF-16, and ANSI on a computer screen. But, what are these? What is ANSI? What is UTF-8 and how did we get to it? Why don’t these characters look normal in ANSI?

Well, the story may begin with computers trying to represent characters (such as letters from the alphabet or digits or punctuation marks). The computers understand/process everything from the real world as a binary representation, so as a sequence of 0 and 1. This means that every character (for instance, A, 5, +, and so on) has to be mapped to a sequence of 0 and 1.

The process of mapping a character to a sequence of 0 and 1 is known as character encoding or simply encoding. The reverse process of un-mapping a sequence of 0 and 1 to a character is known as character decoding or simply decoding. Ideally, an encoding-decoding cycle should return the same character; otherwise, we obtain something that we don’t understand or we cannot use.

For instance, a certain Chinese character should be encoded in the computer’s memory as a sequence of 0 and 1. Next, when this sequence is decoded, we expect back the same Chinese character. In Figure 2.1, this happens in the left and middle screenshots, while in the right screenshot, the returned characters are not the expected ones. A Chinese speaker will not understand this (actually, nobody will), so something went wrong!

Of course, we don’t have only Chinese characters to represent. We have many other sets of characters grouped in alphabets, emoticons, and so on. A set of characters has well-defined content (for instance, an alphabet has a certain number of well-defined characters) and is known as a character set or, in short, a charset.

Having a charset, the problem is to define a set of rules (a standard) that clearly explains how the characters of this charset should be encoded/decoded in the computer memory. Without having a clear set of rules, the encoding and decoding may lead to errors or indecipherable characters. Such a standard is known as an encoding scheme.

One of the first encoding schemes was ASCII.

Introducing ASCII encoding scheme (or single-byte encoding)

ASCII stands for American Standard Code for Information Interchange. This encoding scheme relies on a 7-bit binary system. In other words, each character that is part of the ASCII charset (http://ee.hawaii.edu/~tep/EE160/Book/chap4/subsection2.1.1.1.html) should be representable (encoded) on 7 bits. A 7-bit number can be a decimal between 0 and 127, as in the next figure:

Figure 2.2: ASCII charset encoding

So, ASCII is an encoding scheme based on a 7-bit system that supports 128 different characters. But, we know that computers operate on bytes (octets) and a byte has 8 bits. This means that ASCII is a single-byte encoding scheme that leaves a bit free for each byte. See the following figure:

Figure 2.3: The highlighted bit is left free in ASCII encoding

In ASCII encoding, the letter A is 65, the letter B is 66, and so on. In Java, we can easily check this via the existing API, as in the following simple code:

int decimalA = "A".charAt(0); // 65
String binaryA = Integer.toBinaryString(decimalA); // 1000001

Or, let’s see the encoding of the text Hello World. This time, we added the free bit as well, so the result will be 01001000 01100101 01101100 01101100 01101111 0100000 01010111 01101111 01110010 01101100 01100100:

char[] chars = "Hello World".toCharArray();
for(char ch : chars) {
  System.out.print("0" + Integer.toBinaryString(ch) + " ");
}

If we perform a match, then we see that 01001000 is H, 01100101 is e, 01101100 is l, 01101111 is o, 0100000 is space, 01010111 is W, 01110010 is r, and 01100100 is d. So, besides letters, the ASCII encoding can represent the English alphabet (upper and lower case), digits, space, punctuation marks, and some special characters.

Besides the core ASCII for English, we also have ASCII extensions, which are basically variations of the original ASCII to support other alphabets. Most probably, you’ve heard about the ISO-8859-1 (known as ISO Latin 1), which is a famous ASCII extension. But, even with ASCII extensions, there are still a lot of characters in the world that cannot be encoded yet. There are countries that have a lot more characters than ASCII can encode, and even countries that don’t use alphabets. So, ASCII has its limitations.

I know what you are thinking … let’s use that free bit (2⁷ + 127). Yes, but even so, we can go up to 256 characters. Still not enough! It is time to encode characters using more than 1 byte.

Introducing multi-byte encoding

In different parts of the world, people started to create multi-byte encoding schemes (commonly, 2 bytes). For instance, speakers of languages with many characters, such as Chinese and Japanese, created encoding schemes such as Big5 and Shift-JIS, which use 1 or 2 bytes to represent characters.

But, what happens when most of the countries come up with their own multi-byte encoding schemes trying to cover their special characters, symbols, and so on? Obviously, this leads to a huge incompatibility between the encoding schemes used in different countries. Even worse, some countries have multiple encoding schemes that are totally incompatible with each other. For instance, Japan has three different incompatible encoding schemes, which means that encoding a document with one of these encoding schemes and decoding with another will lead to a garbled document.

However, this incompatibility was not such a big issue before the Internet, which made it possible to share documents massively all around the globe using computers. At that moment, the incompatibility between the encoding schemes conceived in isolation (for instance, per country or geographical region) started to be painful.

It was the perfect moment for the Unicode Consortium to be created.

Unicode

In a nutshell, Unicode (https://unicode-table.com/en/) is a universal encoding standard capable of encoding/decoding every possible character in the world (we are talking about hundreds of thousands of characters).

Unicode needs more bytes to represent all these characters. But, Unicode didn’t get involved in this representation. It just assigned a number to each character. This number is named a code point. For instance, the letter A in Unicode is associated with the code point 65 in decimal, and we refer to it as U+0041. This is the prefix U+ followed by 65 in hexadecimal (0041). As you can see, in Unicode, A is 65, exactly as in the ASCII encoding. In other words, Unicode is backward compatible with ASCII. As you’ll see soon, this is big, so keep it in mind!

Early versions of Unicode contain characters having code points less than 65,535 (0xFFFF). Java represents these characters via the 16-bit char data type. For instance, the French ê (e with circumflex) is associated with the code point 234 in decimal, or U+00EA in hexadecimal. In Java, we can use charAt() to reveal this for any Unicode character with a code point less than 65,535:

int e = "ê".charAt(0);                // 234
String hexe = Integer.toHexString(e); // ea

We also may see the binary representation of this character:

String binarye = Integer.toBinaryString(e); // 11101010 = 234

Later, Unicode added more and more characters up to 1,114,112 (0x10FFFF). Obviously, the 16-bit Java char was not enough to represent these characters, and calling charAt() was not useful anymore.

Important note

Java 19+ supports Unicode 14.0. The java.lang.Character API supports Level 14 of the Unicode Character Database (UCD). In numbers, we have 47 new emojis, 838 new characters, and 5 new scripts. Java 20+ supports Unicode 15.0, which means 4,489 new characters for java.lang.Character.

In addition, JDK 21 has added a set of methods especially for working with emojis based on their code point. Among these methods, we have boolean isEmoji(int codePoint), boolean isEmojiPresentation(int codePoint), boolean isEmojiModifier(int codePoint), boolean isEmojiModifierBase(int codePoint), boolean isEmojiComponent(int codePoint), and boolean isExtendedPictographic(int codePoint). In the bundled code, you can find a small application showing you how to fetch all available emojis and check if a given string contains emoji. So, we can easily obtain the code point of a character via Character.codePointAt() and pass it as an argument to these methods to determine whether the character is an emoji or not.
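
For instance, a quick sketch of this idea (the string literal here is just a hypothetical example) could look as follows:

String text = "Loving Java \uD83D\uDE0D";
text.codePoints().forEach(cp -> {
  if (Character.isEmoji(cp)) {        // JDK 21+
    System.out.println("Emoji found: " + Character.toString(cp));
  }
});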

However, Unicode doesn’t get involved in how these code points are encoded into bits. This is the job of special encoding schemes within Unicode, such as the Unicode Transformation Format (UTF) schemes. Most commonly, we use UTF-32, UTF-16, and UTF-8.

UTF-32

UTF-32 is an encoding scheme for Unicode that represents every code point on 4 bytes (32 bits). For instance, the letter A (having code point 65), which can be encoded on a 7-bit system, is encoded in UTF-32 as in the following figure next to the other two characters:

Figure 2.4: Three characters sample encoded in UTF-32

As you can see in Figure 2.4, UTF-32 uses 4 bytes (fixed length) to represent every character. In the case of the letter A, we see that UTF-32 wasted 3 bytes of memory. This means that converting an ASCII file to UTF-32 will increase its size by 4 times (for instance, a 1KB ASCII file is a 4KB UTF-32 file). Because of this shortcoming, UTF-32 is not very popular.

Java doesn’t support UTF-32 as a standard charset; instead, it relies on surrogate pairs (introduced in the next section).

UTF-16

UTF-16 is an encoding scheme for Unicode that represents every code point on 2 or 4 bytes (not on 3 bytes). UTF-16 has a variable length and uses an optional Byte-Order Mark (BOM), but it is recommended to use UTF-16BE (BE stands for Big-Endian byte order), or UTF-16LE (LE stands for Little-Endian byte order). While more details about Big-Endian vs. Little-Endian are available at https://en.wikipedia.org/wiki/Endianness, the following figure reveals how the orders of bytes differ in UTF-16BE (left side) vs. UTF-16LE (right side) for three characters:

Figure 2.5: UTF-16BE (left side) vs. UTF-16LE (right side)

Since the figure is self-explanatory, let’s move forward. Now, we have to tackle a trickier aspect of UTF-16. We know that in UTF-32, we take the code point and transform it into a 32-bit number and that’s it. But, in UTF-16, we can’t do that every time because we have code points that don’t fit into 16 bits. This being said, UTF-16 uses so-called 16-bit code units. It can use 1 or 2 code units per code point. There are three types of code units, as follows:

  • A code point needs a single code unit: these are 16-bit code units (covering U+0000 to U+D7FF, and U+E000 to U+FFFF)
  • A code point needs 2 code units:
    • The first code unit is named high surrogate and it covers 1,024 values (U+D800 to U+DBFF)
    • The second code unit is named low surrogate and it covers 1,024 values (U+DC00 to U+DFFF)

A high surrogate followed by a low surrogate is named a surrogate pair. Surrogate pairs are needed to represent the so-called supplementary Unicode characters or characters having a code point larger than 65,535 (0xFFFF).

Characters such as the letter A (65) or the Chinese character with code point 26263 (U+6697) have code points that can be represented via a single code unit. The following figure shows these characters in UTF-16BE:

Figure 2.6: UTF-16 encoding of A and the Chinese character U+6697

This was easy! Now, let’s consider the following figure (encoding of the Unicode character Smiling Face with Heart-Shaped Eyes, U+1F60D):

Figure 2.7: UTF-16 encoding using a surrogate pair

The character from this figure has a code point of 128525 (or, 0x1F60D) and is represented on 4 bytes.

Check the first byte: the sequence of 6 bits, 110110, identifies a high surrogate.

Check the third byte: the sequence of 6 bits, 110111, identifies a low surrogate.

These 12 bits (identifying the high and low surrogates) can be dropped and we keep the rest of the 20 bits: 00001111011000001101. We can compute this number as 2⁰ + 2² + 2³ + 2⁹ + 2¹⁰ + 2¹² + 2¹³ + 2¹⁴ + 2¹⁵ = 1 + 4 + 8 + 512 + 1024 + 4096 + 8192 + 16384 + 32768 = 62989 (or, in hexadecimal, F60D).

Finally, we have to compute 0xF60D + 0x10000 = 0x1F60D, or in decimal, 62989 + 65536 = 128525 (the code point of this Unicode character). We have to add 0x10000 because code points that need 2 code units (a surrogate pair) are always in the supplementary range, 0x10000 to 0x10FFFF.
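
For reference, here is a minimal sketch showing how Java exposes this surrogate pair and how to recombine it into the code point:

String smiley = "\uD83D\uDE0D";            // U+1F60D written as a surrogate pair
char high = smiley.charAt(0);              // 0xD83D (high surrogate)
char low = smiley.charAt(1);               // 0xDE0D (low surrogate)
int cp = Character.toCodePoint(high, low); // 128525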

Java supports UTF-16, UTF-16BE, and UTF-16LE. Actually, UTF-16 is the native character encoding for Java.

UTF-8

UTF-8 is an encoding scheme for Unicode that represents every code point on 1, 2, 3, or 4 bytes. Having this 1- to 4-byte flexibility, UTF-8 uses space in a very efficient way.

Important note

UTF-8 is the most popular encoding scheme that dominates the Internet and applications.

For instance, we know that the code point of the letter A is 65 and it can be encoded using a 7-bit binary representation. The following figure represents this letter encoded in UTF-8:

Figure 2.8: Letter A encoded in UTF-8

This is very cool! UTF-8 has used a single byte to encode A. The first (leftmost) 0 signals that this is a single-byte encoding. Next, let’s see the Chinese character with code point 26263 (U+6697):

Figure 2.9: The Chinese character U+6697 encoded in UTF-8

The code point of this character is 26263, so UTF-8 uses 3 bytes to represent it. The first byte contains 4 bits (1110) that signal that this is a 3-byte encoding. The next two bytes start with 2 bits of 10. All these 8 bits can be dropped and we keep only the remaining 16 bits, which gives us the expected code point.

Finally, let’s tackle the following figure:

Figure 2.10: UTF-8 encoding with 4 bytes

This time, the first byte signals that this is a 4-byte encoding via 11110. The remaining 3 bytes start with 10. All these 11 bits can be dropped and we keep only the remaining 21 bits, 000011111011000001101, which gives us the expected code point, 128525.
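
As a quick sanity check (a sketch that assumes java.nio.charset.StandardCharsets is imported), we can ask Java for the UTF-8 byte counts of the three characters discussed so far:

System.out.println("A".getBytes(StandardCharsets.UTF_8).length);            // 1
System.out.println("\u6697".getBytes(StandardCharsets.UTF_8).length);       // 3
System.out.println("\uD83D\uDE0D".getBytes(StandardCharsets.UTF_8).length); // 4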

In the following figure you can see the UTF-8 template used for encoding Unicode characters:

Figure 2.11: UTF-8 template used for encoding Unicode characters

Did you know that 8 zeros in a row (00000000 – U+0000) are interpreted as NULL? A NULL represents the end of the string, so sending it “accidentally” will be a problem because the remaining string will not be processed. Fortunately, UTF-8 prevents this issue, and sending a NULL can be done only if we effectively send the U+0000 code point.

Java and Unicode

As long as we use characters with code points less than 65,535 (0xFFFF), we can rely on the charAt() method to obtain the code point. Here are some examples:

int cp1 = "A".charAt(0);                   // 65
String hcp1 = Integer.toHexString(cp1);    // 41
String bcp1 = Integer.toBinaryString(cp1); // 1000001
int cp2 = "\u6697".charAt(0);              // 26263
String hcp2 = Integer.toHexString(cp2);    // 6697
String bcp2 = Integer.toBinaryString(cp2); // 110011010010111

Based on these examples, we may write a helper method that returns the binary representation of strings having code points less than 65,535 (0xFFFF) as follows (you already saw the imperative version of the following functional code earlier):

public static String strToBinary(String str) {
   String binary = str.chars()
     .mapToObj(Integer::toBinaryString)
     .map(t -> "0" +  t)
     .collect(Collectors.joining(" "));
   return binary;
}

If you run this code against a Unicode character having a code point greater than 65,535 (0xFFFF), then you’ll get the wrong result. You’ll not get an exception or any kind of warning.
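
For instance, calling the helper with the supplementary character from Figure 2.7 (written here as a surrogate pair) silently returns the binary of the two surrogates instead of the binary of the code point 128525:

// 01101100000111101 01101111000001101 (0xD83D and 0xDE0D, not 128525)
strToBinary("\uD83D\uDE0D");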

So, charAt() covers only a subset of Unicode characters. For covering all Unicode characters, Java provides an API that consists of several methods. For instance, if we replace charAt() with codePointAt(), then we obtain the correct code point in all cases, as you can see in the following figure:

Figure 2.12: charAt() vs. codePointAt()

Check out the last example, c2. Since codePointAt() returns the correct code point (128525), we can obtain the binary representation as follows:

String uc = Integer.toBinaryString(c2); // 11111011000001101

So, if we need a method that returns the binary encoding of any Unicode character, then we can replace the chars() call with the codePoints() call. The codePoints() method returns the code points of the given sequence:

public static String codePointToBinary(String str) {
   String binary = str.codePoints()
      .mapToObj(Integer::toBinaryString)
      .collect(Collectors.joining(" "));
   return binary;
}

The codePoints() method is just one of the methods provided by Java for working with code points. The Java API also includes codePointAt(), offsetByCodePoints(), codePointCount(), codePointBefore(), codePointOf(), and so on. You can find several examples of them in the bundled code, next to this example of obtaining a String from a given code point:

String str1 = String.valueOf(Character.toChars(65)); // A
String str2 = String.valueOf(Character.toChars(128525));

The toChars() method gets a code point and returns the UTF-16 representation via a char[]. The string returned by the first example (str1) has a length of 1 and is the letter A. The second example returns a string of length 2 since the character having the code point 128525 needs a surrogate pair. The returned char[] contains both the high and low surrogates.
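
The difference between char-based and code point-based counting can be observed with a small sketch like the following:

String smiley = String.valueOf(Character.toChars(128525));
System.out.println(smiley.length());                           // 2 (a surrogate pair)
System.out.println(smiley.codePointCount(0, smiley.length())); // 1 (a single code point)
System.out.println(smiley.codePointAt(0));                     // 128525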

Finally, let’s have a helper method that allows us to obtain the binary representation of a string for a given encoding scheme:

public static String stringToBinaryEncoding(
      String str, String encoding) {
   final Charset charset = Charset.forName(encoding);
   final byte[] strBytes = str.getBytes(charset);
   final StringBuilder strBinary = new StringBuilder();
   for (byte strByte : strBytes) {
      for (int i = 0; i < 8; i++) {
        strBinary.append((strByte & 128) == 0 ? 0 : 1);
        strByte <<= 1;
      }
      strBinary.append(" ");
   }
   return strBinary.toString().trim();
}

Using this method is quite simple, as you can see in the following examples:

// 00000000 00000000 00000000 01000001
String r1 = Charsets.stringToBinaryEncoding("A", "UTF-32");
// 10010111 01100110
String r2 = Charsets.stringToBinaryEncoding("\u6697", 
               StandardCharsets.UTF_16LE.name());

You can practice more examples in the bundled code.

JDK 18 defaults the charset to UTF-8

Before JDK 18, the default charset was determined based on the operating system charset and locale (for instance, on a Windows machine, it could be windows-1252). Starting with JDK 18, the default charset is UTF-8 (Charset.defaultCharset() reports UTF-8). Or, having a PrintStream instance, we can find out the used charset via the charset() method (starting with JDK 18).
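
A quick way to check this on your machine (a sketch; it assumes java.nio.charset.Charset is imported) is:

System.out.println(Charset.defaultCharset()); // UTF-8 on JDK 18+
System.out.println(System.out.charset());     // the charset used by System.out (JDK 18+)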

But, the default charset can be explicitly set via the file.encoding and native.encoding system properties at the command line. For instance, you may need to perform such a modification to compile legacy code developed before JDK 18:

// the default charset is computed from native.encoding
java -Dfile.encoding=COMPAT
// the default charset is windows-1252
java -Dfile.encoding=windows-1252

So, since JDK 18, classes that use encoding (for instance, FileReader/FileWriter, InputStreamReader/OutputStreamWriter, PrintStream, Formatter, Scanner, and URLEncoder/URLDecoder) can take advantage of UTF-8 out of the box. For instance, using UTF-8 before JDK 18 for reading a file can be accomplished by explicitly specifying this charset encoding scheme as follows:

try ( BufferedReader br = new BufferedReader(new FileReader(
   chineseUtf8File.toFile(), StandardCharsets.UTF_8))) {
   ...
}

Accomplishing the same thing in JDK 18+ doesn’t require explicitly specifying the charset encoding scheme:

try ( BufferedReader br = new BufferedReader(
   new FileReader(chineseUtf8File.toFile()))) {
   ...
}

However, for System.out and System.err, JDK 18+ still uses the default system charset. So, if you are using System.out/err and you see question marks (?) instead of the expected characters, then most probably you should set UTF-8 via the new properties -Dstdout.encoding and -Dstderr.encoding:

-Dstderr.encoding=utf8 -Dstdout.encoding=utf8

Or, you can set them as environment variables to set them globally:

_JAVA_OPTIONS="-Dstdout.encoding=utf8 -Dstderr.encoding=utf8"

In the bundled code you can see more examples.

39. Checking a sub-range in the range from 0 to length

Checking that a given sub-range is in the range from 0 to the given length is a common check in a lot of problems. For instance, let’s consider that we have to write a function responsible for checking if the client can increase the pressure in a water pipe. The client gives us the current average pressure (avgPressure), the maximum pressure (maxPressure), and the amount of extra pressure that should be applied (unitsOfPressure).

But, before we can apply our secret algorithm, we have to check that the inputs are correct. So, we have to ensure that none of the following cases happens:

  • avgPressure is less than 0
  • unitsOfPressure is less than 0
  • maxPressure is less than 0
  • The range [avgPressure, avgPressure + unitsOfPressure) is out of the bounds represented by maxPressure

So, in code lines, our function may look as follows:

public static boolean isPressureSupported(
      int avgPressure, int unitsOfPressure, int maxPressure) {
  if(avgPressure < 0 || unitsOfPressure < 0 || maxPressure < 0
    || (avgPressure + unitsOfPressure) > maxPressure) {
    throw new IndexOutOfBoundsException(
           "One or more parameters are out of bounds");
  }
  // the secret algorithm
  return (avgPressure + unitsOfPressure) <
    (maxPressure - maxPressure/4);
}

Writing composite conditions such as ours is prone to accidental mistakes. It is better to rely on the Java API whenever possible. And, for this use case, it is possible! Starting with JDK 9, in java.util.Objects, we have the method checkFromIndexSize(int fromIndex, int size, int length), and starting with JDK 16, we also have a flavor for long arguments, checkFromIndexSize(long fromIndex, long size, long length). If we consider that avgPressure is fromIndex, unitsOfPressure is size, and maxPressure is length, then checkFromIndexSize() performs the argument validation and throws an IndexOutOfBoundsException if something goes wrong. So, we write the code as follows:

public static boolean isPressureSupported(
      int avgPressure, int unitsOfPressure, int maxPressure) {
  Objects.checkFromIndexSize(
    avgPressure, unitsOfPressure, maxPressure);
  // the secret algorithm
  return (avgPressure + unitsOfPressure) <
   (maxPressure - maxPressure/4);
}

In the code bundle, you can see one more example of using checkFromIndexSize().

Besides checkFromIndexSize(), in java.util.Objects, we can find several other companions that cover common composite conditions such as checkIndex(int index, int length) – JDK 9, checkIndex(long index, long length) – JDK 16, checkFromToIndex(int fromIndex, int toIndex, int length) – JDK 9, and checkFromToIndex(long fromIndex, long toIndex, long length) – JDK 16.

And, by the way, if we switch the context to strings, then JDK 21 provides an overload of the well-known String.indexOf(), capable of searching a character/substring in a given string between a given begin index and end index. The signature is indexOf(String str, int beginIndex, int endIndex) and it returns the index of the first occurrence of str, or -1 if str is not found. Basically, this is a neat version of s.substring(beginIndex, endIndex).indexOf(str) + beginIndex.
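
Here is a small sketch of this JDK 21 overload in action (the string is just a hypothetical example):

String s = "abcabc";
int i1 = s.indexOf("b", 2, 6); // 4  (the search is restricted to [2, 6))
int i2 = s.indexOf("b", 2, 4); // -1 (there is no "b" in [2, 4))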

40. Returning an identity string

So, what’s an identity string? An identity string is a string built from an object without calling the overridden toString() or hashCode(). It is equivalent to the following concatenation:

object.getClass().getName() + "@" 
  + Integer.toHexString(System.identityHashCode(object))

Starting with JDK 19, this string is wrapped in Objects.toIdentityString(Object object). Consider the following class (object):

public class MyPoint {
  private final int x;
  private final int y;
  private final int z;
  ...
  @Override
  public String toString() {
    return "MyPoint{" + "x=" + x + ", y=" + y 
                      + ", z=" + z + '}';
  }  
}

By calling toIdentityString(), we obtain something as follows:

MyPoint p = new MyPoint(1, 2, 3);
// modern.challenge.MyPoint@76ed5528
Objects.toIdentityString(p);

Obviously, the overridden MyPoint.toString() method was not called. If we print out the hash code of p, we get 76ed5528, which is exactly what toIdentityString() returned. Now, let’s override hashCode() as well:

@Override
public int hashCode() {
  int hash = 7;
  hash = 23 * hash + this.x;
  hash = 23 * hash + this.y;
  hash = 23 * hash + this.z;
  return hash;
}

This time, toIdentityString() returns the same thing, while our hashCode() returns 14ef3.
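
As a quick sanity check (a sketch based on the concatenation shown at the beginning of this problem), we can verify that toIdentityString() produces exactly the manual construction:

String manual = p.getClass().getName() + "@"
  + Integer.toHexString(System.identityHashCode(p));
System.out.println(manual.equals(Objects.toIdentityString(p))); // true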

41. Hooking unnamed classes and instance main methods

Imagine that you have to initiate a student into Java. The classical approach of introducing Java is to show the student a Hello World! example, as follows:

public class HelloWorld { 
  public static void main(String[] args) { 
    System.out.println("Hello World!");
  }
}

This is the simplest Java example, but it is not simple to explain to the student what public, static, or String[] are. The ceremony involved in this simple example may scare the student – if this is a simple example, then how does a more complex one look?

Fortunately, starting with JDK 21 (JEP 445), we have instance main methods, a preview feature that allows us to shorten the previous example as follows:

public class HelloWorld { 
  void main() { 
    System.out.println("Hello World!");
  }
}

We can even go further and remove the explicit class declaration as well. This feature is known as unnamed classes. An unnamed class resides in the unnamed package that resides in the unnamed module:

void main() { 
  System.out.println("Hello World!");
}

Java will generate the class on our behalf. The name of the class will be the same as the name of the source file.

That’s all we need to introduce Java to a student. I strongly encourage you to read JEP 445 (and the new JEPs that will continue this JDK 21 preview feature work) to discover all the aspects involved in these features.
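
Since this is a preview feature, remember that such a source file must be compiled and run with preview enabled; for a quick test, the source launcher is enough (assuming the file is named HelloWorld.java):

java --enable-preview --source 21 HelloWorld.java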

42. Adding code snippets in Java API documentation

I’m sure that you are familiar with generating Java API documentation (Javadoc) for your projects. We can do it via the javadoc tool from the command line, via IDE support, via the Maven plugin (maven-javadoc-plugin), and so on.

A common case in writing Javadoc consists of adding snippets of code to exemplify the usage of a non-trivial class or method. Before JDK 18, adding snippets of code to documentation could be done via {@code...} or the <pre> tag. The added code is treated as plain text, is not validated for correctness, and is not discoverable by other tools. Let’s quickly see an example:

/**
 * A telemeter with laser ranging from 0 to 60 ft including   
 * calculation of surfaces and volumes with high-precision
 *
 * <pre>{@code
 *     Telemeter.Calibrate.at(0.00001);
 *     Telemeter telemeter = new Telemeter(0.15, 2, "IP54");
 * }</pre>
 */
public class Telemeter {
   ...

In the bundled code, you can see the full example. The Javadoc is generated at build time via the Maven plugin (maven-javadoc-plugin), so simply trigger a build.

Starting with JDK 18 (JEP 413 - Code Snippets in Java API Documentation), we have brand new support for adding snippets of code in documentation via the {@snippet...} tag. The code added via @snippet can be discovered and validated by third-party tools (not by the javadoc tool itself).

For instance, the previous snippet can be added via @snippet as follows:

/**
 * A telemeter with laser ranging from 0 to 60 ft including   
 * calculation of surfaces and volumes with high-precision
 *
 * {@snippet :
 *     Telemeter.Calibrate.at(0.00001);
 *     Telemeter telemeter = new Telemeter(0.15, 2, "IP54");
 * }
 */
public class Telemeter {
   ...

A screenshot of the output is in the following figure:

Figure 2.13: Simple output from @snippet

The effective code starts from the newline placed after the colon (:) and ends before the closing right curly bracket (}). The code indentation is treated as in code blocks, so the compiler removes the incidental white spaces and we can indent the code with respect to the closing right curly bracket (}). Check out the following figure:

Figure 2.14: Indentation of code snippets

In the top example, the closing right curly bracket is aligned under the opening left curly bracket, while in the bottom example, we shifted the closing right curly bracket to the right.

Adding attributes

We can specify attributes for a @snippet via name=value pairs. For instance, we can provide a tip about the programming language of our snippet via the lang attribute. The value of the attribute is available to external tools and is present in the generated HTML. Here are two examples:

 * {@snippet lang="java" :
 *     Telemeter.Calibrate.at(0.00001);
 *     Telemeter telemeter = new Telemeter(0.15, 2, "IP54");
 * }

In the generated HTML, you’ll easily identify this attribute as:

<code class="language-java"></code>

If the code is a structured text such as a properties file, then you can follow this example:

 * {@snippet lang="properties" :
 *   telemeter.precision.default=42
 *   telemeter.clazz.default=2
 * }

In the generated HTML, you’ll have:

<code class="language-properties"></code>

Next, let’s see how we can alter what is displayed in a snippet.

Using markup comments and regions

We can visually alter a snippet of code via markup comments. A markup comment occurs at the end of the line and it contains one or more markup tags of the form @name args, where args are commonly name=value pairs. Common markup comments include highlighting, linking, and content (text) modifications.

Highlighting

Highlighting a whole line can be done via @highlight without arguments, as in the following figure:

Figure 2.15: Highlighting a whole line of code

As you can see in this figure, the first line of code was bolded.

If we want to highlight multiple lines, then we can define regions. A region can be treated as anonymous or have an explicit name. An anonymous region is demarcated by the word region placed as an argument of the markup tag and the @end tag placed at the end of the region. Here is an example for highlighting two regions (an anonymous one and a named one (R1)):

Figure 2.16: Highlighting a block of code using regions

Regular expressions allow us to highlight a certain part of the code. For instance, highlighting everything that occurs between quotes can be done via @highlight regex='".*"'. Or, highlighting only the word Calibrate can be done via the substring="Calibrate" argument, as in the following figure:

Figure 2.17: Highlighting only the word “Calibrate”
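
In plain text, such a markup comment sits at the end of the line it affects; a rough sketch based on the Telemeter example (the exact rendering depends on your javadoc version) looks as follows:

 * {@snippet :
 *     Telemeter.Calibrate.at(0.00001);   // @highlight substring="Calibrate"
 *     Telemeter telemeter = new Telemeter(0.15, 2, "IP54");
 * }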

Next, let’s talk about adding links in code.

Linking

Adding links in code can be done via the @link tag. The common arguments are substring="…" and target="…". For instance, the following snippet provides a link for the text Calibrate that navigates in documentation to the description of the Calibrate.at() method:

Figure 2.18: Adding links in code

Next, let’s see how we can modify the code’s text.

Modifying the code’s text

Sometimes we may need to alter the code’s text. For instance, instead of Telemeter.Calibrate.at(0.00001, "HIGH");, we want to render in documentation Telemeter.Calibrate.at(eps, "HIGH");. So, we need to replace 0.00001 with eps. This is the perfect job for the @replace tag. Common arguments include substring="…" (or, regex="…") and replacement="...". Here is the snippet:

Figure 2.19: Replacing the code’s text
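
In plain text, the markup from Figure 2.19 might look roughly like this (a sketch; eps is simply the text we want rendered instead of the literal value):

 * {@snippet :
 *     Telemeter.Calibrate.at(0.00001, "HIGH");   // @replace substring="0.00001" replacement="eps"
 * }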

If you need to perform multiple replacements in a block of code, then rely on regions. In the following example, we apply a regular expression to a block of code:

Figure 2.20: Applying multiple replacements via a simple regex and an anonymous region

If you need to perform more replacements on the same line, then just chain multiple @replace tags (this statement applies to all tags such as @highlight, @link, and so on).

Using external snippets

So far, we have used only inlined snippets. But, there are scenarios when using inlined snippets is not a convenient approach (for instance, if we need to repeat some parts of the documentation) or it is not possible to use them (for instance, if we want to embed /*…*/ comments, which cannot be added in inlined snippets).

For such cases, we can use external snippets. Without any further configurations, JDK automatically recognizes external snippets if they are placed in a subfolder of the package (folder) containing the snippet tag. This subfolder should be named snippet-files and it can contain external snippets as Java sources, plain text files, or properties files. In the following figure, we have a single external file named MainSnippet.txt:

Figure 2.21: External snippets in snippet-files

If the external snippet is not a Java file, then it can be loaded via {@snippet file …} as follows:

{@snippet file = MainSnippet.txt}
{@snippet file = "MainSnippet.txt"}
{@snippet file = 'MainSnippet.txt'}

But, we can also customize the place and folder name of external snippets. For instance, let’s place the external snippets in a folder named snippet-src, as follows:

Figure 2.22: External snippets in a custom folder and place

This time, we have to instruct the compiler where to find the external snippets. This is done by passing the --snippet-path option to javadoc. Of course, you can pass it via the command line, via your IDE, or via maven-javadoc-plugin, as follows:

<additionalJOption>
  --snippet-path C:\...\src\snippet-src
</additionalJOption>

This path is relative to your machine, so feel free to adjust it accordingly in pom.xml.

Next, AtSnippet.txt and ParamDefaultSnippet.properties can be loaded exactly as you saw earlier for MainSnippet.txt. However, loading Java sources, such as DistanceSnippet.java, can be done via {@snippet class…}, as follows:

{@snippet class = DistanceSnippet}
{@snippet class = "DistanceSnippet"}
{@snippet class = 'DistanceSnippet'}

But, do not explicitly add the .java extension because you’ll get an error such as file not found on source path or snippet path: DistanceSnippet/java.java:

{@snippet class = DistanceSnippet.java}

When using Java sources as external snippets, pay attention to the following note.

Important note

Even if the predefined snippet-files name is an invalid name for a Java package, some systems may treat this folder as being part of the package hierarchy. In such cases, if you place Java sources in this folder, you’ll get an error such as Illegal package name: “foo.buzz.snippet-files”. If you find yourself in this scenario, then simply use another folder name and location for the documentation external snippets written in Java sources.

Regions in external snippets

The external snippets support regions via @start region=… and @end region=…. For instance, in AtSnippet.txt, we have the following region:

// This is an example used in the documentation
// @start region=only-code 
   Telemeter.Calibrate.at(0.00001, "HIGH");  
// @end region=only-code

Now, if we load the region as:

{@snippet file = AtSnippet.txt region=only-code}

We obtain only the code from the region without the text, // This is an example used in the documentation.

Here is another example of a properties file with two regions:

# @start region=dist
sc=[0,0]
ec=[0,0]
interpolation=false
# @end region=dist
# @start region=at
eps=0.1
type=null
# @end region=at

The region dist is used to show the default values for the arguments of the distance() method in the documentation:

Figure 2.23: Using the dist region

And, the at region is used to show the default values for the arguments of the at() method in the documentation:

Figure 2.24: Using the “at” region

In external snippets, we can use the same tags as in the inlined snippets. For instance, in the following figure, you can see the complete source of AtSnippet.txt:

Figure 2.25: Source of AtSnippet.txt

Notice the presence of @highlight and @replace.

Important note

Starting with JDK 19, the Javadoc search feature was also improved. In other words, JDK 19+ can generate a standalone search page for searching in the Javadoc API documentation. Moreover, the search syntax has been enhanced to support multiple search words.

You can practice these examples in the bundled code.

43. Invoking default methods from Proxy instances

Starting with JDK 8, we can define default methods in interfaces. For instance, let’s consider the following interfaces (for brevity, all methods from these interfaces are declared as default):

Figure 2.26: Interfaces: Printable, Writable, Draft, and Book

Next, let’s assume that we want to use the Java Reflection API to invoke these default methods. As a quick reminder, the goal of the Proxy class is to provide support for creating dynamic implementations of interfaces at runtime.

That being said, let’s see how we can use the Proxy API for calling our default methods.

JDK 8

Calling a default method of an interface in JDK 8 relies on a little trick. Basically, we obtain, via reflection, the package-private constructor of the Lookup class that takes a single Class parameter. Next, we make this constructor accessible – this means that Java will not check the access modifiers of this constructor and, therefore, will not throw an IllegalAccessException when we try to use it. Finally, we use this constructor to wrap an instance of an interface (for instance, Printable) and use reflective access to the default methods declared in this interface.

So, in code lines, we can invoke the default method Printable.print() as follows:

// invoke Printable.print(String)
Printable pproxy = (Printable) Proxy.newProxyInstance(
  Printable.class.getClassLoader(),
  new Class<?>[]{Printable.class}, (o, m, p) -> {
    if (m.isDefault()) {
      Constructor<Lookup> cntr = Lookup.class
        .getDeclaredConstructor(Class.class);
      cntr.setAccessible(true);
      return cntr.newInstance(Printable.class)
                 .in(Printable.class)
                 .unreflectSpecial(m, Printable.class)
                 .bindTo(o)
                 .invokeWithArguments(p);
      }
      return null;
  });
// invoke Printable.print()
pproxy.print("Chapter 2");

Next, let’s focus on the Writable and Draft interfaces. Draft extends Writable and overrides the default write() method. Now, every time we explicitly invoke the Writable.write() method, we expect that the Draft.write() method is invoked automatically behind the scenes. A possible implementation looks as follows:

// invoke Draft.write(String) and Writable.write(String)
Writable dpproxy = (Writable) Proxy.newProxyInstance(
 Writable.class.getClassLoader(),
  new Class<?>[]{Writable.class, Draft.class}, (o, m, p) -> {
   if (m.isDefault() && m.getName().equals("write")) {
    Constructor<Lookup> cntr = Lookup.class
     .getDeclaredConstructor(Class.class);
    cntr.setAccessible(true); 
    cntr.newInstance(Draft.class)
        .in(Draft.class)
        .findSpecial(Draft.class, "write",
           MethodType.methodType(void.class, String.class), 
           Draft.class)
        .bindTo(o)
        .invokeWithArguments(p);
    return cntr.newInstance(Writable.class)
        .in(Writable.class)
        .findSpecial(Writable.class, "write",
           MethodType.methodType(void.class, String.class), 
           Writable.class)
        .bindTo(o)
        .invokeWithArguments(p);
    }
    return null;
  });
// invoke Writable.write(String)
dpproxy.write("Chapter 1");

Finally, let’s focus on the Printable and Book interfaces. Book extends Printable and doesn’t define any methods. So, when we call the inherited print() method, we expect that the Printable.print() method is invoked. While you can check this solution in the bundled code, let’s focus on the same tasks using JDK 9+.

JDK 9+, pre-JDK 16

As you just saw, before JDK 9, the Java Reflection API provides access to non-public class members. This means that external reflective code (for instance, third-party libraries) can have deep access to JDK internals. But, starting with JDK 9, this is not possible because the new module system relies on strong encapsulation.

For a smooth transition from JDK 8 to JDK 9, we can use the --illegal-access option. The values of this option range from deny (sustains strong encapsulation, so no illegal reflective code is permitted) to permit (the most relaxed level of strong encapsulation, allowing access to platform modules only from unnamed modules). Between permit (which is the default in JDK 9) and deny, we have two more values: warn and debug. However, --illegal-access support was removed in JDK 17.

In this context, the previous code may not work in JDK 9+, or it might still work but you’ll see a warning such as WARNING: An illegal reflective access operation has occurred.

But, we can “fix” our code to avoid illegal reflective access via MethodHandles. Among its goodies, this class exposes lookup methods for creating method handles for fields and methods. Once we have a Lookup, we can rely on its findSpecial() method to gain access to the default methods of an interface.

Based on MethodHandles, we can invoke the default method Printable.print() as follows:

// invoke Printable.print(String doc)
Printable pproxy = (Printable) Proxy.newProxyInstance(
    Printable.class.getClassLoader(),
    new Class<?>[]{Printable.class}, (o, m, p) -> {
      if (m.isDefault()) {
       return MethodHandles.lookup()
         .findSpecial(Printable.class, "print",  
           MethodType.methodType(void.class, String.class), 
           Printable.class)
         .bindTo(o)
         .invokeWithArguments(p);
      }
      return null;
  });
// invoke Printable.print()
pproxy.print("Chapter 2");

While you can see more examples in the bundled code, let’s tackle the same topic starting with JDK 16.

JDK 16+

Starting with JDK 16, we can simplify the previous code thanks to the new static method, InvocationHandler.invokeDefault(). As its name suggests, this method is useful for invoking default methods. In code lines, our previous examples for calling Printable.print() can be simplified via invokeDefault() as follows:

// invoke Printable.print(String doc)
Printable pproxy = (Printable) Proxy.newProxyInstance(
  Printable.class.getClassLoader(),
    new Class<?>[]{Printable.class}, (o, m, p) -> {
      if (m.isDefault()) {
        return InvocationHandler.invokeDefault(o, m, p);
      }
      return null;
  });
// invoke Printable.print()
pproxy.print("Chapter 2");

In the next example, every time we explicitly invoke the Writable.write() method, we expect that the Draft.write() method is invoked automatically behind the scenes:

// invoke Draft.write(String) and Writable.write(String)
Writable dpproxy = (Writable) Proxy.newProxyInstance(
 Writable.class.getClassLoader(),
  new Class<?>[]{Writable.class, Draft.class}, (o, m, p) -> {
   if (m.isDefault() && m.getName().equals("write")) {
    Method writeInDraft = Draft.class.getMethod(
     m.getName(), m.getParameterTypes());
    InvocationHandler.invokeDefault(o, writeInDraft, p);
    return InvocationHandler.invokeDefault(o, m, p);
   }
   return null;
 });
// invoke Writable.write(String)
dpproxy.write("Chapter 1");

In the bundled code, you can practice more examples.

44. Converting between bytes and hex-encoded strings

Converting bytes to hexadecimal (and vice versa) is a common operation in applications that manipulate fluxes of files/messages, perform encoding/decoding tasks, process images, and so on.

A Java byte is a number in the [-128, +127] range and is represented using 1 signed byte (8 bits). A hexadecimal (base 16) is a system based on 16 digits (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, and F). In other words, those 8 bits of a byte value accommodate exactly 2 hexadecimal characters in the range 00 to FF. The decimal <-> binary <-> hexadecimal mapping is summarized in the following figure:

Figure 2.27: Decimal to binary to hexadecimal conversion

For instance, 122 in binary is 01111010. Since 0111 is in hexadecimal 7, and 1010 is A, this results in 122 being 7A in hexadecimal (also written as 0x7A).

How about a negative byte? We know from the previous chapter that Java represents a negative number as the two’s complement of the positive number. This means that -122 in binary is 10000110 (take the 7 bits of positive 122 = 1111010, flip them, flip(1111010) = 0000101, add 1, 0000101 + 0000001 = 0000110, and prepend the sign bit 1, giving 10000110), which in hexadecimal is 0x86.

Converting a negative number to hexadecimal can be done in several ways, but we can easily obtain the lower 4 bits as 10000110 & 0xF = 0110, and the higher 4 bits as (10000110 >> 4) & 0xF = 1000 & 0xF = 1000 (here, the 0xF (binary, 1111) mask is useful only for negative numbers). Since 0110 = 6 and 1000 = 8, we see that 10000110 is 0x86 in hexadecimal.

If you need a deep coverage of bits manipulation in Java or you simply face issues in understanding the current topic, then please consider the book The Complete Coding Interview Guide in Java, especially Chapter 9.

So, in code lines, we can rely on this simple algorithm and Character.forDigit(int d, int r), which returns the character representation for the given digit (d) in the given radix (r):

public static String byteToHexString(byte v) {
  int higher = (v >> 4) & 0xF;
  int lower = v & 0xF;
  String result = String.valueOf(
    new char[]{
      Character.forDigit(higher, 16),
      Character.forDigit(lower, 16)}
    );
  return result;
}

There are many other ways to solve this problem (in the bundled code, you can see another flavor of this solution). For example, if we know that the Integer.toHexString(int n) method returns a string that represents the unsigned integer in base 16 of the given argument, then all we need is to apply the 0xFF (binary, 11111111) mask for negatives as:

public static String byteToHexString(byte v) {
  return Integer.toHexString(v & 0xFF);
}

If there is an approach that we should avoid, then that is the one based on String.format(). The String.format("%02x ", byte_nr) approach is concise but very slow!

How about the reverse process? Converting a given hexadecimal string (for instance, 7d, 09, and so on) to a byte is quite easy. Just take the first (d1) and second (d2) character of the given string and apply the relation, (byte) ((d1 << 4) + d2):

public static byte hexToByte(String s) {
  int d1 = Character.digit(s.charAt(0), 16);
  int d2 = Character.digit(s.charAt(1), 16);
  return (byte) ((d1 << 4) + d2);
} 

More examples are available in the bundled code. If you rely on third-party libraries, then check Apache Commons Codec (Hex.encodeHexString()), Guava (BaseEncoding), Spring Security (Hex.encode()), Bouncy Castle (Hex.toHexString()), and so on.

JDK 17+

Starting with JDK 17, we can use the java.util.HexFormat class. This class has plenty of methods for handling hexadecimal numbers, including String toHexDigits(byte value) and byte[] parseHex(CharSequence string). So, we can convert a byte to a hexadecimal string as follows:

public static String byteToHexString(byte v) {
  HexFormat hex = HexFormat.of();
  return hex.toHexDigits(v);
}

And, vice versa as follows:

public static byte hexToByte(String s) {
  HexFormat hex = HexFormat.of();
  return hex.parseHex(s)[0];
}

In the bundled code, you can also see the extrapolation of these solutions for converting an array of bytes (byte[]) to a String, and vice versa.
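
For byte arrays, a minimal sketch (using the same java.util.HexFormat API) could look as follows:

public static String bytesToHexString(byte[] v) {
  return HexFormat.of().formatHex(v); // e.g., {10, 20, 30, -122} -> "0a141e86"
}
public static byte[] hexStringToBytes(String s) {
  return HexFormat.of().parseHex(s);  // e.g., "0a141e86" -> {10, 20, 30, -122}
}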

45. Exemplify the initialization-on-demand holder design pattern

Before we tackle the solution of implementing the initialization-on-demand holder design pattern, let’s quickly recap a few ingredients of this solution.

Static vs. non-static blocks

In Java, we can have initialization non-static blocks and static blocks. An initialization non-static block (or simply, a non-static block) is automatically called every single time we instantiate the class. On the other hand, an initialization static block (or simply, a static block) is called a single time when the class itself is initialized. No matter how many subsequent instances of that class we create, the static block will never get executed again. In code lines:

public class A {
  {
    System.out.println("Non-static initializer ...");
  }
  static {
    System.out.println("Static initializer ...");
  }
}

Next, let’s run the following test code to create three instances of A:

A a1 = new A();
A a2 = new A();
A a3 = new A();

The output reveals that the static initializer is called only once, while the non-static initializer is called three times:

Static initializer ...
Non-static initializer ...
Non-static initializer ...
Non-static initializer ...

Moreover, the static initializer is called before the non-static one. Next, let’s talk about nested classes.

Nested classes

Let’s look at a quick example:

public class A {
    private static class B { ... }
}

Nested classes can be static or non-static. A non-static nested class is referred to as an inner class; further, it can be a local inner class (declared in a method) or an anonymous inner class (class with no name). On the other hand, a nested class that is declared static is referred to as a static nested class. The following figure clarifies these statements:


Figure 2.28: Java nested classes

Since B is a static class declared in A, we say that B is a static nested class.

Tackling the initialization-on-demand holder design pattern

The initialization-on-demand holder design pattern refers to a thread-safe lazy-loaded singleton (single instance) implementation. Before JDK 16, we can exemplify this design pattern in code as follows (we want a single thread-safe instance of Connection):

public class Connection { // singleton
  private Connection() {
  }
  private static class LazyConnection { // holder
    static final Connection INSTANCE = new Connection();
    static {
      System.out.println("Initializing connection ..." 
        + INSTANCE);
    }
  }
  public static Connection get() {
    return LazyConnection.INSTANCE;
  }
}

No matter how many times a thread (or multiple threads) calls Connection.get(), we always get back the same instance of Connection. This is the instance created when get() was called for the first time (by the first thread), which is when Java initialized the LazyConnection class and its statics. In other words, if we never call get(), then the LazyConnection class and its statics are never initialized (this is why we name it lazy initialization). And this is thread-safe because the holder’s static field (here, INSTANCE) can be created and referenced without explicit synchronization, since static initializers run before any thread can use the class (here, LazyConnection).
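
For instance, the following quick test (my own sketch using an ExecutorService from java.util.concurrent; it is not part of the bundled code) should print the "Initializing connection ..." message exactly once, while all three tasks print the same Connection instance:

ExecutorService executor = Executors.newFixedThreadPool(3);
for (int i = 0; i < 3; i++) {
  // every task gets the one and only lazily created instance
  executor.execute(() -> System.out.println(Connection.get()));
}
executor.shutdown();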

JDK 16+

Until JDK 16, an inner class could contain static members only as constant variables, and it couldn’t contain static initializers. In other words, the following code would not compile because of the static initializer:

public class A {
  public class B {
    {
      System.out.println("Non-static initializer ...");
    }
    static {
      System.out.println("Static initializer ...");
    }
  }
}

But, starting with JDK 16, the previous code is compiled without issues. In other words, starting with JDK 16, Java inner classes can have static members and static initializers.

This allows us to tackle the initialization-on-demand holder design pattern from another angle. We can replace the static nested class, LazyConnection, with a local inner class as follows:

public class Connection { // singleton
  private Connection() {
  }
  public static Connection get() {
    class LazyConnection { // holder
      static final Connection INSTANCE = new Connection();
      static {
        System.out.println("Initializing connection ..." 
          + INSTANCE);
      }
    }
    return LazyConnection.INSTANCE;
  }
}

Now, the LazyConnection is visible only in its containing method, get(). As long as we don’t call the get() method, the connection will not be initialized.

46. Adding nested classes in anonymous classes

In the previous problem, we had a brief overview of nested classes. As a quick reminder, an anonymous class (or anonymous inner class) is like a local inner class without a name. Its purpose is to provide more concise and expressive code. Readability may suffer (anonymous classes can look ugly), but it may be worth it when you can perform some specific task without writing a full-blown class. For instance, an anonymous class is useful for altering the behavior of an existing method without spinning up a new class. Java typically uses them for event handling and listeners (in GUI applications). Probably the most famous example of an anonymous class is this one:

button.addActionListener(new ActionListener() {
  public void actionPerformed(ActionEvent e) {
    ...
  }
});

Nevertheless, while local inner classes are actually class declarations, anonymous classes are expressions. To create an anonymous class, we have to extend an existing class or implement an interface, as shown in the following figure:


Figure 2.29: Anonymous class via class extension and interface implementation

Because they don’t have names, anonymous classes must be declared and instantiated in a single expression. The resulting instance can be assigned to a variable and referred to later. The standard syntax looks like a call to a regular Java constructor, followed by a class body in curly braces, with the whole statement ending in a semi-colon (;). The presence of the semi-colon is a hint that an anonymous class is an expression that must be part of a statement.

Finally, anonymous classes cannot have explicit constructors, cannot be abstract, can be instantiated only at the point where they are declared (a single instance per expression), cannot implement multiple interfaces, and cannot be extended by other classes.

Next, let’s tackle a few examples of nesting classes in anonymous classes. For instance, let’s consider the following interface of a printing service:

public interface Printer {
    public void print(String quality);
}

We use the Printer interface all over the place in our printing service, but we also want to have a helper method that is compact and simply tests our printer functions without requiring further actions or an extra class. We decided to hide this code in a static method named printerTest(), as follows:

public static void printerTest() {
  Printer printer = new Printer() {
    @Override
    public void print(String quality) {
      if ("best".equals(quality)) {
        Tools tools = new Tools();
        tools.enableLaserGuidance();
        tools.setHighResolution();
      }
      System.out.println("Printing photo-test ...");
    }
    class Tools {
      private void enableLaserGuidance() {
        System.out.println("Adding laser guidance ...");
      }
      private void setHighResolution() {
        System.out.println("Set high resolution ...");
      }
    }
  };
}

Testing the best quality print requires some extra settings wrapped in the inner Tools class. As you can see, the inner Tools class is nested in the anonymous class. Another approach consists of moving the Tools class inside the print() method. So, Tools becomes a local inner class as follows:

Printer printer = new Printer() {
  @Override
  public void print(String quality) {
    class Tools {
      private void enableLaserGuidance() {
        System.out.println("Adding laser guidance ...");
      }
      private void setHighResolution() {
        System.out.println("Set high resolution ...");
      }
    }
    if ("best".equals(quality)) {
      Tools tools = new Tools();
      tools.enableLaserGuidance();
      tools.setHighResolution();
    }
    System.out.println("Printing photo-test ...");
  }
};

The problem with this approach is that the Tools class cannot be used outside of print(). So, this strict encapsulation will restrict us from adding a new method (next to print()) that also needs the Tools class.

JDK 16+

But, remember from the previous problem that, starting with JDK 16, Java inner classes can have static members and static initializers. This means that we can drop the Tools class and rely on two static methods as follows:

Printer printer = new Printer() {
  @Override
  public void print(String quality) {
    if ("best".equals(quality)) {
      enableLaserGuidance();
      setHighResolution();
    }
    System.out.println("Printing your photos ...");
  }
  private static void enableLaserGuidance() {
    System.out.println("Adding laser guidance ...");
  }
  private static void setHighResolution() {
    System.out.println("Set high resolution ...");
  }
};

If you find it more convenient to group these helpers in a static class, then do it:

Printer printer = new Printer() {
  @Override
  public void print(String quality) {
    if ("best".equals(quality)) {
      Tools.enableLaserGuidance();
      Tools.setHighResolution();
    }
    System.out.println("Printing photo-test ...");
  }
  private final static class Tools {
    private static void enableLaserGuidance() {
      System.out.println("Adding laser guidance ...");
    }
    private static void setHighResolution() {
      System.out.println("Set high resolution ...");
    }
  }
};

You can practice these examples in the bundled code.

47. Exemplify erasure vs. overloading

Before we join them in an example, let’s quickly tackle erasure and overloading separately.

Erasure in a nutshell

Java uses type erasure at compile time in order to enforce type constraints and backward compatibility with old bytecode. Basically, at compilation time, all type arguments are replaced by Object (any generic type must be convertible to Object) or by their type bounds (extends), and the compiler inserts the casts needed to preserve type correctness at runtime. A common case of type erasure involves generics.

Erasure of generic types

Practically, the compiler replaces the unbounded type parameters (such as E, T, U, and so on) with Object. This enforces type safety, as in the following example of class type erasure:

public class ImmutableStack<E> implements Stack<E> {
  private final E head;
  private final Stack<E> tail;
  ...

The compiler applies type erasure to replace E with Object:

public class ImmutableStack implements Stack {
  private final Object head;
  private final Stack tail;
  ...

If the E parameter is bound, then the compiler uses the first bound class. For instance, in a class such as class Node<T extends Comparable<T>> {...}, the compiler will replace T with Comparable. In the same manner, in a class such as class Computation<T extends Number> {...}, all occurrences of T would be replaced by the compiler with the upper bound Number.
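
To visualize this, here is roughly how the compiler sees the hypothetical Computation<T extends Number> class after erasure (a conceptual sketch, not real compiler output):

// conceptual erased form of: class Computation<T extends Number> { ... }
// T is replaced by its first bound, Number
public class Computation {
  private Number result;        // was: private T result;
  public Number getResult() {   // was: public T getResult()
    return result;
  }
}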

Check out the following case, which is a classical case of method type erasure:

public static <T, R extends T> List<T> listOf(T t, R r) {
  List<T> list = new ArrayList<>();
  list.add(t);
  list.add(r);
  return list;
}
// use this method
List<Object> list = listOf(1, "one");

How does this work? When we call listOf(1, "one"), we are actually passing two different types to the generic parameters T and R. The compiler type erasure has replaced T with Object. In this way, we can insert different types in the ArrayList and the code works just fine.

Erasure and bridge methods

Bridge methods are created by the compiler to cover corner cases. Specifically, when the compiler encounters an implementation of a parameterized interface or an extension of a parameterized class, it may need to generate a bridge method (also known as a synthetic method) as part of the type erasure phase. For instance, let’s consider the following parameterized class:

public class Puzzle<E> {
  public E piece;
  public Puzzle(E piece) {
    this.piece = piece;
  }
  public void setPiece(E piece) { 
    this.piece = piece;
  }
}

And, an extension of this class:

public class FunPuzzle extends Puzzle<String> {
  public FunPuzzle(String piece) {
    super(piece);
  }
  @Override
  public void setPiece(String piece) { 
    super.setPiece(piece);
  }
}

Type erasure modifies Puzzle.setPiece(E) as Puzzle.setPiece(Object). This means that the FunPuzzle.setPiece(String) method does not override the Puzzle.setPiece(Object) method. Since the signatures of the methods are not compatible, the compiler must accommodate the polymorphism of generic types via a bridge (synthetic) method meant to guarantee that sub-typing works as expected. Let’s highlight this method in the code:

/* Decompiler 8ms, total 3470ms, lines 18 */
package modern.challenge;
public class FunPuzzle extends Puzzle<String> {
   public FunPuzzle(String piece) {
      super(piece);
   }
   public void setPiece(String piece) {
      super.setPiece(piece);
   }
   // $FF: synthetic method
   // $FF: bridge method
   public void setPiece(Object var1) {
      this.setPiece((String)var1);
   }
}

Now, whenever you see a bridge method in the stack trace, you will know what it is and why it is there.

Type erasure and heap pollution

Have you ever seen an unchecked warning? I’m sure you have! It’s one of those things that is common to all Java developers. Such warnings are reported at compile time as the result of type checking, and they flag a cast or a method call whose correctness the compiler cannot validate because some parameterized types are involved; the actual problem may surface only at runtime. Not every unchecked warning is dangerous, but there are cases when we have to consider and deal with them.

A particular case is represented by heap pollution. If a variable of a parameterized type points to an object that is not of that type, then we are dealing with code that leads to heap pollution. A good candidate for such scenarios involves methods with varargs arguments.

Check out this code:

public static <T> void listOf(List<T> list, T... ts) {
  list.addAll(Arrays.asList(ts));    
}

The listOf() declaration will cause this warning: Possible heap pollution from parameterized vararg type T. So, what’s happening here?

The story begins when the compiler replaces the formal T... parameter with an array. After applying type erasure, the T... parameter becomes T[], and finally Object[]. Consequently, we have opened a gate to possible heap pollution. But our code just adds the elements of Object[] into a List<Object>, so we are in the safe area.

In other words, if you know that the body of the varargs method is not prone to generate a specific exception (for example, ClassCastException) or to use the varargs parameter in an improper operation, then we can instruct the compiler to suppress these warnings. We can do it via the @SafeVarargs annotation as follows:

@SafeVarargs
public static <T> void listOf(List<T> list, T... ts) {...}

@SafeVarargs is an assertion that the annotated method uses the varargs formal parameter only in proper operations. More common, but less recommended, is to use @SuppressWarnings({"unchecked", "varargs"}), which simply suppresses such warnings without claiming that the varargs formal parameter is not used in improper operations.
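
For completeness, that less recommended alternative would look like this (a sketch on the same listOf() method):

@SuppressWarnings({"unchecked", "varargs"})
public static <T> void listOf(List<T> list, T... ts) {
  list.addAll(Arrays.asList(ts));
}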

Now, let’s tackle this code:

public static void main(String[] args) {
  List<Integer> ints = new ArrayList<>();
  Main.listOf(ints, 1, 2, 3);
  Main.listsOfYeak(ints);
}
public static void listsOfYeak(List<Integer>... lists) {
  Object[] listsAsArray = lists;     
  listsAsArray[0] = Arrays.asList(4, 5, 6); 
  Integer someInt = lists[0].get(0);   
  listsAsArray[0] = Arrays.asList("a", "b", "c"); 
  Integer someIntYeak = lists[0].get(0); // ClassCastException
}

This time, the type erasure transforms the List<Integer>... into List[], which is a subtype of Object[]. This allows us to do the assignment: Object[] listsAsArray = lists;. But, check out the last two lines of code where we create a List<String> and store it in listsAsArray[0]. In the last line, we try to access the first Integer from lists[0], which obviously leads to a ClassCastException. This is an improper operation of using varargs, so it is not advisable to use @SafeVarargs in this case. We should have taken the following warnings seriously:

// unchecked generic array creation for varargs parameter 
// of type java.util.List<java.lang.Integer>[]
Main.listsOfYeak(ints);
// Possible heap pollution from parameterized vararg
// type java.util.List<java.lang.Integer>
public static void listsOfYeak(List<Integer>... lists) { ... }

Now that you are familiar with type erasure, let’s briefly cover polymorphic overloading.

Polymorphic overloading in a nutshell

Since overloading (also known as “ad hoc” polymorphism) is a core concept of Object-Oriented Programming (OOP), I’m sure you are familiar with Java method overloading, so I’ll not insist on the basic theory of this concept.

Also, I’m aware that some people don’t agree that overloading can be a form of polymorphism, but that is another topic that we will not tackle here.

We will be more practical and jump into a suite of quizzes meant to highlight some interesting aspects of overloading. More precisely, we will discuss type dominance. So, let’s tackle the first quiz (wordie is an initially empty string):

static void kaboom(byte b) { wordie += "a";}   
static void kaboom(short s) { wordie += "b";}   
kaboom(1);

What will happen? If you answered that the compiler will point out that there is no suitable method found for kaboom(1), then you’re right. The compiler looks for a method that gets an integer argument, kaboom(int). Okay, that was easy! Here is the next one:

static void kaboom(byte b) { wordie += "a";}   
static void kaboom(short s) { wordie += "b";}  
static void kaboom(long l) { wordie += "d";}   
static void kaboom(Integer i) { wordie += "i";}   
kaboom(1);

We know that the first two kaboom() instances are useless. How about kaboom(long) and kaboom(Integer)? You are right, kaboom(long) will be called. If we remove kaboom(long), then kaboom(Integer) is called.

Important note

In primitive overloading, the compiler starts by searching for a one-to-one match. If this attempt fails, then the compiler searches for an overloading flavor taking a primitive broader domain than the primitive current domain (for instance, for an int, it looks for int, long, float, or double). If this fails as well, then the compiler checks for overloading taking boxed types (Integer, Float, and so on).

Following the previous statements, let’s have this one:

static void kaboom(Integer i) { wordie += "i";} 
static void kaboom(Long l) { wordie += "j";} 
kaboom(1);

This time, wordie will be i. kaboom(Integer) is called since there is no kaboom(int/long/float/double). If we had a kaboom(double), then that method would have higher precedence than kaboom(Integer). Interesting, right?! On the other hand, if we remove kaboom(Integer), then don’t expect that kaboom(Long) will be called. Any other kaboom(boxed type) with a broader/narrower domain than Integer will not be called. This is happening because the compiler follows the inheritance path based on an IS-A relationship, so after kaboom(Integer), it looks for kaboom(Number), since Integer is a Number.
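
For instance, here is a quick sketch (my own variation on the kaboom() quizzes) that confirms the widening-over-boxing rule mentioned above:

static void kaboom(double d) { wordie += "d";}
static void kaboom(Integer i) { wordie += "i";}
kaboom(1); // wordie becomes "d": widening int -> double wins over boxing to Integer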

Important note

In boxed type overloading, the compiler starts by searching for a one-to-one match. If this attempt fails, then the compiler will not consider any overloading flavor taking a boxed type with a broader domain than the current domain (of course, a narrow domain is ignored as well). It looks for Number as being the superclass of all boxed types. If Number is not found, the compiler goes up in the hierarchy until it reaches the java.lang.Object, which is the end of the road.

Okay, let’s complicate things a little bit:

static void kaboom(Object... ov) { wordie += "o";}   
static void kaboom(Number n) { wordie += "p";}   
static void kaboom(Number... nv) { wordie += "q";}  
kaboom(1);

So, which method will be called this time? I know, you think kaboom(Number), right? At least, my simple logic pushes me to think that this is a common-sense choice. And it is correct!

If we remove kaboom(Number), then the compiler will call the varargs method, kaboom(Number...). This makes sense since kaboom(1) uses a single argument, so kaboom(Number) should have higher precedence than kaboom(Number...). This logic reverses if we call kaboom(1,2,3) since kaboom(Number) is no longer representing a valid overloading for this call, and kaboom(Number...) is the right choice.

But, this logic applies because Number is the superclass of all boxed classes (Integer, Double, Float, and so on).

How about now?

static void kaboom(Object... ov) { wordie += "o";}   
static void kaboom(File... fv) { wordie += "s";}   
kaboom(1);

This time, the compiler will “bypass” kaboom(File...) and will call kaboom(Object...). Based on the same logic, a call of kaboom(1, 2, 3) will call kaboom(Object...) since there is no kaboom(Number...).

Important note

In overloading, if the call has a single argument, then the method with a single argument has higher precedence than its varargs counterpart. On the other hand, if the call has more arguments of the same type, then the varargs method is called since the one-argument method is not suitable anymore. When the call has a single argument but only the varargs overloading is available, then this method is called.

This leads us to the following example:

static void kaboom(Number... nv) { wordie += "q";}   
static void kaboom(File... fv) { wordie += "s";}   
kaboom();

This time, kaboom() has no arguments and the compiler cannot find a unique match. This means that the reference to kaboom() is ambiguous since both methods match (kaboom(java.lang.Number...) in modern.challenge.Main and method kaboom(java.io.File...) in modern.challenge.Main).

In the bundled code, you can play even more with polymorphic overloading and test your knowledge. Moreover, try to challenge yourself and introduce generics in the equation as well.

Erasure vs. overloading

Okay, based on the previous experience, check out this code:

void print(List<A> listOfA) {
  System.out.println("Printing A: " + listOfA);
}
void print(List<B> listofB) {
  System.out.println("Printing B: " + listofB);
}

What will happen? Well, this is a case where overloading and type erasure collide. Type erasure will reduce both List<A> and List<B> to the same List, so overloading is not possible and we get an error such as name clash: print(java.util.List<modern.challenge.B>) and print(java.util.List<modern.challenge.A>) have the same erasure.

In order to solve this issue, we can add a dummy argument to one of these two methods:

void print(List<A> listOfA, Void... v) {
  System.out.println("Printing A: " + listOfA);
}

Now, we can have the same call for both methods:

new Main().print(List.of(new A(), new A()));
new Main().print(List.of(new B(), new B()));

Done! You can practice these examples in the bundled code.

48. Xlinting default constructors

We know that a Java class with no explicit constructor automatically gets an “invisible” default constructor (its instance variables end up with their default values). The following House class falls into this scenario:

public class House {
  private String location;
  private float price;
  ...
}

If this is exactly what we wanted, then there is no problem. But, if we are concerned that classes in publicly exported packages expose default constructors, then we should consider JDK 16+.

JDK 16+ added a dedicated lint meant to warn us about the classes that have default constructors. In order to take advantage of this lint, we have to follow two steps:

  • Export the package containing that class
  • Compile with -Xlint:missing-explicit-ctor (or -Xlint, -Xlint:all)

In our case, we export the package modern.challenge in module-info as follows:

module P48_XlintDefaultConstructor {
  exports modern.challenge;
} 

Once you compile the code with -Xlint:missing-explicit-ctor, you’ll see a warning like in the following figure:


Figure 2.30: The warning produced by -Xlint:missing-explicit-ctor

Now, you can easily find out which classes have default constructors.
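
The fix itself is straightforward: declare an explicit constructor in the exported class. For instance, a minimal sketch of the House class with an explicit no-arguments constructor could look as follows:

public class House {
  private String location;
  private float price;
  // an explicit no-arguments constructor; this class no longer
  // triggers the missing-explicit-ctor warning
  public House() {
  }
}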

49. Working with the receiver parameter

Starting with JDK 8, we can enrich any of our instance methods with the optional receiver parameter. This is a purely syntactic parameter of the enclosing type, exposed via the this keyword. The following two snippets of code are identical:

public class Truck {
  public void revision1(Truck this) {
    Truck thisTruck = this;
    System.out.println("Truck: " + thisTruck);
  }
  public void revision2() {
    Truck thisTruck = this;
    System.out.println("Truck: " + thisTruck);
  }
}

Do not conclude that revision2() is an overloading of revision1(), or vice versa. Both methods have the same output, the same signature, and produce the same bytecode.

The receiver parameter can be used in inner classes as well. Here is an example:

public class PaymentService {
  class InvoiceCalculation {
    final PaymentService paymentService;
    InvoiceCalculation(PaymentService PaymentService.this) {
      paymentService = PaymentService.this;
    }
  }
}

Okay, but why use the receiver parameter? Well, JDK 8 introduced so-called type annotations, which are exactly as the name suggests: annotations that can be applied to types. In this context, the receiver parameter was added for annotating the type of object for which the method is called. Check out the following code:

@Target(ElementType.TYPE_USE)
public @interface ValidAddress {}
public String getAddress(@ValidAddress Person this) { ... }

Or, check this more elaborate example:

public class Parcel {
  public void order(@New Parcel this) {...}
  public void shipping(@Ordered Parcel this) {...}
  public void deliver(@Shipped Parcel this) {...}
  public void cashit(@Delivered Parcel this) {...}
  public void done(@Cashed Parcel this) {...}
}

Every client of a Parcel must call these methods in a precise sequence expressed via type annotations and receiver parameters. In other words, an order can be placed only if it is a new order, it can be shipped only if the order was placed, it can be delivered only if it was shipped, it can be paid only if it was delivered, and it can be closed only if it was paid.
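
For instance, a well-behaved (hypothetical) client of Parcel would follow exactly this sequence:

Parcel parcel = new Parcel();
parcel.order();    // a @New parcel can be ordered
parcel.shipping(); // an @Ordered parcel can be shipped
parcel.deliver();  // a @Shipped parcel can be delivered
parcel.cashit();   // a @Delivered parcel can be paid
parcel.done();     // a @Cashed parcel can be closed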

At this moment, this strict sequence is expressed only by these hypothetical annotations. But this is the right road toward implementing a static analysis tool that understands the meaning of these annotations and triggers warnings every time a client of Parcel doesn’t follow this precise sequence.

50. Implementing an immutable stack

A common coding challenge in interviews is this: Implement an immutable stack in Java.

Being an abstract data type, a stack needs at least this contract:

public interface Stack<T> extends Iterable<T> {
  boolean isEmpty();
  Stack<T> push(T value);
  Stack<T> pop();
  T peek();    
}

Having this contract, we can focus on the immutable implementation. Generally speaking, an immutable data structure stays the same until an operation attempts to change it (for instance, to add, put, remove, delete, push, and so on). If an operation attempts to alter the content of an immutable data structure, a new instance of that data structure must be created and used by that operation, while the previous instance remains unchanged.

Now, in our context, we have two operations that can alter the stack content: push and pop. The push operation should return a new stack containing the pushed element, while the pop operation should return the previous stack. But, in order to accomplish this, we need to start from somewhere, so we need an empty initial stack. This is a singleton stack that can be implemented as follows:

private static class EmptyStack<U> implements Stack<U> {
  @Override
  public Stack<U> push(U u) {
    return new ImmutableStack<>(u, this);
  }
  @Override
  public Stack<U> pop() {
    throw new UnsupportedOperationException(
      "Unsupported operation on an empty stack");
  }
  @Override
  public U peek() {
    throw new UnsupportedOperationException(
      "Unsupported operation on an empty stack");
  }
  @Override
  public boolean isEmpty() {
    return true;
  }
  @Override
  public Iterator<U> iterator() {
    return new StackIterator<>(this);
  }
}

The StackIterator is a trivial implementation of the Java Iterator. Nothing fancy here:

private static class StackIterator<U> implements Iterator<U> {
  private Stack<U> stack;
  public StackIterator(final Stack<U> stack) {
    this.stack = stack;
  }
  @Override
  public boolean hasNext() {
    return !this.stack.isEmpty();
  }
  @Override
  public U next() {
    U e = this.stack.peek();
    this.stack = this.stack.pop();
    return e;
  }
  @Override
  public void remove() {
  }
}

So far, we have the Iterator and an empty stack singleton. Finally, we can implement the logic of the immutable stack as follows:

public class ImmutableStack<E> implements Stack<E> {
  private final E head;
  private final Stack<E> tail;
  private ImmutableStack(final E head, final Stack<E> tail) {
    this.head = head;
    this.tail = tail;
  }
  public static <U> Stack<U> empty(final Class<U> type) {
    return new EmptyStack<>();
  }
  @Override
  public Stack<E> push(E e) {
    return new ImmutableStack<>(e, this);
  }
  @Override
  public Stack<E> pop() {
    return this.tail;
  }    
  @Override
  public E peek() {
    return this.head;
  }
  @Override
  public boolean isEmpty() {
    return false;
  }
  @Override
  public Iterator<E> iterator() {
    return new StackIterator<>(this);
  }
  // iterator code
  // empty stack singleton code
}

Creating a stack starts by calling the ImmutableStack.empty() method, as follows:

Stack<String> s = ImmutableStack.empty(String.class);

In the bundled code, you can see how this stack can be used further.
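
For instance, a short usage sketch (my own, not the bundled example itself) looks like this:

Stack<String> s = ImmutableStack.empty(String.class);
Stack<String> s1 = s.push("a");        // a new stack: [a]
Stack<String> s2 = s1.push("b");       // a new stack: [b, a]
System.out.println(s2.peek());         // b
System.out.println(s2.pop().peek());   // a
System.out.println(s1.peek());         // a ('s1' remains unchanged)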

51. Revealing a common mistake with Strings

Everybody knows that String is an immutable class.

Even so, we are still prone to accidentally write code that ignores the fact that String is immutable. Check out this code:

String str = "start";
str = stopIt(str);
public static String stopIt(String str) {
  str.replace(str, "stop");
  return str;
}

Somehow, it is logical to think that the replace() call has replaced the text start with stop and now str is stop. This is the cognitive power of words (replace is a verb that clearly induces the idea that the text was replaced). But, String is immutable! Oh… we already know that! This means that replace() cannot alter the original str. There are many such silly mistakes that we are prone to accidentally make, so pay extra attention to such simple things, since they can waste your time in the debugging stage.

The solution is obvious and self-explanatory:

public static String stopIt(String str) {
  str =  str.replace(str, "stop");
  return str;
}

Or, simply:

public static String stopIt(String str) {
  return str.replace(str, "stop");
}

Don’t forget that String is immutable!

52. Using the enhanced NullPointerException

Take your time to dissect the following trivial code and try to identify the parts that are prone to cause a NullPointerException (these parts are marked as numbered warnings, which will be explained after the snippet):

public final class ChainSaw {
  private static final List<String> MODELS
    = List.of("T300", "T450", "T700", "T800", "T900");
  private final String model;
  private final String power;
  private final int speed;
  public boolean started;
  private ChainSaw(String model, String power, int speed) {
    this.model = model;
    this.power = power;
    this.speed = speed;
  }
  public static ChainSaw initChainSaw(String model) {
    for (String m : MODELS) {
      if (model.endsWith(m)) {WARNING 3! 
        return new ChainSaw(model, null, WARNING 5!
          (int) (Math.random() * 100));
      }
    }
    return null; WARNING 1,2!
  }
  public int performance(ChainSaw[] css) {
    int score = 0;
    for (ChainSaw cs : css) { WARNING 3!
      score += Integer.compare(
        this.speed,cs.speed); WARNING 4!
    }
    return score;
  }
  public void start() {
    if (!started) {
      System.out.println("Started ...");
      started = true;
    }
  }
  public void stop() {
    if (started) {
      System.out.println("Stopped ...");
      started = false;
    }
  } 
  public String getPower() {
    return power; WARNING 5!
  }
  @Override
  public String toString() {
    return "ChainSaw{" + "model=" + model 
      + ", speed=" + speed + ", started=" + started + '}';
  } 
}

You noticed the warnings? Of course you did! There are five major scenarios behind most NullPointerExceptions (NPEs), and each of them is present in the previous class. Prior to JDK 14, an NPE didn’t contain detailed information about the cause. Look at this exception:

Exception in thread "main" java.lang.NullPointerException
    at modern.challenge.Main.main(Main.java:21)

This message is just a starting point for the debugging process. We don’t know the root cause of this NPE or which variable is null. But, starting with JDK 14 (JEP 358), we have really helpful NPE messages. For example, in JDK 14+, the previous message looks as follows:

Exception in thread "main" java.lang.NullPointerException: Cannot invoke "modern.challenge.Strings.reverse()" because "str" is null
    at modern.challenge.Main.main(Main.java:21)

The highlighted part of the message gives us important information about the root cause of this NPE. Now, we know that the str variable is null, so no need to debug further. We can just focus on how to fix this issue.

Next, let’s tackle each of the five major root causes of NPEs.

WARNING 1! NPE when calling an instance method via a null object

Consider the following code written by a client of ChainSaw:

ChainSaw cs = ChainSaw.initChainSaw("QW-T650");
cs.start(); // 'cs' is null

The client passes a chainsaw model that is not supported by this class, so the initChainSaw() method returns null. This is really bad because every time the client uses the cs variable, they will get back an NPE as follows:

Exception in thread "main" java.lang.NullPointerException: Cannot invoke "modern.challenge.ChainSaw.start()" because "cs" is null
    at modern.challenge.Main.main(Main.java:9)

Instead of returning null, it is better to throw an explicit exception that informs the client that they cannot continue because we don’t have this chainsaw model (we can go for the classical IllegalArgumentException or, more suggestive in this case but quite uncommon for null value handling, UnsupportedOperationException). This may be the proper fix in this case, but it is not universally true. There are cases when it is better to return an empty object (for example, an empty string, collection, or array) or a default object (for example, an object with minimalist settings) that doesn’t break the client code. Since JDK 8, we can also use Optional. Of course, there are cases when returning null makes sense, but that is more common in APIs and special situations.
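
For instance, an Optional-based variant could be sketched as follows (a hypothetical findChainSaw() method meant only to illustrate the alternative; it is not the fix applied at the end of this problem):

public static Optional<ChainSaw> findChainSaw(String model) {
  for (String m : MODELS) {
    if (model.endsWith(m)) {
      return Optional.of(new ChainSaw(
        model, null, (int) (Math.random() * 100)));
    }
  }
  return Optional.empty(); // no null leaks to the client
}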

WARNING 2! NPE when accessing (or modifying) the field of a null object

Consider the following code written by a client of ChainSaw:

ChainSaw cs = ChainSaw.initChainSaw("QW-T650");
boolean isStarted = cs.started; // 'cs' is null

Practically, the NPE, in this case, has the same root cause as the previous case. We try to access the started field of ChainSaw. Since this is a primitive boolean, it was initialized by the JVM with false, but we cannot “see” that because we try to access this field through a null reference represented by cs.

WARNING 3! NPE when null is passed in the method argument

Consider the following code written by a client of ChainSaw:

ChainSaw cs = ChainSaw.initChainSaw(null);

You are not a good citizen if you want a null ChainSaw, but who am I to judge? It is possible for this to happen and will lead to the following NPE:

Exception in thread "main" java.lang.NullPointerException: Cannot invoke "String.endsWith(String)" because "model" is null
   at modern.challenge.ChainSaw.initChainSaw(ChainSaw.java:25)
   at modern.challenge.Main.main(Main.java:16)

The message is crystal clear. We attempt to call the String.endsWith() method with a null argument represented by the model variable. To fix this issue, we have to add a guard condition to ensure that the passed model argument is not null (and, eventually, not empty). In this case, we can throw an IllegalArgumentException to inform the client that we are here and we are guarding. Another approach may consist of replacing the given null with a dummy model that passes through our code without issues (for instance, since the model is a String, we can reassign an empty string, ""). However, personally, I don’t recommend this approach, not even for small methods. You never know how the code will evolve, and such dummy reassignments can lead to brittle code.

WARNING 4! NPE when accessing the index value of a null array/collection

Consider the following code written by a client of ChainSaw:

ChainSaw myChainSaw = ChainSaw.initChainSaw("QWE-T800");
ChainSaw[] friendsChainSaw = new ChainSaw[]{
  ChainSaw.initChainSaw("Q22-T450"),
  ChainSaw.initChainSaw("QRT-T300"),
  ChainSaw.initChainSaw("Q-T900"),
  null, // ops!
  ChainSaw.initChainSaw("QMM-T850"), // model is not supported
  ChainSaw.initChainSaw("ASR-T900")
};
int score = myChainSaw.performance(friendsChainSaw);

Creating an array of ChainSaw was quite challenging in this example. We accidentally slipped in a null value (actually, we did it intentionally) and an unsupported model. In return, we get the following NPE:

Exception in thread "main" java.lang.NullPointerException: Cannot read field "speed" because "cs" is null
    at modern.challenge.ChainSaw.performance(ChainSaw.java:37)
    at modern.challenge.Main.main(Main.java:31)

The message informs us that the cs variable is null. This is happening at line 37 in ChainSaw, so in the for loop of the performance() method. While looping over the given array, our code iterated over the null value, which doesn’t have a speed field. Pay attention to this kind of scenario: even if the given array/collection itself is not null, it doesn’t mean that it cannot contain null items. So, adding a guarding check before handling each item can save us from an NPE in this case. Depending on the context, we can throw an IllegalArgumentException when the loop passes over the first null or simply ignore null values and not break the flow (in general, this is more suitable). Of course, using a collection that doesn’t accept null values is also a good approach (Apache Commons Collections and Guava have such collections).

WARNING 5! NPE when accessing a field via a getter

Consider the following code written by a client of ChainSaw:

ChainSaw cs = ChainSaw.initChainSaw("T5A-T800");
String power = cs.getPower();
System.out.println(power.concat(" Watts"));

And, the associated NPE:

Exception in thread "main" java.lang.NullPointerException: Cannot invoke "String.concat(String)" because "power" is null
    at modern.challenge.Main.main(Main.java:37)

Practically, the getter getPower() returned null since the power field is null. Why? The answer is in the line return new ChainSaw(model, null, (int) (Math.random() * 100)); of the initChainSaw() method. Because we didn’t decide yet on the algorithm for calculating the power of a chainsaw, we passed null to the ChainSaw constructor. Further, the constructor simply sets the power field as this.power = power. If it was a public constructor, then most probably we would have added some guarded conditions, but being a private constructor, it is better to fix the issue right from the root and not pass that null. Since the power is a String, we can simply pass an empty string or a suggestive string such as UNKNOWN_POWER. We also may leave a TODO comment in code such as // TODO (JIRA ####): replace UNKNOWN_POWER with code. This will remind us to fix this in the next release. Meanwhile, the code has eliminated the NPE risk.

Okay, after we fixed all these five NPE risks, the code has become the following (the added code is highlighted):

public final class ChainSaw {
  private static final String UNKNOWN_POWER = "UNKNOWN";
  private static final List<String> MODELS
    = List.of("T300", "T450", "T700", "T800", "T900");
  private final String model;
  private final String power;
  private final int speed;
  public boolean started;
  private ChainSaw(String model, String power, int speed) {
    this.model = model;
    this.power = power;
    this.speed = speed;
  }
  public static ChainSaw initChainSaw(String model) {
    if (model == null || model.isBlank()) {
      throw new IllegalArgumentException(
        "The given model cannot be null/empty");
    }
    for (String m : MODELS) {
      if (model.endsWith(m)) {
        // TODO (JIRA ####): replace UNKNOWN_POWER with code
        return new ChainSaw(model, UNKNOWN_POWER,
          (int) (Math.random() * 100));
      }
    }
    throw new UnsupportedOperationException(
      "Model " + model + " is not supported");
  }
  public int performance(ChainSaw[] css) {
    if (css == null) {
      throw new IllegalArgumentException(
        "The given models cannot be null");
    }
    int score = 0;
    for (ChainSaw cs : css) {
      if (cs != null) {
        score += Integer.compare(this.speed, cs.speed);
      }
    }
    return score;
  }
  public void start() {
    if (!started) {
      System.out.println("Started ...");
      started = true;
    }
  }
  public void stop() {
    if (started) {
      System.out.println("Stopped ...");
      started = false;
    }
  }
  public String getPower() {
    return power;
  }
  @Override
  public String toString() {
    return "ChainSaw{" + "model=" + model
      + ", speed=" + speed + ", started=" + started + '}';
  }
}

Done! Now, our code is NPE-free. At least until reality contradicts us and a new NPE occurs.

53. Using yield in switch expressions

Here, we’re going to look at how switch expressions have evolved in JDK 13+.

Java SE 13 added the new yield statement, which can be used instead of the break statement in switch expressions.

We know that a JDK 12+ switch expression can be written as follows (playerType is a Java enum):

return switch (playerType) {
  case TENNIS ->
    new TennisPlayer();
  case FOOTBALL ->
    new FootballPlayer();
  ...
};

Moreover, we know that a label’s arrow can point to a curly-braces block as well (this works only in JDK 12, not in JDK 13+):

return switch (playerType) {
  case TENNIS -> {
    System.out.println("Creating a TennisPlayer ...");
    break new TennisPlayer();
  }
  case FOOTBALL -> {
    System.out.println("Creating a FootballPlayer ...");
    break new FootballPlayer();
  }
  ...
};

Since break can be confusing because it can be used in old-school switch statements and in the new switch expressions, JDK 13 added the yield statement to be used instead of break. The yield statement takes one argument representing the value produced by the current case. The previous examples can be written from JDK 13+ as follows:

return switch (playerType) {
  case TENNIS:
    yield new TennisPlayer();
  case FOOTBALL:
    yield new FootballPlayer();
  ...
};
return switch (playerType) {
  case TENNIS -> {
    System.out.println("Creating a TennisPlayer ...");
    yield new TennisPlayer();
  }
  case FOOTBALL -> {
    System.out.println("Creating a FootballPlayer ...");
    yield new FootballPlayer();
  }
  ...
};

In other words, starting with JDK 13+, a switch expression can rely on yield but not on break, and a switch statement can rely on break but not on yield.
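
As a quick contrast, here is an old-school switch statement (assuming the same playerType enum and player classes) that still relies on break:

Player player;
switch (playerType) {
  case TENNIS:
    player = new TennisPlayer();
    break;
  case FOOTBALL:
    player = new FootballPlayer();
    break;
  default:
    throw new IllegalArgumentException(
      "Invalid player type: " + playerType);
}
return player;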

54. Tackling the case null clause in switch

Before JDK 17, a null case in a switch was commonly coded as a guarding condition outside the switch, as in the following example:

private static Player createPlayer(PlayerTypes playerType) {
  // handling null values in a condition outside switch
  if (playerType == null) {
    throw new IllegalArgumentException(
     "Player type cannot be null");
  }
  return switch (playerType) {
    case TENNIS -> new TennisPlayer();
    case FOOTBALL -> new FootballPlayer();
    ...
  };
}

Starting with JDK 17+ (JEP 427), we can treat a null case as any other common case. For instance, here we have a null case that is responsible for handling the scenarios when the passed argument is null:

private static Player createPlayer(PlayerTypes playerType) {
  return switch (playerType) {
    case TENNIS -> new TennisPlayer();
    case FOOTBALL -> new FootballPlayer();
    case SNOOKER -> new SnookerPlayer();
    case null -> throw new NullPointerException(
                   "Player type cannot be null");
    case UNKNOWN -> throw new UnknownPlayerException(
                      "Player type is unknown");
    // default is not mandatory
    default -> throw new IllegalArgumentException(
                 "Invalid player type: " + playerType);
  };
}

In certain contexts, null and default have the same meaning, so we can chain them in the same case statement:

private static Player createPlayer(PlayerTypes playerType) {
  return switch (playerType) {
    case TENNIS -> new TennisPlayer();
    case FOOTBALL -> new FootballPlayer();
    ...
    case null, default ->
      throw new IllegalArgumentException(
       "Invalid player type: " + playerType);
  };
}

Or you might find it more readable like this:

...    
case TENNIS: yield new TennisPlayer();
case FOOTBALL: yield new FootballPlayer();
...
case null, default:
  throw new IllegalArgumentException(
    "Invalid player type: " + playerType);
...

Personally, I suggest you think twice before patching your switch expressions with case null, especially if you plan to do it only for silently sweeping these values. Overall, your code may become brittle and exposed to unexpected behaviors/results that ignore the presence of null values. In the bundled code, you can test the complete examples.

55. Taking on the hard way to discover equals()

Check out the following code:

Integer x1 = 14; Integer y1 = 14;
Integer x2 = 129; Integer y2 = 129;
List<Integer> listOfInt1 = new ArrayList<>(
 Arrays.asList(x1, y1, x2, y2));
listOfInt1.removeIf(t -> t == x1 || t == x2);
List<Integer> listOfInt2 = new ArrayList<>(
 Arrays.asList(x1, y1, x2, y2));
listOfInt2.removeIf(t -> t.equals(x1) || t.equals(x2));

So, initially, listOfInt1 and listOfInt2 have the same items, [x1=14, y1=14, x2=129, y2=129]. But what will listOfInt1/listOfInt2 contain after executing the code based on removeIf() and ==, respectively equals()?

The first list will remain with a single item, [129]. When t is x1, we know that x1 == x1, so 14 is removed. But why is y1 removed as well? When t is y1, we expect y1 == x1 to be false since, via ==, we compare the objects’ references in memory, not their values. Obviously, y1 and x1 should have different references in memory… or shouldn’t they? Actually, Java caches the Integer instances in the range -128 … 127. Since x1=14 is cached, y1=14 reuses the cached instance, so no new Integer is created. This is why y1 == x1 is true and y1 is removed as well. Next, t is x2, and x2 == x2, so x2 is removed. Finally, t is y2, but y2 == x2 returns false since 129 is outside the cached range, so x2 and y2 have different references in memory.

On the other hand, when we use equals(), which is the recommended approach for comparing the objects’ values, the resulting list is empty. When t is x1, x1.equals(x1) is true, so 14 is removed. When t is y1, y1.equals(x1) is true, so y1 is removed as well. Next, t is x2, and x2.equals(x2) is true, so x2 is removed. Finally, t is y2, and y2.equals(x2) is true, so y2 is removed as well.

56. Hooking instanceof in a nutshell

Having an object (o) and a type (t), we can use the instanceof operator to test if o is of type t by writing o instanceof t. This is a boolean operator that is very useful to ensure the success of a subsequent casting operation. For instance, check the following:

interface Furniture {};
class Plywood {};
class Wardrobe extends Plywood implements Furniture {};

instanceof returns true if we test the object (for instance, Wardrobe) against the type itself:

Wardrobe wardrobe = new Wardrobe();
if(wardrobe instanceof Wardrobe) { } // true
Plywood plywood = new Plywood();
if(plywood instanceof Plywood) { } // true

instanceof returns true if the tested object (for instance, Wardrobe) is an instance of a subclass of the type (for instance Plywood):

Wardrobe wardrobe = new Wardrobe();
if(wardrobe instanceof Plywood) {} // true

instanceof returns true if the tested object (for instance, Wardrobe) implements the interface represented by the type (for instance, Furniture):

Wardrobe wardrobe = new Wardrobe();
if(wardrobe instanceof Furniture) {} // true

Based on this, consider the following note:

Important note

The logic behind instanceof relies on the IS-A relationship (this is detailed in The Complete Coding Interview Guide in Java, Chapter 6, What is inheritance?). In a nutshell, this relationship is based on interface implementation or class inheritance. For instance, wardrobe instanceof Plywood returns true because Wardrobe extends Plywood, so Wardrobe IS A Plywood. Similarly, Wardrobe IS A Furniture. On the other hand, Plywood IS-not-A Furniture, so plywood instanceof Furniture returns false. In this context, since every Java class extends Object, we know that foo instanceof Object returns true as long as foo is an instance of a Java class. In addition, null instanceof Object (or any other object) returns false, so this operator doesn’t require an explicit null check.
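
To see the note in action, here is a quick sketch (same Furniture/Plywood/Wardrobe hierarchy as above) covering the negative cases:

Plywood plywood = new Plywood();
Wardrobe wardrobe = null;
System.out.println(plywood instanceof Furniture); // false (Plywood IS-not-A Furniture)
System.out.println(plywood instanceof Object);    // true (every class IS-A Object)
System.out.println(wardrobe instanceof Wardrobe); // false (null is not an instance of anything)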

Finally, keep in mind that instanceof works only with reified types (reified type information is available at runtime), which include:

  • Primitive types (int, float)
  • Raw types (List, Set)
  • Non-generic classes/interfaces (String)
  • Generic types with unbounded wildcards (List<?>, Map<?, ?>)
  • Arrays of reifiable types (String[], Map<?, ?>[], Set<?>[])

This means that we cannot use the instanceof operator (or casts) with parameterized types because type erasure alters all type parameters in generic code, so we cannot say which parameterized type of a generic type is in use at runtime.
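
For instance, here is a small sketch of what does and does not compile (the commented-out line is rejected by the compiler):

Object o = List.of("a", "b");
// System.out.println(o instanceof List<String>); // does not compile: List<String> is not reifiable
System.out.println(o instanceof List<?>);          // true: unbounded wildcard is reifiable
System.out.println(o instanceof List);             // true: raw type is reifiable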

57. Introducing pattern matching

JDK 16 has introduced one of the major and complex features of Java, referred to as pattern matching. The future is wide open for this topic.

In a nutshell, pattern matching defines a synthetic expression for checking/testing whether a given variable has certain properties. If those properties are met, then one or more parts of that variable are automatically extracted into other variables. From this point forward, we can use those extracted variables.

A pattern matching instance (pay attention, this has nothing to do with design patterns) is a structure made of several components as follows (this is basically the pattern matching terminology):

  • The target operand or the argument of the predicate: This is a variable (or an expression) that we aim to match.
  • The predicate (or test): This is a check that takes place at runtime and aims to determine if the given target operand does or doesn’t have one or more properties (we match the target operand against the properties).
  • One or more pattern variables (also referred to as binding variables): These variables are automatically extracted from the target operand if and only if the predicate/test succeeds.
  • Finally, we have the pattern itself, which is represented by the predicate + binding variables.

Figure 2.31: Pattern matching components

So, we can say that Java pattern matching is a synthetic expression of a complex solution composed of four components: target operand, predicate/test, binding variable(s), and pattern = predicate + binding variable(s).

The scope of binding variables in pattern matching

The compiler decides the scope (visibility) of the binding variables, so we don’t have to bother with such aspects via special modifiers or other tricks. In the case of predicates that always pass (like an if(true) {}), the compiler scopes the binding variables exactly as for the Java local variables.

But, most patterns make sense precisely because the predicate may fail. In such cases, the compiler applies a technique called flow scoping. That is actually a combination of the regular scoping and definitive assignment.

Definitive assignment is a technique used by the compiler, based on the structure of statements and expressions, to ensure that a local variable (or blank final field) is definitely assigned before it is accessed by the code. In a pattern-matching context, a binding variable is assigned only if the predicate passes, so the aim of definitive assignment is to find out the precise place where this happens. Next, the regular block scope represents the code where the binding variable is in scope.

Do you want this as a simple important note? Here it is.

Important note

In pattern matching, the binding variable is flow-scoped. In other words, the scope of a binding variable covers only the block where the predicate passed.

We will cover this topic in Problem 59.

Guarded patterns

So far, we know that a pattern relies on a predicate/test for deciding whether the binding variables should be extracted from the target operand or not. In addition, sometimes we need to refine this predicate by appending to it extra boolean checks based on the extracted binding variables. We name this a guarded pattern. In other words, if the predicate evaluates to true, then the binding variables are extracted and they enter in further boolean checks. If these checks are evaluated to true, we can say that the target operand matches this guarded pattern.

We cover this in Problem 64.

Type coverage

In a nutshell, the switch expressions and switch statements that use null and/or pattern labels should be exhaustive. In other words, we must cover all the possible values with switch case labels.

We cover this in Problem 66.

Current status of pattern matching

Currently, Java supports type pattern matching for instanceof and switch, and record patterns (destructuring patterns for records, covered in Chapter 4). These are final features in JDK 21.

58. Introducing type pattern matching for instanceof

Can you name the shortcomings of the following classical snippet of code (this is a simple code used to save different kinds of artifacts on a USB device)?

public static String save(Object o) throws IOException {
  if (o instanceof File) {
    File file = (File) o;
    return "Saving a file of size: " 
      + String.format("%,d bytes", file.length());
  } 
  if (o instanceof Path) {
    Path path = (Path) o;
    return "Saving a file of size: " 
      + String.format("%,d bytes", Files.size(path));
  }
  if (o instanceof String) {
    String str = (String) o;
    return "Saving a string of size: " 
      + String.format("%,d bytes", str.length());
  }
  return "I cannot save the given object";
}

You’re right…type checking and casting are burdensome to write and read. Moreover, those check-cast sequences are error-prone (it is easy to change the checked type or the casted type and forget to change the type of the other object). Basically, in each conditional statement, we do three steps, as follows:

  1. First, we do a type check (for instance, o instanceof File).
  2. Second, we do a type conversion via cast (for instance, (File) o).
  3. Third, we do a variable assignment (for instance, File file =).

But, starting with JDK 16 (JEP 394), we can use type pattern matching for instanceof to perform the previous three steps in one expression. The type pattern is the first category of patterns supported by Java. Let’s see the previous code rewritten via the type pattern:

public static String save(Object o) throws IOException {
  if (o instanceof File file) {
    return "Saving a file of size: " 
      + String.format("%,d bytes", file.length());
  }
  if (o instanceof String str) {
    return "Saving a string of size: " 
      + String.format("%,d bytes", str.length());
  }
  if (o instanceof Path path) {
    return "Saving a file of size: " 
      + String.format("%,d bytes", Files.size(path));
  }
  return "I cannot save the given object";
}

In each if-then statement, we have a test/predicate to determine the type of Object o, a cast of Object o to File, Path, or String, and a destructuring phase for extracting either the length or the size from Object o.

The piece of code (o instanceof File file) is not just syntactic sugar. It is not just a convenient shortcut for the old-fashioned code to reduce the ceremony of conditional state extraction. This is a type pattern in action!

Practically, we match the variable o against File file. More precisely, we match the type of o against the type File. We have that o is the target operand (the argument of the predicate), instanceof File is the predicate, and the variable file is the pattern or binding variable that is automatically created only if instanceof File returns true. Moreover, instanceof File file is the type pattern, or in short, File file is the pattern itself. The following figure illustrates this statement:


Figure 2.32: Type pattern matching for instanceof

In the type pattern for instanceof, there is no need to perform explicit null checks (exactly as in the case of plain instanceof), and no upcasting is allowed. Both of the following examples generate a compilation error in JDK 16-20, but not in JDK 14/15/21 (this is weird indeed):

if ("foo" instanceof String str) {}
if ("foo" instanceof CharSequence sequence) {}

The compilation error points out that the expression type cannot be a subtype of pattern type (no upcasting is allowed). However, with plain instanceof, this works in all JDKs:

if ("foo" instanceof String) {}
if ("foo" instanceof CharSequence) {}

Next, let’s talk about the scope of binding variables.

59. Handling the scope of a binding variable in type patterns for instanceof

From Problem 57, we know the main rules for scoping binding variables in pattern matching. Moreover, we know from the previous problem that in the type pattern for instanceof, we have a single binding variable. It is time to see some practical examples, so let’s quickly borrow this snippet from the previous problem:

if (o instanceof File file) {
  return "Saving a file of size: " 
    + String.format("%,d bytes", file.length());
}
// 'file' is out of scope here

In this snippet, the file binding variable is visible in the if-then block. Once the block is closed, the file binding variable is out of scope. But, thanks to flow scoping, a binding variable can be used in the if statement that has introduced it to define a so-called guarded pattern. Here it is:

// 'file' is created ONLY if 'instanceof' returns true
if (o instanceof File file
    // this is evaluated ONLY if 'file' was created
    && file.length() > 0 && file.length() < 1000) {
  return "Saving a file of size: " 
    + String.format("%,d bytes", file.length());
}
// another example
if (o instanceof Path path
     && Files.size(path) > 0 && Files.size(path) < 1000) {
  return "Saving a file of size: " 
    + String.format("%,d bytes", Files.size(path));
}

The conditional part that starts with the && short-circuit operator is evaluated only if the instanceof operator evaluates to true. This means that you cannot use the || operator instead of &&. For instance, it is not logical to write this:

// this will not compile
if (o instanceof Path path
  || Files.size(path) > 0 && Files.size(path) < 1000) {...}

On the other hand, this is perfectly acceptable:

if (o instanceof Path path
  && (Files.size(path) > 0 || Files.size(path) < 1000)) {...}

We can also extend the scope of the binding variable as follows:

if (!(o instanceof String str)) {
  // str is not available here
  return "I cannot save the given object";
} else {
  return "Saving a string of size: " 
    + String.format("%,d bytes", str.length());
}

Since we negate the if-then statement, the str binding variable is available in the else branch. Following this logic, we can use early returns as well:

public int getStringLength(Object o) { 
  if (!(o instanceof String str)) {
    return 0;
  }
  return str.length();
}

Thanks to flow scoping, the compiler can set up strict boundaries for the scope of binding variables. For instance, in the following code, there is no risk of overlapping even if we keep using the same name for the binding variables:

private String strNumber(Object o) {
 if (o instanceof Integer nr) {
   return String.valueOf(nr.intValue());
 } else if (o instanceof Long nr) {
   return String.valueOf(nr.longValue());
 } else {
   // nr is out of scope here
   return "Probably a float number";
 }
}

Here, each nr binding variable has a scope that covers only its own branch. No overlapping, no conflicts! However, using the same name for multiple binding variables can be a little bit confusing, so it is better to avoid it. For instance, we can use intNr and longNr instead of the plain nr.
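
Here is a quick sketch of the same method with distinct names for the binding variables:

private String strNumber(Object o) {
  if (o instanceof Integer intNr) {
    return String.valueOf(intNr.intValue());
  } else if (o instanceof Long longNr) {
    return String.valueOf(longNr.longValue());
  } else {
    // no binding variable is in scope here
    return "Probably a float number";
  }
}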

Another confusing scenario that should be avoided involves binding variables that hide fields. Check out this code:

private final String str
  = "   I am a string with leading and trailing spaces     ";
public String convert(Object o) {
  // local variable (binding variable) hides a field
  if (o instanceof String str) { 
    return str.strip(); // refers to binding variable, str
  } else {
    return str.strip(); // refers to field, str
  } 
}

So, using the same name for binding variables (this is true for any local variable as well) and fields is a bad practice that should be avoided.
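
A simple fix is to give the binding variable a distinct name, as in the following sketch (it reuses the str field declared earlier):

public String convert(Object o) {
  if (o instanceof String input) {
    return input.strip(); // unambiguously the binding variable
  } else {
    return str.strip();   // unambiguously the field
  }
}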

In JDK 14/15, we cannot reassign binding variables because they are declared final by default. However, JDK 16+ solved the asymmetries that may occur between local and binding variables by removing the final modifier. So, starting with JDK 16+, we can reassign binding variables as in the following snippet:

String dummy = "";
private int getLength(Object o) { 
  if(o instanceof String str) {
      str = dummy; // reassigning binding variable
      // returns the length of 'dummy' not the passed 'str'
      return str.length(); 
  }
  return 0;
}

Even if this is possible, it is highly recommended to avoid such code smells and keep the world clean and happy by not re-assigning your binding variables.

60. Rewriting equals() via type patterns for instanceof

It is not mandatory to rely on instanceof to implement the equals() method, but it is a convenient approach to write something as follows:

public class MyPoint {
  private final int x;
  private final int y;
  private final int z;
  public MyPoint(int x, int y, int z) {
    this.x = x;
    this.y = y;
    this.z = z;
  }
  @Override
  public boolean equals(Object obj) {
    if (this == obj) {
      return true;
    }
    if (!(obj instanceof MyPoint)) {
      return false;
    }
    final MyPoint other = (MyPoint) obj;
    return (this.x == other.x && this.y == other.y
      && this.z == other.z); 
  }       
}

If you are a fan of the previous approach for implementing equals(), then you’ll love rewriting it via a type pattern for instanceof. Check out the following snippet:

@Override
public boolean equals(Object obj) {
  if (this == obj) {
    return true;
  }
  return obj instanceof MyPoint other
    && this.x == other.x && this.y == other.y
    && this.z == other.z; 
}

If MyPoint is generic (MyPoint<E>) then simply use a wildcard as follows (more details are available in the next problem):

return obj instanceof MyPoint<?> other
  && this.x == other.x && this.y == other.y
  && this.z == other.z;

Cool, right?! However, pay attention that using instanceof to express the equals() contract imposes the usage of a final class or a final equals(). Otherwise, if subclasses are allowed to override equals(), then instanceof may cause transitivity/symmetry bugs. A good approach is to pass equals() through a dedicated verifier such as EqualsVerifier (https://github.com/jqno/equalsverifier), which is capable of checking the validity of the equals() and hashCode() contracts.
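
For instance, a minimal test sketch (assuming the nl.jqno.equalsverifier library is available on the test classpath) looks as follows:

import nl.jqno.equalsverifier.EqualsVerifier;

public class MyPointContractTest {

  public static void main(String[] args) {
    // throws an AssertionError describing the violation if the
    // equals()/hashCode() contracts are broken (for instance, a missing
    // hashCode() or a non-final class with an instanceof-based equals())
    EqualsVerifier.forClass(MyPoint.class).verify();
  }
}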

61. Tackling type patterns for instanceof and generics

Consider the following snippet of code that uses instanceof in the old-school fashion:

public static <K, V> void process(Map<K, ? extends V> map) {
  if (map instanceof EnumMap<?, ? extends V>) {
    EnumMap<?, ? extends V> books 
     = (EnumMap<?, ? extends V>) map;
    if (books.get(Status.DRAFT) instanceof Book) {
      Book book = (Book) books.get(Status.DRAFT);
      book.review();
    }
  }
}
// use case
EnumMap<Status, Book> books = new EnumMap<>(Status.class);
books.put(Status.DRAFT, new Book());
books.put(Status.READY, new Book());
process(books);

As we know from Problem 56, we can combine instanceof with generic types via unbounded wildcards, such as our EnumMap<?, ? extends V> (or EnumMap<?, ?>, but not EnumMap<K, ? extends V>, EnumMap<K, ?>, or EnumMap<K, V>).

This code can be written more concisely via the type pattern for instanceof as follows:

public static <K, V> void process(Map<K, ? extends V> map) {
  if (map instanceof EnumMap<?, ? extends V> books
    && books.get(Status.DRAFT) instanceof Book book) {
      book.review();
  }
}

In the example based on plain instanceof, we can also replace EnumMap<?, ? extends V> with Map<?, ? extends V>. But, as we know from Problem 58, this is not possible with type patterns because the expression type cannot be a subtype of the pattern type (no upcasting is allowed). However, this is no longer an issue starting with JDK 21.

62. Tackling type patterns for instanceof and streams

Let’s consider a List<Engine> where Engine is an interface implemented by several classes such as HypersonicEngine, HighSpeedEngine, and RegularEngine. Our goal is to filter this List and eliminate all RegularEngine instances that are electric and cannot pass our autonomy test. So, we can write code as follows:

public static List<Engine> filterRegularEngines(
              List<Engine> engines, int testSpeed) {
  for (Iterator<Engine> i = engines.iterator(); i.hasNext();){
    final Engine e = i.next();
    if (e instanceof RegularEngine) {
      final RegularEngine popularEngine = (RegularEngine) e;
      if (popularEngine.isElectric()) {
        if (!hasEnoughAutonomy(popularEngine, testSpeed)) {
          i.remove();
        }
      }
    }
  }
  return engines;
}

But, starting with JDK 8, we can safely remove elements from a List without using an Iterator via a default method of java.util.Collection named public default boolean removeIf(Predicate<? super E> filter). If we combine this method (or, more generally, lambda-driven APIs such as the Stream API) with type patterns for instanceof, then we can simplify the previous code as follows:

public static List<Engine> filterRegularEngines(
              List<Engine> engines, int testSpeed) {
  engines.removeIf(e -> e instanceof RegularEngine engine 
    && engine.isElectric()
    && !hasEnoughAutonomy(engine, testSpeed));
  return engines;
}

So, whenever you have the chance to use type patterns with the Stream API, don’t hesitate.
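
For example, the following hypothetical sketch uses a type pattern directly in a Stream pipeline to collect only the electric RegularEngine instances that pass the autonomy test (it assumes the same Engine types and hasEnoughAutonomy() helper as before):

public static List<RegularEngine> collectAutonomousElectric(
              List<Engine> engines, int testSpeed) {
  return engines.stream()
    .filter(e -> e instanceof RegularEngine engine
      && engine.isElectric()
      && hasEnoughAutonomy(engine, testSpeed))
    .map(e -> (RegularEngine) e) // safe, the filter guarantees the type
    .toList();
}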

63. Introducing type pattern matching for switch

JDK 17 (JEP 406) added type pattern matching for switch as a preview feature. A second preview was available in JDK 18 (JEP 420), followed by a third preview in JDK 19 (JEP 427) and a fourth one in JDK 20 (JEP 433). The final release is available in JDK 21 as JEP 441.

Type pattern matching for switch allows the selector expression (that is, o in switch(o)) to be of any type, not just an enum constant, number, or string. By “any type,” I mean any type (any object type, enum type, array type, record type, or sealed type)! Moreover, type pattern matching is not limited to a single class hierarchy, as is the case with inheritance-based polymorphism. The case labels can contain type patterns (referred to as case pattern labels or, simply, pattern labels), so the selector expression (o) can be matched against a type pattern, not only against a constant.

In the next snippet of code, we rewrote the example from Problem 58 via a type pattern for switch:

public static String save(Object o) throws IOException {
  return switch(o) {
    case File file -> "Saving a file of size: " 
              + String.format("%,d bytes", file.length());
    case Path path -> "Saving a file of size: " 
              + String.format("%,d bytes", Files.size(path));
    case String str -> "Saving a string of size: " 
              + String.format("%,d bytes", str.length());
    case null -> "Why are you doing this?";
    default -> "I cannot save the given object";
  }; 
}

The following figure identifies the main players of a switch branch:

Figure 2.33: Type pattern matching for switch

The case for null is not mandatory. We have added it just for the sake of completeness. On the other hand, the default branch is a must, but this topic is covered later in this chapter.

64. Adding guarded pattern labels in switch

Do you remember that type patterns for instanceof can be refined with extra boolean checks applied to the binding variables to obtain fine-grained use cases? Well, we can do the same for the switch expressions that use pattern labels. The result is named guarded pattern labels. Let’s consider the following code:

private static String turnOnTheHeat(Heater heater) {
  return switch (heater) {
    case Stove stove -> "Make a fire in the stove";
    case Chimney chimney -> "Make a fire in the chimney";
    default -> "No heater available!";
  };
}

Having a Stove and a Chimney, this switch decides where to make a fire based on pattern labels. But, what will happen if the Chimney is electric? Obviously, we will have to plug it in instead of firing it up. This means that we should add a guarded pattern label that helps us to distinguish between an electric and a non-electric Chimney:

return switch (heater) {
  case Stove stove -> "Make a fire in the stove";
  case Chimney chimney
    && chimney.isElectric() -> "Plug in the chimney";
  case Chimney chimney -> "Make a fire in the chimney";
  default -> "No heater available!";
};

Well, that was easy, wasn’t it? Let’s have another example that starts from the following code:

enum FuelType { GASOLINE, HYDROGEN, KEROSENE }
class Vehicle {
  private final int gallon;
  private final FuelType fuel;
  ...
}
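
The constructor and the accessors are elided above; as a reference, here is a minimal sketch of the members assumed by the switch that follows (the getFuel() and getGallon() accessors):

class Vehicle {
  private final int gallon;
  private final FuelType fuel;

  Vehicle(int gallon, FuelType fuel) { // hypothetical constructor
    this.gallon = gallon;
    this.fuel = fuel;
  }

  public FuelType getFuel() { return fuel; }
  public int getGallon() { return gallon; }
}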

For each Vehicle, we know the fuel type and how many gallons of fuel fit in the tank. Now, we can write a switch that can rely on guarded pattern labels to try to guess the type of the vehicle based on this information:

private static String theVehicle(Vehicle vehicle) {
  return switch (vehicle) {
    case Vehicle v && v.getFuel().equals(GASOLINE)
      && v.getGallon() < 120 -> "probably a car/van"; 
    case Vehicle v && v.getFuel().equals(GASOLINE)
      && v.getGallon() > 120 -> "probably a big rig"; 
    case Vehicle v && v.getFuel().equals(HYDROGEN) 
      && v.getGallon() < 300_000 -> "probably an aircraft";
    case Vehicle v && v.getFuel().equals(HYDROGEN) 
      && v.getGallon() > 300_000 -> "probably a rocket";
    case Vehicle v && v.getFuel().equals(KEROSENE) 
      && v.getGallon() > 2_000 && v.getGallon() < 6_000 
         -> "probably a narrow-body aircraft";
    case Vehicle v && v.getFuel().equals(KEROSENE) 
      && v.getGallon() > 6_000 && v.getGallon() < 55_000
         -> "probably a large (B747-400) aircraft";
    default -> "no clue";
  };
}

Notice that the pattern labels are the same in all cases (Vehicle v) and the decision is refined via the guarded conditions. The previous examples work just fine in JDK 17 and 18, but they don’t compile in JDK 19+. Because the && operator was considered confusing, starting with JDK 19, we have to deal with a refined syntax. Practically, instead of the && operator, we use the new context-specific keyword when between the pattern label and the refining boolean checks. So, in JDK 19+, the previous code becomes:

return switch (vehicle) {
  case Vehicle v when (v.getFuel().equals(GASOLINE)
    && v.getGallon() < 120) -> "probably a car/van"; 
  case Vehicle v when (v.getFuel().equals(GASOLINE)
    && v.getGallon() > 120) -> "probably a big rig"; 
  ...
  case Vehicle v when (v.getFuel().equals(KEROSENE) 
    && v.getGallon() > 6_000 && v.getGallon() < 55_000)
      -> "probably a large (B747-400) aircraft";
  default -> "no clue";
};

In the bundled code, you can find both versions for JDK 17/18, and JDK 19+.

65. Dealing with pattern label dominance in switch

The selector expression is matched against the available pattern labels from top to bottom (or, from the first to the last), in the exact order in which we wrote them in the switch block. This means that the first match wins. Let’s assume that we have the following base class (Pill) and some pills (Nurofen, Ibuprofen, and Piafen):

abstract class Pill {}
class Nurofen extends Pill {}
class Ibuprofen extends Pill {}
class Piafen extends Pill {}

Hierarchically speaking, Nurofen, Ibuprofen, and Piafen are three classes placed at the same hierarchical level since all of them have the Pill class as the base class. In an IS-A inheritance relationship, we say that Nurofen is a Pill, Ibuprofen is a Pill, and Piafen is also a Pill. Next, let’s use a switch to serve our clients the proper headache pill:

private static String headache(Pill o) {
  return switch(o) {
    case Nurofen nurofen -> "Get Nurofen ...";
    case Ibuprofen ibuprofen -> "Get Ibuprofen ...";
    case Piafen piafen -> "Get Piafen ...";
    default -> "Sorry, we cannot solve your headache!";
  };
}

Calling headache(new Nurofen()) will match the first pattern label, Nurofen nurofen. In the same manner, headache(new Ibuprofen()) matches the second pattern label, and headache(new Piafen()) matches the third one. No matter how we mix the order of these label cases, they will work as expected because they are on the same level and none of them dominate the others.

For instance, since people don’t want headaches, they order a lot of Nurofen, so we don’t have any anymore. We represent this by removing/commenting out the corresponding case:

return switch(o) { 
  // case Nurofen nurofen -> "Get Nurofen ...";
  case Ibuprofen ibuprofen -> "Get Ibuprofen ...";
  case Piafen piafen -> "Get Piafen ...";
  default -> "Sorry, we cannot solve your headache!";
}; 

So, what happens when a client wants Nurofen? You’re right … the default branch will take action since Ibuprofen and Piafen don’t match the selector expression.

But, what will happen if we modify the switch as follows?

return switch(o) { 
  case Pill pill -> "Get a headache pill ...";
  case Nurofen nurofen -> "Get Nurofen ...";
  case Ibuprofen ibuprofen -> "Get Ibuprofen ...";
  case Piafen piafen -> "Get Piafen ...";
};

Adding the Pill base class as a pattern label case allows us to remove the default branch since we cover all possible values (this is covered in detail in Problem 66). This time, the compiler will raise an error to inform us that the Pill label case dominates the rest of the label cases. Practically, the first label case Pill pill dominates all other label cases because every value that matches any of the Nurofen nurofen, Ibuprofen ibuprofen, Piafen piafen patterns also matches the pattern Pill pill. So, Pill pill always wins while the rest of the label cases are useless. Switching Pill pill with Nurofen nurofen will give a chance to Nurofen nurofen, but Pill pill will still dominate the remaining two. So, we can eliminate the dominance of the base class Pill by moving its label case to the last position:

return switch(o) { 
  case Nurofen nurofen -> "Get Nurofen ...";
  case Ibuprofen ibuprofen -> "Get Ibuprofen ...";
  case Piafen piafen -> "Get Piafen ...";
  case Pill pill -> "Get a headache pill ...";
};

Now, every pattern label has a chance to win.

Let’s have another example that starts from this hierarchy:

abstract class Drink {}
class Small extends Drink {}
class Medium extends Small {}
class Large extends Medium {}
class Extra extends Medium {}
class Huge extends Large {}
class Jumbo extends Extra {}

This time, we have seven classes arranged in a multi-level hierarchy. If we exclude the base class Drink, we can represent the rest of them in a switch as follows:

private static String buyDrink(Drink o) {
  return switch(o) { 
    case Jumbo j: yield "We can give a Jumbo ...";
    case Huge h: yield "We can give a Huge ..."; 
    case Extra e: yield "We can give a Extra ...";
    case Large l: yield "We can give a Large ...";
    case Medium m: yield "We can give a Medium ...";
    case Small s: yield "We can give a Small ...";
    default: yield "Sorry, we don't have this drink!";
  };
}

The order of pattern labels is imposed by the class hierarchy and is quite strict, but we can make some changes without creating any dominance issues. For instance, since Extra and Large are both subclasses of Medium, we can switch their positions. The same applies to Jumbo and Huge, which are subclasses of Medium via Extra and Large, respectively.

In this context, the selector expression is evaluated by trying to match it against this hierarchy via the IS-A inheritance relationship. For instance, let’s order a Jumbo drink while there are no more Jumbo and Extra drinks:

return switch(o) { 
  case Huge h: yield "We can give a Huge ...";
  case Large l: yield "We can give a Large ...";
  case Medium m: yield "We can give a Medium ...";
  case Small s: yield "We can give a Small ...";
  default: yield "Sorry, we don't have this drink!";
};

If we order a Jumbo (o is Jumbo), then we will get a Medium. Why? The switch first tries to match Jumbo against Huge, without success. The same result is obtained while matching Jumbo against Large. However, when it matches Jumbo against Medium, it sees that Jumbo is a subclass of Medium via the Extra class. So, since Jumbo is a Medium, the Medium m pattern label is chosen. At this point, the Medium m label handles Jumbo, Extra, and Medium orders. So, soon we will be out of Medium as well:

return switch(o) {
  case Huge h: yield "We can give a Huge ...";
  case Large l: yield "We can give a Large ...";
  case Small s: yield "We can give a Small ...";
  default: yield "Sorry, we don't have this drink!";
};

This time, any request for Jumbo, Extra, Medium, or Small will give us a Small. I think you get the idea.

Let’s take a step further, and analyze this code:

private static int oneHundredDividedBy(Integer value) {
  return switch(value) { 
    case Integer i -> 100/i;
    case 0 -> 0;
  };
}

Have you spotted the problem? A pattern label case dominates a constant label case, so the compiler will complain about the fact that the second case (case 0) is dominated by the first case. This is normal, since 0 is an Integer as well, so it will match the pattern label. The solution requires switching the cases:

  return switch(value) { 
    case 0 -> 0;
    case Integer i -> 100/i;
  };

Here is another example that illustrates this type of dominance:

enum Hero { CAPTAIN_AMERICA, IRON_MAN, HULK }
private static String callMyMarvelHero(Hero hero) {
  return switch(hero) {
    case Hero h -> "Calling " + h;
    case HULK -> "Sorry, we cannot call this guy!";
  };
}

In this case, the constant is HULK and it is dominated by the Hero h pattern label case. This is normal, since HULK is also a Marvel hero, so Hero h will match all Marvel heroes including HULK. Again, the fix relies on switching the cases:

return switch(hero) { 
    case HULK -> "Sorry, we cannot call this guy!";
    case Hero h -> "Calling " + h;
  };

Okay, finally, let’s tackle this snippet of code:

private static int oneHundredDividedByPositive(Integer value){
  return switch(value) { 
    case Integer i when i > 0 -> 100/i;
    case 0 -> 0;
    case Integer i -> (-1) * 100/i;
  };
}

You may think that if we enforce the Integer i pattern label with a condition that forces i to be strictly positive, then the constant label will not be dominated. But, this is not true; a guarded pattern label still dominates a constant label. The proper order places the constant labels first, followed by guarded pattern labels, and finally, by non-guarded pattern labels. The next code fixes the previous one:

return switch(value) { 
  case 0 -> 0;
  case Integer i when i > 0 -> 100/i;
  case Integer i -> (-1) * 100/i;
};

Okay, I think you get the idea. Feel free to practice all these examples in the bundled code.

66. Dealing with completeness (type coverage) in pattern labels for switch

In a nutshell, switch expressions and switch statements that use null and/or pattern labels should be exhaustive. In other words, we must cover all the possible values with explicit switch case labels. Let’s consider the following example:

class Vehicle {}
class Car extends Vehicle {}
class Van extends Vehicle {}
private static String whatAmI(Vehicle vehicle) {
  return switch(vehicle) {
    case Car car -> "You're a car";
    case Van van -> "You're a van";
  };
}

This snippet of code doesn’t compile. The error is clear: The switch expression does not cover all possible input values. The compiler complains because we don’t have a case pattern label for Vehicle. This base class can be legitimately used without being a Car or a Van, so it is a valid candidate for our switch. We can add a case Vehicle or a default label. If you know that Vehicle will remain an empty base class, then you’ll probably go for a default label:

return switch(vehicle) {
    case Car car -> "You're a car";
    case Van van -> "You're a van";
    default -> "I have no idea ... what are you?";
  };

If we continue by adding another vehicle such as class Truck extends Vehicle {}, then this will be handled by the default branch. If we plan to use Vehicle as an independent class (for instance, to enrich it with methods and functionalities), then we will prefer to add a case Vehicle as follows:

return switch(vehicle) {
    case Car car -> "You're a car";
    case Van van -> "You're a van";
    case Vehicle v -> "You're a vehicle"; // total pattern
};

This time, the Truck class will match the case Vehicle branch. Of course, we can add a case Truck as well.

Important note

The Vehicle v pattern is named a total type pattern. There are two labels that we can use to match all possible values: the total type pattern (for instance, a base class or an interface) and the default label. Generally speaking, a total pattern is a pattern that can be used instead of the default label.

In the previous example, we can accommodate all possible values via the total pattern or the default label but not both. This makes sense since the whatAmI(Vehicle vehicle) method gets Vehicle as an argument. So, in this example, the selector expression can be only Vehicle or a subclass of Vehicle. How about modifying this method as whatAmI(Object o)?

private static String whatAmI(Object o) {
  return switch(o) {
    case Car car -> "You're a car";
    case Van van -> "You're a van";
    case Vehicle v -> "You're a vehicle"; // optional
    default -> "I have no idea ... what are you?";
  };
}

Now, the selector expression can be any type, which means that the total pattern Vehicle v is not total anymore. While Vehicle v becomes an optional ordinary pattern, the new total pattern is case Object obj. This means that we can cover all possible values by adding the default label or the case Object obj total pattern:

return switch(o) {
  case Car car -> "You're a car";
  case Van van -> "You're a van";
  case Vehicle v -> "You're a vehicle";  // optional
  case Object obj -> "You're an object"; // total pattern
};

I think you get the idea! How about using an interface for the base type? For instance, here is an example based on the Java built-in CharSequence interface:

public static String whatAmI(CharSequence cs) {
  return switch(cs) { 
    case String str -> "You're a string";
    case Segment segment -> "You're a Segment";
    case CharBuffer charbuffer -> "You're a CharBuffer";
    case StringBuffer strbuffer -> "You're a StringBuffer";
    case StringBuilder strbuilder -> "You're a StringBuilder";
  };
}

This snippet of code doesn’t compile. The error is clear: The switch expression does not cover all possible input values. But, if we check the documentation of CharSequence, we see that it is implemented by five classes: CharBuffer, Segment, String, StringBuffer, and StringBuilder. In our code, each of these classes is covered by a pattern label, so we have covered all possible values, right? Well, yes and no… “Yes” because we cover all possible values for the moment, and “no” because anyone can implement the CharSequence interface, which will break the exhaustive coverage of our switch. We can do this:

public class CoolChar implements CharSequence { ... }
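
Just as an illustration, a hypothetical minimal implementation might look as follows (CharSequence requires only length(), charAt(), and subSequence()):

public class CoolChar implements CharSequence {

  private final String value;

  public CoolChar(String value) {
    this.value = value;
  }

  @Override
  public int length() {
    return value.length();
  }

  @Override
  public char charAt(int index) {
    return value.charAt(index);
  }

  @Override
  public CharSequence subSequence(int start, int end) {
    return value.subSequence(start, end);
  }

  @Override
  public String toString() {
    return value;
  }
}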

At this moment, the switch expression doesn’t cover the CoolChar type. So, we still need a default label or the total pattern, case CharSequence charseq, as follows:

return switch(cs) { 
  case String str -> "You're a string";
  ...
  case StringBuilder strbuilder -> "You're a StringBuilder";
  // we have created this
  case CoolChar cool -> "Welcome ... you're a CoolChar";
  // this is a total pattern
  case CharSequence charseq -> "You're a CharSequence";
  // can be used instead of the total pattern
  // default -> "I have no idea ... what are you?";
};

Okay, let’s tackle this scenario on the java.lang.constant.ConstantDesc built-in interface:

private static String whatAmI(ConstantDesc constantDesc) {
  return switch(constantDesc) { 
    case Integer i -> "You're an Integer";
    case Long l -> "You're a Long";
    case Float f -> "You're a Float";
    case Double d -> "You're a Double";
    case String s -> "You're a String";
    case ClassDesc cd -> "You're a ClassDesc";
    case DynamicConstantDesc dcd -> "You're a DCD";
    case MethodHandleDesc mhd -> "You're a MethodHandleDesc";
    case MethodTypeDesc mtd -> "You're a MethodTypeDesc";
  };
}

This code compiles! There is no default label and no total pattern, yet the switch expression covers all possible values. How so?! The selector type, ConstantDesc, is a sealed interface, so the compiler knows its complete list of permitted subtypes (Integer, Long, Float, Double, String, ClassDesc, MethodHandleDesc, MethodTypeDesc, and DynamicConstantDesc), and each of them is covered by a pattern label. Some of these permitted subtypes are sealed in their turn; for instance, ClassDesc is declared via the sealed modifier:

public sealed interface ClassDesc
  extends ConstantDesc, TypeDescriptor.OfField<ClassDesc>

Sealed interfaces/classes were introduced in JDK 17 (JEP 409) and we will cover this topic in Chapter 8. However, for now, it is enough to know that sealing allows us to have fine-grained control of inheritance so classes and interfaces define their permitted subtypes. This means that the compiler can determine all possible values in a switch expression. Let’s consider a simpler example that starts as follows:

sealed interface Player {}
final class Tennis implements Player {}
final class Football implements Player {}
final class Snooker implements Player {}

And, let’s have a switch expression covering all possible values for Player:

private static String trainPlayer(Player p) { 
  return switch (p) {
    case Tennis t -> "Training the tennis player ..." + t;
    case Football f -> "Training the football player ..." + f;
    case Snooker s -> "Training the snooker player ..." + s;
  };
}

The compiler is aware that the Player interface has only three implementations and all of them are covered via pattern labels. We can add a default label or the total pattern case Player player, but you most probably don’t want to do that. Imagine that we add a new implementation of the sealed Player interface named Golf:

final class Golf implements Player {}

If the switch expression has a default label, then Golf values will be handled by this default branch. If we have the total pattern Player player, then this pattern will handle the Golf values. On the other hand, if neither a default label nor a total pattern is present, the compiler will immediately complain that the switch expression doesn’t cover all possible values. So, we are informed right away, and once we add a case Golf g, the error disappears. This way, we can easily maintain our code and have a guarantee that our switch expressions are always up to date and cover all possible values. The compiler will never miss the chance to inform us when a new implementation of Player is available.
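
As a quick sketch, once Golf exists, the exhaustive switch (no default label, no total pattern) must be updated as follows:

private static String trainPlayer(Player p) { 
  return switch (p) {
    case Tennis t -> "Training the tennis player ..." + t;
    case Football f -> "Training the football player ..." + f;
    case Snooker s -> "Training the snooker player ..." + s;
    case Golf g -> "Training the golf player ..." + g;
  };
}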

A similar logic applies to Java enums. Consider the following enum:

private enum PlayerTypes { TENNIS, FOOTBALL, SNOOKER }

The compiler is aware of all the possible values for PlayerTypes, so the following switch expression compiles successfully:

private static String createPlayer(PlayerTypes p) { 
  return switch (p) {
    case TENNIS -> "Creating a tennis player ...";
    case FOOTBALL -> "Creating a football player ...";
    case SNOOKER -> "Creating a snooker player ...";
  };
}

Again, we can add a default label or the total pattern, case PlayerTypes pt. But, if we add a new value to the enum (for instance, GOLF), it will be silently handled by the default label or the total pattern. On the other hand, if none of these are available, the compiler will immediately complain that the GOLF value is not covered, so we can add it (case GOLF) and create a golf player whenever required.

So far, so good! Now, let’s consider the following context:

final static class PlayerClub implements Sport {};
private enum PlayerTypes implements Sport
  { TENNIS, FOOTBALL, SNOOKER }
sealed interface Sport permits PlayerTypes, PlayerClub {};

The sealed interface Sport allows only two subtypes: PlayerClub (a class) and PlayerTypes (an enum). If we write a switch that covers all possible values for Sport, then it will look as follows:

private static String createPlayerOrClub(Sport s) { 
  return switch (s) {
    case PlayerTypes p when p == PlayerTypes.TENNIS
      -> "Creating a tennis player ...";
    case PlayerTypes p when p == PlayerTypes.FOOTBALL
      -> "Creating a football player ...";
    case PlayerTypes p -> "Creating a snooker player ...";
    case PlayerClub p -> "Creating a sport club ...";
  };
}

We immediately observe that writing case PlayerTypes p when p == PlayerTypes.TENNIS is not quite neat. What we actually want is case PlayerTypes.TENNIS but, until JDK 21, this is not possible since qualified enum constants cannot be used in case labels. However, starting with JDK 21, we can use qualified names of enum constants as labels, so we can write this:

private static String createPlayerOrClub(Sport s) {
  return switch (s) {
    case PlayerTypes.TENNIS
      -> "Creating a tennis player ...";
    case PlayerTypes.FOOTBALL
      -> "Creating a football player ...";
    case PlayerTypes.SNOOKER
      -> "Creating a snooker player ...";
    case PlayerClub p 
      -> "Creating a sport club ...";
  };
}

Done! Now you know how to deal with type coverage in switch expressions.

67. Understanding the unconditional patterns and nulls in switch expressions

Let’s imagine that we use JDK 17 and we have the following code:

private static String drive(Vehicle v) {
  return switch (v) {
    case Truck truck -> "truck: " + truck;
    case Van van -> "van: " + van;
    case Vehicle vehicle -> "vehicle: " + vehicle.start();
  };
}
drive(null);

Notice the call, drive(null). This call will hit the Vehicle vehicle total pattern, so even null values match total patterns. But, this means that the binding variable vehicle will also be null, which means that this branch is prone to NullPointerException (for instance, if we call a hypothetical method, vehicle.start()):

Exception in thread "main" java.lang.NullPointerException: Cannot invoke "modern.challenge.Vehicle.start()" because "vehicle" is null

Because Vehicle vehicle matches all possible values, it is known as a total pattern but also as an unconditional pattern since it matches everything unconditionally.

But, as we know from Problem 54, starting with JDK 17+ (JEP 406), we can have a pattern label for null itself, so we can handle the previous shortcoming as follows:

return switch (v) {
    case Truck truck -> "truck: " + truck;
    case Van van -> "van: " + van;
    case null -> "so, you don't have a vehicle?";
    case Vehicle vehicle -> "vehicle: " + vehicle.start();
  };

Yes, everybody agrees that adding a case null between vehicles looks awkward. Adding it at the end is not an option either, since it will raise a dominance issue. So, starting with JDK 19+, adding this case null is no longer needed in this kind of scenario. The idea is that the unconditional pattern still matches null, but it is no longer allowed to handle it, so that branch is never executed for a null selector. Actually, when a null value occurs and there is no case null, the switch expression throws a NullPointerException without even looking at the patterns. So, in JDK 19+, this code will throw an NPE right away:

return switch (v) {
  case Truck truck -> "truck: " + truck;
  case Van van -> "van: " + van;
  // we can still use a null check
  // case null -> "so, you don't have a vehicle?";
  // total/unconditional pattern throw NPE immediately
  case Vehicle vehicle -> "vehicle: " + vehicle.start();
};

The NPE message reveals that vehicle.start() was never called. The NPE occurred much earlier:

Exception in thread "main" java.lang.NullPointerException
  at java.base/java.util.Objects.requireNonNull(Objects.java:233)

We will expand on this topic later when we talk about Java records.

Summary

That’s all folks! This was a comprehensive chapter that covered four main topics, among others: java.util.Objects, immutability, switch expressions, and pattern matching for instanceof and switch expressions.

Join our community on Discord

Join our community’s Discord space for discussions with the author and other readers:

https://discord.gg/8mgytp5DGQ


Key benefits

  • Solve Java programming challenges and get interview-ready with the power of modern Java 21
  • Test your Java skills using language features, algorithms, data structures, and design patterns
  • Explore tons of examples, all fully refreshed for this edition, meant to help you accommodate JDK 12 to JDK 21

Description

The super-fast evolution of the JDK between versions 12 and 21 has made the learning curve of modern Java steeper, and increased the time needed to learn it. This book will make your learning journey quicker and increase your willingness to try Java’s new features by explaining the correct practices and decisions related to complexity, performance, readability, and more. Java Coding Problems takes you through Java’s latest features but doesn’t always advocate the use of new solutions — instead, it focuses on revealing the trade-offs involved in deciding what the best solution is for a certain problem. There are more than two hundred brand new and carefully selected problems in this second edition, chosen to highlight and cover the core everyday challenges of a Java programmer. Apart from providing a comprehensive compendium of problem solutions based on real-world examples, this book will also give you the confidence to answer questions relating to matching particular streams and methods to various problems. By the end of this book you will have gained a strong understanding of Java’s new features and have the confidence to develop and choose the right solutions to your problems.

Who is this book for?

If you are a Java developer who wants to level up by solving real-world problems, then this book is for you. Working knowledge of the Java programming language is required to get the most out of this book.

What you will learn

  • Adopt the latest JDK 21 features in your applications
  • Explore Records, Record Patterns, Record serialization and so on
  • Work with Sealed Classes and Interfaces for increasing encapsulation
  • Learn how to exploit Context-Specific Deserialization Filters
  • Solve problems relating to collections and esoteric data structures
  • Learn advanced techniques for extending the Java functional API
  • Explore the brand-new Socket API and Simple Web Server
  • Tackle modern Garbage Collectors and Dynamic CDS Archives

Product Details

Publication date: Mar 19, 2024
Length: 798 pages
Edition: 2nd
Language: English
ISBN-13: 9781837637614


Table of Contents

15 Chapters
Text Blocks, Locales, Numbers, and Math
Problems
1. Creating a multiline SQL, JSON, and HTML string
2. Exemplifying the usage of text block delimiters
3. Working with indentation in text blocks
4. Removing incidental white spaces in text blocks
5. Using text blocks just for readability
6. Escaping quotes and line terminators in text blocks
7. Translating escape sequences programmatically
8. Formatting text blocks with variables/expressions
9. Adding comments in text blocks
10. Mixing ordinary string literals with text blocks
11. Mixing regular expression with text blocks
12. Checking if two text blocks are isomorphic
13. Concatenating strings versus StringBuilder
14. Converting int to String
15. Introducing string templates
16. Writing a custom template processor
17. Creating a Locale
18. Customizing localized date-time formats
19. Restoring Always-Strict Floating-Point semantics
20. Computing mathematical absolute value for int/long and result overflow
21. Computing the quotient of the arguments and result overflow
22. Computing the largest/smallest value that is less/greater than or equal to the algebraic quotient
23. Getting integral and fractional parts from a double
24. Testing if a double number is an integer
25. Hooking Java (un)signed integers in a nutshell
26. Returning the flooring/ceiling modulus
27. Collecting all prime factors of a given number
28. Computing the square root of a number using the Babylonian method
29. Rounding a float number to specified decimals
30. Clamping a value between min and max
31. Multiply two integers without using loops, multiplication, bitwise, division, and operators
32. Using TAU
33. Selecting a pseudo-random number generator
34. Filling a long array with pseudo-random numbers
35. Creating a stream of pseudo-random generators
36. Getting a legacy pseudo-random generator from new ones of JDK 17
37. Using pseudo-random generators in a thread-safe fashion (multithreaded environments)
Summary
Objects, Immutability, Switch Expressions, and Pattern Matching
Problems
38. Explaining and exemplifying UTF-8, UTF-16, and UTF-32
39. Checking a sub-range in the range from 0 to length
40. Returning an identity string
41. Hooking unnamed classes and instance main methods
42. Adding code snippets in Java API documentation
43. Invoking default methods from Proxy instances
44. Converting between bytes and hex-encoded strings
45. Exemplify the initialization-on-demand holder design pattern
46. Adding nested classes in anonymous classes
47. Exemplify erasure vs. overloading
48. Xlinting default constructors
49. Working with the receiver parameter
50. Implementing an immutable stack
51. Revealing a common mistake with Strings
52. Using the enhanced NullPointerException
53. Using yield in switch expressions
54. Tackling the case null clause in switch
55. Taking on the hard way to discover equals()
56. Hooking instanceof in a nutshell
57. Introducing pattern matching
58. Introducing type pattern matching for instanceof
59. Handling the scope of a binding variable in type patterns for instanceof
60. Rewriting equals() via type patterns for instanceof
61. Tackling type patterns for instanceof and generics
62. Tackling type patterns for instanceof and streams
63. Introducing type pattern matching for switch
64. Adding guarded pattern labels in switch
65. Dealing with pattern label dominance in switch
66. Dealing with completeness (type coverage) in pattern labels for switch
67. Understanding the unconditional patterns and nulls in switch expressions
Summary
Working with Date and Time
Records and Record Patterns
Arrays, Collections, and Data Structures
Java I/O: Context-Specific Deserialization Filters
Foreign (Function) Memory API
Sealed and Hidden Classes
Functional Style Programming – Extending APIs
Concurrency – Virtual Threads and Structured Concurrency
Concurrency ‒ Virtual Threads and Structured Concurrency: Diving Deeper
Garbage Collectors and Dynamic CDS Archives
Socket API and Simple Web Server
Other Books You May Enjoy
Index

