A detailed description of all the numeric data types in each of the following four languages, C#, Java, Objective-C, and Swift, could easily encompass a book of its own. Here, we will review only the most common numeric type identifiers for each language. The simplest way to evaluate these types is based on the underlying size of the data, using examples from each language as a framework for the discussion.
Tip
Compare apples to apples!
When you are developing applications for multiple mobile platforms, you should be aware that the languages you use could share a data type identifier or keyword, but under the hood, those identifiers may not be equal in value. Likewise, the same data type in one language may have a different identifier in another. For example, examine the case of the 16-bit unsigned integer, sometimes referred to as an unsigned short
. Well, it's called an unsigned short
in Objective-C. In C#, we talk about a ushort
, while Swift calls it a UInt16
Java's only provision for the 16-bit unsigned integer, on the other hand, is char
, although this type would typically not be used for numeric values. Each of these data types represents a 16-bit unsigned integer; they just use different names. This may seem like a small point, but if you are developing apps for multiple devices using each platform's native language, you will need to be aware of these differences for the sake of consistency. Otherwise, you risk introducing platform-specific bugs that are extremely difficult to detect and diagnose.
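To make the Java case concrete, here is a minimal sketch (assuming a standard Java runtime) showing that char really does behave as an unsigned 16-bit integer, wrapping from 65535 back to 0:

```java
public class CharAsUnsigned {
    public static void main(String[] args) {
        // char is Java's only unsigned 16-bit type
        char max = 65535;
        System.out.println((int) max); // 65535

        max++; // unsigned overflow wraps around to 0
        System.out.println((int) max); // 0
    }
}
```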
Integer data types are defined as representing whole numbers and can be either signed (negative, zero, or positive values) or unsigned (zero or positive values). Each language uses its own identifiers and keywords for integer types, so it is easiest to think in terms of memory length. For our purpose, we will only discuss the integer types representing 8-, 16-, 32-, and 64-bit memory objects.
8-bit data types, or bytes as they are more commonly referred to, are the smallest data types that we will examine. If you have brushed up on your binary math, you will know that an 8-bit memory block can represent 2^8, or 256, values. Signed bytes can range in value from -128 to 127, or -(2^7) to (2^7) - 1. Unsigned bytes can range in value from 0 to 255, or 0 to (2^8) - 1.
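Using Java as a concrete example (the other three languages behave analogously for their 8-bit types), the built-in Byte constants confirm these bounds, and incrementing past the maximum silently wraps around:

```java
public class ByteRange {
    public static void main(String[] args) {
        // Byte.MIN_VALUE and Byte.MAX_VALUE are -(2^7) and (2^7) - 1
        System.out.println(Byte.MIN_VALUE); // -128
        System.out.println(Byte.MAX_VALUE); // 127

        byte b = Byte.MAX_VALUE;
        b++; // overflow wraps silently to the minimum value
        System.out.println(b); // -128
    }
}
```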
A 16-bit data type is often referred to as a short, although that is not always the case. These types can represent 2^16 values. Signed shorts can range in value from -(2^15) to (2^15) - 1. Unsigned shorts can range in value from 0 to (2^16) - 1.
A 32-bit data type is most commonly identified as an integer, although it is sometimes identified as a long. Int types can represent 2^32 values. Signed integers can range in value from -(2^31) to (2^31) - 1. Unsigned integers can range in value from 0 to (2^32) - 1.
Finally, a 64-bit data type is most commonly identified as a long, although Objective-C identifies it as a long long. Long types can represent 2^64 values. Signed long types can range in value from -(2^63) to (2^63) - 1. Unsigned long types can range in value from 0 to (2^64) - 1.
Note
Note that these values happen to be consistent across the four languages we will work with, but some languages will introduce slight variations. It is always a good idea to become familiar with the details of a language's numeric identifiers. This is especially true if you expect to be working with cases that involve the identifier's extreme values.
C#
C# refers to integer types as integral types. The language provides two mechanisms for creating 8-bit types, byte
and sbyte
. Both containers hold up to 256 values, and the unsigned byte ranges from 0 to 255. The signed byte provides support for negative values and, therefore, ranges from -128 to 127:
// C#
sbyte minSbyte = -128;
byte maxByte = 255;
Console.WriteLine("minSbyte: {0}", minSbyte);
Console.WriteLine("maxByte: {0}", maxByte);
/*
Output
minSbyte: -128
maxByte: 255
*/
Interestingly, C# reverses its pattern for longer bit identifiers. Instead of prefixing signed identifiers with s
, as in the case of sbyte
, it prefixes unsigned identifiers with u
. So, for 16-, 32-, and 64-bit identifiers, we have short
, ushort
; int
, uint
; long
, and ulong
respectively:
short minShort = -32768;
ushort maxUShort = 65535;
Console.WriteLine("minShort: {0}", minShort);
Console.WriteLine("maxUShort: {0}", maxUShort);
int minInt = -2147483648;
uint maxUint = 4294967295;
Console.WriteLine("minInt: {0}", minInt);
Console.WriteLine("maxUint: {0}", maxUint);
long minLong = -9223372036854775808;
ulong maxUlong = 18446744073709551615;
Console.WriteLine("minLong: {0}", minLong);
Console.WriteLine("maxUlong: {0}", maxUlong);
/*
Output
minShort: -32768
maxUShort: 65535
minInt: -2147483648
maxUint: 4294967295
minLong: -9223372036854775808
maxUlong: 18446744073709551615
*/
Java
Java includes integer types as a part of its primitive data types. The Java language only provides one construct for 8-bit storage, also identified as a byte
. It is a signed data type, so it will represent values from -128 to 127. Java also provides a wrapper class called Byte
, which wraps the primitive value and provides additional constructor support for parsable strings, or text, which can be converted to a numeric value such as the text 42. This pattern is repeated in the 16-, 32-, and 64-bit data types:
//Java
byte myByte = -128;
byte bigByte = 127;
Byte minByte = new Byte(myByte);
Byte maxByte = new Byte("127");
System.out.println(minByte);
System.out.println(bigByte);
System.out.println(maxByte);
/*
Output
-128
127
127
*/
Java shares identifiers with C# for all of its integer data types, which means it also provides the byte
, short
, int
, and long
identifiers for 8-, 16-, 32-, and 64-bit types. One exception to the pattern in Java is the char
identifier, which is provided for unsigned 16-bit data types. It should be noted, however, that the char
data type is typically only used for ASCII character assignment and not for actual integer values:
short myShort = -32768;
short bigShort = 32767;
//Short class
Short minShort = new Short(myShort);
Short maxShort = new Short("32767");
System.out.println(minShort);
System.out.println(bigShort);
System.out.println(maxShort);
int myInt = -2147483648;
int bigInt = 2147483647;
//Integer class
Integer minInt = new Integer(myInt);
Integer maxInt = new Integer("2147483647");
System.out.println(minInt);
System.out.println(bigInt);
System.out.println(maxInt);
long myLong = -9223372036854775808L;
long bigLong = 9223372036854775807L;
//Long class
Long minLong = new Long(myLong);
Long maxLong = new Long("9223372036854775807");
System.out.println(minLong);
System.out.println(bigLong);
System.out.println(maxLong);
/*
Output
-32768
32767
32767
-2147483648
2147483647
2147483647
-9223372036854775808
9223372036854775807
9223372036854775807
*/
In the preceding code, take note of the int
type and Integer
class. Unlike the other primitive wrapper classes, Integer
does not share the same name as the identifier it is supporting.
Also, note the long
type and its assigned values. In each case, the values have the suffix L
. This is a requirement for long
literals in Java because the compiler interprets all numeral literals as 32-bit integers. If you want to explicitly specify that your literal is larger than 32-bit, you must append the suffix L
. Otherwise, the compiler will reject the literal with an out-of-range error. This is not a requirement, however, when passing a string value into the Long
class constructor:
Long maxLong = new Long("9223372036854775807");
Objective-C
For 8-bit data, Objective-C provides the char
data type in both signed and unsigned formats. As with the other languages, the signed data type ranges from -128 to 127, while the unsigned data type ranges from 0 to 255. Developers also have the option to use Objective-C's fixed-width counterparts named int8_t
and uint8_t
. This pattern is repeated in the 16-, 32-, and 64-bit data types. Finally, Objective-C also provides an object-oriented wrapper class for each of the integer types in the form of the NSNumber
class:
Note
The difference between the char
or the other integer data type identifiers and their fixed-width counterparts is an important distinction. With the exception of char, which is always precisely 1 byte in length, every other integer data type in Objective-C will vary in size, depending on the implementation and underlying architecture. This is because Objective-C is based on C, which was designed to work at peak efficiency with various types of underlying architectures. Although it is possible to determine the exact length of an integer type at runtime, at compile time, you can only be certain that short <= int <= long <= long long
.
This is where fixed-width integers come in handy. If more rigid control over the number of bytes is required, the (u)int<n>_t
data types allow you to denote integers that are precisely 8-, 16-, 32-, or 64-bit in length.
//Objective-C
char number = -127;
unsigned char uNumber = 255;
NSLog(@"Signed char number: %hhd", number);
NSLog(@"Unsigned char uNumber: %hhu", uNumber);
//fixed width
int8_t fixedNumber8 = -127;
uint8_t fixedUNumber8 = 255;
NSLog(@"fixedNumber8: %hhd", fixedNumber8);
NSLog(@"fixedUNumber8: %hhu", fixedUNumber8);
NSNumber *charNumber = [NSNumber numberWithChar:number];
NSLog(@"Char charNumber: %@", [charNumber stringValue]);
/*
Output
Signed char number: -127
Unsigned char uNumber: 255
fixedNumber8: -127
fixedUNumber8: 255
Char charNumber: -127
*/
In the preceding example, you can see that, when using the char
data types in code, you must specify the unsigned
identifier, such as unsigned char
. However, signed
is the default and may be omitted, which means the char
type is equivalent to signed char
. This pattern applies to each of the integer data types in Objective-C.
Larger integer types in Objective-C include short
for 16-bit, int
for 32-bit, and long long
for 64-bit. Each of these has a fixed-width counterpart following the (u)int<n>_t
pattern. Supporting methods are also available for each type within the NSNumber
class:
//Larger Objective-C types
short aShort = -32768;
unsigned short anUnsignedShort = 65535;
NSLog(@"Signed short aShort: %hd", aShort);
NSLog(@"Unsigned short anUnsignedShort: %hu", anUnsignedShort);
int16_t fixedNumber16 = -32768;
uint16_t fixedUNumber16 = 65535;
NSLog(@"fixedNumber16: %hd", fixedNumber16);
NSLog(@"fixedUNumber16: %hu", fixedUNumber16);
NSNumber *shortNumber = [NSNumber numberWithShort:aShort];
NSLog(@"Short shortNumber: %@", [shortNumber stringValue]);
int anInt = -2147483648;
unsigned int anUnsignedInt = 4294967295;
NSLog(@"Signed Int anInt: %d", anInt);
NSLog(@"Unsigned Int anUnsignedInt: %u", anUnsignedInt);
int32_t fixedNumber32 = -2147483648;
uint32_t fixedUNumber32 = 4294967295;
NSLog(@"fixedNumber32: %d", fixedNumber32);
NSLog(@"fixedUNumber32: %u", fixedUNumber32);
NSNumber *intNumber = [NSNumber numberWithInt:anInt];
NSLog(@"Int intNumber: %@", [intNumber stringValue]);
long long aLongLong = -9223372036854775808;
unsigned long long anUnsignedLongLong = 18446744073709551615;
NSLog(@"Signed long long aLongLong: %lld", aLongLong);
NSLog(@"Unsigned long long anUnsignedLongLong: %llu", anUnsignedLongLong);
int64_t fixedNumber64 = -9223372036854775808;
uint64_t fixedUNumber64 = 18446744073709551615;
NSLog(@"fixedNumber64: %lld", fixedNumber64);
NSLog(@"fixedUNumber64: %llu", fixedUNumber64);
NSNumber *longlongNumber = [NSNumber numberWithLongLong:aLongLong];
NSLog(@"Long long longlongNumber: %@", [longlongNumber stringValue]);
/*
Output
Signed short aShort: -32768
Unsigned short anUnsignedShort: 65535
fixedNumber16: -32768
fixedUNumber16: 65535
Short shortNumber: -32768
Signed Int anInt: -2147483648
Unsigned Int anUnsignedInt: 4294967295
fixedNumber32: -2147483648
fixedUNumber32: 4294967295
Int intNumber: -2147483648
Signed long long aLongLong: -9223372036854775808
Unsigned long long anUnsignedLongLong: 18446744073709551615
fixedNumber64: -9223372036854775808
fixedUNumber64: 18446744073709551615
Long long longlongNumber: -9223372036854775808
*/
Swift
The Swift language is similar to the others in that it provides separate identifiers for signed and unsigned integers, for example, Int8
and UInt8
. This pattern applies to each of the integer data types in Swift, making it possibly the simplest language in terms of remembering which identifier applies to which type:
//Swift
var int8 : Int8 = -127
var uint8 : UInt8 = 255
print("int8: \(int8)")
print("uint8: \(uint8)")
/*
Output
int8: -127
uint8: 255
*/
In the preceding example, I have explicitly declared the data types using the : Int8
and : UInt8
identifiers. In Swift, it is also acceptable to leave these identifiers out and allow the compiler to infer the type at compile time:
//Larger Swift types
var int16 : Int16 = -32768
var uint16 : UInt16 = 65535
print("int16: \(int16)")
print("uint16: \(uint16)")
var int32 : Int32 = -2147483648
var uint32 : UInt32 = 4294967295
print("int32: \(int32)")
print("uint32: \(uint32)")
var int64 : Int64 = -9223372036854775808
var uint64 : UInt64 = 18446744073709551615
print("int64: \(int64)")
print("uint64: \(uint64)")
/*
Output
int16: -32768
uint16: 65535
int32: -2147483648
uint32: 4294967295
int64: -9223372036854775808
uint64: 18446744073709551615
*/
Why do I need to know this?
You may ask, Why do I need to know the ins and outs of these data types? Can't I just declare an int
object or some similar identifier and move on to writing the interesting code? Modern computers and even mobile devices provide nearly unlimited resources, so it's not a big deal, right?
Well, not exactly. It is true that, in many circumstances in your daily programming experience, any integer type will do. For example, looping through a list of license plates issued at Department of Motor Vehicles (DMV) offices across the state of West Virginia on any given day may yield anything from a few dozen to perhaps a few hundred results. You could control the for
loop's iterations using a short
or you could use long long
. Either way, the loop will have very little impact on the performance of your system.
However, what if you're dealing with a set of data where each discrete result in that set can fit in a 16-bit type, but you choose a 32-bit identifier just because that's what you're used to? You've just doubled the amount of memory required to manage that collection. This decision wouldn't matter with 100 or maybe even 100,000 results. However, when you start working with very large sets of data, with hundreds of thousands or even millions of discrete results, such design decisions can have a huge impact on system performance.
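As a rough sketch of this point in Java (counting only raw element storage and ignoring JVM object headers, which is an assumption of a typical layout), one million values stored as short use half the memory of the same values stored as int:

```java
public class MemoryFootprint {
    public static void main(String[] args) {
        final int count = 1_000_000;

        // Raw element storage only; JVM object and array headers are ignored here
        long shortBytes = (long) count * Short.BYTES;   // 2 bytes per element
        long intBytes = (long) count * Integer.BYTES;   // 4 bytes per element

        System.out.println("short[] data: " + shortBytes + " bytes");
        System.out.println("int[] data: " + intBytes + " bytes");
    }
}
```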
Single precision floating point numbers, or floats as they are more commonly referred to, are 32-bit floating point containers that allow storing values with much greater precision than integer types, typically to six or seven significant digits. Many languages use the float
keyword or identifier for single-precision float values, and that is the case for each of the four languages we are discussing.
You should be aware that floating point values are subject to rounding errors because they cannot represent most base-10 fractional values exactly. The arithmetic of floating point types is a fairly complex topic, the details of which will not be pertinent to the majority of developers on any given day. However, it is still a good practice to familiarize yourself with the particulars of the underlying science as well as the implementation in each language.
Note
As I am by no means an expert on the subject, this discussion will only scratch the surface of the science behind these types, and we will not even begin to cover the arithmetic. There are others who truly are experts in this area, however, and I highly recommend you review some of their work listed in the Additional resources section at the end of this chapter.
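A minimal Java sketch of the rounding problem described above: the base-10 values 0.1 and 0.2 have no exact binary representation, so their sum is not exactly 0.3:

```java
public class RoundingError {
    public static void main(String[] args) {
        // Neither 0.1 nor 0.2 is exactly representable in binary floating point
        double sum = 0.1 + 0.2;
        System.out.println(sum);        // 0.30000000000000004
        System.out.println(sum == 0.3); // false
    }
}
```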
C#
In C#, the float
keyword identifies 32-bit floating point values. The C# float
data type has an approximate range of -3.4 × 10^38 to +3.4 × 10^38 and a precision of six significant digits:
//C#
float piFloat = 3.14159265358979323846264338327f;
Console.WriteLine("piFloat: {0}", piFloat);
/*
Output
piFloat: 3.141593
*/
When you examine the preceding code, you will notice that the float
value assignment has the f
suffix. This is because, like other C-based languages, C# treats real numeric literals on the right-hand side of assignments as a double (discussed later) by default. If you leave the f
or F
suffix off the assignment, you will receive a compilation error, because you are trying to assign a double-precision value to a single-precision type.
Also, note the rounding error in the last digit. We populated the piFloat
object with pi presented out to 30 significant digits. However, float
can only retain six or seven significant digits, so the software rounded off everything after that. Pi truncated to seven significant digits is 3.141592, but our float
value is rounded to 3.141593 due to this limitation.
Java
As with C#, Java uses the float identifier for floating point values. In Java, a float
has an approximate range of -3.4 × 10^38 to +3.4 × 10^38 and a precision of six or seven significant digits:
//Java
float piFloat = 3.141592653589793238462643383279f;
System.out.println(piFloat);
/*
Output
3.1415927
*/
When you examine the preceding code, you will notice that the float value assignment has the f
suffix. This is because, like other C-based languages, Java treats real numeric literals on the right-hand side of assignments as a double by default. If you leave the f
or F
suffix off the assignment, you will receive a compilation error because you are trying to assign a double-precision value to a single-precision type.
Objective-C
Objective-C uses the float
identifier for floating point values. In Objective-C, a float
has an approximate range of -3.4 × 10^38 to +3.4 × 10^38 and a precision of six significant digits:
//Objective-C
float piFloat = 3.14159265358979323846264338327f;
NSLog(@"piFloat: %f", piFloat);
NSNumber *floatNumber = [NSNumber numberWithFloat:piFloat];
NSLog(@"floatNumber: %@", [floatNumber stringValue]);
/*
Output
piFloat: 3.141593
floatNumber: 3.141593
*/
When you examine the preceding code, you will notice that the float value assignment has the f
suffix. This is because, like other C-based languages, Objective-C treats real numeric literals on the right-hand side of assignments as a double by default. If you leave the f
or F
suffix off the assignment, you will receive a compilation error because you are trying to assign a double-precision value to a single-precision type.
Also, note the rounding error in the last digit. We populated the piFloat
object with pi presented out to 30 significant digits, but float can only retain six or seven significant digits, so the software rounded off everything after that. Pi truncated to seven significant digits is 3.141592, but our float value is rounded to 3.141593 due to this limitation.
Swift
Swift uses the Float
identifier for 32-bit floating point values. In Swift, a Float
has an approximate range of -3.4 × 10^38 to +3.4 × 10^38 and a precision of six significant digits:
//Swift
var floatValue : Float = 3.141592653589793238462643383279
print("floatValue: \(floatValue)")
/*
Output
floatValue: 3.141593
*/
When you examine the preceding code, you will notice that the assignment carries no suffix. Unlike the other three languages, Swift does not use an f
suffix for single-precision literals. Floating point literals in Swift are inferred to be Double
by default, and it is the explicit : Float
type annotation that instructs the compiler to store the value as a single-precision type.
Also, note the rounding error in the last digit. We populated the floatValue
object with pi presented out to 30 significant digits, but Float can only retain six or seven significant digits, so the software rounded off everything after that. Pi truncated to seven significant digits is 3.141592, but our Float value is rounded to 3.141593 due to this limitation.
Double precision floating point numbers, or doubles as they are more commonly referred to, are 64-bit floating point values that allow storing values with much greater precision than the integer types, typically to 15 significant digits. Many languages use the double identifier for double-precision float values, and that is also the case for each of the four languages we are discussing.
Note
In most circumstances, it will not matter whether you choose float
over double
unless memory space is a concern, in which case you will want to choose float
whenever possible. Many argue that float
is more performant than double under most conditions, and generally speaking, this is the case. However, there are other conditions where double
will be more performant than float
. The reality is that the efficiency of each type is going to vary from case to case, based on criteria that are too numerous to detail in the context of this discussion. Therefore, if your particular application requires truly peak efficiency, you should research the requirements and environmental factors carefully and decide what is best for your situation. Otherwise, just use whichever container will get the job done and move on.
C#
In C#, the double
keyword identifies 64-bit floating point values. The C# double
has an approximate range of ±5.0 × 10^-324 to ±1.7 × 10^308 and a precision of 15 or 16 significant digits:
//C#
double piDouble = 3.14159265358979323846264338327;
double wholeDouble = 3d;
Console.WriteLine("piDouble: {0}", piDouble);
Console.WriteLine("wholeDouble: {0}", wholeDouble);
/*
Output
piDouble: 3.14159265358979
wholeDouble: 3
*/
When you examine the preceding code, you will notice that the wholeDouble
value assignment has the d
suffix. This is because C# treats whole-number literals on the right-hand side of assignments as integers by default. In this particular case, leaving the d
or D
suffix off would still compile, because C# implicitly widens an int value to a double; the suffix simply makes it explicit that the literal itself is a double-precision value.
Also, note the rounding error in the last digit. We populated the piDouble
object using pi out to 30 significant digits, but C#'s default output formatting for double displays only 15 significant digits, so everything after that was rounded off. Pi rounded to 15 significant digits is 3.14159265358979, which is what our double value now displays.
Java
In Java, the double
keyword identifies 64-bit floating-point values. The Java double
has an approximate range of ±4.9 × 10^-324 to ±1.8 × 10^308 and a precision of 15 or 16 significant digits:
double piDouble = 3.141592653589793238462643383279;
System.out.println(piDouble);
/*
Output
3.141592653589793
*/
When you examine the preceding code, note the rounding error in the last digit. We populated the piDouble
object using pi out to 30 significant digits, but double can only retain 15 or 16 significant digits, so the software rounded off everything after that. Pi truncated to 16 significant digits is 3.141592653589793, which is what our double value now holds.
Objective-C
Objective-C also uses the double
identifier for 64-bit floating point values. The Objective-C double has an approximate range of 2.3 × 10^-308 to 1.7 × 10^308 and a precision of 15 significant digits. Objective-C takes accuracy a step further by providing an even more precise version of double called the long double. The long double identifier is used for an 80-bit storage container with a range of 3.4 × 10^-4932 to 1.1 × 10^4932 and a precision of 19 significant digits:
//Objective-C
double piDouble = 3.14159265358979323846264338327;
NSLog(@"piDouble: %.15f", piDouble);
NSNumber *doubleNumber = [NSNumber numberWithDouble:piDouble];
NSLog(@"doubleNumber: %@", [doubleNumber stringValue]);
/*
Output
piDouble: 3.141592653589793
doubleNumber: 3.141592653589793
*/
In our preceding example, note the rounding error in the last digit. We populated the piDouble
object using pi out to 30 significant digits, but double can only retain 15 or 16 significant digits, so the software rounded off everything after that. Pi truncated to 16 significant digits is 3.141592653589793, which is what our double value now holds.
Swift
Swift uses the Double
identifier for 64-bit floating-point values. In Swift, a Double
has an approximate range of 2.3 × 10^-308 to 1.7 × 10^308 and a precision of at least 15 significant digits. Note that, according to Apple's documentation for Swift, when either Float
or Double
types will suffice, Double is recommended:
//Swift
var doubleValue : Double = 3.141592653589793238462643383279
print("doubleValue: \(doubleValue)")
/*
Output
doubleValue: 3.14159265358979
*/
In our preceding example, note the rounding error in the last digit. We populated the doubleValue
object using pi out to 30 significant digits, but Double can only retain about 15 significant digits, so the software rounded off everything after that, leaving our Double
value displayed as 3.14159265358979.
Because floating point types are based on binary arithmetic, floats
and doubles
cannot accurately represent the base-10 fractions we use for currency. Representing currency as a float
or double
may seem like a good idea at first, as the software will round off the tiny errors in your arithmetic. However, as you begin to perform more and more complex arithmetic operations on these inexact results, your precision errors will begin to add up and result in serious inaccuracies and bugs that can be very difficult to track down. This makes the float and double data types insufficient for working with currency, where perfect accuracy in base-10 values is essential. Luckily, each of the languages we are discussing provides a mechanism for working with currency and other arithmetic problems that require high precision in base-10 values and calculations.
C#
C# uses the decimal
keyword for precise floating-point values. In C#, decimal
has a range of ±1.0 × 10^-28 to ±7.9 × 10^28 with a precision of 28 or 29 significant digits:
//C#
decimal piDecimal = 3.141592653589793238462643383279m;
Console.WriteLine("piDecimal: {0}", piDecimal);
/*
Output
piDecimal: 3.1415926535897932384626433833
*/
In the preceding example, note the m
suffix, which C# requires for decimal
literals. We populated the piDecimal
object with pi out to 30 significant digits, but the framework rounded this off to 29 significant digits.
Java
Java provides an object-oriented solution to the currency problem in the form of the BigDecimal
class:
BigDecimal piDecimal = new BigDecimal("3.141592653589793238462643383279");
System.out.println(piDecimal);
/*
Output
3.141592653589793238462643383279
*/
In the preceding example, we are initializing the BigDecimal
class using a constructor that takes a string representation of our decimal value as a parameter. When the program runs, the output proves that the BigDecimal
class did not lose any of our intended precision, returning pi to 30 significant digits.
Objective-C
Objective-C also provides an object-oriented solution to the currency problem in the form of the NSDecimalNumber
class:
//Objective-C
NSDecimalNumber *piDecimalNumber = [[NSDecimalNumber alloc] initWithDouble:3.14159265358979323846264338327];
NSLog(@"piDecimalNumber: %@", [piDecimalNumber stringValue]);
/*
Output
piDecimalNumber: 3.141592653589793792
*/
Swift
Swift also provides an object-oriented solution to the currency problem, and it is the same class used in Objective-C, the NSDecimalNumber
class. The Swift version is initialized slightly differently, but it retains the same functionality as its Objective-C counterpart:
var decimalValue = NSDecimalNumber.init(string:"3.141592653589793238462643383279")
print("decimalValue \(decimalValue)")
/*
Output
decimalValue 3.141592653589793238462643383279
*/
Note that the Swift example, which initializes NSDecimalNumber
from a string, retains precision out to 30 significant digits. The Objective-C example, by contrast, initialized its NSDecimalNumber
from a double literal, so it inherited the double's rounding error before the value ever reached the class; initializing with a string there as well would preserve full precision. Either way, the NSDecimalNumber
class is the superior choice for working with currency and other base-10 values.
Tip
In the spirit of full disclosure, there is a simple and arguably more elegant alternative to using these custom types. You could just use int
or long
for your currency calculations and count in cents rather than dollars:
//C#
long total = 316; //$3.16
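A brief Java sketch of this cents-based approach (the 7% tax rate here is a hypothetical value, purely for illustration): all arithmetic stays in whole cents, and dollars appear only when formatting the result:

```java
public class CentsMath {
    public static void main(String[] args) {
        long priceCents = 316;                   // $3.16
        long taxCents = priceCents * 7 / 100;    // hypothetical 7% tax, truncates to 22 cents
        long totalCents = priceCents + taxCents; // 338 cents

        // Convert to dollars only at the display boundary
        System.out.printf("$%d.%02d%n", totalCents / 100, totalCents % 100);
    }
}
```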
In the realm of computer science, type conversion or typecasting means converting an instance of one object or data type into another. For example, let's say you make a call to a method that returns an integer value, but you need to use that value in another method that requires a long value as its input parameter. Since an integer value by definition exists within the realm of allowable long
values, the int
value can be redefined as a long.
Such conversions can be done through either implicit conversion, sometimes called coercion, or explicit conversion, otherwise known as casting. To fully appreciate casting, we also need to understand the difference between static and dynamic languages.
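A small Java sketch of the two directions just described: widening from int to long happens implicitly (coercion), while narrowing back requires an explicit cast:

```java
public class Conversions {
    public static void main(String[] args) {
        int intValue = 42;

        long widened = intValue;      // implicit conversion (coercion): always safe
        int narrowed = (int) widened; // explicit conversion (cast): required by the compiler

        System.out.println(widened);  // 42
        System.out.println(narrowed); // 42
    }
}
```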
Statically versus dynamically typed languages
A statically typed language will perform its type checking at compile time. This means that, when you try to build your solution, the compiler will verify and enforce each of the constraints that apply to the types in your application. If they are not enforced, you will receive an error and the application will not build. C#, Java, and Swift are all statically typed languages.
Dynamically typed languages, on the other hand, do most or all of their type checking at runtime. This means that the application might build just fine, but could experience a problem while it is actually running if the developer wasn't careful in how the code was written. Objective-C is considered a dynamically typed language, although in practice it uses a mixture of statically typed and dynamically typed objects. The plain C data types used for numeric values discussed earlier in this chapter are all examples of statically typed objects, while the Objective-C classes NSNumber
and NSDecimalNumber
are both examples of dynamically typed objects. Consider the following code example in Objective-C:
double myDouble = @"chicken";
NSNumber *myNumber = @"salad";
The compiler will throw an error on the first line, stating Initializing 'double' with an expression of incompatible type 'NSString *'
. That's because double
is a plain C object, and it is statically typed. The compiler knows what to do with this statically typed object before we even get to the build, so your build will fail.
However, the compiler will only throw a warning on the second line, stating Incompatible pointer types initializing 'NSNumber *' with an expression of type 'NSString *'
. That's because NSNumber
is an Objective-C class, and it is dynamically typed. The compiler is smart enough to catch your mistake, but it will allow the build to succeed (unless you have instructed the compiler to treat warnings as errors in your build settings).
Tip
Although the forthcoming crash at runtime is obvious in the previous example, there are cases where your app will function perfectly fine despite the warnings. However, no matter what type of language you are working with, it is always a good idea to consistently clean up your code warnings before moving on to new code. This helps keep your code clean and avoids runtime errors that can be difficult to diagnose.
On those rare occasions where it is not prudent to address the warning immediately, you should clearly document your code and explain the source of the warning so that other developers will understand your reasoning. As a last resort, you can take advantage of macros or pre-processor (pre-compiler) directives that can suppress warnings on a line-by-line basis.
Implicit and explicit casting
Implicit casting does not require any special syntax in your source code. This makes implicit casting somewhat convenient. Consider the following code example in C#:
int a = 10;
double b = a++;
In this scenario, since every value of a
, an int
, can also be represented as a double
, the implicit cast to the double
type is perfectly acceptable and will always succeed. However, because implicit conversions do not state the programmer's intent, the compiler cannot always determine which constraints apply to the conversion and therefore will not be able to check those constraints until runtime. This also makes the implicit cast somewhat dangerous. Consider the following code example, also in C#:
double x = "54";
This is an implicit conversion because you have not told the compiler how to treat the string value. In this case, the conversion will fail when you try to build the application, and the compiler will throw an error for this line, stating Cannot implicitly convert type 'string' to 'double'
. Now, consider the explicitly cast version of this example:
double x = double.Parse("42");
Console.WriteLine("40 + 2 = {0}", x);
/*
Output
40 + 2 = 42
*/
This conversion is explicit and therefore type-safe, assuming that the string value is parsable.
When casting between two types, an important consideration is whether the result of the change is within the range of the target data type. If your source data type supports more bytes than your target data type, the cast is considered to be a narrowing conversion.
Narrowing conversions are either casts that cannot be proven to always succeed or casts that are known to possibly lose information. For example, casting from a float to an integer will result in loss of information (precision, in this case), as the fractional portion of the value will be truncated. In most statically typed languages, narrowing casts cannot be performed implicitly. Here is an example borrowed from the C# single-precision and double-precision examples earlier in this chapter:
//C#
piFloat = piDouble;
In this example, the compiler will throw an error, stating Cannot implicitly convert type 'double' to 'float'. An explicit conversion exists (are you missing a cast?)
. The compiler sees this as a narrowing conversion and treats the loss of precision as an error. The error message itself is helpful and suggests an explicit cast as a potential solution for our problem:
//C#
piFloat = (float)piDouble;
We have now explicitly cast the double value piDouble
to a float
, and the compiler no longer concerns itself with loss of precision.
If your source data type supports fewer bytes than your target data type, the cast is considered to be a widening conversion. Widening conversions will preserve the source object's value, but may change its representation in some way. Most statically typed languages will permit implicit widening casts. Let's borrow again from our previous C# example:
//C#
piDouble = piFloat;
In this example, the compiler is completely satisfied with the implicit conversion and the app will build. Let's expand the example further:
//C#
piDouble = (double)piFloat;
This explicit cast improves readability, but does not change the nature of the statement in any way. The compiler also finds this format to be completely acceptable, even if it is somewhat more verbose. Beyond improved readability, explicit casting when widening adds nothing to your application. Therefore, whether to use explicit casting when widening is simply a matter of personal preference.