iOS Application Development with OpenCV 3

Chapter 1. Setting Up Software and Hardware

Every year since 2007, the iPhone has spawned a new generation of hardware, and eager buyers have queued up outside their local Apple Store to get it. The iPhone and iPad have become centerpieces of consumer culture, promising instant gratification, timely information, and easy achievements. Apps are designed for retirees, working people, job hunters, vacationers, students, gamers, hospital patients, babies, and cats. Like a Swiss Army knife, an iPhone is a premium product that supposedly prepares the user for all kinds of contingencies. Moreover, the iPhone is a fashion item and sometimes inspires idiosyncratic behavior. For example, it enables the user to share large numbers of selfies and pictures of lunch.

As software developers and scholars of computer vision, we need to think a bit harder about the iPhone, the iPad, and their cameras. We need to make preparations before we can properly use these versatile tools in our work. We also need to demystify Apple's proprietary systems and appreciate the role of open source, cross-platform libraries such as OpenCV. Apple provides a fine mobile platform in iOS, but computer vision is not a fundamental part of this platform. OpenCV uses this platform efficiently but adds a layer of abstraction, providing high-level functionality for computer vision.

This chapter is the primer for the rest of the book. We assume that you already have a computer running Mac OS 10.10 (or a later version) as well as an iPhone, iPad, or iPod Touch running iOS 9 (or a later version). We will take the following steps to prepare a workspace and learn good habits for our future projects:

  1. Set up Apple's standard tools for iOS developers, which include Xcode, iOS SDK, and Xcode Command Line Tools.
  2. Set up OpenCV 3.1 (or a later version) for iOS. We have the option to use a standard, prebuilt version or a custom-built version with extra functionality.
  3. Develop a minimal application that uses the iOS SDK and OpenCV to display an image with a special effect.
  4. Join Apple's iOS Developer Program and obtain permission to distribute an application to other users for testing.
  5. Find documentation and support for the iOS SDK and OpenCV.
  6. Learn about the kinds of lights, tripods, and lens attachments that may enable us to capture specialized images with an iPhone or iPad.

By the end of this chapter, you will possess the necessary software and skills to build a basic OpenCV project for iOS. You will also have a new appreciation of your iPhone or iPad's camera as a tool for scientific photography and computer vision.

Setting up Apple's developer tools

The Xcode integrated development environment (IDE) is Apple's core product for developers. It includes GUI tools for the design, configuration, development, and testing of apps. As an add-on, the Xcode Command Line Tools enable full control of Xcode projects from the command prompt in Terminal. For iOS developers, the iOS SDK is also essential. It includes all the standard iOS libraries as well as tools for simulation and deployment.

Xcode is available for free from the Mac App Store and comes with the current version of the iOS SDK. Go to https://itunes.apple.com/us/app/xcode/id497799835, open Xcode's App Store link, and start the installer. The installer may run for an hour or longer, including the time to download Xcode and the iOS SDK. Give your agreement to any prompts, including the prompt to reboot.

Once Xcode is installed, open Terminal and run the following command to install the Xcode Command Line Tools:

$ xcode-select --install

Again, give your agreement to any prompts. Once the Xcode Command Line Tools are installed, run the following command to ensure that you have reviewed and accepted the required license agreements:

$ sudo xcodebuild -license

The text of the agreements will appear in Terminal. Press spacebar repeatedly until you reach the end of the text, then type agree, and press Enter. Now, we have the basic tools to develop iOS projects in Xcode and Terminal.
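
If you want to confirm the setup, the following optional commands print the active developer directory and the installed Xcode version, respectively:

$ xcode-select -p
$ xcodebuild -version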

Setting up the OpenCV framework

OpenCV for iOS is distributed as a framework file, which is a bundle containing the library's header files as well as binary files for static linkage. The binaries support all iOS device architectures (ARMv7, ARMv7s, and ARM64) and all iOS simulator architectures (i386 and x86_64). Thus, we can use the same framework file for all configurations of an iOS application project.
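
As an optional check, the lipo tool can list the architectures that a framework's binary contains. The following command assumes the standard framework layout, with the binary named opencv2 inside the bundle:

$ lipo -info opencv2.framework/opencv2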

OpenCV 3 is designed to be modular. Its build process is highly configurable to allow modules to be added, reimplemented, or removed without breaking other modules. Each module consists of one public header file along with various private header files and implementation files. Some modules are considered standard components of an OpenCV build, and these standard modules are maintained and tested by the library's core development team. Other modules are considered extras, and these extra or "contributed" modules are maintained and tested by third-party contributors. Collectively, the extra modules are called opencv_contrib.

If we just want to use the standard modules, we can obtain the official, prebuilt distribution of OpenCV for iOS. This prebuilt distribution consists of a framework file, opencv2.framework. If we want to use extra modules, we must build opencv2.framework for ourselves. Next, let's examine the steps to get or build the framework.

Note

For this book's projects, the extra modules are not required, but they are recommended because we will use them to implement some optional features.

Getting the prebuilt framework with standard modules

Go to http://opencv.org/downloads.html and click on the download link for the latest version of OpenCV for iOS. Specifically, we require OpenCV 3.1 or a later version. The download's filename is opencv2.framework.zip. Unzip it to get the framework file, opencv2.framework. Later, we will add this framework to our iOS application projects; we will import its header files using the following code:

#import <opencv2/core.hpp>

This imports the core module's header file from opencv2.framework. The import statement will vary according to the module's name.
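
For example, to import the imgproc module's header instead, we would write the following:

#import <opencv2/imgproc.hpp>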

Building the framework from source with extra modules

We will try to get and build all of OpenCV's modules. Broadly, this process will consist of the following four steps:

  1. Get the source code for OpenCV's standard modules. Store this in any folder, which we will refer to as <opencv_source_path>.
  2. Get the source code for OpenCV's extra modules. Store this in any folder, which we will refer to as <opencv_contrib_source_path>.
  3. Try to build all the modules and store the build in any folder, which we will refer to as <opencv_contrib_build_path>.
  4. If any module fails to build, resolve the issue by either removing the module or patching its source code. Then, try to build again.

Now, let's discuss the details as we walk through the steps. To obtain OpenCV's latest source code, we can use Git, an open source version control tool. We already installed Git as part of the Xcode Command Line Tools. OpenCV's standard and extra modules are hosted in two repositories on GitHub, an online repository hosting service. To download the standard modules' source code to <opencv_source_path>, run the following command:

$ git clone https://github.com/Itseez/opencv.git <opencv_source_path>

Similarly, to download the extra modules' source code to <opencv_contrib_source_path>, run the following command:

$ git clone https://github.com/Itseez/opencv_contrib.git <opencv_contrib_source_path>
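
The book tracks the latest source, but if you prefer a reproducible build, you could optionally pin both checkouts to matching release tags. Here, 3.1.0 is an assumed example tag:

$ git -C <opencv_source_path> checkout 3.1.0
$ git -C <opencv_contrib_source_path> checkout 3.1.0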

Note

For an exhaustive guide to Git, see the book Pro Git, 2nd Edition (Apress, 2014) by Scott Chacon and Ben Straub. The free eBook version is available at https://www.git-scm.com/book.

OpenCV's source code comes with build scripts for various platforms. The iOS build script takes two arguments—the build path and the opencv_contrib source path. Run the script in the following manner:

$ ./<opencv_source_path>/platforms/ios/build_framework.py <opencv_contrib_build_path> --contrib <opencv_contrib_source_path>

Read the script's output to see whether it failed to build any modules. Remember that opencv_contrib contains experimental modules from various authors, and some authors might not test their modules for iOS compatibility. For example, the following output shows a compilation error in the saliency module (modules/saliency):

** BUILD FAILED **


The following build commands failed:
  CompileC /Users/Joe/SDKs/OpenCV/fork_build_ios/build/iPhoneOS-armv7/modules/saliency/OpenCV.build/Release-iphoneos/opencv_saliency_object.build/Objects-normal/armv7/FilterTIG.o /Users/Joe/SDKs/OpenCV/fork_contrib/modules/saliency/src/BING/FilterTIG.cpp normal armv7 c++ com.apple.compilers.llvm.clang.1_0.compiler
(1 failure)
('Child returned:', 65)

If we do not require the problematic module, we may simply delete its source subfolder in <opencv_contrib_source_path>/modules, and then rerun build_framework.py. For example, to avoid building the saliency module, we may delete <opencv_contrib_source_path>/modules/saliency.
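
For example, assuming the saliency module is the one that failed, the removal and the rebuild might look like this:

$ rm -rf <opencv_contrib_source_path>/modules/saliency
$ ./<opencv_source_path>/platforms/ios/build_framework.py <opencv_contrib_build_path> --contrib <opencv_contrib_source_path>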

Note

For this book's projects, the following extra modules are useful:

  • xfeatures2d: This provides extra algorithms to match images based on distinctive details in the images
  • xphoto: This provides extra photo processing techniques

On the other hand, if we do require the problematic module, someone must first modify its source code so that it compiles and runs successfully on iOS. Patching opencv_contrib is beyond the scope of this book, but if you are skilled in C++ programming, I encourage you to try it sometime. Alternatively, you may decide to file an issue report at https://github.com/Itseez/opencv_contrib/issues and wait for the module's authors to respond.

When build_framework.py works properly, it prints ** INSTALL SUCCEEDED **, and creates the framework file at <opencv_contrib_build_path>/opencv2.framework. Later, we will add this framework to our iOS application projects; we will import its header files using the following code:

#import <opencv2/xphoto.hpp>

This imports the xphoto module's header file from opencv2.framework. The import statement will vary according to the module's name.

Making the extra modules optional in our code

As the extra modules are less stable than the standard modules, we may want to make them optional in our code. By enclosing the optional code inside a preprocessor condition, we can easily disable or re-enable it in order to test the effect. Consider the following example:

#ifdef WITH_OPENCV_CONTRIB
#import <opencv2/xphoto.hpp>
#endif

If we want to use opencv_contrib, we will edit the Xcode project settings to add WITH_OPENCV_CONTRIB as a preprocessor definition. Then, in the preceding example, the xphoto.hpp header will be imported in our code. Detailed steps to create a preprocessor definition are provided later in this chapter, in the Configuring the project section.
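
The same guard can also wrap calls into the extra modules, so that a standard-only build still compiles. The following sketch mirrors the white balance call that we will use later in this chapter; mat stands for any cv::Mat variable of our own:

#ifdef WITH_OPENCV_CONTRIB
  // The extra xphoto module is available; adjust the white balance.
  cv::xphoto::autowbGrayworld(mat, mat);
#else
  // Standard build; leave the white balance unchanged.
#endif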

Developing a minimal application

So far, we have set up a development environment including Xcode, the iOS SDK, and OpenCV. Now, let's use these tools and libraries to develop our first iOS application. The app will have the following flow of execution:

  1. When the application starts:
    1. Load an image from a file that is bundled with the app.
    2. If the image is in color (not grayscale), automatically adjust its white balance.
    3. Display the image in fullscreen mode.
  2. Every two seconds:
    1. Create an updated image by applying a random tint to the original image.
    2. Display the updated image.

Note that the application will not use a camera or any user input at all. However, the user will see an image that appears to be backlit with a colorful, changing light. This is not really a demo of computer vision, but it is a demo of image processing and integration between the iOS SDK and OpenCV. Moreover, it is decorative, festive, and best of all it has a theme—cool pigs. Our app's name will be CoolPig and it will display a cool picture of a pig. Consider the following example of a black-and-white photo of a piglet (left), along with three tinted variants:

[Image: a black-and-white photograph of a piglet, alongside three tinted variants]

Note

In this book's print version, all images appear in grayscale. To see them in color, download them from Packt Publishing's website at https://www.packtpub.com/sites/default/files/downloads/iOSApplicationDevelopmentwithOpenCV3_ColorImages.pdf, or read the eBook.

The original image is the work of Gustav Heurlin (1862-1939), a Swedish photographer who documented rural life in the early 20th century. He was an early adopter of the autochrome color photography process, and National Geographic published many of his photographs during 1919-1931.

When our users see a pig in a beautiful series of pop-art colors, they will question their preconceptions and realize it is a really cool animal.

Note

To obtain the completed projects for this book, go to the author's GitHub repository at https://github.com/JoeHowse/iOSWithOpenCV, or log in to your account on Packt Publishing's site at https://www.packtpub.com/.

Creating the project

Open Xcode. Click on the Create new Xcode project button or select the File | New | Project… menu item. Now, a dialog asks you to choose a project template. Select iOS | Application | Single View Application, as shown in the following screenshot:

[Screenshot: choosing the Single View Application template]

Single View Application is the simplest template as it just creates an empty GUI with no special navigational structure. Click on the Next button to confirm the selection. Now, a dialog asks you to pick a few project settings. Fill out the form as shown in the following screenshot:

[Screenshot: the project options form]

Let's review the items in the form:

  • Product Name: This is the application's name, such as CoolPig.
  • Organization Name: This is the name of the application's vendor, such as Nummist Media Corporation Limited.
  • Organization Identifier: This is the vendor's unique identifier. The identifier should use reverse domain name notation, such as com.nummist.
  • Bundle Identifier: This is the application's unique identifier, which is generated based on the Product Name and Organization Identifier. This field is non-editable.
  • Language: This is the project's high-level programming language, either Objective-C or Swift. This book uses Objective-C, which is a pure superset of C and interoperable with C++ to a great extent. Swift is not interoperable with C++. OpenCV's core language is C++, so Objective-C's interoperability makes it an obvious choice.
  • Devices: This is the supported hardware, which may be Universal (all iOS devices), iPhone (including iPod Touch), or iPad. This book's projects are Universal.
  • Use Core Data: If this is enabled, the project will contain a database using Apple's Core Data framework. For this book's projects, disable it.
  • Include Unit Tests: If this is enabled, the project will contain a set of tests using the OCUnit framework. For this book's projects, disable it.
  • Include UI Tests: If this is enabled, the project will contain a set of tests using Apple's UI automation framework for iOS. Disable it for this book's projects.

Click on the Next button to confirm the project options. Now, a file chooser dialog asks you to pick a folder for the project. Pick any location, which we will refer to as <app_project_path>.

Optionally, you may enable the Create Git repository checkbox if you want to put the project under version control using Git. Click on the Create button. Now, Xcode creates and opens the project.

Adding files to the project

Use Finder or Terminal to copy files to the following locations:

  • <app_project_path>/opencv2.framework: This framework contains the standard OpenCV modules. We downloaded or built it previously, as described in the Getting the prebuilt framework with standard modules or Building the framework from source with extra modules section.
  • <app_project_path>/CoolPig/Piggy.png: This may be any cool picture of a pig in grayscale or color. Any species of pig is acceptable, be it a swine, boar, Muppet, or other variety.

Go back to Xcode to view the project. Navigate to the File | Add Files to "CoolPig"… menu item. Now, Xcode opens a file chooser dialog. Select opencv2.framework and click on the Add button. Repeat the same steps for CoolPig/Piggy.png. Note that these files appear in the project navigator pane, which is the leftmost section of the Xcode window. In this pane, drag Piggy.png to the CoolPig | Supporting Files group. When you are finished, the navigator pane should look similar to the following screenshot:

[Screenshot: the project navigator pane with opencv2.framework and Piggy.png]

Configuring the project

First, let's configure our app to run in fullscreen mode with no status bar. Select the CoolPig project file at the top of the navigator pane. Now, select the General tab in the editor area, which is the central part of the Xcode window. Find the Deployment Info group, and enable the Hide status bar and Requires full screen checkboxes, as shown in the following screenshot:

[Screenshot: the Deployment Info settings with Hide status bar and Requires full screen enabled]

The status bar and fullscreen settings are stored in the app's Info.plist file. Select CoolPig | CoolPig | Info.plist in the navigator pane. Now, in the editor area, note that the UIRequiresFullscreen and Status bar is initially hidden properties both have the YES value. However, we still need to add another property to ensure that the status bar will not appear. Hover over the last item in the list, and click on the + button to insert a new property. Enter View controller-based status bar appearance as the property's key and set its value to NO, as shown in the following screenshot:

[Screenshot: the Info.plist properties]

Next, let's link the project with additional frameworks. OpenCV depends on two of Apple's frameworks called CoreGraphics.framework and UIKit.framework. Optionally, for optimizations, OpenCV can also use a third Apple framework called Accelerate.framework.

Note

The Accelerate framework contains Apple's hardware-accelerated implementation of industry-standard APIs for vector mathematics. Notably, it implements standards called Basic Linear Algebra Subprograms (BLAS) and Linear Algebra Package (LAPACK). OpenCV is designed to leverage these standards on various platforms including iOS.

Select the CoolPig project file in the navigator pane and then select the Build Phases tab in the editor area. Find the Link Binary With Libraries group. Click on the + button, select Accelerate.framework from the dialog, and click on the Add button. Repeat these steps for CoreGraphics.framework and UIKit.framework. Now, the editor area should look similar to the following screenshot:

[Screenshot: the Link Binary With Libraries settings]

Now, the linker will be able to find OpenCV's dependencies. However, we need to change another setting to ensure that the compiler will understand the C++ code in OpenCV's header files. Open the Build Settings tab in the editor area and find the Apple LLVM 7.0 - Language group. Set the value of the Compile Sources As item to Objective-C++, as seen in the following screenshot:

[Screenshot: the Compile Sources As setting]

Note

Alternatively, we could leave the Compile Sources As item at its default value, which is According to File Type. Then, we would need to rename our source files to give them the extension .mm, which Xcode associates with Objective-C++.

We have just one more thing to configure in the Build Settings tab. Remember that we consider the opencv_contrib modules to be an optional dependency of our projects, as described earlier in the Making the extra modules optional in our code section. If we did build opencv2.framework with these modules and if we do want to use their functionality, let's create a preprocessor definition, WITH_OPENCV_CONTRIB. Find the Apple LLVM 7.0 - Preprocessing group. Edit Preprocessor Macros | Debug and Preprocessor Macros | Release to add the WITH_OPENCV_CONTRIB text. Now, the settings should look like the following screenshot:

[Screenshot: the Preprocessor Macros settings]

As a final, optional step in the configuration, you may want to set the app's icon. Select CoolPig | CoolPig | Assets.xcassets in the project navigator pane. Assets.xcassets is a bundle, which may contain several variants of the icon for different devices and different contexts (the Home screen, Spotlight searches, and the Settings menu).

Click on the AppIcon list item in the editor area and then drag and drop an image file into each square of the AppIcon grid. If the image's size is incorrect, Xcode will notify you so that you may resize the image and try again. Once you have added your images, the editor area might look similar to the following screenshot:

[Screenshot: the AppIcon grid in Assets.xcassets]

Laying out an interface

Now, our project is fully configured and we are ready to design its graphical user interface (GUI). Xcode comes with a built-in tool called Interface Builder, which enables us to arrange GUI elements, connect them to variables and events in our code, and even define the transitions between scenes (or informally, screens). Remember that CoolPig's GUI is just a fullscreen image. However, even our simple GUI has a transition between a static loading screen (where the image does not change color) and a dynamic main screen (where the image changes color every two seconds). Let's first configure the loading screen and then the main screen.

Select CoolPig | CoolPig | LaunchScreen.storyboard in the navigator pane. This file is a storyboard, which stores the configuration of a set of scenes (or a single scene in this case). A scene hierarchy appears in the editor area. Navigate to View Controller Scene | View Controller | View. A blank view appears on the right-hand side of the editor area, as seen in the following screenshot:

[Screenshot: the empty view in LaunchScreen.storyboard]

Let's add an image view inside the empty view. Notice the list of available GUI widgets in the lower-right corner of the Xcode window. This area is called the library pane. Scroll through the library pane's contents. Find the Image View item and drag it to the empty view. Now, the editor area should look like this:

[Screenshot: the image view added to the scene]

Drag the corners of the highlighted rectangle to make the image view fill its parent view. The result should look like this:

[Screenshot: the image view resized to fill its parent view]

We still need to take a further step to ensure that the image view scales up or down to match the screen size on all devices. Click on the Pin button in the toolbar at the bottom of the editor area. The button's icon looks like a rectangle pinned between two lines. Now, a pop-up menu appears with the title Add New Constraints. Constraints define a widget's position and size relative to other widgets.

Specifically, we want to define the image view's margins relative to its parent view. To define a margin on every side, click on the four I-shaped lines that surround the square. They turn red. Now, enter 0 for the top and bottom values and -20 for the left and right values. Some iOS devices have built-in horizontal margins, and our negative values ensure that the image extends to the screen's edge even on these devices. The following screenshot shows the settings:

[Screenshot: the Add New Constraints pop-up menu]

Click on the Add 4 Constraints button to confirm these parameters.

Finally, we want to show an image! Look at the inspector pane, which is in the top-right area of the Xcode window. Here, we can configure the currently selected widget. Select the Attributes tab. Its icon looks like a slider. From the Image drop-down list, select Piggy.png. From the Mode drop-down list, select Aspect Fill. This mode ensures that the image will fill the image view in both dimensions, without appearing stretched. The image may appear cropped in one dimension. Now, the editor area and inspector pane should look similar to the following screenshot:

[Screenshot: the image view's attributes in the inspector pane]

So far, we have completed the loading screen's layout. Now, let's turn our attention to the main screen. Select CoolPig | CoolPig | Main.storyboard in the project navigator. This storyboard, too, has a single scene. Select its view. Add an image view and configure it in exactly the same way as the loading screen's image view. Later, in the Connecting an interface element to the code section, we will connect this new image view to a variable in our code.

Writing the code

As part of the Single View Application project template, Xcode has already created the following code files for us:

  • AppDelegate.h: This defines the public interface of an AppDelegate class. This class is responsible for managing the application's life cycle.
  • AppDelegate.m: This contains the private interface and implementation of the AppDelegate class.
  • ViewController.h: This defines the public interface of a ViewController class. This class is responsible for managing the application's main scene, which we saw in Main.storyboard.
  • ViewController.m: This contains the private interface and implementation of the ViewController class.

For CoolPig, we simply need to modify ViewController.m. Select CoolPig | CoolPig | ViewController.m in the project navigator. The code appears in the editor area. At the beginning of the code, let's add more #import statements to include the header files for several OpenCV modules, as seen in the following code:

#import <opencv2/core.hpp>
#import <opencv2/imgcodecs/ios.h>
#import <opencv2/imgproc.hpp>

#ifdef WITH_OPENCV_CONTRIB
#import <opencv2/xphoto.hpp>
#endif

#import "ViewController.h"

We will need to generate random numbers to create the image's random tint. For convenience, let's define the following macro, which generates a 64-bit floating-point number in the range of 0 to 1:

#define RAND_0_1() ((double)arc4random() / 0x100000000)

Note

The arc4random() function returns a random 32-bit integer in the range of 0 to 2^32 - 1. Dividing this by 0x100000000 (that is, 2^32) yields a value in the range of 0 to 1, excluding 1. The first time it is called, the function automatically seeds the random number generator.

The remainder of ViewController.m deals with the private interface and implementation of the ViewController class. Elsewhere, in ViewController.h, the class is declared as follows:

@interface ViewController : UIViewController
@end

Note that ViewController is a subclass of UIViewController, which is an important class in the iOS SDK. UIViewController manages the life cycle of a set of views and provides reasonable default behaviors as well as many methods that may override these defaults. If we develop applications according to the model-view-controller (MVC) pattern, then UIViewController is the controller or coordinator, which enforces good separation between the platform-specific view or GUI and platform-independent model or "business logic".

Let's turn our attention back to the private interface of ViewController in ViewController.m. The class keeps the original image and updated image as member variables. They are instances of OpenCV's cv::Mat class, which can represent any kind of image or other multidimensional data. ViewController also has a reference to the image view where we will display the image. Another of the class's properties is an NSTimer object, which will fire a callback every two seconds. Finally, the class has a method, updateImage, which will be responsible for displaying a new random variation of the image. Here is the code for ViewController's private interface:

@interface ViewController () {
  cv::Mat originalMat;
  cv::Mat updatedMat;
}

@property IBOutlet UIImageView *imageView;
@property NSTimer *timer;

- (void)updateImage;

@end

Now, let's implement the methods of the ViewController class. It inherits many methods from its parent class, UIViewController, and we could override any of these. First, we want to override the viewDidLoad method, which runs when the scene is loaded from its storyboard. Typically, this is an appropriate time to initialize the view controller's member variables. Our implementation of viewDidLoad will begin by loading Piggy.png from file and converting it to OpenCV's RGB format. If the image was not originally grayscale and OpenCV's extra photo module is available, we will use a function from this module to adjust the white balance. Finally, we will start a timer to invoke our updateImage method every two seconds. Here is our code for viewDidLoad:

@implementation ViewController

- (void)viewDidLoad {
  [super viewDidLoad];
  
  // Load a UIImage from a resource file.
  UIImage *originalImage =
      [UIImage imageNamed:@"Piggy.png"];
  
  // Convert the UIImage to a cv::Mat.
  UIImageToMat(originalImage, originalMat);
  
  switch (originalMat.type()) {
    case CV_8UC1:
      // The cv::Mat is in grayscale format.
      // Convert it to RGB format.
      cv::cvtColor(originalMat, originalMat,
          cv::COLOR_GRAY2RGB);
      break;
    case CV_8UC4:
      // The cv::Mat is in RGBA format.
      // Convert it to RGB format.
      cv::cvtColor(originalMat, originalMat,
          cv::COLOR_RGBA2RGB);
#ifdef WITH_OPENCV_CONTRIB
      // Adjust the white balance.
      cv::xphoto::autowbGrayworld(originalMat,
          originalMat);
#endif
      break;
    case CV_8UC3:
      // The cv::Mat is in RGB format.
#ifdef WITH_OPENCV_CONTRIB
      // Adjust the white balance.
      cv::xphoto::autowbGrayworld(originalMat, originalMat);
#endif
      break;
    default:
      break;
  }
  
  // Call an update method every 2 seconds.
  self.timer = [NSTimer scheduledTimerWithTimeInterval:2.0
      target:self selector:@selector(updateImage)
      userInfo:nil repeats:YES];
}

Note

NSTimer only fires callbacks when the app is in the foreground. This behavior is convenient for our purposes because we only want to update the image when it is visible.
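
One caveat that the book's sample does not need to handle: a repeating NSTimer retains its target, so a view controller that schedules one will not be deallocated while the timer is valid. CoolPig's only view controller lives for the app's entire lifetime, so this is harmless here, but in a multiscene app, a sketch of the cleanup might look like this:

- (void)viewWillDisappear:(BOOL)animated {
  [super viewWillDisappear:animated];
  // Invalidate the timer so that it releases its target (self).
  [self.timer invalidate];
  self.timer = nil;
}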

Now, let's implement the updateImage helper method. It will multiply each color channel by a random floating-point number. The following table describes the effects of multiplying various channels by a coefficient, k:

Value of k | Effect of multiplying red channel by k | Effect of multiplying green channel by k | Effect of multiplying blue channel by k
0 <= k < 1 | Image becomes darker, with a cyan tint | Image becomes darker, with a magenta tint | Image becomes darker, with a yellow tint
k == 1 | No change | No change | No change
k > 1 | Image becomes brighter, with a red tint | Image becomes brighter, with a green tint | Image becomes brighter, with a blue tint

The following code generates the random color, multiplies it together with the original image, and displays the result in the image view:

- (void)updateImage {
  // Generate a random color.
  double r = 0.5 + RAND_0_1() * 1.0;
  double g = 0.6 + RAND_0_1() * 0.8;
  double b = 0.4 + RAND_0_1() * 1.2;
  cv::Scalar randomColor(r, g, b);
  
  // Create an updated, tinted cv::Mat by multiplying the
  // original cv::Mat and the random color.
  cv::multiply(originalMat, randomColor, updatedMat);
  
  // Convert the updated cv::Mat to a UIImage and display
  // it in the UIImageView.
  self.imageView.image = MatToUIImage(updatedMat);
}

@end

Tip

Feel free to adjust the range of each random color coefficient to your taste. OpenCV clamps the result of the multiplication so that a color channel's value cannot overflow the 8-bit range of 0 to 255.
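
For example, suppose a pixel's red value is 200 and the red coefficient is 1.5. The product, 300, exceeds the 8-bit maximum, so OpenCV stores 255 instead of letting the value wrap around.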

We have implemented all the custom logic of CoolPig in just 50 lines of code! The project template, storyboard, iOS SDK, and OpenCV provide many useful abstractions and thus enable us to focus on writing concise, application-specific code.

Connecting an interface element to the code

Let's connect the image view in Main.storyboard to the imageView property in ViewController.m. Open Main.storyboard in the project navigator and right-click on View Controller in the scene hierarchy. A dialog with a dark background appears. Drag from the circle beside Outlets | imageView in the dark dialog box to the Piggy.png image view in the scene hierarchy, as shown in the following screenshot:

[Screenshot: connecting the image view to the imageView outlet]

Release the mouse button to complete the connection. Close the dark dialog box.

Building and running the application

We are ready to build the app and run it in an iOS simulator or on an iOS device. First, if you want to use an iOS device, connect it to the Mac via a USB cable. The first time you connect a device, Xcode's top toolbar might show a progress bar and message, Processing symbol files. Wait for the message to disappear. Now, click on the CoolPig drop-down menu in Xcode's top toolbar and select the device or simulator that you want to use, such as Devices | Joseph's iPad or iOS Simulators | iPad Pro. Click on the Run button. Its icon is the standard triangular play symbol. Xcode builds the app, copies it to the device or simulator, and then launches it. Watch the pig change colors! For example, the app might look like this on an iPad Mini device:

[Screenshot: CoolPig running on an iPad Mini]

Tip

If you are using a simulator, you might find that its screen is too large to fit on your Mac's screen. To scale down the simulator's screen, go to the simulator's menu and select Window | Scale | 50% or another value.

Congratulations! We have built and run our first iOS application, including OpenCV for image processing and a pig for artistic reasons.

Distributing to testers and customers

Using the techniques we have learned thus far, we can build an app for iOS simulators and local iOS devices. For this, we do not require permission from Apple, and we do not need to purchase anything except a Mac for our development environment and any iOS devices for our testing.

On the other hand, if we want to distribute an app to other testers or publish it on the App Store, we must take a few more steps, spend a bit more money, and obtain permission from Apple. For details, see Apple's official App Distribution Guide at https://developer.apple.com/library/ios/documentation/IDEs/Conceptual/AppDistributionGuide. Briefly, a typical distribution process involves the following steps:

  1. Enroll in the iOS Developer Program at https://developer.apple.com/programs/enroll. The cost of membership varies depending on where you live. It is $99 per year in the United States.
  2. Optionally, use the iOS Provisioning Portal at https://developer.apple.com/account to create the credentials in order to distribute the app. Configure the Xcode project to use the credentials. Alternatively, Xcode may be able to create the credentials automatically even if you do not use the iOS Provisioning Portal.
  3. Distribute your app to beta testers via Apple's TestFlight tools, which are part of the iTunes Connect tools at https://itunesconnect.apple.com.
  4. If necessary, revise the app based on beta testers' feedback and retest.
  5. Submit your app for publication via the iTunes Connect tools.
  6. If necessary, revise the app based on Apple's feedback and resubmit.
  7. Receive Apple's blessing and confirm that you are ready to release your app to the App Store. Reap the rewards of app publication!

Publishing an app (or a book!) is a significant undertaking and can be invigorating and humbling at the same time. Publication entails an ongoing responsibility to validate, fix, and promote your work and support your customers. This book's role is to impart valuable technical skills so that you can develop your own publishable projects in the field of computer vision!

Finding documentation and support

Outside this book, there is not much documentation or support specifically about integrating OpenCV 3 into iOS projects. However, if you seek answers about OpenCV 3 in general or iOS in general, you will find a bigger community and a wealth of documentation, notably in the official OpenCV documentation and forums and in Apple's developer library.

Understanding the camera and setting up photographic accessories

You have probably taken photos with an iOS device before. Perhaps you are even familiar with a variety of apps for image capture and image processing. Photography with an iPhone is certainly a popular pastime, and some people even define it as a distinct photographic movement called iPhoneography.

Tip

If you are entirely new to iPhone photography, take some time now to try Apple's Camera and Photo apps, as well as some third-party photography apps.

iPhone users are not alone in espousing a brand-centric view of photography. For example, another movement called Lomography derives its inspiration from a film camera called the LOMO LC-A, released by the Leningrad Optical Mechanical Association (LOMO) in 1984. LOMO makes precise optical instruments including microscopes, telescopes, night-vision devices, and medical imaging systems, but ironically the company entered the consumer market with a cheap and quirky camera. By conventional standards, the LC-A and its successors suffer from major optical and mechanical flaws, which result in blurry images with uneven brightness and coloration. Lomographers like the unconventional appearance of these images.

Likewise, iPhoneographers are not necessarily concerned with the predictability and fidelity (true-to-life quality) of the camera's images. Considering that a new iPhone costs between $450 and $750, many photographers would find its image quality disappointing and its controls very limited. It bears no resemblance to conventional cameras in the same price range. On the other hand, iPhoneographers may assign greater value to the iPhone's ability to capture photos discreetly and edit and share them immediately.

Some users may crave the best of both worlds—the brains of an iPhone in the body of a slightly more conventional camera. There are many third-party photo accessories for iOS devices, and these accessories mimic some of the components of a modular, professional photo system. Particularly, we will discuss three kinds of accessories: lighting, tripods, and lens attachments. To help us appreciate the purpose of these accessories, let's establish a baseline of comparison. The following table shows the specifications of the built-in lenses and image sensors in iOS devices' rear cameras:

Device | Resolution (pixels) | Sensor diagonal (mm) | Focal length (mm) | Diagonal FOV (degrees) | Maximum aperture
iPhone 4 | 2592x1936 | 5.68 | 3.85 | 72.8 | f/2.8
iPhone 4S | 3264x2448 | 5.68 | 4.28 | 67.1 | f/2.4
iPhone 5, 5C | 3264x2448 | 5.68 | 4.10 | 69.4 | f/2.4
iPhone 5S | 3264x2448 | 6.11 | 4.12 | 73.1 | f/2.2
iPhone 6, 6 Plus | 3264x2448 | 6.11 | 4.15 | 72.7 | f/2.2
iPhone 6S, 6S Plus | 4032x3024 | 6.11 | 4.15 | 72.7 | f/2.2
iPad 3, 4 | 2592x1936 | 5.68 | 4.3 | 66.9 | f/2.4
iPad Air 1; iPad Mini 1, 2, 3; iPod Touch 5 | 2592x1936 | 4.33 | 3.3 | 66.5 | f/2.4
iPad Air 2; iPad Mini 4 | 3264x2448 | 4.61 | 3.3 | 69.9 | f/2.4

The field of view (FOV) is the angle formed by the lens's focal point and two points at diagonally opposite edges of the visible space. Some authors may specify horizontal or vertical FOV instead of diagonal FOV. By convention, FOV implies diagonal FOV if not otherwise specified. The focal length is the distance between the image sensor and the lens's optical center when the lens is focused on an infinitely distant subject. See the following diagram:

[Diagram: the geometric relationship between the diagonal FOV, the sensor diagonal, and the focal length]

The diagonal FOV, the sensor's diagonal size, and the focal length are geometrically related according to the following formula:

diagonalFOVDegrees = 2 * atan(0.5 * sensorDiagonal / focalLength) * 180/pi
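
As a quick check, we can plug in the iPhone 6's values from the preceding table:

2 * atan(0.5 * 6.11 / 4.15) * 180/pi = 72.7

This matches the table's diagonal FOV of 72.7 degrees.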

Depending on the model of the iOS device, the diagonal FOV ranges from 73.1 to 66.5 degrees. These values are equivalent to the FOV of a 29 mm to 33 mm lens in a traditional 35 mm camera system. Most photographers would characterize this FOV as moderately wide. Is moderately wide a good compromise? It depends on the use case. A wider angle helps to ensure that the subject literally has nowhere to hide. For example, this can be important in security applications. A narrower angle helps to ensure that details are captured even at a distance. For example, this can be important in product inspection applications, if the constraints of the workspace do not allow the camera to be placed close to the subject. If we want to choose the FOV, we must modify or accessorize the iOS device's optical system!

All the iOS cameras have small sensors. Their diagonal size ranges from 4.33 mm to 6.11 mm. For comparison, the diagonal size of the film or digital sensor in a 35 mm camera system is 43.3 mm. A smaller sensor has less capacity to gather light. To compensate, camera systems with small sensors tend to amplify the sensor's signal (the measurement of the light), but at the same time they amplify the random noise. Furthermore, to compensate for the noise, the system may blur the image. Thus, if we compare two images of the same scene at the same resolution, the image from the smaller sensor will tend to be noisier or blurrier. This difference becomes especially obvious when the light is dim. To summarize, we must expect that an iOS camera will take poor pictures in poor light. Thus, we must find or create good light!

Note

Engineers may refer to the amplification of the sensor's signal as gain and photographers may refer to it as ISO speed. The latter metric is formally defined by the International Organization for Standardization (ISO).

The ability to gather light is also directly related to the area of the lens's aperture. Typically, the aperture is expressed as an f-number or f-stop, which is defined as the ratio of the focal length to the aperture's diameter. Typically, an aperture is approximately circular and thus its area is proportional to the square of its radius. It follows that the intensity of the light passing through the aperture is inversely proportional to the square of the f-number. For example, an f/2 lens admits twice as much light as an f/2.8 lens. The iOS lenses have maximum apertures of f/2.2 to f/2.8, depending on the model. Values in this range are quite typical of wide-angle lenses in general, so the iOS lenses have neither an advantage nor disadvantage in this respect.
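
In the same notation as the FOV formula, the light admitted by two lenses of equal focal length compares as follows:

relativeLight = (fNumber2 / fNumber1) ^ 2

For example, (2.8 / 2.0) ^ 2 = 1.96, which is the basis for the statement that an f/2 lens admits approximately twice as much light as an f/2.8 lens.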

Finally, let's consider an issue of ergonomics. All iOS devices are lightweight and smooth, and most of them are too small to hold in both hands. Thus, the user's grip is not firm. When a typical user holds out an iPhone to take a photo, the user's arm is like a long branch and the little iPhone shakes like a leaf. A high proportion of the pictures may suffer from motion blur. Comparatively, the design of a more traditional camera and lens may permit the user to brace the equipment's weight against his or her body in several places. Consider the following photograph:

[Photograph: Bob holding an iPhone at arm's length while Joe braces a photo-sniper kit]

The man in the background is Bob. Bob is left-handed. He is holding an iPhone in his left hand as he taps its camera button with his right hand. The man in the foreground is Joe. Joe is right-handed. He is equipped with a photo-sniper kit, which is a long lens mounted on two handles and a shoulder stock. The equipment's weight is braced against Joe's right knee and right shoulder. For additional stability, Joe's legs are folded and he is leaning leftward against a steel post and concrete slab. From another angle, the same pose looks like this:

[Photograph: the same pose from another angle]

This type of human stabilization can work well for some equipment. However, a more reliable approach is to use rigid support such as a tripod and we should definitely consider this when we tackle computer vision problems with a smartphone or tablet.

Now, let's take stock of the types of accessories that can change the lighting, stabilization, and perspective.

Lights

Many iOS devices have a built-in flash, which consists of a white LED light on the back of the device. Camera apps may activate the flash during photo capture, especially if the scene is dimly lit. Other apps may activate the flash for a long duration so that it acts as a flashlight or torch to help the user see. With only a single LED, the built-in flash may provide insufficient or uneven illumination in some circumstances.

If you need stronger or more evenly distributed illumination, or if your iOS device lacks a built-in flash altogether, you may want to purchase an external flash. Depending on the design, the external flash may mount as part of a case or may plug into the iOS device's audio jack. Typically, the external flash will have multiple white LEDs arranged in a line, grid, or ring. The latter design is called a ring flash.

Alternatively, in a controlled environment, you may set up any kind of lighting anywhere you please and you do not need to rely on the iOS device as a power source. Even a pair of well-placed desk lamps can greatly enhance the clarity and beauty of a scene. Normally, it is best to illuminate the subject from multiple angles to prevent shadows. Do not shine a light directly into the camera and do not illuminate the background more brightly than the subject, as these types of lighting tend to give the subject a very murky appearance with low contrast in the foreground. Sometimes, murky light can be artistically interesting, but it is not good for computer vision.

Tripods and other stabilization

A conventional photo camera has a threaded mount where the user may screw in the head of a tripod. Of course, an iOS device has no threaded mount. If we want to use a tripod with a standard screw, we may purchase an adapter that consists of a threaded mount and clip to hold the iOS device. Alternatively, we may purchase a tripod that has a built-in clip instead of a standard screw. Regardless of the type of mount, we also need to consider the following characteristics of the tripod:

  • Height: How tall is the tripod? Most tripods have extensible legs so that their height can vary. To help you decide what tripod height you require for a given application, consider how a person would normally look at the subject. If the subject is a small object such as a coin, a person might inspect it up close and similarly a short tripod might be appropriate. If the subject is a large object such as a lineup of cars on a highway, a person might watch it at eye level or might even look down on it from higher ground and similarly a tall tripod might be appropriate.
  • Weight: A heavy tripod is cumbersome to carry, but it may be able to resist a destabilizing force such as a gust of wind.
  • Material: Plastic may flex and crack. Metal may vibrate. Carbon fiber is less prone to these weaknesses, but it is relatively expensive. Some small tripods have bendable wire legs so that the user may wrap the tripod around another support, such as a branch or post. For example, GorillaPod is a well-known brand of tripods with bendable legs.

Typically, a small, lightweight tripod might cost between $10 and $30. This kind is often marketed as a mini or travel tripod. A tripod is a useful but optional accessory for all chapters in this book.

If you do not have a tripod or there is nowhere to place it, you may want to experiment with makeshift forms of stabilization. For example, if you want to monitor a room or hallway, you can tape the iOS device to a wall or ceiling. Be careful to choose tape that you can remove cleanly, without damaging the device's screen.

Lens attachments

A lens attachment or add-on lens is an additional optical unit that sits in front of the iPhone or iPad's built-in lens. Typically, the attachment is designed for the rear camera and its mount may consist of a magnet, clip, or case. The types of add-on lenses include the following:

  • Telephoto attachment: This enables the lens to capture a narrower (zoomed in) field of view, comparable to a spyglass. Sometimes, a telephoto attachment is called a zoom attachment.
  • Wide-angle attachment: This enables the lens to capture a wider (zoomed out) field of view.
  • Fisheye attachment: This enables the lens to capture an extremely wide field of view, between 100 and 180 degrees diagonally. By design, the fisheye perspective is distorted such that straight lines appear curved. Sometimes, a fisheye attachment is called a panoramic attachment because software can convert a fisheye image into a panorama (a perspective-corrected image with a wide aspect ratio).
  • Macro or close-up attachment: This enables the lens to focus at a short distance in order to capture a sharp image at a high level of magnification, comparable to a magnifying glass.
  • Microscope attachment: This enables a more extreme level of magnification, comparable to a microscope. The focus distance is so short that the lens attachment may almost touch the subject. Typically, the attachment includes a ring of LED lights to illuminate the subject.

Typically, a lens attachment might cost between $20 and $50. The sharpness of the optics can vary greatly, so try to compare reviews before you choose a product. A fisheye attachment could be a fun accessory for our photographic work in Chapter 2, Capturing, Storing, and Sharing Photos. A macro, close-up, or microscope attachment could be useful for our work with small objects in Chapter 5, Classifying Coins and Commodities. Generally, you can experiment with any lens attachment in any chapter's project.

Summary

This chapter has introduced the software and hardware that we will use to make computer vision applications for iOS. We set up a development environment, including Xcode, the iOS SDK, the Xcode Command Line Tools, a prebuilt version of OpenCV's standard modules, and optionally a custom-built version of OpenCV's extra modules. Using these tools and libraries, we developed an iOS application that performs a basic image processing function and we built it for iOS simulators and local devices. We discussed some of the places where we can seek more information about Apple's app distribution process, the iOS SDK, and OpenCV. Finally, we compared the camera specifications of iOS devices and learned about accessories that may help us capture clearer and more specialized images. The next chapter delves deeper into the topics of computational photography and image processing as we will build an application that can capture, edit, and share photographs.
