
How-To Tutorials - Application Development

357 Articles

Getting Started with Code::Blocks

Packt
28 Oct 2013
7 min read
Why Code::Blocks?

Before we go on to learn more about Code::Blocks, let us understand why we should use Code::Blocks over other IDEs:

  • It is a cross-platform Integrated Development Environment (IDE) that supports the Windows, Linux, and Mac operating systems.
  • It fully supports the GCC compiler and the GNU debugger on all supported platforms.
  • It supports numerous other compilers to various degrees on multiple platforms.
  • It is scriptable and extendable, and it comes with several plugins that extend its core functionality.
  • It is light on resources and doesn't require a powerful computer to run.
  • Finally, it is free and open source.

Installing Code::Blocks on Windows

The primary focus of this article is the Windows platform; however, we'll touch upon other platforms wherever possible. Official Code::Blocks binaries are available from www.codeblocks.org. Perform the following steps for a successful installation of Code::Blocks:

1. Download the codeblocks-12.11mingw-setup.exe file from http://www.codeblocks.org/downloads/26 or from the SourceForge mirror http://sourceforge.net/projects/codeblocks/files/Binaries/12.11/Windows/codeblocks-12.11mingw-setup.exe/download and save it in a folder.
2. Double-click on this file to run it, then click on the Next button to continue.
3. The license text will be presented. The Code::Blocks application is licensed under the GNU GPLv3 and the Code::Blocks SDK is licensed under the GNU LGPLv3; you can learn more about these licenses at https://www.gnu.org/licenses/licenses.html. Click on I Agree to accept the License Agreement.
4. The component selection page will be presented. You may choose any of the following options:
   • Default install: The default installation option. This installs Code::Blocks' core components and core plugins.
   • Contrib Plugins: Plugins are small programs that extend Code::Blocks' functionality. Select this option to install plugins contributed by several other developers.
   • C::B Share Config: A utility that can copy all or part of a configuration file.
   • MinGW Compiler Suite: This option installs GCC 4.7.1 for Windows.
5. Select Full Installation and click on the Next button to continue. The installer will now prompt for the installation directory. You can accept the default installation directory or choose another Destination Folder, then click on the Install button. The installer will now proceed with the installation.
6. Code::Blocks will prompt to run it after the installation is completed. Click on the No button here, then click on the Next button.
7. Click on the Finish button to complete the installation. A shortcut will be created on the desktop.

This completes our Code::Blocks installation on Windows.
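Before moving on, a quick way to confirm that the bundled MinGW toolchain was installed correctly is to query it from a command prompt. This is only a sketch: it assumes you add Code::Blocks' MinGW bin folder (for example, C:\Program Files\CodeBlocks\MinGW\bin; the exact path depends on your chosen installation directory) to your PATH:

    rem Assumes the MinGW bin folder from the Code::Blocks install is on PATH
    g++ --version
    gdb --version

If both commands print version information, the compiler and debugger are ready to use.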
Installing Code::Blocks on Linux

Code::Blocks runs on numerous Linux distributions. In this section we'll learn how to install Code::Blocks on CentOS Linux. CentOS is a freely available, enterprise-grade Linux distribution based on Red Hat Enterprise Linux. Perform the following steps to install Code::Blocks on Linux:

1. Navigate to the Settings | Administration | Add/Remove Software menu option. Enter wxGTK in the Search box and hit the Enter key. As of this writing, wxGTK-2.8.12 is the latest stable wxWidgets release available.
2. Select it and click on the Apply button to install the wxGTK package via the package manager.
3. Download the packages for CentOS 6 from http://www.codeblocks.org/downloads/26.
4. Unpack the .tar.bz2 file by issuing the following command in a shell:

       tar xvjf codeblocks-12.11-1.el6.i686.tar.bz2

5. Right-click on the codeblocks-12.11-1.el6.i686.rpm file and choose the Open with Package Installer option. In the window that appears, click on the Install button to begin installation.
6. You may be asked to enter the root password if you are installing from a user account. Enter the root password and click on the Authenticate button. Code::Blocks will now be installed.
7. Repeat steps 4 to 6 to install the other rpm files.

We have now learned to install Code::Blocks on the Windows and Linux platforms and are ready for C++ development. Before doing that, we'll learn about the Code::Blocks user interface.

First run

On the Windows platform, navigate to Start | All Programs | CodeBlocks | CodeBlocks to launch Code::Blocks, or double-click on the shortcut displayed on the desktop. On Linux, navigate to Applications | Programming | Code::Blocks IDE to run Code::Blocks.

Code::Blocks will now ask the user to select the default compiler. Code::Blocks supports several compilers and is able to detect the presence of other compilers. It should detect the GNU GCC Compiler, which was bundled with the installer and has been installed. Click on it to select it, then click on the Set as default button. Do not worry about any items highlighted in red; red lines indicate compilers whose presence Code::Blocks was unable to detect. Finally, click on the OK button to continue with the loading of Code::Blocks. After loading completes, the main Code::Blocks window is shown, with the following User Interface (UI) components:

  • Menu bar and toolbar: All Code::Blocks commands are available via the menu bar, while toolbars provide quick access to commonly used commands.
  • Start page and code editors: The Start page is the default page when Code::Blocks is launched; it contains some useful links and the recent project and file history. Code editors are text containers used to edit C++ (and other language) source files. These editors offer syntax highlighting, a feature that highlights keywords in different colors.
  • Management pane: This window shows all open files (including source files, project files, and workspace files). It is also used by other plugins to provide additional functionality; for example, the FileManager plugin provides a Windows Explorer-like facility and the Code Completion plugin provides details of currently open source files.
  • Log windows: Log messages from different tools (for example, the compiler, debugger, and document parser) are shown here. This component is also used by other plugins.
  • Status bar: This component shows various pieces of status information about Code::Blocks, for example, file path, file encoding, line numbers, and so on.
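With the IDE loaded, you can verify the whole toolchain with a minimal program before exploring the toolbars described next. This is just a sketch: create a new source file, paste in the following, and use the compiler toolbar's build and run buttons:

    // hello.cpp - a minimal program to verify the toolchain
    #include <iostream>

    int main()
    {
        std::cout << "Code::Blocks is ready." << std::endl;
        return 0;
    }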
Introduction to important toolbars

Toolbars provide easier access to different functions of Code::Blocks. Among the several toolbars, the following ones are the most important:

  • Main toolbar: Holds core component commands. From left to right there are new file, open file, save, save all, undo, redo, cut, copy, paste, find, and replace buttons.
  • Compiler toolbar: Holds commonly used compiler-related commands. From left to right there are build, run, build and run, rebuild, stop build, and build target buttons. Compilation of C++ source code is also called a build, and this terminology will be used throughout the article.
  • Debugger toolbar: Holds commonly used debugger-related commands. From left to right there are debug/continue, run to cursor, next line, step into, step out, next instruction, step into instruction, break debugger, stop debugger, debugging windows, and various info buttons.

Summary

In this article we have learned to download and install Code::Blocks, and we have learned about its different interface elements.

Further resources on this subject:
  • OpenGL 4.0: Building a C++ Shader Program Class [Article]
  • Application Development in Visual C++ - The Tetris Application [Article]
  • Building UI with XAML for Windows 8 Using C [Article]


Scratching the Surface of Zend Framework 2

Packt
25 Oct 2013
11 min read
Bootstrap your app

There are two ways to bootstrap your ZF2 app. The default one is less flexible but handles the entire configuration; the manual one is really flexible, but you have to take care of everything. The goal of the bootstrap is to provide the application, Zend\Mvc\Application, with all the components and dependencies needed to successfully handle a request. A Zend Framework 2 application relies on the following six components:

  • Configuration array
  • ServiceManager instance
  • EventManager instance
  • ModuleManager instance
  • Request object
  • Response object

As these are the pillars of a ZF2 application, we will take a look at how these components are configured to bootstrap the app. To begin with, we will see how the components interact from a high-level perspective, and then we will jump into the details of how each one works.

When a new request arrives at our application, ZF2 needs to set up the environment to be able to fulfill it. This process implies reading the configuration files, creating the required objects and services, attaching them to the events that are going to be used, and finally creating the request object based on the request data. Once we have the request object, ZF2 will tell the router to do its job and will inspect the request object to determine who is responsible for processing the data. Once a controller and action have been identified as the ones in charge of the request, ZF2 dispatches them and gives the controller/action control of the program in order to execute the code that will interpret the request and do something with it. This can range from accepting an uploaded image to showing a sign-up form or changing data in an external database. When the controller processes the data, sometimes a view object is generated to encapsulate the data that we should send to the client who made the request, and a response object is created. After we have a response object, ZF2 sends it to the browser and the request ends.

Now that we have seen a very simple overview of the lifecycle of a request, we will jump into the details of how each object works, the options available, and some examples of each one.

Configuration array

Let's dissect the first component of the list by taking a look at the index.php file:

    chdir(dirname(__DIR__));

    // Setup autoloading
    require 'init_autoloader.php';

    // Run the application!
    Zend\Mvc\Application::init(require 'config/application.config.php')->run();

As you can see, we only do three things. First, we change the current folder for the convenience of making everything relative to the root folder. Then we require the autoloader file; we will examine this file later. Finally, we initialize a Zend\Mvc\Application object by passing a configuration file, and only then does the run method get called. The configuration file looks like the following code snippet:

    return array(
        'modules' => array(
            'Application',
        ),
        'module_listener_options' => array(
            'config_glob_paths' => array(
                'config/autoload/{,*.}{global,local}.php',
            ),
            'module_paths' => array(
                './module',
                './vendor',
            ),
        ),
    );

This file returns an array containing the configuration options for the application. Two options are used: modules and module_listener_options. As ZF2 uses a module organization approach, we should add the modules that we want to use in the application here. The second option is passed as configuration to the ModuleManager object.
The config_glob_paths array is used when scanning the folders in search of config files, and the module_paths array tells ModuleManager the set of paths where the modules reside. ZF2 uses a module approach to organize files. A module can contain almost anything: plain PHP files, view scripts, images, CSS, JavaScript, and so on. This approach allows us to build reusable blocks of functionality, and we will adhere to it while developing our project.

PSR-0 and autoloaders

Before continuing with the key components, let's take a closer look at the init_autoloader.php file used by index.php. As stated in its first block comment, this file is more complicated than it needs to be, because ZF2 tries to set up several different loading mechanisms and configurations.

    if (file_exists('vendor/autoload.php')) {
        $loader = include 'vendor/autoload.php';
    }

The first thing it does is check whether there is an autoload.php file inside the vendor folder; if one is found, we load it. This is because the user might be using Composer, in which case Composer will provide a PSR-0 class loader. This will also register the namespaces defined by Composer on the loader.

PSR-0 is an autoloading standard proposed by the PHP Framework Interop Group (http://www.php-fig.org/) that describes the mandatory requirements for autoloader interoperability between frameworks. Zend Framework 2 is one of the projects that adheres to it.

    if (getenv('ZF2_PATH')) {
        $zf2Path = getenv('ZF2_PATH');
    } elseif (get_cfg_var('zf2_path')) {
        $zf2Path = get_cfg_var('zf2_path');
    } elseif (is_dir('vendor/ZF2/library')) {
        $zf2Path = 'vendor/ZF2/library';
    }

In the next section we try to get the path of the ZF2 files from different sources. We first try to get it from the environment; failing that, from a directive value in the php.ini file; and finally, if the previous methods fail, the code checks whether a specific folder exists inside the vendor folder.

    if ($zf2Path) {
        if (isset($loader)) {
            $loader->add('Zend', $zf2Path);
        } else {
            include $zf2Path . '/Zend/Loader/AutoloaderFactory.php';
            Zend\Loader\AutoloaderFactory::factory(array(
                'Zend\Loader\StandardAutoloader' => array(
                    'autoregister_zf' => true
                )
            ));
        }
    }

Finally, if the framework is found by any of these methods, then, based on the existence of the Composer autoloader, the code either adds the Zend namespace to that loader or instantiates the internal autoloader, Zend\Loader\StandardAutoloader, and uses it as a default. As you can see, there are multiple ways to set up the autoloading mechanism in ZF2; in the end what matters is which one you prefer, as all of them behave essentially the same.

ServiceManager

After all this code has executed, we arrive at the last section of the index.php file, where we actually instantiate the Zend\Mvc\Application object. As we said, there are two methods of creating an instance of Zend\Mvc\Application. In the default approach, we call the static init method of the class, passing an optional configuration as the first parameter. This method takes care of instantiating a new ServiceManager object, storing the configuration inside it, loading the modules specified in the configuration, and returning a configured Zend\Mvc\Application. ServiceManager is a service/object locator that implements the Service Locator design pattern; its responsibility is to retrieve other objects.
    $serviceManager = new ServiceManager(
        new Service\ServiceManagerConfig($smConfig)
    );
    $serviceManager->setService('ApplicationConfig', $configuration);
    $serviceManager->get('ModuleManager')->loadModules();
    return $serviceManager->get('Application')->bootstrap();

As you can see, the init method calls the bootstrap() method of the Zend\Mvc\Application instance.

Service Locator is a design pattern used in software development to encapsulate the process of obtaining other objects. The concept is based on a central repository that stores the objects and also knows how to create them if required.

EventManager

This component is designed to provide multiple functionalities. It can be used to implement simple observer patterns, to do aspect-oriented design, or even to create event-driven architectures. The basic operations you can perform with this component are attaching and detaching listeners to named events, triggering events, and interrupting the execution of listeners when an event is fired. Let's see a couple of examples of how to attach to an event and how to fire one:

    // Registering an event listener
    $events = new EventManager();
    $events->attach(array('EVENT_NAME'), $callback);

    // Triggering an event
    $events->trigger('EVENT_NAME', $this, $params);

Inside the bootstrap method of Zend\Mvc\Application, we register the events of RouteListener, DispatchListener, and ViewManager. After that, the code instantiates a new custom event called MvcEvent that will be used as the target when firing events. Finally, this piece of code fires the bootstrap event.

ModuleManager

Zend Framework 2 introduces a completely redesigned ModuleManager, built with simplicity, flexibility, and reuse in mind. Modules can hold everything from PHP files to images, CSS, library code, views, and so on. The responsibility of this component in the bootstrap process of an app is loading the available modules specified by the config file. This is accomplished by the following line, located in the init method of Zend\Mvc\Application:

    $serviceManager->get('ModuleManager')->loadModules();

When executed, this line retrieves the list of modules from the config file and loads each module. Each module has to contain a file called Module.php with the initialization of the module's components, if needed. This allows the module manager to retrieve the configuration of the module. Let's see the usual content of this file:

    namespace MyModule;

    class Module
    {
        public function getAutoloaderConfig()
        {
            return array(
                'Zend\Loader\ClassMapAutoloader' => array(
                    __DIR__ . '/autoload_classmap.php',
                ),
                'Zend\Loader\StandardAutoloader' => array(
                    'namespaces' => array(
                        __NAMESPACE__ => __DIR__ . '/src/' . __NAMESPACE__,
                    ),
                ),
            );
        }

        public function getConfig()
        {
            return include __DIR__ . '/config/module.config.php';
        }
    }

As you can see, we define a method called getAutoloaderConfig() that provides the autoloader configuration to ModuleManager. The second method, getConfig(), provides the configuration of the module to ModuleManager; for example, this will contain the routes handled by the module.

Request object

This object encapsulates all data related to a request and allows the developer to interact with the different parts of a request. This object is used in the constructor of Zend\Mvc\Application and is also set inside MvcEvent so that it can be retrieved when events are fired.
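To make this concrete, here is a minimal sketch (not from the original article) of a listener that reads the request back out of the MvcEvent. It assumes $application holds the Zend\Mvc\Application instance returned by init():

    use Zend\Mvc\MvcEvent;

    // Attach to the 'route' event and inspect the request stored in MvcEvent
    $application->getEventManager()->attach(
        MvcEvent::EVENT_ROUTE,
        function (MvcEvent $e) {
            $request = $e->getRequest(); // the same request built at bootstrap
            error_log('Incoming URI: ' . $request->getUriString());
        }
    );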
Response object

This object encapsulates all the parts of an HTTP response and provides the developer with a fluent interface to set all the response data. It is used in the same way as the request object: it is instantiated in the constructor and added to MvcEvent so that all the events and classes can interact with it.

The request object

As we said, the request object encapsulates all the data related to a request and provides the developer with a fluent API to access that data. Let's take a look at the details of the request object in order to understand how to use it and what it can offer us:

    use Zend\Http\Request;

    $string = "GET /foo HTTP/1.1\r\n\r\nSome Content";
    $request = Request::fromString($string);

    $request->getMethod();
    $request->getUri();
    $request->getUriString();
    $request->getVersion();
    $request->getContent();

This example comes directly from the documentation and shows how a request object can be created from a string, after which data related to the request can be accessed using the methods provided. So, every time we need to know something related to the request, we will access this object to get the data we need. If we check the code in Zend\Http\PhpEnvironment\Request.php, the first thing we notice is that the data is populated in the constructor using the superglobal arrays. All this data is processed and then populated inside the object so it can be exposed in a standard way through methods.

To manipulate the URI of the request, you can get/set the data with three methods: two getters and one setter. The only difference between the getters is that one returns a plain string and the other returns an HttpUri object:

  • getUri() and getUriString()
  • setUri()

To retrieve the data passed in the request, there are a few specialized methods, depending on the data you want to get:

  • getQuery()
  • getPost()
  • getFiles()
  • getHeader() and getHeaders()

Regarding the request method, the object has a general way to know the method used, returning a string, plus nine specialized functions that test for the specific methods defined by RFC 2616, the standard methods for an HTTP request:

  • getMethod()
  • isOptions()
  • isGet()
  • isHead()
  • isPost()
  • isPut()
  • isDelete()
  • isTrace()
  • isConnect()
  • isPatch()

Finally, two more methods are available in this object that test for special requests such as AJAX calls and requests made by a Flash object:

  • isXmlHttpRequest()
  • isFlashRequest()

Notice that when the data stored in the superglobal arrays is populated into the object, it is converted from an Array to a Parameters object. The Parameters class lives in the Stdlib section of ZF2, a folder of common objects used across the framework. It is an extension of ArrayObject and implements ParametersInterface, which brings ArrayAccess, Countable, Serializable, and Traversable functionality to the parameters stored inside the object. The goal of this object is to provide a common interface for accessing data stored in the superglobal arrays in an object-oriented way.
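As an illustrative sketch of how these methods combine in practice (the controller and the username field are invented for the example, not part of ZF2 itself), a typical action might look like this:

    // Inside a controller extending Zend\Mvc\Controller\AbstractActionController
    public function signupAction()
    {
        $request = $this->getRequest();

        if ($request->isPost()) {
            // getPost() returns a Parameters object; get() accepts a default
            $username = $request->getPost()->get('username', '');
            // ... validate and persist ...
        }

        if ($request->isXmlHttpRequest()) {
            // respond differently to AJAX submissions
        }
    }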


Using Media Files – playing audio files

Packt
24 Oct 2013
13 min read
Playing audio files

JUCE provides a sophisticated set of classes for dealing with audio. This includes sound file reading and writing utilities, interfacing with the native audio hardware, audio data conversion functions, and a cross-platform framework for creating audio plugins for a range of well-known host applications. Covering all of these aspects is beyond the scope of this article, but the examples in this section will outline the principles of playing sound files and communicating with the audio hardware. In addition to showing the audio features of JUCE, in this section we will also create the GUI and autogenerate some other aspects of the code using the Introjucer application.

Creating a GUI to control audio file playback

Create a new GUI application Introjucer project of your choice, selecting the option to create a basic window. In the Introjucer application, select the Config panel, and select Modules in the hierarchy. For this project we need the juce_audio_utils module (which contains a special Component class for configuring the audio device hardware); therefore, turn ON this module.

Even though we created a basic window and a basic component, we are going to create the GUI using the Introjucer application. Navigate to the Files panel and right-click (on the Mac, press control and click) on the Source folder in the hierarchy, and select Add New GUI Component… from the contextual menu. When asked, name the header MediaPlayer.h and click on Save.

In the Files hierarchy, select the MediaPlayer.cpp file. First select the Class panel and change the Class name from NewComponent to MediaPlayer. We will need four buttons for this basic project: a button to open an audio file, a Play button, a Stop button, and an audio device settings button. Select the Subcomponents panel, and add four TextButton components to the editor by right-clicking to access the contextual menu. Space the buttons equally near the top of the editor, and configure each button as outlined in the following table:

    Purpose            member name     name      text               background (normal)
    Open file          openButton      open      Open...            Default
    Play/pause file    playButton      play      Play               Green
    Stop playback      stopButton      stop      Stop               Red
    Configure audio    settingsButton  settings  Audio Settings...  Default

For each button, access the mode pop-up menu for the width setting, and choose Subtracted from width of parent. This will keep the right-hand side of the buttons the same distance from the right-hand side of the window if the window is resized.

There are more customizations to be done in the Introjucer project, but for now, make sure that you have saved the MediaPlayer.h file, the MediaPlayer.cpp file, and the Introjucer project before you open your native IDE project. Make sure that you have saved all of these files in the Introjucer application; otherwise the files may not get correctly updated in the file system when the project is opened in the IDE. In the IDE we need to replace the MainContentComponent class code to place a MediaPlayer object within it.
Change the MainComponent.h file as follows:

    #ifndef __MAINCOMPONENT_H__
    #define __MAINCOMPONENT_H__

    #include "../JuceLibraryCode/JuceHeader.h"
    #include "MediaPlayer.h"

    class MainContentComponent : public Component
    {
    public:
        MainContentComponent();
        void resized();

    private:
        MediaPlayer player;
    };

    #endif

Then, change the MainComponent.cpp file to:

    #include "MainComponent.h"

    MainContentComponent::MainContentComponent()
    {
        addAndMakeVisible (&player);
        setSize (player.getWidth(), player.getHeight());
    }

    void MainContentComponent::resized()
    {
        player.setBounds (0, 0, getWidth(), getHeight());
    }

Finally, make the window resizable in the Main.cpp file, and build and run the project to check that the window appears as expected.

Adding audio file playback support

Quit the application and return to the Introjucer project. Select the MediaPlayer.cpp file in the Files panel hierarchy and select its Class panel. The Parent classes setting already contains public Component. We are going to be listening for state changes from two of our member objects that are ChangeBroadcaster objects. To do this, we need our MediaPlayer class to inherit from the ChangeListener class. Change the Parent classes setting such that it reads:

    public Component, public ChangeListener

Save the MediaPlayer.h file, the MediaPlayer.cpp file, and the Introjucer project again, and open the project in your IDE. Notice in the MediaPlayer.h file that the parent classes have been updated to reflect this change. For convenience, we are going to add some enumerated constants to reflect the current playback state of our MediaPlayer object, and a function to centralize the change of this state (which will, in turn, update the state of various objects, such as the text displayed on the buttons). The ChangeListener class also has one pure virtual function, which we need to add. Add the following code to the [UserMethods] section of MediaPlayer.h:

    //[UserMethods] -- You can add your own custom methods...
    enum TransportState {
        Stopped,
        Starting,
        Playing,
        Pausing,
        Paused,
        Stopping
    };

    void changeState (TransportState newState);
    void changeListenerCallback (ChangeBroadcaster* source);
    //[/UserMethods]

We also need some additional member variables to support our audio playback. Add these to the [UserVariables] section:

    //[UserVariables] -- You can add your own custom variables...
    AudioDeviceManager deviceManager;
    AudioFormatManager formatManager;
    ScopedPointer<AudioFormatReaderSource> readerSource;
    AudioTransportSource transportSource;
    AudioSourcePlayer sourcePlayer;
    TransportState state;
    //[/UserVariables]

The AudioDeviceManager object will manage the interface between the application and the audio hardware. The AudioFormatManager object will assist in creating an object that reads and decodes the audio data from an audio file; this object will be stored in the ScopedPointer<AudioFormatReaderSource> object. The AudioTransportSource object will control the playback of the audio file and perform any sampling rate conversion that may be required (if the sampling rate of the audio file differs from the audio hardware sampling rate). The AudioSourcePlayer object will stream audio from the AudioTransportSource object to the AudioDeviceManager object. The state variable will store one of our enumerated constants to reflect the current playback state of our MediaPlayer object. Now add some code to the MediaPlayer.cpp file.
In the [Constructor] section of the constructor, add the following two lines:

    playButton->setEnabled (false);
    stopButton->setEnabled (false);

This sets the Play and Stop buttons to be disabled (and grayed out) initially. Later, we enable the Play button once a valid file is loaded, and change the state of each button and the text displayed on the buttons, depending on whether the file is currently playing or not. In this [Constructor] section you should also initialize the AudioFormatManager as follows:

    formatManager.registerBasicFormats();

This allows the AudioFormatManager object to detect different audio file formats and create appropriate file reader objects. We also need to connect the AudioSourcePlayer, AudioTransportSource, and AudioDeviceManager objects together, and initialize the AudioDeviceManager object. To do this, add the following lines to the [Constructor] section:

    sourcePlayer.setSource (&transportSource);
    deviceManager.addAudioCallback (&sourcePlayer);
    deviceManager.initialise (0, 2, nullptr, true);

The first line connects the AudioTransportSource object to the AudioSourcePlayer object. The second line connects the AudioSourcePlayer object to the AudioDeviceManager object. The final line initializes the AudioDeviceManager object with:

  • The number of required audio input channels (0 in this case).
  • The number of required audio output channels (2 in this case, for stereo output).
  • An optional "saved state" for the AudioDeviceManager object (nullptr initializes it from scratch).
  • Whether to open the default device if the saved state fails to open. As we are not using a saved state, this argument is irrelevant, but it is useful to set it to true in any case.

The final three lines to add to the [Constructor] section configure our MediaPlayer object as a listener to the AudioDeviceManager and AudioTransportSource objects, and set the current state to Stopped:

    deviceManager.addChangeListener (this);
    transportSource.addChangeListener (this);
    state = Stopped;

In the buttonClicked() function we need to add some code to the various sections. In the [UserButtonCode_openButton] section, add:

    //[UserButtonCode_openButton] -- add your button handler...
    FileChooser chooser ("Select a Wave file to play...",
                         File::nonexistent,
                         "*.wav");
    if (chooser.browseForFileToOpen()) {
        File file (chooser.getResult());
        readerSource = new AudioFormatReaderSource
            (formatManager.createReaderFor (file), true);
        transportSource.setSource (readerSource);
        playButton->setEnabled (true);
    }
    //[/UserButtonCode_openButton]

When the openButton button is clicked, this creates a FileChooser object that allows the user to select a file using the platform's native interface. The types of files that may be selected are limited using the wildcard *.wav, to allow only files with the .wav file extension. If the user actually selects a file (rather than cancelling the operation), the code calls the FileChooser::getResult() function to retrieve a reference to the selected file. This file is then passed to the AudioFormatManager object to create a file reader object, which in turn is passed to create an AudioFormatReaderSource object that will manage and own this file reader object. Finally, the AudioFormatReaderSource object is connected to the AudioTransportSource object and the Play button is enabled. The handlers for the playButton and stopButton objects will make a call to our changeState() function depending on the current transport state.
We will define the changeState() function in a moment, where its purpose should become clear. In the [UserButtonCode_playButton] section, add the following code:

    //[UserButtonCode_playButton] -- add your button handler...
    if ((Stopped == state) || (Paused == state))
        changeState (Starting);
    else if (Playing == state)
        changeState (Pausing);
    //[/UserButtonCode_playButton]

This changes the state to Starting if the current state is either Stopped or Paused, and changes the state to Pausing if the current state is Playing. This gives us a button with combined play and pause functionality. In the [UserButtonCode_stopButton] section, add the following code:

    //[UserButtonCode_stopButton] -- add your button handler...
    if (Paused == state)
        changeState (Stopped);
    else
        changeState (Stopping);
    //[/UserButtonCode_stopButton]

This sets the state to Stopped if the current state is Paused, and sets it to Stopping in other cases. Again, we will add the changeState() function in a moment, where these state changes update various objects. In the [UserButtonCode_settingsButton] section add the following code:

    //[UserButtonCode_settingsButton] -- add your button handler...
    bool showMidiInputOptions = false;
    bool showMidiOutputSelector = false;
    bool showChannelsAsStereoPairs = true;
    bool hideAdvancedOptions = false;

    AudioDeviceSelectorComponent settings (deviceManager,
                                           0, 0, 1, 2,
                                           showMidiInputOptions,
                                           showMidiOutputSelector,
                                           showChannelsAsStereoPairs,
                                           hideAdvancedOptions);
    settings.setSize (500, 400);
    DialogWindow::showModalDialog (String ("Audio Settings"),
                                   &settings,
                                   TopLevelWindow::getTopLevelWindow (0),
                                   Colours::white,
                                   true);
    //[/UserButtonCode_settingsButton]

This presents a useful interface to configure the audio device settings. We need to add the changeListenerCallback() function to respond to changes in the AudioDeviceManager and AudioTransportSource objects. Add the following to the [MiscUserCode] section of the MediaPlayer.cpp file:

    //[MiscUserCode] You can add your own definitions...
    void MediaPlayer::changeListenerCallback (ChangeBroadcaster* src)
    {
        if (&deviceManager == src) {
            AudioDeviceManager::AudioDeviceSetup setup;
            deviceManager.getAudioDeviceSetup (setup);

            if (setup.outputChannels.isZero())
                sourcePlayer.setSource (nullptr);
            else
                sourcePlayer.setSource (&transportSource);
        } else if (&transportSource == src) {
            if (transportSource.isPlaying()) {
                changeState (Playing);
            } else {
                if ((Stopping == state) || (Playing == state))
                    changeState (Stopped);
                else if (Pausing == state)
                    changeState (Paused);
            }
        }
    }
    //[/MiscUserCode]

If our MediaPlayer object receives a message that the AudioDeviceManager object changed in some way, we need to check that this change didn't disable all of the audio output channels, by obtaining the setup information from the device manager. If the number of output channels is zero, we disconnect our AudioSourcePlayer object from the AudioTransportSource object (otherwise our application may crash) by setting the source to nullptr; if the number of output channels becomes nonzero again, we reconnect these objects. If our AudioTransportSource object has changed, this is likely to be a change in its playback state. It is important to note the difference between requesting the transport to start or stop, and this change actually taking place; this is why we created enumerated constants for all the other states (including the transitional states). Again, we issue calls to the changeState() function depending on the current value of our state variable and the state of the AudioTransportSource object.
Finally, add the important changeState() function, which handles all of these state changes, to the [MiscUserCode] section of the MediaPlayer.cpp file:

    void MediaPlayer::changeState (TransportState newState)
    {
        if (state != newState) {
            state = newState;

            switch (state) {
                case Stopped:
                    playButton->setButtonText ("Play");
                    stopButton->setButtonText ("Stop");
                    stopButton->setEnabled (false);
                    transportSource.setPosition (0.0);
                    break;

                case Starting:
                    transportSource.start();
                    break;

                case Playing:
                    playButton->setButtonText ("Pause");
                    stopButton->setButtonText ("Stop");
                    stopButton->setEnabled (true);
                    break;

                case Pausing:
                    transportSource.stop();
                    break;

                case Paused:
                    playButton->setButtonText ("Resume");
                    stopButton->setButtonText ("Return to Zero");
                    break;

                case Stopping:
                    transportSource.stop();
                    break;
            }
        }
    }

After checking that the newState value is different from the current value of the state variable, we update the state variable with the new value. Then, we perform the appropriate actions for this particular point in the cycle of state changes. These are summarized as follows:

  • In the Stopped state, the buttons are configured with the Play and Stop labels, the Stop button is disabled, and the transport is positioned at the start of the audio file.
  • In the Starting state, the AudioTransportSource object is told to start.
  • Once the AudioTransportSource object has actually started playing, the system will be in the Playing state. Here we update the playButton button to display the text Pause, ensure the stopButton button displays the text Stop, and enable the Stop button.
  • If the Pause button is clicked, the state becomes Pausing, and the transport is told to stop. Once the transport has actually stopped, the state changes to Paused, the playButton button is updated to display the text Resume, and the stopButton button is updated to display Return to Zero.
  • If the Stop button is clicked, the state is changed to Stopping, and the transport is told to stop. Once the transport has actually stopped, the state changes to Stopped (as described in the first point).
  • If the Return to Zero button is clicked, the state is changed directly to Stopped (again, as previously described).
  • When the audio file reaches its end, the state is also changed to Stopped.

Build and run the application. You should be able to select a .wav audio file after clicking the Open... button; play, pause, resume, and stop the audio file using the respective buttons; and configure the audio device using the Audio Settings... button. The audio settings window allows you to select the input and output device, the sample rate, and the hardware buffer size. It also provides a Test button that plays a tone through the selected output device.

Summary

This article has covered a few of the techniques for dealing with audio files in JUCE. It has given only an introduction to get you started; there are many other options and alternative approaches that may suit different circumstances. The JUCE documentation will take you through each of these and point you to related classes and functions.

Further resources on this subject:
  • Quick start – media files and XBMC [Article]
  • Audio Playback [Article]
  • Automating the Audio Parameters – How it Works [Article]


Testing with Groovy

Packt
18 Oct 2013
24 min read
This article is completely devoted to testing in Groovy. Testing is probably the most important activity that allows us to produce better software and make our users happier. The Java space has countless tools and frameworks that can be used for testing our software. In this article, we will focus on some of those frameworks and how they can be integrated with Groovy. We will discuss not only unit testing techniques, but also integration and load testing strategies. Starting from the king of all testing frameworks, JUnit, and its seamless Groovy integration, we move on to explore how to test:

  • SOAP and REST web services
  • Code that interacts with databases
  • The web application interface, using Selenium

The article also covers Behavior Driven Development (BDD) with Spock, advanced web service testing using soapUI, and load testing using JMeter.

Unit testing Java code with Groovy

One of the ways developers start looking into the Groovy language and actually using it is by writing unit tests. Testing Java code with Groovy makes the tests less verbose and makes it easier for developers to clearly express the intent of each test method. Thanks to the Java/Groovy interoperability, it is possible to use any available testing or mocking framework from Groovy, but it's just simpler to use the integrated JUnit-based test framework that comes with Groovy. In this recipe, we are going to look at how to test Java code with Groovy.

Getting ready

This recipe requires a new Groovy project that we will use again in other recipes of this article. The project is built using Gradle and contains all the test cases required by each recipe. Let's create a new folder called groovy-test and add a build.gradle file to it. The build file will be very simple:

    apply plugin: 'groovy'
    apply plugin: 'java'

    repositories {
        mavenCentral()
        maven {
            url 'https://oss.sonatype.org' +
                '/content/repositories/snapshots'
        }
    }

    dependencies {
        compile 'org.codehaus.groovy:groovy-all:2.1.6'
        testCompile 'junit:junit:4.+'
    }

The standard source folder structure has to be created for the project. You can use the following commands to achieve this:

    mkdir -p src/main/groovy
    mkdir -p src/test/groovy
    mkdir -p src/main/java

Verify that the project builds without errors by typing the following command in your shell:

    gradle clean build

How to do it...

To show you how to test Java code from Groovy, we need to have some Java code first! So, this recipe's first step is to create a simple Java class called StringUtil, which we will test in Groovy. The class is fairly trivial, and it exposes only one method that concatenates Strings passed in as a List:

    package org.groovy.cookbook;

    import java.util.List;

    public class StringUtil {
        public String concat(List<String> strings, String separator) {
            StringBuilder sb = new StringBuilder();
            String sep = "";
            for (String s : strings) {
                sb.append(sep).append(s);
                sep = separator;
            }
            return sb.toString();
        }
    }

Note that the class has a specific package, so don't forget to create the appropriate folder structure for the package when placing the class in the src/main/java folder. Run the build again to be sure that the code compiles.
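Before writing the formal test case, you can sanity-check the Java class from an ad hoc Groovy script. This is just a sketch; it assumes the compiled classes are on the classpath (the output folder name varies across Gradle versions):

    // Check.groovy - quick interop check, not part of the test suite
    // Run with, for example: groovy -cp build/classes/main Check.groovy
    import org.groovy.cookbook.StringUtil

    def util = new StringUtil()
    assert util.concat(['a', 'b', 'c'], '-') == 'a-b-c'
    println 'StringUtil works from Groovy'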
Now, add a new test case in the src/test/groovy folder:

    package org.groovy.cookbook.javatesting

    import org.groovy.cookbook.StringUtil

    class JavaTest extends GroovyTestCase {

        def stringUtil = new StringUtil()

        void testConcatenation() {
            def result = stringUtil.concat(['Luke', 'John'], '-')
            assertToString('Luke-John', result)
        }

        void testConcatenationWithEmptyList() {
            def result = stringUtil.concat([], ',')
            assertEquals('', result)
        }
    }

Again, pay attention to the package when creating the test case class, and create the corresponding package folder structure. Run the test by executing the following Gradle command from your shell:

    gradle clean build

Gradle should complete successfully with the BUILD SUCCESSFUL message. Now, add a new test method and run the build again:

    void testConcatenationWithNullShouldReturnEmpty() {
        def result = stringUtil.concat(null, ',')
        assertEquals('', result)
    }

This time the build should fail:

    3 tests completed, 1 failed
    :test FAILED
    FAILURE: Build failed with an exception.

Fix the failure by adding a null check to the Java code as the first statement of the concat method:

    if (strings == null) {
        return "";
    }

Run the build again to verify that it is now successful.

How it works...

The test case shown in step 3 requires some comments. The class extends GroovyTestCase, a base test case class that facilitates the writing of unit tests by adding several helper methods to the classes extending it. When a test case extends GroovyTestCase, each test method name must start with test and the return type must be void. It is possible to use the JUnit 4.x @Test annotation instead, but in that case you don't have to extend GroovyTestCase. The standard JUnit assertions (such as assertEquals and assertNull) are directly available in the test case (without explicit imports), and some additional assertion methods are added by the superclass. The test case in step 3 uses assertToString to verify that a String matches the expected result. There are other assertions added by GroovyTestCase, such as assertArrayEquals, to check that two arrays contain the same values, or assertContains, to assert that an array contains a given element.

There's more...

The GroovyTestCase class also offers an elegant way to test for expected exceptions. Let's add the following rule to the concat method:

    if (separator.length() != 1) {
        throw new IllegalArgumentException(
            "The separator must be one char long");
    }

Place the separator length check just after the null check for the List. Add the following new test method to the test case:

    void testVerifyExceptionOnWrongSeparator() {
        shouldFail IllegalArgumentException, {
            stringUtil.concat(['a', 'b'], ',,')
        }
        shouldFail IllegalArgumentException, {
            stringUtil.concat(['c', 'd'], '')
        }
    }

The shouldFail method takes a closure that is executed in the context of a try-catch block, and we can also specify the expected exception. This makes testing for exceptions very elegant and simple to read.

See also

  • http://junit.org/
  • http://groovy.codehaus.org/api/groovy/util/GroovyTestCase.html

Testing SOAP web services

This recipe shows you how to use the JUnit 4 annotations instead of the JUnit 3 API offered by GroovyTestCase.

Getting ready

For this recipe, we are going to use a publicly available web service, the US holiday date web service hosted at http://www.holidaywebservice.com. The WSDL of the service can be found on the /Holidays/US/Dates/USHolidayDates.asmx?WSDL path.
We have already encountered this service in the Issuing a SOAP request and parsing a response recipe in Article 8, Working with Web Services in Groovy. Each service operation simply returns the date of a given US holiday, such as Easter or Christmas.

How to do it...

We start from the Gradle build that we created in the Unit testing Java code with Groovy recipe. Add the following dependency to the dependencies section of the build.gradle file:

    testCompile 'com.github.groovy-wslite:groovy-wslite:0.8.0'

Let's create a new unit test for verifying one of the web service operations. As usual, the test case is created in the src/test/groovy/org/groovy/cookbook folder:

    package org.groovy.cookbook.soap

    import static org.junit.Assert.*

    import org.junit.Test
    import wslite.soap.*

    class SoapTest {
        ...
    }

Add a new test method to the body of the class:

    @Test
    void testMLKDay() {
        def baseUrl = 'http://www.holidaywebservice.com'
        def service = '/Holidays/US/Dates/USHolidayDates.asmx?WSDL'
        def client = new SOAPClient("${baseUrl}${service}")
        def baseNS = 'http://www.27seconds.com/Holidays/US/Dates/'
        def action = "${baseNS}GetMartinLutherKingDay"
        def response = client.send(SOAPAction: action) {
            body {
                GetMartinLutherKingDay('xmlns': baseNS) {
                    year(2013)
                }
            }
        }
        def result = response.GetMartinLutherKingDayResponse
                             .GetMartinLutherKingDayResult
        assertTrue(result.text().startsWith('2013-01-21'))
    }

Run the test by executing the following Gradle command from your shell:

    gradle -Dtest.single=SoapTest clean test

How it works...

The test code creates a new SOAPClient with the URI of the target web service. The request is created using Groovy's MarkupBuilder. The body closure (and if needed, also the header closure) is passed to the MarkupBuilder for the SOAP message creation. The assertion code gets the result from the response, which is automatically parsed by XmlSlurper, allowing easy access to elements of the response such as the header or the body. In the previous test, we simply check that the returned Martin Luther King Day matches the expected one for the year 2013.

There's more...

If you require more control over the content of the SOAP request, the SOAPClient also supports sending the SOAP envelope as a String, as in this example:

    def response = client.send(
        """<?xml version='1.0' encoding='UTF-8'?>
        <soapenv:Envelope
            xmlns:soapenv='http://schemas.xmlsoap.org/soap/envelope/'
            xmlns:dat='http://www.27seconds.com/Holidays/US/Dates/'>
          <soapenv:Header/>
          <soapenv:Body>
            <dat:GetMartinLutherKingDay>
              <dat:year>2013</dat:year>
            </dat:GetMartinLutherKingDay>
          </soapenv:Body>
        </soapenv:Envelope>
        """
    )

Replace the call to the send method in step 3 with the one above and run your test again.

See also

  • https://github.com/jwagenleitner/groovy-wslite
  • http://www.holidaywebservice.com

Testing RESTful services

This recipe is very similar to the previous recipe, Testing SOAP web services, except that it shows how to test a RESTful service using Groovy and JUnit.

Getting ready

For this recipe, we are going to use a test framework aptly named Rest-Assured. This framework is a simple DSL for testing and validating REST services that return either JSON or XML. Before we start to delve into the recipe, we need to start a simple REST service for testing purposes. We are going to use the Ratpack framework. The test REST service exposes three APIs to fetch, add, and delete books from a database, using JSON as the lingua franca. For the sake of brevity, the code for the setup of this recipe is available in the rest-test folder of the companion code for this article. The code contains the Ratpack server, the domain objects, the Gradle build, and the actual test case that we are going to analyze in the next section.

How to do it...

The test case takes care of starting the Ratpack server and executing the REST requests.
Here is the RestTest class, located in the src/test/groovy/org/groovy/cookbook/rest folder:

    package org.groovy.cookbook.rest

    import static com.jayway.restassured.RestAssured.*
    import static com.jayway.restassured.matcher.RestAssuredMatchers.*
    import static org.hamcrest.Matchers.*
    import static org.junit.Assert.*

    import groovy.json.JsonBuilder

    import org.groovy.cookbook.server.*
    import org.junit.AfterClass
    import org.junit.BeforeClass
    import org.junit.Test

    class RestTest {

        static server
        final static HOST = 'http://localhost:5050'

        @BeforeClass
        static void setUp() {
            server = App.init()
            server.startAndWait()
        }

        @AfterClass
        static void tearDown() {
            if (server.isRunning()) {
                server.stop()
            }
        }

        @Test
        void testGetBooks() {
            expect().
                body('author', hasItems('Ian Bogost', 'Nate Silver')).
                when().get("${HOST}/api/books")
        }

        @Test
        void testGetBook() {
            expect().
                body('author', is('Steven Levy')).
                when().get("${HOST}/api/books/5")
        }

        @Test
        void testPostBook() {
            def book = new Book()
            book.author = 'Haruki Murakami'
            book.date = '2012-05-14'
            book.title = 'Kafka on the shore'

            JsonBuilder jb = new JsonBuilder()
            jb.content = book

            given().
                content(jb.toString()).
                expect().body('id', is(6)).
                when().post("${HOST}/api/books/new")
        }

        @Test
        void testDeleteBook() {
            expect().statusCode(200).
                when().delete("${HOST}/api/books/1")
            expect().body('id', not(hasValue(1))).
                when().get("${HOST}/api/books")
        }
    }

Build the code and execute the test from the command line by typing:

    gradle clean test

How it works...

The JUnit test has a @BeforeClass annotated method, executed at the beginning of the unit test, that starts the Ratpack server and the associated REST services. The @AfterClass annotated method, on the contrary, shuts down the server when the test is over. The unit test has four test methods. The first one, testGetBooks, executes a GET request against the server and retrieves all the books. The rather readable DSL offered by the Rest-Assured framework should be easy to follow. The expect method starts building the expectation for the response returned by the get method. The actual assert of the test is implemented via a Hamcrest matcher (hence the static org.hamcrest.Matchers.* import in the test). The test asserts that the body of the response contains two books whose authors are named Ian Bogost and Nate Silver. The get method hits the URL of the embedded Ratpack server, started at the beginning of the test. The testGetBook method is rather similar to the previous one, except that it uses the is matcher to assert the presence of an author in the returned JSON message. The testPostBook method tests that the creation of a new book is successful. First, a new book object is created and transformed into a JSON object using JsonBuilder. Instead of the expect method, we use the given method to prepare the POST request. The given method returns a RequestSpecification, to which we assign the newly created book, and finally we invoke the post method to execute the operation on the server. As the biggest identifier in our book database is 5, the new book should get the id 6, which we assert in the test. The last test method (testDeleteBook) verifies that a book can be deleted. Again we use the expect method to prepare the response, but this time we verify that the HTTP status code returned upon deleting the book with id 1 is 200. The same test also double-checks that the full list of books, when fetched again, does not contain the book with id equal to 1.
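As an aside, it may help to see the JSON payload that JsonBuilder produces for the new book. The following sketch uses a map in place of the Book class from the companion code, but the serialized output is equivalent:

    import groovy.json.JsonBuilder

    def book = [author: 'Haruki Murakami',
                date  : '2012-05-14',
                title : 'Kafka on the shore']

    assert new JsonBuilder(book).toString() ==
        '{"author":"Haruki Murakami","date":"2012-05-14","title":"Kafka on the shore"}'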
See also

  • https://code.google.com/p/rest-assured/
  • https://code.google.com/p/hamcrest/
  • https://github.com/ratpack/ratpack

Writing functional tests for web applications

If you are developing web applications, it is of utmost importance that you thoroughly test them before allowing user access. Web GUI testing can require long hours of very expensive human resources to repeatedly exercise the application against a varied list of input conditions. Selenium, a browser-based testing and automation framework, aims to solve these problems for software developers, and it has become the de facto standard for web interface integration and functional testing. Selenium is not just a single tool but a suite of software components, each catering to different testing needs of an organization. It has four main parts:

  • Selenium Integrated Development Environment (IDE): A Firefox add-on that you can only use for creating relatively simple test cases and test suites.
  • Selenium Remote Control (RC): Also known as Selenium 1, this is the first Selenium tool that allowed users to use programming languages to create complex tests.
  • WebDriver: The newest Selenium implementation, which allows your test scripts to communicate directly with the browser, thereby controlling it from the OS level.
  • Selenium Grid: A tool used with Selenium RC to execute parallel tests across different browsers and operating systems.

In 2008, Selenium RC and WebDriver were merged into a single framework to form Selenium 2; Selenium 1 refers to Selenium RC. This recipe will show you how to write a Selenium 2-based test using HtmlUnitDriver. HtmlUnitDriver is the fastest and most lightweight implementation of WebDriver at the moment. As the name suggests, it is based on HtmlUnit, a relatively old framework for testing web applications. The main disadvantage of using this driver instead of a WebDriver implementation that "drives" a real browser is its JavaScript support: none of the popular browsers use the JavaScript engine used by HtmlUnit (Rhino), so if you test JavaScript using HtmlUnit, the results may diverge considerably from those browsers. Still, WebDriver and HtmlUnit can be used for fast-paced testing against a web interface, leaving more JavaScript-intensive tests to other, longer-running WebDriver implementations that use specific browsers.

Getting ready

Due to the relative complexity of the setup required to demonstrate the steps of this recipe, it is recommended that the reader use the code that comes bundled with this recipe, located in the selenium-test folder of the code directory for this article. The source code, like that of other recipes in this article, is built using Gradle and has a standard structure containing application code and test code. The web application under test is very simple: it is composed of two pages, a welcome page and a single-field test form page. The Ratpack framework is used to run the fictional web application and serve the HTML pages along with some JavaScript and CSS.

How to do it...

The following steps will describe the salient points of Selenium testing with Groovy. Let's open the build.gradle file.
We are interested in the dependencies required to execute the tests:

    testCompile group: 'org.seleniumhq.selenium',
                name: 'selenium-htmlunit-driver', version: '2.32.0'
    testCompile group: 'org.seleniumhq.selenium',
                name: 'selenium-support', version: '2.9.0'

Let's open the test case, SeleniumTest.groovy, located in test/groovy/org/groovy/cookbook/selenium:

    package org.groovy.cookbook.selenium

    import static org.junit.Assert.*

    import org.groovy.cookbook.server.*
    import org.junit.AfterClass
    import org.junit.BeforeClass
    import org.junit.Test
    import org.openqa.selenium.By
    import org.openqa.selenium.WebDriver
    import org.openqa.selenium.WebElement
    import org.openqa.selenium.htmlunit.HtmlUnitDriver
    import org.openqa.selenium.support.ui.ExpectedConditions
    import org.openqa.selenium.support.ui.WebDriverWait

    class SeleniumTest {

        static server
        static final HOST = 'http://localhost:5050'
        static HtmlUnitDriver driver

        @BeforeClass
        static void setUp() {
            server = App.init()
            server.startAndWait()
            driver = new HtmlUnitDriver(true)
        }

        @AfterClass
        static void tearDown() {
            if (server.isRunning()) {
                server.stop()
            }
        }

        @Test
        void testWelcomePage() {
            driver.get(HOST)
            assertEquals('welcome', driver.title)
        }

        @Test
        void testFormPost() {
            driver.get("${HOST}/form")
            assertEquals('test form', driver.title)

            WebElement input = driver.findElement(By.name('username'))
            input.sendKeys('test')
            input.submit()

            WebDriverWait wait = new WebDriverWait(driver, 4)
            wait.until ExpectedConditions.
                presenceOfElementLocated(By.className('hello'))

            assertEquals('oh hello,test', driver.title)
        }
    }

How it works...

The test case initializes the Ratpack server and the HtmlUnit driver, passing true to the HtmlUnitDriver constructor; the boolean parameter indicates whether the driver should support JavaScript. The first test, testWelcomePage, simply verifies that the title of the website's welcome page is as expected. The get method executes an HTTP GET request against the URL specified in the method, the Ratpack server in our test. The second test, testFormPost, involves the DOM manipulation of a form, its submission, and waiting for an answer from the server. The Selenium API should be fairly readable. For a start, the test checks that the page containing the form has the expected title. Then the element named username (a form field) is selected, populated, and finally submitted. This is how the HTML looks for the form field:

    <input type="text" name="username" placeholder="Your username">

The test uses the findElement method to select the input field. The method expects a By object, which is essentially a mechanism to locate elements within a document. Elements can be identified by name, id, text link, tag name, CSS selector, or XPath expression. The form is submitted via AJAX. Here is part of the JavaScript activated by the form submission:

    complete: function(xhr, status) {
        if (status === 'error' || !xhr.responseText) {
            alert('error')
        } else {
            document.title = xhr.responseText
            jQuery(e.target).
                replaceWith('<p class="hello">' + xhr.responseText + '</p>')
        }
    }

After the form submission, the DOM of the page is manipulated to change the title of the form page and replace the form DOM element with a message wrapped in a paragraph element. To verify that the DOM changes have been applied, the test uses the WebDriverWait class to wait until the DOM is actually modified and the element with the class hello appears on the page. The WebDriverWait is instantiated with a four-second timeout.
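By offers several locator strategies besides By.name. The following lines are an illustrative sketch showing alternative ways to locate the same elements from the test form:

    // Alternative locators for the username field (illustrative only)
    driver.findElement(By.cssSelector('input[name="username"]'))
    driver.findElement(By.xpath('//input[@name="username"]'))

    // Once the AJAX response has replaced the form, read the paragraph text
    def message = driver.findElement(By.className('hello')).text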
This recipe only scratches the surface of the Selenium 2 framework's capabilities, but it should get you started on implementing your own integration and functional tests.

See also

http://docs.seleniumhq.org/
http://htmlunit.sourceforge.net/

Writing behavior-driven tests with Groovy

Behavior Driven Development, or simply BDD, is a methodology in which QA, business analysts, and marketing people can get involved in defining the requirements of a process in a common language. It could be considered an extension of Test Driven Development, although it is not a replacement. The initial motivation for BDD stems from the difficulty business people (analysts, domain experts, and so on) have in dealing with "tests", as these seem too technical. The employment of the word "behaviors" in the conversation is a way to engage the whole team. BDD states that software tests should be specified in terms of the desired behavior of the unit. The behavior is expressed in a semi-formal format, borrowed from user story specifications, a format popularized by agile methodologies. For a deeper insight into the BDD rationale, it is highly recommended to read the original paper from Dan North available at http://dannorth.net/introducing-bdd/.

Spock is one of the most widely used frameworks in the Groovy and Java ecosystem that allows the creation of BDD tests in a very intuitive language and facilitates some common tasks such as mocking and extensibility. What makes it stand out from the crowd is its beautiful and highly expressive specification language. Thanks to its JUnit runner, Spock is compatible with most IDEs, build tools, and continuous integration servers. In this recipe, we are going to look at how to implement both a unit test and a web integration test using Spock.

Getting ready

This recipe has a slightly different setup than most of the recipes in this book, as it relies on the source code of an existing web application, freely available on the Internet. This is the Spring Petclinic application provided by SpringSource as a demonstration of the latest Spring framework features and configurations. The web application works as a pet hospital and most of the interactions are typical CRUD operations against certain entities (veterinarians, pets, pet owners). The Petclinic application is available in the groovy-test/spock/web folder of the companion code for this article. All Petclinic's original tests have been converted to Spock. Additionally, we created a simple integration test that uses Spock and Selenium to showcase the possibilities offered by the integration of the two frameworks. As usual, the recipe uses Gradle to build the reference web application and the tests. The Petclinic web application can be started by launching the following Gradle command in your shell from the groovy-test/spock/web folder:

gradle tomcatRunWar

If the application starts without errors, the shell should eventually display the following message:

The Server is running at http://localhost:8080/petclinic

Take some time to familiarize yourself with the Petclinic application by directing your browser to http://localhost:8080/petclinic and browsing around the website:

How to do it...
The following steps will describe the key concepts for writing your own behavior-driven unit and integration tests:

Let's start by taking a look at the dependencies required to implement a Spock-based test suite:

testCompile 'org.spockframework:spock-core:0.7-groovy-2.0'
testCompile group: 'org.seleniumhq.selenium',
            name: 'selenium-java', version: '2.16.1'
testCompile group: 'junit', name: 'junit', version: '4.10'
testCompile 'org.hamcrest:hamcrest-core:1.2'
testRuntime 'cglib:cglib-nodep:2.2'
testRuntime 'org.objenesis:objenesis:1.2'

This is what a BDD unit test for the application's business logic looks like:

package org.springframework.samples.petclinic.model

import spock.lang.*

class OwnerTest extends Specification {

  def "test pet and owner" () {
    given:
    def p = new Pet()
    def o = new Owner()

    when:
    p.setName("fido")

    then:
    o.getPet("fido") == null
    o.getPet("Fido") == null

    when:
    o.addPet(p)

    then:
    o.getPet("fido").equals(p)
    o.getPet("Fido").equals(p)
  }
}

The test is named OwnerTest.groovy, and it is available in the spock/web/src/test folder of the main groovy-test project that comes with this article.

The third test in this recipe mixes Spock and Selenium, the web testing framework already discussed in the Writing functional tests for web applications recipe:

package org.cookbook.groovy.spock

import static java.util.concurrent.TimeUnit.SECONDS

import org.openqa.selenium.By
import org.openqa.selenium.WebElement
import org.openqa.selenium.htmlunit.HtmlUnitDriver

import spock.lang.Shared
import spock.lang.Specification

class HomeSpecification extends Specification {

  static final HOME = 'http://localhost:9966/petclinic'

  @Shared
  def driver = new HtmlUnitDriver(true)

  def setup() {
    driver.manage().timeouts().implicitlyWait 10, SECONDS
  }

  def 'user enters home page'() {
    when:
    driver.get(HOME)

    then:
    driver.title == 'PetClinic :: ' +
      'a Spring Framework demonstration'
  }

  def 'user clicks on menus'() {
    when:
    driver.get(HOME)
    def vets = driver.findElement(By.linkText('Veterinarians'))
    vets.click()

    then:
    driver.currentUrl == 'http://localhost:9966/petclinic/vets.html'
  }
}

The test above is available in the spock/specs/src/test folder of the accompanying project.

How it works...

The first step of this recipe lays out the dependencies required to set up a Spock-based BDD test suite. Spock requires Java 5 or higher, and it's pretty picky with regard to the matching Groovy version to use. In the case of this recipe, as we are using Groovy 2.x, we set the dependency to the 0.7-groovy-2.0 version of the Spock framework. The full build.gradle file is located in the spock/specs folder of the recipe's code. The first test case demonstrated in the recipe is a direct conversion of a JUnit test written by Spring for the Petclinic application. This is the original test written in Java:

public class OwnerTests {

  @Test
  public void testHasPet() {
    Owner owner = new Owner();
    Pet fido = new Pet();
    fido.setName("Fido");
    assertNull(owner.getPet("Fido"));
    assertNull(owner.getPet("fido"));
    owner.addPet(fido);
    assertEquals(fido, owner.getPet("Fido"));
    assertEquals(fido, owner.getPet("fido"));
  }
}

All we need to import in the Spock test is spock.lang.*, which contains the most important types for writing specifications. A Spock test extends from spock.lang.Specification. The name of a specification normally relates to the system or system operation under test.
In the case of the Groovy test at step 2, we reused the original Java test name, but it would be better to rename it to something more meaningful for a specification, such as OwnerPetSpec. The Specification class exposes a number of practical methods for implementing specifications. Additionally, it tells JUnit to run the specification with Sputnik, Spock's own JUnit runner. Thanks to Sputnik, Spock specifications can be executed by all IDEs and build tools. Following the class definition, we have the feature method:

def 'test pet and owner'() {
  ...
}

Feature methods lie at the core of a specification. They contain the description of the features (properties, aspects) that define the system that is under specification test. Feature methods are conventionally named with String literals: it's a good idea to choose meaningful names for feature methods. In the test above, we are testing that: given two entities, pet and owner, the getPet method of the owner instance will return null until the pet is assigned to the owner, and that the getPet method will accept both "fido" and "Fido" in order to verify the ownership. Conceptually, a feature method consists of four phases:

Set up the feature's fixture
Provide an input to the system under specification (stimulus)
Describe the response expected from the system
Clean up

Whereas the first and last phases are optional, the stimulus and response phases are always present and may occur more than once. Each phase is defined by blocks: blocks are defined by a label and extend to the beginning of the next block or the end of the method. In the test at step 2, we can see three types of blocks:

In the given block, data gets initialized
The when and then blocks always occur together. They describe a stimulus and the expected response. Whereas when blocks may contain arbitrary code, then blocks are restricted to conditions, exception checking, interactions, and variable definitions.

The first test case has two when/then pairs. A pet is assigned the name "fido", and the test verifies that calling getPet on an owner object only returns something if the pet is actually "owned" by the owner. The second test is slightly more complex because it employs the Selenium framework to execute a web integration test with a BDD flavor. The test is located in the groovy-test/spock/specs/src/test folder. You can launch it by typing gradle test from the groovy-test/spock/specs folder. The test takes care of starting the web container and running the application under test, Petclinic. The test starts by defining a shared Selenium driver marked with the @Shared annotation, which is visible to all the feature methods. The first feature method simply opens the Petclinic main page and checks that the title matches the specification. The second feature method uses the Selenium API to select a link, click on it, and verify that the link brings the user to the right page. The verification is performed against the currentUrl of the browser, which is expected to match the URL of the link we clicked on.

See also

http://dannorth.net/introducing-bdd/
http://en.wikipedia.org/wiki/Behavior-driven_development
https://code.google.com/p/spock/
https://github.com/spring-projects/spring-petclinic/
Parse Objects and Queries

Packt
18 Oct 2013
6 min read
(For more resources related to this topic, see here.)

In this article, we will learn how to work with Parse objects along with writing queries to set and get data from Parse. Every application has a different and specific Application ID associated with the Client Key, which remains the same for all the applications of the same user. Parse is based on object-oriented principles. All the operations on Parse will be done in the form of objects. Parse saves your data in the form of objects you send, and helps you to fetch the data in the same format again. In this article, you will learn about objects and the operations that can be performed on Parse objects.

Parse objects

All the data in Parse is saved in the form of PFObject. When you fetch any data from Parse by firing a query, the result will be in the form of PFObject. The detailed concept of PFObject is explained in the following section.

PFObject

Data stored on Parse is in the form of objects, and the framework is built around PFObject. PFObject can be defined as a key-value (dictionary format) container of JSON data. The Parse data is schemaless, which means that you don't need to specify ahead of time what keys exist on each PFObject. The Parse backend will take care of storing your data simply as a set of whatever key-value pairs you want. Let's say you are tracking the visited count of the username with a user ID using your application. A single PFObject could contain the following data:

visitedCount:1122, userName:"Jack Samuel", userId:1232333332

Parse accepts only strings as keys. Values can be strings, numbers, Booleans, or even arrays and dictionaries: anything that can be JSON-encoded. The class name of PFObject is used to distinguish different sorts of data. Let's say you name the object visitedCounts. Parse recommends writing your class names as NameYourClassLikeThis and your keys as nameYourKeysLikeThis, just to provide readability to the code. As you have seen in the previous example, we have used visitedCount to represent the visited count key.

Operations on Parse objects

You can perform save, update, and delete operations on Parse objects. Following is the detailed explanation of the operations that can be performed on Parse objects.

Saving objects

To save your User table on the Parse Cloud with additional fields, you need to follow a coding convention similar to that of NSMutableDictionary. After updating the data you have to call the saveInBackground method to save it on the Parse Cloud. Here is the example that explains how to save additional data on the Parse Cloud:

PFObject *userObject = [PFUser currentUser];
[userObject setObject:[NSNumber numberWithInt:1122]
               forKey:@"visitedCount"];
[userObject setObject:@"Jack Samuel" forKey:@"userName"];
[userObject setObject:@"1232333332" forKey:@"userId"];
[userObject saveInBackground];

Just after executing the preceding piece of code, your data is saved on the Parse Cloud. You can check your data in Data Browser of your application on Parse. It should be something similar to the following:

objectId: "xWMyZ4YEGZ", visitedCount: 1122, userName: "Jack Samuel", userId: "1232333332", createdAt:"2011-06-10T18:33:42Z", updatedAt:"2011-06-10T18:33:42Z"

There are two things to note here:

You don't have to configure or set up a new class called User before running your code. Parse will automatically create the class when it first encounters it.
There are also a few fields you don't need to specify; they are provided as a convenience: objectId is a unique identifier for each saved object.
createdAt and updatedAt represent the time that each object was created and last modified in the Parse Cloud. Each of these fields is filled in by Parse, so they don't exist on PFObject until a save operation has completed.

You can provide additional logic after the success or failure of the callback operation using the saveInBackgroundWithBlock or saveInBackgroundWithTarget:selector: methods provided by Parse:

[userObject saveInBackgroundWithBlock:^(BOOL succeeded, NSError *error) {
  if (succeeded)
    NSLog(@"Success");
  else
    NSLog(@"Error %@", error);
}];

Fetching objects

Fetching saved data from the Parse Cloud is even easier than saving it. You can fetch the complete object from its objectId using PFQuery. Methods to fetch data from the cloud are asynchronous. You can implement this either by using block-based or callback-based methods provided by Parse:

PFQuery *query = [PFQuery queryWithClassName:@"GameScore"]; // 1
[query getObjectInBackgroundWithId:@"xWMyZ4YEGZ"
                             block:^(PFObject *gameScore, NSError *error) { // 2
  // Do something with the returned PFObject in the gameScore variable.
  int score = [[gameScore objectForKey:@"score"] intValue];
  NSString *playerName = [gameScore objectForKey:@"playerName"]; // 3
  BOOL cheatMode = [[gameScore objectForKey:@"cheatMode"] boolValue];
  NSLog(@"%@", gameScore);
}];
// The InBackground methods are asynchronous, so the code written
// after this will be executed immediately. Code that depends on
// the query result should be moved inside the completion block above.

Let's analyze each line, as follows:

Line 1: It creates a query object pointing to the class name given in the argument.
Line 2: It calls an asynchronous method on the query object created in line 1 to download the complete object for the objectId provided as an argument. As we are using the block-based method, we can provide code inside the block, which will execute on success or failure.
Line 3: It reads data from the PFObject that we got in response to the query.

Parse provides some common values of all Parse objects as properties:

NSString *objectId = gameScore.objectId;
NSDate *updatedAt = gameScore.updatedAt;
NSDate *createdAt = gameScore.createdAt;

To refresh the current Parse object, type:

[myObject refresh];

This method can be called on any Parse object, which is useful when you want to refresh the data of the object. Let's say you want to re-authenticate a user; you can call the refresh method on the user object to refresh it.

Saving objects offline

Parse provides you with functions to save your data when the user is offline. So when the user is not connected to the Internet, the data will be saved locally in the objects, and as soon as the user is connected to the Internet, the data will be saved automatically on the Parse Cloud. If your application is forcefully closed before establishing the connection, Parse will try again to save the object the next time the application is opened. For such operations, Parse provides you with the saveEventually method, so that you will not lose any data even when the user is not connected to the Internet. All saveEventually calls are executed in the order in which the requests are made.
The following code demonstrates the saveEventually call:

// Create the object.
PFObject *gameScore = [PFObject objectWithClassName:@"GameScore"];
[gameScore setObject:[NSNumber numberWithInt:1337] forKey:@"score"];
[gameScore setObject:@"Sean Plott" forKey:@"playerName"];
[gameScore setObject:[NSNumber numberWithBool:NO] forKey:@"cheatMode"];
[gameScore saveEventually];

Summary

In this article, we explored Parse objects and the way to query the data available on Parse. We started by exploring Parse objects and the ways to save these objects on the cloud. Finally, we learned about the queries which will help us to fetch the saved data on Parse.

Resources for Article:

Further resources on this subject:

New iPad Features in iOS 6 [Article]
Creating a New iOS Social Project [Article]
Installing Alfresco Software Development Kit (SDK) [Article]
Introducing Kafka

Packt
10 Oct 2013
6 min read
(For more resources related to this topic, see here.)

In today's world, real-time information is continuously being generated by applications (business, social, or any other type), and this information needs easy ways to be reliably and quickly routed to multiple types of receivers. Most of the time, applications that produce information and applications that consume this information are well apart and inaccessible to each other. This, at times, leads to the redevelopment of producers or consumers to provide an integration point between them. Therefore, a mechanism is required for the seamless integration of producers and consumers to avoid any kind of application rewriting at either end.

In the present era of big data, the first challenge is to collect the data and the second challenge is to analyze it. As it is a huge amount of data, the analysis typically includes the following and much more:

User behavior data
Application performance tracing
Activity data in the form of logs
Event messages

Message publishing is a mechanism for connecting various applications with the help of messages that are routed between them, for example, by a message broker such as Kafka. Kafka is a solution to the real-time problems of any software solution; that is, it deals with real-time volumes of information and routes it to multiple consumers quickly. Kafka provides seamless integration between the information of producers and consumers without blocking the producers of the information, and without letting the producers know who the final consumers are.

Apache Kafka is an open source, distributed publish-subscribe messaging system, mainly designed with the following characteristics:

Persistent messaging: To derive the real value from big data, any kind of information loss cannot be afforded. Apache Kafka is designed with O(1) disk structures that provide constant-time performance even with very large volumes of stored messages, which are in the order of terabytes.
High throughput: Keeping big data in mind, Kafka is designed to work on commodity hardware and to support millions of messages per second.
Distributed: Apache Kafka explicitly supports message partitioning over Kafka servers and distributing consumption over a cluster of consumer machines while maintaining per-partition ordering semantics.
Multiple client support: The Apache Kafka system supports easy integration of clients from different platforms such as Java, .NET, PHP, Ruby, and Python.
Real time: Messages produced by the producer threads should be immediately visible to consumer threads; this feature is critical to event-based systems such as Complex Event Processing (CEP) systems.

Kafka provides a real-time publish-subscribe solution that overcomes the challenges of consuming real-time data in volumes that may grow by orders of magnitude to become larger than the real data. Kafka also supports parallel data loading into Hadoop systems.
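To make the publish-subscribe idea concrete, here is a minimal, hypothetical producer sketch using the modern Java client (org.apache.kafka.clients.producer), which appeared in Kafka releases after the version this article describes; the broker address and topic name are assumptions:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class MinimalProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Assumed broker address; adjust to your cluster
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        // "activity-events" is a hypothetical topic name
        producer.send(new ProducerRecord<>("activity-events", "user42", "page-visit"));
        producer.close();
    }
}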
The following diagram shows a typical big data aggregation-and-analysis scenario supported by the Apache Kafka messaging system:

At the production side, there are different kinds of producers, such as the following:

Frontend web applications generating application logs
Producer proxies generating web analytics logs
Producer adapters generating transformation logs
Producer services generating invocation trace logs

At the consumption side, there are different kinds of consumers, such as the following:

Offline consumers that are consuming messages and storing them in Hadoop or a traditional data warehouse for offline analysis
Near real-time consumers that are consuming messages and storing them in a NoSQL datastore such as HBase or Cassandra for near real-time analytics
Real-time consumers that filter messages in the in-memory database and trigger alert events for related groups

Need for Kafka

A large amount of data is generated by companies having any form of web-based presence and activity. Data is one of the newer ingredients in these Internet-based systems. This data typically includes user-activity events corresponding to logins, page visits, clicks, social networking activities such as likes, sharing, and comments, and operational and system metrics. This data is typically handled by logging and traditional log aggregation solutions due to the high throughput (millions of messages per second). These traditional solutions are viable for providing logging data to an offline analysis system such as Hadoop. However, the solutions are very limiting for building real-time processing systems. According to the new trends in Internet applications, activity data has become a part of production data and is used to run analytics in real time. These analytics can be:

Search based on relevance
Recommendations based on popularity, co-occurrence, or sentiment analysis
Delivering advertisements to the masses
Internet application security from spam or unauthorized data scraping

Real-time usage of these multiple sets of data collected from production systems has become a challenge because of the volume of data collected and processed. Apache Kafka aims to unify offline and online processing by providing a mechanism for parallel load into Hadoop systems as well as the ability to partition real-time consumption over a cluster of machines. Kafka can be compared with Scribe or Flume as it is useful for processing activity stream data; but from the architecture perspective, it is closer to traditional messaging systems such as ActiveMQ or RabbitMQ.

A few Kafka usages

Some of the companies that are using Apache Kafka in their respective use cases are as follows:

LinkedIn (www.linkedin.com): Apache Kafka is used at LinkedIn for the streaming of activity data and operational metrics. This data powers various products such as LinkedIn news feed and LinkedIn Today in addition to offline analytics systems such as Hadoop.
DataSift (www.datasift.com/): At DataSift, Kafka is used as a collector for monitoring events and as a tracker of users' consumption of data streams in real time.
Twitter (www.twitter.com/): Twitter uses Kafka as a part of Storm, its stream-processing infrastructure.
Foursquare (www.foursquare.com/): Kafka powers online-to-online and online-to-offline messaging at Foursquare. It is used to integrate Foursquare monitoring and production systems with Foursquare's Hadoop-based offline infrastructures.
Square (www.squareup.com/): Square uses Kafka as a bus to move all system events through Square's various datacenters. This includes metrics, logs, custom events, and so on. On the consumer side, it outputs into Splunk, Graphite, or Esper-like real-time alerting.

The source of the above information is https://cwiki.apache.org/confluence/display/KAFKA/Powered+By.

Summary

In this article, we have seen how companies are evolving the mechanism of collecting and processing application-generated data, and that of utilizing the real power of this data by running analytics over it.

Resources for Article:

Further resources on this subject:

Apache Felix Gogo [Article]
Hadoop and HDInsight in a Heartbeat [Article]
Advanced Hadoop MapReduce Administration [Article]
Drawing in 2D

Packt
08 Oct 2013
15 min read
(For more resources related to this topic, see here.)

Drawing basics

The screens of modern computers consist of a number of small squares, called pixels (picture elements). Each pixel can light up in one color. You create pictures on the screen by changing the colors of the pixels. Graphics based on pixels are called raster graphics. Another kind of graphics is vector graphics, which is based on primitives such as lines and circles. Today, most computer screens are arrays of pixels and represent raster graphics. But images based on vector graphics (vector images) are still used in computer graphics. Vector images are drawn on raster screens using the rasterization procedure.

The openFrameworks project can draw on the whole screen (when it is in fullscreen mode) or only in a window (when fullscreen mode is disabled). For simplicity, we will call the area where openFrameworks can draw, the screen. The current width and height of the screen in pixels may be obtained using the ofGetWidth() and ofGetHeight() functions. For addressing pixels, openFrameworks uses the screen's coordinate system. This coordinate system has its origin at the top-left corner of the screen. The measurement unit is a pixel. So, each pixel on a screen with width w and height h pixels can be addressed by its coordinates (x, y), where x and y are integer values lying in the ranges 0 to w-1 and 0 to h-1 respectively.

In this article, we will deal with two-dimensional (2D) graphics, which is a number of methods and algorithms for drawing objects on the screen by specifying the two coordinates (x, y) in pixels. The other kind of graphics is three-dimensional (3D) graphics, which represents objects in 3D space using three coordinates (x, y, z) and performs rendering on the screen using some kind of projection of space (3D) to the screen (2D).

The background color of the screen

The drawing on the screen in openFrameworks should be performed in the testApp::draw() function. Before this function is called by openFrameworks, the entire screen is filled with a fixed color, which is set by the function ofSetBackground( r, g, b ). Here r, g, and b are integer values corresponding to the red, green, and blue components of the background color in the range 0 to 255. Note that each ofSetBackground() call fills the screen with the specified color immediately. You can make a gradient background using the ofBackgroundGradient() function. You can set the background color just once in the testApp::setup() function, but we often call ofSetBackground() at the beginning of the testApp::draw() function so as not to mix up the setup stage and the drawing stage.

Pulsating background example

You can think of ofSetBackground() as an opportunity to make the simplest drawings, as if the screen consisted of one big pixel. Consider an example where the background color slowly changes from black to white and back using a sine wave. This is example 02-2D/01-PulsatingBackground. The project is based on the openFrameworks emptyExample example. Copy the folder with the example and rename it.
Then fill the body of the testApp::draw() function with the following code:

float time = ofGetElapsedTimef();    //Get time in seconds
//Get periodic value in [-1,1], with wavelength equal to 1 second
float value = sin( time * M_TWO_PI );
//Map value from [-1,1] to [0,255]
float v = ofMap( value, -1, 1, 0, 255 );
ofBackground( v, v, v );             //Set background color

This code gets the time elapsed from the start of the project using the ofGetElapsedTimef() function, and uses this value for computing value = sin( time * M_TWO_PI ). Here, M_TWO_PI is an openFrameworks constant equal to 2π; that is, approximately 6.283185. So, time * M_TWO_PI increases by 2π per second. The value 2π is equal to the period of the sine wave function, sin(). So, the argument of sin(...) will go through its wavelength in one second, hence value = sin(...) will run from -1 to 1 and back. Finally, we map the value to v, which changes in the range from 0 to 255, using the ofMap() function, and set the background to a color with red, green, and blue components equal to v.

Run the project; you will see how the screen pulsates by smoothly changing its color from black to white and back. Replace the last line, which sets the background color, with ofBackground( v, 0, 0 ); and the color will pulsate from black to red. Replace the argument of the sin(...) function with the formula time * M_TWO_PI * 2, and the speed of the pulsation increases by two times. We will return to backgrounds in the Drawing with an uncleared background section. Now we will consider how to draw geometric primitives.

Geometric primitives

In this article we will deal with 2D graphics. 2D graphics can be created in the following ways:

Drawing geometric primitives such as lines, circles, and other curves and shapes like triangles and rectangles. This is the most natural way of creating graphics by programming. Generative art and creative coding projects are often based on this graphics method. We will consider this in the rest of the article.
Drawing images lets you add more realism to the graphics.
Setting the contents of the screen directly, pixel-by-pixel, is the most powerful way of generating graphics. But it is harder to use for simple things like drawing curves. So, such a method is normally used together with both of the previous methods. A somewhat fast technique for drawing a screen pixel-by-pixel consists of filling an array with pixel colors, loading it into an image, and drawing the image on the screen. The fastest, but a little harder, technique is using fragment shaders.

openFrameworks has the following functions for drawing primitives:

ofLine( x1, y1, x2, y2 ): This function draws a line segment connecting points (x1, y1) and (x2, y2)
ofRect( x, y, w, h ): This function draws a rectangle with the top-left corner (x, y), width w, and height h
ofTriangle( x1, y1, x2, y2, x3, y3 ): This function draws a triangle with vertices (x1, y1), (x2, y2), and (x3, y3)
ofCircle( x, y, r ): This function draws a circle with center (x, y) and radius r

openFrameworks has no special function for changing the color of a separate pixel. To do so, you can draw the pixel (x, y) as a rectangle with width and height equal to 1 pixel; that is, ofRect( x, y, 1, 1 ). This is a very slow method, but we sometimes use it for educational and debugging purposes.

All the coordinates in these functions are of float type. Although the coordinates (x, y) of a particular pixel on the screen are integer values, openFrameworks uses float numbers for drawing geometric primitives.
This is because a video card can draw objects with float coordinates using modeling, as if the line went between pixels. So the resultant picture of a drawing with float coordinates is smoother than with integer coordinates. Using these functions, it is possible to create simple drawings.

The simplest example of a flower

Let's consider the example that draws a circle, a line, and two triangles, which form the simplest kind of flower. This is example 02-2D/02-FlowerSimplest. This example project is based on the openFrameworks emptyExample project. Fill the body of the testApp::draw() function with the following code:

ofBackground( 255, 255, 255 );    //Set white background
ofSetColor( 0, 0, 0 );            //Set black color
ofCircle( 300, 100, 40 );         //Blossom
ofLine( 300, 100, 300, 400 );     //Stem
ofTriangle( 300, 270, 300, 300, 200, 220 );    //Left leaf
ofTriangle( 300, 270, 300, 300, 400, 220 );    //Right leaf

On running this code, you will see the following picture of the "flower":

Controlling the drawing of primitives

There are a number of functions for controlling the parameters for drawing primitives.

ofSetColor( r, g, b ): This function sets the color of drawing primitives, where r, g, and b are integer values corresponding to the red, green, and blue components of the color in the range 0 to 255. After calling ofSetColor(), all the primitives will be drawn using this color until another ofSetColor() call. We will discuss colors in more detail in the Colors section.
ofFill() and ofNoFill(): These functions enable and disable filling shapes like circles, rectangles, and triangles. After calling ofFill() or ofNoFill(), all the primitives will be drawn filled or unfilled until the next function is called. By default, the shapes are rendered filled with color. Add the line ofNoFill(); before ofCircle(...); in the previous example and you will see all the shapes unfilled, as follows:
ofSetLineWidth( lineWidth ): This function sets the width of the rendered lines to the lineWidth value, which has type float. The default value is 1.0, and calling this function with larger values will result in thick lines. It only affects the drawing of unfilled shapes. The line thickness is changed up to some limit depending on the video card. Normally, this limit is not less than 8.0. Add the line ofSetLineWidth( 7 ); before the line drawing in the previous example, and you will see the flower with a thick vertical line, whereas all the filled shapes will remain unchanged. Note that we use the value 7; this is an odd number, so it gives symmetrical line thickening. Note that this method for obtaining thick lines is simple but not perfect, because adjacent lines are drawn quite crudely. For obtaining smooth thick lines, you should draw these as filled shapes.
ofSetCircleResolution( res ): This function sets the circle resolution, that is, the number of line segments used for drawing circles, to res. The default value is 20, but with such settings only small circles look good. For bigger circles, it is recommended to increase the circle resolution; for example, to 40 or 60. Add the line ofSetCircleResolution( 40 ); before ofCircle(...); in the previous example and you will see a smoother circle. Note that a large res value can decrease the performance of the project, so if you need to draw many small circles, consider using smaller res values.
ofEnableSmoothing() and ofDisableSmoothing(): These functions enable and disable line smoothing. Such settings can be controlled by your video card.
In our example, calling these functions will not have any effect.

Performance considerations

The functions discussed work well for drawings containing not more than a thousand primitives. When you draw more primitives, the project's performance can decrease (it depends on your video card). The reason is that each command such as ofSetColor() or ofLine() is sent for drawing separately, which takes time. So, for drawing 10,000, 100,000, or even 1 million primitives, you should use advanced methods, which draw many primitives at once. In openFrameworks, you can use the ofMesh and ofVboMesh classes for this.

Using ofPoint

Maybe you noted a problem when considering the preceding flower example: drawing primitives by specifying the coordinates of all the vertices is a little cumbersome. There are too many numbers in the code, so it is hard to understand the relation between primitives. To solve this problem, we will learn about using the ofPoint class and then apply it for drawing primitives using control points.

ofPoint is a class that represents the coordinates of a 2D point. It has two main fields: x and y, which are of float type. Actually, ofPoint has a third field, z, so ofPoint can be used for representing 3D points too. If you do not specify z, it is set to zero by default, so in this case you can think of ofPoint as a 2D point indeed.

Operations with points

To represent some point, just declare an object of the ofPoint class.

ofPoint p;

To initialize the point, set its coordinates.

p.x = 100.0;
p.y = 200.0;

Or, alternatively, use the constructor.

p = ofPoint( 100.0, 200.0 );

You can operate with points just as you do with numbers. If you have a point q, the following operations are valid:

p + q or p - q provides points with coordinates (p.x + q.x, p.y + q.y) or (p.x - q.x, p.y - q.y)
p * k or p / k, where k is a float value, provides the points (p.x * k, p.y * k) or (p.x / k, p.y / k)
p += q or p -= q adds or subtracts q from p

There are a number of useful functions for simplifying 2D vector mathematics, as follows:

p.length(): This function returns the length of the vector p, which is equal to sqrt( p.x * p.x + p.y * p.y ).
p.normalize(): This function normalizes the point so it has the unit length p = p / p.length(). Also, this function handles the case correctly when p.length() is equal to zero.

See the full list of functions for ofPoint in the libs/openFrameworks/math/ofVec3f.h file. Actually, ofPoint is just another name for the ofVec3f class, representing 3D vectors and corresponding functions.

All the functions for drawing primitives have overloaded versions working with ofPoint:

ofLine( p1, p2 ) draws a line segment connecting the points p1 and p2
ofRect( p, w, h ) draws a rectangle with top-left corner p, width w, and height h
ofTriangle( p1, p2, p3 ) draws a triangle with the vertices p1, p2, and p3
ofCircle( p, r ) draws a circle with center p and radius r

Using control points example

We are ready to solve the problem stated at the beginning of the Using ofPoint section. To avoid using many numbers in drawing code, we can declare a number of points and use them as vertices for primitive drawing. In computer graphics, such points are called control points. Let's specify the following control points for the flower in our simplest flower example:

Now we implement this in the code. This is example 02-2D/03-FlowerControlPoints.
Add the following declaration of control points in the testApp class declaration in the testApp.h file:

ofPoint stem0, stem1, stem2, stem3, leftLeaf, rightLeaf;

Then set the values for the points in the testApp::update() function as follows:

stem0 = ofPoint( 300, 100 );
stem1 = ofPoint( 300, 270 );
stem2 = ofPoint( 300, 300 );
stem3 = ofPoint( 300, 400 );
leftLeaf = ofPoint( 200, 220 );
rightLeaf = ofPoint( 400, 220 );

Finally, use these control points for drawing the flower in the testApp::draw() function:

ofBackground( 255, 255, 255 );    //Set white background
ofSetColor( 0, 0, 0 );            //Set black color
ofCircle( stem0, 40 );            //Blossom
ofLine( stem0, stem3 );           //Stem
ofTriangle( stem1, stem2, leftLeaf );     //Left leaf
ofTriangle( stem1, stem2, rightLeaf );    //Right leaf

You will observe that when drawing with control points, the code is much easier to understand. Furthermore, there is one more advantage of using control points: we can easily change the control points' positions and hence obtain animated drawings. See the full example code in 02-2D/03-FlowerControlPoints. In addition to the already explained code, it contains code for shifting the leftLeaf and rightLeaf points depending on time. So, when you run the code, you will see the flower with moving leaves.

Coordinate system transformations

Sometimes we need to translate, rotate, and resize drawings. For example, arcade games are based on characters moving across the screen. When we perform drawing using control points, the straightforward solution for translating, rotating, and resizing graphics is to apply the desired transformations to the control points using the corresponding mathematical formulas. This idea works, but sometimes leads to complicated formulas in the code (especially when we need to rotate graphics). A more elegant solution is to use coordinate system transformations. This is a method of temporarily changing the coordinate system during drawing, which lets you translate, rotate, and resize drawings without changing the drawing algorithm.

The current coordinate system is represented in openFrameworks with a matrix. All coordinate system transformations are made by changing this matrix in some way. When openFrameworks draws something using the changed coordinate system, it performs exactly the same number of computations as with the original matrix. This means that you can apply as many coordinate system transformations as you want without any decrease in the performance of the drawing.

Coordinate system transformations are managed in openFrameworks with the following functions:

ofPushMatrix(): This function pushes the current coordinate system onto a matrix stack. This stack is a special container that holds the coordinate system matrices. It gives you the ability to restore the coordinate system when you no longer need the transformations.
ofPopMatrix(): This function pops the last added coordinate system from the matrix stack and uses it as the current coordinate system. You should take care that the number of ofPopMatrix() calls doesn't exceed the number of ofPushMatrix() calls. Though the coordinate system is restored before testApp::draw() is called, we recommend that the number of ofPushMatrix() and ofPopMatrix() calls in your project be exactly the same. It will simplify the project's debugging and further development.
ofTranslate( x, y ) or ofTranslate( p ): This function moves the current coordinate system by the vector (x, y) or, equivalently, by the vector p.
If x and y are equal to zero, the coordinate system remains unchanged.

ofScale( scaleX, scaleY ): This function scales the current coordinate system by scaleX along the x axis and by scaleY along the y axis. If both parameters are equal to 1.0, the coordinate system remains unchanged. The value -1.0 means inverting the corresponding coordinate axis in the opposite direction.
ofRotate( angle ): This function rotates the current coordinate system around its origin by angle degrees clockwise. If the angle value is equal to 0, or k * 360 with k as an integer, the coordinate system remains unchanged.

All transformations can be applied in any sequence; for example, translating, scaling, rotating, translating again, and so on. The typical usage of these functions is the following:

Store the current transformation matrix using ofPushMatrix().
Change the coordinate system by calling any of these functions: ofTranslate(), ofScale(), or ofRotate().
Draw something.
Restore the original transformation matrix using ofPopMatrix().

Step 3 can include steps 1 to 4 again. For example, for moving the origin of the coordinate system to the center of the screen, use the following code in testApp::draw():

ofPushMatrix();
ofTranslate( ofGetWidth() / 2, ofGetHeight() / 2 );
//Draw something
ofPopMatrix();

If you replace the //Draw something comment with ofCircle( 0, 0, 100 );, you will see the circle in the center of the screen. This transformation significantly simplifies coding the drawings that should be located at the center of the screen. Now let's use coordinate system transformations to add triangular petals to the flower.
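One possible sketch of such petals is shown below. The petal shape, the angle step, and the loop count are our own illustrative choices, not the book's exact example, but the blossom center (300, 100) matches the stem0 point used earlier:

//Draw six petals around the blossom center
ofPushMatrix();             //Store the current coordinate system
ofTranslate( 300, 100 );    //Move the origin to the blossom center
for ( int i=0; i<6; i++ ) {
    ofRotate( 60 );         //Each call rotates the system 60 degrees further
    //Petal pointing "up" in the rotated local coordinates
    ofTriangle( 0, -40, -12, -80, 12, -80 );
}
ofPopMatrix();              //Restore the original coordinate system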
Images, colors, and backgrounds

Packt
07 Oct 2013
5 min read
(For more resources related to this topic, see here.)

The following screenshot (Images and colors) shows the final result of this article:

Images and colors

The following is the corresponding drawing.kv code:

64. # File name: drawing.kv (Images and colors)
65. <DrawingSpace>:
66.     canvas:
67.         Ellipse:
68.             pos: 10,10
69.             size: 80,80
70.             source: 'kivy.png'
71.         Rectangle:
72.             pos: 110,10
73.             size: 80,80
74.             source: 'kivy.png'
75.         Color:
76.             rgba: 0,0,1,.75
77.         Line:
78.             points: 10,10,390,10
79.             width: 10
80.             cap: 'square'
81.         Color:
82.             rgba: 0,1,0,1
83.         Rectangle:
84.             pos: 210,10
85.             size: 80,80
86.             source: 'kivy.png'
87.         Rectangle:
88.             pos: 310,10
89.             size: 80,80

This code starts with an Ellipse (line 67) and a Rectangle (line 71). We use the source property, which inserts an image to decorate the polygon. The image kivy.png is 80 x 80 pixels with a white background (without any alpha/transparency channel). The result is shown in the first two columns of the previous screenshot (Images and colors). In line 75, we use the context instruction Color to change the color (with the rgba property: red, green, blue, and alpha) of the coordinate space context. This means that the next VertexInstructions will be drawn with the color changed by rgba. A ContextInstruction changes the current coordinate space context. In the previous screenshot, the blue bar at the bottom (line 77) has a transparent blue (line 76) instead of the default white (1,1,1,1) seen in the previous examples. We set the shape of the line's ends to a square with the cap property (line 80). We change the color again in line 81. After that, we draw two more rectangles, one with the kivy.png image and the other without it. In the previous screenshot (Images and colors) you can see that the white part of the image has become as green as the basic Rectangle on the left. Be very careful with this. The Color instruction acts as a light that is illuminating the kivy.png image. This is why you can still see the Kivy logo on the background instead of it being all covered by the color.

There is another important detail to notice in the previous screenshot. There is a blue line that crosses the first two polygons in front and then crosses behind the last two. This illustrates the fact that the instructions are executed in order, and this might bring some unwanted results. In this example we have full control of the order, but for more complicated scenarios Kivy provides an alternative. We can specify three Canvas instances (canvas.before, canvas, and canvas.after) for each Widget. They are useful to organize the order of execution to guarantee that the background component remains in the background, or to bring some of the elements to the foreground. The following drawing.kv file shows an example of these three sets (lines 92, 98, and 104) of instructions:

90. # File name: drawing.kv (Before and After Canvas)
91. <DrawingSpace>:
92.     canvas.before:
93.         Color:
94.             rgba: 1,0,0,1
95.         Rectangle:
96.             pos: 0,0
97.             size: 100,100
98.     canvas:
99.         Color:
100.            rgba: 0,1,0,1
101.        Rectangle:
102.            pos: 100,0
103.            size: 100,100
104.    canvas.after:
105.        Color:
106.            rgba: 0,0,1,1
107.        Rectangle:
108.            pos: 200,0
109.            size: 100,100
110.    Button:
111.        text: 'A very very very long button'
112.        pos_hint: {'center_x': .5, 'center_y': .5}
113.        size_hint: .9,.1

In each set, a Rectangle of a different color is drawn (lines 95, 101, and 107). The following diagram illustrates the execution order of the canvas.
The number on the top-left margin of each code block indicates the order of execution:

Execution order of the canvas

Please note that we didn't define any canvas, canvas.before, or canvas.after for the Button, but Kivy does. The Button is a Widget, and it displays graphics on the screen. For example, the gray background is just a Rectangle. That means that it has instructions in its internal Canvas instances. The following screenshot shows the result (executed with python drawing.py --size=300x100):

Before and after canvas

The graphics of the Button (the child) are covered up by the graphics of the instructions in canvas.after. But what is executed between canvas.before and canvas? It could be the code of a base class when we are working with inheritance and we want to add instructions in the subclass that should be executed before the base class Canvas instances. A practical example of this will be covered when we apply them in the last section of this article in the comic creator project. The canvas.before will also be useful when we study how to dynamically add instructions to Canvas instances. For now, it is sufficient to understand that there are three sets of instructions (Canvas instances) that provide some flexibility when we are displaying graphics on the screen. We will now explore some more context instructions related to three basic transformations.

Summary

In this article we learned how to add images and colors to shapes and how to position graphics at a front or back level.

Resources for Article:

Further resources on this subject:

Easily Writing SQL Queries with Spring Python [Article]
Python Testing: Installing the Robot Framework [Article]
Advanced Output Formats in Python 2.6 Text Processing [Article]
Understanding Cython

Packt
07 Oct 2013
8 min read
If you were to create an API for Python, you should write it using Cython to create a more type-safe Python API. Or, you could take the C types from Cython to implement the same algorithms in your Python code, and they will be faster because you're specifying the types and you avoid a lot of the type conversion required.

Consider you are implementing a fresh project in C. There are a few issues we always come across when starting fresh; for example, choosing the logging or configuration system we will use or implement. With Cython, we can reuse the Python logging system as well as the ConfigParser standard libraries from Python in our C code to get a head start. If this doesn't prove to be the correct solution, we can chop and change easily. We can even extend and get Python to handle all usage. Since the Python API is very powerful, we might as well make Python do as much as it can to get us off the ground. Another question is, do we want Python to be our "driver" (main entry function), or do we want to handle this from our C code?

Cython cdef

In the next two examples, I will demonstrate how we can reuse the Python logging and Python ConfigParser modules directly from C code. But there are a few formalities to get over first, namely the Python initialization API and the link load model for fully embedded Python applications for using the shared library method. It's very simple to embed Python within a C/C++ application; you will require the following boilerplate:

#include <Python.h>

int main (int argc, char ** argv)
{
  Py_SetProgramName (argv [0]);
  Py_Initialize ();

  /* Do all your stuff in side here...*/

  Py_Finalize ();
  return 0;
}

Make sure you always put the Python.h header at the very beginning of each C file, because Python contains a lot of headers defined for system headers to turn things on and off to make things behave correctly on your system. Later, I will introduce some important concepts about the GIL that you should know and the relevant Python API code you will need to use from time to time. But for now, these few calls will be enough for you to get off the ground.

Linking models

Linking models are extremely important when considering how we can extend or embed things in native applications. There are two main linking models for Cython. The first is a fully embedded Python application, which looks like the following figure:

This demonstrates a fully embedded Python application where the Python runtime is linked into the final binary. This means we already have the Python runtime, whereas before we had to run the Python interpreter to call into our Cython module. There is also a Python shared object module, as shown in the following figure:

We have now fully modularized Python. This would be a more Pythonic approach to Cython, and if your code base is mostly Python, this is the approach you should take if you simply want to have a native module to call into some native code, as this lends your code to be more dynamic and reusable.

The public keyword

Moving on from linking models, we should next look at the public keyword, which allows Cython to generate a C/C++ header file that we can include with the prototypes to call directly into Python code from C. The main caveat if you're going to call Python public declarations directly from C is that, if your link model is fully embedded and linked against libpython.so, you need to use the boilerplate code as shown in the previous section.
And before calling anything with the function, you need to initialize the Python module. For example, if you have a cythonfile.pyx file and compile it with public declarations such as the following:

cdef public void cythonFunction ():
    print "inside cython function!!!"

You will not only get a cythonfile.c file but also cythonfile.h; this declares a function called extern void initcythonfile (void). So, before calling anything to do with the Cython code, use the following:

/* Boiler plate init Python */
Py_SetProgramName (argv [0]);
Py_Initialize ();

/* Init our cython module into Python memory */
initcythonfile ();

cythonFunction ();

/* cleanup python before exit ... */
Py_Finalize ();

Calling initcythonfile can be considered as the following in Python:

import cythonfile

Just like the previous examples, this only affects you if you're generating a fully embedded Python binary.

Logging into Python

A good example of Cython's abilities, in my opinion, is reusing the Python logging module directly from C. So, for example, we want a few macros we can rely on, such as info (…), that can handle VA_ARGS and feel as if we are calling a simple printf method. I think that after this example, you should start to see how things might work when mixing C and Python now that the cdef and public keywords start to bring things to life:

import logging

cdef public void initLogging (char * logfile):
    logging.basicConfig (filename = logfile,
                         level = logging.DEBUG,
                         format = '%(levelname)s %(asctime)s:%(message)s',
                         datefmt = '%m/%d/%Y %I:%M:%S')

cdef public void pyinfo (char * message):
    logging.info (message)

cdef public void pydebug (char * message):
    logging.debug (message)

cdef public void pyerror (char * message):
    logging.error (message)

This could serve as a simple wrapper for calling directly into the Python logger, but we can make this even more awesome in our C code with C99 __VA_ARGS__ and an attribute that is similar to GCC printf. This will make it look and work just like any function that is similar to printf. We can define some headers to wrap our calls to this in C as follows:

#ifndef __MAIN_H__
#define __MAIN_H__

#include <Python.h>
#include <stdio.h>
#include <stdarg.h>

#define printflike __attribute__ ((format (printf, 3, 4)))

extern void printflike cinfo (const char *, unsigned, const char *, ...);
extern void printflike cdebug (const char *, unsigned, const char *, ...);
extern void printflike cerror (const char *, unsigned, const char *, ...);

#define info(...) cinfo (__FILE__, __LINE__, __VA_ARGS__)
#define error(...) cerror (__FILE__, __LINE__, __VA_ARGS__)
#define debug(...) cdebug (__FILE__, __LINE__, __VA_ARGS__)

#include "logger.h" // remember to import our cython publics

#endif //__MAIN_H__
Now we have these macros calling cinfo and the rest, and we can see the file and line number where we call these logging functions:

void cdebug (const char * file, unsigned line, const char * fmt, ...)
{
  char buffer [256];
  va_list args;
  va_start (args, fmt);
  vsprintf (buffer, fmt, args);
  va_end (args);

  char buf [512];
  snprintf (buf, sizeof (buf), "%s-%i -> %s", file, line, buffer);
  pydebug (buf);
}

On calling debug ("debug message"), we see the following output:

Philips-MacBook:cpy-logging redbrain$ ./example log
Philips-MacBook:cpy-logging redbrain$ cat log
INFO 05/06/2013 12:28:24: main.c-62 -> info message
DEBUG 05/06/2013 12:28:24: main.c-63 -> debug message
ERROR 05/06/2013 12:28:24: main.c-64 -> error message

Also, you should note that we can import and do everything we would do in Python here, so don't be afraid to make lists or classes and use these to help out. Remember, if you had a Cython module with public declarations calling into the logging module, this integrates your applications as if they were one. More importantly, you only need all of this boilerplate when you fully embed Python, not when you compile your module to a shared library.

Python ConfigParser

Another useful case is to make Python's ConfigParser accessible in some way from C; ideally, all we really want is to have a function to which we pass the path to a config file to receive a STATUS OK/FAIL message and a filled buffer of the configuration that we need:

from ConfigParser import SafeConfigParser, NoSectionError

cdef extern from "main.h":
    struct config:
        char * path
        int number

cdef config myconfig

Here, we've Cythoned our struct and declared an instance on the stack for easier management:

cdef public config * parseConfig (char * cfg):
    # initialize the global stack variable for our config...
    myconfig.path = NULL
    myconfig.number = 0
    # buffers for assigning python types into C types
    cdef char * path = NULL
    cdef number = 0
    parser = SafeConfigParser ()
    try:
        parser.readfp (open (cfg))
        pynumber = int (parser.get ("example", "number"))
        pypath = parser.get ("example", "path")
    except NoSectionError:
        print "No section named example"
        return NULL
    except IOError:
        print "no such file ", cfg
        return NULL
    else:
        # only assign on success, so a failed parse returns NULL
        myconfig.number = pynumber
        myconfig.path = pypath
        return &myconfig

This is a fairly trivial piece of Cython code that will return NULL on error, or the pointer to the struct containing the configuration:

Philips-MacBook:cpy-configparser redbrain$ ./example sample.cfg
cfg->path = some/path/to/something
cfg->number = 15

As you can see, we easily parsed a config file without using any C code. I always found figuring out how I was going to parse config files in C to be a nightmare. I usually ended up writing my own mini domain-specific language using Flex and Bison as a parser as well as my own middle-end, which is just too involved.
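For completeness, the sample.cfg implied by the preceding output would look something like the following; the section and key names come straight from the parseConfig code, and the values match the printed output:

[example]
number = 15
path = some/path/to/something

And here is a minimal C driver for it, assuming the Cython module above is compiled from a file named config.pyx (so the generated header is config.h and the init function is initconfig; the module name is our assumption, not the book's):

#include <Python.h>
#include <stdio.h>
#include "main.h"   /* declares struct config */
#include "config.h" /* generated by Cython; assumed module name */

int main (int argc, char ** argv)
{
  Py_SetProgramName (argv [0]);
  Py_Initialize ();
  initconfig (); /* init our Cython config module */

  struct config * cfg = parseConfig (argv [1]);
  if (cfg != NULL)
    {
      printf ("cfg->path = %s\n", cfg->path);
      printf ("cfg->number = %i\n", cfg->number);
    }

  Py_Finalize ();
  return 0;
}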

Working with Databases

Packt
01 Oct 2013
19 min read
(For more resources related to this topic, see here.) We have learned how to create snippets, how to work with forms, and Ajax, how to test your code, and how to create a REST API, and all this is awesome! However, we did not learn how to persist data to make it durable. In this article, we will see how to use Mapper, an object-relational mapping ( ORM ) system for relational databases, included with Lift. The idea is to create a map of database tables into a well-organized structure of objects; for example, if you have a table that holds users, you will have a class that represents the user table. Let's say that a user can have several phone numbers. Probably, you will have a table containing a column for the phone numbers and a column containing the ID of the user owning that particular phone number. This means that you will have a class to represent the table that holds phone numbers, and the user class will have an attribute to hold the list of its phone numbers; this is known as a one-to-many relationship . Mapper is a system that helps us build such mappings by providing useful features and abstractions that make working with databases a simple task. For example, Mapper provides several types of fields such as MappedString, MappedInt, and MappedDate which we can use to map the attributes of the class versus the columns in the table being mapped. It also provides useful methods such as findAll that is used to get a list of records or save, to persist the data. There is a lot more that Mapper can do, and we'll see what this is through the course of this article. Configuring a connection to database The first thing we need to learn while working with databases is how to connect the application that we will build with the database itself. In this recipe, we will show you how to configure Lift to connect with the database of our choice. For this recipe, we will use PostgreSQL; however, other databases can also be used: Getting ready Start a new blank project. Edit the build.sbt file to add the lift-mapper and PostgreSQL driver dependencies: "net.liftweb" %% "lift-mapper" % liftVersion % "compile", "org.postgresql" % "postgresql" % "9.2-1003-jdbc4" % "compile" Create a new database. Create a new user. How to do it... Now carry out the following steps to configure a connection with the database: Add the following lines into the default.props file: db.driver=org.postgresql.Driver db.url=jdbc:postgresql:liftbook db.user=<place here the user you've created> db.password=<place here the user password> Add the following import statement in the Boot.scala file: import net.liftweb.mapper._ Create a new method named configureDB() in the Boot.scala file with the following code: def configureDB() { for { driver <- Props.get("db.driver") url <- Props.get("db.url") } yield { val standardVendor = new StandardDBVendor(driver, url, Props.get("db.user"), Props.get("db.password")) LiftRules.unloadHooks.append(standardVendor.closeAllConnections_! _) DB.defineConnectionManager(DefaultConnectionIdentifier, standardVendor) } } Then, invoke configureDB() from inside the boot method. How it works... Lift offers the net.liftweb.mapper.StandardDBVendor class, which we can use to create connections to the database easily. This class takes four arguments: driver, URL, user, and password. 
These are described as follows:

driver: The driver argument is the JDBC driver that we will use, that is, org.postgresql.Driver
URL: The URL argument is the JDBC URL that the driver will use, that is, jdbc:postgresql:liftbook
user and password: The user and password arguments are the values you set when you created the user in the database

After creating the default vendor, we need to bind it to a Java Naming and Directory Interface (JNDI) name that will be used by Lift to manage the connection with the database. To create this bind, we invoked the defineConnectionManager method from the DB object. This method adds the connection identifier and the database vendor into a HashMap, using the connection identifier as the key and the database vendor as the value. The DefaultConnectionIdentifier object provides a default JNDI name that we can use without having to worry about creating our own connection identifier. You can create your own connection identifier if you want; you just need to create an object that extends ConnectionIdentifier with a method called jndiName that returns a string. Finally, we told Lift to close all the connections while shutting down the application by appending a function to the unloadHooks variable. We did this to avoid locking connections while shutting the application down.

There's more...

It is possible to configure Lift to use a JNDI datasource instead of using the JDBC driver directly. In this way, we can allow the container to create a pool of connections and then tell Lift to use this pool. To use a JNDI datasource, we will need to perform the following steps:

1. Create a file called jetty-env.xml in the WEB-INF folder under src/main/webapp/ with the following content:

<!DOCTYPE Configure PUBLIC "-//Jetty//Configure//EN"
    "http://www.eclipse.org/jetty/configure.dtd">
<Configure class="org.eclipse.jetty.webapp.WebAppContext">
    <New id="dsliftbook" class="org.eclipse.jetty.plus.jndi.Resource">
        <Arg>jdbc/dsliftbook</Arg>
        <Arg>
            <New class="org.postgresql.ds.PGSimpleDataSource">
                <Set name="User">place here the user you've created</Set>
                <Set name="Password">place here the user password</Set>
                <Set name="DatabaseName">liftbook</Set>
                <Set name="ServerName">localhost</Set>
                <Set name="PortNumber">5432</Set>
            </New>
        </Arg>
    </New>
</Configure>

2. Add the following line into the build.sbt file:

env in Compile := Some(file("./src/main/webapp/WEB-INF/jetty-env.xml") asFile)

3. Remove all the jetty dependencies and add the following:

"org.eclipse.jetty" % "jetty-webapp" % "8.0.4.v20111024" % "container",
"org.eclipse.jetty" % "jetty-plus" % "8.0.4.v20111024" % "container",

4. Add the following code into the web.xml file:

<resource-ref>
    <res-ref-name>jdbc/dsliftbook</res-ref-name>
    <res-type>javax.sql.DataSource</res-type>
    <res-auth>Container</res-auth>
</resource-ref>

5. Remove the configureDB method.
6. Replace the invocation of the configureDB method with the following line of code:

DefaultConnectionIdentifier.jndiName = "jdbc/dsliftbook"

The creation of the jetty-env.xml file and the change we made in the web.xml file were to create the datasource in Jetty and make it available to Lift. Since the connections will now be managed by Jetty, we don't need to append hooks so that Lift can close the connections, or any other shutdown configuration. All we need to do is tell Lift to get the connections from the JNDI datasource that we have configured in Jetty.
We do this by setting the jndiName variable of the default connection identifier, DefaultConnectionIdentifier as follows: DefaultConnectionIdentifier.jndiName = "jdbc/dsliftbook" The change we've made to the build.sbt file was to make the jetty-env.xml file available to the embedded jetty. So, we can use it when we get the application started by using the container:start command. See also... You can learn more about how to configure a JNDI datasource in Jetty at the following address: http://wiki.eclipse.org/Jetty/Howto/Configure_JNDI_Datasource Mapping a table to a Scala class Now that we know how to connect Lift applications to the database, the next step is to learn how to create mappings between a database table and a Scala object using Mapper. Getting ready We will re-use the project we created in the previous recipe since it already has the connection configured. How to do it... Carry out the following steps to map a table into a Scala object using Mapper: Create a new file named Contact.scala inside the model package under src/main/scala/code/ with the following code: package code.model import net.liftweb.mapper.{MappedString, LongKeyedMetaMapper, LongKeyedMapper, IdPK} class Contact extends LongKeyedMapper[Contact] with IdPK { def getSingleton = Contact object name extends MappedString(this, 100) } object Contact extends Contact with LongKeyedMetaMapper[Contact] { override def dbTableName = "contacts" } Add the following import statement in the Boot.scala file: import code.model.Contact Add the following code into the boot method of the Boot class: Schemifier.schemify( true, Schemifier.infoF _, Contact ) Create a new snippet named Contacts with the following code: package code.snippet import code.model.Contact import scala.xml.Text import net.liftweb.util.BindHelpers._ class Contacts { def prepareContacts_!() { Contact.findAll().map(_.delete_!) val contactsNames = "John" :: "Joe" :: "Lisa" :: Nil contactsNames.foreach(Contact.create.name(_).save()) } def list = { prepareContacts_!() "li *" #> Contact.findAll().map { c => { c.name.get } } } } Edit the index.html file by replacing the content of the div element with main as the value of id using the following code : <div data-list="Contacts.list"> <ul> <li></li> </ul> </div> Start the application. Access your local host (http://localhost:8080), and you will see a page with three names, as shown in the following screenshot: How it works... To work with Mapper, we use a class and an object. The first is the class that is a representation of the table; in this class we define the attributes the object will have based on the columns of the table we are mapping. The second one is the class' companion object where we define some metadata and helper methods. The Contact class extends the LongKeyedMapper[Contact] trait and mixes the trait IdPK. This means that we are defining a class that has an attribute called id (the primary key), and this attribute has the type Long. We are also saying that the type of this Mapper is Contact. To define the attributes that our class will have, we need to create objects that extend "something". This "something" is the type of the column. So, when we say an object name extends MappedString(this, 100), we are telling Lift that our Contact class will have an attribute called name, which will be a string that can be 100 characters long. After defining the basics, we need to tell our Mapper where it can get the metadata about the database table. This is done by defining the getSingleton method. 
The Contact object is the object that Mapper uses to get the database table metadata. By default, Lift will use the name of this object as the table name. Since we don't want our table to be called contact but contacts, we've overridden the method dbTableName. What we have done here is created an object called Contact, which is a representation of a table in the database called contacts that has two columns: id and name. Here, id is the primary key and of the type Long, and name is of the type String, which will be mapped to a varchar datatype. This is all we need to map a database table to a Scala class, and now that we've got the mapping done, we can use it. To demonstrate how to use the mapping, we have created the snippet Contacts. This snippet has two methods. The list method does two things; it first invokes the prepareContacts_!() method, and then invokes the findAll method from the Contact companion object. The prepareContacts_!() method also does two things: it first deletes all the contacts from the database and then creates three contacts: John, Joe, and Lisa. To delete all the contacts from the database, it first fetches all of them using the findAll method, which executes a select * from contacts query and returns a list of Contact objects, one for each existing row in the table. Then, it iterates over the collection using the foreach method and for each contact, it invokes the delete_! method which as you can imagine will execute a delete from contacts where id = contactId query is valid. After deleting all the contacts from the database table, it iterates the contactsNames list, and for each element it invokes the create method of the Contact companion object, which then creates an instance of the Contact class. Once we have the instance of the Contact class, we can set the value of the name attribute by passing the value instance.name(value). You can chain commands while working with Mapper objects because they return themselves. For example, let's say our Contact class has firstName and lastName as attributes. Then, we could do something like this to create and save a new contact: Contact.create.firstName("John").lastName("Doe").save() Finally, we invoke the save() method of the instance, which will make Lift execute an insert query, thus saving the data into the database by creating a new record. Getting back to the list method, we fetch all the contacts again by invoking the findAll method, and then create a li tag for each contact we have fetched from the database. The content of the li tags is the contact name, which we get by calling the get method of the attribute we want the value of. So, when you say contact.name.get, you are telling Lift that you want to get the value of the name attribute from the contact object, which is an instance of the Contact class. There's more... Lift comes with a variety of built-in field types that we can use; MappedString is just one of them. The others include, MappedInt, MappedLong, and MappedBoolean. All these fields come with some built-in features such as the toForm method, which returns the HTML needed to generate a form field, and the validate method that validates the value of the field. By default, Lift will use the name of the object as the name of the table's column; for example, if you define your object as name—as we did—Lift will assume that the column name is name. Lift comes with a built-in list of database-reserved words such as limit, order, user, and so on. 
If the attribute you are mapping is a database-reserved word, Lift will append _c at the end of the column's name when using Schemifier. For example, if you create an attribute called user, Lift will create a database column called user_c. You can change this behavior by overriding the dbColumnName method, as shown in the following code: object name extends MappedString(this, 100) { override def dbColumnName = "some_new_name" } In this case, we are telling Lift that the name of the column is some_new_name. We have seen that you can fetch data from the database using the findAll method. However, this method will fetch every single row from the database. To avoid this, you can filter the result using the By object; for example, let's say you want to get only the contacts with the name Joe. To accomplish this, you would add a By object as the parameter of the findAll method as follows: Contact.findAll(By(Contact.name, "Joe")). There are also other filters such as ByList and NotBy. And if for some reason the features offered by Mapper to build select queries aren't enough, you can use methods such as findAllByPreparedStatement and findAllByInsecureSQL where you can use raw SQL to build the queries. The last thing left to talk about here is how this example would work if we didn't create any table in the database. Well, I hope you remember that we added the following lines of code to the Boot.scala file: Schemifier.schemify( true, Schemifier.infoF _, Contact ) As it turns out, the Schemifier object is a helper object that assures that the database has the correct schema based on a list of MetaMappers. This means that for each MetaMapper we pass to the Schemifier object, the object will compare the MetaMapper with the database schema and act accordingly so that both the MetaMapper and the database schema match. So, in our example, Schemifier created the table for us. If you change MetaMapper by adding attributes, Schemifier will create the proper columns in the table. See also... To learn more about query parameters, you can find more at: https://www.assembla.com/spaces/liftweb/wiki/Mapper#querying_the_database Creating one-to-many relationships In the previous recipe, we learned how to map a Scala class to a database table using Mapper. However, we have mapped a simple class with only one attribute, and of course, we will not face this while dealing with real-world applications. We will probably need to work with more complex data such as the one having one-to-many or many-to-many relationships. An example of this kind of relationship would be an application for a store where you'll have customers and orders, and we need to associate each customer with the orders he or she placed. This means that one customer can have many orders. In this recipe, we will learn how to create such relationships using Mapper. Getting ready We will modify the code from the last section by adding a one-to-many relationship into the Contact class. You can use the same project from before or duplicate it and create a new project. How to do it... 
Carry out the following steps: Create a new class named Phone into the model package under src/main/scala/code/ using the following code: package code.model import net.liftweb.mapper._ class Phone extends LongKeyedMapper[Phone] with IdPK { def getSingleton = Phone object number extends MappedString(this, 20) object contact extends MappedLongForeignKey(this, Contact) } object Phone extends Phone with LongKeyedMetaMapper[Phone] { override def dbTableName = "phones" } Change the Contact class declaration from: class Contact extends LongKeyedMapper[Contact] with IdPK To: class Contact extends LongKeyedMapper[Contact] with IdPK with OneToMany[Long, Contact] Add the attribute that will hold the phone numbers to the Contact class as follows: object phones extends MappedOneToMany(Phone, Phone.contact, OrderBy(Phone.id, Ascending)) with Owned[Phone] with Cascade[Phone] Insert the new class into the Schemifier list of parameters. It should look like the following code: Schemifier.schemify( true, Schemifier.infoF _, Contact, Phone ) In the Contacts snippet, replace the import statement from the following code: import code.model.Contact To: import code.model._ Modify the Contacts.prepareContacts_! method to associate a phone number to each contact. Your method should be similar to the one in the following lines of code: def prepareContacts_!() { Contact.findAll().map(_.delete_!) val contactsNames = "John" :: "Joe" :: "Lisa" :: Nil val phones = "5555-5555" :: "5555-4444" :: "5555-3333" :: "5555-2222" :: "5555-1111" :: Nil contactsNames.map(name => { val contact = Contact.create.name(name) val phone = Phone.create.number(phones((new Random()).nextInt(5))).saveMe() contact.phones.append(phone) contact.save() }) } Replace the list method's code with the following code: def list = { prepareContacts_!() ".contact *" #> Contact.findAll().map { contact => { ".name *" #> contact.name.get & ".phone *" #> contact.phones.map(_.number.get) } } } In the index.html file, replace the ul tag with the one in the following code snippet: <ul> <li class="contact"> <span class="name"></span> <ul> <li class="phone"></li> </ul> </li> </ul> Start the application. Access http://localhost:8080. Now you will see a web page with three names and their phone numbers, as shown in the following screenshot: How it works... In order to create a one-to-many relationship, we have mapped a second table called phone in the same way we've created the Contact class. The difference here is that we have added a new attribute called contact. This attribute extends MappedLongForeignKe y, which is the class that we need to use to tell Lift that this attribute is a foreign key from another table. In this case, we are telling Lift that contact is a foreign key from the contacts table in the phones table. The first parameter is the owner, which is the class that owns the foreign key, and the second parameter is MetaMapper, which maps the parent table. After mapping the phones table and telling Lift that it has a foreign key, we need to tell Lift what the "one" side of the one-to-many relationship will be. To do this, we need to mix the OneToMany trait in the Contact class. This trait will add features to manage the one-to-many relationship in the Contact class. Then, we need to add one attribute to hold the collection of children records; in other words, we need to add an attribute to hold the contact's phone numbers. 
Note that the phones attribute extends MappedOneToMany, and that the first two parameters of the MappedOneToMany constructor are Phone and Phone.contact. This means that we are telling Lift that this attribute will hold records from the phones table and that it should use the contact attribute from the Phone MetaMapper to do the join. The last parameter, which is optional, is the OrderBy object. We've added it to tell Lift that it should order, in an ascending order, the list of phone numbers by their id. We also added a few more traits into the phones attribute to show some nice features. One of the traits is Owned; this trait tells Lift to delete all orphan fields before saving the record. This means that if, for some reason, there are any records in the child table that has no owner in the parent table, Lift will delete them when you invoke the save method, helping you keep your databases consistent and clean. Another trait we've added is the Cascade trait. As its name implies, it tells Lift to delete all the child records while deleting the parent record; for example, if you invoke contact.delete_! for a given contact instance, Lift will delete all phone records associated with this contact instance. As you can see, Lift offers an easy and handy way to deal with one-to-many relationships. See also... You can read more about one-to-many relationships at the following URL: https://www.assembla.com/wiki/show/liftweb/Mapper#onetomany
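As a closing illustration of the Mapper APIs covered in this article, the following is a minimal sketch that combines the querying, traversal, and cascade behaviors of the Contact and Phone mappers defined in these recipes. It assumes the imports and bindings from the recipes and is not part of the original listings:

// Fetch only the contacts named "Joe" (select ... where name = 'Joe')
val joes = Contact.findAll(By(Contact.name, "Joe"))

joes.foreach { joe =>
  // Traverse the "many" side: each contact carries its list of phones
  println(joe.name.get + ": " + joe.phones.map(_.number.get).mkString(", "))

  // Because Cascade is mixed into the phones attribute, deleting the
  // contact also deletes its phone records
  joe.delete_!
}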

Ninject Patterns and Anti-patterns

Packt
30 Sep 2013
7 min read
(For more resources related to this topic, see here.) Dependencies can be injected in a consumer class using different patterns and injecting them into a constructor is just one of them. While there are some patterns that can be followed for injecting dependencies, there are also some patterns that are recommended to be avoided, as they usually lead to undesirable results. In this article, we will examine only those patterns and antipatterns that are somehow relevant to Ninject features. Constructor Injection Constructor Injection is the most common and recommended pattern for injecting dependencies in a class. Generally this pattern should always be used as the primary injection pattern unless we have to use other ones. In this pattern, a list of all class dependencies should be introduced in the constructor. The question is what if the class has more than one constructor. Although Ninject's strategy for selecting constructor is customizable, its default behavior is selecting the constructor with more parameters, provided all of them are resolvable by Ninject. So, although in the following code the second constructor introduces more parameters, Ninject will select the first one if it cannot resolve IService2 and it will even use the default constructor if IService1 is not registered either. But if both dependencies are registered and resolvable, Ninject will select the second constructor because it has more parameters: public class Consumer { private readonly IService1 dependency1; private readonly IService2 dependency2; public Consumer(IService1 dependency1) { this.dependency1 = dependency1; } public Consumer(IService1 dependency1, IService2 dependency2) { this.dependency1 = dependency1; this.dependency2 = dependency2; } } If the preceding class had another constructor with two resolvable parameters, Ninject would throw an ActivationException exception notifying that several constructors had the same priority. There are two approaches to override this default behavior and explicitly select a constructor. The first approach is to indicate the desired constructor in a binding as follows: Bind<Consumer>().ToConstructor(arg => new Consumer(arg.Inject<IService1>())); In the preceding example, we explicitly selected the first constructor. Using the Inject<T> method that the arg argument provides, we requested Ninject to resolve IService1 in order to be injected into the specified constructor. The second method is to indicate the desired constructor using the [Inject] attribute: [Inject] public Consumer(IService1 dependency1) { this.dependency1 = dependency1; } In the preceding example, we applied the Ninject's [Inject] attribute on the first constructor to explicitly specify that we need to initialize the class by injecting dependencies into this constructor; even though the second constructor has more parameters and the default strategy of Ninject would be to select the second one. Note that applying this attribute on more than one constructor will result in the ActivationException. Ninject is highly customizable and it is even possible to substitute the default [Inject] attribute with another one, so that we don't need to add reference to the Ninject library from our consumer classes just because of an attribute: kernel.Settings.Set("InjectAttribute",typeof(MyAttribute)); Initializer methods and properties Apart from constructor injection, Ninject supports the injection of dependencies using initializer methods and property setters. 
We can specify as many methods and properties as required using the [Inject] attribute to inject dependencies. Although the dependencies will be injected to them as soon as the class is constructed, it is not possible to predict in which order they will receive their dependencies. The following code shows how to specify a property for injection: [Inject]public IService Service{ get { return dependency; } set { dependency = value; }} Here is an example of injecting dependencies using an injector method: [Inject]public void Setup(IService dependency){ this.dependency = dependency;} Note that only public members and constructors will be injected and even the internals will be ignored unless Ninject is configured to inject nonpublic members. In Constructor Injection, the constructor is a single point where we can consume all of the dependencies as soon as the class is activated. But when we use initializer methods the dependencies will be injected via multiple points in an unpredictable order, so we cannot decide in which method all of the dependencies will be ready to consume. In order to solve this problem, Ninject offers the IInitializable interface. This interface has an Initialize method which will be called once all of the dependencies have been injected: public class Consumer:IInitializable{ private IService1 dependency1; private IService2 dependency2; [Inject] public IService Service1 { get { return dependency1; } set { dependency1 = value; } } [Inject] public IService Service2 { get { return dependency2; } set { dependency2 = value; } } public void Initialize() { // Consume all dependencies here }} Although Ninject supports injection using properties and methods, Constructor Injection should be the superior approach. First of all, Constructor Injection makes the class more reusable, because a list of all class dependencies are visible, while in the initializer property or method the user of the class should investigate all of the class members or go through the class documentations (if any), to discover its dependencies. Initialization of the class is easier while using Constructor Injection because all the dependencies get injected at the same time and we can easily consume them at the same place where the constructor initializes the class. As we have seen in the preceding examples the only case where the backing fields could be readonly was in the Constructor Injection scenario. As the readonly fields are initializable only in the constructor, we need to make them writable to be able to use initializer methods and properties. This can lead to potential mutation of backing fields. Service Locator Service Locator is a design pattern introduced by Martin Fowler regarding which there have been some controversies. Although it can be useful in particular circumstances, it is generally considered as an antipattern and preferably should be avoided. Ninject can easily be misused as a Service Locator if we are not familiar to this pattern. The following example demonstrates misusing the Ninject kernel as a Service Locator rather than a DI container: public class Consumer{ public void Consume() { var kernel = new StandardKernel(); var depenency1 = kernel.Get<IService1>(); var depenency2 = kernel.Get<IService2>(); ... }} There are two significant downsides with the preceding code. The first one is that although we are using a DI container, we are not at all implementing DI. The class is tied to the Ninject kernel while it is not really a dependency of this class. 
This class and all of its prospective consumers will always have to drag along their unnecessary dependency on the kernel object and the Ninject library. On the other hand, the real dependencies of the class (IService1 and IService2) are invisible to its consumers, and this reduces its reusability. Even if we change the design of this class to the following one, the problems still exist:

public class Consumer
{
    private readonly IKernel kernel;
    public Consumer(IKernel kernel)
    {
        this.kernel = kernel;
    }
    public void Consume()
    {
        var dependency1 = kernel.Get<IService1>();
        var dependency2 = kernel.Get<IService2>();
        ...
    }
}

The preceding class still depends on the Ninject library while it doesn't have to, and its actual dependencies are still invisible to its consumers. It can easily be refactored using the Constructor Injection pattern:

public Consumer(IService1 dependency1, IService2 dependency2)
{
    this.dependency1 = dependency1;
    this.dependency2 = dependency2;
}

Summary

In this article we studied the most common DI patterns and anti-patterns related to Ninject.

RESTful Services JAX-RS 2.0

Packt
26 Sep 2013
16 min read
Representational State Transfer Representational State Transfer (REST) is a style of information application architecture that aligns the distributed applications to the HTTP request and response protocols, in particular matching Hypermedia to the HTTP request methods and Uniform Resource Identifiers (URI). Hypermedia is the term that describes the ability of a system to deliver self-referential content, where related contextual links point to downloadable or stream-able digital media, such as photographs, movies, documents, and other data. Modern systems, especially web applications, demonstrate through display of text that a certain fragment of text is a link to the media. Hypermedia is the logical extension of the term hypertext, which is a text that contains embedded references to other text. These embedded references are called links, and they immediately transfer the user to the other text when they are invoked. Hypermedia is a property of media, including hypertext, to immediately link other media and text. In HTML, the anchor tag <a> accepts a href attribute, the so-called hyperlink parameter. The World Wide Web is built on the HTTP standards, Versions 1.0 and 1.1, which define specific enumerations to retrieve data from a web resource. These operations, sometimes called Web Methods, are GET, POST, PUT, and DELETE. Representational State Transfer also reuses these operations to form semantic interface to a URI. Representational State Transfer, then, is both a style and architecture for building network enabled distributed applications. It is governed by the following constraints: Client/Server: A REST application encourages the architectural robust principle of separation of concerns by dividing the solution into clients and servers. A standalone, therefore, cannot be a RESTful application. This constraint ensures the distributed nature of RESTful applications over the network. Stateless: A REST application exhibits stateless communication. Clients cannot and should not take advantage of any stored context information in the server and therefore the full data of each request must be sent to the server for processing. Cache: A REST application is able to declare which data is cacheable or not cacheable. This constraint allows the architect to set the performance level for the solution, in other words, a trade-off. Caching data to the web resource, allows the business to achieve a sense of latency, scalability, and availability. The counter point to improved performance through caching data is the issue of expiration of the cache at the correct time and sequence, when do we delete stale data? The cache constraint also permits successful implementation providers to develop optimal frameworks and servers. Uniform Interface: A REST application emphasizes and maintains a unique identifier for each component and there are a set of protocols for accessing data. The constraint allows general interaction to any REST component and therefore anyone or anything can manipulate a REST component to access data. The drawback is the Uniform Interface may be suboptimal in ease-of-use and cognitive load to directly provide a data structure and remote procedure function call. Layered Style: A REST application can be composed of functional processing layers in order to simplify complex flows of data between clients and servers. Layered style constraint permits modularization of function with the data and in itself is another sufficient example of separation of concerns. 
The layered style is an approach that benefits load-balancing servers, caching content, and scalability.
Code-on-Demand: A REST application can optionally supply downloadable code on demand for the client to execute. The code could be byte-codes from the JVM, such as a Java Applet or JavaFX WebStart application, or it could be JavaScript code with, say, JSON data. Downloadable code is definitely a clear security risk, which means that the solution architect must assume responsibility for sandboxing Java classes, profiling data, and applying certificate signing in all instances. Therefore, code-on-demand is a disadvantage in a public domain service, and this constraint is only seen in REST applications inside the firewalls of corporations.

In terms of the Java platform, the Java EE standard covers REST applications through the JAX-RS specification, and this article covers Version 2.0.

JAX-RS 2.0 features

For Java EE 7, the JAX-RS 2.0 specification has the following new features:
Client-side API for invoking RESTful server-side remote endpoints
Support for Hypermedia linkage
Tighter integration with the Bean Validation framework
Asynchronous API for both server-side and client-side invocations
Container filters on the server side for processing incoming requests and outbound responses
Client filters on the client side for processing outgoing requests and incoming responses
Reader and writer interceptors to handle specific content types

Architectural style

The REST style is simply a Uniform Resource Identifier and the application of the HTTP request methods, which invoke resources that generate an HTTP response. Fielding himself, though, says that REST does not necessarily require HTTP as the network layer, and the style of architecture can be built on any other network protocol. Let's look at those methods again with a fictional URL (http://fizzbuzz.com/):

POST: A REST style application creates or inserts an entity with the supplied data. The client can assume new data has been inserted into the underlying backend database, and the server returns a new URI to reference the data.
PUT: A REST style application replaces the entity in the database with the supplied data.
GET: A REST style application retrieves the entity associated with the URI; it can be a collection of URIs representing entities, or it can be the actual properties of the entity.
DELETE: A REST style application deletes the entity associated with the URI from the backend database.

The user should note that PUT and DELETE are idempotent operations, meaning they can be repeated endlessly and the result is the same in steady state conditions. The GET operation is a safe operation; it has no side effects on the server-side data.

REST style for collections of entities

Let's take a real example with the URL http://fizzbuzz.com/resources/, which represents the URI of a collection of resources. Resources could be anything, such as books, products, or cast iron widgets.

GET: Retrieves the collection of entities by URI under the link http://fizzbuzz.com/resources, and they may include other data.
POST: Creates a new entity in the collection under the URI http://fizzbuzz.com/resources. The URI is automatically assigned and returned by this service call, and could be something like http://fizzbuzz.com/resources/WKT54321.
PUT: Replaces the entire collection of entities under the URI http://fizzbuzz.com/resources.
DELETE: Deletes the entire collection of entities under the URI http://fizzbuzz.com/resources.
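To make the collection operations concrete, here is a minimal sketch of how such a collection resource could eventually look in JAX-RS. The annotations used are introduced properly later in this article; the class name, media types, and the assigned URI are assumptions for illustration only:

import java.net.URI;
import javax.ws.rs.*;
import javax.ws.rs.core.Response;

@Path("/resources")
public class WidgetCollectionResource {

    @GET
    @Produces("text/plain")
    public String list() {
        // GET /resources -> a representation of the whole collection
        return "WKT54321\nWKT54322\n";
    }

    @POST
    @Consumes("text/plain")
    public Response create(String body) {
        // POST /resources -> insert an entity, then return the newly
        // assigned URI in the Location header with a 201 Created status
        URI uri = URI.create("/resources/WKT54321");
        return Response.created(uri).build();
    }
}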
As a reminder, a URI is a series of characters that identifies a particular resource on the World Wide Web. A URI, then, allows different clients to uniquely identify a resource on the web, or a representation. A URI is a combination of a Uniform Resource Name (URN) and a Uniform Resource Locator (URL). You can think of a URN like a person's name, a way of naming an individual, while a URL is similar to a person's home address, which is the way to go and visit them sometime. In the modern world, non-technical people are accustomed to desktop web browsing as URL. However, the web URL is a special case of a generalized URI.

[Figure: HTML5 RESTful communication between a JAX-RS 2.0 client and server]

REST style for single entities

Assuming we have a URI reference to a single entity, such as http://fizzbuzz.com/resources/WKT54321:

GET: Retrieves the entity with reference to the URI under the link http://fizzbuzz.com/resources/WKT54321.
POST: Creates a new sub-entity under the URI http://fizzbuzz.com/resources/WKT54321. There is a subtle difference here, as this call does something else. It is not often used, except to create Master-Detail records. The URI of the sub-entity is automatically assigned and returned by this service call, and could be something like http://fizzbuzz.com/resources/WKT54321/D1023.
PUT: Replaces the referenced entity's entire collection with the URI http://fizzbuzz.com/resources/WKT54321. If the entity does not exist, then the service creates it.
DELETE: Deletes the entity under the URI reference http://fizzbuzz.com/resources/WKT54321.

Now that we understand the REST style, we can move on to the JAX-RS API proper.

Consider carefully your REST hierarchy of resources
The key to building a REST application is to target the users of the application instead of blindly converting the business domain into exposed middleware. Does the user need the whole detail of every object and responsibility in the application? On the other hand, is the design not exposing enough information for the intended audience to do their work?

Servlet mapping

In order to enable JAX-RS in a Java EE application, the developer must set up the configuration in the web deployment descriptor file. JAX-RS requires a specific servlet mapping to be enabled, which triggers the provider to search for annotated classes. The standard API for JAX-RS lies in the Java package javax.ws.rs and in its subpackages. Interestingly, the Java API for REST style interfaces sits underneath the Java Web Services package javax.ws. For your web applications, you must configure a Java servlet with just the fully qualified name of the class javax.ws.rs.core.Application. The servlet must be mapped to a root URL pattern in order to intercept REST style requests. The following is an example of such a configuration:

<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
             http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"
         version="3.0" metadata-complete="false">
    <display-name>JavaEE Handbook 7 JAX RS Basic</display-name>
    <servlet>
        <servlet-name>javax.ws.rs.core.Application</servlet-name>
        <load-on-startup>1</load-on-startup>
    </servlet>
    <servlet-mapping>
        <servlet-name>javax.ws.rs.core.Application</servlet-name>
        <url-pattern>/rest/*</url-pattern>
    </servlet-mapping>
</web-app>

In the web deployment descriptor we just saw, only the servlet name is defined, javax.ws.rs.core.Application. Do not define the servlet class.
The second step maps the URL pattern to the REST endpoint path. In this example, any URL that matches the simplified Glob pattern /rest/* is mapped. This is known as the application path. A JAX-RS application is normally packaged up as a WAR file and deployed to a Java EE application server or a servlet container that conforms to the Web Profile. The application classes are stored in the /WEB-INF/classes and required libraries are found under /WEB-INF/lib folder. Therefore, JAX-RS applications share the consistency of configurations as servlet applications. The JAX-RS specification does recommend, but does not enforce, that conformant providers follow the general principles of Servlet 3.0 plug-ability and discoverability of REST style endpoints. Discoverability is achieved in the recommendation through class scanning of the WAR file, and it is up to the provider to make this feature available. An application can create a subclass of the javax.ws.rs.core.Application class. The Application class is a concrete class and looks like the following code: public class Application {public Application() { /* ... */ }public java.util.Set<Class<?>> getClasses() {/* ... */ }public java.util.Set<Object> getSingletons() {/* ... */ }public java.util.Map<String,Object> getProperties() {/* ... */ }} Implementing a custom Application subclass is a special case of providing maximum control of RESTful services for your business application. The developer must provide a set collection of classes that represent JAX-RS end points. The engineer must supply a list of Singleton object, if any, and do something useful with the properties. The default implementation of javax.ws.rs.core.Application and the methods getClasses() and getSingletons() return empty sets. The getProperties() method returns an empty map collection. By returning empty sets, the provider assumes that all relevant JAX-RS resource and provider classes that are scanned and found will be added to the JAX-RS application. Majority of the time, I suspect, you, the developer will want to rely on annotations to specify the REST end points and there are Servlet Context listeners and Servlet Filters to configure application wide behavior, for example the startup sequence of a web application. So how can we register our own custom Application subclass? The answer is just subclass the core class with your own class. The following code explains what we just read: package com.fizbuzz.services;@javax.ws.rs.ApplicationPath("rest")public class GreatApp extends javax.ws.rs.core.Application {// Do your custom thing here} In the custom GreatApp class, you can now configure logic for initialization. Note the use of @ApplicationPath annotation to configure the REST style URL. You still have to associate your custom Application subclass into the web deployment descriptor with a XML Servlet name definition. Remember the Application Configuration A very common error for first time JAX-RS developers is to forget that the web deployment descriptor really requires a servlet mapping to a javax.ws.js.core.Application type. Now that we know how to initialize a JAX-RS application, let us dig deeper and look at the defining REST style endpoints. Mapping JAX-RS resources JAX-RS resources are configured through the resource path, which suffixes the application path. Here is the constitution of the URL. http://<hostname>:<port>/<web_context>/<application_path>/<resource_path> The <hostname> is the host name of the server. 
The <port> refers to the port number, which is optional and the default port is 80. The <web_context> is the Servlet context, which is the context for the deployed web application. The <application_path> is the configuration URI pattern as specified in the web deployment descriptor @ApplicationPath or the Servlet configuration of Application type. The <resource_path> is the resource path to REST style resource. The final fragment <resource_path> defines the URL pattern that maps a REST style resource. The resource path is configured by annotation javax.ws.rs.Path. Test-Driven Development with JAX-RS Let us write a unit test to verify the simplest JAX-RS service. It follows a REST style resource around a list of books. There are only four books in this endpoint and the only thing the user/client can do at the start is to access the list of books by author and title. The client invokes the REST style endpoint, otherwise known as a resource with an HTTP GET request. The following is the code for the class RestfulBookService: package je7hb.jaxrs.basic;import javax.annotation.*;import javax.ws.rs.*;import java.util.*;@Path("/books")public class RestfulBookService {private List<Book> products = Arrays.asList(new Book("Sir Arthur Dolan Coyle","Sherlock Holmes and the Hounds of the Baskervilles"),new Book("Dan Brown", "Da Vinci Code"),new Book("Charles Dickens", "Great Expectations"),new Book("Robert Louis Stevenson", "Treasure Island"));@GET@Produces("text/plain")public String getList() {StringBuffer buf = new StringBuffer();for (Book b: products) {buf.append(b.title); buf.append('n'); }return buf.toString();}@PostConstructpublic void acquireResource() { /* ... */ }@PreDestroypublic void releaseResource() { /* ... */ }static class Book {public final String author;public final String title;Book(String author, String title) {this.author = author;this.title = title;}}} The annotation @javax.ws.rs.Path declares the class as a REST style end-point for a resource. The @Path annotation is assigned to the class itself. The path argument defines the relative URL pattern for this resource, namely/books. The method getList() is the interesting one. It is annotated with both @javax.ws.rs.GET and @javax.ws.rs.Produces. The @GET is one of the six annotations that conform to the HTTP web request methods. It indicates that the method is associated with HTTP GET protocol request. The @Produces annotation indicates the MIME content that this resource will generate. In this example, the MIME content is text/plain. The other methods on the resource bring CDI into the picture. In the example, we are injecting post construction and pre destruction methods into the bean. This is the only class that we require for a simple REST style application from the server side. With an invoking of the web resource with a HTTP GET Request like http://localhost:8080/mywebapp/rest/books we should get a plain text output with list of titles like the following: Sherlock Holmes and the Hounds of the BaskervillesDa Vinci CodeGreat ExpectationsTreasure Island So do we test this REST style interface then? We could use Arquillian Framework directly, but this means our tests have to be built specifically in a project and it complicates the build process. Arquillian uses another open source project in the JBoss repository called ShrinkWrap. The framework allows the construction of various types of virtual Java archives in a programmatic fashion. 
Let's look at the unit test class RestfulBookServiceTest in the following code:

package je7hb.jaxrs.basic;
// imports omitted

public class RestfulBookServiceTest {
    @Test
    public void shouldAssembleAndRetrieveBookList() throws Exception {
        WebArchive webArchive = ShrinkWrap
            .create(WebArchive.class, "test.war")
            .addClasses(RestfulBookService.class)
            .setWebXML(new File("src/main/webapp/WEB-INF/web.xml"))
            .addAsWebInfResource(EmptyAsset.INSTANCE, "beans.xml");
        File warFile = new File(webArchive.getName());
        new ZipExporterImpl(webArchive).exportTo(warFile, true);

        SimpleEmbeddedRunner runner = SimpleEmbeddedRunner
            .launchDeployWarFile(warFile, "mywebapp", 8080);
        try {
            URL url = new URL("http://localhost:8080/mywebapp/rest/books");
            InputStream inputStream = url.openStream();
            BufferedReader reader = new BufferedReader(
                new InputStreamReader(inputStream));
            List<String> lines = new ArrayList<>();
            String text = null;
            int count = 0;
            while ((text = reader.readLine()) != null) {
                lines.add(text);
                ++count;
                System.out.printf("**** OUTPUT **** text[%d] = %s\n", count, text);
            }
            assertFalse(lines.isEmpty());
            assertEquals("Sherlock Holmes and the Hounds of the Baskervilles",
                lines.get(0));
            assertEquals("Da Vinci Code", lines.get(1));
            assertEquals("Great Expectations", lines.get(2));
            assertEquals("Treasure Island", lines.get(3));
        } finally {
            runner.stop();
        }
    }
}

In the unit test method shouldAssembleAndRetrieveBookList(), we first assemble a virtual web archive with the explicit name test.war. The WAR file contains the RestfulBookService service, the web deployment descriptor file web.xml, and an empty beans.xml file, which, if you remember, is only there to trigger the CDI container into life for this web application. With the virtual web archive, we export the WAR as a physical file with the utility ZipExporterImpl class from the ShrinkWrap library, which creates the file test.war in the project root folder. Next, we fire up the SimpleEmbeddedRunner utility. It deploys the web archive to an embedded GlassFish container. Essentially, this is the boilerplate needed to deliver a test result. We then get to the heart of the test itself; we construct a URL to invoke the REST style endpoint, which is http://localhost:8080/mywebapp/rest/books. We read the output from the service endpoint with standard Java I/O, line by line, into a list of strings. Once we have the list of lines, we assert each one against the expected output from the REST style service. Because we acquired an expensive resource, such as an embedded GlassFish container, we are careful to release it, which is why we surround the critical code with a try-finally block. When execution comes to the end of the test method, we ensure the embedded GlassFish container is shut down.
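Since the client-side API is one of the headline features of JAX-RS 2.0 listed earlier, it is worth noting that the same endpoint could also be exercised without raw java.net I/O. The following is a minimal sketch using the standard javax.ws.rs.client API against the URL from the test above; the class name is an assumption for illustration:

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.MediaType;

public class BookClient {
    public static void main(String[] args) {
        Client client = ClientBuilder.newClient();
        try {
            // GET http://localhost:8080/mywebapp/rest/books as text/plain
            String books = client
                .target("http://localhost:8080/mywebapp/rest/books")
                .request(MediaType.TEXT_PLAIN)
                .get(String.class);
            System.out.println(books);
        } finally {
            client.close(); // release client-side resources
        }
    }
}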

Google Guice

Packt
24 Sep 2013
13 min read
(For more resources related to this topic, see here.) Structure of flightsweb application Our application has two servlets: IndexServlet, which is a trivial example of forwarding any request, mapped with "/" to index.jsp and FlightServlet, which processes the request using the functionality we developed in the previous section and forwards the response to response.jsp. Here in, we simply declare the FlightEngine and SearchRequest as the class attributes and annotate them with @Inject. FlightSearchFilter is a filter with the only responsibility of validating the request parameters. Index.jsp is the landing page of this application and presents the user with a form to search the flights, and response.jsp is the results page. The flight search form will look as shown in the following screenshot: The search page would subsequently lead to the following result page. In order to build the application, we need to execute the following command in the directory, where the pom.xml file for the project resides: shell> mvn clean package The project for this article being a web application project compiles and assembles a WAR file, flightsweb.war in the target directory. We could deploy this file to TOMCAT. Using GuiceFilter Let's start with a typical web application development scenario. We need to write a JSP to render a form for searching flights and subsequently a response JSP page. The search form would post the request parameters to a processing servlet, which processes the parameters and renders the response. Let's have a look at web.xml. A web.xml file for an application intending to use Guice for dependency injection needs to apply the following filter: <filter> <filter-name>guiceFilter</filter-name> <filter-class>com.google.inject.servlet.GuiceFilter</filter-class> </filter> <filter-mapping> <filter-name>guiceFilter</filter-name> <url-pattern>/*</url-pattern> </filter-mapping> It simply says that all the requests need to pass via the Guice filter. This is essential since we need to use various servlet scopes in our application as well as to dispatch various requests to injectable filters and servlets. Rest any other servlet, filter-related declaration could be done programmatically using Guice-provided APIs. Rolling out our ServletContextListener interface Let's move on to another important piece, a servlet context listener for our application. Why do we need a servlet context listener in the first place? A servlet context listener comes into picture once the application is deployed. This event is the best time when we could bind and inject our dependencies. Guice provides an abstract class, which implements ServletContextListener interface. This class basically takes care of initializing the injector once the application is deployed, and destroying it once it is undeployed. Here, we add to the functionality by providing our own configuration for the injector and leave the initialization and destruction part to super class provided by Guice. 
For accomplishing this, we need to implement the following API in our sub class: protected abstract Injector getInjector(); Let's have a look at how the implementation would look like: package org.packt.web.listener; import com.google.inject.servlet.GuiceServletContextListener; import com.google.inject.servlet.ServletModule; public class FlightServletContextListener extends GuiceServletContextListener { @Override protected Injector getInjector() { return Guice.createInjector( new ServletModule(){ @Override protected void configureServlets() { // overridden method contains various // configurations } }); }} Here, we are returning the instance of injector using the API: public static Injector createInjector(Module... modules) Next, we need to provide a declaration of our custom FlightServletContextListener interface in web.xml: <listener> <listener-class> org.packt.web.listener.FlightServletContextListener </listener-class> </listener> ServletModule – the entry point for configurations In the argument for modules, we provide a reference of an anonymous class, which extends the class ServletModule. A ServletModule class configures the servlets and filters programmatically, which is actually a replacement of declaring the servlet and filters and their corresponding mappings in web.xml. Why do we need to have a replacement of web.xml in the first place? Think of it on different terms. We need to provide a singleton scope to our servlet. We need to use various web scopes like RequestScope, SessionScope, and so on for our classes, such as SearchRequest and SearchResponse. These could not be done simply via declarations in web.xml. A programmatic configuration is far more logical choice for this. Let's have a look at a few configurations we write in our anonymous class extending the ServletModule: new ServletModule(){ @Override protected void configureServlets() { install(new MainModule()); serve("/response").with(FlightServlet.class); serve("/").with(IndexServlet.class); } } A servlet module at first provides a way to install our modules using the install() API. Here, we install MainModule, which is reused from the previous section. Rest all other modules are installed from MainModule. Binding language ServletModule presents APIs, which could be used for configuring filters and servlets. Using these expressive APIs known as EDSL, we could configure the mappings between servlets, filters, and respective URLs. Guice uses an embedded domain specific language or EDSL to help us create bindings simply and readably. We are already using this notation while creating various sort of bindings using the bind() APIs. Readers could refer to the Binder javadoc, where EDSL is discussed with several examples. Mapping servlets Here, following statement maps the /response path in the application to the FlightServlet class's instance: serve("/response").with(FlightServlet.class); serve() returns an instance of ServletKeyBindingBuilder. It provides various APIs, using which we could map a URL to an instance of servlet. This API also has a variable argument, which helps to avoid repetition. For example, in order to map /response as well as /response-quick, both the URLs to FlightServlet.class we could use the following statement: serve("/response","/response-quick").with(FlightServlet.class); serveRegex() is similar to serve(), but accepts the regular expression for URL patterns, rather than concrete URLs. 
For instance, an easier way to map both of the preceding URL patterns would be to use this API:

serveRegex("/response.*").with(FlightServlet.class);

ServletKeyBindingBuilder.with() is an overloaded API. Let's have a look at the various signatures:

void with(Class<? extends HttpServlet> servletKey);
void with(Key<? extends HttpServlet> servletKey);

To use the key-binding option, we will develop a custom annotation, @FlightServe, with which FlightServlet is then annotated. The following binding maps a URL pattern to a key:

serve("/response").with(Key.get(HttpServlet.class, FlightServe.class));

Given this, we need only declare a binding between @FlightServe and FlightServlet, which goes in a module:

bind(HttpServlet.class).annotatedWith(FlightServe.class).to(FlightServlet.class)

What is the advantage of binding indirectly using a key? First of all, it is the only way we can separate an interface from an implementation. It also lets us assign a scope as part of the configuration. A servlet or a filter must be at least singleton scoped; in this case we can assign the scope directly in the configuration, although the option of annotating a filter or a servlet with @Singleton is also available. Guice 3.0 provides the following overloaded versions, which even facilitate providing initialization parameters and hence provide type safety:

void with(HttpServlet servlet);
void with(Class<? extends HttpServlet> servletKey, Map<String, String> initParams);
void with(Key<? extends HttpServlet> servletKey, Map<String, String> initParams);
void with(HttpServlet servlet, Map<String, String> initParams);

An important point to note here is that ServletModule not only provides a programmatic API to configure the servlets, but also a type-safe, idiomatic API to configure the initialization parameters. It is not possible to ensure type safety while declaring the initialization parameters in web.xml.

Mapping filters

Similar to servlets, filters can be mapped to URL patterns or regular expressions. The filter() API is used to map a URL pattern to a Filter. For example:

filter("/response").through(FlightSearchFilter.class);

filter() returns an instance of FilterKeyBindingBuilder, which provides various APIs for mapping a URL to a filter instance. The filter() and filterRegex() APIs take exactly the same kinds of arguments as serve() and serveRegex() do when it comes to handling plain URLs or regular expressions. Let's have a look at the FilterKeyBindingBuilder.through() APIs. Similar to ServletKeyBindingBuilder.with(), it also provides various overloaded versions:

void through(Class<? extends Filter> filterKey);
void through(Key<? extends Filter> filterKey);

A key mapped to a URL, which is then bound via an annotation to an implementation, can be exemplified as:

filter("/response").through(Key.get(Filter.class, FlightFilter.class));

The binding is done through an annotation. Also note that the filter implementation is given singleton scope:

bind(Filter.class).annotatedWith(FlightFilter.class).to(FlightSearchFilter.class).in(Singleton.class);

Guice 3.0 provides the following overloaded versions, which even facilitate providing initialization parameters and provide type safety:

void through(Filter filter);
void through(Class<? extends Filter> filterKey, Map<String, String> initParams);
void through(Key<? extends Filter> filterKey, Map<String, String> initParams);
void through(Filter filter, Map<String, String> initParams);

Again, these type-safe APIs provide a better configuration option than the declaration-driven web.xml.
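Putting these mapping APIs together, a complete configureServlets() for our application might look like the following sketch. The init-parameter map and its "cacheResults" key are hypothetical, added only to illustrate the type-safe initialization API, and ImmutableMap is Guava's factory for immutable maps:

new ServletModule() {
    @Override
    protected void configureServlets() {
        install(new MainModule());
        // Route every /response request through the validating filter;
        // filters and servlets must be at least singleton scoped
        filter("/response").through(FlightSearchFilter.class);
        // Hypothetical init parameters, passed type-safely instead of via web.xml
        Map<String, String> initParams = ImmutableMap.of("cacheResults", "true");
        serve("/response").with(FlightServlet.class, initParams);
        serve("/").with(IndexServlet.class);
    }
}

This anonymous module would sit inside FlightServletContextListener.getInjector(), exactly as shown earlier.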
Web scopes

Apart from dependency injection and configuration facilities via programmatic APIs, Guice provides the feature of scoping various classes depending on their role in the business logic. As we saw while developing the custom scope, a scope comes into the picture during the binding phase. Later, when the scope API is invoked, it brings the provider into the picture; it is actually the provider that is the key to the complete implementation of a scope. The same applies to the web scopes.

@RequestScoped

Whenever we annotate a class with one of the servlet scopes, such as @RequestScoped or @SessionScoped, a call to the scope API of the respective scope is made. This results in the eager preparation of the Provider<T> instances. So, to harness these providers, we need not configure any binding, as these bindings are implicit; we just need to inject the providers where we need instances of the respective types. Let's discuss various examples related to these servlet scopes. Classes scoped to @RequestScoped are instantiated on every request. A typical example would be to instantiate SearchRequest on every request. We need to annotate SearchRequest with @RequestScoped:

@RequestScoped
public class SearchRequest { …… }

Next, in FlightServlet, we need to inject the implicit provider:

@Inject
private Provider<SearchRequest> searchRQProvider;

The instance can be fetched simply by invoking the provider's .get() API:

SearchRequest searchRequest = searchRQProvider.get();

@SessionScoped

The same goes for the @SessionScoped annotation. In FlightSearchFilter, we need an instance of RequestCounter (a class that keeps track of the number of requests in a session). This class needs to be annotated with @SessionScoped and is fetched in the same way as SearchRequest; the provider, however, takes care to instantiate it on every new session creation:

@SessionScoped
public class RequestCounter implements Serializable { …… }

Next, in FlightSearchFilter, we need to inject the implicit provider:

@Inject
private Provider<RequestCounter> sessionCountProvider;

The instance can be fetched simply by invoking the provider's .get() API.

@RequestParameters

Guice also provides a @RequestParameters annotation, which can be used to directly inject the request parameters. Let's have a look at an example in FlightSearchFilter. Here, we inject the provider for the type Map<String,String[]> into a field:

@Inject
@RequestParameters
private Provider<Map<String, String[]>> reqParamMapProvider;

As the provider is bound internally via InternalServletModule (Guice installs this module internally), we can harness the implicit binding and inject the provider.
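As a minimal sketch of how this injected provider might be used inside FlightSearchFilter.doFilter(), assuming a request parameter named from (the parameter name and the validation rule are illustrative, not part of the original filter):

@Override
public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
        throws IOException, ServletException {
    // The provider yields the parameter map of the current request
    Map<String, String[]> params = reqParamMapProvider.get();
    String[] from = params.get("from"); // hypothetical parameter name
    if (from == null || from.length == 0) {
        throw new ServletException("Missing required request parameter 'from'");
    }
    chain.doFilter(request, response);
}

Because the provider is request scoped, calling get() inside doFilter() always returns the parameters of the request currently being processed.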
An important point to note here is that if we try to inject classes annotated with the servlet scopes, such as @RequestScoped or @SessionScoped, outside of the ServletContext, or via a non-HTTP request such as an RPC, Guice throws the following exception:

SEVERE: Exception starting filter guiceFilter
com.google.inject.ProvisionException: Guice provision errors:
Error in custom provider, com.google.inject.OutOfScopeException: Cannot access scoped object. Either we are not currently inside an HTTP Servlet request, or you may have forgotten to apply com.google.inject.servlet.GuiceFilter as a servlet filter for this request.

This happens because the providers associated with these scopes necessarily work with a ServletContext, and hence the dependency injection cannot be completed. We need to make sure that our dependencies annotated with the servlet scopes come into the picture only when we are inside a web scope. Another way in which the scoped dependencies can be made available is by using the injector.getInstance() API. This, however, requires injecting the injector itself, via @Inject, into the dependent class. That is not advisable, as it mixes dependency injection logic with application logic; we need to avoid this approach.

Exercising caution while scoping

Our examples illustrate cases where we inject dependencies with a narrower scope into dependencies with a wider scope. For example, RequestCounter (which is @SessionScoped) is injected into FlightSearchFilter (which is a singleton). This needs to be designed very carefully: we must be absolutely sure that the narrowly scoped dependency is always present, or else it will create a problem. It basically results in scope widening, which means we are effectively widening the scope of session-scoped objects to that of the singleton-scoped object, the servlet. If not managed properly, this can result in memory leaks, as the garbage collector cannot collect references to narrowly scoped objects that are held in widely scoped objects. Sometimes this is unavoidable; in such cases, we need to make sure we follow a few basic rules:

Inject the narrowly scoped dependency using providers. By following this strategy, we never allow the widely scoped class to hold a reference to the narrowly scoped dependency once it goes out of scope.

Do not get the injector instance injected into the widely scoped class in order to fetch the narrowly scoped dependency directly. It can result in hard-to-debug bugs.

Make sure that we use the dependent, narrowly scoped objects inside methods only. This lets them live as stack variables rather than heap variables; once the method execution finishes, the stack variables are garbage collected. Assigning the object fetched from the provider to a class-level reference can affect garbage collection adversely and result in memory leaks. Here, we use these narrowly scoped dependencies in the doGet() and doFilter() APIs only, which makes sure that they are always available.

Contrarily, injecting widely scoped dependencies into narrowly scoped dependencies works well. For example, injecting a @SessionScoped dependency into a @RequestScoped class is much better, since it is guaranteed that the dependency will always be available for injection, and once the narrowly scoped object goes out of scope, it is garbage collected properly.

We retrofitted our flight search application into a web environment. In doing so, we learned about many aspects of the integration facilities Guice offers us:

We learned how to set up the application to use dependency injection using GuiceFilter and a custom ServletContextListener.

We saw how to avoid servlet and filter mappings in web.xml and follow a safer, programmatic approach using ServletModule.

We saw the usage of the various mapping APIs, as well as certain newly introduced features in Guice 3.0.

We discussed how to use the various web scopes.
Getting to know NetBeans

Packt
24 Sep 2013
6 min read
(For more resources related to this topic, see here.) The NetBeans IDE also supports C/C++, PHP, HTML5, CSS3, JavaScript, Groovy, and Ruby. Nowadays NetBeans is widely used as a primary development tool, and it is popular across all industry segments. NetBeans IDE 6.9 introduced the JavaFX Composer, support for JavaFX SDK 1.3, OSGi interoperability, support for the PHP Zend framework and Ruby on Rails 3.0, and more.

"NetBeans" basically stands for "Network JavaBeans". The journey of NetBeans started in 1996 as a final-year student project: a group of seven students started coding it in C++ and later switched to Java 1.0.2 to frame NetBeans. Initially it was named "Xelfi" (meaning Delphi for UNIX). After completing their studies, the students started selling it on the internet as shareware for approximately $20 USD, and it was successful. In 1997, Roman Stanek, a Czech entrepreneur, founded NetBeans as a company and started selling Xelfi under the NetBeans name. Later, during JavaOne in 1998, NetBeans was presented to the Java community, and Sun Microsystems acquired NetBeans. Jaroslav Tulach (alias Yarda Tulach), one of the founders of NetBeans, is now working with Oracle to improve the NetBeans IDE and make it an ever more powerful, performance-driven IDE, especially for different kinds of Java application development. It is his and his team's efforts that have made NetBeans such a powerful IDE for Java application development, letting people create, run, and deploy Java applications so easily. I don't understand how people can say that Java has so many restrictions, that it is not easy to develop Java applications, or that it has no drag-and-drop features for GUI-based Java applications. I personally feel that people who still say such things are living in the stone age. NetBeans gives you everything you require to design, develop, run, and deploy any Java application. You can develop any kind of Java application, as we have already mentioned, and it gives you a visual designer for Java desktop applications, web applications, mobile applications, and so on.

Layout of the NetBeans IDE

Launch the NetBeans IDE and create a DemoProject, that is, a simple Java application, to understand the detailed layout of the IDE. Here is the snapshot of NetBeans IDE 7.2. Use the following steps to get the output shown:

Launch the NetBeans IDE.

Create a simple Java project, as we have studied before, named "DemoProject" with a Main.java; this will help you to understand the layout in detail. It will open the IDE with different child windows, as shown in the following figure.

Layout of NetBeans

As shown in the figure above, the layout of NetBeans is divided into several small windows, as follows:

Project Window: Open projects and their source files are listed here, with a different tab for each source file.

Editor Window: In the Editor Window, the Source tab allows you to write code and business logic, aided by the suggestions provided by the NetBeans IDE, whereas the Design tab allows you to design windows, forms, panels, dialogs, and so on, with the visual designer feature. The Design tab also allows you to customize the menus and components added to the Editor area, and it has a great drag-and-drop feature.

Properties Window: Allows you to display and customize the properties of the components selected in the Design tab.

Palette Window: Lists and displays the windows, dialogs, menu bars, toolbars, components, and so on, available for the respective technology, ready to drag and drop.
Navigator Window: Displays the hierarchical structure of all the components, such as classes, functions or methods, objects, variables, text fields, labels, and so on, covering both visual and non-visual components.

Output Window: Displays the output and other messages from NetBeans, and also displays logs and so on.

Every child window in the NetBeans IDE is designed with a purpose, helping developers reduce the time required for development, their effort, and the chances of errors and bugs. The Properties Window, Palette Window, and Output Window are hidden by default:

To open the Properties Window: go to Window | Properties, or use the shortcut Ctrl + Shift + 7.

To open the Palette Window: go to Window | Palette, or use the shortcut Ctrl + Shift + 8.

To open the Output Window: go to Window | Output | Output, or use the shortcut Ctrl + 4.

UI Builder in the NetBeans IDE

The UI Builder is divided into three main tabs:

Design Tab: The Design tab provides the Editor area, where you get a visual view of the components; here you can add components directly from the Palette, and place, align, and resize them.

Source Tab: The Source tab allows you to see the source code you have written, as well as the code generated through the use of the Design tab.

History Tab: The History tab provides details about the changes made in the file.

UI Builder

Here we have created a blank Java application, created a package, and then added a JFrame form, which provides the Editor area shown in the figure above. The UI Builder in the NetBeans IDE allows you to drag and drop the components you need from the categories specified in the Palette onto the canvas, and design the UI as per your requirements. Before we start designing projects, we will have a look at some design-related concepts and tips that may help you in designing great applications using the NetBeans IDE:

Free-form design: With the Editor area in the Design tab, you can keep adding components freely and place them wherever you like.

Placement of components: Since the Editor area allows you to place components where you need them, it also provides hints; using these hints, you can place components automatically, and the code will be generated automatically.

Visual help: The Design tab helps you visually by providing hints, anchors, insets, offsets, fills, and so on, with which you can design professional UIs very easily.

Shortcuts in NetBeans: Even if you are new to NetBeans, you can design and develop applications like a professional or an expert by using the various shortcuts available in the NetBeans IDE. For the shortcuts, go to Help | Keyboard Shortcuts Card. This opens a PDF file containing the shortcuts applicable to the respective version of the NetBeans IDE, covering coding, debugging, compiling, testing, running, navigating through the source code, and so on.

Summary

This article helped you understand the layout of the NetBeans IDE and how easily we can design projects in it.

Resources for Article:

Further resources on this subject: Getting Started with NetBeans [Article] NetBeans IDE 7: Building an EJB Application [Article] iReport in NetBeans [Article]

The EventBus Class

Packt
23 Sep 2013
14 min read
(For more resources related to this topic, see here.)

When developing software, the idea of objects sharing information or collaborating with each other is a must. The difficulty lies in ensuring that communication between objects is done effectively, but not at the cost of highly coupled components. Objects are considered highly coupled when they know too much detail about other components' responsibilities. When we have high coupling in an application, maintenance becomes very challenging, as any change can have a rippling effect. To help us cope with this software design issue, we have event-based programming. In event-based programming, objects can either subscribe/listen for specific events, or publish events to be consumed. In Java, we have had the idea of event listeners for some time. An event listener is an object whose purpose is to be notified when a specific event occurs. In this article, we are going to discuss the Guava EventBus class and how it facilitates the publishing and subscribing of events. The EventBus class will allow us to achieve the level of collaboration we desire, while doing so in a manner that results in virtually no coupling between objects. It's worth noting that EventBus is a lightweight, in-process publish/subscribe style of communication, and is not meant for inter-process communication. We are going to cover several classes in this article that have an @Beta annotation, indicating that the functionality of the class may be subject to change in future releases of Guava.

EventBus

The EventBus class (found in the com.google.common.eventbus package) is the focal point for establishing the publish/subscribe programming paradigm with Guava. At a very high level, subscribers register with EventBus to be notified of particular events, and publishers send events to EventBus for distribution to interested subscribers. All subscribers are notified serially, so it's important that any code performed in the event-handling method executes quickly.

Creating an EventBus instance

Creating an EventBus instance is accomplished by merely making a call to the EventBus constructor:

EventBus eventBus = new EventBus();

We can also provide an optional string argument to create an identifier (for logging purposes) for the EventBus:

EventBus eventBus = new EventBus(TradeAccountEvent.class.getName());

Subscribing to events

The following three steps are required for an object to receive notifications from EventBus:

The object needs to define a public method that accepts only one argument. The argument should be of the event type for which the object is interested in receiving notifications.

The method exposed for an event notification is annotated with the @Subscribe annotation.

Finally, the object registers with an instance of EventBus, passing itself as the argument to the EventBus.register method.

Posting the events

To post an event, we need to pass an event object to the EventBus.post method. EventBus will call the registered subscribers' handler methods that take arguments assignable from the event object's type. This is a very powerful concept, because interfaces, superclasses, and interfaces implemented by superclasses are all included, meaning we can easily make our event handlers as coarse- or fine-grained as we want, simply by changing the type accepted by the event-handling method.
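Before moving on, here is a minimal end-to-end sketch of the preceding steps, using a hypothetical StringSubscriber class that is not part of this article's trading example:

public class StringSubscriber {
    @Subscribe
    public void onMessage(String message) {
        // One public method, one argument, annotated with @Subscribe
        System.out.println("Received: " + message);
    }
}

EventBus bus = new EventBus();
bus.register(new StringSubscriber()); // the object registers itself with the bus
bus.post("hello");                    // prints "Received: hello"

Any handler whose parameter type is assignable from String would also be notified of this post.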
Defining handler methods

Methods used as event handlers must accept only one argument, the event object. As mentioned before, EventBus calls event-handling methods serially, so it's important that those methods complete quickly. If any extended processing needs to be done as a result of receiving an event, it's best to run that code in a separate thread.

Concurrency

EventBus will not call a handler method from multiple threads unless the method is marked with the @AllowConcurrentEvents annotation. By marking a handler method with @AllowConcurrentEvents, we assert that our handler method is thread-safe. Note that annotating a handler method with @AllowConcurrentEvents by itself does not register the method with EventBus. Now that we have defined how we can use EventBus, let's look at some examples.

Subscribe – An example

Let's assume we have defined the following TradeAccountEvent class:

public class TradeAccountEvent {
    private double amount;
    private Date tradeExecutionTime;
    private TradeType tradeType;
    private TradeAccount tradeAccount;

    public TradeAccountEvent(TradeAccount account, double amount,
                             Date tradeExecutionTime, TradeType tradeType) {
        checkArgument(amount > 0.0, "Trade can't be less than zero");
        this.amount = amount;
        this.tradeExecutionTime = checkNotNull(tradeExecutionTime, "ExecutionTime can't be null");
        this.tradeAccount = checkNotNull(account, "Account can't be null");
        this.tradeType = checkNotNull(tradeType, "TradeType can't be null");
    }
    //Details left out for clarity
}

So whenever a buy or sell transaction is executed, we will create an instance of the TradeAccountEvent class. Now let's assume we need to audit the trades as they are being executed, so we have the following SimpleTradeAuditor class:

public class SimpleTradeAuditor {
    private List<TradeAccountEvent> tradeEvents = Lists.newArrayList();

    public SimpleTradeAuditor(EventBus eventBus) {
        eventBus.register(this);
    }

    @Subscribe
    public void auditTrade(TradeAccountEvent tradeAccountEvent) {
        tradeEvents.add(tradeAccountEvent);
        System.out.println("Received trade " + tradeAccountEvent);
    }
}

Let's quickly walk through what is happening here. In the constructor, we receive an instance of an EventBus class and immediately register the SimpleTradeAuditor class with the EventBus instance to receive notifications on TradeAccountEvents. We have designated auditTrade as the event-handling method by placing the @Subscribe annotation on it. In this case, we simply add the TradeAccountEvent object to a list and print an acknowledgement to the console that we received the trade.

Event Publishing – An example

Now let's take a look at a simple event publishing example.
For executing our trades, we have the following class:

public class SimpleTradeExecutor {
    private EventBus eventBus;

    public SimpleTradeExecutor(EventBus eventBus) {
        this.eventBus = eventBus;
    }

    public void executeTrade(TradeAccount tradeAccount, double amount, TradeType tradeType) {
        TradeAccountEvent tradeAccountEvent = processTrade(tradeAccount, amount, tradeType);
        eventBus.post(tradeAccountEvent);
    }

    private TradeAccountEvent processTrade(TradeAccount tradeAccount, double amount, TradeType tradeType) {
        Date executionTime = new Date();
        String message = String.format("Processed trade for %s of amount %n type %s @ %s",
                tradeAccount, amount, tradeType, executionTime);
        TradeAccountEvent tradeAccountEvent =
                new TradeAccountEvent(tradeAccount, amount, executionTime, tradeType);
        System.out.println(message);
        return tradeAccountEvent;
    }
}

Like the SimpleTradeAuditor class, we take an instance of the EventBus class in the SimpleTradeExecutor constructor. But unlike the SimpleTradeAuditor class, we store a reference to the EventBus for later use. While this may seem obvious to most, it is critical that the same instance be passed to both classes. We will see in future examples how to use multiple EventBus instances, but in this case, we are using a single instance. Our SimpleTradeExecutor class has one public method, executeTrade, which accepts all of the information required to process a trade in our simple example. In this case, we call the processTrade method, passing along the required information, print to the console that our trade was executed, and return a TradeAccountEvent instance. Once the processTrade method completes, we make a call to EventBus.post with the returned TradeAccountEvent instance, which notifies any subscribers of the TradeAccountEvent object. If we take a quick look at both our publishing and subscribing examples, we see that although both classes participate in the sharing of required information, neither has any knowledge of the other.
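To make the wiring concrete, here is a minimal sketch of the two classes collaborating. The TradeAccount construction is an assumption, since that class's definition is not shown in this article:

EventBus eventBus = new EventBus();
SimpleTradeAuditor auditor = new SimpleTradeAuditor(eventBus);    // registers itself as a subscriber
SimpleTradeExecutor executor = new SimpleTradeExecutor(eventBus); // posts to the same bus
TradeAccount account = new TradeAccount(); // assumed no-arg constructor, for illustration only
executor.executeTrade(account, 100.00, TradeType.BUY); // auditor's auditTrade() receives the event

Because both objects are handed the same EventBus instance, the post made by the executor reaches the auditor without either class referencing the other.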
Finer-grained subscribing

We have just seen examples of publishing and subscribing using the EventBus class. If we recall, EventBus publishes events based on the type accepted by the subscribed method. This gives us some flexibility to send events to different subscribers by type. For example, let's say we want to audit the buy and sell trades separately. First, let's create two separate types of events:

public class SellEvent extends TradeAccountEvent {
    public SellEvent(TradeAccount tradeAccount, double amount, Date tradeExecutionTime) {
        super(tradeAccount, amount, tradeExecutionTime, TradeType.SELL);
    }
}

public class BuyEvent extends TradeAccountEvent {
    public BuyEvent(TradeAccount tradeAccount, double amount, Date tradeExecutionTime) {
        super(tradeAccount, amount, tradeExecutionTime, TradeType.BUY);
    }
}

Now we have created two discrete event classes, SellEvent and BuyEvent, both of which extend the TradeAccountEvent class. To enable separate auditing, we will first create a class for auditing SellEvent instances:

public class TradeSellAuditor {
    private List<SellEvent> sellEvents = Lists.newArrayList();

    public TradeSellAuditor(EventBus eventBus) {
        eventBus.register(this);
    }

    @Subscribe
    public void auditSell(SellEvent sellEvent) {
        sellEvents.add(sellEvent);
        System.out.println("Received SellEvent " + sellEvent);
    }

    public List<SellEvent> getSellEvents() {
        return sellEvents;
    }
}

Here we see functionality that is very similar to the SimpleTradeAuditor class, with the exception that this class will only receive SellEvent instances. Then we will create a class for auditing only the BuyEvent instances:

public class TradeBuyAuditor {
    private List<BuyEvent> buyEvents = Lists.newArrayList();

    public TradeBuyAuditor(EventBus eventBus) {
        eventBus.register(this);
    }

    @Subscribe
    public void auditBuy(BuyEvent buyEvent) {
        buyEvents.add(buyEvent);
        System.out.println("Received TradeBuyEvent " + buyEvent);
    }

    public List<BuyEvent> getBuyEvents() {
        return buyEvents;
    }
}

Now we just need to refactor our SimpleTradeExecutor class to create the correct TradeAccountEvent instance depending on whether it's a buy or sell transaction:

public class BuySellTradeExecutor {
    // … details left out for clarity, same as SimpleTradeExecutor
    // The executeTrade() method is unchanged from SimpleTradeExecutor

    private TradeAccountEvent processTrade(TradeAccount tradeAccount, double amount, TradeType tradeType) {
        Date executionTime = new Date();
        String message = String.format("Processed trade for %s of amount %n type %s @ %s",
                tradeAccount, amount, tradeType, executionTime);
        TradeAccountEvent tradeAccountEvent;
        if (tradeType.equals(TradeType.BUY)) {
            tradeAccountEvent = new BuyEvent(tradeAccount, amount, executionTime);
        } else {
            tradeAccountEvent = new SellEvent(tradeAccount, amount, executionTime);
        }
        System.out.println(message);
        return tradeAccountEvent;
    }
}

Here we've created a new BuySellTradeExecutor class that behaves in exactly the same manner as our SimpleTradeExecutor class, with the exception that, depending on the type of transaction, we create either a BuyEvent or a SellEvent instance. The EventBus class, however, is completely unaware of any of these changes. We have registered different subscribers and we are posting different events, but these changes are transparent to the EventBus instance. Also, take note that we did not have to create separate classes for the notification of events: our SimpleTradeAuditor class would have continued to receive the events as they occurred, and if we wanted separate processing depending on the type of event, we could simply add a check for the event type. Finally, if needed, we could also have a class that defines multiple subscribe methods:

public class AllTradesAuditor {
    private List<BuyEvent> buyEvents = Lists.newArrayList();
    private List<SellEvent> sellEvents = Lists.newArrayList();

    public AllTradesAuditor(EventBus eventBus) {
        eventBus.register(this);
    }

    @Subscribe
    public void auditSell(SellEvent sellEvent) {
        sellEvents.add(sellEvent);
        System.out.println("Received TradeSellEvent " + sellEvent);
    }

    @Subscribe
    public void auditBuy(BuyEvent buyEvent) {
        buyEvents.add(buyEvent);
        System.out.println("Received TradeBuyEvent " + buyEvent);
    }
}

Here we've created a class with two event-handling methods. The AllTradesAuditor class will receive notifications about all trade events; it's just a matter of which method gets called by EventBus, depending on the type of event. Taken to an extreme, we could create an event-handling method that accepts the type Object; as Object is an actual class (the base class for all other objects in Java), we would receive notifications on any and all events processed by EventBus. Finally, there is nothing preventing us from having more than one EventBus instance. If we were to refactor the BuySellTradeExecutor class into two separate classes, we could inject a separate EventBus instance into each class. Then it would be a matter of injecting the correct EventBus instance into the auditing classes, and we could have complete publishing-subscribing independence.
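Taking the Object idea literally, a catch-all subscriber might look like the following sketch; the CatchAllAuditor class is illustrative and not part of the article's trading classes:

public class CatchAllAuditor {
    public CatchAllAuditor(EventBus eventBus) {
        eventBus.register(this);
    }

    @Subscribe
    public void onAnyEvent(Object event) {
        // Called for every event posted to this bus, regardless of its type
        System.out.println("Observed event: " + event);
    }
}

Registering one of these alongside the typed auditors is a handy way to trace all traffic on a bus during debugging.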
Unsubscribing from events

Just as we want to subscribe to events, it may be desirable at some point to turn off the receiving of events. This is accomplished by passing the subscribed object to the eventBus.unregister method. For example, if we know at some point that we want to stop processing events, we could add the following method to our subscribing class:

public void unregister() {
    this.eventBus.unregister(this);
}

Once this method is called, that particular instance will stop receiving whatever events it had previously registered for. Other instances that are registered for the same events will continue to receive notifications.

AsyncEventBus

We stated earlier the importance of keeping the processing in our event-handling methods light, due to the fact that EventBus processes all events serially. However, we have another option with the AsyncEventBus class. The AsyncEventBus class offers exactly the same functionality as EventBus, but uses a provided java.util.concurrent.Executor instance to execute handler methods asynchronously.

Creating an AsyncEventBus instance

We create an AsyncEventBus instance in a manner similar to the EventBus instance:

AsyncEventBus asyncEventBus = new AsyncEventBus(executorService);

Here we are creating an AsyncEventBus instance by providing a previously created ExecutorService instance. We also have the option of providing a String identifier in addition to the ExecutorService instance. AsyncEventBus is very helpful in situations where we suspect that the subscribers perform heavy processing when events are received.

DeadEvents

When EventBus receives a notification of an event through the post method and there are no registered subscribers, the event is wrapped in an instance of the DeadEvent class. Having a class that subscribes to DeadEvent instances can be very helpful when trying to ensure that all events have registered subscribers. The DeadEvent class exposes a getEvent method that can be used to inspect the original event that was undelivered. For example, we could provide the following very simple class:

public class DeadEventSubscriber {
    private static final Logger logger = Logger.getLogger(DeadEventSubscriber.class);

    public DeadEventSubscriber(EventBus eventBus) {
        eventBus.register(this);
    }

    @Subscribe
    public void handleUnsubscribedEvent(DeadEvent deadEvent) {
        logger.warn("No subscribers for " + deadEvent.getEvent());
    }
}

Here we are simply registering for any DeadEvent instances and logging a warning with the original unclaimed event.
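As a minimal sketch tying these two ideas together, here is an AsyncEventBus backed by a small thread pool with a DeadEvent watchdog attached; the pool size and bus identifier are arbitrary choices for illustration:

// Wire an AsyncEventBus to a fixed thread pool and watch for unclaimed events
ExecutorService executorService = Executors.newFixedThreadPool(4);
AsyncEventBus asyncEventBus = new AsyncEventBus("trade-bus", executorService);
new DeadEventSubscriber(asyncEventBus); // logs a warning for any unclaimed event
asyncEventBus.post(new Object());       // nothing subscribes to Object, so this arrives as a DeadEvent
executorService.shutdown();

Since AsyncEventBus extends EventBus, the DeadEventSubscriber shown earlier works with it unchanged.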
Dependency injection

To ensure that we register our subscribers and publishers with the same instance of an EventBus class, using a dependency injection framework (Spring or Guice) makes a lot of sense. In the following example, we will show how to use Spring Framework Java configuration with the SimpleTradeAuditor and SimpleTradeExecutor classes. First, we need to make the following changes to the SimpleTradeAuditor and SimpleTradeExecutor classes:

@Component
public class SimpleTradeExecutor {
    private EventBus eventBus;

    @Autowired
    public SimpleTradeExecutor(EventBus eventBus) {
        this.eventBus = checkNotNull(eventBus, "EventBus can't be null");
    }
    // … rest of the class unchanged
}

@Component
public class SimpleTradeAuditor {
    private List<TradeAccountEvent> tradeEvents = Lists.newArrayList();

    @Autowired
    public SimpleTradeAuditor(EventBus eventBus) {
        checkNotNull(eventBus, "EventBus can't be null");
        eventBus.register(this);
    }
    // … rest of the class unchanged
}

Here we've simply added an @Component annotation at the class level for both classes. This is done to enable Spring to pick up these classes as beans we want to inject. In this case, we want to use constructor injection, so we added an @Autowired annotation to the constructor of each class. Having the @Autowired annotation tells Spring to inject an instance of an EventBus class into the constructor of both objects. Finally, we have our configuration class, which instructs the Spring Framework where to look for components to wire up with the beans defined in the configuration class:

@Configuration
@ComponentScan(basePackages = {"bbejeck.guava.article7.publisher",
                               "bbejeck.guava.article7.subscriber"})
public class EventBusConfig {

    @Bean
    public EventBus eventBus() {
        return new EventBus();
    }
}

Here we have the @Configuration annotation, which identifies this class to Spring as a context that contains the beans to be created and injected, if need be. We defined the eventBus method, which constructs and returns an instance of an EventBus class to be injected into other objects. In this case, since we placed the @Autowired annotation on the constructors of the SimpleTradeAuditor and SimpleTradeExecutor classes, Spring will inject the same EventBus instance into both classes, which is exactly what we want. While a full discussion of how the Spring Framework functions is beyond the scope of this book, it is worth noting that Spring creates singletons by default, which is exactly what we want here. As we can see, using a dependency injection framework can go a long way towards ensuring that our event-based system is configured properly.

Summary

In this article, we covered how to use event-based programming to reduce coupling in our code by using the Guava EventBus class. We covered how to create an EventBus instance and register subscribers and publishers. We also explored the powerful concept of using types to register the events we are interested in receiving. We learned about the AsyncEventBus class, which allows us to dispatch events asynchronously. We saw how we can use the DeadEvent class to ensure we have subscribers for all of our events. Finally, we saw how we can use dependency injection to ease the setup of our event-based system. In the next article, we will take a look at working with files in Guava.

Resources for Article:

Further resources on this subject: WordPress: Avoiding the Black Hat Techniques [Article] So, what is Google Drive? [Article] Showing your Google calendar on your Joomla! site using GCalendar [Article]