
How-To Tutorials - Application Development

357 Articles

The Software Task Management Tool - Rake

Packt
16 Apr 2014
5 min read
(For more resources related to this topic, see here.)

Installing Rake

As Rake is a Ruby library, you should first install Ruby on the system if you don't have it installed already. The installation process is different for each operating system; however, we will look at the installation example only for the Debian operating system family. Just open the terminal and write the following installation command:

$ sudo apt-get install ruby

If your operating system doesn't contain the apt-get utility, or if you have problems with the Ruby installation, please refer to the official instructions at https://www.ruby-lang.org/en/installation. There are a lot of ways to install Ruby, so please choose your operating system from the list on this page and select your desired installation method.

Rake is included in the Ruby core as of Ruby 1.9, so you don't have to install it as a separate gem. However, if you still use Ruby 1.8 or an older version, you will have to install Rake as a gem. Use the following command to install it:

$ gem install rake

The Ruby release cycle is slower than that of Rake, and sometimes you need to install Rake as a gem to work around specific issues. So you can still install Rake as a gem, and in some cases this is a requirement even for Ruby version 1.9 and higher.

To check whether you have installed it correctly, open your terminal and type the following command:

$ rake --version

This should return the installed Rake version. Another sign that Rake is installed and working correctly is the error that you see after typing the rake command in an empty directory:

$ mkdir ~/test-rake
$ cd ~/test-rake
$ rake
rake aborted!
No Rakefile found (looking for: rakefile, Rakefile, rakefile.rb, Rakefile.rb)
(See full trace by running task with --trace)

Downloading the example code
You can download the example code files for all Packt books you have purchased from your account at http://www.packtpub.com. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.

Introducing rake tasks

From the previous error message, it's clear that you first need a Rakefile. As you can see, there are four variants of its name: rakefile, Rakefile, rakefile.rb, and Rakefile.rb. The most popular variant is Rakefile, which is also the one Rails uses; however, you can choose any variant for your project. There is no convention that prohibits you from using any of the four suggested variants.

A Rakefile is required for any Rake-based project. Although its content usually consists of the Rake DSL, it is also a plain Ruby file, so you can write any Ruby code in it. Perform the following steps to get started:

Let's create a Rakefile in the current folder, which will just say Hello Rake, using the following commands:

$ echo "puts 'Hello Rake'" > Rakefile
$ cat Rakefile
puts 'Hello Rake'

Here, the first line creates a Rakefile with the content puts 'Hello Rake', and the second line just shows us its content to make sure that we've done everything correctly.

Now, run rake as we tried before, using the following command:

$ rake
Hello Rake
rake aborted!
Don't know how to build task 'default'
(See full trace by running task with --trace)

The output has changed: it now says Hello Rake and then aborts because of another error. At this moment, we have made the first step in learning Rake.

Now, we have to define a default rake task that will be executed when you start Rake without any arguments. To do so, open your editor and change the created Rakefile to the following content:

task :default do
  puts 'Hello, Rake'
end

Now, run rake again:

$ rake
Hello, Rake

The output that says Hello, Rake demonstrates that the task works correctly.

The command-line arguments

The most commonly used rake command-line argument is -T. It shows a list of the rake tasks that you have already defined. We have defined the default rake task, and if we try to show the list of all rake tasks, it should be there. However, take a look at what happens in real life using the following command:

$ rake -T

The list is empty. Why? The answer lies within Rake. Run the rake command with the -h option to get the whole list of arguments, and pay attention to the description of the -T option, as shown in the following command-line output:

-T, --tasks [PATTERN]  Display the tasks (matching optional PATTERN) with descriptions, then exit.

You can get more information on Rake in the repository at the following GitHub link: https://github.com/jimweirich/rake.

The word description is the cornerstone here. A description is optional for a rake task, but it's recommended that you always define one, because tasks without descriptions do not show up in the task list we just tried to print, and it will be inconvenient for you to read your Rakefile every time you try to run some rake task. Just accept it as a rule: always leave a description for the defined rake tasks.

Now, add a description to your rake task with the desc method call, as shown in the following lines of code:

desc "Says 'Hello, Rake'"
task :default do
  puts 'Hello, Rake.'
end

As you can see, it's rather easy. Run the rake -T command again and you will see the following output:

$ rake -T
rake default  # Says 'Hello, Rake'

If you want to list all the tasks, even those without descriptions, you can pass the -A option along with the -T option to the rake command. The resulting command looks like this: rake -T -A.
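To consolidate the points above, here is a slightly larger Rakefile sketch. It is not taken from the book's example code; the extra task names are made up for illustration. Only the tasks with a desc call appear in the rake -T listing, while rake -T -A lists all of them:

```ruby
# Rakefile -- illustrative sketch, not from the book's examples

desc "Says 'Hello, Rake'"
task :default do
  puts 'Hello, Rake.'
end

desc 'Shows the Ruby version Rake is running under'
task :ruby_version do
  puts RUBY_VERSION
end

# No desc call here, so this task is hidden from `rake -T`
# and only appears when you run `rake -T -A`.
task :hidden do
  puts 'You found me.'
end
```

Running rake -T against this file would list only default and ruby_version; rake -T -A would also show hidden.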


The Fabric library – the deployment and development task manager

Packt
14 Apr 2014
4 min read
(For more resources related to this topic, see here.)

Essentially, Fabric is a tool that allows the developer to execute arbitrary Python functions via the command line, together with a set of functions for executing shell commands on remote servers via SSH. Combining these two things offers developers a powerful way to administer the application workflow without having to remember the series of commands that need to be executed on the command line. The library documentation can be found at http://fabric.readthedocs.org/.

Installing the library in PTVS is straightforward. Like all other libraries, to add this library to a Django project, right-click on the Python 2.7 node under Python Environments in the Solution Explorer window, then select the Install Python Package entry.

The Python environment contextual menu

Clicking on it brings up the Install Python Package modal window, as shown in the following screenshot. It's important to use easy_install to download from the Python Package Index. This brings the precompiled versions of the library into the system, instead of the plain Python C libraries that would have to be compiled on the system.

Once the package is installed in the system, you can start creating tasks that can be executed outside your application from the command line. First, create a configuration file, fabfile.py, for Fabric. This file contains the tasks that Fabric will execute. The previous screenshot shows a really simple task: it prints out the string hello world once it's executed. You can execute it from the command prompt by using the Fabric command fab, as shown in the following screenshot.

Now that you know that the system is working fine, you can move on to the juicy part, where you make some tasks that interact with a remote server through SSH. Create a task that connects to a remote machine and finds out the type of OS that runs on it.

The env object provides a way to add credentials to Fabric in a programmatic way.

We have defined a Python function, host_type, that runs a POSIX command, uname -s, on the remote machine. We also set up a couple of variables to tell Fabric which remote machine we are connecting to (env.hosts) and the password that has to be used to access that machine (env.password). It's never a good idea to put plain passwords into the source code, as is done in the preceding screenshot example.

Now, we can execute the host_type task from the command line as follows.

The Fabric library connects to the remote machine with the information provided and executes the command on the server. Then, it brings back the result of the command in the output part of the response.

We can also create tasks that accept parameters from the command line. Create a task that echoes a message on the remote machine, starting with a parameter, as shown in the following screenshot. The following are two examples of how the task can be executed.

We can also create a helper function that executes an arbitrary command on the remote machine, as follows:

def execute(cmd):
    run(cmd)

We are also able to upload a file to the remote server by using put. The first argument of put is the local file you want to upload and the second one is the destination folder's filename. Let's see what happens.

Deploying process with Fabric

The possibilities of using Fabric are really endless, since the tasks can be written in plain Python. This gives you the opportunity to automate many operations and focus more on development, instead of on how to deploy your code to servers and maintain them.

Summary

This article provided you with an in-depth look at remote task management and schema migrations using the third-party Python library Fabric.

Resources for Article:

Further resources on this subject: Through the Web Theming using Python [Article], Web Scraping with Python [Article], Python Data Persistence using MySQL [Article]
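The code in this article appears only in screenshots that are not reproduced here. The following fabfile.py is a hand-written sketch that is consistent with the description above and with the classic Fabric 1.x API; the hostname, password, file paths, and the echo task's parameter name are placeholder assumptions, and in real use you would rely on SSH keys rather than a hard-coded password:

```python
# fabfile.py -- illustrative sketch of the tasks described above (Fabric 1.x API)
from fabric.api import env, put, run

env.hosts = ['user@remote.example.com']  # placeholder remote machine
env.password = 'secret'                  # for illustration only; prefer SSH keys

def hello():
    # Runs locally: fab hello
    print('hello world')

def host_type():
    # Runs `uname -s` on the remote machine: fab host_type
    run('uname -s')

def echo_message(message):
    # Accepts a parameter from the command line: fab echo_message:"Hello from Fabric"
    run('echo %s' % message)

def execute(cmd):
    # Helper that runs an arbitrary command remotely: fab execute:"ls -l"
    run(cmd)

def upload():
    # Uploads a local file to the remote machine: fab upload
    put('local_file.txt', '/tmp/local_file.txt')
```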


Build a Chat Application using the Java API for WebSocket

Packt
24 Mar 2014
5 min read
Traditionally, web applications have been developed using the request/response model followed by the HTTP protocol. In this model, the request is always initiated by the client and then the server returns a response back to the client. There has never been any way for the server to send data to the client independently (without having to wait for a request from the browser) until now. The WebSocket protocol allows full-duplex, two-way communication between the client (browser) and the server. Java EE 7 introduces the Java API for WebSocket, which allows us to develop WebSocket endpoints in Java. The Java API for WebSocket is a brand-new technology in the Java EE Standard. A socket is a two-way pipe that stays alive longer than a single request. Applied to an HTML5-compliant browser, this would allow for continuous communication to or from a web server without the need to load a new page (similar to AJAX). Developing a WebSocket Server Endpoint A WebSocket server endpoint is a Java class deployed to the application server that handles WebSocket requests. There are two ways in which we can implement a WebSocket server endpoint via the Java API for WebSocket: either by developing an endpoint programmatically, in which case we need to extend the javax.websocket.Endpoint class, or by decorating Plain Old Java Objects (POJOs) with WebSocket-specific annotations. The two approaches are very similar; therefore, we will be discussing only the annotation approach in detail and briefly explaining the second approach, that is, developing WebSocket server endpoints programmatically, later in this section. In this article, we will develop a simple web-based chat application, taking full advantage of the Java API for WebSocket. Developing an annotated WebSocket server endpoint The following Java class code illustrates how to develop a WebSocket server endpoint by annotating a Java class: package net.ensode.glassfishbook.websocketchat.serverendpoint; import java.io.IOException; import java.util.logging.Level; import java.util.logging.Logger; import javax.websocket.OnClose; import javax.websocket.OnMessage; import javax.websocket.OnOpen; import javax.websocket.Session; import javax.websocket.server.ServerEndpoint; @ServerEndpoint("/websocketchat") public class WebSocketChatEndpoint { private static final Logger LOG = Logger.getLogger(WebSocketChatEndpoint.class.getName()); @OnOpen public void connectionOpened() { LOG.log(Level.INFO, "connection opened"); } @OnMessage public synchronized void processMessage(Session session, String message) { LOG.log(Level.INFO, "received message: {0}", message); try { for (Session sess : session.getOpenSessions()) { if (sess.isOpen()) { sess.getBasicRemote().sendText(message); } } } catch (IOException ioe) { LOG.log(Level.SEVERE, ioe.getMessage()); } } @OnClose public void connectionClosed() { LOG.log(Level.INFO, "connection closed"); } } The class-level @ServerEndpoint annotation indicates that the class is a WebSocket server endpoint. The URI (Uniform Resource Identifier) of the server endpoint is the value specified within the parentheses following the annotation (which is "/websocketchat" in this example)—WebSocket clients will use this URI to communicate with our endpoint. The @OnOpen annotation is used to decorate a method that needs to be executed whenever a WebSocket connection is opened by any of the clients. In our example, we are simply sending some output to the server log, but of course, any valid server-side Java code can be placed here. 
Any method annotated with the @OnMessage annotation will be invoked whenever our server endpoint receives a message from a client. Since we are developing a chat application, our code simply broadcasts the message it receives to all connected clients. In our example, the processMessage() method is annotated with @OnMessage, and takes two parameters: an instance of a class implementing the javax.websocket.Session interface and a String parameter containing the message that was received. Since we are developing a chat application, our WebSocket server endpoint simply broadcasts the received message to all connected clients. The getOpenSessions() method of the Session interface returns a set of session objects representing all open sessions. We iterate through this set to broadcast the received message to all connected clients by invoking the getBasicRemote() method on each session instance and then invoking the sendText() method on the resulting RemoteEndpoint.Basic implementation returned by calling the previous method. The getOpenSessions() method on the Session interface returns all the open sessions at the time it was invoked. It is possible for one or more of the sessions to have closed after the method was invoked; therefore, it is recommended to invoke the isOpen() method on a Session implementation before attempting to return data back to the client. An exception may be thrown if we attempt to access a closed session. Finally, we need to decorate a method with the @OnClose annotation in case we need to handle the event when a client disconnects from the server endpoint. In our example, we simply log a message into the server log. There is one additional annotation that we didn't use in our example—the @OnError annotation; it is used to decorate a method that needs to be invoked in case there's an error while sending or receiving data to or from the client. As we can see, developing an annotated WebSocket server endpoint is straightforward. We simply need to add a few annotations, and the application server will invoke our annotated methods as necessary. If we wish to develop a WebSocket server endpoint programmatically, we need to write a Java class that extends javax.websocket.Endpoint. This class has the onOpen(), onClose(), and onError() methods that are called at appropriate times during the endpoint's life cycle. There is no method equivalent to the @OnMessage annotation to handle incoming messages from clients. The addMessageHandler() method needs to be invoked in the session, passing an instance of a class implementing the javax.websocket.MessageHandler interface (or one of its subinterfaces) as its sole parameter. In general, it is easier and more straightforward to develop annotated WebSocket endpoints compared to their programmatic counterparts. Therefore, we recommend that you use the annotated approach whenever possible.
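The programmatic alternative described above is not shown in the article. The following is a rough sketch of what such an endpoint could look like; the class name is my own, and the endpoint would still need to be registered with the container (for example, through a ServerApplicationConfig implementation), which is not shown here:

```java
import java.io.IOException;
import java.util.logging.Level;
import java.util.logging.Logger;
import javax.websocket.Endpoint;
import javax.websocket.EndpointConfig;
import javax.websocket.MessageHandler;
import javax.websocket.Session;

public class ProgrammaticChatEndpoint extends Endpoint {

    private static final Logger LOG =
            Logger.getLogger(ProgrammaticChatEndpoint.class.getName());

    @Override
    public void onOpen(final Session session, EndpointConfig config) {
        // Programmatic endpoints register message handlers explicitly
        // instead of using the @OnMessage annotation.
        session.addMessageHandler(new MessageHandler.Whole<String>() {
            @Override
            public void onMessage(String message) {
                // Broadcast the received message to all open sessions.
                for (Session sess : session.getOpenSessions()) {
                    if (sess.isOpen()) {
                        try {
                            sess.getBasicRemote().sendText(message);
                        } catch (IOException ioe) {
                            LOG.log(Level.SEVERE, ioe.getMessage());
                        }
                    }
                }
            }
        });
    }
}
```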


Presenting Data Using ADF Faces

Packt
20 Mar 2014
7 min read
(For more resources related to this topic, see here.) In this article, you will learn how to present a single record, multiple records, and master-details records on your page using different components and methodologies. You will also learn how to enable the internationalizing and localizing processes in your application by using a resource bundle and the different options of bundle you can have. Starting from this article onward, we will not use the HR schema. We will rather use the FacerHR schema in the Git repository under the BookDatabaseSchema folder and read the README.txt file for information on how to create the database schema. This schema will be used for the whole book, so you need to do this only once. Make sure you validate your database connection information for your recipes to work without problem. Presenting single records on your page In this recipe, we will address the need for presenting a single record in a page, which is useful specifically when you want to focus on a specific record in the table of your database; for example, a user's profile can be represented by a single record in an employee's table. The application and its model have been created for you; you can see it by cloning the PresentingSingleRecord application from the Git repository. How to do it... In order to present a single record in pages, follow the ensuing steps: Open the PresentingSingleRecord application. Create a bounded task flow by right-clicking on ViewController and navigating to New | ADF Task Flow. Name the task flow single-employee-info and uncheck the Create with Page Fragments option. You can create a task flow with a page fragment, but you will need a page to host it at the end; alternatively, you can create a whole page if the task flow holds only one activity and is not reusable. However, in this case, I prefer to create a page-based task flow for fast deployment cycles and train you to always start from task flow. Add a View activity inside of the task flow and name it singleEmployee. Double-click on the newly created activity to create the page; this page will be based on the Oracle Three Column layout. Close the dialog by pressing the OK button. Navigate to Data Controls pane | HrAppModuleDataControl, drag-and-drop EmployeesView1 into the white area of the page template, and select ADF Form from the drop-down list that appears as you drop the view object. Check the Row Navigation option so that it has the first, previous, next, and last buttons for navigating through the task. Group attributes based on their category, so the Personal Information group should include the EmployeeId, FirstName, LastName, Email, and Phone Number attributes; the Job Information group should include HireDate, Job, Salary, and CommissionPct; and the last group will be Department Information that includes both ManagerId and DepartmentId attributes. Select multiple components by holding the Ctrl key and click on the Group button at the top-right corner, as shown in the following screenshot: Change the Display Label values of the three groups to eInfo, jInfo, and dInfo respectively. The Display Label option is a little misleading when it comes to groups in a form as groups don't have titles. Due to this, Display Label will be assigned to the Id attribute of the af:group component that will wrap the components, which can't have space and should be reasonably small; however, Input Text w/Label or Output Text w/Label will end up in the Label attribute in the panelLabelAndMessage component. 
Change the Component to Use option of all attributes from ADF Input Text w/Label to ADF Output Text w/Label. You might think that if you check the Read-Only Form option, it will have the same effect, but it won't. What will happen is that the readOnly attribute of the input text will change to true, which will make the input text non-updateable; however, it won't change the component type. Change the Display Label option for the attributes to have more human-readable labels to the end user; you should end up with the following screen: Finish by pressing the OK button. You can save yourself the trouble of editing the Display Label option every time you create a component that is based on a view object by changing the Label attribute in UI Hints from the entity object or view object. More information can be found in the documentation at http://docs.oracle.com/middleware/1212/adf/ADFFD/bcentities.htm#sm0140. Examine the page structure from the Structure pane in the bottom-left corner as shown in the following screenshot. A panel form layout can be found inside the center facet of the page template. This panel form layout represents an ADF form, and inside of it, there are three group components; each group has a panel label and message for each field of the view object. At the bottom of the panel form layout, you can locate a footer facet; expand it to see a panel group layout that has all the navigation buttons. The footer facet identifies the locations of the buttons, which will be at the bottom of this panel form layout even if some components appear inside the page markup after this facet. Examine the panel form layout properties by clicking on the Properties pane, which is usually located in the bottom-right corner. It allows you to change attributes such as Max Columns, Rows, Field Width, or Label Width. Change these attributes to change the form and to have more than one column. If you can't see the Structure or Properties pane, you can see them again by navigating to Window menu | Structure or Window menu | Properties. Save everything and run the page, placing it inside the adf-config task flow; to see this in action, refer to the following screenshot: How it works... The best component to represent a single record is a panel form layout, which presents the user with an organized form layout for different input/output components. If you examine the page source code, you can see an expression like #{bindings.FirstName.inputValue}, which is related to the FirstName binding inside the Bindings section of the page definition where it points to EmployeesView1Iterator. However, iterator means multiple records, then why FirstName is only presenting a single record? It's because the iterator is aware of the current row that represents the row in focus, and this row will always point to the first row of the view object's select statement when you render the page. By pressing different buttons on the form, the Current Row value changes and thus the point of focus changes to reflect a different row based on the button you pressed. When you are dealing with a single record, you can show it as the input text or any of the user input's components; alternatively, you can change it as the output text if you are just viewing it. In this recipe, you can see that the Group component is represented as a line in the user interface when you run the page. If you were to change the panel form layout's attributes, such as Max Columns or Rows, you would see a different view. 
Max Columns represents the maximum number of columns to show in a form, which defaults to 3 for desktops and 2 for PDAs; however, if this panel form layout is nested inside another panel form layout, the Max Columns value will always be 1. The Rows attribute represents the number of rows after which a new column is started; it has a default value of 2^31 - 1. You can learn more about each attribute by clicking on the gear icon that appears when you hover over an attribute and reading the information on the property's Help page. The benefit of having a panel form layout is that all labels are aligned properly; it organizes everything for you, similar to the HTML table component.

See also

Check the following reference for more information about arranging content in forms: http://docs.oracle.com/middleware/1212/adf/ADFUI/af_orgpage.htm#CDEHDJEA
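The recipe works through JDeveloper dialogs rather than raw markup, so no page source is shown in the article. For orientation only, the following hand-written fragment sketches roughly what the structure described above looks like; the tag ids, widths, group and binding names are illustrative assumptions, not the book's generated output:

```xml
<!-- Illustrative fragment only; not the page generated by JDeveloper. -->
<af:panelFormLayout id="pfl1" maxColumns="2" rows="6"
                    labelWidth="120" fieldWidth="200">
  <af:group id="eInfo">
    <af:panelLabelAndMessage label="Employee Id" id="plam1">
      <af:outputText value="#{bindings.EmployeeId.inputValue}" id="ot1"/>
    </af:panelLabelAndMessage>
    <af:panelLabelAndMessage label="First Name" id="plam2">
      <af:outputText value="#{bindings.FirstName.inputValue}" id="ot2"/>
    </af:panelLabelAndMessage>
  </af:group>
  <f:facet name="footer">
    <af:panelGroupLayout id="pgl1" layout="horizontal">
      <!-- First / Previous / Next / Last navigation buttons go here -->
    </af:panelGroupLayout>
  </f:facet>
</af:panelFormLayout>
```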


Maximizing everyday debugging

Packt
14 Mar 2014
5 min read
(For more resources related to this topic, see here.)

Getting ready

For this article, you will just need a premium version of VS2013, or you may use VS Express for Windows Desktop. Be sure to run your choice on a machine using a 64-bit edition of Windows. Note that Edit and Continue previously existed for 32-bit code only.

How to do it…

Both features are now supported by C#/VB, but we will be using C# for our examples. The features being demonstrated are compiler-based features, so feel free to use code from one of your own projects if you prefer. To see how Edit and Continue can benefit 64-bit development, perform the following steps:

Create a new C# Console Application using the default name.

To ensure the demonstration is running with 64-bit code, we need to change the default solution platform. Click on the drop-down arrow next to Any CPU and select Configuration Manager…:

When the Configuration Manager dialog opens, we can create a new Project Platform targeting 64-bit code. To do this, click on the drop-down menu for Platform and select <New...>:

When <New...> is selected, it will present the New Project Platform dialog box. Select x64 as the new platform type:

Once x64 has been selected, you will return to the Configuration Manager. Verify that x64 remains active under Platform and then click on Close to close this dialog. The main IDE window will now indicate that x64 is active:

Now, let's add some code to demonstrate the new behavior. Replace the existing code in your blank class file so that it looks like the following listing:

class Program
{
    static void Main(string[] args)
    {
        int w = 16;
        int h = 8;
        int area = calcArea(w, h);
        Console.WriteLine("Area: " + area);
    }

    private static int calcArea(int width, int height)
    {
        return width / height;
    }
}

Let's set some breakpoints so that we are able to inspect during execution. First, add a breakpoint to the Main method's Console line. Add a second breakpoint to the calcArea method's return line. You can do this either by clicking on the left side of the editor window's border or by right-clicking on the line and selecting Breakpoint | Insert Breakpoint:

If you are not sure where to click, use the right-click method and then practice toggling the breakpoint by left-clicking on the breakpoint marker. Feel free to use whichever method you find most convenient.

Once the two breakpoints are added, Visual Studio will mark their location as shown in the following screenshot (the arrow indicates where you may click to toggle the breakpoint):

With the breakpoints now set, let's debug the program. Begin debugging by either pressing F5 or clicking on the Start button on the toolbar:

Once debugging starts, the program will quickly execute until stopped by the first breakpoint it hits. Let's first take a look at Edit and Continue. Visual Studio will stop at the calcArea method's return line. Astute readers will notice an error (marked by 1 in the following screenshot) in the calculation: the value returned should be width * height. Make the correction. Before continuing, note the variables listed in the Autos window (marked by 2 in the following screenshot). If you don't see Autos, it can be made visible by pressing Ctrl + D, A or through Debug | Windows | Autos while debugging.

After correcting the area calculation, advance the debugging step by pressing F10 twice (alternatively, select the menu item Debug | Step Over twice). Visual Studio will advance to the declaration for area.
Note that you were able to edit your code and continue debugging without restarting. The Autos window will update to display the function's return value, which is 128 (the value for area has not been assigned yet): There's more… Programmers who write C++ already have the ability to see the return values of functions; this just brings .NET developers into the fold. Your development experience won't have to suffer based on the languages chosen for your projects. The Edit and Continue functionality is also available for ASP.NET projects. New projects created in VS2013 will have Edit and Continue enabled by default. Existing projects imported to VS2013 will usually need this to be enabled if it hasn't been already. To do so, right-click on your ASP.NET project in Solution Explorer and select Properties (alternatively, it is also available via Project | <Project Name> Properties…). Navigate to the Web option and scroll to the bottom to check the Enable Edit and Continue checkbox. The following screenshot shows where this option is located on the properties page: Summary In this article, we learned how to use the Edit and Continue feature. Using this feature enables you to make changes to your project without having to immediately recompile your project. This simplifies debugging and enables a bit of exploration. You also saw how the Autos window can display the values of variables as you step through your program’s execution. Resources for Article: Further resources on this subject: Using the Data Pager Control in Visual Studio 2008 [article] Load Testing Using Visual Studio 2008: Part 1 [article] Creating a Simple Report with Visual Studio 2008 [article]
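For reference, this is how the small demo program reads once the correction described above has been applied during the Edit and Continue session; it is simply the recipe's listing with the fix in place and the breakpoint locations noted in comments:

```csharp
using System;

class Program
{
    static void Main(string[] args)
    {
        int w = 16;
        int h = 8;
        int area = calcArea(w, h);
        Console.WriteLine("Area: " + area);   // first breakpoint goes on this line
    }

    private static int calcArea(int width, int height)
    {
        // Originally written as width / height; corrected while paused at the
        // breakpoint on this return line, without restarting the debugger.
        return width * height;                // second breakpoint goes on this line
    }
}
```

With the fix applied, the program prints Area: 128, matching the return value shown in the Autos window.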


Making Your Code Better

Packt
14 Feb 2014
8 min read
(For more resources related to this topic, see here.) Code quality analysis The fact that you can compile your code does not mean your code is good. It does not even mean it will work. There are many things that can easily break your code. A good example is an unhandled NullReferenceException. You will be able to compile your code, you will be able to run your application, but there will be a problem. ReSharper v8 comes with more than 1400 code analysis rules and more than 700 quick fixes, which allow you to fix detected problems. What is really cool is that ReSharper provides you with code inspection rules for all supported languages. This means that ReSharper not only improves your C# or VB.NET code, but also HTML, JavaScript, CSS, XAML, XML, ASP.NET, ASP.NET MVC, and TypeScript. Apart from finding possible errors, code quality analysis rules can also improve the readability of your code. ReSharper can detect code, which is unused and mark it as grayed, prompts you that maybe you should use auto properties or objects and collection initializers, or use the var keyword instead of an explicit type name. ReSharper provides you with five severity levels for rules and allows you to configure them according to your preference. Code inspection rules can be configured in the ReSharper's Options window. A sample view of code inspection rules with the list of available severity levels is shown in the following screenshot: Background analysis One of the best features in terms of code quality in ReSharper is Background analysis. This means that all the rules are checked as you are writing your code. You do not need to compile your project to see the results of the analysis. ReSharper will display appropriate messages in real time. Solution-wide inspections By default, the described rules are checked locally, which means that they should be checked in the current class. Because of this, ReSharper can mark some code as unused if it is used only locally; for example, there can be any unused private method or some part of code inside your method. These two cases are shown in the following screenshot: Additionally, for local analysis, ReSharper can check some rules in your entire project. To do this, you need to enable Solution-wide inspections. The easiest way to enable Solution-wide inspections is to double-click the circle icon in the bottom-right corner of Visual Studio, as seen in the following screenshot: With enabled Solution-wide inspections, ReSharper can mark the public methods or returned values that are unused. Please note that running Solution-wide inspections can hit Visual Studio’s performance in big projects. In such cases, it is better to disable this feature. Disabling code inspections With ReSharper v8, you can easily mark some part of your code as code that should not be checked by ReSharper. You can do this by adding the following comments: // ReSharper disable all // [your code] // ReSharper restore all All code between these two comments will be skipped by ReSharper in code inspections. Of course, instead of the all word, you can use the name of any ReSharper rule such as UseObjectOrCollectionInitializer. You can also disable ReSharper analysis for a single line with the following comment: // ReSharper disable once UseObjectOrCollectionInitializer ReSharper can generate these comments for you. 
If ReSharper highlights some issue, then just press Alt + Enter and select Options for "YOUR_RULE" inspection, as shown in the following screenshot:

Code Issues

You can also run an ad hoc code analysis, at either the solution or the project level. To do so, just navigate to RESHARPER | Inspect | Code Issues in Solution or RESHARPER | Inspect | Code Issues in Current Project from the Visual Studio toolbar. This will display a dialog box that shows the progress of the analysis and finally displays the results in the Inspection Results window. You can filter and group the displayed issues as you need to, and you can quickly jump to the place where an issue occurs just by double-clicking on it. A sample report is shown in the following screenshot:

Eliminating errors and code smells

We think you will agree that the code analysis provided by ReSharper is really cool and helps create better code. What is even cooler is that ReSharper provides you with features that can fix some issues automatically.

Quick fixes

Most errors and issues found by ReSharper can be fixed just by pressing Alt + Enter. This will display a list of the available solutions and let you select the best one for you.

Fix in scope

The quick fixes described above allow you to fix an issue in one particular place. However, sometimes there are issues that you would like to fix in every file in your project or solution. A great example is removing unused using statements or the this keyword. With ReSharper v8, you do not need to fix such issues manually. Instead, you can use a new feature called Fix in scope. You start as usual by pressing Alt + Enter, but instead of just selecting a solution, you can select more options by clicking the small arrow to the right of the available options. A sample usage of the Fix in scope feature is shown in the following screenshot: This will allow you to fix the selected issue with just one click!

Structural Search and Replace

Even though ReSharper contains a lot of built-in analyses, it also allows you to create your own. You can create your own patterns that will be used to search for structures in your code. This feature is called Structural Search and Replace (SSR). To open the Search with Pattern window, navigate to RESHARPER | Find | Search with Pattern…. A sample window is shown in the following screenshot. You can see two things here: on the left, there is a place to write your pattern; on the right, there is a place to define placeholders. In the preceding example, we were looking for if statements that compare some expression with false. You can now simply click on the Find button and ReSharper will display every piece of code that matches this pattern. Of course, you can also save your patterns.

You can create new search patterns from the code editor. Just select some code, click the right mouse button, and select Find Similar Code…. This will automatically generate the pattern for this code, which you can easily adjust to your needs.

SSR allows you not only to find code based on defined patterns, but also to replace it with different code. Click on the Replace button available at the top of the preceding screenshot. This will display a new section on the left called Replace pattern. There, you can write the code that will be placed instead of the code that matches the defined pattern. For the pattern shown, you can write the following replacement:

if (false == $value$) { $statement$ }

This will simply change the order of the expressions inside the if statement.
The saved patterns can also be presented as quick fixes. Simply navigate to RESHARPER | Options | Code Inspection | Custom Patterns and set the proper severity for your pattern, as shown in the following screenshot: This will allow your patterns to surface in the code editor, which is shown in the following screenshot:

Code Cleanup

ReSharper also allows you to fix more than one issue in one run. Navigate to RESHARPER | Tools | Cleanup Code… from the Visual Studio toolbar or just press Ctrl + E, Ctrl + C. This will display the Code Cleanup window, which is shown in the following screenshot: By clicking on the Run button, ReSharper will fix all issues configured in the selected profile. By default, there are two profiles: Full Cleanup and Reformat Code. You can add your own profile by clicking on the Edit Profiles button.

Summary

Code quality analysis is a very powerful feature in ReSharper. As we have described in this article, ReSharper not only prompts you when something is wrong or can be written better, but also allows you to quickly fix these issues. If you do not agree with all the rules provided by ReSharper, you can easily configure them to meet your needs. There are many rules that will open your eyes and show you that you can write better code. With ReSharper, writing better, cleaner code is as easy as pressing Alt + Enter.

Resources for Article:

Further resources on this subject: Ensuring Quality for Unit Testing with Microsoft Visual Studio 2010 [Article], Getting Started with Code::Blocks [Article], Core .NET Recipes [Article]
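As a concrete illustration of the inspection-disabling comments described earlier in this article, the snippet below wraps some code in the comment markers ReSharper recognizes; the Person and Address classes are made up for the example:

```csharp
// ReSharper disable UseObjectOrCollectionInitializer
var person = new Person();      // everything between the disable/restore
person.Name = "Jane";           // comments is skipped by this inspection
person.Age = 42;
// ReSharper restore UseObjectOrCollectionInitializer

// ReSharper disable once UseObjectOrCollectionInitializer
var address = new Address();    // only the next occurrence is suppressed
address.City = "Oslo";
```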

Apache Camel: Transformation

Packt
20 Dec 2013
23 min read
(For more resources related to this topic, see here.) The latest version of the example code for this article can be found at http://github.com/CamelCookbook/camel-cookbook-examples. You can also download the example code files for all Packt books you have purchased from your account at https://www.packtpub.com. If you purchased this book elsewhere, you can visit https://www.packtpub.com/books/content/support and register to have the files e-mailed directly to you. In this article we will explore a number of ways in which Camel performs message content transformation: Let us first look at some important concepts regarding transformation of messages in Camel: Using the transform statement. This allows you to reference Camel Expression Language code within the route to do message transformations. Calling a templating component, such as Camel's XSLT or Velocity template style components. This will typically reference an external template resource that is used in transforming your message. Calling a Java method (for example, beanref), defined by you, within a Camel route to perform the transformation. This is a special case processor that can invoke any referenced Java object method. Camel's Type Converter capability that can automatically cast data from one type to another transparently within your Camel route. This capability is extensible, so you can add your own Type Converters. Camel's Data Format capability that allows us to use built-in, or add our own, higher order message format converters. A Camel Data Format goes beyond simple data type converters, which handle simple data type translations such as String to int, or File to String. Data Formats are used to translate between a low-level representation (XML) and a high-level one (Java objects). Other examples include encrypting/decrypting data, and compressing/decompressing data. For more, see http://camel.apache.org/data-format.html. A number of Camel architectural concepts are used throughout this article. Full details can be found at the Apache Camel website at http://camel.apache.org. The code for this article is contained within the camel-cookbook-transformation module of the examples. Transforming using a Simple Expression When you want to transform a message in a relatively straightforward way, you use Camel's transform statement along with one of the Expression Languages provided by the framework. For example, Camel's Simple Expression Language provides you with a quick, inline mechanism for straightforward transformations. This recipe will show you how to use Camel's Simple Expression Language to transform the message body. Getting ready The Java code for this recipe is located in the org.camelcookbook.transformation.simple package. Spring XML files are located under src/main/resources/META-INF/spring and are prefixed with simple. How to do it... In a Camel route, use a transform DSL statement containing the Expression Language code to do your transformation. In the XML DSL, this is written as follows: <route> <from uri="direct:start"/> <transform> <simple>Hello ${body}</simple> </transform> </route> In the Java DSL, the same route is expressed as: from("direct:start") .transform(simple("Hello ${body}")); In this example, the message transformation prefixes the incoming message with the phrase Hello using the Simple Expression Language. The processing step after the transform statement will see the transformed message content in the body of the exchange. How it works... 
Camel's Simple Expression Language is quite good at manipulating the String content through its access to all aspects of the message being processed, through its rich String and logical operators. The result of your Simple Expression becomes the new message body after the transform step. This includes predicates such as using Simple's logical operators, to evaluate a true or false condition; the results of that Boolean operation become the new message body containing a String: "true" or "false". The advantage of using a distinct transform step within a route, as opposed to embedding it within a processor, is that the logic is clearly visible to other programmers. Ensure that the expression embedded within your route is kept simple so as to not distract the next developer from the overall purpose of the integration. It is best to move more complex (or just lengthy) transformation logic into its own subroute, and invoke it using direct: or seda:. There's more... The transform statement will work with any Expression Language available in Camel, so if you need more powerful message processing capabilities you can leverage scripting languages such as Groovy or JavaScript (among many others) as well. The Transforming inline with XQuery recipe will show you how to use the XQuery Expression Language to do transformations on XML messages. See also Message Translator: http://camel.apache.org/message-translator.html Camel Expression capabilities: http://camel.apache.org/expression.html Camel Simple Expression Language: http://camel.apache.org/simple.html Languages supported by Camel: http://camel.apache.org/languages.html The Transforming inline with XQuery recipe   Transforming inline with XQuery Camel supports the use of Camel's XQuery Expression Language along with the transform statement as a quick and easy way to transform an XML message within a route. This recipe will show you how to use an XQuery Expression to do in-route XML transformation. Getting ready The Java code for this recipe is located in the org.camelcookbook.transformation.xquery package. Spring XML files are located under src/main/resources/META-INF/spring and prefixed with xquery. To use the XQuery Expression Language, you need to add a dependency element for the camel-saxon library, which provides the implementation for the XQuery Expression Language. Add the following to the dependencies section of your Maven POM: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-saxon</artifactId> <version>${camel-version}</version> </dependency>   How to do it... In the Camel route, specify a transform statement followed by the XQuery Expression Language code to do your transformation. In the XML DSL, this is written as: <route> <from uri="direct:start"/> <transform> <xquery> <books>{ for $x in /bookstore/book where $x/price>30 order by $x/title return $x/title }</books> </xquery> </transform> </route> When using the XML DSL, remember to XML encode the XQuery embedded XML elements. Therefore, < becomes &lt; and > becomes &gt;. In the Java DSL, the same route is expressed as: from("direct:start") .transform(xquery("<books>{ for $x in /bookstore/book " + "where $x/price>30 order by $x/title " + "return $x/title }</books>")); Feed the following input XML message through the transformation: <bookstore> <book category="COOKING"> <title lang="en">Everyday Italian</title> <author>Giada De Laurentiis</author> <year>2005</year> <price>30.00</price> </book> <book category="CHILDREN"> <title lang="en">Harry Potter</title> <author>J K. 
Rowling</author> <year>2005</year> <price>29.99</price> </book> <book category="PROGRAMMING"> <title lang="en">Apache Camel Developer's Cookbook</title> <author>Scott Cranton</author> <author>Jakub Korab</author> <year>2013</year> <price>49.99</price> </book> <book category="WEB"> <title lang="en">Learning XML</title> <author>Erik T. Ray</author> <year>2003</year> <price>39.95</price> </book> </bookstore> The resulting message will be: <books> <title lang="en">Apache Camel Developer's Cookbook</title> <title lang="en">Learning XML</title> </books> The processing step after the transform statement will see the transformed message content in the body of the exchange. How it works... Camel's XQuery Expression Language is a good way to inline XML transformation code within your route. The result of the XQuery Expression becomes the new message body after the transform step. All of the message's body, headers, and properties are made available to the XQuery Processor, so you can reference them directly within your XQuery statement. This provides you with a powerful mechanism for transforming XML messages. If you are more comfortable with XSLT, take a look at the Transforming with XSLT recipe. In-lining the transformation within your integration route can sometimes be an advantage as you can clearly see what is being changed. However, when the transformation expression becomes so complex that it starts to overwhelm the integration route, you may want to consider moving the transformation expression outside of the route. See the Transforming using a Simple Expression recipe for another inline transformation example, and see the Transforming with XSLT recipe for an example of externalizing your transformation. You can fetch the XQuery Expression from an external file using Camel's resource reference syntax. To reference an XQuery file on the classpath you can specify: <transform> <xquery>resource:classpath:/path/to/myxquery.xml</xquery> </transform> This is equivalent to using XQuery as an endpoint: <to uri="xquery:classpath:/path/to/myxquery.xml"/>   There's more... The XQuery Expression Language allows you to pass in headers associated with the message. These will show up as XQuery variables that can be referenced within your XQuery statements. Consider, from the previous example, to allow the value of the books that are filtered to be passed in with the message body, that is, parameterize the XQuery, you can modify the XQuery statement as follows: <transform> <xquery> declare variable $in.headers.myParamValue as xs:integer external; <books value='{$in.headers.myParamValue}'&gt;{ for $x in /bookstore/book where $x/price>$in.headers.myParamValue order by $x/title return $x/title }&lt;/books&gt; </xquery> </transform> Message headers will be associated with an XQuery variable called in.headers.<name of header>. To use this in your XQuery, you need to explicitly declare an external variable of the same name and XML Schema (xs:) type as the value of the message header. The transform statement will work with any Expression Language enabled within Camel, so if you need more powerful message processing capabilities you can leverage scripting languages such as Groovy or JavaScript (among many others) as well. The Transforming using a Simple Expression recipe will show you how to use the Simple Expression Language to do transformations on String messages. 
See also Message Translator: http://camel.apache.org/message-translator.html Camel Expression capabilities: http://camel.apache.org/expression.html Camel XQuery Expression Language: http://camel.apache.org/xquery.html XQuery language: http://www.w3.org/XML/Query/ Languages supported by Camel: http://camel.apache.org/languages.html The Transforming with XSLT recipe The Transforming using a Simple Expression recipe   Transforming with XSLT When you want to transform an XML message using XSLT, use Camel's XSLT Component. This is similar to the Transforming inline with XQuery recipe except that there is no XSLT Expression Language, so it can only be used as an endpoint. This recipe will show you how to transform a message using an external XSLT resource. Getting ready The Java code for this recipe is located in the org.camelcookbook.transformation.xslt package. Spring XML files are located under src/main/resources/META-INF/spring and prefixed with xslt. How to do it... In a Camel route, add the xslt processor step into the route at the point where you want the XSLT transformation to occur. The XSLT file must be referenced as an external resource, and depending on where the file is located, prefixed with either classpath:(default if not using a prefix), file:, or http:. In the XML DSL, this is written as: <route> <from uri="direct:start"/> <to uri="xslt:book.xslt"/> </route> In the Java DSL, the same route is expressed as: from("direct:start") .to("xslt:book.xslt"); The next processing step in the route will see the transformed message content in the body of the exchange. How it works... The following example shows how the preceding steps will process an XML file. Consider the following input XML message: <bookstore> <book category="COOKING"> <title lang="en">Everyday Italian</title> <author>Giada De Laurentiis</author> <year>2005</year> <price>30.00</price> </book> <book category="CHILDREN"> <title lang="en">Harry Potter</title> <author>J K. Rowling</author> <year>2005</year> <price>29.99</price> </book> <book category="PROGRAMMING"> <title lang="en">Apache Camel Developer's Cookbook</title> <author>Scott Cranton</author> <author>Jakub Korab</author> <year>2013</year> <price>49.99</price> </book> <book category="WEB"> <title lang="en">Learning XML</title> <author>Erik T. Ray</author> <year>2003</year> <price>39.95</price> </book> </bookstore> Process this with the following XSLT contained in books.xslt: <?xml version="1.0" encoding="UTF-8"?> <xsl:stylesheet version="1.0" > <xsl:output omit-xml-declaration="yes"/> <xsl:template match="/"> <books> <xsl:apply-templates select="/bookstore/book/title[../price>30]"> <xsl:sort select="."/> </xsl:apply-templates> </books> </xsl:template> <xsl:template match="node()|@*"> <xsl:copy> <xsl:apply-templates select="node()|@*"/> </xsl:copy> </xsl:template> </xsl:stylesheet> The result will appear as follows: <books> <title lang="en">Apache Camel Developer's Cookbook</title> <title lang="en">Learning XML</title> </books> The Camel XSLT Processor internally runs the message body through a registered Java XML transformer using the XSLT file referenced by the endpoint. 
This processor uses Camel's Type Converter capabilities to convert the input message body type to one of the supported XML source models, in the following order of priority: StAXSource (off by default; this can be enabled by setting allowStAX=true on the endpoint URI), SAXSource, StreamSource, and DOMSource. Camel's Type Converter can convert from most input types (String, File, byte[], and so on) to one of the XML source types for most XML content loaded through other Camel endpoints, with no extra work on your part. The output data type for the message is, by default, a String, and is configurable using the output parameter on the xslt endpoint URI.

There's more... The XSLT Processor passes in headers, properties, and parameters associated with the message. These will show up as XSLT parameters that can be referenced within your XSLT statements. You can pass in the names of the books as parameters to the XSLT template; to do so, modify the previous XSLT as follows: <xsl:param name="myParamValue"/> <xsl:template match="/"> <books> <xsl:attribute name="value"> <xsl:value-of select="$myParamValue"/> </xsl:attribute> <xsl:apply-templates select="/bookstore/book/title[../price>$myParamValue]"> <xsl:sort select="."/> </xsl:apply-templates> </books> </xsl:template> The Exchange instance will be associated with a parameter called exchange; the IN message with a parameter called in; and the message headers, properties, and parameters will be associated with XSLT parameters of the same name. To use these in your XSLT, you need to explicitly declare a parameter of the same name in your XSLT file. In the previous example, it is possible to use either a message header or an exchange property called myParamValue.

See also Message Translator: http://camel.apache.org/message-translator.html Camel XSLT Component: http://camel.apache.org/xslt Camel Type Converter: http://camel.apache.org/type-converter.html XSL working group: http://www.w3.org/Style/XSL/ The Transforming inline with XQuery recipe

Transforming from Java to XML with JAXB Camel's JAXB Component is one of a number of components that can be used to convert your XML data back and forth from Java objects. It provides a Camel Data Format that allows you to use JAXB annotated Java classes, and then marshal (Java to XML) or unmarshal (XML to Java) your data. JAXB is a Java standard for translating between XML data and Java that is used by creating annotated Java classes that bind, or map, to your XML data schema. The framework takes care of the rest. This recipe will show you how to use the JAXB Camel Data Format to convert back and forth from Java to XML. Getting ready The Java code for this recipe is located in the org.camelcookbook.transformation.jaxb package. The Spring XML files are located under src/main/resources/META-INF/spring and prefixed with jaxb. To use Camel's JAXB Component, you need to add a dependency element for the camel-jaxb library, which provides the implementation for the JAXB Data Format. Add the following to the dependencies section of your Maven POM: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-jaxb</artifactId> <version>${camel-version}</version> </dependency> How to do it... The main steps for converting between Java and XML are as follows: Given a JAXB annotated model, reference that model within a named Camel Data Format. Use that named Data Format within your Camel route using the marshal and unmarshal DSL statements. Create an annotated Java model using standard JAXB annotations.
There are a number of external tools that can automate this creation from existing XML or XSD (XML Schema) files: @XmlAccessorType(XmlAccessType.FIELD) @XmlType(name = "", propOrder = { "title", "author", "year", "price" } ) @XmlRootElement(name = "book") public class Book { @XmlElement(required = true) protected Book.Title title; @XmlElement(required = true) protected List<String> author; protected int year; protected double price; // getters and setters } Instantiate a JAXB Data Format within your Camel route that refers to the Java package(s) containing your JAXB annotated classes. In the XML DSL, this is written as: <camelContext > <dataFormats> <jaxb id="myJaxb" contextPath="org.camelcookbook .transformation.myschema"/> </dataFormats> <!-- route definitions here --> </camelContext> In the Java DSL, the Data Format is defined as: public class JaxbRouteBuilder extends RouteBuilder { @Override public void configure() throws Exception { DataFormat myJaxb= new JaxbDataFormat( "org.camelcookbook.transformation.myschema"); // route definitions here } } Reference the Data Format within your route, choosing marshal(Java to XML) or unmarshal(XML to Java) as appropriate. In the XML DSL, this routing logic is written as: <route> <from uri="direct:unmarshal"/> <unmarshal ref="myJaxb"/> </route> In the Java DSL, this is expressed as: from("direct:unmarshal").unmarshal(myJaxb);   How it works... Using Camel JAXB to translate your XML data back and forth to Java makes it much easier for the Java processors defined later on in your route to do custom message processing. This is useful when the built-in XML translators (for example, XSLT or XQuery) are not enough, or you just want to call existing Java code. Camel JAXB eliminates the boilerplate code from your integration flows by providing a wrapper around the standard JAXB mechanisms for instantiating the Java binding for the XML data. There's more... Camel JAXB works just fine with existing JAXB tooling like the maven-jaxb2-plugin plugin, which can automatically create JAXB-annotated Java classes from an XML Schema (XSD). See also Camel JAXB: http://camel.apache.org/jaxb.html Available Data Formats: http://camel.apache.org/data-format.html JAXB Specification: http://jcp.org/en/jsr/detail?id=222   Transforming from Java to JSON Camel's JSON Component is used when you need to convert your JSON data back and forth from Java. It provides a Camel Data Format that, without any requirement for an annotated Java class, allows you to marshal (Java to JSON) or unmarshal (JSON to Java) your data. There is only one step to using Camel JSON to marshal and unmarshal XML data. Within your Camel route, insert the marshal(Java to JSON), or unmarshal(JSON to Java) statement, and configure it to use the JSON Data Format. This recipe will show you how to use the camel-xstream library to convert from Java to JSON, and back. Getting ready The Java code for this recipe is located in the org.camelcookbook.transformation.json package. The Spring XML files are located under src/main/resources/META-INF/spring and prefixed with json. To use Camel's JSON Component, you need to add a dependency element for the camel-xstream library, which provides an implementation for the JSON Data Format using the XStream library. Add the following to the dependencies section of your Maven POM: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-xstream</artifactId> <version>${camel-version}</version> </dependency>   How to do it... 
Reference the Data Format within your route, choosing the marshal (Java to JSON), or unmarshal (JSON to Java) statement, as appropriate: In the XML DSL, this is written as follows: <route> <from uri="direct:marshal"/> <marshal> <json/> </marshal> <to uri="mock:marshalResult"/> </route> In the Java DSL, this same route is expressed as: from("direct:marshal") .marshal().json() .to("mock:marshalResult");   How it works... Using Camel JSON simplifies translating your data between JSON and Java. This is convenient when you are dealing with REST endpoints and need Java processors in Camel to do custom message processing later on in the route. Camel JSON provides a wrapper around the JSON libraries for instantiating the Java binding for the JSON data, eliminating more boilerplate code from your integration flows. There's more... Camel JSON works with the XStream library by default, and can be configured to use other JSON libraries, such as Jackson or GSon. These other libraries provide additional features, more customization, and more flexibility that can be leveraged by Camel. To use them, include their respective Camel components, for example, camel-jackson, and specify the library within the json element: <dataFormats> <json id="myJson" library="Jackson"/> </dataFormats>   See also Camel JSON: http://camel.apache.org/json.html Available Data Formats: http://camel.apache.org/data-format.html   Transforming from XML to JSON Camel provides an XML JSON Component that converts your data back and forth between XML and JSON in a single step, without an intermediate Java object representation. It provides a Camel Data Format that allows you to marshal (XML to JSON), or unmarshal (JSON to XML) your data. This recipe will show you how to use the XML JSON Component to convert from XML to JSON, and back. Getting ready Java code for this recipe is located in the org.camelcookbook.transformation.xmljson package. Spring XML files are located under src/main/resources/META-INF/spring and prefixed with xmljson. To use Camel's XML JSON Component, you need to add a dependency element for the camel-xmljson library, which provides an implementation for the XML JSON Data Format. Add the following to the dependencies section of your Maven POM: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-xmljson</artifactId> <version>${camel-version}</version> </dependency>   How to do it... Reference the xmljson Data Format within your route, choosing the marshal(XML to JSON), or unmarshal(JSON to XML) statement, as appropriate: In the XML DSL, this is written as follows: <route> <from uri="direct:marshal"/> <marshal> <xmljson/> </marshal> <to uri="mock:marshalResult"/> </route> In the Java DSL, this same route is expressed as: from("direct:marshal") .marshal().xmljson() .to("mock:marshalResult");   How it works... Using the Camel XML JSON Component simplifies translating your data between XML and JSON, making it convenient to use when you are dealing with REST endpoints. The XML JSON Data Format wraps around the Json-lib library, which provides the core translation capabilities, eliminating more boilerplate code from your integration flows. There's more... You may need to configure XML JSON if you want to fine-tune the output of your transformation. 
For example, consider the following JSON: [{"@category":"PROGRAMMING","title":{"@lang":"en","#text": "Apache Camel Developer's Cookbook"},"author":[ "Scott Cranton","Jakub Korab"],"year":"2013","price":"49.99"}] This will be converted as follows, by default, which may not be exactly what you want (notice the <a> and <e> elements): <?xml version="1.0" encoding="UTF-8"?> <a> <e category="PROGRAMMING"> <author> <e>Scott Cranton</e> <e>Jakub Korab</e> </author> <price>49.99</price> <title lang="en">Apache Camel Developer's Cookbook</title> <year>2013</year> </e> </a> To configure XML JSON to use <bookstore> as the root element instead of <a>, use <book> for the individual elements instead of <e>, and expand the multiple author values to use a sequence of <author> elements, you would need to tune the configuration of the Data Format before referencing it in your route. In the XML DSL, the definition of the Data Format and the route that uses it is written as follows: <dataFormats> <xmljson id="myXmlJson" rootName="bookstore" elementName="book" expandableProperties="author author"/> </dataFormats> <route> <from uri="direct:unmarshalBookstore"/> <unmarshal ref="myXmlJson"/> <to uri="mock:unmarshalResult"/> </route> In the Java DSL, the same thing is expressed as: XmlJsonDataFormat xmlJsonFormat = new XmlJsonDataFormat(); xmlJsonFormat.setRootName("bookstore"); xmlJsonFormat.setElementName("book"); xmlJsonFormat.setExpandableProperties( Arrays.asList("author", "author")); from("direct:unmarshalBookstore") .unmarshal(xmlJsonFormat) .to("mock:unmarshalBookstoreResult"); This will result in the previous JSON being unmarshalled as follows: <?xml version="1.0" encoding="UTF-8"?> <bookstore> <book category="PROGRAMMING"> <author>Scott Cranton</author> <author>Jakub Korab</author><price>49.99</price> <title lang="en">Apache Camel Developer's Cookbook</title> <year>2013</year> </book> </bookstore>   See also Camel XML JSON: http://camel.apache.org/xmljson.html Available Data Formats: http://camel.apache.org/data-format.html Json-lib: http://json-lib.sourceforge.net   Summary We saw how Apache Camel is flexible in allowing us to transform and convert messages in various formats. This makes Apache Camel an ideal choice for integrating different systems together. Resources for Article: Further resources on this subject: Drools Integration Modules: Spring Framework and Apache Camel [Article] Installing Apache Karaf [Article] Working with AMQP [Article]
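To try out the configuration above outside of the book's test harness, the following minimal sketch wires the same XmlJsonDataFormat into a standalone DefaultCamelContext and pushes the sample JSON from this recipe through the direct:unmarshalBookstore route. The class name and the main() harness are illustrative additions rather than part of the Camel Cookbook source; the Data Format settings and the endpoint name are taken from the recipe above.

import java.util.Arrays;
import org.apache.camel.CamelContext;
import org.apache.camel.ProducerTemplate;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.dataformat.xmljson.XmlJsonDataFormat;
import org.apache.camel.impl.DefaultCamelContext;

public class XmlJsonQuickTest {
    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();

        // Same Data Format configuration as shown in the recipe
        final XmlJsonDataFormat xmlJsonFormat = new XmlJsonDataFormat();
        xmlJsonFormat.setRootName("bookstore");
        xmlJsonFormat.setElementName("book");
        xmlJsonFormat.setExpandableProperties(Arrays.asList("author", "author"));

        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() throws Exception {
                from("direct:unmarshalBookstore")
                    .unmarshal(xmlJsonFormat);
            }
        });

        context.start();
        try {
            ProducerTemplate template = context.createProducerTemplate();
            String json = "[{\"@category\":\"PROGRAMMING\","
                    + "\"title\":{\"@lang\":\"en\",\"#text\":\"Apache Camel Developer's Cookbook\"},"
                    + "\"author\":[\"Scott Cranton\",\"Jakub Korab\"],"
                    + "\"year\":\"2013\",\"price\":\"49.99\"}]";
            // Expect the <bookstore>/<book> structure shown above
            String xml = template.requestBody("direct:unmarshalBookstore", json, String.class);
            System.out.println(xml);
        } finally {
            context.stop();
        }
    }
}

Running this prints the expanded bookstore XML, which is a quick way to confirm that rootName, elementName, and expandableProperties are doing what you expect before wiring the Data Format into a larger route.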

JBoss EAP6 Overview

Packt
19 Dec 2013
5 min read
(For more resources related to this topic, see here.) Understanding high availability To understand the term high availability, here is its definition from Wikipedia: "High availability is a system design approach and associated service implementation that ensures that a prearranged level of operational performance will be met during a contractual measurement period. Users want their systems, for example, hospitals, production computers, and the electrical grid to be ready to serve them at all times. If a user cannot access the system, it is said to be unavailable." In the IT field, when we mention the words "high availability", we usually think of the uptime of the server, and technologies such as clustering and load balancing can be used to achieve this. Clustering means to use multiple servers to form a group. From their perspective, users see the cluster as a single entity and access it as if it's just a single point. The following figure shows the structure of a cluster: To achieve the previously mentioned goal, we usually use a controller of the cluster, called load balancer, to sit in front of the cluster. Its job is to receive and dispatch user requests to a node inside the cluster, and the node will do the real work of processing the user requests. After the node processes the user request, the response will be sent to the load balancer, and the load balancer will send it back to the users. The following figure shows the workflow:     Besides load balancing user requests, the clustering system can also do failover inside itself. Failover means when a node has crashed, the load balancer can switch to other running nodes to process user requests. In a cluster, some nodes may fail during runtime. If this happens, the requests to the failed nodes should be redirected to the healthy nodes. The process is shown in the following figure: To make failover possible, the node in a cluster should be able to replicate user data from one to another. In JBoss EAP6, the Infinispan module, which is a data-grid solution provided by the JBoss community, does the web session replication. If one node fails, the user request could be redirected to another node; however, the session with the user won't be lost. The following figure illustrates failover: To achieve the previously mentioned goals, the JBoss community has provided us a powerful set of tools. In the next section we'll have an overview on it. JBoss EAP6 high availability As a Java EE application server, JBoss EAP6 uses modules coming from different open source projects: Web server (JBossWeb) EJB (JBoss EJB3) Web service (JBossWS/RESTEasy) Messaging (HornetQ) JPA and transaction management (Hibernate/Narayana) As we can see, JBoss EAP6 uses many more open source projects, and each part may have its own consideration to achieve the goal of high availability. Now let's have a brief on these parts with respect to high availability: JBoss Web, Apache httpd, mod_jk, and mod_cluster The clustering for a web server may be the most popular topic and is well understood by the majority. There are a lot of good solutions in the market. For JBoss EAP6, the solution it adopted is to use Apache httpd as the load balancer. httpd will dispatch the user requests to the EAP server. Red Hat has led two open source projects to work with httpd, which are called mod_jk and mod_cluster. In this article we'll learn how to use these two projects. 
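The load balancing and failover behavior described in this section can be summed up with a few lines of plain Java. The following sketch is purely conceptual; it is not mod_cluster or JBoss code, and the Node interface and class names are invented for illustration. It simply shows round-robin dispatch that skips nodes which are down, which is the essence of what the load balancer in the preceding figures does.

import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Conceptual illustration only: round-robin dispatch with failover,
// mirroring what the load balancer in front of the cluster does.
public class RoundRobinBalancer {

    /** A node in the cluster, from the load balancer's point of view. */
    public interface Node {
        boolean isAlive();              // health check
        String handle(String request);  // process the user request
    }

    private final List<Node> nodes;
    private final AtomicInteger next = new AtomicInteger();

    public RoundRobinBalancer(List<Node> nodes) {
        this.nodes = nodes;
    }

    /** Dispatch to the next healthy node; skip failed nodes (failover). */
    public String dispatch(String request) {
        for (int attempts = 0; attempts < nodes.size(); attempts++) {
            int index = Math.floorMod(next.getAndIncrement(), nodes.size());
            Node node = nodes.get(index);
            if (node.isAlive()) {
                return node.handle(request);
            }
        }
        throw new IllegalStateException("No healthy nodes available");
    }
}

In JBoss EAP6 this dispatching is, of course, performed by Apache httpd with mod_jk or mod_cluster rather than by application code; the sketch only illustrates the idea of skipping failed nodes and spreading requests across the healthy ones.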
EJB session bean JBoss EAP6 has provided the @org.jboss.ejb3.annotation.Clustered annotation that we can use on both the @Stateless and @Stateful session beans. The clustered annotation is JBoss EAP6/WildFly specific implementation. When using @Clustered with @Stateless, the session bean can be load balanced; and when @Clustered is used with the @Stateful bean, the state of the bean will be replicated in the cluster. JBossWS and RESTEasy JBoss EAP6 provides two web service solutions out of the box. One is JBossWS and the other is RESTEasy. JBossWS is a web service framework that implements the JAX-WS specification. RESTEasy is an implementation of the JAX-RS specification to help you to build RESTFul web services. HornetQ HornetQ is a high-performance messaging system provided by the JBoss community. The messaging system is designed to be asynchronous and has its own consideration on load balancing and failover. Hibernate and Narayana In the database and transaction management field, high availability is a huge topic. For example, each database vendor may have their own solutions on load balancing the database queries. For example, PostgreSQL has some open source solutions, for example, Slony and pgpool, which can let us replicate the database from master to slave and which distributes the user queries to different database nodes in a cluster. In the ORM layer, Hibernate also has projects such as Hibernate Shards that can deploy a data base in a distributed way. JGroups and JBoss Remoting JGroups and JBoss Remoting are the cornerstone of JBoss EAP6 clustering features, which enable it to support high availability. JGroups is a reliable communication system based on IP multicasting. JGroups is not limited to multicast and can use TCP too. JBoss Remoting is the underlying communication framework for multiple parts in JBoss EAP6. Summary In this article we learned the basic concepts about high availability and also had an overview of the basic functions of JBoss EAP6. This will help you in understanding JBoss EAP6 in a better way. Resources for Article: Further resources on this subject: Introduction to JBoss Clustering [Article] JBoss RichFaces 3.3 Supplemental Installation [Article] JBoss AS plug-in and the Eclipse Web Tools Platform [Article]
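To round off the EJB discussion above, here is a minimal sketch of a clustered stateless session bean. Only the two annotations come from the text; the bean name and its business method are hypothetical.

import javax.ejb.Stateless;
import org.jboss.ejb3.annotation.Clustered;

// Illustrative example only: the business method is hypothetical.
// @Clustered on a @Stateless bean allows its invocations to be load balanced
// across the cluster; on a @Stateful bean it would replicate the bean's state instead.
@Stateless
@Clustered
public class GreeterBean {
    public String greet(String name) {
        return "Hello, " + name;
    }
}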

Working with AMQP

Packt
18 Dec 2013
5 min read
(For more resources related to this topic, see here.) Broadcasting messages In this example we will see how to send the same message to a possibly large number of consumers. This is a typical messaging application, broadcasting to a huge number of clients. For example, when updating the scoreboard in a massive multiplayer game, or when publishing news in a social network application. In this article we are discussing both the producer and consumer implementation. Since it is very typical to have consumers using different technologies and programming languages, we are using Java, Python, and Ruby to show interoperability with AMQP. We are going to appreciate the benefits of having separated exchanges and queues in AMQP. Getting ready To use this article you will need to set up Java, Python, and Ruby environments as described. How to do it… To cook this article we are preparing four different pieces of code: The Java publisher The Java consumer The Python consumer The Ruby consumer To prepare a Java publisher: Declare a fanout exchange: channel.exchangeDeclare(myExchange, "fanout"); Send one message to the exchange: channel.basicPublish(myExchange, "", null, jsonmessage.getBytes()); Then to prepare a Java consumer: Declare the same fanout exchange declared by the producer: channel.exchangeDeclare(myExchange, "fanout"); Autocreate a new temporary queue: String queueName = channel.queueDeclare().getQueue(); Bind the queue to the exchange: channel.queueBind(queueName, myExchange, ""); Define a custom, non-blocking consumer. Consume messages by invoking channel.basicConsume() The source code of the Python consumer is very similar to the Java consumer, so there is no need to repeat the steps. In the Ruby consumer you need to use require "bunny" and then use the URI connection. We are now ready to mix it all together, to see the article in action: Start one instance of the Java producer; messages start getting published immediately. Start one or more instances of the Java/Python/Ruby consumer; the consumers receive only the messages sent while they are running. Stop one of the consumers and restart it after a short while; you will see that the consumer has lost the messages sent while it was down. How it works… Both the producer and the consumers are connected to RabbitMQ with a single connection, but the logical path of the messages is depicted in the following figure: In step 1 we have declared the exchange that we are using. The logic is the same as in the queue declaration: if the specified exchange doesn't exist, create it; otherwise, do nothing. The second argument of exchangeDeclare() is a string, specifying the type of the exchange, fanout in this case. In step 2 the producer sends one message to the exchange. You can just view it along with the other defined exchanges issuing the following command on the RabbitMQ command shell: rabbitmqctl list_exchanges The second argument in the call to channel.basicPublish() is the routing key, which is always ignored when used with a fanout exchange. The third argument, set to null, is the optional message property. The fourth argument is just the message itself. When we started one consumer, it created its own temporary queue (step 9). Using the channel.queueDeclare() empty overload, we are creating a nondurable, exclusive, autodelete queue with an autogenerated name. Launching a couple of consumers and issuing rabbitmqctl list_queues, we can see two queues, one per consumer, with their odd names, along with the persistent myFirstQueue as shown in the following screenshot: In step 5 we have bound the queues to myExchange. 
It is possible to monitor these bindings too, issuing the following command: rabbitmqctl list_bindings Bindings are a very important aspect of AMQP; messages are routed by exchanges to the bound queues, and buffered in the queues. Exchanges do not buffer messages; they are just logical elements. The fanout exchange routes messages by just placing a copy of them in each bound queue. So, if there are no bound queues, the messages are simply discarded; no consumer will ever receive them. As soon as we close one consumer, we implicitly destroy its private temporary queue (that's why the queues are autodelete; otherwise, these queues would be left behind unused, and the number of queues on the broker would increase indefinitely), and messages are no longer buffered for it. When we restart the consumer, it will create a new, independent queue and as soon as we bind it to myExchange, messages sent by the publisher will be buffered into this queue and pulled by the consumer itself. There's more… When RabbitMQ is started for the first time, it creates some predefined exchanges. Issuing rabbitmqctl list_exchanges, we can observe many existing exchanges, in addition to the one that we have defined in this article: All the amq.* exchanges listed here are already defined by all the AMQP-compliant brokers and can be used instead of defining your own exchanges; they do not need to be declared at all. We could have used amq.fanout in place of myLastnews.fanout_6, and this is a good choice for very simple applications. However, applications generally declare and use their own exchanges. See also With the overload used in the article, the exchange is non-autodelete (won't be deleted as soon as the last client detaches it) and non-durable (won't survive server restarts). You can find more available options and overloads at http://www.rabbitmq.com/releases/rabbitmq-java-client/current-javadoc/. Summary In this article, we are mainly using Java since this language is widely used in enterprise software development, integration, and distribution. RabbitMQ is a perfect fit in this environment. Resources for Article: Further resources on this subject: RabbitMQ Acknowledgements [Article] Getting Started with ZeroMQ [Article] Setting up GlassFish for JMS and Working with Message Queues [Article]
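For readers who want to see the consumer steps of this recipe in a single place, the following self-contained Java sketch uses the RabbitMQ Java client and follows the same call sequence described above (exchangeDeclare, queueDeclare, queueBind, basicConsume). The host name is a placeholder and the class name is invented for illustration; adjust the connection settings for your broker.

import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DefaultConsumer;
import com.rabbitmq.client.Envelope;

public class FanoutConsumer {
    public static void main(String[] args) throws Exception {
        String myExchange = "myLastnews.fanout_6";   // the exchange name used in this article

        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");                // placeholder: point at your broker
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        // Declare the same fanout exchange declared by the producer
        channel.exchangeDeclare(myExchange, "fanout");

        // Autocreate a temporary (exclusive, autodelete) queue and bind it to the exchange
        String queueName = channel.queueDeclare().getQueue();
        channel.queueBind(queueName, myExchange, "");

        // Non-blocking consumer: handleDelivery is invoked for each broadcast message
        channel.basicConsume(queueName, true, new DefaultConsumer(channel) {
            @Override
            public void handleDelivery(String consumerTag, Envelope envelope,
                                       AMQP.BasicProperties properties, byte[] body) {
                System.out.println("Received: " + new String(body));
            }
        });
        // The connection is left open so the consumer keeps receiving messages.
    }
}

Because the queue is exclusive and autodelete, stopping this program destroys its queue, which is exactly why messages published while the consumer is down are lost, as observed in the recipe.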

Hello OpenCL

Packt
18 Dec 2013
16 min read
(For more resources related to this topic, see here.) The Wikipedia definition says that, Parallel Computing is a form of computation in which many calculations are carried out simultaneously, operating on the principle that large problems can often be divided into smaller ones, which are then solved concurrently (in parallel). There are many Parallel Computing programming standards or API specifications, such as OpenMP, OpenMPI, Pthreads, and so on. This book is all about OpenCL Parallel Programming. In this article, we will start with a discussion on different types of parallel programming. We will first introduce you to OpenCL with different OpenCL components. We will also take a look at the various hardware and software vendors of OpenCL and their OpenCL installation steps. Finally, at the end of the article we will see an OpenCL program example SAXPY in detail and its implementation. Advances in computer architecture All over the 20th century computer architectures have advanced by multiple folds. The trend is continuing in the 21st century and will remain for a long time to come. Some of these trends in architecture follow Moore's Law. "Moore's law is the observation that, over the history of computing hardware, the number of transistors on integrated circuits doubles approximately every two years". Many devices in the computer industry are linked to Moore's law, whether they are DSPs, memory devices, or digital cameras. All the hardware advances would be of no use if there weren't any software advances. Algorithms and software applications grow in complexity, as more and more user interaction comes into play. An algorithm can be highly sequential or it may be parallelized, by using any parallel computing framework. Amdahl's Law is used to predict the speedup for an algorithm, which can be obtained given n threads. This speedup is dependent on the value of the amount of strictly serial or non-parallelizable code (B). The time T(n) an algorithm takes to finish when being executed on n thread(s) of execution corresponds to: T(n) = T(1) (B + (1-B)/n) Therefore the theoretical speedup which can be obtained for a given algorithm is given by : Speedup(n) = 1/(B + (1-B)/n) Amdahl's Law has a limitation, that it does not fully exploit the computing power that becomes available as the number of processing core increase. Gustafson's Law takes into account the scaling of the platform by adding more processing elements in the platform. This law assumes that the total amount of work that can be done in parallel, varies linearly with the increase in number of processing elements. Let an algorithm be decomposed into (a+b). The variable a is the serial execution time and variable b is the parallel execution time. Then the corresponding speedup for P parallel elements is given by: (a + P*b) Speedup = (a + P*b) / (a + b) Now defining α as a/(a+b), the sequential execution component, as follows, gives the speedup for P processing elements: Speedup(P) = P – α *(P - 1) Given a problem which can be solved using OpenCL, the same problem can also be solved on a different hardware with different capabilities. Gustafson's law suggests that with more number of computing units, the data set should also increase that is, "fixed work per processor". Whereas Amdahl's law suggests the speedup which can be obtained for the existing data set if more computing units are added, that is, "Fixed work for all processors". 
Let's take the following example: Let the serial component and parallel component of execution be of one unit each. In Amdahl's Law, the strictly serial component of code is B (equals 0.5). For two processors, the speedup is given by: Speedup(2) = 1 / (0.5 + (1 - 0.5) / 2) = 1.33 Similarly for four and eight processors, the speedup is given by: Speedup(4) = 1.6 and Speedup(8) = 1.77 Adding more processors does not help much; even as n tends to infinity, the maximum speedup that can be obtained is only 2. On the other hand, in Gustafson's law, Alpha = 1/(1+1) = 0.5 (which is also the serial component of code). The speedup for two processors is given by: Speedup(2) = 2 - 0.5*(2 - 1) = 1.5 Similarly for four and eight processors, the speedup is given by: Speedup(4) = 2.5 and Speedup(8) = 4.5 The following figure shows the workload scaling factor of Gustafson's law, when compared to Amdahl's law with a constant workload: Comparison of Amdahl's and Gustafson's Law OpenCL is all about parallel programming, and Gustafson's law fits this book very well as we will be dealing with OpenCL for data parallel applications. Workloads which are data parallel in nature can easily increase the data set and take advantage of scalable platforms by adding more compute units. For example, more pixels can be computed as more compute units are added. Different parallel programming techniques There are several different forms of parallel computing such as bit-level, instruction-level, data, and task parallelism. This book will largely focus on data and task parallelism using heterogeneous devices. We just now coined a term, heterogeneous devices. How do we tackle complex tasks "in parallel" using different types of computer architecture? Why do we need OpenCL when there are many (already defined) open standards for parallel computing? To answer this question, let us discuss the pros and cons of the different parallel computing frameworks. OpenMP OpenMP is an API that supports multi-platform shared memory multiprocessing programming in C, C++, and Fortran. It is prevalent only on multi-core computer platforms with a shared memory subsystem. A basic OpenMP example implementation of the OpenMP Parallel directive is as follows: #pragma omp parallel { body; } When you build the preceding code using the OpenMP shared library, libgomp would expand it to something similar to the following code: void subfunction (void *data) { use data; body; } setup data; GOMP_parallel_start (subfunction, &data, num_threads); subfunction (&data); GOMP_parallel_end (); void GOMP_parallel_start (void (*fn)(void *), void *data, unsigned num_threads) The OpenMP directives make it easy for the developer to modify existing code to exploit the multicore architecture. OpenMP, though being a great parallel programming tool, does not support parallel execution on heterogeneous devices, and the use of a multicore architecture with a shared memory subsystem does not make it cost effective. MPI Message Passing Interface (MPI) has an advantage over OpenMP in that it can run on either shared or distributed memory architectures. Distributed memory computers are less expensive than large shared memory computers. But it has its own drawbacks, with inherent programming and debugging challenges. One major disadvantage of the MPI parallel framework is that the performance is limited by the communication network between the nodes. 
Supercomputers have a massive number of processors which are interconnected using a high speed network connection or are in computer clusters, where computer processors are in close proximity to each other. In clusters, there is an expensive and dedicated data bus for data transfers across the computers. MPI is extensively used in most of these compute monsters called supercomputers. OpenACC The OpenACC Application Program Interface (API) describes a collection of compiler directives to specify loops and regions of code in standard C, C++, and Fortran to be offloaded from a host CPU to an attached accelerator, providing portability across operating systems, host CPUs, and accelerators. OpenACC is similar to OpenMP in terms of program annotation, but unlike OpenMP which can only be accelerated on CPUs, OpenACC programs can be accelerated on a GPU or on other accelerators also. OpenACC aims to overcome the drawbacks of OpenMP by making parallel programming possible across heterogeneous devices. OpenACC standard describes directives and APIs to accelerate the applications. The ease of programming and the ability to scale the existing codes to use the heterogeneous processor, warrantees a great future for OpenACC programming. CUDA Compute Unified Device Architecture (CUDA) is a parallel computing architecture developed by NVIDIA for graphics processing and GPU (General Purpose GPU) programming. There is a fairly good developer community following for the CUDA software framework. Unlike OpenCL, which is supported on GPUs by many vendors and even on many other devices such as IBM's Cell B.E. processor or TI's DSP processor and so on, CUDA is supported only for NVIDIA GPUs. Due to this lack of generalization, and focus on a very specific hardware platform from a single vendor, OpenCL is gaining traction. CUDA or OpenCL? CUDA is more proprietary and vendor specific but has its own advantages. It is easier to learn and start writing code in CUDA than in OpenCL, due to its simplicity. Optimization of CUDA is more deterministic across a platform, since less number of platforms are supported from a single vendor only. It has simplified few programming constructs and mechanisms. So for a quick start and if you are sure that you can stick to one device (GPU) from a single vendor that is NVIDIA, CUDA can be a good choice. OpenCL on the other hand is supported for many hardware from several vendors and those hardware vary extensively even in their basic architecture, which created the requirement of understanding a little complicated concepts before starting OpenCL programming. Also, due to the support of a huge range of hardware, although an OpenCL program is portable, it may lose optimization when ported from one platform to another. The kernel development where most of the effort goes, is practically identical between the two languages. So, one should not worry about which one to choose. Choose the language which is convenient. But remember your OpenCL application will be vendor agnostic. This book aims at attracting more developers to OpenCL. There are many libraries which use OpenCL programming for acceleration. Some of them are MAGMA, clAMDBLAS, clAMDFFT, BOLT C++ Template library, and JACKET which accelerate MATLAB on GPUs. Besides this, there are C++ and Java bindings available for OpenCL also. Once you've figured out how to write your important "kernels" it's trivial to port to either OpenCL or CUDA. A kernel is a computation code which is executed by an array of threads. 
CUDA also has a vast set of CUDA accelerated libraries, that is, CUBLAS, CUFFT, CUSPARSE, Thrust and so on. But it may not take a long time to port these libraries to OpenCL. Renderscripts Renderscripts is also an API specification which is targeted for 3D rendering and general purpose compute operations in an Android platform. Android apps can accelerate the performance by using these APIs. It is also a cross-platform solution. When an app is run, the scripts are compiled into a machine code of the device. This device can be a CPU, a GPU, or a DSP. The choice of which device to run it on is made at runtime. If a platform does not have a GPU, the code may fall back to the CPU. Only Android supports this API specification as of now. The execution model in Renderscripts is similar to that of OpenCL. Hybrid parallel computing model Parallel programming models have their own advantages and disadvantages. With the advent of many different types of computer architectures, there is a need to use multiple programming models to achieve high performance. For example, one may want to use MPI as the message passing framework, and then at each node level one might want to use, OpenCL, CUDA, OpenMP, or OpenACC. Besides all the above programming models many compilers such as Intel ICC, GCC, and Open64 provide auto parallelization options, which makes the programmers job easy and exploit the underlying hardware architecture without the need of knowing any parallel computing framework. Compilers are known to be good at providing instruction-level parallelism. But tackling data level or task level auto parallelism has its own limitations and complexities. Introduction to OpenCL OpenCL standard was first introduced by Apple, and later on became part of the open standards organization "Khronos Group". It is a non-profit industry consortium, creating open standards for the authoring, and acceleration of parallel computing, graphics, dynamic media, computer vision and sensor processing on a wide variety of platforms and devices. The goal of OpenCL is to make certain types of parallel programming easier, and to provide vendor agnostic hardware-accelerated parallel execution of code. OpenCL (Open Computing Language) is the first open, royalty-free standard for general-purpose parallel programming of heterogeneous systems. It provides a uniform programming environment for software developers to write efficient, portable code for high-performance compute servers, desktop computer systems, and handheld devices using a diverse mix of multi-core CPUs, GPUs, and DSPs. OpenCL gives developers a common set of easy-to-use tools to take advantage of any device with an OpenCL driver (processors, graphics cards, and so on) for the processing of parallel code. By creating an efficient, close-to-the-metal programming interface, OpenCL will form the foundation layer of a parallel computing ecosystem of platform-independent tools, middleware, and applications. We mentioned vendor agnostic, yes that is what OpenCL is about. The different vendors here can be AMD, Intel, NVIDIA, ARM, TI, and so on. The following diagram shows the different vendors and hardware architectures which use the OpenCL specification to leverage the hardware capabilities: The heterogeneous system The OpenCL framework defines a language to write "kernels". These kernels are functions which are capable of running on different compute devices. 
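Before looking at specific hardware, it is worth revisiting the Amdahl and Gustafson formulas discussed at the beginning of this article. The short Java sketch below simply evaluates both laws for the one-unit-serial, one-unit-parallel example used earlier; it is an illustrative calculation only, and the class and method names are our own.

// Illustrative only: evaluates the speedup formulas discussed earlier in this article.
public class SpeedupLaws {

    // Amdahl: Speedup(n) = 1 / (B + (1 - B) / n), where B is the strictly serial fraction
    static double amdahl(double b, int n) {
        return 1.0 / (b + (1.0 - b) / n);
    }

    // Gustafson: Speedup(P) = P - alpha * (P - 1), where alpha is the serial fraction a / (a + b)
    static double gustafson(double alpha, int p) {
        return p - alpha * (p - 1);
    }

    public static void main(String[] args) {
        double serialFraction = 0.5;   // one unit serial, one unit parallel
        for (int n : new int[] {2, 4, 8}) {
            System.out.printf("n=%d  Amdahl=%.2f  Gustafson=%.2f%n",
                    n, amdahl(serialFraction, n), gustafson(serialFraction, n));
        }
        // Prints approximately 1.33/1.50, 1.60/2.50, and 1.78/4.50
        // (the text quotes 1.77 for n=8, which is the same value truncated).
    }
}

The contrast in the output makes the point of the comparison figure: with a fixed workload the Amdahl curve flattens quickly, while scaling the data set with the processor count keeps the Gustafson curve growing.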
OpenCL defines an extended C language for writing compute kernels, and a set of APIs for creating and managing these kernels. The compute kernels are compiled with a runtime compiler, which compiles them on-the-fly during host application execution for the targeted device. This enables the host application to take advantage of all the compute devices in the system with a single set of portable compute kernels. Based on your interest and hardware availability, you might want to do OpenCL programming with a "host and device" combination of "CPU and CPU" or "CPU and GPU". Both have their own programming strategy. In CPUs you can run very large kernels as the CPU architecture supports out-of-order instruction level parallelism and have large caches. For the GPU you will be better off writing small kernels for better performance. Hardware and software vendors There are various hardware vendors who support OpenCL. Every OpenCL vendor provides OpenCL runtime libraries. These runtimes are capable of running only on their specific hardware architectures. Not only across different vendors, but within a vendor there may be different types of architectures which might need a different approach towards OpenCL programming. Now let's discuss the various hardware vendors who provide an implementation of OpenCL, to exploit their underlying hardware. Advanced Micro Devices, Inc. (AMD) With the launch of AMD A Series APU, one of industry's first Accelerated Processing Unit (APU), AMD is leading the efforts of integrating both the x86_64 CPU and GPU dies in one chip. It has four cores of CPU processing power, and also a four or five graphics SIMD engine, depending on the silicon part which you wish to buy. The following figure shows the block diagram of AMD APU architecture: AMD architecture diagram—© 2011, Advanced Micro Devices, Inc. An AMD GPU consist of a number of Compute Engines (CU) and each CU has 16 ALUs. Further, each ALU is a VLIW4 SIMD processor and it could execute a bundle of four or five independent instructions. Each CU could be issued a group of 64 work items which form the work group (wavefront). AMD Radeon ™ HD 6XXX graphics processors uses this design. The following figure shows the HD 6XXX series Compute unit, which has 16 SIMD engines, each of which has four processing elements: AMD Radeon HD 6xxx Series SIMD Engine—© 2011, Advanced Micro Devices, Inc. Starting with the AMD Radeon HD 7XXX series of graphics processors from AMD, there were significant architectural changes. AMD introduced the new Graphics Core Next (GCN) architecture. The following figure shows an GCN compute unit which has 4 SIMD engines and each engine is 16 lanes wide: GCN Compute Unit—© 2011, Advanced Micro Devices, Inc. A group of these Compute Units forms an AMD HD 7xxx Graphics Processor. In GCN, each CU includes four separate SIMD units for vector processing. Each of these SIMD units simultaneously execute a single operation across 16 work items, but each can be working on a separate wavefront. Apart from the APUs, AMD also provides discrete graphics cards. The latest family of graphics card, HD 7XXX, and beyond uses the GCN architecture. NVIDIA® One of NVIDIA GPU architectures is codenamed "Kepler". GeForce® GTX 680 is one Kepler architectural silicon part. Each Kepler GPU consists of different configurations of Graphics Processing Clusters (GPC) and streaming multiprocessors. 
The GTX 680 consists of four GPCs and eight SMXs as shown in the following figure: NVIDIA Kepler architecture—GTX 680, © NVIDIA® Kepler architecture is part of the GTX 6XX and GTX 7XX family of NVIDIA discrete cards. Prior to Kepler, NVIDIA had Fermi architecture which was part of the GTX 5XX family of discrete and mobile graphic processing units. Intel® Intel's OpenCL implementation is supported in the Sandy Bridge and Ivy Bridge processor families. Sandy Bridge family architecture is also synonymous with the AMD's APU. These processor architectures also integrated a GPU into the same silicon as the CPU by Intel. Intel changed the design of the L3 cache, and allowed the graphic cores to get access to the L3, which is also called as the last level cache. It is because of this L3 sharing that the graphics performance is good in Intel. Each of the CPUs including the graphics execution unit is connected via Ring Bus. Also each execution unit is a true parallel scalar processor. Sandy Bridge provides the graphics engine HD 2000, with six Execution Units (EU), and HD 3000 (12 EU), and Ivy Bridge provides HD 2500(six EU) and HD 4000 (16 EU). The following figure shows the Sandy bridge architecture with a ring bus, which acts as an interconnect between the cores and the HD graphics: Intel Sandy Bridge architecture—© Intel® ARM Mali™ GPUs ARM also provides GPUs by the name of Mali Graphics processors. The Mali T6XX series of processors come with two, four, or eight graphics cores. These graphic engines deliver graphics compute capability to entry level smartphones, tablets, and Smart TVs. The below diagram shows the Mali T628 graphics processor. ARM Mali—T628 graphics processor, © ARM Mali T628 has eight shader cores or graphic cores. These cores also support Renderscripts APIs besides supporting OpenCL. Besides the four key competitors, companies such as TI (DSP), Altera (FPGA), and Oracle are providing OpenCL implementations for their respective hardware. We suggest you to get hold of the benchmark performance numbers of the different processor architectures we discussed, and try to compare the performance numbers of each of them. This is an important first step towards comparing different architectures, and in the future you might want to select a particular OpenCL platform based on your application workload.

Moving Your Application to Production

Packt
16 Dec 2013
9 min read
(For more resources related to this topic, see here.) We will start by examining the Sencha Cmd compiler. Compiling with Sencha Cmd This section will focus on using Sencha Cmd to compile our Ext JS 4 application for deployment within a Web Archive (WAR) file. The goal of the compilation process is to create a single JavaScript file that contains all of the code needed for the application, including all the Ext JS 4 dependencies. The index.html file that was created during the application skeleton generation is structured as follows: <!DOCTYPE HTML> <html> <head> <meta charset="UTF-8"> <title>TTT</title> <!-- <x-compile> --> <!-- <x-bootstrap> --> <link rel="stylesheet" href="bootstrap.css"> <script src = "ext/ext-dev.js"></script> <script src = "bootstrap.js"></script> <!-- </x-bootstrap> --> <script src = "app.js"></script> <!-- </x-compile> --> </head> <body></body> </html> The open and close tags of the x-compile directive enclose the part of the index.html file where the Sencha Cmd compiler will operate. The only declarations that should be contained in this block are the script tags. The compiler will process all of the scripts within the x-compile directive, searching for dependencies based on the Ext.define, requires, or uses directives. An exception to this is the ext-dev.js file. This file is considered to be a "bootstrap" file for the framework and will not be processed in the same way. The compiler ignores the files in the x-bootstrap block and the declarations are removed from the final compiler-generated page. The first step in the compilation process is to examine and parse all the JavaScript source code and analyze any dependencies. To do this, the compiler needs to identify all the source folders in the application. Our application has two source folders: Ext JS 4 sources in webapp/ext/src and 3T application sources in webapp/app. These folder locations are specified using the -sdk and -classpath arguments in the compile command: sencha -sdk {path-to-sdk} compile -classpath={app-sources-folder} page -yui -in {index-page-to-compile} -out {output-file-location} For our 3T application the compile command is as follows: sencha -sdk ext compile -classpath=app page -yui -in index.html -out build/index.html This command performs the following actions: The Sencha Cmd compiler examines all the folders specified by the -classpath argument. The -sdk directory is automatically included for scanning. The page command then includes all of the script tags in index.html that are contained in the x-compile block. After identifying the content of the app directory and the index.html page, the compiler analyzes the JavaScript code and determines what is ultimately needed for inclusion in a single JavaScript file representing the application. A modified version of the original index.html file is written to build/index.html. All of the JavaScript files needed by the new index.html file are concatenated and compressed using the YUI Compressor, and written to the build/all-classes.js file. The sencha compile command must be executed from within the webapp directory, which is the root of the application and is the directory containing the index.html file. All the arguments supplied to the sencha compile command can then be relative to the webapp directory. Open a command prompt (or terminal window on a Mac) and navigate to the webapp directory of the 3T project. 
Executing the sencha compile command as shown earlier in this section will result in the following output: Opening the webapp/build folder in NetBeans should now show the two newly generated files: index.html and all-classes.js. The all-classes.js file will contain all the required Ext JS 4 classes in addition to all the 3T application classes. Attempting to open this file in NetBeans will result in the following warning: "The file seems to be too large to open safely...", but you can open the file in a text editor to see the following concatenated and minified content: Opening the build/index.html page in NetBeans will display the following screenshot: You can now open the build/index.html file in the browser after running the application, but the result may surprise you: The layout that is presented will depend on the browser, but regardless, you will see that the CSS styling is missing. The CSS files required by our application need to be moved outside the <!-- <x-compile> --> directive. But where are the styles coming from? It is now time to briefly delve into Ext JS 4 themes and the bootstrap.css file. Ext JS 4 theming Ext JS 4 themes leverage Syntactically Awesome StyleSheets (SASS) and Compass (http://compass-style.org/) to enable the use of variables and mixins in stylesheets. Almost all of the styles for Ext JS 4 components can be customized, including colors, fonts, borders, and backgrounds, by simply changing the SASS variables. SASS is an extension of CSS that allows you to keep large stylesheets well-organized; a very good overview and reference can be found at http://sass-lang.com/documentation/file.SASS_REFERENCE.html. Theming an Ext JS 4 application using Compass and SASS is beyond the scope of this book. Sencha Cmd allows easy integration with these technologies to build SASS projects; however, the SASS language and syntax involve a steep learning curve in their own right. Ext JS 4 theming is very powerful and minor changes to the existing themes can quickly change the appearance of your application. You can find out more about Ext JS 4 theming at http://docs.sencha.com/extjs/4.2.2/#!/guide/theming. The bootstrap.css file was created with the default theme definition during the generation of the application skeleton. The content of the bootstrap.css file is as follows: @import 'ext/packages/ext-theme-classic/build/resources/ext-theme-classic-all.css'; This file imports the ext-theme-classic-all.css stylesheet, which is the default "classic" Ext JS theme. All of the available themes can be found in the ext/packages directory of the Ext JS 4 SDK: Changing to a different theme is as simple as changing the bootstrap.css import. Switching to the neptune theme would require the following bootstrap.css definition: @import 'ext/packages/ext-theme-neptune/build/resources/ext-theme-neptune-all.css'; This modification will change the appearance of the application to the Ext JS "neptune" theme as shown in the following screenshot: We will change the bootstrap.css file definition to use the gray theme: @import 'ext/packages/ext-theme-gray/build/resources/ext-theme-gray-all.css'; This will result in the following appearance: You may experiment with different themes but should note that not all of the themes may be as complete as the classic theme; minor changes may be required to fully utilize the styling for some components. We will keep the gray theme for our index.html page. 
This will allow us to differentiate the (original) index.html page from the new ones that will be created in the following section using the classic theme. Compiling for production use Until now we have only worked with the Sencha Cmd-generated index.html file. We will now create a new index-dev.html file for our development environment. The development file will be a copy of the index.html file without the bootstrap.css file. We will reference the default classic theme in the index-dev.html file as follows: <!DOCTYPE HTML> <html> <head> <meta charset="UTF-8"> <title>TTT</title> <link rel="stylesheet" href="ext/packages/ext-theme-classic/build/resources/ext-theme-classic-all.css"> <link rel="stylesheet" href="resources/styles.css"> <!-- <x-compile> --> <!-- <x-bootstrap> --> <script src = "ext/ext-dev.js"></script> <script src = "bootstrap.js"></script> <!-- </x-bootstrap> --> <script src = "app.js"></script> <!-- </x-compile> --> </head> <body></body> </html> Note that we have moved the stylesheet definitions out of the <!-- <x-compile> --> directive. If you are using the downloaded source code for the book, you will have the resources/styles.css file and the resources directory structure available. The stylesheet and associated images in the resources directory contain the 3T logos and icons. We recommend you download the full source code now for completeness. We can now modify the Sencha Cmd compile command to use the index-dev.html file and output the compiled file to index-prod.html in the webapp directory: sencha -sdk ext compile -classpath=app page -yui -in index-dev.html -out index-prod.html This command will generate the index-prod.html file and the all-classes.js file in the webapp directory as shown in the following screenshot: The index-prod.html file references the stylesheets directly and uses the single compiled and minified all-classes.js file. You can now run the application and browse the index-prod.html file as shown in the following screenshot: You should notice a significant increase in the speed with which the logon window is displayed as all the JavaScript classes are loaded from the single all-classes.js file. The index-prod.html file will be used by developers to test the compiled all-classes.js file. Accessing the individual pages will now allow us to differentiate between the environments (in each case the logon window is displayed in the browser): The index.html page was generated by Sencha Cmd and has been configured to use the gray theme in bootstrap.css. This page is no longer needed for development; use index-dev.html instead. You can access this page at http://localhost:8080/index.html The index-dev.html page uses the classic theme stylesheet included outside the <!-- <x-compile> --> directive. Use this file for application development. Ext JS 4 will dynamically load source files as required. You can access this page at http://localhost:8080/index-dev.html The index-prod.html file is dynamically generated by the Sencha Cmd compile command. This page uses the all-classes.js all-in-one compiled JavaScript file with the classic theme stylesheet. You can access this page at http://localhost:8080/index-prod.html

Introducing Salesforce Chatter

Packt
21 Nov 2013
5 min read
(For more resources related to this topic, see here.) An overview of cloud computing Cloud computing is a subscription-based service that provides us with computing resources and networked storage space. It allows you to access your information anytime and from anywhere. The only requirement is that one must have an Internet connection. That's all. If you have a cloud-based setup, there is no need to maintain the server in the future. We can think of cloud computing as similar to our e-mail account. Think of your accounts such as Gmail, Hotmail, and so on. We just need a web browser and an Internet connection to access our information. We do not need separate software to access our e-mail account; it is different from the text editor installed on our computer. There is no need of physically moving storage and information; everything is up and running over there and not at our end. It is the same with cloud; we choose what has to be stored and accessed on cloud. You also don't have to pay an employee or contractor to maintain the server since it is based on the cloud. While traditional technologies and computer setup require you to be physically present at the same place to access information, cloud removes this barrier and allows us to access information from anywhere. Cloud computing helps businesses to perform better by allowing employees to work from remote locations (anywhere on the globe). It provides mobile access to information and flexibility to the working of a business organization. Depending on your needs, we can subscribe to the following type of clouds: Public cloud: This cloud can be accessed by subscribers who have an Internet connection and access to cloud storage Private cloud: This is accessed by a limited group of people or members of an organization Community cloud: This is a cloud that is shared between two or more organizations that have similar requirements Hybrid cloud: This is a combination of at least two clouds, where the clouds are a combination of public, private, or community Depending on your need, you have the ability to subscribe to a specific cloud provider. Cloud providers follow the pay-as-you-go method. It means that, if your technological needs change, you can purchase more and continue working on cloud. You do not have to worry about the storage configuration and management of servers because everything is done by your cloud provider. An overview of salesforce.com Salesforce.com is the leader in pay-as-you-go enterprise cloud computing. It specializes in CRM software products for sales and customer services and supplies products for building and running business apps. Salesforce has recently developed a social networking product called Chatter for its business apps. With the concept of no software or hardware required, we are up and running and seeing immediate positive effects on our business. It is a platform for creating and deploying apps for social enterprise. This does not require us to buy or manage servers, software, or hardware. Here you can focus fully on building apps that include mobile functionality, business processes, reporting, and search. All apps run on secure servers and proven services that scale, tune, and back up data automatically. Collaboration in the past Collaboration always plays a key role to improve business outcomes; it is a crucial necessity in any professional business. The central meaning of communication has changed over time. 
With changes in people's individual living situations as well as advancements in technology, how one communicates with the rest of the world has been altered. A century or two ago, people could communicate using smoke signals, carrier pigeons and drum beats, or speak to one another, that is, face-to-face communication. As the world and technology developed, we found that we could send longer messages from long distances with ease. This has caused a decline in face-to-face-interaction and a substantial growth in communication via technology. The old way of face-to-face interaction impacted the business process as there was a gap between the collaboration of the client, company, or employees situated in distant places. So it reduced the profit, ROI, as well as customer satisfaction. In the past, there was no faster way available for communication, so collaboration was a time-consuming task for business; its effect was the loss of client retention. Imagine a situation where a sales representative is near to closing a deal, but the decision maker is out of the office. In the past, there was no fast/direct way to communicate. Sometimes this lack of efficient communication impacted the business negatively, in addition to the loss of potential opportunities. Summary In this article we learned cloud computing and Salesforce.com, and discussed about collaboration in the new era by comparing it to the ancient age. We discussed and introduced Salesforce Chatter and its effect on ROI (Return of Investment). Resources for Article: Further resources on this subject: Salesforce CRM Functions [Article] Configuration in Salesforce CRM [Article] Django 1.2 E-commerce: Data Integration [Article]

Understanding outside-in

Packt
20 Nov 2013
6 min read
(For more resources related to this topic, see here.) In this category, developers pick a story or use case and drill into low-level unit tests. Basically, the objective is to obtain high-level design. The different system interfaces are identified and abstracted. Once different layers/interfaces are identified, unit-level coding can be started. Here, developers look at the system boundary and create a boundary interface depending upon a use case/story. Then, collaborating classes are created. Mock objects can act as a collaborating class. This approach of development relies on code for abstraction. Acceptance Test-Driven Development (ATDD) falls into this category. FitNesse fixtures provide support for ATDD. This is stateful. It is also known as the top-down approach. An example of ATDD As a health professional (doctor), I should get the health information of all my critical patients who are admitted as soon as I'm around 100 feet from the hospital. Here, how patient information is periodically stored in the server, how GPS tracks the doctor, how the system registers the doctor's communication medium (sends report to the doctor by e-mail or text), and so on can be mocked out. A test will be conducted to deal with how to fetch patient data for a doctor and send the report quickly. Gradually, other components will be coded. The preceding figure represents the high-level components of the user story. Here, Report Dispatcher is notified when a doctor approaches a hospital, then the dispatcher fetches patient information for the doctor and sends him the patient record. Patient Information, Outbound Messaging Interface, and Location Sensor (GPS) parts are mocked, and the dispatcher acts as a collaborator. Now we have a requirement to calculate the taxable income from the total income, calculate the payable tax, and send an e-mail to the client with details. We have to gather the annual income, medical expense, house loan premium, life insurance details, provident fund deposit, and apply the following rule to calculate the taxable income: Up to USD 100,000 is not taxable and can be invested as medical insurance, provident fund, or house loan principal payment Up to USD 150,000 can be used as house rent or house loan interest Now we will build a TaxConsultant application using the outside-in style. Following are the steps: Create a new test class com.packtpub.chapter04.outside. in.TaxConsultantTest under the test source folder. We need to perform three tasks; that are, calculate the taxable income, the payable tax, and send an e-mail to a client. We will create a class TaxConsultant to perform these tasks. We will be using Mockito to mock out external behavior. Add a test to check that when a client has investment, then our consultant deducts an amount from the total income and calculates the taxable income. Add a test when_deductable_present_then_taxable_income_is_less_than_the_total_income() to verify it so that it can calculate the taxable income: @Testpublic void when_deductable_present_then_taxable_income_is_less_than_the_total_income () {TaxConsultant consultant = new TaxConsultant();} Add the class under the src source folder. Now we have to pass different amounts to the consultant. 
Create a method consult() and pass the following values: @Testpublic void when_deductable_present_then_taxable_income_is_less_than_the_total_income () {TaxConsultant consultant = new TaxConsultant();double totalIncome = 1200000;double homeLoanInterest = 150000;double homeLoanPrincipal =20000;double providentFundSavings = 50000;double lifeInsurancePremium = 30000;consultant.consult(totalIncome,homeLoanInterest,homeLoanPrincipal,providentFundSavings,lifeInsurancePremium);} In the outside-in approach, we mock out objects with interesting behavior. We will mock out taxable income behavior and create an interface named TaxbleIncomeCalculator. This interface will have a method to calculate the taxable income. We read that a long parameter list is code smell; we will refactor it later: public interface TaxbleIncomeCalculator {double calculate(double totalIncome, double homeLoanInterest,double homeLoanPrincipal, double providentFundSavings, doublelifeInsurancePremium);} Pass this interface to TaxConsultant as the constructor argument: @Testpublic void when_deductable_present_then_taxable_income_is_less_than_the_total_income () {TaxbleIncomeCalculator taxableIncomeCalculator = null;TaxConsultant consultant = new TaxConsultant(taxableIncomeCalculator); We need a tax calculator to verify that behavior. Create an interface called TaxCalculator: public interface TaxCalculator {double calculate(double taxableIncome);} Pass this interface to TaxConsultant: TaxConsultant consultant = new TaxConsultant(taxableIncomeCalculator,taxCalculator); Now, time to verify the collaboration. We will use Mockito to create mock objects from the interfaces. For now, the @Mock annotation creates a proxy mock object. In the setUp method, we will use MockitoAnnotations.initMocks(this); to create the objects: @Mock TaxbleIncomeCalculator taxableIncomeCalculator;@Mock TaxCalculator taxCalculator;TaxConsultant consultant;@Beforepublic void setUp() {MockitoAnnotations.initMocks(this);consultant= new TaxConsultant(taxableIncomeCalculator,taxCalculator);} Now in test, verify that the consultant class calls TaxableIncomeCalculator and TaxableIncomeCalculator makes a call to TaxCalculator. Mockito has the verify method to test that: verify(taxableIncomeCalculator, only())calculate(eq(totalIncome), eq(homeLoanInterest),eq(homeLoanPrincipal), eq(providentFundSavings),eq(lifeInsurancePremium));verify(taxCalculator,only()).calculate(anyDouble()); Here, we are verifying that the consultant class delegates the call to mock objects. only() checks that the method was called at least once on the mock object. eq() checks that the value passed to the mock object's method is equal to some value. Here, the test will fail since we don't have the calls. We will add the following code to pass the test: public class TaxConsultant {private final TaxbleIncomeCalculator taxbleIncomeCalculator;private final TaxCalculator calculator;public TaxConsultant(TaxbleIncomeCalculatortaxableIncomeCalculator, TaxCalculator calc) {this.taxbleIncomeCalculator =taxableIncomeCalculator;this.calculator = calc;}public void consult(double totalIncome, double homeLoanInterest,double homeLoanPrincipal, double providentFundSavings, doublelifeInsurancePremium) {double taxableIncome = taxbleIncomeCalculator.calculate(totalIncome,homeLoanInterest, homeLoanPrincipal,providentFundSavings,lifeInsurancePremium);double payableTax= calculator.calculate(taxableIncome);}} The test is being passed; we can now add another delegator for e-mail and call it EmailSender. 
Our facade class is ready. Now we need to use TDD for each interface we extracted; similarly, we can apply TDD to TaxbleIncomeCalculator and EmailSender.

Summary

This article provided an overview of classical and mockist TDD. In classical TDD, real objects are used and integrated, and mocks are preferred only when a real object is not easy to instantiate. In mockist TDD, mocks have a higher priority than real objects.

Resources for Article:

Further resources on this subject:

Important features of Mockito [Article]
Testing your App [Article]
Mocking static methods (Simple) [Article]
article-image-code-interlude-signals-and-slots
Packt
19 Nov 2013
6 min read
Save for later

Code interlude – signals and slots

Packt
19 Nov 2013
6 min read
(For more resources related to this topic, see here.) Qt offers a better way: signals and slots. Like an event, the sending component generates a signal (in Qt parlance, the object emits a signal), which recipient objects may receive in a slot provided for the purpose. Qt objects may emit more than one signal, and signals may carry arguments; in addition, multiple Qt objects can have slots connected to the same signal, making it easy to arrange one-to-many notifications. Equally important, if no object is interested in a signal, it can be safely ignored, with no slots connected to it. Any object that inherits from QObject, Qt's base class for objects, can emit signals or provide slots for connection to signals. Under the hood, Qt provides extensions to C++ syntax for declaring signals and slots.

A simple example will help make this clear. The classic example you find in the Qt documentation is an excellent one, and we'll use it again here, with some extensions. Imagine you need a counter, that is, a container that holds an integer. In C++, you might write:

       class Counter {
       public:
           Counter() { m_value = 0; }
           int value() const { return m_value; }
           void setValue(int value);

       private:
           int m_value;
       };

The Counter class has a single private member, m_value, bearing its value. Clients can invoke value to obtain the counter's value, or set its value by invoking setValue with a new value. In Qt, using signals and slots, we write the class this way:

       #include <QObject>

       class Counter : public QObject {
           Q_OBJECT
       public:
           Counter() { m_value = 0; }
           int value() const { return m_value; }

       public slots:
           void setValue(int value);
           void increment();
           void decrement();

       signals:
           void valueChanged(int newValue);

       private:
           int m_value;
       };

This Counter class inherits from QObject, the base class for all Qt objects. All QObject subclasses must include the declaration Q_OBJECT as the first element of their definition; this macro expands to the Qt code implementing the subclass-specific glue necessary for the Qt object and signal-slot mechanism. The constructor remains the same, initializing our private member to zero. Similarly, the accessor method value remains the same, returning the current value of the counter.

An object's slots must be public, and are declared using the Qt extension to C++, public slots. This code defines three slots: a setValue slot, which accepts a new value for the counter, and the increment and decrement slots, which increment and decrement the value of the counter. Slots may take arguments, but do not return them; the communication between a signal and its slots is one way, initiating with the signal and terminating with the slot(s) connected to it.

The counter offers a single signal. Like slots, signals are declared using a Qt extension to C++, signals. In the example above, a Counter object emits the signal valueChanged with a single argument, the new value of the counter. A signal is a function signature, not a method; Qt's extensions to C++ use the type signatures of signals and slots to ensure type safety in signal-slot connections, a key advantage that signals and slots have over other decoupled messaging schemes. As developers, it's our responsibility to implement each slot in our class with whatever application logic makes sense.
The Counter class's slots look like this:

       void Counter::setValue(int newValue) {
           if (newValue != m_value) {
               m_value = newValue;
               emit valueChanged(newValue);
           }
       }

       void Counter::increment() {
           setValue(value() + 1);
       }

       void Counter::decrement() {
           setValue(value() - 1);
       }

We implement the setValue slot as an ordinary method, which is what all slots are at heart. The setValue slot takes a new value and assigns it to the Counter class's private member variable if the two values differ; it then emits the valueChanged signal, using the Qt extension emit, which triggers an invocation of the slots connected to that signal. This is a common pattern for slots that set object properties: test the property being set for equality with the new value, and only assign and emit the signal if the values are unequal.

If we had a button, say QPushButton, we could connect its clicked signal to the increment or decrement slot, so that a click on the button increments or decrements the counter. I'd do that using the QObject::connect method, like this:

       QPushButton* button = new QPushButton(tr("Increment"), this);
       Counter* counter = new Counter(this);
       QObject::connect(button, SIGNAL(clicked(void)),
                        counter, SLOT(increment(void)));

(For the Counter constructor to accept a parent this way, it would also need a QObject* parent parameter that it passes on to QObject's constructor.)

We first create the QPushButton and Counter objects. The QPushButton constructor takes a string, the label for the button, here the string Increment or its localized counterpart. Why do we pass this to each constructor? Qt provides parent-child memory management between QObjects and their descendants, easing cleanup when you're done using an object: when you free an object, Qt also frees any children of that object, so you don't have to. The parent-child relationship is set at construction time; I'm signaling to the constructors that when the object invoking this code is freed, the push button and counter may be freed as well. (Of course, the invoking class must also be a subclass of QObject for this to work.)

Next, I call QObject::connect, passing first the source object and the signal to be connected, and then the receiver object and the slot to which the signal should be sent. The types of the signal and the slot must match, and the signal and slot must be wrapped in the SIGNAL and SLOT macros, respectively.

Signals can also be connected to signals; when that happens, the signals are chained and trigger any slots connected to the downstream signals. For example, I could write:

       Counter a, b;
       QObject::connect(&a, SIGNAL(valueChanged(int)),
                        &b, SLOT(setValue(int)));

This connects counter b to counter a, so that any change in the value of counter a also changes the value of counter b (whose setValue slot then emits its own valueChanged signal, continuing the chain).

Signals and slots are used throughout Qt, both for user interface elements and to handle asynchronous operations, such as the arrival of data on network sockets and the results of HTTP transactions. Under the hood, signals and slots are very efficient, boiling down to function dispatch operations, so you shouldn't hesitate to use the abstraction in your own designs. Qt provides a special build tool, the meta-object compiler, which compiles the extensions to C++ that signals and slots require and generates the additional code necessary to implement the mechanism.

Summary

In this article, we learned how events are used to couple different objects: components offering data encapsulate that data in an event, and an event loop (or, more recently, an event listener) catches the event and performs some action. We also saw how Qt's signals and slots provide a more flexible alternative.
Resources for Article:

Further resources on this subject:

One-page Application Development [Article]
Android 3.0 Application Development: Multimedia Management [Article]
Creating and configuring a basic mobile application [Article]

article-image-enabling-and-configuring-snmp-windows
Packt
14 Nov 2013
5 min read
Save for later

Enabling and configuring SNMP on Windows

Packt
14 Nov 2013
5 min read
This article by Justin M. Brant, the author of SolarWinds Server & Application Monitor: Deployment and Administration, covers enabling and configuring SNMP on Windows.

(For more resources related to this topic, see here.)

The procedures in this article are not required pre-deployment, as it is possible to populate SolarWinds SAM with nodes after deployment; however, they are recommended. Even after deployment, you should still enable and configure advanced monitoring services on your vital nodes.

SolarWinds SAM uses three types of protocols to poll management data:

- Simple Network Management Protocol (SNMP): This is the most common network management service protocol. To utilize it, SNMP must be enabled and an SNMP community string must be assigned on the server, device, or application. The community string is essentially a password that is sent between a node and SolarWinds SAM. Once the community string is set and assigned, the node is permitted to expose management data to SolarWinds SAM in the form of variables. Currently, there are three versions of SNMP: v1, v2c, and v3. SolarWinds SAM uses SNMPv2c by default. To poll using SNMPv1, you must disable SNMPv2c on the device. Similarly, to poll using SNMPv3, you must configure your devices and SolarWinds SAM accordingly.
- Windows Management Instrumentation (WMI): This adds functionality by incorporating Windows-specific communication and security features. WMI comes preinstalled on Windows by default but is not automatically enabled and configured. WMI is not exclusive to Windows server platforms; it comes installed on all modern Microsoft operating systems and can also be used to poll desktop operating systems, such as Windows 7.
- Internet Control Message Protocol (ICMP): This is the most basic of the three; it simply sends echo requests (pings) to a server or device for status, response time, and packet loss. SolarWinds SAM uses ICMP in conjunction with SNMP and WMI. Nodes can be configured to poll with ICMP exclusively, but you then miss out on CPU, memory, and volume data. Some devices can only be polled with ICMP, although in most instances you will rarely use ICMP exclusively.

Trying to decide between SNMP and WMI? SNMP is more standardized and provides data that you may not be able to poll with WMI, such as interface information. In addition, polling a single WMI-enabled node uses roughly five times the resources required to poll the same node with SNMP.

This article will explain how to prepare for SolarWinds SAM deployment by enabling and configuring network management services and protocols on Windows servers.

In this article we will reference service accounts. A service account is an account created to hand off credentials to SolarWinds SAM. Service accounts are a best practice, primarily for security reasons, but also to ensure that user accounts do not become locked out.

Enabling and configuring SNMP on Windows

The procedures listed in this article explain how to enable SNMP and then assign a community string on Windows Server 2008 R2. All Windows server-related procedures in this book are performed on Windows Server 2008 R2; procedures vary slightly in other supported versions.

Installing an SNMP service on Windows

This procedure explains how to install the SNMP service on Windows Server 2008 R2:

1. Log in to a Windows server.
2. Navigate to Start Menu | Control Panel | Administrative Tools | Server Manager. In order to see Administrative Tools in the Control Panel, you may need to select View by: Small Icons or Large Icons.
3. Select Features and click on Add Features.
4. Check SNMP Services, then click on Next and Install.
5. Click on Close.

Assigning an SNMP community string on Windows

This procedure explains how to assign a community string on Windows Server 2008 R2 and ensure that the SNMP service is configured to run automatically on startup:

1. Log in to a Windows server.
2. Navigate to Start Menu | Control Panel | Administrative Tools | Services.
3. Double-click on SNMP Service.
4. On the General tab, select Automatic under Startup type.
5. Select the Agent tab and ensure Physical, Applications, Internet, and End-to-end are all checked under the Service area. Optionally, enter a Contact person and system Location.
6. Select the Security tab and click on the Add button under Accepted community names.
7. Enter a Community Name and click on the Add button. For example, we used S4MS3rv3r. We recommend using something secure, as this is a password. Community String and Community Name mean the same thing. READ ONLY community rights will normally suffice. A detailed explanation of community rights can be found on the author's blog: http://justinmbrant.blogspot.com/
8. Next, tick the Accept SNMP packets from these hosts radio button.
9. Click on the Add button underneath the radio buttons and add the IP of the server you have designated as the SolarWinds SAM host.

Once you complete these steps, the Security tab of the SNMP Service Properties dialog should reflect these settings. Notice that we used 192.168.1.3, as that is the IP of the server where we plan to deploy SolarWinds SAM.

Summary

In this article, we learned about the different types of protocols used to poll management data. We also learned how to install SNMP and assign an SNMP community string on Windows.

Resources for Article:

Further resources on this subject:

The OpenFlow Controllers [Article]
The GNS3 orchestra [Article]
Network Monitoring Essentials [Article]