How-To Tutorials - Application Development


Managing EAP in Domain Mode

Packt
19 Jul 2016
7 min read
This article by Francesco Marchioni, author of the book Mastering JBoss Enterprise Application Platform 7, dives deep into application server management using the domain mode and its main components, and discusses how to shift to advanced configurations that resemble real-world projects. The main topics covered are: domain mode breakdown, handy domain properties, and electing the domain controller.

Domain mode breakdown

Managing the application server in domain mode means, in a nutshell, controlling multiple servers from a single, centralized point of control. The servers that are part of the domain can span multiple machines (or even the cloud), and they can be grouped with similar servers of the domain to share a common configuration. To give this some structure, we will break the domain components down into two main categories:

Physical components: These are the domain elements that can be identified with a Java process running on the operating system
Logical components: These are the domain elements which can span across several physical components

Domain physical components

When you start the application server through the domain.sh script, you will be able to identify the following processes:

Host controller: Each domain installation contains a host controller. This is a Java process that is in charge of starting and stopping the servers that are defined within the host.xml file. The host controller is only aware of the items that are specific to the local physical installation, such as the domain controller host and port, the JVM settings of the servers, or their system properties.
Domain controller: One host controller of the domain (and only one) is configured to act as the domain controller. This basically means two things: keeping the domain configuration (in the domain.xml file) and assisting the host controllers in managing the servers of the domain.
Servers: Each host controller can contain any number of servers, which are the actual server instances. These server instances cannot be started autonomously; the host controller is in charge of starting and stopping individual servers when the domain controller commands it to.

If you start the default domain configuration on a Linux machine, these processes show up in your operating system's process table: the process controller is identified by the [Process Controller] label, while the host controller corresponds to the [Host Controller] label. Each server shows up in the process table with the name defined in the host.xml file. You can use common operating system commands such as grep to further restrict the search to a specific process.

Domain logical components

A domain configuration with only physical elements in it would not add much over a plain set of standalone servers. The following components abstract the domain definition, making it dynamic and flexible:

Server group: A server group is a collection of servers. Server groups are defined in the domain.xml file, hence they don't have any reference to an actual host controller installation. You can use a server group to share configuration and deployments across a group of servers.
Profile: A profile is an EAP configuration. A domain can hold as many profiles as you need. Out of the box, the following configurations are provided:

default: This configuration matches the standalone.xml configuration (in standalone mode), hence it does not include JMS, IIOP, or HA.
full: This configuration matches the standalone-full.xml configuration (in standalone mode), hence it adds JMS and OpenJDK IIOP to the default server.
ha: This configuration matches the standalone-ha.xml configuration (in standalone mode), so it enhances the default configuration with clustering (HA).
full-ha: This configuration matches the standalone-full-ha.xml configuration (in standalone mode), hence it includes JMS, IIOP, and HA.
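To make the relationship between server groups and profiles concrete, the following is a minimal sketch of a server group definition as it appears in domain.xml; the group name is illustrative, while the profile and socket binding group names follow the default EAP 7 domain configuration:

    <server-groups>
        <server-group name="main-server-group" profile="full">
            <jvm name="default">
                <heap size="64m" max-size="512m"/>
            </jvm>
            <socket-binding-group ref="full-sockets"/>
        </server-group>
    </server-groups>

Every server that a host controller assigns to main-server-group boots with the full profile and the full-sockets socket bindings, which is how a single change in domain.xml propagates to all members of the group.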
Handy domain properties

So far we have learned about the default configuration files used by JBoss EAP and the location where they are placed. These settings can, however, be varied by means of system properties. The following options customize the domain configuration file names:

--domain-config: The domain configuration file (default domain.xml)
--host-config: The host configuration file (default host.xml)

On the other hand, the following properties adjust the domain directory structure:

jboss.domain.base.dir: The base directory for domain content
jboss.domain.config.dir: The base configuration directory
jboss.domain.data.dir: The directory used for persistent data file storage
jboss.domain.log.dir: The directory containing the host-controller.log and process-controller.log files
jboss.domain.temp.dir: The directory used for temporary file storage
jboss.domain.deployment.dir: The directory used to store deployed content
jboss.domain.servers.dir: The directory containing the managed server instances

For example, you can start EAP 7 in domain mode using the domain configuration file mydomain.xml and the host file named myhost.xml, based on the base directory /home/jboss/eap7domain, using the following command:

$ ./domain.sh --domain-config=mydomain.xml --host-config=myhost.xml -Djboss.domain.base.dir=/home/jboss/eap7domain
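These properties also come in handy when you run more than one host controller on a single machine (a scenario touched on at the end of this article), since each process then needs its own directories and management ports. Purely as an illustration, not taken from the book, and with a hypothetical host file name and path, a second host controller could be started like this:

$ ./domain.sh --host-config=host-slave.xml -Djboss.domain.base.dir=/home/jboss/eap7domain2 -Djboss.management.native.port=19999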
Electing the domain controller

Before creating your first domain, we will look in more detail at the process that connects one or more host controllers to one domain controller, and at how a host controller is elected to be the domain controller. The physical topology of the domain is stored in the host.xml file. At the top of this file you will find the host controller name, which makes each host controller unique:

<host name="master">

One of the host controllers will be configured to act as the domain controller. This is done in the domain-controller section with the following block, which states that the domain controller is the host controller itself (hence, local):

<domain-controller> <local/> </domain-controller>

All other host controllers will connect to the domain controller, using the following example configuration, which uses the jboss.domain.master.address and jboss.domain.master.port properties to specify the domain controller address and port:

<domain-controller> <remote protocol="remote" host="${jboss.domain.master.address}" port="${jboss.domain.master.port:9999}" security-realm="ManagementRealm"/> </domain-controller>

The communication between the host controllers and the domain controller happens behind the scenes through a management native port that is also defined in the host.xml file:

<management-interfaces> <native-interface security-realm="ManagementRealm"> <socket interface="management" port="${jboss.management.native.port:9999}"/> </native-interface> <http-interface security-realm="ManagementRealm" http-upgrade-enabled="true"> <socket interface="management" port="${jboss.management.http.port:9990}"/> </http-interface> </management-interfaces>

The other highlighted attribute is the management HTTP port, which can be used by the administrator to reach the domain controller. This port is especially relevant if the host controller is also the domain controller. Both sockets use the management interface, which is defined in the interfaces section of the host.xml file and exposes the domain controller on a network-available address:

<interfaces> <interface name="management"> <inet-address value="${jboss.bind.address.management:127.0.0.1}"/> </interface> <interface name="public"> <inet-address value="${jboss.bind.address:127.0.0.1}"/> </interface> </interfaces>

If you want to run multiple host controllers on the same machine, you need to provide a unique jboss.management.native.port for each host controller, or a different jboss.bind.address.management.

Summary

In this article we have covered the essentials of the domain mode breakdown, handy domain properties, and electing the domain controller.

Containerizing a Web Application with Docker Part 1

Darwin Corn
10 Jun 2016
4 min read
Congratulations, you've written a web application! Now what? Part one of this post deals with steps to take after development, specifically the creation of a Docker image that contains the application. In part two, I'll lay out deploying that image to the Google Cloud Platform, as well as some further reading that'll help you descend into the rabbit hole that is DevOps.

For demonstration purposes, let's say that you're me and you want to share your adventures in TrapRap and Death Metal (not simultaneously, thankfully!) with the world. I've written a simple Ember frontend for this purpose, and through the course of this post I will explain how I go about containerizing it. Of course, the beauty of this procedure is that it will work with any frontend application, and you are certainly welcome to Bring Your Own Code. Everything I use is publicly available on GitHub, however, and you're welcome to work through this post with the material presented as well.

So, I've got this web app. You can get it by cloning it into wherever you keep your source code:

$ git clone https://github.com/ndarwincorn/docker-demo.git

You'll need ember-cli and some familiarity with Ember to customize it yourself, or you can just cut to the chase and build the Docker image, which is what I'm going to do in this post. I'm using Docker 1.10, but there's no reason this wouldn't work on a Mac running Docker Toolbox (or even Boot2Docker, but don't quote me on that) or a less bleeding-edge Linux distro. Since installing Docker is well documented, I won't get into that here and will continue with the assumption that you have a working, up-to-date Docker installed on your machine, and that the Docker daemon is running.

If you're working with your own app, feel free to skip below to my explanation of the process and then come back here once you've got a Dockerfile in the root of your application. In the root of the application, run the following (make sure you don't have any locally installed web servers listening on port 80 already):

# docker build -t docker-demo .
# docker run -d -p 80:80 --name demo docker-demo

Once the command finishes by printing a container ID, launch a web browser and navigate to http://localhost. Hey! Now you can listen to my music served from a Linux container running on your very own computer.

How did we accomplish this? Let's take it piece by piece (here's where to start reading again if you've approached this article with your own app). I created a simple Dockerfile based on the official Nginx image, because I have a deep-seated mistrust of Canonical and don't want to build on an Ubuntu base image here. Here's what it looks like in my project:

docker-demo/Dockerfile

FROM nginx
COPY dist /usr/share/nginx/html

Running the docker build command reads the Dockerfile and uses it to configure a Docker image based on the nginx image. During the build, it copies the contents of the dist folder in my project to /usr/share/nginx/html in the container, which is the directory the image's default Nginx configuration serves. The -t flag tells Docker to 'tag' (name) the image we've just created as 'docker-demo'. The docker run command takes that image and builds a container from it. The -d flag is short for 'detach': it runs the nginx command built into the image in the background and leaves the container running. The -p flag maps a port on the host to a port in the container, and --name names the container for later reference.
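As a quick aside that isn't part of the original walkthrough: because we gave the container a name, the standard Docker CLI commands are all you need to manage it afterwards, for example:

# docker ps
# docker logs demo
# docker stop demo
# docker start demo
# docker rm demo

docker ps confirms the container is running, docker logs prints the Nginx output captured from the container, and the stop/start/rm commands control its lifecycle (rm only works once the container is stopped, or with the -f flag). The container ID printed by docker run works in place of the name in all of these commands.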
As mentioned, the docker run command should have returned a container ID, which can be used in place of the name to manipulate the container later. In part two, I'll show you how to push the image we created to the Google Cloud Platform and then launch it as a container in a specially purposed VM on their Compute Engine.

About the Author

Darwin Corn is a Systems Analyst for the Consumer Direct Care Network. He is a mid-level professional with diverse experience in the Information Technology world.


Web Server Development

Packt
15 Apr 2016
24 min read
In this article by Holger Brunn, Alexandre Fayolle, and Daniel Eufémio Gago Reis, the authors of the book, Odoo Development Cookbook, have discussed how to deploy the web server in Odoo. In this article, we'll cover the following topics: Make a path accessible from the network Restrict access to web accessible paths Consume parameters passed to your handlers Modify an existing handler Using the RPC API (For more resources related to this topic, see here.) Introduction We'll introduce the basics of the web server part of Odoo in this article. Note that this article covers the fundamental pieces. All of Odoo's web request handling is driven by the Python library werkzeug (http://werkzeug.pocoo.org). While the complexity of werkzeug is mostly hidden by Odoo's convenient wrappers, it is an interesting read to see how things work under the hood. Make a path accessible from the network In this recipe, we'll see how to make an URL of the form http://yourserver/path1/path2 accessible to users. This can either be a web page or a path returning arbitrary data to be consumed by other programs. In the latter case, you would usually use the JSON format to consume parameters and to offer you data. Getting ready We'll make use of a ready-made library.book model. We want to allow any user to query the full list of books. Furthermore, we want to provide the same information to programs via a JSON request. How to do it… We'll need to add controllers, which go into a folder called controllers by convention. Add a controllers/main.py file with the HTML version of our page: from openerp import http from openerp.http import request class Main(http.Controller): @http.route('/my_module/books', type='http', auth='none') def books(self): records = request.env['library.book']. sudo().search([]) result = '<html><body><table><tr><td>' result += '</td></tr><tr><td>'.join( records.mapped('name')) result += '</td></tr></table></body></html>' return result Add a function to serve the same information in the JSON format @http.route('/my_module/books/json', type='json', auth='none') def books_json(self): records = request.env['library.book']. sudo().search([]) return records.read(['name']) Add the file controllers/__init__.py: from . import main Add controllers to your __init__.py addon: from . import controllers After restarting your server, you can visit /my_module/books in your browser and get presented with a flat list of book names. To test the JSON-RPC part, you'll have to craft a JSON request. A simple way to do that would be using the following command line to receive the output on the command line: curl -i -X POST -H "Content-Type: application/json" -d "{}" localhost:8069/my_module/books/json If you get 404 errors at this point, you probably have more than one database available on your instance. In this case, it's impossible for Odoo to determine which database is meant to serve the request. Use the --db-filter='^yourdatabasename$' parameter to force using exact database you installed the module in. Now the path should be accessible. How it works… The two crucial parts here are that our controller is derived from openerp.http.Controller and that the methods we use to serve content are decorated with openerp.http.route. Inheriting from openerp.http.Controller registers the controller with Odoo's routing system in a similar way as models are registered by inheriting from openerp.models.Model; also, Controller has a meta class that takes care of this. 
In general, paths handled by your addon should start with your addon's name to avoid name clashes. Of course, if you extend some addon's functionality, you'll use this addon's name. openerp.http.route The route decorator allows us to tell Odoo that a method is to be web accessible in the first place, and the first parameter determines on which path it is accessible. Instead of a string, you can also pass a list of strings in case you use the same function to serve multiple paths. The type argument defaults to http and determines what type of request is to be served. While strictly speaking JSON is HTTP, declaring the second function as type='json' makes life a lot easier, because Odoo then handles type conversions itself. Don't worry about the auth parameter for now, it will be addressed in recipe Restrict access to web accessible paths. Return values Odoo's treatment of the functions' return values is determined by the type argument of the route decorator. For type='http', we usually want to deliver some HTML, so the first function simply returns a string containing it. An alternative is to use request.make_response(), which gives you control over the headers to send in the response. So to indicate when our page was updated the last time, we might change the last line in books() to the following: return request.make_response( result, [ ('Last-modified', email.utils.formatdate( ( fields.Datetime.from_string( request.env['library.book'].sudo() .search([], order='write_date desc', limit=1) .write_date) - datetime.datetime(1970, 1, 1) ).total_seconds(), usegmt=True)), ]) This code sends a Last-modified header along with the HTML we generated, telling the browser when the list was modified for the last time. We extract this information from the write_date field of the library.book model. In order for the preceding snippet to work, you'll have to add some imports on the top of the file: import email import datetime from openerp import fields You can also create a Response object of werkzeug manually and return that, but there's little gain for the effort. Generating HTML manually is nice for demonstration purposes, but you should never do this in production code. Always use templates as appropriate and return them by calling request.render(). This will give you localization for free and makes your code better by separating business logic from the presentation layer. Also, templates provide you with functions to escape data before outputting HTML. The preceding code is vulnerable to cross-site-scripting attacks if a user manages to slip a script tag into the book name, for example. For a JSON request, simply return the data structure you want to hand over to the client, Odoo takes care of serialization. For this to work, you should restrict yourself to data types that are JSON serializable, which are roughly dictionaries, lists, strings, floats and integers. openerp.http.request The request object is a static object referring to the currently handled request, which contains everything you need to take useful action. Most important is the property request.env, which contains an Environment object which is just the same as in self.env for models. This environment is bound to the current user, which is none in the preceding example because we used auth='none'. Lack of a user is also why we have to sudo() all our calls to model methods in the example code. If you're used to web development, you'll expect session handling, which is perfectly correct. 
Use request.session for an OpenERPSession object (which is quite a thin wrapper around the Session object of werkzeug), and request.session.sid to access the session id. To store session values, just treat request.session as a dictionary: request.session['hello'] = 'world' request.session.get('hello') Note that storing data in the session is not different from using global variables. Use it only if you must - that is usually the case for multi request actions like a checkout in the website_sale module. And also in this case, handle all functionality concerning sessions in your controllers, never in your modules. There's more… The route decorator can have some extra parameters to customize its behavior further. By default, all HTTP methods are allowed, and Odoo intermingles with the parameters passed. Using the parameter methods, you can pass a list of methods to accept, which usually would be one of either ['GET'] or ['POST']. To allow cross origin requests (browsers block AJAX and some other types of requests to domains other than where the script was loaded from for security and privacy reasons), set the cors parameter to * to allow requests from all origins, or some URI to restrict requests to ones originating from this URI. If this parameter is unset, which is the default, the Access-Control-Allow-Origin header is not set, leaving you with the browser's standard behavior. In our example, we might want to set it on /my_module/books/json in order to allow scripts pulled from other websites accessing the list of books. By default, Odoo protects certain types of requests from an attack known as cross-site request forgery by passing a token along on every request. If you want to turn that off, set the parameter csrf to False, but note that this is a bad idea in general. See also If you host multiple Odoo databases on the same instance and each database has different web accessible paths on possibly multiple domain names per database, the standard regular expressions in the --db-filter parameter might not be enough to force the right database for every domain. In that case, use the community module dbfilter_from_header from https://github.com/OCA/server-tools in order to configure the database filters on proxy level. To see how using templates makes modularity possible, see recipe Modify an existing handler later in the article. Restrict access to web accessible paths We'll explore the three authentication mechanisms Odoo provides for routes in this recipe. We'll define routes with different authentication mechanisms in order to show their differences. Getting ready As we extend code from the previous recipe, we'll also depend on the library.book model, so you should get its code correct in order to proceed. 
How to do it… Define handlers in controllers/main.py: Add a path that shows all books: @http.route('/my_module/all-books', type='http', auth='none') def all_books(self): records = request.env['library.book'].sudo().search([]) result = '<html><body><table><tr><td>' result += '</td></tr><tr><td>'.join( records.mapped('name')) result += '</td></tr></table></body></html>' return result Add a path that shows all books and indicates which was written by the current user, if any: @http.route('/my_module/all-books/mark-mine', type='http', auth='public') def all_books_mark_mine(self): records = request.env['library.book'].sudo().search([]) result = '<html><body><table>' for record in records: result += '<tr>' if record.author_ids & request.env.user.partner_id: result += '<th>' else: result += '<td>' result += record.name if record.author_ids & request.env.user.partner_id: result += '</th>' else: result += '</td>' result += '</tr>' result += '</table></body></html>' return result Add a path that shows the current user's books: @http.route('/my_module/all-books/mine', type='http', auth='user') def all_books_mine(self): records = request.env['library.book'].search([ ('author_ids', 'in', request.env.user.partner_id.ids), ]) result = '<html><body><table><tr><td>' result += '</td></tr><tr><td>'.join( records.mapped('name')) result += '</td></tr></table></body></html>' return result With this code, the paths /my_module/all_books and /my_module/all_books/mark_mine look the same for unauthenticated users, while a logged in user sees her books in a bold font on the latter path. The path /my_module/all-books/mine is not accessible at all for unauthenticated users. If you try to access it without being authenticated, you'll be redirected to the login screen in order to do so. How it works… The difference between authentication methods is basically what you can expect from the content of request.env.user. For auth='none', the user record is always empty, even if an authenticated user is accessing the path. Use this if you want to serve content that has no dependencies on users, or if you want to provide database agnostic functionality in a server wide module. The value auth='public' sets the user record to a special user with XML ID, base.public_user, for unauthenticated users, and to the user's record for authenticated ones. This is the right choice if you want to offer functionality to both unauthenticated and authenticated users, while the authenticated ones get some extras, as demonstrated in the preceding code. Use auth='user' to be sure that only authenticated users have access to what you've got to offer. With this method, you can be sure request.env.user points to some existing user. There's more… The magic for authentication methods happens in the ir.http model from the base addon. For whatever value you pass to the auth parameter in your route, Odoo searches for a function called _auth_method_<yourvalue> on this model, so you can easily customize this by inheriting this model and declaring a method that takes care of your authentication method of choice. 
As an example, we provide an authentication method base_group_user which enforces a currently logged in user who is a member of the group with XML ID, base.group_user: from openerp import exceptions, http, models from openerp.http import request class IrHttp(models.Model): _inherit = 'ir.http' def _auth_method_base_group_user(self): self._auth_method_user() if not request.env.user.has_group('base.group_user'): raise exceptions.AccessDenied() Now you can say auth='base_group_user' in your decorator and be sure that users running this route's handler are members of this group. With a little trickery you can extend this to auth='groups(xmlid1,…)', the implementation of this is left as an exercise to the reader, but is included in the example code. Consume parameters passed to your handlers It's nice to be able to show content, but it's better to show content as a result of some user input. This recipe will demonstrate the different ways to receive this input and react to it. As the recipes before, we'll make use of the library.book model. How to do it… First, we'll add a route that expects a traditional parameter with a book's ID to show some details about it. Then, we'll do the same, but we'll incorporate our parameter into the path itself: Add a path that expects a book's ID as parameter: @http.route('/my_module/book_details', type='http', auth='none') def book_details(self, book_id): record = request.env['library.book'].sudo().browse( int(book_id)) return u'<html><body><h1>%s</h1>Authors: %s' % ( record.name, u', '.join(record.author_ids.mapped( 'name')) or 'none', ) Add a path where we can pass the book's ID in the path @http.route("/my_module/book_details/<model('library.book') :book>", type='http', auth='none') def book_details_in_path(self, book): return self.book_details(book.id) If you point your browser to /my_module/book_details?book_id=1, you should see a detail page of the book with ID 1. If this doesn't exist, you'll receive an error page. The second handler allows you to go to /my_module/book_details/1 and view the same page. How it works… By default, Odoo (actually werkzeug) intermingles with GET and POST parameters and passes them as keyword argument to your handler. So by simply declaring your function as expecting a parameter called book_id, you introduce this parameter as either GET (the parameter in the URL) or POST (usually passed by forms with your handler as action) parameter. Given that we didn't add a default value for this parameter, the runtime will raise an error if you try to access this path without setting the parameter. The second example makes use of the fact that in a werkzeug environment, most paths are virtual anyway. So we can simply define our path as containing some input. In this case, we say we expect the ID of a library.book as the last component of the path. The name after the colon is the name of a keyword argument. Our function will be called with this parameter passed as keyword argument. Here, Odoo takes care of looking up this ID and delivering a browse record, which of course only works if the user accessing this path has appropriate permissions. Given that book is a browse record, we can simply recycle the first example's function by passing book.id as parameter book_id to give out the same content. There's more… Defining parameters within the path is a functionality delivered by werkzeug, which is called converters. 
The model converter is added by Odoo, which also defines the converter, models, that accepts a comma separated list of IDs and passes a record set containing those IDs to your handler. The beauty of converters is that the runtime coerces the parameters to the expected type, while you're on your own with normal keyword parameters. These are delivered as strings and you have to take care of the necessary type conversions yourself, as seen in the first example. Built-in werkzeug converters include int, float, and string, but also more intricate ones such as path, any, or uuid. You can look up their semantics at http://werkzeug.pocoo.org/docs/0.11/routing/#builtin-converters. See also Odoo's custom converters are defined in ir_http.py in the base module and registered in the _get_converters method of ir.http. As an exercise, you can create your own converter that allows you to visit the /my_module/book_details/Odoo+cookbook page to receive the details of this book (if you added it to your library before). Modify an existing handler When you install the website module, the path /website/info displays some information about your Odoo instance. In this recipe, we override this in order to change this information page's layout, but also to change what is displayed. Getting ready Install the website module and inspect the path /website/info. Now craft a new module that depends on website and uses the following code. How to do it… We'll have to adapt the existing template and override the existing handler: Override the qweb template in a file called views/templates.xml: <?xml version="1.0" encoding="UTF-8"?> <odoo> <template id="show_website_info" inherit_id="website.show_website_info"> <xpath expr="//dl[@t-foreach='apps']" position="replace"> <table class="table"> <tr t-foreach="apps" t-as="app"> <th> <a t-att-href="app.website"> <t t-esc="app.name" /></a> </th> <td><t t-esc="app.summary" /></td> </tr> </table> </xpath> </template> </odoo> Override the handler in a file called controllers/main.py: from openerp import http from openerp.addons.website.controllers.main import Website class Website(Website): @http.route() def website_info(self): result = super(Website, self).website_info() result.qcontext['apps'] = result.qcontext[ 'apps'].filtered( lambda x: x.name != 'website') return result Now when visiting the info page, we'll only see a filtered list of installed applications, and in a table as opposed to the original definition list. How it works In the first step, we override an existing QWeb template. In order to find out which that is, you'll have to consult the code of the original handler. Usually, it will end with the following command line, which tells you that you need to override template.name: return request.render('template.name', values) In our case, the handler uses a template called website.info, but this one is extended immediately by another template called website.show_website_info, so it's more convenient to override this one. Here, we replace the definition list showing installed apps with a table. In order to override the handler method, we must identify the class that defines the handler, which is openerp.addons.website.controllers.main.Website in this case. We import the class to be able to inherit from it. Now we override the method and change the data passed to the response. Note that what the overridden handler returns is a Response object and not a string of HTML as the previous recipes did for the sake of brevity. 
This object contains a reference to the template to be used and the values accessible to the template, but is only evaluated at the very end of the request. In general, there are three ways to change an existing handler: If it uses a QWeb template, the simplest way of changing it is to override the template. This is the right choice for layout changes and small logic changes. QWeb templates get a context passed, which is available in the response as the field qcontext. This usually is a dictionary where you can add or remove values to suit your needs. In the preceding example, we filter the list of apps to only contain apps which have a website set. If the handler receives parameters, you could also preprocess those in order to have the overridden handler behave the way you want. There's more… As seen in the preceding section, inheritance with controllers works slightly differently than model inheritance: You actually need a reference to the base class and use Python inheritance on it. Don't forget to decorate your new handler with the @http.route decorator; Odoo uses it as a marker for which methods are exposed to the network layer. If you omit the decorator, you actually make the handler's path inaccessible. The @http.route decorator itself behaves similarly to field declarations: every value you don't set will be derived from the decorator of the function you're overriding, so we don't have to repeat values we don't want to change. After receiving a response object from the function you override, you can do a lot more than just changing the QWeb context: You can add or remove HTTP headers by manipulating response.headers. If you want to render an entirely different template, you can set response.template. To detect if a response is based on QWeb in the first place, query response.is_qweb. The resulting HTML code is available by calling response.render(). Using the RPC API One of Odoo's strengths is its interoperability, which is helped by the fact that basically any functionality is available via JSON-RPC 2.0 and XMLRPC. In this recipe, we'll explore how to use both of them from client code. This interface also enables you to integrate Odoo with any other application. Making functionality available via any of the two protocols on the server side is explained in the There's more section of this recipe. We'll query a list of installed modules from the Odoo instance, so that we could show a list as the one displayed in the previous recipe in our own application or website. 
How to do it… The following code is not meant to run within Odoo, but as simple scripts: First, we query the list of installed modules via XMLRPC: #!/usr/bin/env python2 import xmlrpclib db = 'odoo9' user = 'admin' password = 'admin' uid = xmlrpclib.ServerProxy( 'http://localhost:8069/xmlrpc/2/common') .authenticate(db, user, password, {}) odoo = xmlrpclib.ServerProxy( 'http://localhost:8069/xmlrpc/2/object') installed_modules = odoo.execute_kw( db, uid, password, 'ir.module.module', 'search_read', [[('state', '=', 'installed')], ['name']], {'context': {'lang': 'fr_FR'}}) for module in installed_modules: print module['name'] Then we do the same with JSONRPC: import json import urllib2 db = 'odoo9' user = 'admin' password = 'admin' request = urllib2.Request( 'http://localhost:8069/web/session/authenticate', json.dumps({ 'jsonrpc': '2.0', 'params': { 'db': db, 'login': user, 'password': password, }, }), {'Content-type': 'application/json'}) result = urllib2.urlopen(request).read() result = json.loads(result) session_id = result['result']['session_id'] request = urllib2.Request( 'http://localhost:8069/web/dataset/call_kw', json.dumps({ 'jsonrpc': '2.0', 'params': { 'model': 'ir.module.module', 'method': 'search_read', 'args': [ [('state', '=', 'installed')], ['name'], ], 'kwargs': {'context': {'lang': 'fr_FR'}}, }, }), { 'X-Openerp-Session-Id': session_id, 'Content-type': 'application/json', }) result = urllib2.urlopen(request).read() result = json.loads(result) for module in result['result']: print module['name'] Both code snippets will print a list of installed modules, and because they pass a context that sets the language to French, the list will be in French if there are no translations available. How it works… Both snippets call the function search_read, which is very convenient because you can specify a search domain on the model you call, pass a list of fields you want to be returned, and receive the result in one request. In older versions of Odoo, you had to call search first to receive a list of IDs and then call read to actually read the data. search_read returns a list of dictionaries, with the keys being the names of the fields requested and the values the record's data. The ID field will always be transmitted, no matter if you requested it or not. Now, we need to look at the specifics of the two protocols. XMLRPC The XMLRPC API expects a user ID and a password for every call, which is why we need to fetch this ID via the method authenticate on the path /xmlrpc/2/common. If you already know the user's ID, you can skip this step. As soon as you know the user's ID, you can call any model's method by calling execute_kw on the path /xmlrpc/2/object. This method expects the database you want to execute the function on, the user's ID and password for authentication, then the model you want to call your function on, and then the function's name. The next two mandatory parameters are a list of positional arguments to your function, and a dictionary of keyword arguments. JSONRPC Don't be distracted by the size of the code example, that's because Python doesn't have built in support for JSONRPC. As soon as you've wrapped the urllib calls in some helper functions, the example will be as concise as the XMLRPC one. As JSONRPC is stateful, the first thing we have to do is to request a session at /web/session/authenticate. This function takes the database, the user's name, and their password. 
The crucial part here is that we record the session ID Odoo created, which we pass in the header X-Openerp-Session-Id to /web/dataset/call_kw. After that, the function behaves the same way as execute_kw does in the XMLRPC example: we need to pass a model name and a function to call on it, followed by positional and keyword arguments.

There's more…

Both protocols allow you to call basically any function of your models. In case you don't want a function to be available via either interface, prepend its name with an underscore: Odoo won't expose such functions as RPC calls. Furthermore, you need to take care that your parameters, as well as the return values, are serializable for the protocol. To be sure, restrict yourself to scalar values, dictionaries, and lists. As you can do roughly the same with both protocols, it's up to you which one to use. This decision should be mainly driven by what your platform supports best. In a web context, you're generally better off with JSON, because Odoo allows JSON handlers to pass a CORS header conveniently (see the Make a path accessible from the network recipe for details). This is rather difficult with XMLRPC.

Summary

In this article, we saw how to get started with Odoo's web server architecture. We then covered routes and controllers and their authentication, how handlers consume parameters, and how to use the RPC API, namely JSON-RPC and XML-RPC.


Push your data to the Web

Packt
22 Feb 2016
27 min read
This article covers the following topics: An introduction to the Shiny app framework Creating your first Shiny app The connection between the server file and the user interface The concept of reactive programming Different types of interface layouts, widgets, and Shiny tags How to create a dynamic user interface Ways to share your Shiny applications with others How to deploy Shiny apps to the web (For more resources related to this topic, see here.) Introducing Shiny – the app framework The Shiny package delivers a powerful framework to build fully featured interactive Web applications just with R and RStudio. Basic Shiny applications typically consist of two components: ~/shinyapp |-- ui.R |-- server.R While the ui.R function represents the appearance of the user interface, the server.R function contains all the code for the execution of the app. The look of the user interface is based on the famous Twitter bootstrap framework, which makes the look and layout highly customizable and fully responsive. In fact, you only need to know R and how to use the shiny package to build a pretty web application. Also, a little knowledge of HTML, CSS, and JavaScript may help. If you want to check the general possibilities and what is possible with the Shiny package, it is advisable to take a look at the inbuilt examples. Just load the library and enter the example name: library(shiny) runExample("01_hello") As you can see, running the first example opens the Shiny app in a new window. This app creates a simple histogram plot where you can interactively change the number of bins. Further, this example allows you to inspect the corresponding ui.R and server.R code files. There are currently eleven inbuilt example apps: 01_hello 02_text 03_reactivity 04_mpg 05_sliders 06_tabsets 07_widgets 08_html 09_upload 10_download 11_timer These examples focus mainly on the user interface possibilities and elements that you can create with Shiny. Creating a new Shiny web app with RStudio RStudio offers a fast and easy way to create the basis of every new Shiny app. Just click on New Project and select the New Directory option in the newly opened window: After that, click on the Shiny Web Application field: Give your new app a name in the next step, and click on Create Project: RStudio will then open a ready-to-use Shiny app by opening a prefilled ui.R and server.R file: You can click on the now visible Run App button in the right corner of the file pane to display the prefilled example application. Creating your first Shiny application In your effort to create your first Shiny application, you should first create or consider rough sketches for your app. Questions that you might ask in this context are, What do I want to show? How do I want it to show?, and so on. Let's say we want to create an application that allows users to explore some of the variables of the mtcars dataset. The data was extracted from the 1974 Motor Trend US magazine, and comprises fuel consumption and 10 aspects of automobile design and performance for 32 automobiles (1973–74 models). Sketching the final app We want the user of the app to be able to select one out of the three variables of the dataset that gets displayed in a histogram. Furthermore, we want users to get a summary of the dataset under the main plot. So, the following figure could be a rough project sketch: Constructing the user interface for your app We will reuse the already opened ui.R file from the RStudio example, and adapt it to our needs. 
The layout of the ui.R file for your first app is controlled by nested Shiny functions and looks like the following lines: library(shiny) shinyUI(pageWithSidebar( headerPanel("My First Shiny App"), sidebarPanel( selectInput(inputId = "variable", label = "Variable:", choices = c ("Horsepower" = "hp", "Miles per Gallon" = "mpg", "Number of Carburetors" = "carb"), selected = "hp") ), mainPanel( plotOutput("carsPlot"), verbatimTextOutput ("carsSummary") ) )) Creating the server file The server file holds all the code for the execution of the application: library(shiny) library(datasets) shinyServer(function(input, output) { output$carsPlot <- renderPlot({ hist(mtcars[,input$variable], main = "Histogram of mtcars variables", xlab = input$variable) }) output$carsSummary <- renderPrint({ summary(mtcars[,input$variable]) }) }) The final application After changing the ui.R and the server.R files according to our needs, just hit the Run App button and the final app opens in a new window: As planned in the app sketch, the app offers the user a drop-down menu to choose the desired variable on the left side, and shows a histogram and data summary of the selected variable on the right side. Deconstructing the final app into its components For a better understanding of the Shiny application logic and the interplay of the two main files, ui.R and server.R, we will disassemble your first app again into its individual parts. The components of the user interface We have divided the user interface into three parts: After loading the Shiny library, the complete look of the app gets defined by the shinyUI() function. In our app sketch, we chose a sidebar look; therefore, the shinyUI function holds the argument, pageWithSidebar(): library(shiny) shinyUI(pageWithSidebar( ... The headerPanel() argument is certainly the simplest component, since usually only the title of the app will be stored in it. In our ui.R file, it is just a single line of code: library(shiny) shinyUI(pageWithSidebar( titlePanel("My First Shiny App"), ... The sidebarPanel() function defines the look of the sidebar, and most importantly, handles the input of the variables of the chosen mtcars dataset: library(shiny) shinyUI(pageWithSidebar( titlePanel("My First Shiny App"), sidebarPanel( selectInput(inputId = "variable", label = "Variable:", choices = c ("Horsepower" = "hp", "Miles per Gallon" = "mpg", "Number of Carburetors" = "carb"), selected = "hp") ), ... Finally, the mainPanel() function ensures that the output is displayed. In our case, this is the histogram and the data summary for the selected variables: library(shiny) shinyUI(pageWithSidebar( titlePanel("My First Shiny App"), sidebarPanel( selectInput(inputId = "variable", label = "Variable:", choices = c ("Horsepower" = "hp", "Miles per Gallon" = "mpg", "Number of Carburetors" = "carb"), selected = "hp") ), mainPanel( plotOutput("carsPlot"), verbatimTextOutput ("carsSummary") ) )) The server file in detail While the ui.R file defines the look of the app, the server.R file holds instructions for the execution of the R code. Again, we use our first app to deconstruct the related server.R file into its main important parts. After loading the needed libraries, datasets, and further scripts, the function, shinyServer(function(input, output) {} ), defines the server logic: library(shiny) library(datasets) shinyServer(function(input, output) { The marked lines of code that follow translate the inputs of the ui.R file into matching outputs. 
In our case, the server-side output$ object is assigned to carsPlot, which in turn was called in the mainPanel() function of the ui.R file as plotOutput(). Moreover, the render* function, in our example renderPlot(), reflects the type of output; here, of course, it is the histogram plot. Within the renderPlot() function, you can recognize the input$ object assigned to the variables that were defined in the user interface file:

library(shiny) library(datasets) shinyServer(function(input, output) { output$carsPlot <- renderPlot({ hist(mtcars[,input$variable], main = "Histogram of mtcars variables", xlab = input$variable) }) ...

In the following lines, you will see another type of render function, renderPrint(), and within the curly braces, the actual R function, summary(), with the defined input variable:

library(shiny) library(datasets) shinyServer(function(input, output) { output$carsPlot <- renderPlot({ hist(mtcars[,input$variable], main = "Histogram of mtcars variables", xlab = input$variable) }) output$carsSummary <- renderPrint({ summary(mtcars[,input$variable]) }) })

There are plenty of different render functions. The most used are as follows:

renderPlot: This creates normal plots
renderPrint: This gives printed output
renderUI: This gives HTML or Shiny tag objects
renderTable: This gives tables, data frames, and matrices
renderText: This creates character strings

Any code outside the shinyServer() function runs only once, on the first launch of the app, while all the code between the brackets and before the output functions runs as often as a user visits or refreshes the application. The code within the output functions runs every time a user changes the widget that belongs to the corresponding output.

The connection between the server and the ui file

As already inspected in our decomposed Shiny app, the input functions of the ui.R file are linked with the output functions of the server file. The following figure illustrates this connection again.

The concept of reactivity

Shiny uses a reactive programming model, and this is a big deal. By applying reactive programming, the framework is able to be fast, efficient, and robust. In short, when an input changes in the user interface, Shiny rebuilds the related output. Shiny uses three reactive objects:

Reactive source
Reactive conductor
Reactive endpoint

For simplicity, we use the formal notation of the RStudio documentation: the implementation of a reactive source is the reactive value; that of a reactive conductor is a reactive expression; and the reactive endpoint is also called the observer.

The source and endpoint structure

As taught in the previous section, the inputs defined in the ui.R file are linked to the outputs of the server.R file. For simplicity, we use the code from our first Shiny app again, along with the notation introduced above:
output$carsPlot <- renderPlot({ hist(mtcars[,input$variable], main = "Histogram of mtcars variables", xlab = input$variable) }) output$carsSummary <- renderPrint({ summary(mtcars[,input$variable]) }) ... To sum it up, this structure ensures that every time a user changes the input, the output refreshes automatically and accordingly. The purpose of the reactive conductor The reactive conductor differs from the reactive source and the endpoint is so far that this reactive type can be dependent and can have dependents. Therefore, it can be placed between the source, which can only have dependents and the endpoint, which in turn can only be dependent. The primary function of a reactive conductor is the encapsulation of heavy and difficult computations. In fact, reactive expressions are caching the results of these computations. The following graph displays a possible connection of the three reactive types: In general, reactivity raises the impression of a logically working directional system; after input, the output occurs. You get the feeling that an input pushes information to an output. But this isn't the case. In reality, it works vice versa. The output pulls the information from the input. And this all works due to sophisticated server logic. The input sends a callback to the server, which in turn informs the output that pulls the needed value from the input and shows the result to the user. But of course, for a user, this all feels like an instant updating of any input changes, and overall, like a responsive app's behavior. Of course, we have just touched upon the main aspects of reactivity, but now you know what's really going on under the hood of Shiny. Discovering the scope of the Shiny user interface After you know how to build a simple Shiny application, as well as how reactivity works, let us take a look at the next step: the various resources to create a custom user interface. Furthermore, there are nearly endless possibilities to shape the look and feel of the layout. As already mentioned, the entire HTML, CSS, and JavaScript logic and functions of the layout options are based on the highly flexible bootstrap framework. And, of course, everything is responsive by default, which makes it possible for the final application layout to adapt to the screen of any device. Exploring the Shiny interface layouts Currently, there are four common shinyUI () page layouts: pageWithSidebar() fluidPage() navbarPage() fixedPage() These page layouts can be, in turn, structured with different functions for a custom inner arrangement structure of the page layout. In the following sections, we are introducing the most useful inner layout functions. As an example, we will use our first Shiny application again. The sidebar layout The sidebar layout, where the sidebarPanel() function is used as the input area, and the mainPanel() function as the output, just like in our first Shiny app. The sidebar layout uses the pageWithSidebar() function: library(shiny) shinyUI(pageWithSidebar( headerPanel("The Sidebar Layout"), sidebarPanel( selectInput(inputId = "variable", label = "This is the sidebarPanel", choices = c("Horsepower" = "hp", "Miles per Gallon" = "mpg", "Number of Carburetors" = "carb"), selected = "hp") ), mainPanel( tags$h2("This is the mainPanel"), plotOutput("carsPlot"), verbatimTextOutput("carsSummary") ) )) When you only change the first three functions, you can create exactly the same look as the application with the fluidPage() layout. 
This is the sidebar layout with the fluidPage() function: library(shiny) shinyUI(fluidPage( titlePanel("The Sidebar Layout"), sidebarLayout( sidebarPanel( selectInput(inputId = "variable", label = "This is the sidebarPanel", choices = c("Horsepower" = "hp", "Miles per Gallon" = "mpg", "Number of Carburetors" = "carb"), selected = "hp") ), mainPanel( tags$h2("This is the mainPanel"), plotOutput("carsPlot"), verbatimTextOutput("carsSummary") ) ) ))   The grid layout The grid layout is where rows are created with the fluidRow() function. The input and output are made within free customizable columns. Naturally, a maximum of 12 columns from the bootstrap grid system must be respected. This is the grid layout with the fluidPage () function and a 4-8 grid: library(shiny) shinyUI(fluidPage( titlePanel("The Grid Layout"), fluidRow( column(4, selectInput(inputId = "variable", label = "Four-column input area", choices = c("Horsepower" = "hp", "Miles per Gallon" = "mpg", "Number of Carburetors" = "carb"), selected = "hp") ), column(8, tags$h3("Eight-column output area"), plotOutput("carsPlot"), verbatimTextOutput("carsSummary") ) ) )) As you can see from inspecting the previous ui.R file, the width of the columns is defined within the fluidRow() function, and the sum of these two columns adds up to 12. Since the allocation of the columns is completely flexible, you can also create something like the grid layout with the fluidPage() function and a 4-4-4 grid: library(shiny) shinyUI(fluidPage( titlePanel("The Grid Layout"), fluidRow( column(4, selectInput(inputId = "variable", label = "Four-column input area", choices = c("Horsepower" = "hp", "Miles per Gallon" = "mpg", "Number of Carburetors" = "carb"), selected = "hp") ), column(4, tags$h5("Four-column output area"), plotOutput("carsPlot") ), column(4, tags$h5("Another four-column output area"), verbatimTextOutput("carsSummary") ) ) )) The tabset panel layout The tabsetPanel() function can be built into the mainPanel() function of the aforementioned sidebar layout page. By applying this function, you can integrate several tabbed outputs into one view. This is the tabset layout with the fluidPage() function and three tab panels: library(shiny) shinyUI(fluidPage( titlePanel("The Tabset Layout"), sidebarLayout( sidebarPanel( selectInput(inputId = "variable", label = "Select a variable", choices = c("Horsepower" = "hp", "Miles per Gallon" = "mpg", "Number of Carburetors" = "carb"), selected = "hp") ), mainPanel( tabsetPanel( tabPanel("Plot", plotOutput("carsPlot")), tabPanel("Summary", verbatimTextOutput("carsSummary")), tabPanel("Raw Data", dataTableOutput("tableData")) ) ) ) )) After changing the code to include the tabsetPanel() function, the three tabs with the tabPanel() function display the respective output. With the help of this layout, you are no longer dependent on representing several outputs among themselves. Instead, you can display each output in its own tab, while the sidebar does not change. The position of the tabs is flexible and can be assigned to be above, below, right, and left. For example, in the following code file detail, the position of the tabsetPanel() function was assigned as follows: ... mainPanel( tabsetPanel(position = "below", tabPanel("Plot", plotOutput("carsPlot")), tabPanel("Summary", verbatimTextOutput("carsSummary")), tabPanel("Raw Data", tableOutput("tableData")) ) ) ... 
The navlist panel layout The navlistPanel() function is similar to the tabsetPanel() function, and can be seen as an alternative if you need to integrate a large number of tabs. The navlistPanel() function also uses the tabPanel() function to include outputs: library(shiny) shinyUI(fluidPage( titlePanel("The Navlist Layout"), navlistPanel( "Discovering The Dataset", tabPanel("Plot", plotOutput("carsPlot")), tabPanel("Summary", verbatimTextOutput("carsSummary")), tabPanel("Another Plot", plotOutput("barPlot")), tabPanel("Even A Third Plot", plotOutput("thirdPlot")), "More Information", tabPanel("Raw Data", tableOutput("tableData")), tabPanel("More Datatables", tableOutput("moreData")) ) )) The navbar page as the page layout In the previous examples, we have used the page layouts, fluidPage() and pageWithSidebar(), in the first line. But, especially when you want to create an application with a variety of tabs, sidebars, and various input and output areas, it is recommended that you use the navbarPage() layout. This function makes use of the standard top navigation of the bootstrap framework: library(shiny) shinyUI(navbarPage("The Navbar Page Layout", tabPanel("Data Analysis", sidebarPanel( selectInput(inputId = "variable", label = "Select a variable", choices = c("Horsepower" = "hp", "Miles per Gallon" = "mpg", "Number of Carburetors" = "carb"), selected = "hp") ), mainPanel( plotOutput("carsPlot"), verbatimTextOutput("carsSummary") ) ), tabPanel("Calculations" … ), tabPanel("Some Notes" … ) )) Adding widgets to your application After inspecting the most important page layouts in detail, we now look at the different interface input and output elements. By adding these widgets, panels, and other interface elements to an application, we can further customize each page layout. Shiny input elements Already, in our first Shiny application, we got to know a typical Shiny input element: the selection box widget. But, of course, there are a lot more widgets with different types of uses. All widgets can have several arguments; the minimum setup is to assign an inputId, which instructs the input slot to communicate with the server file, and a label, which is displayed alongside the widget. Each widget can also have its own specific arguments. As an example, we are looking at the code of a slider widget. In the previous screenshot, there are two versions of a slider; we took the range slider for inspection: sliderInput(inputId = "sliderExample", label = "Slider range", min = 0, max = 100, value = c(25, 75)) Besides the mandatory arguments, inputId and label, three more values have been added to the slider widget. The min and max arguments specify the minimum and maximum values that can be selected. In our example, these are 0 and 100. A numeric vector was assigned to the value argument, and this creates a double-ended range slider. This vector must logically be within the set minimum and maximum values. Currently, there are more than twenty different input widgets, which in turn are all individually configurable by assigning to them their own set of arguments. A brief overview of the output elements As we have seen, the output elements in the ui.R file are connected to the rendering functions in the server file. The most commonly used output elements are: htmlOutput imageOutput plotOutput tableOutput textOutput verbatimTextOutput downloadButton Due to their unambiguous naming, the purpose of these elements should be clear. 
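To see how such widget values arrive on the server side, the following self-contained sketch (written in the single-file style for brevity) prints both ends of the range slider shown above; the output name sliderValues and the surrounding page are assumptions made for this illustration:

# Sketch: reading a double-ended range slider on the server (assumed output name)
library(shiny)

ui <- fluidPage(
  sliderInput(inputId = "sliderExample", label = "Slider range",
              min = 0, max = 100, value = c(25, 75)),
  verbatimTextOutput("sliderValues")
)

server <- function(input, output) {
  output$sliderValues <- renderPrint({
    # input$sliderExample is a numeric vector of length two
    cat("Lower bound:", input$sliderExample[1], "\n")
    cat("Upper bound:", input$sliderExample[2], "\n")
  })
}

shinyApp(ui = ui, server = server)

Dragging either handle triggers the renderPrint() call, so the printed bounds always reflect the current selection.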
Individualizing your app even further with Shiny tags Although you don't need to know HTML to create stunning Shiny applications, you have the option to create highly customized apps with the usage of raw HTML or so-called Shiny tags. To add raw HTML, you can use the HTML() function. We will focus on Shiny tags in the following list. Currently, there are over 100 different Shiny tag objects, which can be used to add text styling, colors, different headers, visual and audio elements, lists, and many more things. You can use these tags by writing tags$tagname. Following is a brief list of useful tags: tags$h1: This is the first-level header; of course, you can also use the known h1-h6 tags$hr: This makes a horizontal line, also known as a thematic break tags$br: This makes a line break, a popular way to add some space tags$strong: This makes the text bold tags$div: This makes a division of text with a uniform style tags$a: This links to a webpage tags$iframe: This makes an inline frame for embedding possibilities The following ui.R file and corresponding screenshot show the usage of Shiny tags by an example: shinyUI(fluidPage( fluidRow( column(6, tags$h3("Customize your app with Shiny tags!"), tags$hr(), tags$a(href = "http://www.rstudio.com", "Click me"), tags$hr() ), column(6, tags$br(), tags$em("Look - the R project logo"), tags$br(), tags$img(src = "http://www.r-project.org/Rlogo.png") ) ), fluidRow( column(6, tags$strong("We can even add a video"), tags$video(src = "video.mp4", type = "video/mp4", autoplay = NA, controls = NA) ), column(6, tags$br(), tags$ol( tags$li("One"), tags$li("Two"), tags$li("Three")) ) ) )) Creating dynamic user interface elements We know how to build completely custom user interfaces with all the bells and whistles. But all the introduced types of interface elements are fixed and static. However, if you need to create dynamic interface elements, Shiny offers three ways to achieve it: The conditionalPanel() function The renderUI() function The use of directly injected JavaScript code In the following section, we only show how to use the first two ways, because firstly, they are built into the Shiny package, and secondly, the JavaScript method is indicated as experimental. Using conditionalPanel The conditionalPanel() function allows you to show or hide interface elements dynamically, and is set in the ui.R file. The dynamic behavior of this function is achieved by JavaScript expressions, but as usual in the Shiny package, all you need to know is R programming. The following example application shows how this function works in the ui.R file: library(shiny) shinyUI(fluidPage( titlePanel("Dynamic Interface With Conditional Panels"), column(4, wellPanel( sliderInput(inputId = "n", label = "Number of points:", min = 10, max = 200, value = 50, step = 10) )), column(5, "The plot below will not be displayed when the slider value", "is less than 50.", conditionalPanel("input.n >= 50", plotOutput("scatterPlot", height = 300) ) ) )) The following code shows how this function works in the related server.R file: library(shiny) shinyServer(function(input, output) { output$scatterPlot <- renderPlot({ x <- rnorm(input$n) y <- rnorm(input$n) plot(x, y) }) }) The code for this example application was taken from the Shiny gallery of RStudio (http://shiny.rstudio.com/gallery/conditionalpanel-demo.html). As you can see in both code files, the condition on input.n is the linchpin for the dynamic behavior of the example app. 
In the conditionalPanel() function, it is defined that inputId="n" must have a value of 50 or higher, while the input and output of the plot will work as already defined. Taking advantage of the renderUI function The renderUI() function is hooked, contrary to the previous model, to the server file to create a dynamic user interface. We have already introduced different render output functions in this article. The following example code shows the basic functionality using the ui.R file: # Partial example taken from the Shiny documentation numericInput("lat", "Latitude"), numericInput("long", "Longitude"), uiOutput("cityControls") The following example code shows the basic functionality of the Related sever.R file: # Partial example output$cityControls <- renderUI({ cities <- getNearestCities(input$lat, input$long) checkboxGroupInput("cities", "Choose Cities", cities) }) As described, the dynamic of this method gets defined in the renderUI() process as an output, which then gets displayed through the uiOutput() function in the ui.R file. Sharing your Shiny application with others Typically, you create a Shiny application not only for yourself, but also for other users. There are a two main ways to distribute your app; either you let users download your application, or you deploy it on the web. Offering a download of your Shiny app By offering the option to download your final Shiny application, other users can run your app locally. Actually, there are four ways to deliver your app this way. No matter which way you choose, it is important that the user has installed R and the Shiny package on his/her computer. Gist Gist is a public code sharing pasteboard from GitHub. To share your app this way, it is important that both the ui.R file and the server.R file are in the same Gist and have been named correctly. Take a look at the following screenshot: There are two options to run apps via Gist. First, just enter runGist("Gist_URL") in the console of RStudio; or second, just use the Gist ID and place it in the shiny::runGist("Gist_ID") function. Gist is a very easy way to share your application, but you need to keep in mind that your code is published on a third-party server. GitHub The next way to enable users to download your app is through a GitHub repository: To run an application from GitHub, you need to enter the command, shiny::runGitHub ("Repository_Name", "GitHub_Account_Name"), in the console: Zip file There are two ways to share a Shiny application by zip file. You can either let the user download the zip file over the web, or you can share it via email, USB stick, memory card, or any other such device. To download a zip file via the Web, you need to type runUrl ("Zip_File_URL") in the console: Package Certainly, a much more labor-intensive but also publically effective way is to create a complete R package for your Shiny application. This especially makes sense if you have built an extensive application that may help many other users. Another advantage is the fact that you can also publish your application on CRAN. Later in the book, we will show you how to create an R package. Deploying your app to the web After showing you the ways users can download your app and run it on their local machines, we will now check the options to deploy Shiny apps to the web. Shinyapps.io http://www.shinyapps.io/ is a Shiny app- hosting service by RStudio. 
There is a free-to- use account package, but it is limited to a maximum of five applications, 25 so-called active hours, and the apps are branded with the RStudio logo. Nevertheless, this service is a great way to publish one's own applications quickly and easily to the web. To use http://www.shinyapps.io/ with RStudio, a few R packages and some additional operating system software needs to be installed: RTools (If you use Windows) GCC (If you use Linux) XCode Command Line Tools (If you use Mac OS X) The devtools R package The shinyapps package Since the shinyapps package is not on CRAN, you need to install it via GitHub by using the devtools package: if (!require("devtools")) install.packages("devtools") devtools::install_github("rstudio/shinyapps") library(shinyapps) When everything that is needed is installed ,you are ready to publish your Shiny apps directly from the RStudio IDE. Just click on the Publish icon, and in the new window you will need to log in to your http://www.shinyapps.io/ account once, if you are using it for the first time. All other times, you can directly create a new Shiny app or update an existing app: After clicking on Publish, a new tab called Deploy opens in the console pane, showing you the progress of the deployment process. If there is something set incorrectly, you can use the deployment log to find the error: When the deployment is successful, your app will be publically reachable with its own web address on http://www.shinyapps.io/.   Setting up a self-hosted Shiny server There are two editions of the Shiny Server software: an open source edition and the professional edition. The open source edition can be downloaded for free and you can use it on your own server. The Professional edition offers a lot more features and support by RStudio, but is also priced accordingly. Diving into the Shiny ecosystem Since the Shiny framework is such an awesome and powerful tool, a lot of people, and of course, the creators of RStudio and Shiny have built several packages around it that are enormously extending the existing functionalities of Shiny. These almost infinite possibilities of technical and visual individualization, which are possible by deeply checking the Shiny ecosystem, would certainly go beyond the scope of this article. Therefore, we are presenting only a few important directions to give a first impression. Creating apps with more files In this article, you have learned how to build a Shiny app consisting of two files: the server.R and the ui.R. To include every aspect, we first want to point out that it is also possible to create a single file Shiny app. To do so, create a file called app.R. In this file, you can include both the server.R and the ui.R file. Furthermore, you can include global variables, data, and more. If you build larger Shiny apps with multiple functions, datasets, options, and more, it could be very confusing if you do all of it in just one file. Therefore, single-file Shiny apps are a good idea for simple and small exhibition apps with a minimal setup. Especially for large Shiny apps, it is recommended that you outsource extensive custom functions, datasets, images, and more into your own files, but put them into the same directory as the app. An example file setup could look like this: ~/shinyapp |-- ui.R |-- server.R |-- helper.R |-- data |-- www |-- js |-- etc   To access the helper file, you just need to add source("helpers.R") into the code of your server.R file. The same logic applies to any other R files. 
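As a rough sketch of how this separation can look in practice, the helper file below defines a plotting function that the server file then sources; the file name helper.R matches the directory listing above, while the function makeCarsPlot() is invented here purely for illustration:

# helper.R -- hypothetical helper kept next to server.R
makeCarsPlot <- function(data, variable) {
  hist(data[, variable],
       main = paste("Histogram of", variable),
       xlab = variable)
}

# server.R -- sources the helper at the top and uses it inside a render function
library(shiny)
source("helper.R")

shinyServer(function(input, output) {
  output$carsPlot <- renderPlot({
    makeCarsPlot(mtcars, input$variable)
  })
})

Keeping the plotting logic in its own file keeps server.R short and makes the helper reusable across several render functions.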
If you want to read in some data from your data folder, you store it in a variable that is also in the head of your server.R file, like this: myData <- readRDS("data/myDataset.rds") Expanding the Shiny package As said earlier, you can expand the functionalities of Shiny with several add-on packages. There are currently ten packages available on CRAN with different inbuilt functions to add some extra magic to your Shiny app. shinyAce: This package makes available Ace editor bindings to enable a rich text-editing environment within Shiny. shinybootstrap2: The latest Shiny package uses bootstrap 3; so, if you built your app with bootstrap 2 features, you need to use this package. shinyBS: This package adds the additional features of the original Twitter Bootstrap theme, such as tooltips, modals, and others, to Shiny. shinydashboard: This package comes from the folks at RStudio and enables the user to create stunning and multifunctional dashboards on top of Shiny. shinyFiles: This provides functionality for client-side navigation of the server-side file system in Shiny apps. shinyjs: By using this package, you can perform common JavaScript operations in Shiny applications without having to know any JavaScript. shinyRGL: This package provides Shiny wrappers for the RGL package. This package exposes RGL's ability to export WebGL visualization in a shiny-friendly format. shinystan: This package is, in fact, not a real add-on. Shinystan is a fantastic full-blown Shiny application to give users a graphical interface for Markov chain Monte Carlo simulations. shinythemes: This package gives you the option of changing the whole look and feel of your application by using different inbuilt bootstrap themes. shinyTree: This exposes bindings to jsTree, a JavaScript library that supports interactive trees, to enable rich, editable trees in Shiny. Of course, you can find a bunch of other packages with similar or even more functionalities, extensions, and also comprehensive Shiny apps on GitHub. Summary To learn more about Shiny, the following books published by Packt Publishing (https://www.packtpub.com/) are recommended: Learning Shiny (https://www.packtpub.com/application-development/learning-shiny) Mastering Machine Learning with R (https://www.packtpub.com/big-data-and-business-intelligence/mastering-machine-learning-r) Mastering Data Analysis with R (https://www.packtpub.com/big-data-and-business-intelligence/mastering-data-analysis-r)

Machine learning and Python – the Dream Team

Packt
16 Feb 2016
In this article we will be learning more about machine learning and Python. Machine learning (ML) teaches machines how to carry out tasks by themselves. It is that simple. The complexity comes with the details, and that is most likely the reason you are reading this article. (For more resources related to this topic, see here.) Machine learning and Python – the dream team The goal of machine learning is to teach machines (software) to carry out tasks by providing them a couple of examples (how to do or not do a task). Let us assume that each morning when you turn on your computer, you perform the same task of moving e-mails around so that only those e-mails belonging to a particular topic end up in the same folder. After some time, you feel bored and think of automating this chore. One way would be to start analyzing your brain and writing down all the rules your brain processes while you are shuffling your e-mails. However, this will be quite cumbersome and always imperfect. While you will miss some rules, you will over-specify others. A better and more future-proof way would be to automate this process by choosing a set of e-mail meta information and body/folder name pairs and let an algorithm come up with the best rule set. The pairs would be your training data, and the resulting rule set (also called model) could then be applied to future e-mails, which we have not yet seen. This is machine learning in its simplest form. Of course, machine learning (often also referred to as data mining or predictive analysis) is not a brand new field in itself. Quite the contrary, its success over the recent years can be attributed to the pragmatic way of using rock-solid techniques and insights from other successful fields; for example, statistics. There, the purpose is for us humans to get insights into the data by learning more about the underlying patterns and relationships. As you read more and more about successful applications of machine learning (you have checked out kaggle.com already, haven't you?), you will see that applied statistics is a common field among machine learning experts. As you will see later, the process of coming up with a decent ML approach is never a waterfall-like process. Instead, you will see yourself going back and forth in your analysis, trying out different versions of your input data on diverse sets of ML algorithms. It is this explorative nature that lends itself perfectly to Python. Being an interpreted high-level programming language, it may seem that Python was designed specifically for the process of trying out different things. What is more, it does this very fast. Sure enough, it is slower than C or similar statically typed programming languages; nevertheless, with a myriad of easy-to-use libraries that are often written in C, you don't have to sacrifice speed for agility. Summary In this is article we learned about machine learning and its goals. To learn more please refer to the following books: Building Machine Learning Systems with Python - Second Edition (https://www.packtpub.com/big-data-and-business-intelligence/building-machine-learning-systems-python-second-edition) Expert Python Programming (https://www.packtpub.com/application-development/expert-python-programming) Resources for Article:   Further resources on this subject: Python Design Patterns in Depth – The Observer Pattern [article] Python Design Patterns in Depth: The Factory Pattern [article] Customizing IPython [article]

Getting Started with RStudio

Packt
16 Feb 2016
The number of users adopting the R programming language has been increasing faster and faster in the last few years. The functions of the R console are limited when it comes to managing a lot of files, or when we want to work with version control systems. This is the reason, in combination with the increasing adoption rate, why a need for a better development environment arose. To serve this need, a team of R fans began to develop an integrated development environment (IDE) to make it easier to work on bigger projects and to collaborate with others. This IDE has the name, RStudio. In this article, we will see how to work with RStudio and projects (For more resources related to this topic, see here.) Working with RStudio and projects In the times before RStudio, it was very hard to manage bigger projects with R in the R console, as you had to create all the folder structures on your own. When you work with projects or open a project, RStudio will instantly take several actions. For example, it will start a new and clean R session, it will source the .Rprofile file in the project's main directory, and it will set the current working directory to the project directory. So, you have a complete working environment individually for every project. RStudio will even adjust its own settings, such as active tabs, splitter positions, and so on, to where they were when the project was closed. But just because you can create projects with RStudio easily, it does not mean that you should create a project for every single time that you write R code. For example, if you just want to do a small analysis, we would recommend that you create a project where you save all your smaller scripts. Creating a project with RStudio RStudio offers you an easy way to create projects. Just navigate to File | New Project and you will see a popup window as with the options shown in the following screenshot: These options let you decide from where you want to create your project. So, if you want to start it from scratch and create a new directory, associate your new project to an existing one, or if you want to create a project from a version control repository, you can avail of the respective options. For now, we will focus on creating a new directory. The following screenshot shows you the next options available: Locating your project A very important question you have to ask yourself when creating a new project is where you want to save it? There are several options and details you have to pay attention to especially when it comes to collaboration and different people working on the same project. You can save your project locally, on a cloud storage or with the help of a revision control system such as Git. Creating your first project To begin your first project, choose the New Directory option we described before and create an empty project. Then, choose a name for the directory and the location that you want to save it in. You should create a projects folder on your Dropbox. The first project will be a small data analysis based on a dataset that was extracted from the 1974 issue of the Motor Trend US magazine. It comprises fuel consumption and ten aspects of automobile design and performance, such as the weight or number of cylinders for 32 automobiles, and is included in the base R package. So, we do not have to install a separate package to work with this dataset, as it is automatically loaded when you start R: As you can see, we left the Use packrat with this project option unchecked. 
Packrat is a dependency management tool that makes your R code more isolated, portable, and reproducible by giving your project its own privately managed package library. This is especially important when you want to create projects in an organizational context where the code has to run on various computer systems, and has to be usable for a lot of different users. This first project will just run locally and will not focus on a specific combination of package versions. Organizing your folders RStudio creates an empty directory for you that includes just the file, Motor-Car-Trend-Analysis.Rproj. This file will store all the information on your project that RStudio will need for loading. But to stay organized, we have to create some folders in the directory. Create the following folders: data: This includes all the data that we need for our analysis code: This includes all the code files for cleaning up data, generating plots, and so on plots: This includes all graphical outputs reports: This comprises all the reports that we create from our dataset Saving the data The Motor Trend Car Road Tests dataset is part of the dataset package, which is one of the preinstalled packages in R. But, we will save the data in a CSV file in our data folder, after extracting the data from the mtcars variable, to make sure our analysis is reproducible. Put the following line of code in a new R script and save it as data.R in the code folder: #write data into csv file write.csv(mtcars, file = "data/cars.csv", row.names=FALSE) Analyzing the data The analysis script will first have to load the data from the CSV file with the following line: cars_data <- read.csv(file = "data/cars.csv", header = TRUE, sep = ",") Summary To learn more about RStudio, the following books published by Packt Publishing (https://www.packtpub.com/) are recommended: Mastering Machine Learning with R (https://www.packtpub.com/big-data-and-business-intelligence/mastering-machine-learning-r) R Data Analysis Cookbook (https://www.packtpub.com/big-data-and-business-intelligence/r-data-analysis-cookbook) Mastering Data Analysis with R (https://www.packtpub.com/big-data-and-business-intelligence/mastering-data-analysis-r) Resources for Article: Further resources on this subject: RefresheR [article] Deep learning in R [article] Aspects of Data Manipulation in R [article]

Building Your Application

Packt
10 Feb 2016
"Measuring programming progress by lines of code is like measuring aircraft building progress by weight."                                                                --Bill Gates In this article, by Tarun Arora, the author of the book Microsoft Team Foundation Server 2015 Cookbook, provides you information about: Configuring TFBuild Agent, Pool, and Queues Setting up a TFBuild Agent using an unattended installation (For more resources related to this topic, see here.) As a developer, compiling code and running unit tests gives you an assurance that your code changes haven't had an impact on the existing codebase. Integrating your code changes into the source control repository enables other users to validate their changes with yours. As a best practice, Teams integrate changes into the shared repository several times a day to reduce the risk of introducing breaking changes or worse, overwriting each other's. Continuous integration (CI) is a development practice that requires developers to integrate code into a shared repository several times a day. Each check-in is verified by an automated build, allowing Teams to detect problems early. The automated build that runs as part of the CI process is often referred to as the CI build. There isn't a clear definition of what the CI build should do, but at the very minimum, it is expected to compile code and run unit tests. Running the CI build on a non-developer remote workspace helps identify the dependencies that may otherwise go unnoticed into the release process. We can talk endlessly about the benefits of CI; the key here is that it enables you to have potentially deployable software at all times. Deployable software is the most tangible asset to customers. Moving from concept to application, in this article, you'll learn how to leverage the build tooling in TFS to set up a quality-focused CI process. But first, let's have a little introduction to the build system in TFS. The following image illustrates the three generations of build systems in TFS: TFS has gone through three generations of build systems. The very first was MSBuild using XML for configuration; the next one was XAML using Windows Workflow Foundation for configuration, and now, there's TFBuild using JSON for configuration. The XAML-based build system will continue to be supported in TFS 2015. No automated migration path is available from XAML build to TFBuild. This is generally because of the difference in the architecture between the two build systems. The new build system in TFS is called Team Foundation Build (TFBuild). It is an extensible task-based execution system with a rich web interface that allows authoring, queuing, and monitoring builds. TFBuild is fully cross platform with the underlying build agents that are capable of running natively on both Windows and non-Windows platforms. TFBuild provides out-of-the-box integration with Centralized Version Control such as TFVC and Distributed Version Controls such as Git and GitHub. TFBuild supports building .NET, Java, Android, and iOS applications. All the recipes in this article are based on TFBuild. TFBuild is a task orchestrator that allows you to run any build engine, such as Ant, CMake, Gradle, Gulp, Grunt, Maven, MSBuild, Visual Studio, Xamarin, XCode, and so on. TFBuild supports work item integration, publishing drops, and publishing test execution results into the TFS that is independent of the build engine that you choose. The build agents are xCopyable and do not require any installation. 
The agents are auto-updating in nature; there's no need to update every agent in your infrastructure: TFBuild offers a rich web-based interface. It does not require Visual Studio to author or modify a build definition. From simple to complex, all build definitions can easily be created in the web portal. The web interface is accessible from any device and any platform: The build definition can be authored from the web portal directly A build definition is a collection of tasks. A task is simply a build step. Build definition can be composed by dragging and dropping tasks. Each task supports Enabled, Continue on error, and Always run flags making it easier to manage build definitions as the task list grows: The build system supports invoking PowerShell, batch, command line, and shell scripts. All out-of-the-box tasks are open source. If a task does not satisfy your requirements, you can download the task from GitHub at https://github.com/Microsoft/vso-agent-tasks and customize it. If you can't find a task, you can easily create one. You'll learn more about custom tasks in this article. Changes to build definitions can be saved as drafts. Build definitions maintain a history of all changes in the History tab. A side-by-side comparison of the changes is also possible. Comments entered when changing the build definition show up in the change history: Build definitions can be saved as templates. This helps standardize the use of certain tasks across new build definitions: An existing build definition can be saved as a template Multiple triggers can be set for the same build, including CI triggers and multiple scheduled triggers: Rule-based retention policies support the setting up of multiple rules. Retention can be specified by "days" or "number" of the builds: The build output logs are displayed in web portal in real time. The build log can be accessed from the console even after the build gets completed: The build reports have been revamped to offer more visibility into the build execution, and among other things, the test results can now directly be accessed from the web interface. The .trx file does not need to be downloaded into Visual Studio to view the test results: The old build system had restrictions on one Team Project Collection per build controller and one controller per build machine. TFBuild removes this restriction and supports the reuse of queues across multiple Team Project Collections. The following image illustrates the architecture of the new build system: In the preceding diagram, we observe the following: Multiple agents can be configured on one machine Agents from across different machines can be grouped into a pool Each pool can have only one queue One queue can be used across multiple Team Project Collections To demonstrate the capabilities of TFBuild, we'll use the FabrikamTFVC and FabrikamGit Team Projects. Configuring TFBuild Agent, Pool, and Queues In this recipe, you'll learn how to configure agents and create pools and queues. You'll also learn how a queue can be used across multiple Team Project Collections. Getting ready Scenario: At Fabrikam, the FabrikamTFVC and FabrikamGit Team Projects need their own build queues. The FabrikamTFVC Teams build process can be executed on a Windows Server. The FabrikamGit Team build process needs both Windows and OS X. The Teams want to set up three build agents on a Windows Server; one build agent on an OS X machine. 
The Teams want to group two Windows Agents into a Windows Pool for FabrikamTFVC Team and group one Windows and one Mac Agent into another pool for the FabrikamGit Team: Permission: To configure a build agent, you should be in the Build Administrators Group. The prerequisites for setting up the build agent on a Windows-based machine are as follows: The build agent should have a supporting version of Windows. The list of supported versions is listed at https://msdn.microsoft.com/en-us/Library/vs/alm/TFS/administer/requirements#Operatingsystems. The build agent should have Visual Studio 2013 or 2015. The build agent should have PowerShell 3 or a newer version. A build agent is configured for your TFS as part of the server installation process if you leave the Configure the build service to start automatically option selected: For the purposes of this recipe, we'll configure the agents from scratch. Delete the default pool or any other pool you have by navigating to the Agent pools option in the TFS Administration Console http://tfs2015:8080/tfs/_admin/_AgentPool: How to do it Log into the Windows machine that you desire to set the agents upon. Navigate to the Agent pools in the TFS Administration Console by browsing to http://tfs2015:8080/tfs/_admin/_AgentPool. Click on New Pool, enter the pool name as Pool 1, and uncheck Auto-Provision Queue in Project Collections: Click on the Download agent icon. Copy the downloaded folder into E: and unzip it into E:Win-A1. You can use any drive; however, it is recommended to use the non-operating system drive: Run the PowerShell console as an administrator and change the current path in PowerShell to the location of the agent in this case E:Win-A1. Call the ConfigureAgent.ps1 script in the PowerShell console and click on Enter. This will launch the Build Agent Configuration utility: Enter the configuration details as illustrated in the following screenshot: It is recommended to install the build agent as a service; however, you have an option to run the agent as an interactive process. This is great when you want to debug a build or want to temporarily use a machine as a build agent. The configuration process creates a JSON settings file; it creates the working and diagnostics folders: Refresh the Agent pools page in the TFS Administration Console. The newly configured agent shows up under Pool 1: Repeat steps 2 to 5 to configure Win-A2 in Pool 1. Repeat steps 1 to 5 to configure Win-A3 in Pool 2. It is worth highlighting that each agent runs from its individual folder: Now, log into the Mac machine and launch terminal: Install the agent installer globally by running the commands illustrated here. You will be required to enter the machine password to authorize the install: This will download the agent in the user profile, shown as follows: The summary of actions performed when the agent is downloaded Run the following command to install the agent installer globally for the user profile: Running the following command will create a new directory called osx-A1 for the agent; create the agent in the directory: The agent installer has been copied from the user profile into the agent directory, shown as follows: Pass the following illustrated parameters to configure the agent: This completes the configuration of the xPlatform agent on the Mac. Refresh the Agent pools page in the TFS Administration Console to see the agent appear in Pool 2: The build agent has been configured at the Team Foundation Server level. 
In order to use the build agent for a Team Project Collection, a mapping between the build agent and Team Project Collection needs to be established. This is done by creating queues. To configure queues, navigate to the Collection Administration Console by browsing to http://tfs2015:8080/tfs/DefaultCollection/_admin/_BuildQueue. From the Build tab, click on New queue; this dialog allows you to reference the pool as a queue: Map Pool 1 as Queue 1 and Pool 2 as Queue 2 as shown here: The TFBuild Agent, Pools, and Queues are now ready to use. The green bar before the agent name and queue in the administration console indicates that the agent and queues are online. How it works... To test the setup, create a new build definition by navigating to the FabrikamTFVC Team Project Build hub by browsing to http://tfs2015:8080/tfs/DefaultCollection/FabrikamTFVC/_build. Click on the Add a new build definition icon. In the General tab, you'll see that the queues show up under the Queue dropdown menu. This confirms that the queues have been correctly configured and are available for selection in the build definition: Pools can be used across multiple Team Project Collections. As illustrated in the following screenshot, in Team Project Collection 2, clicking on the New queue... shows that the existing pools are already mapped in the default collection: Setting up a TFBuild Agent using an unattended installation The new build framework allows the unattended setup of build agents by injecting a set of parameter values via script. This technique can be used to spin up new agents to be attached into an existing agent pool. In this recipe, you'll learn how to configure and unconfigure a build agent via script. Getting ready Scenario: The FabrikamTFVC Team wants the ability to install, configure, and unconfigure a build agent directly via script without having to perform this operation using the Team Portal. Permission: To configure a build agent, you should be in the Build Administrators Group. Download the build agent as discussed in the earlier recipe Configuring TFBuild Agent, Pool, and Queues. Copy the folder to E:Agent. The script refers to this Agent folder. How to do it... Launch PowerShell in the elevated mode and execute the following command: .AgentVsoAgent.exe /Configure /RunningAsService /ServerUrl:"http://tfs2015:8080/tfs" /WindowsServiceLogonAccount:svc_build /WindowsServiceLogonPassword:xxxxx /Name:WinA-10 /PoolName:"Pool 1" /WorkFolder:"E:Agent_work" /StartMode:Automatic Replace the value of the username and password accordingly. Executing the script will result in the following output: The script installs an agent by the name WinA-10 as Windows Service running as svc_build. The agent is added to Pool 1: To unconfigure WinA-10, run the following command in an elevated PowerShell prompt: .AgentVsoAgent.exe /Unconfigure "vsoagent.tfs2015.WinA-10" To unconfigure, script needs to be executed from outside the scope of the Agent folder. Running the script from within the Agent folder scope will result in an error message. How it works... The new build agent natively allows configuration via script. A new capability called Personal Access Token (PAT) is due for release in the future updates of TFS 2015. PAT allows you to generate a personal OAuth token for a specific scope; it replaces the need to key in passwords into configuration files. Summary In this article, we have looked at configuring TFBuild Agent, Pool, and Queues and setting up a TFBuild Agent using an unattended installation. 
Resources for Article: Further resources on this subject: Overview of Process Management in Microsoft Visio 2013 [article] Introduction to the Raspberry Pi's Architecture and Setup [article] Implementing Microsoft Dynamics AX [article]

Using Cloud Applications and Containers

Xavier Bruhiere
10 Nov 2015
We can find a certain comfort while developing an application on our local computer. We debug logs in real time. We know the exact location of everything, for we probably started it by ourselves. Make it work, make it right, make it fast - Kent Beck Optimization is the root of all devil - Donald Knuth So hey, we hack around until interesting results pop up (ok that's a bit exaggerated). The point is, when hitting the production server our code will sail a much different sea. And a much more hostile one. So, how to connect to third party resources ? How do you get a clear picture of what is really happening under the hood ? In this post we will try to answer those questions with existing tools. We won't discuss continuous integration or complex orchestration. Instead, we will focus on what it takes to wrap a typical program to make it run as a public service. A sample application Before diving into the real problem, we need some code to throw on remote servers. Our sample application below exposes a random key/value store over http. // app.js // use redis for data storage var Redis = require('ioredis'); // and express to expose a RESTFul API var express = require('express'); var app = express(); // connecting to redis server var redis = new Redis({ host: process.env.REDIS_HOST || '127.0.0.1', port: process.env.REDIS_PORT || 6379 }); // store random float at the given path app.post('/:key', function (req, res) { var key = req.params.key var value = Math.random(); console.log('storing', value,'at', key) res.json({set: redis.set(key, value)}); }); // retrieve the value at the given path app.get('/:key', function (req, res) { console.log('fetching value at ', req.params.key); redis.get(req.params.key).then(function(err, result) { res.json({ result: result || err }); }) }); var server = app.listen(3000, function () { var host = server.address().address; var port = server.address().port; console.log('Example app listening at http://%s:%s', host, port); }); And we define the following package.json and Dockerfile. { "name": "sample-app", "version": "0.1.0", "scripts": { "start": "node app.js" }, "dependencies": { "express": "^4.12.4", "ioredis": "^1.3.6", }, "devDependencies": {} } # Given a correct package.json, those two lines alone will properly install and run our code FROM node:0.12-onbuild # application's default port EXPOSE 3000 A Dockerfile ? Yeah, here is a first step toward cloud computation under control. Packing our code and its dependencies into a container will allow us to ship and launch the application with a few reproducible commands. # download official redis image docker pull redis # cd to the root directory of the app and build the container docker build -t article/sample . # assuming we are logged in to hub.docker.com, upload the resulting image for future deployment docker push article/sample Enough for the preparation, time to actually run the code. Service Discovery The server code needs a connection to redis. We can't hardcode it because host and port are likely to change under different deployments. Fortunately The Twelve-Factor App provides us with an elegant solution. The twelve-factor app stores config in environment variables (often shortened to env vars or env). Env vars are easy to change between deploys without changing any code; Indeed, this strategy integrates smoothly with an infrastructure composed of containers. 
docker run --detach --name redis redis # 7c5b7ff0b3f95e412fc7bee4677e1c5a22e9077d68ad19c48444d55d5f683f79 # fetch redis container virtual ip export REDIS_HOST=$(docker inspect -f '{{ .NetworkSettings.IPAddress }}' redis) # note : we don't specify REDIS_PORT as the redis container listens on the default port (6379) docker run -it --rm --name sample --env REDIS_HOST=$REDIS_HOST article/sample # > sample-app@0.1.0 start /usr/src/app # > node app.js # Example app listening at http://:::3000 In another terminal, we can check everything is working as expected. export SAMPLE_HOST=$(docker inspect -f '{{ .NetworkSettings.IPAddress }}' sample)) curl -X POST $SAMPLE_HOST:3000/test # {"set":{"isFulfilled":false,"isRejected":false}} curl -X GET $SAMPLE_HOST:3000/test # {"result":"0.5807915225159377"} We didn't precise any network informations but even so, containers can communicate. This method is widely used and projects like etcd or consul let us automate the whole process. Monitoring Performances can be a critical consideration for end-user experience or infrastructure costs. We should be able to identify bottlenecks or abnormal activities and once again, we will take advantage of containers and open source projects. Without modifying the running server, let's launch three new components to build a generic monitoring infrastructure. Influxdb is a fast time series database where we will store containers metrics. Since we properly defined the application into two single-purpose containers, it will give us an interesting overview of what's going on. # default parameters export INFLUXDB_PORT=8086 export INFLUXDB_USER=root export INFLUXDB_PASS=root export INFLUXDB_NAME=cadvisor # Start database backend docker run --detach --name influxdb --publish 8083:8083 --publish $INFLUXDB_PORT:8086 --expose 8090 --expose 8099 --env PRE_CREATE_DB=$INFLUXDB_NAME tutum/influxdb export INFLUXDB_HOST=$(docker inspect -f '{{ .NetworkSettings.IPAddress }}' influxdb) cadvisor Analyzes resource usage and performance characteristics of running containers. The command flags will instruct it how to use the database above to store metrics. docker run --detach --name cadvisor --volume=/var/run:/var/run:rw --volume=/sys:/sys:ro --volume=/var/lib/docker/:/var/lib/docker:ro --publish=8080:8080 google/cadvisor:latest --storage_driver=influxdb --storage_driver_user=$INFLUXDB_USER --storage_driver_password=$INFLUXDB_PASS --storage_driver_host=$INFLUXDB_HOST:$INFLUXDB_PORT --log_dir=/ # A live dashboard is available at $CADVISOR_HOST:8080/containers # We can also point the brower to $INFLUXDB_HOST:8083, with credentials above, to inspect containers data. # Query example: # > list series # > select time,memory_usage from stats where container_name='cadvisor' limit 1000 # More infos: https://github.com/google/cadvisor/blob/master/storage/influxdb/influxdb.go Grafana is a feature rich metrics dashboard and graph editor for Graphite, InfluxDB and OpenTSB. From its web interface, we will query the database and graph the metrics cadvisor collected and stored. docker run --detach --name grafana -p 8000:80 -e INFLUXDB_HOST=$INFLUXDB_HOST -e INFLUXDB_PORT=$INFLUXDB_PORT -e INFLUXDB_NAME=$INFLUXDB_NAME -e INFLUXDB_USER=$INFLUXDB_USER -e INFLUXDB_PASS=$INFLUXDB_PASS -e INFLUXDB_IS_GRAFANADB=true tutum/grafana # Get login infos generated docker logs grafana  Now we can head to localhost:8000 and build a custom dashboard to monitor the server. 
I won't repeat the comprehensive documentation but here is a query example: # note: cadvisor stores metrics in series named 'stats' select difference(cpu_cumulative_usage) where container_name='cadvisor' group by time 60s Grafana's autocompletion feature shows us what we can track : cpu, memory and network usage among other metrics. We all love screenshots and dashboards so here is a final reward for our hard work. Conclusion Development best practices and a good understanding of powerful tools gave us a rigorous workflow to launch applications with confidence. To sum up: Containers bundle code and requirements for flexible deployment and execution isolation. Environment stores third party services informations, giving developers a predictable and robust solution to read them. InfluxDB + Cadvisor + Grafana feature a complete monitoring solution independently of the project implementation. We fullfilled our expections but there's room for improvements. As mentioned, service discovery could be automated, but we also omitted how to manage logs. There are many discussions around this complex subject and we can expect shortly new improvements in our toolbox. About the author Xavier Bruhiere is the CEO of Hive Tech. He contributes to many community projects, including Occulus Rift, Myo, Docker and Leap Motion. In his spare time he enjoys playing tennis, the violin and the guitar. You can reach him at @XavierBruhiere.

Interactive Documents

Packt
02 Nov 2015
This article by Julian Hillebrand and Maximilian H. Nierhoff authors of the book Mastering RStudio for R Development covers the following topics: The two main ways to create interactive R Markdown documents Creating R Markdown and Shiny documents and presentations Using the ggvis package with R Markdown Embedding different types of interactive charts in documents Deploying interactive R Markdown documents (For more resources related to this topic, see here.) Creating interactive documents with R Markdown In this article, we want to focus on the opportunities to create interactive documents with R Markdown and RStudio. This is, of course, particularly interesting for the readers of a document, since it enables them to interact with the document by changing chart types, parameters, values, or other similar things. In principle, there are two ways to make an R Markdown document interactive. Firstly, you can use the Shiny web application framework of RStudio, or secondly, there is the possibility of incorporating various interactive chart types by using corresponding packages. Using R Markdown and Shiny Besides building complete web applications, there is also the possibility of integrating entire Shiny applications into R Markdown documents and presentations. Since we have already learned all the basic functions of R Markdown, and the use and logic of Shiny, we will focus on the following lines of integrating a simple Shiny app into an R Markdown file. In order for Shiny and R Markdown to work together, the argument, runtime: shiny must be added to the YAML header of the file. Of course, the RStudio IDE offers a quick way to create a new Shiny document presentation. Click on the new file, choose R Markdown, and in the popup window, select Shiny from the left-hand side menu. In the Shiny menu, you can decide whether you want to start with a Shiny Document option or a Shiny Presentation option: Shiny Document After choosing the Shiny Document option, a prefilled .Rmd file opens. It is different from the known R Markdown interface in that there is the Run Document button instead of the knit button and icon. The prefilled .Rmd file produces an R Markdown document with a working and interactive Shiny application. You can change the number of bins in the plot and also adjust the bandwidth. All these changes get rendered in real time, directly in your document. Shiny Presentation Also, when you click on Shiny Presentation in the selection menu, a prefilled .Rmd file opens. Because it is a presentation, the output format is changed to ioslides_presentation in the YAML header. The button in the code pane is now called Run Presentation: Otherwise, Shiny Presentation looks just like the normal R Markdown presentations. The Shiny app gets embedded in a slide and you can again interact with the underlying data of the application: Dissembling a Shiny R Markdown document Of course, the questions arises that how is it possible to embed a whole Shiny application onto an R Markdown document without the two usual basic files, ui.R and server.R? In fact, the rmarkdown package creates an invisible server.R file by extracting the R code from the code chunks. Reactive elements get placed into the index.html file of the HTML output, while the whole R Markdown document acts as the ui.R file. Embedding interactive charts into R Markdown The next way is to embed interactive chart types into R Markdown documents by using various R packages that enable us to create interactive charts. 
Some packages are as follows: ggvis rCharts googleVis dygraphs Therefore, we will not introduce them again, but will introduce some more packages that enable us to build interactive charts. They are: threejs networkD3 metricsgraphics plotly Please keep in mind that the interactivity logically only works with the HTML output of R Markdown. Using ggvis for interactive R Markdown documents Broadly speaking, ggvis is the successor of the well-known graphic package, ggplot2. The interactivity options of ggvis, which are based on the reactive programming model of the Shiny framework, are also useful for creating interactive R Markdown documents. To create an interactive R markdown document with ggvis, you need to click on the new file, then on R Markdown..., choose Shiny in the left menu of the new window, and finally, click on OK to create the document. As told before, since ggvis uses the reactive model of Shiny, we need to create an R Markdown document with ggvis this way. If you want to include an interactive ggvis plot within a normal R Markdown file, make sure to include the runtime: shiny argument in the YAML header. As shown, readers of this R Markdown document can easily adjust the bandwidth, and also, the kernel model. The interactive controls are created with input_. In our example, we used the controls, input_slider() and input_select(). For example, some of the other controls are input_checkbox(), input_numeric(), and so on. These controls have different arguments depending on the type of input. For both controls in our example, we used the label argument, which is just a text label shown next to the controls. Other arguments are ID (a unique identifier for the assigned control) and map (a function that remaps the output). Summary In this article, we have learned the two main ways to create interactive R Markdown documents. On the one hand, there is the versatile, usable Shiny framework. This includes the inbuilt Shiny documents and presentations options in RStudio, and also the ggvis package, which takes the advantages of the Shiny framework to build its interactivity. On the other hand, we introduced several already known, and also some new, R packages that make it possible to create several different types of interactive charts. Most of them achieve this by binding R to Existing JavaScript libraries. Resources for Article: Further resources on this subject: Jenkins Continuous Integration [article] Aspects of Data Manipulation in R [article] Find Friends on Facebook [article]

Intro to Docker Part 2: Developing a Simple Application

Julian Gindi
30 Oct 2015
In my last post, we learned some basic concepts related to Docker, and we learned a few basic operations for using Docker containers. In this post, we will develop a simple application using Docker. Along the way we will learn how to use Dockerfiles and Docker's amazing 'compose' feature to link multiple containers together. The Application We will be building a simple clone of Reddit's very awesome and mysterious "The Button". The application will be written in Python using the Flask web framework, and will use Redis as its storage backend. If you do not know Python or Flask, fear not, the code is very readable and you are not required to understand the code to follow along with the Docker-specific sections. Getting Started Before we get started, we need to create a few files and directories. First, go ahead and create a Dockerfile, requirements.txt (where we will specify project-specific dependencies), and a main app.py file. touch Dockerfile requirements.txt app.py Next we will create a simple endpoint that will return "Hello World". Go ahead and edit your app.py file to look like such: from flask import Flask app = Flask(__name__) @app.route('/') def main(): return 'Hello World!' if __name__ == '__main__': app.run('0.0.0.0') Now we need to tell Docker how to build a container containing all the dependencies and code needed to run the app. Edit your Dockerfile to look like such: 1 FROM python:2.7 2 3 RUN mkdir /code 4 WORKDIR /code 5 6 ADD requirements.txt /code/ 7 RUN pip install -r requirements.txt 8 9 ADD . /code/ 10 11 EXPOSE 5000 Before we move on, let me explain the basics of Dockerfiles. Dockerfiles A Dockerfile is a configuration file that specifies instructions on how to build a Docker container. I will now explain each line in the Dockerfile we just created (I will reference individual lines). 1: First, we specify the base image to use as our starting point (we discussed this in more detail in the last post). Here we are using a stock Python 2.7 image. 3: Dockerfiles can contain a few 'directives' that dictate certain behaviors. RUN is one such directive. It does exactly what it sounds like - runs an arbitrary command. Here, we are just making a working directory. 4: We use WORKDIR to specify the main working directory. 6: ADD allows us to selectively add files to the container during the build process. Currently, we just need to add the requirements file to tell Docker which dependencies to install. 7: We use the RUN command and Python's pip package manager to install all the needed dependencies. 9: Here we add all the code in our current directory into the Docker container (ADD . /code/). 11: Finally we 'expose' the ports we will need to access. In this case, Flask will run on port 5000. Building from a Dockerfile We are almost ready to build an image from this Dockerfile, but first, let's specify the dependencies we will need in our requirements.txt file. flask==0.10.1 redis==2.10.3 I am using specific versions here to ensure that your version will work just like mine does. Once we have all these pieces in place we can build the image with the following command. > docker build -t thebutton . We are 'tagging' this image with an easy-to-remember name that we can use later. Once the build completes, we can run the container and see our message in the browser. > docker run -p 5000:5000 thebutton python app.py We are doing a few things here: The -p flag tells Docker to map port 5000 inside the container to port 5000 outside the container (this just makes our lives easier). 
Next we specify the image name (thebutton) and finally the command to run inside the container - python app.py - this will start the web server and serve our page. We are almost ready to view our page but first, we must discover which IP the site will be on. For Linux-based systems, you can use localhost but for Mac you will need to run boot2docker ip to discover the IP address to visit. Navigate to your site (in my case it's 192.168.59.103:5000) and you should see "Hello World" printed. Congrats! You are running your first site from inside a Docker container. Putting it All Together Now, we are going to complete the app, and use Docker Compose to launch the entire project for us. This will contain two containers, one running our Flask app, and another running an instance of Redis. The great thing about docker-compose is that you can specify a system to create, and how to connect all the containers. Let's create our docker-compose.yml file now. redis: image: redis:2.8.19 web: build: . command: python app.py ports: - "5000:5000" links: - redis:redis This file specifies the two containers (web and redis). It specifies how to build each container (we are just using the stock redis image here). The web container is a bit more involved since we first build the container using our local Dockerfile (the build: . line). Then we expose port 5000 and link the Redis container to our web container. The awesome thing about linking containers this way is that the web container automatically gets information about the redis container. In this case, there is an /etc/hosts entry called 'redis' that points to our Redis container. This allows us to configure Redis easily in our application: db = redis.StrictRedis('redis', 6379, 0) To test this all out, you can grab the complete source here. All you will need to run is docker-compose up and then access the site the same way we did before. Congratulations! You now have all the tools you need to use Docker effectively! A minimal sketch of what the finished app.py might look like is included after the author note below. About the author Julian Gindi is a Washington DC-based software and infrastructure engineer. He currently serves as Lead Infrastructure Engineer at [iStrategylabs](isl.co) where he does everything from system administration to designing and building deployment systems. He is most passionate about Operating System design and implementation, and in his free time contributes to the Linux Kernel.
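For reference, here is a minimal sketch of what the finished app.py for this project might look like. This is an assumption rather than the actual source from the linked repository: the 'clicks' key name and the JSON response shape are illustrative, while the Flask and redis-py calls are standard.

```python
# app.py - hedged sketch of the completed button-counter app
from flask import Flask, jsonify
import redis

app = Flask(__name__)

# 'redis' resolves to the linked Redis container created by docker-compose
db = redis.StrictRedis('redis', 6379, 0)

@app.route('/')
def main():
    return 'Hello World!'

@app.route('/click', methods=['POST'])
def click():
    # INCR is atomic, so concurrent button presses are counted safely
    count = db.incr('clicks')  # 'clicks' is an illustrative key name
    return jsonify(clicks=count)

if __name__ == '__main__':
    app.run('0.0.0.0')
```

With docker-compose up running, a POST to the /click endpoint (for example, curl -X POST http://localhost:5000/click) should return an increasing count.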
Read more
Packt
29 Oct 2015
5 min read
Save for later

Team Project Setup

Packt
29 Oct 2015
5 min read
In this article, by Tarun Arora and Ahmed Al-Asaad, authors of the book Microsoft Team Foundation Server Cookbook, you will learn about: Using Team Explorer to connect to Team Foundation Server Creating and setting up a new Team Project for a scrum team (For more resources related to this topic, see here.) Microsoft Visual Studio Team Foundation Server 2015 is the backbone of Microsoft's Application Lifecycle Management (ALM) solution, providing core services such as version control, work item tracking, reporting, and automated builds. Team Foundation Server helps organizations communicate and collaborate more effectively throughout the process of designing, building, testing, and deploying software, ultimately leading to increased productivity and team output, improved quality, and greater visibility into the application life cycle. Team Foundation Server is Microsoft's on-premises offering for application lifecycle management tooling; Visual Studio Online is a collection of developer services that runs on Microsoft Azure and extends the development experience in the cloud. Team Foundation Server is very flexible and supports a broad spectrum of topologies. While a simple one-machine setup may suffice for small teams, you'll see enterprises using scaled-out complex topologies. You'll find that TFS topologies are largely driven by the scope and scale of its use in an organization. Ensure that you have details of your Team Foundation Server handy. Please refer to the Microsoft Visual Studio Licensing guide available at the following link to learn about the license requirements for Team Foundation Server: http://www.microsoft.com/en-gb/download/details.aspx?id=13350. Using Team Explorer to connect to Team Foundation Server 2015 and GitHub To build, plan, and track your software development project using Team Foundation Server, you'll need to connect the client of your choice to Team Foundation Server. In this recipe, we'll focus on connecting Team Explorer to Team Foundation Server. Getting ready Team Explorer is installed with each version of Visual Studio; alternatively, you can also install Team Explorer from the Microsoft download center as a standalone client. When you start Visual Studio for the first time, you'll be asked to sign in with a Microsoft account, such as Live or Hotmail, and provide some basic registration information. You should choose a Microsoft account that best represents you. If you already have an MSDN account, it's recommended that you sign in with its associated Microsoft account. If you don't have a Microsoft account, you can create one for free. Logging in is advisable, not mandatory. How to do it... Open Visual Studio 2015. Click on the Team Toolbar and select Connect to Team Foundation Server. In Team Explorer, click on Select Team Projects.... In the Connect to Team Foundation Server form, the dropdown shows a list of all the TFS Servers you have connected to before. If you can't see the server you want to connect to in the dropdown, click on Servers to enter the details of the Team Foundation Server. Click on Add and enter the details of your TFS Server and then click on OK. You may be required to enter the login details to authenticate against the TFS server. Click Close on the Add/Remove Team Foundation Server form. You should now see the details of your server in the Connect to Team Foundation Server form. At the bottom left, you'll see the user ID being used to establish this connection.
Click on Connect to complete the connection; this will navigate you back to Team Explorer. At this point, you have successfully connected Team Explorer to Team Foundation Server. Creating and setting up a new Team Project for a Scrum Team Software projects require a logical container to store project artifacts such as work items, code, builds, releases, and documents. In Team Foundation Server, this logical container is referred to as a Team Project. Different teams follow different processes to organize, manage, and track their work. Team Projects can be customized to specific project delivery frameworks through process templates. This recipe explains how to create a new team project for a scrum team in Team Foundation Server. Getting ready The new Team Project creation action needs to be triggered from Team Explorer. Before you can create a new Team Project, you need to connect Team Explorer to Team Foundation Server. The recipe Connecting Team Explorer to Team Foundation Server explains how this can be done. In order to create a new Team Project, you will need the following permissions: You must have the Create new projects permission on the TFS application tier. This permission is granted by adding users to the Project Collection Administrators TFS group. The Team Foundation Administrators global group also includes this permission. You must have the Create new team sites permission within the SharePoint site collection that corresponds to the TFS team project collection. This permission is granted by adding the user to a SharePoint group with Full Control rights on the SharePoint site collection. In order to use the SQL Server Reporting Services features, you must be a member of the Team Foundation Content Manager role in Reporting Services. To verify whether you have the correct permissions, you can download the Team Foundation Server Administration Tool from CodePlex, available at https://tfsadmin.codeplex.com/. TFS Admin is an open source tool available under the Microsoft Public License (Ms-PL). Summary In this article, we have looked at setting up a Team Project in Team Foundation Server 2015. We started off by connecting Team Explorer to Team Foundation Server and GitHub. We then looked at creating a team project and setting up a scrum team. Resources for Article: Further resources on this subject: Introducing Liferay for Your Intranet [article] Preparing our Solution [article] Work Item Querying [article]
Read more

Xavier Bruhiere
16 Oct 2015
7 min read
Save for later

Mono to Micro-Services: Splitting that fat application

Xavier Bruhiere
16 Oct 2015
7 min read
As articles state everywhere, we're living in a fast-paced digital age. Project complexity, or business growth, challenges existing development patterns. That's why many developers are evolving from the monolithic application toward micro-services. Facebook is moving away from its big blue app. SoundCloud is embracing microservices. Yet this can be a daunting process, so what do we gain? Scale. Plugging in new components beats digging into an ocean of code. Split a complex problem into smaller ones, which are easier to solve and maintain. Distribute work across independent teams. Friendliness toward open technologies. Isolating a service into a container makes it straightforward to distribute and use. It also allows different, loosely coupled stacks to communicate. Once upon a time, there was a fat code block called Intuition, my algorithmic trading platform. In this post, we will engineer a simplified version, divided into well-defined components. Code Components First, we're going to write the business logic, following the single responsibility principle, and one of my favorite code mantras: Prefer composition over inheritance The point is to identify key components of the problem, and code a specific solution for each of them. It will articulate our application around the collaboration of clear abstractions. As an illustration, start with the RandomAlgo class. Python tends to be the go-to language for data analysis and rapid prototyping. It is a great fit for our purpose. import random class RandomAlgo(object): """ Represent the algorithm flow. Heavily inspired from quantopian.com and processing.org """ def initialize(self, params): """ Called once to prepare the algo. """ self.threshold = params.get('threshold', 0.5) # As we will see later, we return here data channels we're interested in return ['quotes'] def event(self, data): """ This method is called every time a new batch of data is ready. :param data: {'sid': 'GOOG', 'quote': '345'} """ # randomly choose to invest or not if random.random() > self.threshold: print('buying {0} of {1}'.format(data['quote'], data['sid'])) This implementation focuses on a single thing: detecting buy signals. But once you get such a signal, how do you invest your portfolio? This is the responsibility of a new component. class Portfolio(object): def __init__(self, amount): """ Starting amount of cash we have. """ self.cash = amount def optimize(self, data): """ We have a buy signal on this data. Tell us how much cash we should bet. """ # We're still baby traders and we randomly choose what fraction of our cash available to invest to_invest = random.random() * self.cash self.cash = self.cash - to_invest return to_invest Then we can improve our previous algorithm's event method, taking advantage of composition. def initialize(self, params): # ... self.portfolio = Portfolio(params.get('starting_cash', 10000)) def event(self, data): # ... print('buying {0} of {1}'.format(self.portfolio.optimize(data), data['sid'])) Here are two simple components that produce readable and efficient code. Now we can develop more sophisticated portfolio optimizations without touching the algorithm internals. This is also a huge gain early in a project when we're not sure how things will evolve. Developers should only focus on this core logic. In the next section, we're going to unfold a separate part of the system. The communication layer will solve one question: how do we produce and consume events? Inter-components messaging Let's state the problem.
We want each algorithm to receive interesting events and publish its own data. This is the kind of challenge the Internet of Things (IoT) is tackling. We will find empirically that our modular approach allows us to pick the right tool, even within a priori unrelated fields. The code below leverages MQTT to bring M2M messaging to the application. Notice we're diversifying our stack with node.js. Indeed, it's one of the most convenient languages to deal with event-oriented systems (JavaScript, in general, is gaining some traction in the IoT space). var mqtt = require('mqtt'); // connect to the broker, responsible for routing messages // (thanks mosquitto) var conn = mqtt.connect('mqtt://test.mosquitto.org'); conn.on('connect', function () { // we're up! Time to initialize the algorithm // and subscribe to interesting messages }); // triggered on topic we're listening to conn.on('message', function (topic, message) { console.log('received data:', message.toString()); // Here, pass it to the algo for processing }); That's neat! But we still need to connect this messaging layer with the actual Python algorithm. The RPC (Remote Procedure Call) protocol comes in handy for the task, especially with zerorpc. Here is the full implementation with more explanations. // command-line interfaces made easy var program = require('commander'); // the MQTT client for Node.js and the browser var mqtt = require('mqtt'); // a communication layer for distributed systems var zerorpc = require('zerorpc'); // import project properties var pkg = require('./package.json') // define the cli program .version(pkg.version) .description(pkg.description) .option('-m, --mqtt [url]', 'mqtt broker address', 'mqtt://test.mosquitto.org') .option('-r, --rpc [url]', 'rpc server address', 'tcp://127.0.0.1:4242') .parse(process.argv); // connect to mqtt broker var conn = mqtt.connect(program.mqtt); // connect to rpc peer, the actual python algorithm var algo = new zerorpc.Client() algo.connect(program.rpc); conn.on('connect', function () { // connections are ready, initialize the algorithm var conf = { cash: 50000 }; algo.invoke('initialize', conf, function(err, channels, more) { // the method returns an array of data channels the algorithm needs for (var i = 0; i < channels.length; i++) { console.log('subscribing to channel', channels[i]); conn.subscribe(channels[i]); } }); }); conn.on('message', function (topic, message) { console.log('received data:', message.toString()); // make the algorithm process the incoming data algo.invoke('event', JSON.parse(message.toString()), function(err, res, more) { console.log('algo output:', res); // we're done algo.close(); conn.end(); }); }); The code above calls our algorithm's methods. Here is how to expose them over RPC. import click, zerorpc # ... algo code ... @click.command() @click.option('--addr', default='tcp://127.0.0.1:4242', help='address to bind rpc server') def serve(addr): server = zerorpc.Server(RandomAlgo()) server.bind(addr) click.echo(click.style('serving on {} ...'.format(addr), bold=True, fg='cyan')) # listen and serve server.run() if __name__ == '__main__': serve() At this point we are ready to run the app. Let's fire up 3 terminals, install the requirements, and make the machines trade.
sudo apt-get install curl libpython-dev libzmq-dev # Install pip curl https://bootstrap.pypa.io/get-pip.py | python # Algorithm requirements pip install zerorpc click # Messaging requirements npm init npm install --save commander mqtt zerorpc # Activate backend python ma.py --addr tcp://127.0.0.1:4242 # Manipulate algorithm and serve messaging system node app.js --rpc tcp://127.0.0.1:4242 # Publish messages node_modules/.bin/mqtt pub -t 'quotes' -h 'test.mosquitto.org' -m '{"goog": 3.45}' In this state, our implementation is over-engineered. But we designed a sustainable architecture to wire up small components. And from here we can extend the system. One can focus on algorithms without worrying about event plumbing. The corollary: switching to a new messaging technology won't affect the way we develop algorithms. We can even swap algorithms by changing the rpc address. A service discovery component could expose which backends are available and how to reach them. A project like octoblu adds device authentication, data sharing, and more. We could implement data sources that connect to live markets or databases, compute indicators like moving averages, and publish them to algorithms (a sketch of such a data source is included after the author note at the end of this article). Conclusion Given our API definition, a contributor can hack on any component without breaking the project as a whole. In a fast-paced environment, with constant iterations, this architecture can make or break products. This is especially true in the rising container world. Assuming we package each component into specialized containers, we smooth the way to a scalable infrastructure that we can test, distribute, deploy and grow. Not sure where to start when it comes to containers and microservices? Visit our Docker page! About the Author Xavier Bruhiere is the CEO of Hive Tech. He contributes to many community projects, including Oculus Rift, Myo, Docker and Leap Motion. In his spare time he enjoys playing tennis, the violin and the guitar. You can reach him at @XavierBruhiere.
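To make the data source idea above concrete, here is a hedged Python sketch of a component that computes a simple moving average and publishes it over MQTT. It uses the paho-mqtt package instead of the Node.js mqtt client shown in the article, and the 'indicators/sma' topic, the fake quote feed, and the window size are purely illustrative.

```python
# sma_publisher.py - illustrative data source component (not part of the original project)
import json
import random
import time
from collections import deque

import paho.mqtt.publish as publish  # assumes `pip install paho-mqtt`

WINDOW = 5
quotes = deque(maxlen=WINDOW)

while True:
    # stand-in for a real market feed
    quotes.append(100 + random.random() * 10)
    if len(quotes) == WINDOW:
        sma = sum(quotes) / WINDOW
        # algorithms can subscribe to this channel just like they subscribe to 'quotes'
        publish.single('indicators/sma',
                       json.dumps({'sid': 'GOOG', 'sma': sma}),
                       hostname='test.mosquitto.org')
    time.sleep(1)
```

Because the indicator travels over the same broker and topic mechanism as raw quotes, an algorithm only has to return 'indicators/sma' from its initialize method to start consuming it.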
Read more

Packt
13 Oct 2015
8 min read
Save for later

Getting Places

Packt
13 Oct 2015
8 min read
In this article by Nafiul Islam, the author of Mastering PyCharm, we'll learn all about navigation. It is divided into three parts. The first part is called Omni, which deals with getting anywhere from any place. The second is called Macro, which deals with navigating to places of significance. The third and final part is about moving within a file and it is called Micro. By the end of this article, you should be able to navigate freely and quickly within PyCharm, and use the right tool for the job to do so. Veteran PyCharm users may not find their favorite navigation tool mentioned or explained. This is because the methods of navigation described throughout this article will lead readers to discover their own tools that they prefer over others. (For more resources related to this topic, see here.) Omni In this section, we will discuss the tools that PyCharm provides for a user to go from anywhere to any place. You could be in your project directory one second and, the next, inside the Python standard library or a class in your file. These tools are generally slow, or at least slower than the more precise tools of navigation provided. Back and Forward The Back and Forward actions allow you to move your cursor back to the place where it was previously for more than a few seconds or where you've made edits. This information persists throughout sessions, so even if you exit the IDE, you can still get back to the positions that you were in before you quit. This falls into the Omni category because these two actions could potentially get you from any place within a file to any place within any file in your project (that you have been to), to even parts of the standard library that you've looked into, as well as your third-party Python packages. The Back and Forward actions are perhaps two of my most used navigation actions, and you can find their shortcuts in the Keymap, or simply click on the Navigate menu to see the keyboard shortcuts: Macro The difference between Macro and Omni is subtle. Omni allows you to go to the exact location of a place, even a place of no particular significance (say, the third line of a documentation string) in any file. Macro, on the other hand, allows you to navigate anywhere of significance, such as a function definition, class declaration, or particular class method. Go to definition or navigate to declaration Go to definition is the old name for Navigate to Declaration in PyCharm. This action, like the one previously discussed, could lead you anywhere: a class inside your project or a third-party library function. What this action does is allow you to go to the source file declaration of a module, package, class, function, and so on. Keymap is once again useful in finding the shortcut for this particular action. Using this action will move your cursor to the file where the class or function is declared, be it in your project or elsewhere. Just place your cursor on the function or class and invoke the action. Your cursor will now be directly where the function or class was declared. There is, however, a slight problem with this. If one tries to go to the declaration of a .so object, such as the datetime module or the select module, what one will encounter is a stub file (discussed in detail later). These are helper files that allow PyCharm to give you the code completion that it does. Modules that are .so files are indicated by a terminal icon, as shown here: Search Everywhere The action speaks for itself. You search for classes, files, methods, and even actions.
Universally invoked using double Shift (pressing Shift twice in quick succession), this nifty action looks similar to any other search bar. Search Everywhere searches only inside your project by default; however, one can also use it to search non-project items. Not using this option leads to a faster search and a lower memory footprint. Search Everywhere is a gateway to other search actions available in PyCharm. In the preceding screenshot, one can see that Search Everywhere has separate parts, such as Recent Files and Classes. Each of these parts has a shortcut next to its section name. If you find yourself using Search Everywhere for Classes all the time, you might start using the Navigate Class action instead, which is much faster. The Switcher tool The Switcher tool allows you to quickly navigate through your currently open tabs, recently opened files, as well as all of your panels. This tool is essential since you always navigate between tabs. A star to the left indicates open tabs; everything else is a recently opened or edited file. If you just have one file open, Switcher will show more of your recently opened files. It's really handy this way since almost always the files that you want to go to are options in Switcher. The Project panel The Project panel is what I use to see the structure of my project as well as to search for files that I can't find with Switcher. This panel is by far the most used panel of all, and for good reason. The Project panel also supports search; just open it up and start typing to find your file. However, the Project panel can give you even more of an understanding of what your code looks like if you have Show Members enabled. Once this is enabled, you can see the classes as well as the declared methods inside your files. Note that search works just like before, meaning that your search is limited to only the files/objects that you can see; if you collapse everything, you won't be able to search either your files or the classes and methods in them. Micro Micro deals with getting places within a file. These tools are perhaps what I end up using the most in my development. The Structure panel The Structure panel gives you a bird's eye view of the file that you currently have your cursor on. This panel is indispensable when trying to understand a project that one is not familiar with. The yellow arrow indicates the option to show inherited fields and methods. The red arrow indicates the option to show field names, meaning that if it is turned off, you will only see properties and methods. The orange arrow indicates the option to scroll to and from the source. If both are turned on (scroll to and scroll from), where your cursor is will be synchronized with what method, field, or property is highlighted in the Structure panel. Inherited fields are grayed out in the display. Ace Jump This is my favorite navigation plugin, and it was made by John Lindquist, who is a developer at JetBrains (creators of PyCharm). Ace Jump is inspired by the Emacs mode with the same name. It allows you to jump from one place to another within the same file. Before one can use Ace Jump, one has to install the plugin for it. Ace Jump is usually invoked using Ctrl or command + ; (semicolon). You can search for it in the Keymap as well, where it is called Ace Jump. Once invoked, you get a small box in which you can input a letter. Choose a letter from the word that you want to navigate to, and you will see markers pop up immediately on every occurrence of that letter.
If we were to hit D, the cursor would move to the position indicated by D. This might seem long-winded, but it actually leads to really fast navigation. If we wanted to select the word indicated by the letter, then we'd invoke Ace Jump twice before entering a letter. This turns the Ace Jump box red. Upon hitting B, the named parameter rounding will be selected. Often, we don't want to go to a word, but rather to the beginning or the end of a line. In order to do this, just invoke Ace Jump and then hit the left arrow for line beginnings or the right arrow for line endings. In this case, we'd just hit V to jump to the beginning of the line that starts with num_type. This is an example where we hit the right arrow instead of the left one, and we get line-ending options. Summary In this article, I discussed some of the best tools for navigation. This is by no means an exhaustive list. However, these tools will serve as a gateway to more precise tools available for navigation in PyCharm. I generally use Ace Jump, Back, Forward, and Switcher the most when I write code. The Project panel is always open for me, with the most used files having their classes and methods expanded for quick search. Resources for Article: Further resources on this subject: Enhancing Your Blog with Advanced Features [article] Adding a developer with Django forms [article] Deployment and Post Deployment [article]
Read more
Packt
12 Oct 2015
9 min read
Save for later

Running Firefox OS Simulators with WebIDE

Packt
12 Oct 2015
9 min read
In this article by Tanay Pant, the author of the book Learning Firefox OS Application Development, you will learn how to use WebIDE and its features. We will start by installing Firefox OS simulators in the WebIDE so that we can run and test Firefox OS applications in it. Then, we will study how to install and create new applications with WebIDE. Finally, we will cover topics such as using developer tools for applications that run in WebIDE, and uninstalling applications in Firefox OS. In brief, we will go through the following topics: Getting to know about WebIDE Installing Firefox OS simulator Installing and creating new apps with WebIDE Using developer tools inside WebIDE Uninstalling applications in Firefox OS (For more resources related to this topic, see here.) Introducing WebIDE It is now time to have a peek at Firefox OS. You can test your applications in two ways, either by running them on a real device or by running them in the Firefox OS Simulator. Let's go ahead with the latter option since you might not have a Firefox OS device yet. We will use WebIDE, which comes preinstalled with Firefox, to accomplish this task. If you haven't installed Firefox yet, you can do so from https://www.mozilla.org/en-US/firefox/new/. WebIDE allows you to install one or several runtimes (different versions) together. You can use WebIDE to install different types of applications, debug them using Firefox's Developer Tools Suite, and edit the applications/manifest using the built-in source editor. After you install Firefox, open WebIDE. You can open it by navigating to Tools | Web Developer | WebIDE. Let's now take a look at the following screenshot of WebIDE: You will notice that on the top-right side of your window, there is a Select Runtime option. When you click on it, you will see the Install Simulator option. Select that option, and you will see a page titled Extra Components. It presents a list of Firefox OS simulators. We will install the latest stable and unstable versions of Firefox OS. We installed two versions of Firefox OS because we will need both of them to test our applications in the future. After you successfully install both the simulators, click on Select Runtime. This will now show both the OS versions listed, as shown in the following screenshot: Let's open Firefox OS 3.0. This will open up a new window titled B2G. You should now explore Firefox OS, take a look at its applications, and interact with them. It's all HTML, CSS, and JavaScript. Wonderful, isn't it? Very soon, you will develop applications like these: Installing and creating new apps using WebIDE To install or create a new application, click on Open App in the top-left corner of the WebIDE window. You will notice that there are three options: New App, Open Packaged App, and Open Hosted App. For now, think of Hosted apps as websites that are served from a web server and are stored online on the server itself, but that can still use appcache and indexeddb to store all their assets and data offline, if desired. Packaged apps are distributed in a .zip format and they can be thought of as the source code of the website bundled and distributed in a ZIP file. Let's now head to the first option in the Open App menu, which is New App. Select the HelloWorld template, enter a Project Name, and click on OK. After completing this, the WebIDE will ask you about the directory where you want to store the application. I have made a new folder named Hello World for this purpose on the desktop.
Now, click on the Open button and finally, click again on the OK button. This will prepare your app and show details such as the Title, Icon, Description, Location, and App ID of your application. Note that beneath the app title, it says Packaged Web. Can you figure out why? As we discussed, it is because of the fact that we are not serving the application online, but from a packaged directory that holds its source code. This covers the right-hand side panel. In the left-hand side panel, we have the directory listing of the application. It contains an icon folder that holds different-sized icons for different screen resolutions. It also contains the app.js file, which is the engine of the application and will contain the functionality of the application; index.html, which will contain the markup data for the application; and finally, the manifest.webapp file, which contains crucial information and various permissions about the application. If you click on any filename, you will notice that the file opens in an in-browser editor where you can edit the files to make changes to your application and save them from here itself. Let's make some edits in the application, in app.js and index.html. I have replaced World with Firefox everywhere to make it Hello Firefox. Let's make the same changes in the manifest file. The manifest file contains details of your application, such as its name, description, launch path, icons, developer information, and permissions. These details are used to display information about your application in the WebIDE and Firefox Marketplace. The manifest file is in JSON format (a sample manifest is sketched at the end of this section). I went ahead and edited the developer information in the application as well, to include my name and my website. After saving all the files, you will notice that the information of the app in the WebIDE has changed! It's now time to run the application in Firefox OS. Click on Select Runtime and fire up Firefox OS 3.0. After it is launched, click on the Play button in the WebIDE (hovering over it shows the prompt Install and Run). Doing this will install and launch the application on your simulator! Congratulations, you installed your first Firefox OS application! Using developer tools inside WebIDE WebIDE allows you to use Firefox's awesome developer tools for applications that run in the Simulator via WebIDE as well. To use them, simply click on the Settings icon (which looks like a wrench) beside the Install and Run icon that you had used to get the app installed and running. The icon says Debug App on hovering the cursor over it. Click on this to reveal the developer tools for the app that is running via WebIDE. Click on Console, and you will see the message Hello Firefox, which we gave as the input in console.log() in the app.js file. Note that it also specifies the App ID of our application while displaying Hello Firefox. You may have noticed in the preceding illustration that I sent a command via the console, alert('Hello Firefox');, and it simultaneously executed the instruction in the app running in the simulator. As you may have noticed, Firefox OS customizes the look and feel of components such as the alert box (this is browser based). Our application is running in an iframe in Gaia. Every app, including the keyboard application, runs in an iframe for security reasons. You should go through these tools to get the hang of the debugging capabilities if you haven't done so already!
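For reference, here is a hedged sketch of what a minimal manifest.webapp for this Hello Firefox app could look like. The field values below are illustrative rather than the exact ones generated by the HelloWorld template; the website URL in particular is a placeholder.

```json
{
  "name": "Hello Firefox",
  "description": "A minimal packaged app built with WebIDE",
  "launch_path": "/index.html",
  "icons": {
    "128": "/icons/icon128x128.png"
  },
  "developer": {
    "name": "Tanay Pant",
    "url": "https://example.com"
  },
  "default_locale": "en"
}
```

Editing any of these fields in the built-in editor and saving updates the details shown in the WebIDE, as described above.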
One more important thing that you should keep in mind is that inline scripts (for example, <a href="#" onclick="alert(this)">Click Me</a>) are forbidden in Firefox OS apps, due to Content Security Policy (CSP) restrictions. CSP restrictions include remote scripts, inline scripts, javascript: URIs, the Function constructor, dynamic code execution, and plugins such as Flash or Shockwave. Remote styles are also banned. Remote Web Workers and the eval() operator are not allowed for security reasons, and they show a 400 error and security errors, respectively, upon usage. You are warned about CSP violations when submitting your application to the Firefox OS Marketplace. CSP warnings in the validator will not impact whether your app is accepted into the Marketplace. However, if your app is privileged and violates the CSP, you will be asked to fix this issue in order to get your application accepted (a CSP-friendly alternative to inline handlers is sketched at the end of this article). Browsing other runtime applications You can also take a look at the source code of the preinstalled/runtime apps that are present in Firefox OS (Gaia, to be precise). For example, the following is an illustration that shows how to open them: You can click on the Hello World button (in the same place where Open App used to exist), and this will show you the whole list of Runtime Apps, as shown in the preceding illustration. I clicked on the Camera application and it showed me the source code of its main.js file. It's completely okay if you are daunted by the huge file. If you find these runtime applications interesting and want to contribute to them, then you can refer to Mozilla Developer Network's articles on developing Gaia, which you can find at https://developer.mozilla.org/en-US/Firefox_OS/Developing_Gaia. Our application looks as follows in the App Launcher of the operating system: Uninstalling applications in Firefox OS You can remove the project from WebIDE by clicking on the Remove Project button in the home page of the application. However, this will not uninstall the application from the Firefox OS Simulator. The uninstallation system of the operating system is quite similar to iOS. You just have to double tap in OS X to get the Edit screen, from where you can click on the cross button on the top-left of the app icon to uninstall the app. You will then get a confirmation screen that warns you that all the data of the application will also be deleted along with the app. This will take you back to the Edit screen, where you can click on Done to get back to the home screen. Summary In this article, you learned about WebIDE, how to install the Firefox OS simulator in WebIDE, using Firefox OS and installing applications in it, and creating a skeleton application using WebIDE. You then learned how to use developer tools for applications that run in the simulator, and browsing other preinstalled runtime applications present in Firefox OS. Finally, you learned about removing a project from WebIDE and uninstalling an application from the operating system. Resources for Article: Further resources on this subject: Learning Node.js for Mobile Application Development [Article] Introducing Web Application Development in Rails [Article] One-page Application Development [Article]
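As a follow-up to the CSP discussion above, here is a hedged JavaScript sketch of how an inline onclick handler can be replaced with an event listener registered from app.js. The greet-button id is illustrative and is not part of the HelloWorld template.

```js
// index.html would contain: <button id="greet-button">Click Me</button>
// app.js registers the handler, so no inline onclick attribute is needed
window.addEventListener('DOMContentLoaded', function () {
  var button = document.getElementById('greet-button');
  button.addEventListener('click', function () {
    // alert() itself is fine; it is the inline registration that CSP forbids
    alert('Hello Firefox');
  });
});
```

Because the handler lives in app.js, which ships inside the package, this pattern avoids the inline-script warnings mentioned above.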
Read more

Packt
24 Sep 2015
11 min read
Save for later

Creating a JEE Application with EJB

Packt
24 Sep 2015
11 min read
In this article by Ram Kulkarni, author of Java EE Development with Eclipse (e2), we will be using EJBs (Enterprise Java Beans) to implement business logic. This is ideal in scenarios where you want components that process business logic to be distributed across different servers. But that is just one of the advantages of EJB. Even if you use EJBs on the same server as the web application, you may gain from a number of services that the EJB container provides to the applications through EJBs. You can specify security constraints for calling EJB methods declaratively (using annotations), and you can also easily specify transaction boundaries (specify which method calls form a part of one transaction) using annotations. In addition to this, the container handles the life cycle of EJBs, including pooling of certain types of EJB objects so that more objects can be created when the load on the application increases. (For more resources related to this topic, see here.) In this article, we will create the same application using EJBs and deploy it in a Glassfish 4 server. But before that, you need to understand some basic concepts of EJBs. Types of EJB EJBs can be of the following types as per the EJB 3 specifications: Session bean: Stateful session bean Stateless session bean Singleton session bean Message-driven bean In this article, we will focus on session beans. Session beans In general, session beans are meant for containing methods used to execute the main business logic of enterprise applications. Any Plain Old Java Object (POJO) can be annotated with the appropriate EJB-3-specific annotations to make it a session bean. Session beans come in three types, as follows. Stateful session bean One stateful session bean serves requests for one client only. There is a one-to-one mapping between the stateful session bean and the client. Therefore, stateful beans can hold state data for the client between multiple method calls. In our CourseManagement application, we can use a stateful bean to hold the Student data (student profile and the courses taken by him/her) after a student logs in. The state maintained by the Stateful bean is lost when the server restarts or when the session times out. Since there is one stateful bean per client, using a stateful bean might impact the scalability of the application. We use the @Stateful annotation to create a stateful session bean. Stateless session bean A stateless session bean does not hold any state information for any client. Therefore, one session bean can be shared across multiple clients. The EJB container maintains pools of stateless beans, and when a client request comes, it takes out a bean from the pool, executes methods, and returns the bean to the pool. Stateless session beans provide excellent scalability because they can be shared and need not be created for each client. We use the @Stateless annotation to create a stateless session bean. Singleton session bean As the name suggests, there is only one instance of a singleton bean class in the EJB container (this is true in the clustered environment too; each EJB container will have an instance of a singleton bean). This means that they are shared by multiple clients, and they are not pooled by EJB containers (because there can be only one instance). Since a singleton session bean is a shared resource, we need to manage concurrency in it. Java EE provides two concurrency management options for singleton session beans: container-managed concurrency and bean-managed concurrency.
Container-managed concurrency can easily be specified by annotations. See https://docs.oracle.com/javaee/7/tutorial/ejb-basicexamples002.htm#GIPSZ for more information on managing concurrency in a singleton session bean (a short sketch of container-managed concurrency also appears at the end of this article). Using a singleton bean could have an impact on the scalability of the application if there are resource contentions in the code. We use the @Singleton annotation to create a singleton session bean. Accessing a session bean from the client Session beans can be designed to be accessed locally (within the same application as the session bean), or remotely (from a client running in a different application or JVM), or both. In the case of remote access, session beans are required to implement a remote interface. For local access, session beans can implement a local interface or no interface (the no-interface view of a session bean). Remote and local interfaces that session beans implement are sometimes also called business interfaces, because they typically expose the primary business functionality. Creating a no-interface session bean To create a session bean with a no-interface view, create a POJO and annotate it with the appropriate EJB annotation type and @LocalBean. For example, we can create a local stateful Student bean as follows: import javax.ejb.LocalBean; import javax.ejb.Stateful; @Stateful @LocalBean public class Student { ... } Accessing a session bean using dependency injection You can access session beans either by using the @EJB annotation (for dependency injection) or by performing a Java Naming and Directory Interface (JNDI) lookup. EJB containers are required to make the JNDI URLs of EJBs available to clients. Dependency injection of session beans using @EJB works only for managed components, that is, components of the application whose life cycle is managed by the EJB container. When a component is managed by the container, it is created (instantiated) by the container and also destroyed by the container. You do not create managed components using the new operator. JEE-managed components that support direct injection of EJBs are servlets, managed beans of JSF pages, and EJBs themselves (one EJB can have other EJBs injected into it). Unfortunately, you cannot have a web container injecting EJBs into JSPs or JSP beans. Also, you cannot have EJBs injected into any custom classes that you create and instantiate using the new operator. We can use the Student bean (created previously) from a managed bean of JSF, as follows: import javax.ejb.EJB; import javax.faces.bean.ManagedBean; @ManagedBean public class StudentJSFBean { @EJB private Student studentEJB; } Note that if you create an EJB with a no-interface view, then all the public methods in that EJB will be exposed to the clients. If you want to control which methods can be called by clients, then you should implement a business interface. Creating a session bean using a local business interface A business interface for an EJB is a simple Java interface with either the @Remote or @Local annotation.
So we can create a local interface for the Student bean as follows: import java.util.List; import javax.ejb.Local; @Local public interface StudentLocal { public List<CourseDTO> getCourses(); } We implement the session bean like this: import java.util.List; import javax.ejb.Local; import javax.ejb.Stateful; @Stateful @Local public class Student implements StudentLocal { @Override public List<CourseDTO> getCourses() { // get courses and return them ... } } Clients can access the Student EJB only through the local interface: import javax.ejb.EJB; import javax.faces.bean.ManagedBean; @ManagedBean public class StudentJSFBean { @EJB private StudentLocal student; } The session bean can implement multiple business interfaces. Accessing a session bean using a JNDI lookup Though accessing an EJB using dependency injection is the easiest way, it works only if the container manages the class that accesses the EJB. If you want to access an EJB from a POJO that is not a managed bean, then dependency injection will not work. Another scenario where dependency injection does not work is when the EJB is deployed in a separate JVM (this could be on a remote server). In such cases, you will have to access the EJB using a JNDI lookup (visit https://docs.oracle.com/javase/tutorial/jndi/ for more information on JNDI). JEE applications can be packaged in an Enterprise Application Archive (EAR), which contains a .jar file for EJBs and a WAR file for web applications (and the lib folder contains the libraries required for both). If, for example, the name of an EAR file is CourseManagement.ear and the name of an EJB JAR file in it is CourseManagementEJBs.jar, then the name of the application is CourseManagement (the name of the EAR file) and the module name is CourseManagementEJBs. The EJB container uses these names to create a JNDI URL for looking up EJBs. A global JNDI URL for an EJB is created as follows: "java:global/<application_name>/<module_name>/<bean_name>![<bean_interface>]" java:global: Indicates that it is a global JNDI URL. <application_name>: The application name is typically the name of the EAR file. <module_name>: This is the name of the EJB JAR. <bean_name>: This is the name of the EJB bean class. <bean_interface>: This is optional if the EJB has a no-interface view, or if it implements only one business interface. Otherwise, it is the fully qualified name of a business interface. EJB containers are also required to publish two more variations of JNDI URLs for each EJB. These are not global URLs, which means that they can't be used to access EJBs from clients that are not in the same JEE application (in the same EAR): "java:app/[<module_name>]/<bean_name>![<bean_interface>]" "java:module/<bean_name>![<bean_interface>]" The first URL can be used if the EJB client is in the same application, and the second URL can be used if the client is in the same module (the same JAR file as the EJB). Before you look up any URL in a JNDI server, you need to create an InitialContext that includes information such as, among other things, the hostname of the JNDI server and the port on which it is running.
If you are creating the InitialContext in the same server, then there is no need to specify these attributes: InitialContext initCtx = new InitialContext(); Object obj = initCtx.lookup("jndi_url"); We can use the following JNDI URLs to access a no-interface (LocalBean) Student EJB (assuming that the name of the EAR file is CourseManagement and the name of the JAR file for EJBs is CourseManagementEJBs):
URL: java:global/CourseManagement/CourseManagementEJBs/Student
When to use: The client can be anywhere in the EAR file, because we are using a global URL. Note that we haven't specified the interface name because we are assuming that the Student bean provides a no-interface view in this example.
URL: java:app/CourseManagementEJBs/Student
When to use: The client can be anywhere in the EAR. We skipped the application name because the client is expected to be in the same application. This is because the namespace of the URL is java:app.
URL: java:module/Student
When to use: The client must be in the same JAR file as the EJB.
We can use the following JNDI URLs to access the Student EJB that implemented a local interface, StudentLocal:
URL: java:global/CourseManagement/CourseManagementEJBs/Student!packt.jee.book.ch6.StudentLocal
When to use: The client can be anywhere in the EAR file, because we are using a global URL.
URL: java:global/CourseManagement/CourseManagementEJBs/Student
When to use: The client can be anywhere in the EAR. We skipped the interface name because the bean implements only one business interface. Note that the object returned from this call will be of the StudentLocal type, and not Student.
URL: java:app/CourseManagementEJBs/Student or java:app/CourseManagementEJBs/Student!packt.jee.book.ch6.StudentLocal
When to use: The client can be anywhere in the EAR. We skipped the application name because the JNDI namespace is java:app.
URL: java:module/Student or java:module/Student!packt.jee.book.ch6.StudentLocal
When to use: The client must be in the same module (the same JAR file) as the EJB.
Here is an example of how we can call the Student bean with the local business interface from one of the objects (that is not managed by the web container) in our web application: InitialContext ctx = new InitialContext(); StudentLocal student = (StudentLocal) ctx.lookup("java:app/CourseManagementEJBs/Student"); return student.getCourses(); // get courses from Student EJB Summary EJBs are ideal for writing business logic in web applications. They can act as the perfect bridge between web interface components, such as JSF, servlets, or JSPs, and data access objects, such as JDO. EJBs can be distributed across multiple JEE application servers (this could improve application scalability) and their life cycle is managed by the container. EJBs can easily be injected into managed objects or can be looked up using JNDI. Eclipse JEE makes creating and consuming EJBs very easy. The JEE application server Glassfish can also be managed and applications can be deployed from within Eclipse. Resources for Article: Further resources on this subject: Contexts and Dependency Injection in NetBeans [article] WebSockets in Wildfly [article] Creating Java EE Applications [article]
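Returning to the container-managed concurrency discussion earlier in this article, here is a hedged Java sketch of how a singleton session bean can declare lock types with annotations. The CourseCache bean and its methods are illustrative and are not taken from the book's CourseManagement application.

```java
import java.util.HashMap;
import java.util.Map;
import javax.ejb.ConcurrencyManagement;
import javax.ejb.ConcurrencyManagementType;
import javax.ejb.Lock;
import javax.ejb.LockType;
import javax.ejb.Singleton;

@Singleton
@ConcurrencyManagement(ConcurrencyManagementType.CONTAINER) // the default, shown here for clarity
public class CourseCache {

    private final Map<Integer, String> courseNames = new HashMap<>();

    // Many clients may read concurrently
    @Lock(LockType.READ)
    public String getCourseName(int courseId) {
        return courseNames.get(courseId);
    }

    // Writers get exclusive access; readers wait until the write completes
    @Lock(LockType.WRITE)
    public void putCourseName(int courseId, String name) {
        courseNames.put(courseId, name);
    }
}
```

With container-managed concurrency, the container enforces these locks, so the bean code does not need explicit synchronization.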
Read more