
How-To Tutorials - CMS and E-Commerce


Setting Up WooCommerce

Packt
19 Nov 2013
6 min read
(For more resources related to this topic, see here.)

So, you're already familiar with WordPress and know how to use plugins, widgets, and themes? Your next step is to expand your existing WordPress website or blog with an online store? In that case you've come to the right place! WooCommerce is a versatile WordPress plugin that lets anyone with a little WordPress knowledge start their own online store. If you are not familiar with WordPress at all, this book is not the first one you should read. No worries though: WordPress isn't hard to learn, there are plenty of online resources for picking it up quickly, and you can also turn to one of the many printed books on WordPress that are available.

These are the topics we'll be covering in this article:

Installing and activating WooCommerce
Learning how to set up WooCommerce correctly

Preparing for takeoff

Before we start, remember that you can only install your own plugins if you're working in your own WordPress installation. This means that users running a website on WordPress.com will not be able to follow along; it's simply impossible to install plugins yourself in that environment. Although installing WooCommerce on top of WordPress isn't difficult, we highly recommend that you set up a test environment first. Without going too much into depth, this is what you need to do:

Create a backup copy of your complete WordPress environment using FTP. Alternatively, use a plugin that automatically stores a copy in your Dropbox folder. There are plenty of solutions available; just pick your favorite. UpdraftPlus is one of the possibilities and delivers a complete backup solution: http://wordpress.org/plugins/updraftplus/.

Don't forget to back up your WordPress database as well. You can do this with a tool like phpMyAdmin and create an export from there, but here too there are plugins that make life easier; the UpdraftPlus plugin mentioned previously can perform this task as well.

Once your backups are complete, install XAMPP, which can be downloaded from http://www.apachefriends.org, on a local (Windows) machine. Although XAMPP is available for Mac users, MAMP is a widely used alternative for that group; MAMP can be downloaded from http://www.mamp.info/en/index.html.

Restore your WordPress backup on your test server and follow the remaining part of this book in your new test environment. Alternatively, install a copy of your WordPress website on a temporary subdomain at your hosting provider. For instance, if my website is http://www.example.com, I could easily create a copy of my site at http://test.example.com. Possibilities may vary, depending on the package you have with your hosting provider.

If you don't need to add WooCommerce to an existing WordPress site, you can of course also start from scratch: just install WordPress on a local test server or at your hosting provider. To keep the instructions in this book as clear as possible, that is exactly what we did; we created a fresh installation of WordPress Version 3.6. Next, you see a screenshot of our fresh WordPress installation.

Are these short instructions just too much for you at this moment? Do you need a more detailed step-by-step guide to creating a test environment for your WordPress website?
Look at the following tutorials:

For Mac OS X users: http://wpmu.org/local-wordpresstest-environment-mamp-osx/
For Windows users: http://www.thegeekscope.com/howto-copy-a-live-wordpress-website-to-local-indowsenvironment/

More tutorials will also be available on our website: http://www.joomblocks.com. Don't forget to sign up for the free newsletter, which will bring you even more news and tutorials on WordPress, WooCommerce, and other open source software solutions!

Once ready, we'll be able to take the next step and install the WooCommerce plugin. Let's take a look at our WordPress backend. In our situation we can open it by browsing to http://localhost/wp36/wp-admin. Depending on the choices you made previously for your test environment, your URL could be different. This should all be pretty familiar to you already. Again, your situation might look different, depending on your theme or the number of plugins already active on your website.

Installing WooCommerce

Installing a plugin is a fairly simple task:

Click on Plugins in the menu on the left and click on Add New.
Next, simply enter woocommerce in the Search field and click on Search Plugins.
Verify that the correct plugin is shown on top and click on Install Now.
Confirm the warning message that appears by clicking on OK.
Click on Activate Plugin.

Note that in the following screenshot, we're installing Version 2.0.13 of WooCommerce. New versions follow rather quickly, so you might already see a higher version number. (If you prefer to work from the command line, a short script sketch covering the same backup and installation steps follows at the end of this article.)

WooCommerce needs a number of specific WordPress pages, which it will automatically set up for you. Just click on the Install WooCommerce Pages button and make sure not to forget this step! In our example project, we're installing the English version of WooCommerce, but you might need a different language. By default, WooCommerce already ships in a number of languages, which means the installation will automatically follow the language of your WordPress installation. If you need something else, just browse through the plugin directory on WordPress.org to find additional translations.

Once the necessary pages have been created, the WooCommerce welcome screen will appear and you will see that a new menu item has been added to the main menu on the left. Meanwhile, the plugin created the necessary pages, which you can access by clicking on Pages in the menu on the left. Note that if you open a page that was automatically created by WooCommerce, you'll only see a shortcode, which is used to call the needed functionality. Do not delete the shortcodes, or WooCommerce might stop working. However, it's still possible to add your own content before or after the shortcode on these pages. WooCommerce also added some widgets to your WordPress dashboard, giving an overview of the latest product and sales statistics. At this moment, this is all still empty of course.

Summary

In this article, we learned the basics of WooCommerce and how to install it. We also learned that WooCommerce is a free yet versatile plugin for WordPress that you can use to easily set up your own online store.

Resources for Article: Further resources on this subject: Django 1.2 E-commerce: Generating PDF Reports from Python using ReportLab [Article] Increasing sales with Brainshark slideshows/documents [Article] Implementing OpenCart Modules [Article]
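For readers who prefer a terminal, the backup and installation steps above can also be scripted. The sketch below is illustrative only: it assumes WP-CLI is available on the server, and the database name, user, and paths are placeholders rather than values from this tutorial.

    # Back up the WordPress database (credentials are placeholders)
    mysqldump -u wp_user -p wordpress_db > wordpress-backup.sql

    # Back up the WordPress files
    tar czf wordpress-files.tar.gz /path/to/wordpress

    # Install and activate WooCommerce with WP-CLI, run from the WordPress root
    wp plugin install woocommerce --activate

After activation you would still open the WordPress admin once to click Install WooCommerce Pages, as described above.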


Configuring oVirt

Packt
19 Nov 2013
9 min read
(For more resources related to this topic, see here.)

Configuring the NFS storage

NFS storage is a fairly common type of storage that is quite easy to set up and run, even without special equipment. You can take a server with large disks and create an NFS directory on it. But despite the apparent simplicity of NFS, the setup should be done with attention to detail. Once you have made sure that the NFS directory is suitable for use, go to the procedure of connecting the storage to the data center. After you click on Configure Storage, a dialog box opens in which we specify the basic storage configuration; the following options are displayed:

Name and Data Center: It is used to specify a name and the target data center for the storage
Domain Function/Storage Type: It is used to choose the Data function and the NFS type
Use Host: It is used to enter the host that will make the initial connection to the storage and act in the role of SPM
Export Path: It is used to enter the storage server name and the path of the exported directory
Advanced Parameters: It provides additional connection options, such as the NFS version, the number of retransmissions, and the timeout, which are recommended to be changed only in exceptional cases

Fill in the required storage settings and click on the OK button; this will start the process of connecting the storage. The following image shows the New Storage dialog box with the connecting NFS storage:

Configuring the iSCSI storage

This section will explain how to connect iSCSI storage to a data center with the storage type set to iSCSI. You can skip this section if you do not use iSCSI storage. iSCSI is a technology for building a SAN (Storage Area Network). A key feature of this technology is the transmission of SCSI commands over IP networks, so block data is transferred via IP. By using IP networks, data transfer can take place over long distances and through network equipment such as routers and switches. These features make iSCSI a good technology for building low-cost SANs. oVirt supports iSCSI, and iSCSI storages can be connected to oVirt data centers. Now begin the process of connecting the storage to the data center. After you click on Configure Storage, a dialog box opens in which you specify the basic storage configuration; the following options are displayed:

Name and Data Center: It is used to specify the name and the target data center.
Domain Function/Storage Type: It is used to specify the domain function and storage type; in this case, the Data function and the iSCSI type.
Use Host: It is used to specify the host to which the storage will be attached (the SPM).

The following options are present in the search box for iSCSI targets:

Address and Port: It is used to specify the address and port of the storage server that contains the iSCSI target
User Authentication: Enable this checkbox if authentication is to be used on the iSCSI target
CHAP username and password: It is used to specify the username and password for authentication

Click on the Discover button and oVirt Engine connects to the specified server to search for iSCSI targets. In the resulting list, select the designated target and click on the Login button to authenticate. Upon successful completion of the authentication, the target's LUNs will be displayed; check the one you need and click on OK to start connecting it to the data center. The new storage will automatically connect to the data center.
If it does not, select the location from the list and click on the Attach button in the detail pane where we choose a target data center. Configuring the Fibre Channel storage If you have selected Fibre Channel when creating the data center, we should create a Fibre Channel storage domain. oVirt supports Fibre Channel storage based on multiple preconfigured Logical Unit Numbers (LUN). Skip this section if you do not use Fibre Channel equipment. Begin the process of connecting the storage to the data center. Open the Guide Me wizard and click on the Configure Storage dialog box where you specify the basic storage configuration: Name and Data Center: It is used to specify the name and data center Domain Function/Storage Type: Here we need to specify the data function and Fibre Channel type Use Host: It specifies the address of the virtualization host that will act as the SPM In the area below, the list of LUNs are displayed, enable the Add LUN checkbox on the selected LUN to use it as Fibre Channel data storage. Click on the OK button and this will start the process of connecting storage to the data centers. In the Storage tab and in the list of storages, we can see created Fibre Channel storage. In the process of connecting, its status will change and at the end new storage will be activated and connected to the data center. The connection process can also be seen in the event pane. The following screenshot shows the New Storage dialog box with Fibre Channel storage type: Configuring the GlusterFS storage GlusterFS is a distributed, parallel, and linearly scalable filesystem. GlusterFS can combine the data storage that are located on different servers into a parallel network filesystem. GlusterFS's potential is very large, so developers directed their efforts towards the implementation and support of GlusterFS in oVirt (GlusterFS documentation is available at http://www.gluster.org/community/documentation/index.php/Main_Page). oVirt 3.3 has a complete data center with the GlusterFS type of storage. Configuring the GlusterFS volume Before attempting to connect GlusterFS storage into the data center, we need to create the volume. The procedure of creating GlusterFS volume is common in all versions. Select the Volumes tab in the resource pane and click on Create Volume. In the open window, fill the volume settings: Data Center: It is used to specify the data center that will be attached to the GlusterFS storage. Volume Cluster: It is used to specify the name of the cluster that will be created. Name: It is used to specify a name for the new volume. Type: It is used to specify the type of GlusterFS volume available to choose from, there are seven types of volume that implement various strategic placements of data on the filesystem. Base types are Distribute, Replicate, and Stripe and other combination of these types: Distributed Replicate, Distributed Stripe, Striped Replicate, and Distributed Striped Replicate (additional info can be found at the link: http://gluster.org/community/documentation/index.php/GlusterFS_Concepts). Bricks: With this button, a list of bricks will be collected from the filesystem. Brick is a separate piece with which volume will be built. These bricks are distributed across the hosts. As bricks use a separate directory, it should be placed on a separate partition. Access Protocols: It defines basic access protocols that can be used to gain access to the following: Gluster: It is a native protocol access to volumes GlusterFS, enabled by default. 
NFS: It is an access protocol based on NFS. CIFS: It is an access protocol based on CIFS. Allow Access From: It allows us to enter a comma-separated IP address, hostnames, or * for all hosts that are allowed to access GlusterFS volume. Optimize for oVirt Store: Enabling this checkbox will enable extended options for created volume. The following screenshot shows the dialog box of Create Volume: Fill in the parameters, click on the Bricks button, and go to the new window to add new bricks with the following properties: Volume Type: This is used to change the previously marked type of the GlusterFS volume Server: It is used to specify a separate server that will export GlusterFS brick Brick Directory: It is used to specify the directory to use Specify the server and directory and click on Add. Depending on the type of volume, specify multiple bricks. After completing the list with bricks, click on the OK button to add volume and return to the menu. Click on the OK button to create GlusterFS volumes with the specified parameters. The following screenshot shows the Add Bricks dialog box: Now that we have GlusterFS volume, we select it from the list and click on Start. Configuring the GlusterFS storage oVirt 3.3 has support for creating data centers with the GlusterFS storage type: The GlusterFS storage type requires a preconfigured data center. A pre-created cluster should be present inside the data center. The enabled Gluster service is required. Go to the Storage section in resource pane and click on New Domain. In the dialog box that opens, fill in the details of our storage. The details are given as follows: Name and Data Center: It is used to specify the name and data center Domain Function/Storage Type: It is used to specify the data function and GlusterFS type Use Host: It is used to specify the host that will connect to the SPM Path: It is used to specify the path to the location in the format hostname:volume_name VFS Type: Leave it as glusterfs and leave Mount Option blank Click on the OK button; this will start the process of creating the repository. The created storage automatically connects to the specified data centers. If not, select the repository created in the list, and in the subtab named Data Center in the detail pane, click on the Attach button and choose our data center. After you click on OK, the process of connecting storage to the data center starts. The following screenshot shows the New Storage dialog box with the GlusterFS storage type. Summary In this article we learned how to configure NFS Storage, iSCSI Storage, FC storages, and GlusterFS Storage. Resources for Article: Further resources on this subject: Tips and Tricks on Microsoft Application Virtualization 4.6 [Article] VMware View 5 Desktop Virtualization [Article] Qmail Quickstarter: Virtualization [Article]
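As a command-line complement to the GlusterFS volume steps described above, the following sketch shows how a simple replicated volume could be created with the gluster CLI before being attached in oVirt. The volume name, hostnames, and brick paths are illustrative placeholders and do not come from the original article.

    # Create a two-way replicated volume from bricks on two hosts (names are placeholders)
    gluster volume create datavol replica 2 \
        host1.example.com:/bricks/datavol \
        host2.example.com:/bricks/datavol

    # Start the volume so oVirt can mount it as hostname:volume_name
    gluster volume start datavol

    # Verify the volume status
    gluster volume info datavol

The hostname:volume_name value (here host1.example.com:datavol) is what would go into the Path field of the New Domain dialog described above.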


Icinga Object Configuration

Packt
18 Nov 2013
9 min read
(For more resources related to this topic, see here.)

A localhost monitoring setup

Let us take a close look at our current setup, which we created for monitoring a localhost. Icinga by default comes with an object configuration for a localhost. The object configuration files are inside /etc/icinga/objects for default installations.

    $ ls /etc/icinga/objects
    commands.cfg  notifications.cfg  templates.cfg
    contacts.cfg  printer.cfg        timeperiods.cfg
    localhost.cfg switch.cfg         windows.cfg

There are several configuration files with object definitions. Together, these object definitions define the monitoring setup for monitoring some services on a localhost. Let's first look at localhost.cfg, which has most of the relevant configuration. We have a host definition:

    define host{
        use          linux-server
        host_name    localhost
        alias        localhost
        address      127.0.0.1
    }

The preceding object block defines one object, that is, the host that we want to monitor, with details such as the hostname, an alias for the host, and the address of the server, which is optional but useful when you don't have a DNS record for the hostname. We have a localhost host object defined in Icinga with the preceding object configuration. The localhost.cfg file also has a hostgroup defined, which is as follows:

    define hostgroup {
        hostgroup_name    linux-servers
        alias             Linux Servers
        members           localhost    ; host_name of the host object
    }

The preceding object defines a hostgroup with only one member, localhost, which we will extend later to include more hosts. The members directive specifies the host members of the hostgroup. The value of this directive refers to the value of the host_name directive in the host definitions; it can be a comma-separated list of several hostnames. There is also a directive called hostgroups in the host object, where you can give a comma-separated list of names of the hostgroups that we want the host to be part of. For example, in this case, we could have omitted the members directive in the hostgroup definition and instead specified a hostgroups directive with the value linux-servers in the localhost host definition. At this point, we have a localhost host and a linux-servers hostgroup, and localhost is a member of linux-servers. This is illustrated in the following figure:

Going further into localhost.cfg, we have a bunch of service object definitions that follow. Each of these definitions indicates the service on the localhost that we want to monitor, via the host_name directive.

    define service {
        use                  local-service
        host_name            localhost
        service_description  PING
        check_command        check_ping!100.0,20%!500.0,60%
    }

This is one of the service definitions. The object defines a PING service check that monitors reachability. The host_name directive specifies the host that this service check should be associated with, which in this case is localhost. Again, the value of the host_name directive here should reflect the value of the host_name directive defined in the host object definition. So, we have a PING service check defined for the localhost, which is illustrated by the following figure:

There are several such service definitions that are placed on the localhost. Each service has a check_command directive that specifies the command for monitoring that service. Note that the exclamation marks in the check_command values are the command argument separators. So, cmd!foo!bar indicates that the command is cmd with foo as its first argument and bar as the second.
It is important to remember that the check_ping part in check_command in the preceding example does not mean the check_ping executable that is in /usr/lib64/nagios/plugins/check_ping for most installations; it refers to the Icinga object of type command. In our setup, all command object definitions are inside commands.cfg. The commands.cfg file has the command object definition for check_ping:

    define command {
        command_name    check_ping
        command_line    $USER1$/check_ping -H $HOSTADDRESS$ -w $ARG1$ -c $ARG2$ -p 5
    }

The check_command value in the PING service definition refers to the preceding command object, which indicates the exact command to be executed for performing the service check. $USER1$ is a user-defined Icinga macro. Macros in Icinga are like variables that can be used in various object definitions to wrap data inside these variables. Some macros are predefined, while some are user defined. These user macros are usually defined in /etc/icinga/resources.cfg:

    $USER1$=/usr/lib64/nagios/plugins

So replace the $USER1$ macro with its value, and execute:

    $ value/of/USER1/check_ping --help

This command will print the usual usage string with all the command-line options available. $ARG1$ and $ARG2$ in the command definition are macros referring to the arguments passed in the check_command value in the service definition, which are 100.0,20% and 500.0,60% respectively for the PING service definition. We will come to this later. As noted earlier, the status of the service is determined by the exit code of the command that is specified in the command_line directive in the command definition.

We have many such service definitions for the localhost in localhost.cfg, such as Root Partition (monitors disk space), Total Processes, Current Load, and HTTP, along with command definitions in commands.cfg for the check_commands of each of these service definitions. So, we have a host definition for localhost, a hostgroup definition linux-servers having localhost as its member, several service check definitions for localhost with check commands, and the command definitions specifying the exact command with arguments to execute for the checks. This is illustrated with the example PING check in the following figure:

This completes the basic understanding of how our localhost monitoring is built up from plain-text configuration.

Notifications

We would, as is the point of having monitoring systems, like to get alerted when something actually goes down. We don't want to keep watching the Icinga web interface, waiting for something to go down. Icinga provides a very generic and flexible way of sending out alerts. We can have any alerting script triggered when something goes wrong, which in turn may run commands for sending e-mails, SMS, Jabber messages, Twitter tweets, or practically anything that can be done from within a script. The default localhost monitoring setup has an e-mail alerting configuration.

The way these notifications work is that we define contact objects where we give the contact name, e-mail addresses, pager numbers, and other necessary details. These contact names are specified in the host/service templates or in the objects themselves. So, when Icinga detects that a host/service has gone down, it will use this contact object to pass the contact details to the alerting script. The contact object definition also has the host_notification_commands and service_notification_commands directives.
These directives specify the command objects that should be used to send out the notifications for that particular contact. The former directive is used when a host goes down, and the latter is used when a service goes down. The respective command objects are then looked up and the value of their command_line directive is executed. This command object is the same as the one we looked at previously for executing checks; the same command object type is also used to define notification commands. We can also define contact groups and specify them in the host/service object definitions to alert a bunch of contacts at the same time. We can also give a comma-separated list of contact names instead of a contact group.

Let's have a look at our current setup for the notification configuration. The host/service template objects have the admins contact group specified, whose definition is in contacts.cfg:

    define contactgroup {
        contactgroup_name    admins
        alias                Icinga Administrators
        members              icingaadmin
    }

The group has the icingaadmin member contact, which is again defined in the same file:

    define contact {
        contact_name    icingaadmin
        use             generic-contact
        alias           Icinga Admin
        email           your@email.com
    }

The contacts.cfg file has your e-mail address. The contact object inherits the generic-contact template contact object:

    define contact{
        name                             generic-contact
        service_notification_period      24x7
        host_notification_period         24x7
        service_notification_options     w,u,c,r,f,s
        host_notification_options        d,u,r,f,s
        service_notification_commands    notify-service-by-email
        host_notification_commands       notify-host-by-email
        register                         0
    }

This template object has the host_notification_commands and service_notification_commands directives defined as notify-host-by-email and notify-service-by-email respectively. These are commands similar to what we use in service definitions. These commands are defined in commands.cfg:

    define command {
        command_name    notify-host-by-email
        command_line    /usr/bin/printf "%b" "***** Icinga *****\n\nNotification Type: $NOTIFICATIONTYPE$\nHost: $HOSTNAME$\nState: $HOSTSTATE$\nAddress: $HOSTADDRESS$\nInfo: $HOSTOUTPUT$\n\nDate/Time: $LONGDATETIME$\n" | /bin/mail -s "** $NOTIFICATIONTYPE$ Host Alert: $HOSTNAME$ is $HOSTSTATE$ **" $CONTACTEMAIL$
    }

    define command {
        command_name    notify-service-by-email
        command_line    /usr/bin/printf "%b" "***** Icinga *****\n\nNotification Type: $NOTIFICATIONTYPE$\n\nService: $SERVICEDESC$\nHost: $HOSTALIAS$\nAddress: $HOSTADDRESS$\nState: $SERVICESTATE$\n\nDate/Time: $LONGDATETIME$\n\nAdditional Info:\n\n$SERVICEOUTPUT$\n" | /bin/mail -s "** $NOTIFICATIONTYPE$ Service Alert: $HOSTALIAS$/$SERVICEDESC$ is $SERVICESTATE$ **" $CONTACTEMAIL$
    }

These commands are eventually executed to send out e-mail notifications to the supplied e-mail addresses. Notice that the command_line uses the /bin/mail command to send e-mails, which is why we need a working SMTP server setup. Similarly, we could use any command/script path to send out custom alerts, such as SMS and Jabber, and we could change the above e-mail commands to adjust the content format to our requirements. The following figure illustrates the contact and notification configuration:

The correlation between hosts/services and contacts/notification commands is shown below:

Summary

In this article, we analyzed our current configuration for the Icinga setup which monitors a localhost. We can replicate this to monitor a number of other servers using the desired service checks.
We also looked at how the alerting configuration works to send out notifications when something goes down. Resources for Article: Further resources on this subject: Troubleshooting Nagios 3.0 [Article] Notifications and Events in Nagios 3.0-part1 [Article] BackTrack 4: Target Scoping [Article]
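To make the replication idea from the summary concrete, here is a hedged sketch of what adding a second server to this setup could look like. The hostname, address, and file name are invented for illustration; the sketch only reuses the templates, hostgroup, and check command discussed above.

    # /etc/icinga/objects/webserver01.cfg (hypothetical file)
    define host{
        use          linux-server          ; reuse the host template
        host_name    webserver01
        alias        Web Server 01
        address      192.0.2.10
        hostgroups   linux-servers         ; join the existing hostgroup
    }

    define service{
        use                  local-service
        host_name            webserver01
        service_description  PING
        check_command        check_ping!100.0,20%!500.0,60%
    }

After referencing such a file from icinga.cfg (or placing it in a directory already included via cfg_dir) and reloading Icinga, the new host and its PING check should appear alongside localhost.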


Derivatives Pricing

Packt
18 Nov 2013
10 min read
(For more resources related to this topic, see here.)

Derivatives are financial instruments which derive their value from (or are dependent on) the value of another product, called the underlying. The three basic types of derivatives are forward and futures contracts, swaps, and options. In this article we will focus on this latter class and show how basic option pricing models and some related problems can be handled in R. We will start with an overview of how to use the continuous Black-Scholes model and the binomial Cox-Ross-Rubinstein model in R, and then we will proceed with discussing the connection between these models. Furthermore, with the help of calculating and plotting the Greeks, we will show how to analyze the most important types of market risks that options involve. Finally, we will discuss what implied volatility means and will illustrate this phenomenon by plotting the volatility smile with the help of real market data.

The most important characteristic of options compared to futures or swaps is that you cannot be sure whether the transaction (buying or selling the underlying) will take place or not. This feature makes option pricing more complex and requires all models to make assumptions regarding the future price movements of the underlying product. The two models we are covering here differ in these assumptions: the Black-Scholes model works with a continuous process while the Cox-Ross-Rubinstein model works with a discrete stochastic process. However, the remaining assumptions are very similar and we will see that the results are close too.

The Black-Scholes model

The assumptions of the Black-Scholes model (Black and Scholes, 1973; see also Merton, 1973) are as follows:

The price of the underlying asset (S) follows geometric Brownian motion, dS = μS dt + σS dW. Here μ (drift) and σ (volatility) are constant parameters and W is a standard Wiener process.
The market is arbitrage-free.
The underlying is a stock paying no dividends.
Buying and (short) selling the underlying asset is possible in any (even fractional) amount.
There are no transaction costs.
The short-term interest rate (r) is known and constant over time.

The main result of the model is that under these assumptions, the price of a European call option (c) has a closed form:

c = S·N(d1) − X·e^(−r(T−t))·N(d2), where
d1 = [ln(S/X) + (r + σ²/2)(T−t)] / (σ·√(T−t)) and d2 = d1 − σ·√(T−t)

Here X is the strike price, T−t is the time to maturity of the option, and N denotes the cumulative distribution function of the standard normal distribution. The equation giving the price of the option is usually referred to as the Black-Scholes formula. It is easy to see from put-call parity that the price of a European put option (p) with the same parameters is given by:

p = X·e^(−r(T−t))·N(−d2) − S·N(−d1)

Now consider a call and a put option on a Google stock in June 2013 with a maturity of September 2013 (that is, with 3 months of time to maturity). Let us assume that the current price of the underlying stock is USD 900, the strike price is USD 950, the volatility of Google is 22%, and the risk-free rate is 2%. We will calculate the value of the call option with the GBSOption function from the fOptions package. Beyond the parameters already discussed, we also have to set the cost of carry (b); in the original Black-Scholes model (with the underlying paying no dividends) it equals the risk-free rate.
    > library(fOptions)
    > GBSOption(TypeFlag = "c", S = 900, X = 950, Time = 1/4, r = 0.02,
    +   sigma = 0.22, b = 0.02)

    Title:
     Black Scholes Option Valuation

    Call:
     GBSOption(TypeFlag = "c", S = 900, X = 950, Time = 1/4, r = 0.02,
         b = 0.02, sigma = 0.22)

    Parameters:
              Value:
     TypeFlag c
     S        900
     X        950
     Time     0.25
     r        0.02
     b        0.02
     sigma    0.22

    Option Price:
     21.79275

    Description:
     Tue Jun 25 12:54:41 2013

This prolonged output returns the passed parameters, with the result just below the Option Price label. Setting TypeFlag to p would compute the price of the put option; this time we are only interested in the result (found in the price slot; see the str of the object for more details) without the textual output:

    > GBSOption(TypeFlag = "p", S = 900, X = 950, Time = 1/4, r = 0.02, sigma = 0.22, b = 0.02)@price
    [1] 67.05461

We also have the choice to compute the preceding values with a more user-friendly calculator provided by the GUIDE package. Running the blackscholes() function would trigger a modal window with a form where we can enter the same parameters. Please note that the function uses the dividend yield instead of the cost of carry, which is zero in this case.

The Cox-Ross-Rubinstein model

The Cox-Ross-Rubinstein (CRR) model (Cox, Ross and Rubinstein, 1979) assumes that the price of the underlying asset follows a discrete binomial process. The price might go up or down in each period and hence changes according to a binomial tree illustrated in the following plot, where u and d are fixed multipliers measuring the price changes when it goes up and down. The important feature of the CRR model is that u = 1/d and the tree is recombining; that is, the price after two periods will be the same if it first goes up and then goes down or vice versa, as shown in the following figure:

To build a binomial tree, first we have to decide how many steps we are modeling (n); that is, how many steps the time to maturity of the option will be divided into. Equivalently, we can determine the length of one time step Δt (measured in years) on the tree: Δt = (T−t)/n. If we know the volatility (σ) of the underlying, the parameters u and d are determined according to the following formulas:

u = e^(σ·√Δt), and consequently d = 1/u = e^(−σ·√Δt)

When pricing an option in a binomial model, we need to determine the tree of the underlying until the maturity of the option. Then, having all the possible prices at maturity, we can calculate the corresponding possible option values, simply given by the payoff formulas max(S − X, 0) for a call and max(X − S, 0) for a put. To determine the option price with the binomial model, in each node we have to calculate the expected value of the next two possible option values and then discount it. The problem is that it is not trivial what expected return to use for discounting. The trick is that we calculate the expected value with a hypothetical probability, which enables us to discount with the risk-free rate. This probability is called the risk-neutral probability (pn) and can be determined as follows:

pn = (e^(r·Δt) − d) / (u − d)

The interpretation of the risk-neutral probability is quite plausible: if the probability that the underlying price goes up from any of the nodes were pn, then the expected return of the underlying would be the risk-free rate. Consequently, an expected value calculated with pn can be discounted by r, and the price of the option in any node of the tree is determined as:

g = e^(−r·Δt) · (pn·gu + (1 − pn)·gd)

In the preceding formula, g is the price of an option in general (it may be a call or a put as well) in a given node, and gu and gd are the values of this derivative in the two possible nodes one period later.
For demonstrating the CRR model in R, we will use the same parameters as in the case of the Black-Scholes formula. Hence, S=900, X=950, σ=22%, r=2%, b=2%, T-t=0.25. We also have to set n, the number of time steps on the binomial tree. For illustrative purposes, we will work with a 3-period model: > CRRBinomialTreeOption(TypeFlag = "ce", S = 900, X = 950, + Time = 1/4, r = 0.02, b = 0.02, sigma = 0.22, n = 3)@price [1] 20.33618 > CRRBinomialTreeOption(TypeFlag = "pe", S = 900, X = 950, + Time = 1/4, r = 0.02, b = 0.02, sigma = 0.22, n = 3)@price [1] 65.59803 It is worth observing that the option prices obtained from the binomial model are close to (but not exactly the same as) the Black-Scholes prices calculated earlier. Apart from the final result, that is, the current price of the option, we might be interested in the whole option tree as well: > CRRTree <- BinomialTreeOption(TypeFlag = "ce", S = 900, X = 950, + Time = 1/4, r = 0.02, b = 0.02, sigma = 0.22, n = 3) > BinomialTreePlot(CRRTree, dy = 1, xlab = "Time steps", + ylab = "Number of up steps", xlim = c(0,4)) > title(main = "Call Option Tree") Here we first computed a matrix by BinomialTreeOption with the given parameters and saved the result in CRRTree that was passed to the plot function with specified labels for both the x and y axis with the limits of the x axis set from 0 to 4, as shown in the following figure. The y-axis (number of up steps) shows how many times the underlying price has gone up in total. Down steps are defined as negative up steps. The European put option can be shown similarly by changing the TypeFlag to pe in the previous code: Connection between the two models After applying the two basic option pricing models, we give some theoretical background to them. We do not aim to give a detailed mathematical derivation, but we intend to emphasize (and then illustrate in R) the similarities of the two approaches. The financial idea behind the continuous and the binomial option pricing is the same: if we manage to hedge the option perfectly by holding the appropriate quantity of the underlying asset, it means we created a risk-free portfolio. Since the market is supposed to be arbitrage-free, the yield of a risk-free portfolio must equal the risk-free rate. One important observation is that the correct hedging ratio is holding underlying asset per option. Hence, the ratio is the partial derivative (or its discrete correspondent in the binomial model) of the option value with respect to the underlying price. This partial derivative is called the delta of the option. Another interesting connection between the two models is that the delta-hedging strategy and the related arbitrage-free argument yields the same pricing principle: the value of the derivative is the risk-neutral expected value of its future possible values, discounted by the risk-free rate. This principle is easily tractable on the binomial tree where we calculated the discounted expected values node by node; however, the continuous model has the same logic as well, even if the expected value is mathematically more complicated to compute. This is the reason why we gave only the final result of this argument, which was the Black-Scholes formula. Now we know that the two models have the same pricing principles and ideas (delta-hedging and risk-neutral valuation), but we also observed that their numerical results are not equal. The reason is that the stochastic processes assumed to describe the price movements of the underlying asset are not identical. 
Nevertheless, they are very similar; if we determine the value of u and d from the volatility parameter as we did it in The Cox-Ross-Rubinstein model section, the binomial process approximates the geometric Brownian motion. Consequently, the option price of the binomial model converges to that of the Black-Scholes model if we increase the number of time steps (or equivalently, decrease the length of the steps). To illustrate this relationship, we will compute the option price in the binomial model with increasing numbers of time steps. In the following figure, we compare the results with the Black-Scholes price of the option: The plot was generated by a loop running N from 1 to 200 to compute CRRBinomialTreeOption with fixed parameters: > prices <- sapply(1:200, function(n) { + CRRBinomialTreeOption(TypeFlag = "ce", S = 900, X = 950, + Time = 1/4, r = 0.02, b = 0.02, sigma = 0.22, n = n)@price + }) Now the prices variable holds 200 computed values: > str(prices) num [1:200] 26.9 24.9 20.3 23.9 20.4... Let us also compute the option with the generalized Black-Scholes option: > price <- GBSOption(TypeFlag = "c", S = 900, X = 950, Time = 1/4, r = 0.02, sigma = 0.22, b = 0.02)@price And show the prices in a joint plot with the GBS option rendered in red: > plot(1:200, prices, type='l', xlab = 'Number of steps', + ylab = 'Prices') > abline(h = price, col ='red') > legend("bottomright", legend = c('CRR-price', 'BS-price'), + col = c('black', 'red'), pch = 19)
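As a quick sanity check on the two Black-Scholes prices computed earlier, we can also verify put-call parity numerically: with b = r, the call and put prices should differ by S − X·e^(−r(T−t)). The short sketch below only reuses the GBSOption call already shown in this article.

    > c0 <- GBSOption(TypeFlag = "c", S = 900, X = 950, Time = 1/4,
    +   r = 0.02, b = 0.02, sigma = 0.22)@price
    > p0 <- GBSOption(TypeFlag = "p", S = 900, X = 950, Time = 1/4,
    +   r = 0.02, b = 0.02, sigma = 0.22)@price
    > c0 - p0
    [1] -45.26186
    > 900 - 950 * exp(-0.02 * 1/4)
    [1] -45.26186

The two quantities agree, as parity requires.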


Getting Started with Ansible

Packt
18 Nov 2013
8 min read
(For more resources related to this topic, see here.)

First steps with Ansible

Ansible modules take arguments in key-value pairs that look similar to key=value, perform a job on the remote server, and return information about the job as JSON. The key-value pairs allow the module to know what to do when requested. The data returned from the module lets Ansible know if anything changed or if any variables should be changed or set afterwards. Modules are usually run within playbooks, as this lets you chain many together, but they can also be used on the command line.

Previously, we used the ping command to check that Ansible had been correctly set up and was able to access the configured node. The ping module only checks that the core of Ansible is able to run on the remote machine, but effectively does nothing. A slightly more useful module is called setup. This module connects to the configured node, gathers data about the system, and then returns those values. This isn't particularly handy for us while running from the command line; however, in a playbook you can use the gathered values later in other modules.

To run Ansible from the command line, you need to pass two things, though usually three. First is a host pattern to match the machine that you want to apply the module to. Second, you need to provide the name of the module that you wish to run and, optionally, any arguments that you wish to give to the module. For the host pattern, you can use a group name, a machine name, a glob, or a tilde (~) followed by a regular expression matching hostnames; to symbolize all of these, you can either use the word all or simply *. To run the setup module on one of your nodes, you need the following command line:

    $ ansible machinename -u root -k -m setup

The setup module will then connect to the machine and give you a number of useful facts back. All the facts provided by the setup module itself are prepended with ansible_ to differentiate them from variables. The following are the most common values you will use, with an example value and a short description of each field:

ansible_architecture (example: x86_64): The architecture of the managed machine
ansible_distribution (example: CentOS): The Linux or Unix distribution on the managed machine
ansible_distribution_version (example: 6.3): The version of the preceding distribution
ansible_domain (example: example.com): The domain name part of the server's hostname
ansible_fqdn (example: machinename.example.com): The fully qualified domain name of the managed machine
ansible_interfaces (example: ["lo", "eth0"]): A list of all the interfaces the machine has, including the loopback interface
ansible_kernel (example: 2.6.32-279.el6.x86_64): The kernel version installed on the managed machine
ansible_memtotal_mb (example: 996): The total memory in megabytes available on the managed machine
ansible_processor_count (example: 1): The total number of CPUs available on the managed machine
ansible_virtualization_role (example: guest): Whether the machine is a guest or a host machine
ansible_virtualization_type (example: kvm): The type of virtualization set up on the managed machine

These variables are gathered using Python from the host system; if you have facter or ohai installed on the remote node, the setup module will execute them and return their data as well. As with other facts, ohai facts are prepended with ohai_ and facter facts with facter_. While the setup module doesn't appear to be too useful on the command line, once you start writing playbooks it will come into its own.
If all the modules in Ansible do as little as the setup and the ping module, we will not be able to change anything on the remote machine. Almost all of the other modules that Ansible provides, such as the file module, allow us to actually configure the remote machine. The file module can be called with a single path argument; this will cause it to return information about the file in question. If you give it more arguments, it will try and alter the file's attributes and tell you if it has changed anything. Ansible modules will almost always tell you if they have changed anything, which becomes more important when you are writing playbooks. You can call the file module, as shown in the following command, to see details about /etc/fstab: $ ansible machinename -u root -k -m file -a 'path=/etc/fstab' The preceding command should elicit a response like the following code: machinename | success >> { "changed": false, "group": "root", "mode": "0644", "owner": "root", "path": "/etc/fstab", "size": 779, "state": "file" } Or like the following command to create a new test directory in /tmp: $ ansible machinename -u root -k -m file -a 'path=/tmp/test state=directory mode=0700 owner=root' The preceding command should return something like the following code: machinename | success >> { "changed": true, "group": "root", "mode": "0700", "owner": "root", "path": "/tmp/test", "size": 4096, "state": "directory" } The second command will have the changed variable set to true, if the directory doesn't exist or has different attributes. When run a second time, the value of changed should be false indicating that no changes were required. There are several modules that accept similar arguments to the file module, and one such example is the copy module. The copy module takes a file on the controller machine, copies it to the managed machine, and sets the attributes as required. For example, to copy the /etc/fstabfile to /tmp on the managed machine, you will use the following command: $ ansible machinename -m copy -a 'path=/tmp/fstab mode=0700 owner=root' The preceding command, when run the first time, should return something like the following code: machinename | success >> { "changed": true, "dest": "/tmp/fstab", "group": "root", "md5sum": "fe9304aa7b683f58609ec7d3ee9eea2f", "mode": "0700", "owner": "root", "size": 637, "src": "/root/.ansible/tmp/ansible-1374060150.96- 77605185106940/source", "state": "file" } There is also a module called command that will run any arbitrary command on the managed machine. This lets you configure it with any arbitrary command, such as a preprovided installer or a self-written script; it is also useful for rebooting machines. Please note that this module does not run the command within the shell, so you cannot perform redirection, use pipes, and expand shell variables or background commands. Ansible modules strive to prevent changes being made when they are not required. This is referred to as idempotency and can make running commands against multiple servers much faster. Unfortunately, Ansible cannot know if your command has changed anything or not, so to help it be more idempotent you have to give it some help. It can do this either via the creates or the removes argument. If you give a creates argument, the command will not be run if the filename argument exists. The opposite is true of the removes argument; if the filename exists, the command will be run. 
You run the command as follows: $ ansible machinename -m command -a 'rm -rf /tmp/testing removes=/tmp/testing' If there is no file or directory named /tmp/testing, the command output will indicate that it was skipped, as follows: machinename | skipped Otherwise, if the file did exist, it will look as follows: ansibletest | success | rc=0 >> Often it is better to use another module in place of the command module. Other modules offer more options and can better capture the problem domain they work in. For example, it would be much less work for Ansible and also the person writing the configurations to use the file module in this instance, since the file module will recursively delete something if the state is set to absent. So, this command would be equivalent to the following command: $ ansible machinename -m file -a 'path=/tmp/testing state=absent' If you need to use features usually available in a shell while running your command, you will need the shell module. This way you can use redirection, pipes, or job backgrounding. You can pick which shell to use with the executable argument. However, when you write the code, it also supports the creates argument but does not support the removes argument. You can use the shell module as follows: $ ansible machinename -m shell -a '/opt/fancyapp/bin/installer.sh > /var/log/fancyappinstall.log creates=/var/log/fancyappinstall.log' Summary In this article, we have covered which installation type to choose, installing Ansible, and how to build an inventory file to reflect your environment. After this, we saw how to use Ansible modules in an ad hoc style for simple tasks. Finally, we discussed how to learn which modules are available on your system and how to use the command line to get instructions for using a module. Resources for Article: Further resources on this subject: Configuring Manage Out to DirectAccess Clients [Article] Creating and configuring a basic mobile application [Article] Deploying Applications and Software Updates on Microsoft System Center 2012 Configuration Manager [Article]
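Since the summary notes that these modules really come into their own in playbooks, here is a minimal, hedged playbook sketch showing how the ad hoc file, copy, and shell invocations from this article could be expressed as tasks. The file name, play scope, and task names are illustrative assumptions, not part of the original text.

    ---
    # site.yml - a hypothetical example, not from the original article
    - hosts: all
      user: root
      tasks:
        # Same effect as the ad hoc file module call shown earlier
        - name: create a test directory
          file: path=/tmp/test state=directory mode=0700 owner=root

        # Copy /etc/fstab to /tmp, as in the ad hoc copy example
        - name: copy fstab for inspection
          copy: src=/etc/fstab dest=/tmp/fstab mode=0700 owner=root

        # Run the installer only if its log file does not exist yet (idempotent via creates)
        - name: run the fancyapp installer
          shell: /opt/fancyapp/bin/installer.sh > /var/log/fancyappinstall.log creates=/var/log/fancyappinstall.log

Running it with ansible-playbook site.yml would apply all three tasks to every host in the inventory, skipping the ones that require no change.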


Application Performance

Packt
15 Nov 2013
8 min read
(For more resources related to this topic, see here.)

Data sizing

The cost of abstractions in terms of data size plays an important role. For example, whether or not a data element can fit into a processor cache line depends directly upon its size. On a Linux system, we can find out the cache line size and other parameters by inspecting the values in the files under /sys/devices/system/cpu/cpu0/cache/.

Another concern we generally find with data sizing is how much data we are holding at a time in the heap. GC has direct consequences on the application's performance. While processing data, often we do not really need all the data we hold on to. Consider the example of generating a summary report of sold items for a certain period (months) of time. Once the sub-period (month-wise) summary data is computed, we do not need the item details anymore, hence it's better to remove the unwanted data while we add the summaries. This is shown in the following example:

    (defn summarize [daily-data]   ; daily-data is a map
      (let [s (items-summary (:items daily-data))]
        (-> daily-data
            (select-keys [:digest :invoices])   ; we keep only the required key/val pairs
            (assoc :summary s))))

    ;; now inside report generation code
    (-> (fetch-items period-from period-to :interval-day)
        (map summarize)
        generate-report)

Had we not used select-keys in the preceding summarize function, it would have returned a map with the extra summary data along with all the other existing keys in the map. Now, such a thing is often combined with lazy sequences. So, for this scheme to work, it is important not to hold on to the head of the lazy sequence.

Reduced serialization

An I/O channel is a common source of latency. The perils of over-serialization cannot be overstated. Whether we read or write data from a data source over an I/O channel, all of that data needs to be prepared, encoded, serialized, de-serialized, and parsed before being worked on. It is better for every step to have less data involved in order to lower the overhead. Where there is no I/O involved, such as in-process communication, it generally makes no sense to serialize.

A common example of over-serialization is encountered while working with SQL databases. Often, there are common SQL query functions that fetch all columns of a table or a relation; they are called by various functions that implement the business logic. Fetching data that we do not need is wasteful and detrimental to performance for the same reason discussed in the preceding paragraph. While it may seem more work to write one SQL statement and one database query function for each use case, it pays off with better performance. Code that uses NoSQL databases is also subject to this anti-pattern; we have to take care to fetch only what we need, even though it may lead to additional code.

There's a pitfall to be aware of when reducing serialization. Often, some information needs to be inferred in the absence of the serialized data. In such cases, where some of the serialization is dropped so that we can infer other information, we must compare the cost of inference versus the serialization overhead. The comparison may not necessarily be done per operation, but rather on the whole. Then, we can consider the resources we can allocate in order to achieve capacities for various parts of our systems.

Chunking to reduce memory pressure

What happens when we slurp a text file regardless of its size? The contents of the entire file will sit in the JVM heap.
If the file is larger than the JVM heap capacity, the JVM will terminate by throwing OutOfMemoryError. If the file is large but not large enough to force the JVM into an OOM error, it leaves a relatively smaller JVM heap space for other operations in the application to continue. A similar situation takes place when we carry out any operation disregarding the JVM heap capacity. Fortunately, this can be fixed by reading data in chunks and processing them before reading further. Sizing for file/network operations Let us take the example of a data ingestion process where a semi-automated job uploads large Comma Separated File (CSV) files via the File Transfer Protocol (FTP) to a file server, and another automated job, which is written in Clojure, runs periodically to detect the arrival of files via the Network File System (NFS). After detecting a new file, the Clojure program processes the file, updates the result in a database, and archives the file. The program detects and processes several files concurrently. The size of the CSV files is not known in advance, but the format is predefined. As per the preceding description, one potential problem is that since there could be multiple files being processed concurrently, how do we distribute the JVM heap among the concurrent file-processing jobs? Another issue could be that the operating system imposes a limit on how many files can be opened at a time; on Unix-like systems, you can use the ulimit command to extend the limit. We cannot arbitrarily slurp the CSV file contents—we must limit each job to a certain amount of memory and also limit the number of jobs that can run concurrently. At the same time, we cannot read a very small number of rows from a file at a time because this may impact performance. (def ^:const K 1024) ;; create the buffered reader using custom 128K buffer-size (-> filename java.io.FileInputStream java.io.InputStreamReader (java.io.BufferedReader (* K 128))) Fortunately, we can specify the buffer size when reading from a file or even from a network stream so as to tune the memory usage and performance as appropriate. In the preceding code example, we explicitly set the buffer size of the reader to facilitate the same. Sizing for JDBC query results Java's interface standard for SQL databases, JDBC (which is technically not an acronym), supports fetch-size for fetching query results via JDBC drivers. The default fetch size depends on the JDBC driver. Most JDBC drivers keep a low default value so as to avoid high memory usage and attain internal performance optimization. A notable exception to this norm is the MySQL JDBC driver that completely fetches and stores all rows in memory by default. (require '[clojure.java.jdbc :as jdbc]) ;; using prepare-statement directly (we rarely use it directly, shown just for demo) (with-open [stmt (jdbc/prepare-statement conn sql :fetch-size 1000 max-rows 9000) rset (resultset-seq (.executeQuery stmt))] (vec rset)) ;; using query (query db [{:fetch-size 1000} "SELECT empno FROM emp WHERE country=?" 1]) When using the Clojure Contrib library java.jdbc (https://github.com/clojure/java.jdbc as of Version 0.3.0), the fetch size can be set while preparing a statement as shown in the preceding example. The fetch size does not guarantee proportional latency; however, it can be used safely for memory sizing. We must test any performance-impacting latency changes due to fetch size at different loads and use cases for the particular database and JDBC driver. 
Besides fetch-size, we can also pass the max-rowsargument to limit the maximum rows to be returned by a query. However, this implies that the extra rows will be truncated from the result, not that the database will internally limit the number of rows to realize. Resource pooling There are several types of resources on the JVM that are rather expensive to initialize. Examples are HTTP connections, execution threads, JDBC connections, and so on. The Java API recognizes such resources and has built-in support for creating a pool of some of those resources so that the consumer code borrows a resource from a pool when required and at the end of the job simply returns it to the pool. Java's thread pools and JDBC data sources are prominent examples. The idea is to preserve the initialized objects for reuse. Even when Java does not support pooling of a resource type directly, you can always create a pool abstraction around custom expensive resources. The pooling technique is common in I/O activities, but it can be equally applicable to non-I/O purposes where the initialization cost is high. Summary Designing an application for performance should be based on the use cases and patterns of anticipated system load and behavior. Measuring performance is extremely important to guide optimization in the process. Fortunately, there are several well-known optimization patterns to tap into, such as resource pooling, and data sizing. Thus we analysed the performance optimization using these patterns. Resources for Article: Further resources on this subject: Improving Performance with Parallel Programming [Article] Debugging Java Programs using JDB [Article] IBM WebSphere Application Server Security: A Threefold View [Article]
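To tie the chunking and buffer-sizing ideas together, here is a small, hedged sketch of processing a large file line by line with the 128K buffered reader shown earlier, so that the whole file never has to sit in the heap at once. The process-line function is a placeholder for whatever per-record work an application needs.

    (def ^:const K 1024)

    (defn process-large-file
      "Reads filename in a streaming fashion using a 128K buffer and applies
      process-line to each line, so only a small window of data is in memory."
      [filename process-line]
      (with-open [rdr (-> filename
                          java.io.FileInputStream.
                          java.io.InputStreamReader.
                          (java.io.BufferedReader. (* K 128)))]
        (doseq [line (line-seq rdr)]
          (process-line line))))

Because line-seq is lazy and doseq does not retain the head of the sequence, each chunk becomes eligible for garbage collection as soon as it has been processed.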

Building a To-do List with Ajax

Packt
08 Nov 2013
8 min read
(For more resources related to this topic, see here.) Creating and migrating our to-do list's database As you know, migrations are very helpful to control development steps. We'll use migrations in this article. To create our first migration, type the following command: php artisan migrate:make create_todos_table --table=todos --create When you run this command, Artisan will generate a migration for a database table named todos. Now we should edit the migration file for the necessary database table columns. When you open the migrations folder in app/database/ with a file manager, you will see the migration file under it. Let's open and edit the file as follows: <?php use Illuminate\Database\Schema\Blueprint; use Illuminate\Database\Migrations\Migration; class CreateTodosTable extends Migration { /** * Run the migrations. * * @return void */ public function up() { Schema::create('todos', function(Blueprint $table){ $table->create(); $table->increments("id"); $table->string("title", 255); $table->enum('status', array('0', '1'))->default('0'); $table->timestamps(); }); } /** * Reverse the migrations. * * @return void */ public function down() { Schema::drop("todos"); } } To build a simple to-do list, we need five columns: The id column will store ID numbers of to-do tasks The title column will store a to-do task's title The status column will store statuses of the tasks The created_at and updated_at columns will store the created and updated dates of tasks If you write $table->timestamps() in the migration file, Laravel's migration class automatically creates the created_at and updated_at columns. As you know, to apply migrations, we should run the following command: php artisan migrate After the command is run, if you check your database, you will see that our todos table and columns have been created. Now we need to write our model. Creating a todos model To create a model, you should open the app/models/ directory with your file manager. Create a file named Todo.php under the directory and write the following code: <?php class Todo extends Eloquent { protected $table = 'todos'; } Let's examine the Todo.php file. As you see, our Todo class extends Eloquent, which is the ORM (Object Relational Mapper) database class of Laravel. The protected $table = 'todos'; line tells Eloquent our model's table name. If we don't set the table variable, Eloquent uses the plural version of the lowercase model name as the table name, so this isn't technically required. Now, our application needs a template file, so let's create it. Creating the template Laravel uses a template engine called Blade for static and application template files. Laravel loads the template files from the app/views/ directory, so we need to create our first template under this directory. Create a file with the name index.blade.php.
The file contains the following code: <html> <head> <title>To-do List Application</title> <link rel="stylesheet" href="assets/css/style.css"> <!--[if lt IE 9]><script src="//html5shim.googlecode.com/svn/trunk/html5.js"></script><![endif]--> </head> <body> <div class="container"> <section id="data_section" class="todo"> <ul class="todo-controls"> <li><img src="/assets/img/add.png" width="14px" onClick="show_form('add_task');" /></li> </ul> <ul id="task_list" class="todo-list"> @foreach($todos as $todo) @if($todo->status) <li id="{{$todo->id}}" class="done"> <a href="#" class="toggle"></a> <span id="span_{{$todo->id}}">{{$todo->title}}</span> <a href="#" onClick="delete_task('{{$todo->id}}');" class="icon-delete">Delete</a> <a href="#" onClick="edit_task('{{$todo->id}}', '{{$todo->title}}');" class="icon-edit">Edit</a></li> @else <li id="{{$todo->id}}"><a href="#" onClick="task_done('{{$todo->id}}');" class="toggle"></a> <span id="span_{{$todo->id}}">{{$todo->title}}</span> <a href="#" onClick="delete_task('{{$todo->id}}');" class="icon-delete">Delete</a> <a href="#" onClick="edit_task('{{$todo->id}}','{{$todo->title}}');" class="icon-edit">Edit</a></li> @endif @endforeach </ul> </section> <section id="form_section"> <form id="add_task" class="todo" style="display:none"> <input id="task_title" type="text" name="title" placeholder="Enter a task name" value=""/> <button name="submit">Add Task</button> </form> <form id="edit_task" class="todo" style="display:none"> <input id="edit_task_id" type="hidden" value="" /> <input id="edit_task_title" type="text" name="title" value="" /> <button name="submit">Edit Task</button> </form> </section> </div> <script src="http://code.jquery.com/jquery-latest.min.js" type="text/javascript"></script> <script src="assets/js/todo.js" type="text/javascript"></script> </body> </html> The preceding code may be difficult to understand if you're writing a Blade template for the first time, so we'll try to examine it. You see a foreach loop in the file. This statement loops over our todo records. We will provide you with more knowledge about it when we are creating our controller in this article. The if and else statements separate finished and waiting tasks, and we use them to style the two kinds of task differently. We need one more template file for appending new records to the task list on the fly. Create a file with the name ajaxData.blade.php under the app/views/ folder. The file contains the following code: @foreach($todos as $todo) <li id="{{$todo->id}}"><a href="#" onClick="task_done('{{$todo->id}}');" class="toggle"></a> <span id="span_{{$todo->id}}">{{$todo->title}}</span> <a href="#" onClick="delete_task('{{$todo->id}}');" class="icon-delete">Delete</a> <a href="#" onClick="edit_task('{{$todo->id}}','{{$todo->title}}');" class="icon-edit">Edit</a></li> @endforeach Also, you see the /assets/ directory in the source path of static files. When you look at the app/views directory, there is no directory named assets. Laravel separates the system and public files. Publicly accessible files stay under the public folder in the root. So you should create a directory under your public folder for asset files. We recommend working with these types of organized folders for developing tidy and easy-to-read code. Finally, you see that we are calling jQuery from its main website. We also recommend this way for getting the latest, stable jQuery in your application. You can style your application as you wish, hence we'll not examine styling code here.
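Before we move on to the assets, one note on the jQuery include above: because the template pulls jQuery straight from code.jquery.com, the page loses all of its Ajax behavior if that CDN is unreachable. The following is a minimal, optional sketch of a fallback; the local path /assets/js/jquery.min.js is an assumption for illustration, not part of the article's code, so adjust it to wherever you keep a downloaded copy of jQuery.
<script type="text/javascript">
// If the CDN script failed to load, window.jQuery is undefined,
// so fall back to a local copy (hypothetical path under public/assets/js/).
window.jQuery || document.write('<script src="/assets/js/jquery.min.js" type="text/javascript"><\/script>');
</script>
Place this snippet directly after the CDN script tag and before todo.js, so that jQuery is always available to the handlers that follow.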
We are putting our style.css files under /public/assets/css/. For performing Ajax requests, we need JavaScript code. This code posts our add_task and edit_task forms and updates the task list when our tasks are completed. Let's create a JavaScript file with the name todo.js in /public/assets/js/. The file contains the following code: function task_done(id){ $.get("/done/"+id, function(data) { if(data=="OK"){ $("#"+id).addClass("done"); } }); } function delete_task(id){ $.get("/delete/"+id, function(data) { if(data=="OK"){ var target = $("#"+id); target.hide('slow', function(){ target.remove(); }); } }); } function show_form(form_id){ $("form").hide(); $('#'+form_id).show("slow"); } function edit_task(id,title){ $("#edit_task_id").val(id); $("#edit_task_title").val(title); show_form('edit_task'); } $('#add_task').submit(function(event) { /* stop form from submitting normally */ event.preventDefault(); var title = $('#task_title').val(); if(title){ //ajax post the form $.post("/add", {title: title}).done(function(data) { $('#add_task').hide("slow"); $("#task_list").append(data); }); } else{ alert("Please give a title to task"); } }); $('#edit_task').submit(function(event) { /* stop form from submitting normally */ event.preventDefault(); var task_id = $('#edit_task_id').val(); var title = $('#edit_task_title').val(); var current_title = $("#span_"+task_id).text(); var new_title = current_title.replace(current_title, title); if(title){ //ajax post the form $.post("/update/"+task_id, {title: title}).done(function(data) { $('#edit_task').hide("slow"); $("#span_"+task_id).text(new_title); }); } else{ alert("Please give a title to task"); } }); Let's examine the JavaScript file.
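One detail to point out while examining it: the $.get and $.post calls above only define success callbacks. As a small, hedged addition that is not part of the original listing, jQuery's .fail() handler can surface server-side errors; for example, the add_task handler could be written like this, against the same /add route used above:
$('#add_task').submit(function(event) {
    event.preventDefault();
    var title = $('#task_title').val();
    if(!title){
        alert("Please give a title to task");
        return;
    }
    // .done() runs on a successful response, .fail() when the server returns an error status
    $.post("/add", {title: title})
        .done(function(data) {
            $('#add_task').hide("slow");
            $("#task_list").append(data);
        })
        .fail(function() {
            alert("The task could not be saved. Please try again.");
        });
});
The same .fail() pattern can be applied to the task_done, delete_task, and edit_task requests.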

Dynamic POM

Packt
06 Nov 2013
9 min read
(For more resources related to this topic, see here.) Case study Our project meets the following requirements: It depends on org.codehaus.jedi:jedi-XXX:3.0.5. Actually, the XXX is related to the JDK version, that is, either jdk5 or jdk6. The project is built and run on three different environments: PRODuction, UAT, and DEVelopment. The underlying database differs according to the environment: PostGre in PROD, MySQL in UAT, and HSQLDB in DEV. Besides, the connection is set in a Spring file, which can be spring-PROD.xml, spring-UAT.xml, or spring-DEV.xml, all being in the same src/main/resource folder. The first bullet point can be easily answered using a jdk.version property. The dependency is then declared as follows: <dependency> <groupId>org.codehaus.jedi</groupId> <!--For this dependency two artifacts are available, one for jdk5 and a second for jdk6--> <artifactId>jedi-${jdk.version}</artifactId> <version>${jedi.version}</version> </dependency> In turn, the fourth bullet point is resolved by specifying a resource folder: <resources> <resource> <directory>src/main/resource</directory> <!--include the XML files corresponding to the environment: PROD, UAT, DEV. Here, the only XML file is a Spring configuration one. There is one file per environment--> <includes> <include> **/*-${environment}.xml </include> </includes> </resource> </resources> Then, we will have to run Maven, adding the property values, using one of the following commands: mvn clean install -Denvironment=PROD -Djdk.version=jdk6 mvn clean install -Denvironment=DEV -Djdk.version=jdk5 By the way, we could have merged the three XML files into a single one, setting the content dynamically thanks to Maven's filter tag and resource-filtering mechanism. The next point to solve is the dependency on the actual JDBC drivers. A quick and dirty solution A quick and dirty solution is to mention the three dependencies: <!--PROD --> <dependency> <groupId>postgresql</groupId> <artifactId>postgresql</artifactId> <version>9.1-901.jdbc4</version> <scope>runtime</scope> </dependency> <!--UAT--> <dependency> <groupId>mysql</groupId> <artifactId>mysql-connector-java</artifactId> <version>5.1.25</version> <scope>runtime</scope> </dependency> <!--DEV--> <dependency> <groupId>org.hsqldb</groupId> <artifactId>hsqldb</artifactId> <version>2.3.0</version> <scope>runtime</scope> </dependency> Anyway, this idea has drawbacks. Even though only the actual driver (org.postgresql.Driver, com.mysql.jdbc.Driver, or org.hsqldb.jdbcDriver, as described in the Spring files) will be instantiated at runtime, the three JARs will be transitively transmitted, and possibly packaged, in a further distribution. You may argue that we can work around this problem in most situations by confining the scope to provided, and embed the actual dependency by some other means (such as relying on an artifact embedded in an application server); however, even then you should concede the dirtiness of the process. A clean solution Better solutions consist of using dynamic POMs. Here, too, there will be a gradient of more or less clean solutions. Once more, as a disclaimer, beware of dynamic POMs! Dynamic POMs are a powerful and tricky feature of Maven. Moreover, modern IDEs manage dynamic POMs better than a few years ago. Yet, their use may be dangerous for newcomers: as with generated code and AOP for instance, what you write is not what you execute, which may result in strange or unexpected behaviors, needing long hours of debugging and an aspirin tablet for the headache.
This is why you have to carefully weigh their value for your project before introducing them. With properties in command lines As a first step, let's define the dependency as follows: <!-- The dependency to the effective JDBC drivers: PostGre, MySQL or HSQLDB--> <dependency> <groupId>${effective.groupId}</groupId> <artifactId> ${effective.artifactId} </artifactId> <version>${effective.version}</version> </dependency> As you can see, the dependency is parameterized thanks to three properties: effective.groupId, effective.artifactId, and effective.version. Then, in the same way we added the -Djdk.version property earlier, we will have to add those properties on the command line, for example: mvn clean install -Denvironment=PROD -Djdk.version=jdk6 -Deffective.groupId=postgresql -Deffective.artifactId=postgresql -Deffective.version=9.1-901.jdbc4 Or, for the DEV environment: mvn clean install -Denvironment=DEV -Djdk.version=jdk5 -Deffective.groupId=org.hsqldb -Deffective.artifactId=hsqldb -Deffective.version=2.3.0 Then, the effective POM will be reconstructed by Maven, and include the right dependencies: <dependencies> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-core</artifactId> <version>3.2.3.RELEASE</version> <scope>compile</scope> </dependency> <dependency> <groupId>org.codehaus.jedi</groupId> <artifactId>jedi-jdk6</artifactId> <version>3.0.5</version> <scope>compile</scope> </dependency> <dependency> <groupId>postgresql</groupId> <artifactId>postgresql</artifactId> <version>9.1-901.jdbc4</version> <scope>compile</scope> </dependency> </dependencies> Yet, as you can imagine, writing long command lines like the preceding one increases the risk of human error, especially since such lines are "write-only". These pitfalls are solved by profiles. Profiles and settings As an easy improvement, you can define profiles within the POM itself. The profiles gather the information you previously wrote on the command line, for example: <profile> <!-- The profile PROD gathers the properties related to the environment PROD--> <id>PROD</id> <properties> <environment>PROD</environment> <effective.groupId> postgresql </effective.groupId> <effective.artifactId> postgresql </effective.artifactId> <effective.version> 9.1-901.jdbc4 </effective.version> <jdk.version>jdk6</jdk.version> </properties> <activation> <!-- This profile is activated by default: in other terms, if no other profile is activated, then PROD will be--> <activeByDefault>true</activeByDefault> </activation> </profile> Or: <profile> <!-- The profile DEV gathers the properties related to the environment DEV--> <id>DEV</id> <properties> <environment>DEV</environment> <effective.groupId> org.hsqldb </effective.groupId> <effective.artifactId> hsqldb </effective.artifactId> <effective.version> 2.3.0 </effective.version> <jdk.version>jdk5</jdk.version> </properties> <activation> <!-- The profile DEV will be activated if, and only if, it is explicitly called--> <activeByDefault>false</activeByDefault> </activation> </profile> The corresponding command lines will be shorter: mvn clean install (equivalent to mvn clean install -PPROD) Or: mvn clean install -PDEV You can list several profiles in the same POM, and one, many, or all of them may be enabled or disabled. Nonetheless, multiplying profiles and properties hurts readability. Moreover, if your team has 20 developers, then each developer will have to deal with 20 blocks of profiles, out of which 19 are completely irrelevant to them.
So, in order to make things smoother, a best practice is to extract the profiles and insert them into the personal settings.xml files, with the same information: <?xml version="1.0" encoding="UTF-8"?> <settings xmlns="http://maven.apache.org/SETTINGS/1.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd"> <profiles> <profile> <id>PROD</id> <properties> <environment>PROD</environment> <effective.groupId> postgresql </effective.groupId> <effective.artifactId> postgresql </effective.artifactId> <effective.version> 9.1-901.jdbc4 </effective.version> <jdk.version>jdk6</jdk.version> </properties> <activation> <activeByDefault>true</activeByDefault> </activation> </profile> </profiles> </settings> Dynamic POMs – conclusion As a conclusion, the best practice concerning dynamic POMs is to parameterize the needed fields within the POM. Then, in order of priority: Set an enabled profile and the corresponding properties within the settings.xml: mvn <goals> [-f <pom_Without_Profiles.xml> ] [-s <settings_With_Enabled_Profile.xml>] Otherwise, include profiles and properties within the POM: mvn <goals> [-f <pom_With_Profiles.xml> ] [-P<actual_Profile> ] [-s <settings_Without_Profile.xml>] Otherwise, launch Maven with the properties on the command line: mvn <goals> [-f <pom_Without_Profiles.xml> ] [-s <settings_Without_Profile.xml>] -D<property_1>=<value_1> -D<property_2>=<value_2> (...) -D<property_n>=<value_n> Summary In this article we learned about dynamic POMs. We saw a case study along with its quick and dirty solution and a cleaner alternative. Resources for Article: Further resources on this subject: Integrating Scala, Groovy, and Flex Development with Apache Maven [Article] Creating a Camel project (Simple) [Article] Using Hive non-interactively (Simple) [Article]

Downloading PyroCMS and its prerequisites

Packt
31 Oct 2013
6 min read
(For more resources related to this topic, see here.) Getting started PyroCMS, like many other content management systems including WordPress, Typo3, or Drupal, comes with a pre-developed installation process. For PyroCMS, this installation process is easy to use and comes with a number of helpful hints just in case you hit a snag while installing the system. If, for example, your system files don't have the correct permissions profile (writeable versus write-protected), the PyroCMS installer will help you, along with all the other installation details, such as checking for required software and taking care of file permissions. Before you can install PyroCMS (the version used for examples in this article is 2.2) on a server, there are a number of server requirements that need to be met. If you aren't sure if these requirements have been met, the PyroCMS installer will check to make sure they are available before installation is complete. Following are the software requirements for a server before PyroCMS can be installed: HTTP Web Server MySQL 5.x or higher PHP 5.2.x or higher GD2 cURL Among these requirements, web developers interested in PyroCMS will be glad to know that it is built on CodeIgniter, a popular MVC patterned PHP framework. I recommend that the developers looking to use PyroCMS should also have working knowledge of CodeIgniter and the MVC programming pattern. Learn more about CodeIgniter and see their excellent system documentation online at http://ellislab.com/codeigniter. CodeIgniter If you haven't explored the Model-View-Controller (MVC) programming pattern, you'll want to brush up before you start developing for PyroCMS. The primary reason that CodeIgniter is a good framework for a CMS is that it is a well-documented framework that, when leveraged in the way PyroCMS has done, gives developers power over how long a project will take to build and the quality with which it is built. Add-on modules for PyroCMS, for example, follow the MVC method, a programming pattern that saves developers time and keeps their code dry and portable. Dry and portable programming are two different concepts. Dry is an acronym for "don't repeat yourself" code. Portable code is like "plug-and-play" code—write it once so that it can be shared with other projects and used quickly. HTTP web server Out of the PyroCMS software requirements, it is obvious, you can guess, that a good HTTP web server platform will be needed. Luckily, PyroCMS can run on a variety of web server platforms, including the following: Abyss Web Server Apache 2.x Nginx Uniform Server Zend Community Server If you are new to web hosting and haven't worked with web hosting software before, or this is your first time installing PyroCMS, I suggest that you use Apache as a HTTP web server. It will be the system for which you will find the most documentation and support online. If you'd prefer to avoid Apache, there is also good support for running PyroCMS on Nginx, another fairly-well documented web server platform. MySQL Version 5 is the latest major release of MySQL, and it has been in use for quite some time. It is the primary database choice for PyroCMS and is thoroughly supported. You don't need expert level experience with MySQL to run PyroCMS, but you'll need to be familiar with writing SQL queries and building relational databases if you plan to create add-ons for the system. You can learn more about MySQL at http://www.mysql.com. 
PHP Version 5.2 of PHP is no longer the officially supported release of PHP, which is, at the time of this article, Version 5.4. Version 5.2, which has been criticized as being a low server requirement for any CMS, is allowed with PyroCMS because it is the minimum version requirement for CodeIgniter, the framework upon which PyroCMS is built. While future versions of PyroCMS may upgrade this minimum requirement to PHP 5.3 or higher, you can safely use PyroCMS with PHP 5.2. Also, many server operating systems, like SUSE and Ubuntu, install PHP 5.2 by default. You can, of course, upgrade PHP to the latest version without causing harm to your instance of PyroCMS. To help future-proof your installation of PyroCMS, it may be wise to install PHP 5.3 or above, to maximize your readiness for when PyroCMS more strictly adopts features found in PHP 5.3 and 5.4, such as namespacing. GD2 GD2, a library used in the manipulation and creation of images, is used by PyroCMS to dynamically generate images (where needed) and to crop and resize images used in many PyroCMS modules and add-ons. The image-based support offered by this library is invaluable. cURL As described on the cURL project website, cURL is "a command line tool for transferring data with URL syntax" using a large number of methods, including HTTP(S) GET, POST, PUT, and so on. You can learn more about the project and how to use cURL on their website http://curl.haxx.se. If you've never used cURL with PHP, I recommend taking time to learn how to use it, especially if you are thinking about building a web-based API using PyroCMS. Most popular web hosting companies meet the basic server requirements for PyroCMS. Downloading PyroCMS Getting your hands on a copy of PyroCMS is very simple. You can download the system files from one of two locations: the PyroCMS project website and GitHub. To download PyroCMS from the project website, visit http://www.pyrocms.com and click on the green button labeled Get PyroCMS! This will take you to a download page that gives you the choice between downloading the Community version of PyroCMS and buying the Professional version. If you are new to PyroCMS, you can start with the Community version, currently at Version 2.2.3. The following screenshot shows the download screen: To download PyroCMS from GitHub, visit https://github.com/pyrocms/pyrocms and click on the button labeled Download ZIP to get the latest Community version of PyroCMS, as shown in the following screenshot: If you know how to use Git, you can also clone a fresh version of PyroCMS using the following command. A word of warning: cloning PyroCMS from GitHub will usually give you the latest, stable release of the system, but it could include changes not described in this article. Make sure you check out a stable release from PyroCMS's repository. git clone https://github.com/pyrocms/pyrocms.git As a side note, if you've never used Git, I recommend taking some time to get started using it. PyroCMS is an open source project hosted in a Git repository on GitHub, which means that the system is open to being improved by any developer looking to contribute to the well-being of the project. It is also very common for PyroCMS developers to host their own add-on projects on GitHub and other online Git repository services. Summary In this article, we have covered the prerequisites for using PyroCMS, and also how to download PyroCMS.
Resources for Article : Further resources on this subject: Kentico CMS 5 Website Development: Managing Site Structure [Article] Kentico CMS 5 Website Development: Workflow Management [Article] Web CMS [Article]

The DHTMLX Grid

Packt
30 Oct 2013
7 min read
(For more resources related to this topic, see here.) The DHTMLX grid component is one of the more widely used components of the library. It has a vast amount of settings and abilities that are so robust we could probably write an entire book on them. But since we have an application to build, we will touch on some of the main methods and get into utilizing it. Some of the cool features that the grid supports is filtering, spanning rows and columns, multiple headers, dynamic scroll loading, paging, inline editing, cookie state, dragging/ordering columns, images, multi-selection, and events. By the end of this article, we will have a functional grid where we will control the editing, viewing, adding, and removing of users. The grid methods and events When creating a DHTMLX grid, we first create the object; second we add all the settings and then call a method to initialize it. After the grid is initialized data can then be added. The order of steps to create a grid is as follows: Create the grid object Apply settings Initialize Add data Now we will go over initializing a grid. Initialization choices We can initialize a DHTMLX grid in two ways, similar to the other DHTMLX objects. The first way is to attach it to a DOM element and the second way is to attach it to an existing DHTMLX layout cell or layout. A grid can be constructed by either passing in a JavaScript object with all the settings or built through individual methods. Initialization on a DOM element Let's attach the grid to a DOM element. First we must clear the page and add a div element using JavaScript. Type and run the following code line in the developer tools console: document.body.innerHTML = "<div id='myGridCont'></div>"; We just cleared all of the body tags content and replaced it with a div tag having the id attribute value of myGridCont. Now, create a grid object to the div tag, add some settings and initialize it. Type and run the following code in the developer tools console: var myGrid = new dhtmlXGridObject("myGridCont"); myGrid.setImagePath(config.imagePath); myGrid.setHeader(["Column1", "Column2", "Column3"]); myGrid.init(); You should see the page with showing just the grid header with three columns. Next, we will create a grid on an existing cell object. Initialization on a cell object Refresh the page and add a grid to the appLayout cell. Type and run the following code in the developer tools console: var myGrid = appLayout.cells("a").attachGrid(); myGrid.setImagePath(config.imagePath); myGrid.setHeader(["Column1","Column2","Column3"]); myGrid.init(); You will now see the grid columns just below the toolbar. Grid methods Now let's go over some available grid methods. Then we can add rows and call events on this grid. For these exercises we will be using the global appLayout variable. Refresh the page. attachGrid We will begin by creating a grid to a cell. The attachGrid method creates and attaches a grid object to a cell. This is the first step in creating a grid. Type and run the following code line in the console: var myGrid = appLayout.cells("a").attachGrid(); setImagePath The setImagePath method allows the grid to know where we have the images placed for referencing in the design. We have the application image path set in the config object. Type and run the following code line in the console: myGrid.setImagePath(config.imagePath); setHeader The setHeader method sets the column headers and determines how many headers we will have. The argument is a JavaScript array. 
Type and run the following code line in the console: myGrid.setHeader(["Column1", "Column2", "Column3"]); setInitWidths The setinitWidths method will set the initial widths of each of the columns. The asterisk mark (*) is used to set the width automatically. Type and run the following code line in the console: myGrid.setInitWidths("125,95,*");   setColAlign The setColAlign method allows us to align the column's content position. Type and run the following code line in the console: myGrid.setColAlign("right,center,left"); init Up until this point, we haven't seen much going on. It was all happening behind the scenes. To see these changes the grid must be initialized. Type and run the following code line in the console: myGrid.init(); Now you see the columns that we provided. addRow Now that we have a grid created let's add a couple rows and start interacting. The addRow method adds a row to the grid. The parameters are ID and columns. Type and run the following code in the console: myGrid.addRow(1,["test1","test2","test3"]); myGrid.addRow(2,["test1","test2","test3"]); We just created two rows inside the grid. setColTypes The setColTypes method sets what types of data a column will contain. The available type options are: ro (readonly) ed (editor) txt (textarea) ch (checkbox) ra (radio button) co (combobox) Currently, the grid allows for inline editing if you were to double-click on grid cell. We do not want this for the application. So, we will set the column types to read-only. Type and run the following code in the console: myGrid.setColTypes("ro,ro,ro"); Now the cells are no longer editable inside the grid. getSelectedRowId The getSelectedRowId method returns the ID of the selected row. If there is nothing selected it will return null. Type and run the following code line in the console: myGrid.getSelectedRowId(); clearSelection The clearSelection method clears all selections in the grid. Type and run the following code line in the console: myGrid.clearSelection(); Now any previous selections are cleared. clearAll The clearAll method removes all the grid rows. Prior to adding more data to the grid we first must clear it. If not we will have duplicated data. Type and run the following code line in the console: myGrid.clearAll(); Now the grid is empty. parse The parse method allows the loading of data to a grid in the format of an XML string, CSV string, XML island, XML object, JSON object, and JavaScript array. We will use the parse method with a JSON object while creating a grid for the application. Here is what the parse method syntax looks like (do not run this in console): myGrid.parse(data, "json"); Grid events The DHTMLX grid component has a vast amount of events. You can view them in their entirety in the documentation. We will cover the onRowDblClicked and onRowSelect events. onRowDblClicked The onRowDblClicked event is triggered when a grid row is double-clicked. The handler receives the argument of the row ID that was double-clicked. Type and run the following code in console: myGrid.attachEvent("onRowDblClicked", function(rowId){ console.log(rowId); }); Double-click one of the rows and the console will log the ID of that row. onRowSelect The onRowSelect event will trigger upon selection of a row. Type and run the following code in console: myGrid.attachEvent("onRowSelect", function(rowId){ console.log(rowId); }); Now, when you select a row the console will log the id of that row. This can be perceived as a single click. 
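Before moving on to the summary, it helps to see the calls from this section in one place. The following is a minimal sketch, assuming the same appLayout cell and config.imagePath used throughout this article, that builds the grid, loads two rows, and wires up both events:
// build and configure the grid in cell "a" of the existing layout
var myGrid = appLayout.cells("a").attachGrid();
myGrid.setImagePath(config.imagePath);
myGrid.setHeader(["Column1", "Column2", "Column3"]);
myGrid.setInitWidths("125,95,*");
myGrid.setColAlign("right,center,left");
myGrid.setColTypes("ro,ro,ro"); // read-only cells, so no inline editing
myGrid.init();
// add data only after init()
myGrid.addRow(1, ["test1", "test2", "test3"]);
myGrid.addRow(2, ["test1", "test2", "test3"]);
// both events receive the id of the affected row
myGrid.attachEvent("onRowSelect", function (rowId) { console.log("selected: " + rowId); });
myGrid.attachEvent("onRowDblClicked", function (rowId) { console.log("double-clicked: " + rowId); });
All of these calls appear individually earlier in this article; the only choice made here is to set the column types before init, which avoids the brief period in which the cells are editable.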
Summary In this article, we learned about the DHTMLX grid component. We also added the user grid to the application and tested it with the storage and callbacks methods. Resources for Article: Further resources on this subject: HTML5 Presentations - creating our initial presentation [Article] HTML5: Generic Containers [Article] HTML5 Canvas [Article]

What is Drupal?

Packt
30 Oct 2013
3 min read
(For more resources related to this topic, see here.) Currently, Drupal is being used as a CMS in the following domains: Arts Banking and Financial Beauty and Fashion Blogging Community E-Commerce Education Entertainment Government Health Care Legal Industry Manufacturing and Energy Media Music Non-Profit Publishing Social Networking Small business The diversity offered by Drupal is the reason for its growing popularity. Drupal is written in PHP. PHP is an open source server-side scripting language and it has changed the technological landscape to a great extent. The Economist, Examiner.com, and The White House websites have been developed in Drupal. System requirements Disk space A minimum installation requires 15 megabytes. 60 MB is needed for a website with many contributed modules and themes installed. Keep in mind you need much more for the database, files uploaded by the users, media, backups, and other files. Web server Apache, Nginx, or Microsoft IIS. Database Drupal 6: MySQL 4.1 or higher, PostgreSQL 7.1. Drupal 7: MySQL 5.0.15 or higher with PDO, PostgreSQL 8.3 or higher with PDO, SQLite 3.3.7 or higher. Microsoft SQL Server and Oracle are supported by additional modules. PHP Drupal 6: PHP 4.4.0 or higher (5.2 recommended). Drupal 7: PHP 5.2.5 or higher (5.3 recommended). Drupal 8: PHP 5.3.10 or higher. How to create multiple websites using Drupal Multi-site allows you to share a single Drupal installation (including core code, contributed modules, and themes) among several sites. One of the greatest features of Drupal is the multi-site feature. Using this feature, a single Drupal installation can be used for various websites. The multi-site feature is helpful in managing code during upgrades. Each site will have its own content, settings, enabled modules, and enabled theme. When to use the multi-site feature? If the sites are similar in functionality (they use the same modules or the same Drupal distribution), you should use the multi-site feature. If the functionality is different, don't use multi-site. To create a new site using a shared Drupal code base you must complete the following steps: Create a new database for the site (if there is already an existing database you can also use this by defining a prefix in the installation procedure). Create a new subdirectory of the 'sites' directory with the name of your new site (see below for information on how to name the subdirectory). Copy the file sites/default/default.settings.php into the subdirectory you created in the previous step. Rename the new file to settings.php. Adjust the permissions of the new site directory. Make symbolic links if you are using a subdirectory such as packtpub.com/subdir and not a subdomain such as subd.example.com. In a Web browser, navigate to the URL of the new site and continue with the standard Drupal installation procedure. Summary This article briefly discusses the Drupal platform and also the requirements for installing it. Resources for Article: Further resources on this subject: Drupal Web Services: Twitter and Drupal [Article] Drupal and Ubercart 2.x: Install a Ready-made Drupal Theme [Article] Drupal 7 Module Development: Drupal's Theme Layer [Article]

The Dialog Widget

Packt
30 Oct 2013
14 min read
(For more resources related to this topic, see here.) Wijmo additions to the dialog widget at a glance By default, the dialog window includes the pin, toggle, minimize, maximize, and close buttons. Pinning the dialog to a location on the screen disables the dragging feature on the title bar. The dialog can still be resized. Maximizing the dialog makes it take up the area inside the browser window. Toggling it expands or collapses it so that the dialog contents are shown or hidden with the title bar remaining visible. If these buttons cramp your style, they can be turned off with the captionButtons option. You can see how the dialog is presented in the browser from the following screenshot: Wijmo features an additional API compared to jQuery UI for changing the behavior of the dialog. The new API is mostly for the buttons in the title bar and for managing window stacking. Window stacking determines which windows are drawn on top of other ones. Clicking on a dialog raises it above other dialogs and changes their window stacking settings. The following table shows the options, events, and methods added in Wijmo.
Options: captionButtons, contentUrl, disabled, expandingAnimation, stack, zIndex
Events: blur, buttonCreating, stateChanged
Methods: disable, enable, getState, maximize, minimize, pin, refresh, reset, restore, toggle, widget
The contentUrl option allows you to specify a URL to load within the window. The expandingAnimation option is applied when the dialog is toggled from a collapsed state to an expanded state. The stack and zIndex options determine whether the dialog sits on top of other dialogs. Similar to the blur event on input elements, the blur event for the dialog is fired when the dialog loses focus. The buttonCreating event is fired when the buttons are created and can be used to modify the buttons on the title bar. The disable method disables the event handlers for the dialog. It prevents the default button actions and disables dragging and resizing. The widget method returns the dialog HTML element. The methods maximize, minimize, pin, refresh, reset, restore, and toggle are available as buttons on the title bar. The best way to see what they do is to play around with them. In addition, the getState method is used to find the dialog state and returns either maximized, minimized, or normal. Similarly, the stateChanged event is fired when the state of the dialog changes. The methods are called by passing the method name as a parameter to the wijdialog function. To disable button interactions, pass the string disable: $("#dialog").wijdialog("disable"); Many of the methods come as pairs, and enable and disable are one of them. Calling enable enables the buttons again. Another pair is restore/minimize. minimize hides the dialog in a tray on the left bottom of the screen. restore sets the dialog back to its normal size and displays it again. The most important option for usability is the captionButtons option. Although users are likely familiar with the minimize, resize, and close buttons, the pin and toggle buttons are not featured in common desktop environments. Therefore, you will want to choose the buttons that are visible depending on your use of the dialog box in your project. To turn off a button on the title bar, set the visible option to false.
A default jQuery UI dialog window with only the close button can be created with: $("#dialog").wijdialog({captionButtons: { pin: { visible: false }, refresh: { visible: false }, toggle: { visible: false }, minimize: { visible: false }, maximize: { visible: false } } }); The other options for each button are click, iconClassOff, and iconClassOn. The click option specifies an event handler for the button. Nevertheless, the buttons come with default actions and you will want to use different icons for custom actions. That's where iconClass comes in. iconClassOn defines the CSS class for the button when it is loaded. iconClassOff is the class for the button icon after clicking. For a list of available jQuery UI icons and their classes, see http://jquery-ui.googlecode.com/svn/tags/1.6rc5/tests/static/icons.html. Our next example uses ui-icon-zoomin, ui-icon-zoomout, and ui-icon-lightbulb. They can be found by toggling the text for the icons on the web page as shown in the preceding screenshot. Adding custom buttons jQuery UI's dialog API lacks an option for configuring the buttons shown on the title bar. Wijmo not only comes with useful default buttons, but also lets you override them easily. <!DOCTYPE HTML> <html> <head> ... <style> .plus { font-size: 150%; } </style> <script id="scriptInit" type="text/javascript"> $(document).ready(function () { $('#dialog').wijdialog({ autoOpen: true, captionButtons: { pin: { visible: false }, refresh: { visible: false }, toggle: {visible: true, click: function () { $('#dialog').toggleClass('plus') }, iconClassOn: 'ui-icon-zoomin', iconClassOff: 'ui-icon-zoomout'} , minimize: { visible: false }, maximize: {visible: true, click: function () { alert('To enloarge text, click the zoom icon.') }, iconClassOn: 'ui-icon-lightbulb' }, close: {visible: true, click: self.close, iconClassOn:'ui-icon-close'} } }); }); </script> </head> <body> <div id="dialog" title="Basic dialog"> <p>Loremipsum dolor sitamet, consectetueradipiscingelit. Aeneancommodo ligula eget dolor.Aeneanmassa. Cum sociisnatoquepenatibusetmagnis dis parturient montes, nasceturridiculus mus. Donec quam felis, ultriciesnec, pellentesqueeu, pretiumquis, sem. Nullaconsequatmassaquisenim. Donecpedejusto, fringillavel, aliquetnec, vulputate</p> </div> </body> </html> We create a dialog window passing in the captionButtons option. The pin, refresh, and minimize buttons have visible set to false so that the title bar is initialized without them. The final output looks as shown in the following screenshot: In addition, the toggle and maximize buttons are modified and given custom behaviors. The toggle button toggles the font size of the text by applying or removing a CSS class. Its default icon, set with iconClassOn, indicates that clicking on it will zoom in on the text. Once clicked, the icon changes to a zoom out icon. Likewise, the behavior and appearance of the maximize button have been changed. In the position where the maximize icon was displayed in the title bar previously, there is now a lightbulb icon with a tip. Although this method of adding new buttons to the title bar seems clumsy, it is the only option that Wijmo currently offers. Adding buttons in the content area is much simpler. The buttons option specifies the buttons to be displayed in the dialog window content area below the title bar. 
For example, to display a simple confirmation button: $('#dialog').wijdialog({buttons: {ok: function () { $(this).wijdialog('close') }}}); The text displayed on the button is ok and clicking on the button hides the dialog. Calling $('#dialog').wijdialog('open') will show the dialog again. Configuring the dialog widget's appearance Wijmo offers several options that change the dialog's appearance including title, height, width, and position. The title of the dialog can be changed either by setting the title attribute of the div element of the dialog, or by using the title option. To change the dialog's theme, you can use CSS styling on the wijmo-wijdialog and wijmo-wijdialog-captionbutton classes: <!DOCTYPE HTML> <html> <head> ... <style> .wijmo-wijdialog { /*rounded corners*/ -webkit-border-radius: 12px; border-radius: 12px; background-clip: padding-box; /*shadow behind dialog window*/ -moz-box-shadow: 3px 3px 5px 6px #ccc; -webkit-box-shadow: 3px 3px 5px 6px #ccc; box-shadow: 3px 3px 5px 6px #ccc; /*fade contents from dark gray to gray*/ background-image: -webkit-gradient(linear, left top, left bottom, from(#444444), to(#999999)); background-image: -webkit-linear-gradient(top, #444444, #999999); background-image: -moz-linear-gradient(top, #444444, #999999); background-image: -o-linear-gradient(top, #444444, #999999); background-image: linear-gradient(to bottom, #444444, #999999); background-color: transparent; text-shadow: 1px 1px 3px #888; } </style> <script id="scriptInit" type="text/javascript"> $(document).ready(function () { $('#dialog').wijdialog({width: 350}); }); </script> </head> <body> <div id="dialog" title="Subtle gradients"> <p>Loremipsum dolor sitamet, consectetueradipiscingelit. Aeneancommodo ligula eget dolor.Aeneanmassa. Cum sociisnatoquepenatibusetmagnis dis parturient montes, nasceturridiculus mus. Donec quam felis, ultriciesnec, pellentesqueeu, pretiumquis, sem. Nullaconsequatmassaquisenim. Donecpedejusto, fringillavel, aliquetnec, vulputate </p> </div> </body> </html> We now add rounded boxes, a box shadow, and a text shadow to the dialog box. This is done with the .wijmo-wijdialog class. Since many of the CSS3 properties have different names on different browsers, the browser specific properties are used. For example, -webkit-box-shadow is necessary on Webkit-based browsers. The dialog width is set to 350 px when initialized so that the title text and buttons all fit on one line. Loading external content Wijmo makes it easy to load content in an iFrame. Simply pass a URL with the contentUrl option: $(document).ready(function () { $("#dialog").wijdialog({captionButtons: { pin: { visible: false }, refresh: { visible: true }, toggle: { visible: false }, minimize: { visible: false }, maximize: { visible: true }, close: { visible: false } }, contentUrl: "http://wijmo.com/demo/themes/" }); }); This will load the Wijmo theme explorer in a dialog window with refresh and maximize/restore buttons. This output can be seen in the following screenshot: The refresh button reloads the content in the iFrame, which is useful for dynamic content. The maximize button resizes the dialog window. Form Components Wijmo form decorator widgets for radio button, checkbox, dropdown, and textbox elements give forms a consistent visual style across all platforms. There are separate libraries for decorating the dropdown and other form elements, but Wijmo gives them a consistent theme. jQuery UI lacks form decorators, leaving the styling of form components to the designer. 
Using Wijmo form components saves time during development and presents a consistent interface across all browsers. Checkbox The checkbox widget is an excellent example of the style enhancements that Wijmo provides over default form controls. The checkbox is used if multiple choices are allowed. The following screenshot shows the different checkbox states: Wijmo adds rounded corners, gradients, and hover highlighting to the checkbox. Also, the increased size makes it more usable. Wijmo checkboxes can be initialized to be checked. The code for this purpose is as follows: <!DOCTYPE HTML> <html> <head> ... <script id="scriptInit" type="text/javascript"> $(document).ready(function () { $("#checkbox3").wijcheckbox({checked: true}); $(":input[type='checkbox']:not(:checked)").wijcheckbox(); }); </script> <style> div { display: block; margin-top: 2em; } </style> </head> <body> <div><input type='checkbox' id='checkbox1' /><label for='checkbox1'>Unchecked</label></div> <div><input type='checkbox' id='checkbox2' /><label for='checkbox2'>Hover</label></div> <div><input type='checkbox' id='checkbox3' /><label for='checkbox3'>Checked</label></div> </body> </html>. In this instance, checkbox3 is set to Checked as it is initialized. You will not get the same result if one of the checkboxes is initialized twice. Here, we avoid that by selecting the checkboxes that are not checked after checkbox3 is set to be Checked. Radio buttons Radio buttons, in contrast with checkboxes, allow only one of the several options to be selected. In addition, they are customized through the HTML markup rather than a JavaScript API. To illustrate, the checked option is set by the checked attribute: <input type="radio" checked /> jQuery UI offers a button widget for radio buttons, as shown in the following screenshot, which in my experience causes confusion as users think that they can select multiple options: The Wijmo radio buttons are closer in appearance to regular radio buttons so that users would expect the same behavior, as shown in the following screenshot: Wijmo radio buttons are initialized by calling the wijradiomethod method on radio button elements: <!DOCTYPE html> <html> <head> ... <script id="scriptInit" type="text/javascript">$(document).ready(function () { $(":input[type='radio']").wijradio({ changed: function (e, data) { if (data.checked) { alert($(this).attr('id') + ' is checked') } } }); }); </script> </head> <body> <div id="radio"> <input type="radio" id="radio1" name="radio"/><label for="radio1">Choice 1</label> <input type="radio" id="radio2" name="radio" checked="checked"/><label for="radio2">Choice 2</label> <input type="radio" id="radio3" name="radio"/><label for="radio3">Choice 3</label> </div> </body> </html> In this example, the changed option, which is also available for checkboxes, is set to a handler. The handler is passed a jQuery.Event object as the first argument. It is just a JavaScript event object normalized for consistency across browsers. The second argument exposes the state of the widget. For both checkboxes and radio buttons, it is an object with only the checked property. Dropdown Styling a dropdown to be consistent across all browsers is notoriously difficult. Wijmo offers two options for styling the HTML select and option elements. When there are no option groups, the ComboBox is the better widget to use. For a dropdown with nested options under option groups, only the wijdropdown widget will work. 
As an example, consider a country selector categorized by continent: <!DOCTYPE HTML> <html> <head> ... <script id="scriptInit" type="text/javascript"> $(document).ready(function () { $('select[name=country]').wijdropdown(); $('#reset').button().click(function(){ $('select[name=country]').wijdropdown('destroy') }); $('#refresh').button().click(function(){ $('select[name=country]').wijdropdown('refresh') }) }); </script> </head> <body> <button id="reset"> Reset </button> <button id="refresh"> Refresh </button> <select name="country" style="width:170px"> <optgroup label="Africa"> <option value="gam">Gambia</option> <option value="mad">Madagascar</option> <option value="nam">Namibia</option> </optgroup> <optgroup label="Europe"> <option value="fra">France</option> <option value="rus">Russia</option> </optgroup> <optgroup label="North America"> <option value="can">Canada</option> <option value="mex">Mexico</option> <option selected="selected" value="usa">United States</option> </optgroup> </select> </body> </html> The select element's width is set to 170 pixels so that when the dropdown is initialized, both the dropdown menu and items have a width of 170 pixels. This allows the North America option category to be displayed on a single line, as shown in the following screenshot. Although the dropdown widget lacks a width option, it takes the select element's width when it is initialized. To initialize the dropdown, call the wijdropdown method on the select element: $('select[name=country]').wijdropdown(); The dropdown element uses the blind animation to show the items when the menu is toggled. Also, it applies the same click animation as on buttons to the slider and menu: To reset the dropdown to a select box, I've added a reset button that calls the destroy method. If you have JavaScript code that dynamically changes the styling of the dropdown, the refresh method applies the Wijmo styles again. Summary The Wijmo dialog widget is an extension of the jQuery UI dialog. In this article, the features unique to Wijmo's dialog widget are explored and given emphasis. I showed you how to add custom buttons, how to change the dialog appearance, and how to load content from other URLs in the dialog. We also learned about Wijmo's form components. A checkbox is used when multiple items can be selected. Wijmo's checkbox widget has style enhancements over the default checkboxes. Radio buttons are used when only one item is to be selected. While jQuery UI only supports button sets on radio buttons, Wijmo's radio buttons are much more intuitive. Wijmo's dropdown widget should only be used when there are nested or categorized <select> options. The ComboBox comes with more features when the structure of the options is flat. Resources for Article: Further resources on this subject: Wijmo Widgets [Article] jQuery Animation: Tips and Tricks [Article] Building a Custom Version of jQuery [Article]

APEX Plug-ins

Packt
30 Oct 2013
17 min read
(For more resources related to this topic, see here.) In APEX 4.0, Oracle introduced the plug-in feature. A plug-in is an extension to the existing functionality of APEX. The idea behind plug-ins is to make life easier for developers. Plug-ins are reusable, and can be exported and imported. In this way it is possible to create functionality that is available to all APEX developers, who can install and use it without needing to know what's inside the plug-in. APEX translates settings from the APEX builder to HTML and JavaScript. For example, if you created a text item in the APEX builder, APEX converts this to the following code (simplified): <input type="text" id="P12_NAME" name="P12_NAME" value="your name"> When you create an item type plug-in, you actually take over this conversion task of APEX, and you generate the HTML and JavaScript code yourself by using PL/SQL procedures. That offers a lot of flexibility because now you can make this code generic, so that it can be used for more items. The same goes for region type plug-ins. A region is a container for forms, reports, and so on. The region can be a div or an HTML table. By creating a region type plug-in, you create a region yourself with the possibility to add more functionality to the region. Plug-ins are very useful because they are reusable in every application. To make a plug-in available, go to Shared Components | Plug-ins, and click on the Export Plug-in link on the right-hand side of the page. Select the desired plug-in and file format and click on the Export Plug-in button. The plug-in can then be imported into another application. Following are the six types of plug-in: Item type plug-ins Region type plug-ins Dynamic action plug-ins Process type plug-ins Authorization scheme type plug-ins Authentication scheme type plug-ins In this article we will discuss the first five types of plug-in. Creating an item type plug-in In an item type plug-in you create an item with the possibility to extend its functionality. To demonstrate this, we will make a text field with a tooltip. This functionality is already available in APEX 4.0 by adding the following code to the HTML form element attributes text field in the Element section of the text field: onmouseover="toolTip_enable(event,this,'A tooltip')" But you have to do this for every item that should contain a tooltip. This can be done more easily by creating an item type plug-in with a built-in tooltip. When you then create an item of this plug-in type, you will be asked to enter some text for the tooltip. Getting ready For this recipe you can use an existing page, with a region in which you can put some text items. How to do it... Follow these steps: Go to Shared Components | User Interface | Plug-ins. Click on the Create button. In the Name section, enter a name in the Name text field. In this case we enter tooltip. In the Internal Name text field, enter an internal name. It is advised to use the company's domain address reversed to ensure the name is unique when you decide to share this plug-in. So for example you can use com.packtpub.apex.tooltip.
In the Source section, enter the following code in the PL/SQL Code text area: function render_simple_tooltip ( p_item in apex_plugin.t_page_item , p_plugin in apex_plugin.t_plugin , p_value in varchar2 , p_is_readonly in boolean , p_is_printer_friendly in boolean ) return apex_plugin.t_page_item_render_result is l_result apex_plugin.t_page_item_render_result; begin if apex_application.g_debug then apex_plugin_util.debug_page_item ( p_plugin => p_plugin , p_page_item => p_item , p_value => p_value , p_is_readonly => p_is_readonly , p_is_printer_friendly => p_is_printer_friendly); end if; -- sys.htp.p('<input type="text" id="'||p_item.name||'" name="'||p_item.name||'" class="text_field" onmouseover="toolTip_enable(event,this,'||''''||p_item.attribute_01||''''||')">');--return l_result;end render_simple_tooltip; This function uses the sys.htp.p function to put a text item (<input type="text") on the screen. On the text item, the onmouseover event calls the function toolTip_enable(). This function is an APEX function, and can be used to put a tooltip on an item. The arguments of the function are mandatory. The function starts with the option to show debug information. This can be very useful when you create a plug-in and it doesn't work. After the debug information, the htp.p function puts the text item on the screen, including the call to tooltip_enable. You can also see that the call to tooltip_enable uses p_item.attribute_01. This is a parameter that you can use to pass a value to the plug-in. That is, the following steps in this recipe. The function ends with the return of l_result. This variable is of the type apex_plugin.t_page_item_render_result. For the other types of plug-in there are dedicated return types also, for example t_region_render_result. Click on the Create Plug-in button The next step is to define the parameter (attribute) for this plug-in. In the Custom Attributes section, click on the Add Attribute button. In the Name section, enter a name in the Label text field, for example tooltip. Ensure that the Attribute text field contains the value 1 . In the Settings section, set the Type field to Text . Click on the Create button. In the Callbacks section, enter render_simple_tooltip into the Render Function Name text field. In the Standard Attributes section, check the Is Visible Widget checkbox. Click on the Apply Changes button. The plug-in is now ready. The next step is to create an item of type tooltip plug-in. Go to a page with a region where you want to use an item with a tooltip. In the Items section, click on the add icon to create a new item. Select Plug-ins . Now you will get a list of the available plug-ins. Select the one we just created, that is tooltip . Click on Next . In the Item Name text field, enter a name for the item, for example, tt_item. In the Region drop-down list, select the region you want to put the item in. Click on Next . In the next step you will get a new option. It's the attribute you created with the plug-in. Enter here the tooltip text, for example, This is tooltip text. Click on Next . In the last step, leave everything as it is and click on the Create Item button. You are now ready. Run the page. When you move your mouse pointer over the new item, you will see the tooltip. How it works... As stated before, this plug-in actually uses the function htp.p to put an item on the screen. Together with the call to the JavaScript function, toolTip_enable on the onmouseover event makes this a text item with a tooltip, replacing the normal text item. 
There's more...
The tooltips shown in this recipe are rather simple. You could make them look better, for example, by using Beautytips. Beautytips is an extension to jQuery that can show configurable help balloons. Visit http://plugins.jquery.com to download Beautytips; we used Version 0.9.5-rc1 in this recipe.

Go to Shared Components and click on the Plug-ins link. Click on the tooltip plug-in you just created. In the Source section, replace the code with the following code:

function render_simple_tooltip (
    p_item                in apex_plugin.t_page_item
  , p_plugin              in apex_plugin.t_plugin
  , p_value               in varchar2
  , p_is_readonly         in boolean
  , p_is_printer_friendly in boolean )
  return apex_plugin.t_page_item_render_result
is
  l_result apex_plugin.t_page_item_render_result;
begin
  if apex_application.g_debug then
    apex_plugin_util.debug_page_item (
        p_plugin              => p_plugin
      , p_page_item           => p_item
      , p_value               => p_value
      , p_is_readonly         => p_is_readonly
      , p_is_printer_friendly => p_is_printer_friendly);
  end if;

The function again starts with the debug option, so you can see what happens when something goes wrong.

  -- Register the JavaScript and CSS libraries the plug-in uses.
  apex_javascript.add_library (
      p_name      => 'jquery.bgiframe.min'
    , p_directory => p_plugin.file_prefix
    , p_version   => null );
  apex_javascript.add_library (
      p_name      => 'jquery.bt.min'
    , p_directory => p_plugin.file_prefix
    , p_version   => null );
  apex_javascript.add_library (
      p_name      => 'jquery.easing.1.3'
    , p_directory => p_plugin.file_prefix
    , p_version   => null );
  apex_javascript.add_library (
      p_name      => 'jquery.hoverintent.minified'
    , p_directory => p_plugin.file_prefix
    , p_version   => null );
  apex_javascript.add_library (
      p_name      => 'excanvas'
    , p_directory => p_plugin.file_prefix
    , p_version   => null );

After that you see a number of calls to the function apex_javascript.add_library. These libraries are necessary to enable the nicer tooltips. Using apex_javascript.add_library ensures that a JavaScript library is included in the final HTML of a page only once, regardless of how many plug-in items appear on that page.

  sys.htp.p('<input type="text" id="'||p_item.name||'" class="text_field" title="'||p_item.attribute_01||'">');
  --
  apex_javascript.add_onload_code (p_code =>
    '$("#'||p_item.name||'").bt({
        padding: 20
      , width: 100
      , spikeLength: 40
      , spikeGirth: 40
      , cornerRadius: 50
      , fill: '||''''||'rgba(200, 50, 50, .8)'||''''||'
      , strokeWidth: 4
      , strokeStyle: '||''''||'#E30'||''''||'
      , cssStyles: {color: '||''''||'#FFF'||''''||', fontWeight: '||''''||'bold'||''''||'}
    });');
  --
  return l_result;
end render_simple_tooltip;

Another difference from the first version is the call to the Beautytips library. In this call you can customize the text balloon with colors and other options. The onmouseover event is no longer necessary, because the call to $().bt in the apex_javascript.add_onload_code takes over this task. The $().bt function is a jQuery function that references the generated HTML of the plug-in item by its ID and dynamically converts it to show a tooltip using the Beautytips plug-in. You can of course create extra plug-in item type attributes to support different colors and so on per item.

To add the other libraries, do the following: In the Files section, click on the Upload new file button. Enter the path and the name of the library; you can use the file button to locate the libraries on your filesystem. Once you have selected the file, click on the Upload button.
The files and their locations can be found in the following table:

Library                          Location
jquery.bgiframe.min.js           bt-0.9.5-rc1\other_libs\bgiframe_2.1.1
jquery.bt.min.js                 bt-0.9.5-rc1
jquery.easing.1.3.js             bt-0.9.5-rc1\other_libs
jquery.hoverintent.minified.js   bt-0.9.5-rc1\other_libs
excanvas.js                      bt-0.9.5-rc1\other_libs\excanvas_r3

Once all libraries have been uploaded, the plug-in is ready. The tooltip now looks quite different, as shown in the following screenshot:

In the plug-in settings, you can enable some item-specific settings. For example, if you want to put a label in front of the text item, check the Is Visible Widget checkbox in the Standard Attributes section. For more information on this tooltip, go to http://plugins.jquery.com/project/bt.

Creating a region type plug-in
As you may know, a region is actually a div. With the region type plug-in you can customize this div, and because it is a plug-in, you can reuse it on other pages. You also have the possibility to make the div look better by using JavaScript libraries. In this recipe we will make a carousel with switching panels. The panels can contain images, but they can also contain data from a table. We will make use of another jQuery extension, Step Carousel.

Getting ready
You can download stepcarousel.js from http://www.dynamicdrive.com/dynamicindex4/stepcarousel.htm. However, in order to get this recipe to work in APEX, we needed to make a slight modification to it, so stepcarousel.js, arrowl.gif, and arrowr.gif are included with this book.

How to do it...
Follow the given steps to create the plug-in: Go to Shared Components and click on the Plug-ins link. Click on the Create button. In the Name section, enter a name for the plug-in in the Name field; we will use Carousel. In the Internal Name text field, enter a unique internal name. It is advised to use your domain reversed, for example com.packtpub.carousel. In the Type listbox, select Region. In the Source section, enter the following code in the PL/SQL Code text area:

function render_stepcarousel (
    p_region              in apex_plugin.t_region
  , p_plugin              in apex_plugin.t_plugin
  , p_is_printer_friendly in boolean )
  return apex_plugin.t_region_render_result
is
  cursor c_crl is
    select id
    ,      panel_title
    ,      panel_text
    ,      panel_text_date
    from   app_carousel
    order by id;
  --
  l_code varchar2(32767);
begin

The function starts with a number of arguments. These arguments are mandatory, but they have default values. In the declare section there is a cursor with a query on the table APP_CAROUSEL. This table contains the data that will appear in the panels of the carousel.

  --
  -- add the libraries and stylesheets
  --
  apex_javascript.add_library (
      p_name      => 'stepcarousel'
    , p_directory => p_plugin.file_prefix
    , p_version   => null );
  --
  -- Output the placeholder for the region which is used by
  -- the JavaScript code

The actual code starts with the declaration of stepcarousel.js. The function APEX_JAVASCRIPT.ADD_LIBRARY loads this library. This declaration is necessary, but the file also needs to be uploaded in one of the next steps. You don't have to use the extension .js here in the code.
  --
  sys.htp.p('<style type="text/css">');
  --
  sys.htp.p('.stepcarousel{');
  sys.htp.p('position: relative;');
  sys.htp.p('border: 10px solid black;');
  sys.htp.p('overflow: scroll;');
  sys.htp.p('width: '||p_region.attribute_01||'px;');
  sys.htp.p('height: '||p_region.attribute_02||'px;');
  sys.htp.p('}');
  --
  sys.htp.p('.stepcarousel .belt{');
  sys.htp.p('position: absolute;');
  sys.htp.p('left: 0;');
  sys.htp.p('top: 0;');
  sys.htp.p('}');
  --
  sys.htp.p('.stepcarousel .panel{');
  sys.htp.p('float: left;');
  sys.htp.p('overflow: hidden;');
  sys.htp.p('margin: 10px;');
  sys.htp.p('width: 250px;');
  sys.htp.p('}');
  --
  sys.htp.p('</style>');

After loading the JavaScript library, some style elements are put on the screen. These style elements could have been put in a Cascading Style Sheet (CSS), but since we want to be able to adjust the size of the carousel, we use two plug-in attributes to set the height and the width, and both are part of the style elements.

  --
  sys.htp.p('<div id="mygallery" class="stepcarousel" style="overflow:hidden"><div class="belt">');
  --
  for r_crl in c_crl loop
    sys.htp.p('<div class="panel">');
    sys.htp.p('<b>'||to_char(r_crl.panel_text_date,'DD-MON-YYYY')||'</b>');
    sys.htp.p('<br>');
    sys.htp.p('<b>'||r_crl.panel_title||'</b>');
    sys.htp.p('<hr>');
    sys.htp.p(r_crl.panel_text);
    sys.htp.p('</div>');
  end loop;
  --
  sys.htp.p('</div></div>');

The next command in the script is the actual creation of a div. Important here are the ID of the div and its class: Step Carousel searches for these identifiers and replaces the div with the step carousel. The function then fetches the rows from the query in the cursor. For every row found, the formatted text is placed between div tags, so that Step Carousel recognizes that the text should be placed on the panels.

  --
  -- Add the onload code to show the carousel
  --
  l_code := 'stepcarousel.setup({
      galleryid: "mygallery"
    , beltclass: "belt"
    , panelclass: "panel"
    , autostep: {enable:true, moveby:1, pause:3000}
    , panelbehavior: {speed:500, wraparound:true, persist:true}
    , defaultbuttons: {enable: true, moveby: 1, leftnav:["'||p_plugin.file_prefix||'arrowl.gif", -5, 80], rightnav:["'||p_plugin.file_prefix||'arrowr.gif", -20, 80]}
    , statusvars: ["statusA", "statusB", "statusC"]
    , contenttype: ["inline"]})';
  --
  apex_javascript.add_onload_code (p_code => l_code);
  --
  return null;
end render_stepcarousel;

The function ends with the call to apex_javascript.add_onload_code. This is where the actual Step Carousel code starts, and where you can customize the carousel: its size, rotation speed, and so on.

In the Callbacks section, enter the name of the function in the Return Function Name text field; in this case it is render_stepcarousel. Click on the Create Plug-in button. In the Files section, upload the stepcarousel.js, arrowl.gif, and arrowr.gif files.

For this recipe, the file stepcarousel.js needed a small modification. In its last section (setup:function), document.write is used to add some style to the div tag. Unfortunately, this does not work in APEX: document.write destroys the rest of the output, leaving APEX with nothing to show and resulting in an empty page. The document.write call had to be removed from stepcarousel.js, and the corresponding style element is added in the code of the plug-in instead:

  sys.htp.p('<div id="mygallery" class="stepcarousel" style="overflow:hidden"><div class="belt">');

In this line of code you see style="overflow:hidden". That is the style that document.write originally added in stepcarousel.js.
The overflow:hidden setting hides the scrollbars. After you have uploaded the files, click on the Apply Changes button. The plug-in is ready and can now be used in a page.

Go to the page where you want the step carousel to be shown. In the Regions section, click on the add icon. In the next step, select Plug-ins, then select Carousel and click on Next. Enter a title for this region, for example Newscarousel, and click on Next. In the next step, enter the height and the width of the carousel; to show a carousel with three panels, enter 800 in the Width text field and 100 in the Height text field. Click on Next and then on the Create Region button. The region is ready. Run the page to see the result.

How it works...
The step carousel is actually a div. The region type plug-in uses the function sys.htp.p to put this div on the screen. In this example a div is used for the region, but an HTML table can be used as well. An APEX region can contain any HTML output, but for positioning, mostly an HTML table or a div is used, especially when layout is important within the region. The apex_javascript.add_onload_code call starts the animation of the carousel. The carousel switches panels every 3 seconds; this can be adjusted with the pause option (pause: 3000, in milliseconds).

See also
For more information on this jQuery extension, go to http://www.dynamicdrive.com/dynamicindex4/stepcarousel.htm.
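As a closing illustration of the options mentioned in the How it works section, the following is a minimal, hedged sketch of the client-side setup call that the plug-in generates through apex_javascript.add_onload_code. The option values mirror the recipe, except that pause is raised to 5000 so the panels rotate every five seconds; the arrow image paths are placeholders for the files that are normally prefixed with p_plugin.file_prefix.

  // Hedged sketch: the generated onload call, with a slower rotation.
  stepcarousel.setup({
    galleryid: 'mygallery',        // the div ID rendered by the plug-in
    beltclass: 'belt',
    panelclass: 'panel',
    autostep: { enable: true, moveby: 1, pause: 5000 },   // 5 seconds per panel
    panelbehavior: { speed: 500, wraparound: true, persist: true },
    defaultbuttons: {
      enable: true,
      moveby: 1,
      leftnav:  ['arrowl.gif', -5, 80],    // placeholder image paths
      rightnav: ['arrowr.gif', -20, 80]
    },
    statusvars: ['statusA', 'statusB', 'statusC'],
    contenttype: ['inline']
  });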
Packt
29 Oct 2013
10 min read
Save for later

Understanding WebSockets and Server-sent Events in Detail

(For more resources related to this topic, see here.) Encoders and decoders in Java API for WebSockets As seen in the previous chapter, the class-level annotation @ServerEndpoint indicates that a Java class is a WebSocket endpoint at runtime. The value attribute is used to specify a URI mapping for the endpoint. Additionally the user can add encoder and decoder attributes to encode application objects into WebSocket messages and WebSocket messages into application objects. The following table summarizes the @ServerEndpoint annotation and its attributes: Annotation Attribute Description @ServerEndpoint   This class-level annotation signifies that the Java class is a WebSockets server endpoint.   value The value is the URI with a leading '/.'   encoders The encoders contains a list of Java classes that act as encoders for the endpoint. The classes must implement the Encoder interface.   decoders The decoders contains a list of Java classes that act as decoders for the endpoint. The classes must implement the Decoder interface.   configurator The configurator attribute allows the developer to plug in their implementation of ServerEndpoint.Configurator that is used when configuring the server endpoint.   subprotocols The sub protocols attribute contains a list of sub protocols that the endpoint can support. In this section we shall look at providing encoder and decoder implementations for our WebSockets endpoint. The preceding diagram shows how encoders will take an application object and convert it to a WebSockets message. Decoders will take a WebSockets message and convert to an application object. Here is a simple example where a client sends a WebSockets message to a WebSockets java endpoint that is annotated with @ServerEndpoint and decorated with encoder and decoder class. The decoder will decode the WebSockets message and send back the same message to the client. The encoder will convert the message to a WebSockets message. This sample is also included in the code bundle for the book. Here is the code to define ServerEndpoint with value for encoders and decoders: @ServerEndpoint(value="/book", encoders={MyEncoder.class}, decoders = {MyDecoder.class} ) public class BookCollection { @OnMessage public void onMessage(Book book,Session session) { try { session.getBasicRemote().sendObject(book); } catch (Exception ex) { ex.printStackTrace(); } } @OnOpen public void onOpen(Session session) { System.out.println("Opening socket" +session.getBasicRemote() ); } @OnClose public void onClose(Session session) { System.out.println("Closing socket" + session.getBasicRemote()); } } In the preceding code snippet, you can see the class BookCollection is annotated with @ServerEndpoint. The value=/book attribute provides URI mapping for the endpoint. The @ServerEndpoint also takes the encoders and decoders to be used during the WebSocket transmission. Once a WebSocket connection has been established, a session is created and the method annotated with @OnOpen will be called. When the WebSocket endpoint receives a message, the method annotated with @OnMessage will be called. In our sample the method simply sends the book object using the Session.getBasicRemote() which will get a reference to the RemoteEndpoint and send the message synchronously. Encoders can be used to convert a custom user-defined object in a text message, TextStream, BinaryStream, or BinaryMessage format. 
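Before looking at the encoder and decoder classes themselves, it can help to see the client side of this exchange. The following is a hedged sketch of a minimal browser client for the /book endpoint above; the host, port, and context path (bookapp) are assumptions, and the book JSON mirrors the sample used later in this section.

  // Hedged sketch: a browser client for the echoing /book endpoint.
  var websocket = new WebSocket('ws://localhost:8080/bookapp/book'); // assumed host and context path

  websocket.onopen = function () {
    // Send a book as a JSON string; the server-side decoder turns it into a Book object.
    websocket.send(JSON.stringify({
      "name": "Java 7 JAX-WS Web Services",
      "author": "Deepak Vohra",
      "isbn": "123456789"
    }));
  };

  websocket.onmessage = function (event) {
    // The endpoint sends the same book back; the encoder has turned it into JSON text.
    console.log('Echoed from server: ' + event.data);
  };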
An implementation of an encoder class for text messages is as follows: public class MyEncoder implements Encoder.Text<Book> { @Override public String encode(Book book) throws EncodeException { return book.getJson().toString(); } } As shown in the preceding code, the encoder class implements Encoder.Text<Book>. There is an encode method that is overridden and which converts a book and sends it as a JSON string. (More on JSON APIs is covered in detail in the next chapter) Decoders can be used to decode WebSockets messages in custom user-defined objects. They can decode in text, TextStream, and binary or BinaryStream format. Here is a code for a decoder class: public class MyDecoder implements Decoder.Text<Book> { @Override public Book decode(String string) throws DecodeException { javax.json.JsonObject jsonObject = javax.json.Json.createReader(new StringReader(string)).readObject(); return new Book(jsonObject); } @Override public boolean willDecode(String string) { try { javax.json.Json.createReader(new StringReader(string)).readObject(); return true; } catch (Exception ex) { } return false; } In the preceding code snippet, the Decoder.Text needs two methods to be overridden. The willDecode() method checks if it can handle this object and decode it. The decode() method decodes the string into an object of type Book by using the JSON-P API javax.json.Json.createReader(). The following code snippet shows the user-defined class Book: public class Book { public Book() {} JsonObject jsonObject; public Book(JsonObject json) { this.jsonObject = json; } public JsonObject getJson() { return jsonObject; } public void setJson(JsonObject json) { this.jsonObject = json; } public Book(String message) { jsonObject = Json.createReader(new StringReader(message)).readObject(); } public String toString () { StringWriter writer = new StringWriter(); Json.createWriter(writer).write(jsonObject); return writer.toString(); } } The Book class is a user-defined class that takes the JSON object sent by the client. Here is an example of how the JSON details are sent to the WebSockets endpoints from JavaScript. var json = JSON.stringify({ "name": "Java 7 JAX-WS Web Services", "author":"Deepak Vohra", "isbn": "123456789" }); function addBook() { websocket.send(json); } The client sends the message using websocket.send() which will cause the onMessage() of the BookCollection.java to be invoked. The BookCollection.java will return the same book to the client. In the process, the decoder will decode the WebSockets message when it is received. To send back the same Book object, first the encoder will encode the Book object to a WebSockets message and send it to the client. The Java WebSocket Client API WebSockets and Server-sent Events , covered the Java WebSockets client API. Any POJO can be transformed into a WebSockets client by annotating it with @ClientEndpoint. Additionally the user can add encoders and decoders attributes to the @ClientEndpoint annotation to encode application objects into WebSockets messages and WebSockets messages into application objects. The following table shows the @ClientEndpoint annotation and its attributes: Annotation Attribute Description @ClientEndpoint   This class-level annotation signifies that the Java class is a WebSockets client that will connect to a WebSockets server endpoint.   value The value is the URI with a leading /.   encoders The encoders contain a list of Java classes that act as encoders for the endpoint. The classes must implement the encoder interface.   
decoders The decoders contain a list of Java classes that act as decoders for the endpoint. The classes must implement the Decoder interface.   configurator The configurator attribute allows the developer to plug in their implementation of ClientEndpoint.Configurator, which is used when configuring the client endpoint.   subprotocols The sub protocols attribute contains a list of sub protocols that the endpoint can support.

Sending different kinds of message data: blob/binary
Using JavaScript we can traditionally send JSON or XML as strings. However, HTML5 allows applications to work with binary data to improve performance. WebSockets supports two kinds of binary data: Binary Large Objects (blob) and arraybuffer. A WebSocket can work with only one of the formats at any given time. Using the binaryType property of a WebSocket, you can switch between using blob or arraybuffer:

websocket.binaryType = "blob";
// receive some blob data
websocket.binaryType = "arraybuffer";
// now receive ArrayBuffer data

The following code snippet shows how to display images sent by a server using WebSockets. First, the WebSocket is set up to work with binary data:

websocket.binaryType = 'arraybuffer';

The preceding line sets the binaryType property of websocket to arraybuffer.

websocket.onmessage = function(msg) {
  var arrayBuffer = msg.data;
  var bytes = new Uint8Array(arrayBuffer);
  var image = document.getElementById('image');
  image.src = 'data:image/png;base64,' + encode(bytes);
}

When onmessage is called, the arrayBuffer variable is initialized with msg.data. The Uint8Array type represents an array of 8-bit unsigned integers. The image.src value is set inline using the data URI scheme.

Security and WebSockets
WebSockets are secured using the web container security model. A WebSockets developer can declare whether access to the WebSocket server endpoint needs to be authenticated, who can access it, and whether it needs an encrypted connection. A WebSockets endpoint that is mapped to a ws:// URI is protected in the deployment descriptor by the http:// URI with the same hostname, port, and path, since the initial handshake comes in over the HTTP connection. So, WebSockets developers can assign an authentication scheme, user roles, and a transport guarantee to any WebSockets endpoint. We will take the same sample as we saw in WebSockets and Server-sent Events, and make it a secure WebSockets application. Here is the web.xml for a secure WebSocket endpoint:

<web-app version="3.0"
         xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd">
  <security-constraint>
    <web-resource-collection>
      <web-resource-name>BookCollection</web-resource-name>
      <url-pattern>/index.jsp</url-pattern>
      <http-method>PUT</http-method>
      <http-method>POST</http-method>
      <http-method>DELETE</http-method>
      <http-method>GET</http-method>
    </web-resource-collection>
    <user-data-constraint>
      <description>SSL</description>
      <transport-guarantee>CONFIDENTIAL</transport-guarantee>
    </user-data-constraint>
  </security-constraint>
</web-app>

As you can see in the preceding snippet, we used <transport-guarantee>CONFIDENTIAL</transport-guarantee>. The Java EE specification, followed by the application servers, provides different levels of transport guarantee for the communication between clients and the application server.
The three levels are:

Data Confidentiality (CONFIDENTIAL): We use this level to guarantee that all communication between client and server goes through the SSL layer, and that connections won't be accepted over a non-secure channel.
Data Integrity (INTEGRAL): We can use this level when full encryption is not required, but we want our data to be transmitted to and from a client in such a way that, if anyone changed the data, we could detect the change.
Any type of connection (NONE): We can use this level to force the container to accept connections on both HTTP and HTTPS.

The following steps should be followed to set up SSL and run our sample as a secure WebSockets application deployed in GlassFish.

Generate the server certificate:
keytool -genkey -alias server-alias -keyalg RSA -keypass changeit -storepass changeit -keystore keystore.jks

Export the generated server certificate in keystore.jks into the file server.cer:
keytool -export -alias server-alias -storepass changeit -file server.cer -keystore keystore.jks

Create the trust-store file cacerts.jks and add the server certificate to the trust store:
keytool -import -v -trustcacerts -alias server-alias -file server.cer -keystore cacerts.jks -keypass changeit -storepass changeit

Change the following JVM options so that they point to the location and name of the new keystore. Add this in domain.xml under java-config:
<jvm-options>-Djavax.net.ssl.keyStore=${com.sun.aas.instanceRoot}/config/keystore.jks</jvm-options>
<jvm-options>-Djavax.net.ssl.trustStore=${com.sun.aas.instanceRoot}/config/cacerts.jks</jvm-options>

Restart GlassFish. If you go to https://localhost:8181/helloworld-ws/, you can see the secure WebSocket application.

Here is how the headers look under Chrome Developer Tools: Open the Chrome browser, click on View and then on Developer Tools. Click on Network. Select book under the element name and click on Frames. As you can see in the preceding screenshot, since the application is secured using SSL, the WebSockets URI also contains wss://, which means WebSockets over SSL.

So far we have seen the encoders and decoders for WebSockets messages. We also covered how to send binary data using WebSockets. Additionally, we demonstrated how to secure a WebSockets-based application. We shall now cover the best practices for WebSockets-based applications.
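To tie the wss:// observation above back to code, here is a hedged sketch of how a page served from the HTTPS origin might open the secured endpoint from JavaScript. The URI reuses the host and port from the GlassFish example and the /book path from the earlier sample; both are assumptions and may differ in your deployment.

  // Hedged sketch: opening the endpoint over SSL (wss:// means WebSockets over SSL).
  var secureSocket = new WebSocket('wss://localhost:8181/helloworld-ws/book'); // assumed path

  secureSocket.onopen = function () {
    console.log('Secure WebSocket connection established');
  };

  secureSocket.onerror = function () {
    // With a self-signed certificate the browser may refuse the connection until
    // the certificate has been accepted for the HTTPS origin.
    console.log('Connection failed - check the SSL certificate and origin');
  };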

Packt
28 Oct 2013
6 min read
Save for later

Getting Started with JSON

(For more resources related to this topic, see here.) JSON was developed by Douglas Crockford. It is text-based, lightweight, and a human-readable format for data exchange between clients and servers. JSON is derived from JavaScript and bears a close resemblance to JavaScript objects, but it is not dependent on JavaScript. JSON is language-independent, and support for the JSON data format is available in all the popular languages, some of which are C#, PHP, Java, C++, Python, and Ruby JSON is a format and not a language. Prior to JSON, XML was considered to be the chosen data interchange format. XML parsing required an XML DOM implementation on the client side that would ingest the XML response, and then XPath was used to query the response in order to access and retrieve the data. That made life tedious, as querying for data had to be performed at two levels: first on the server side where the data was being queried from a database, and the second time was on the client side using XPath. JSON does not need any specific implementations; the JavaScript engine in the browser handles JSON parsing. XML messages often tend to be heavy and verbose, and take up a lot of bandwidth while sending the data over a network connection. Once the XML message is retrieved, it has to be loaded into memory to parse it; let us take a look at a students data feed in XML and JSON. The following is an example in XML: Let us take a look at the example in JSON: As we notice, the size of the XML message is bigger when compared to its JSON counterpart, and this is just for two records. A real-time feed will begin with a few thousands and go upwards. Another point to note is the amount of data that has to be generated by the server and then transmitted over the Internet is already big, and XML, as it is verbose, makes it bigger. Given that we are in the age of mobile devices where smart phones and tablets are getting more and more popular by the day, transmitting such large volumes of data on a slower network causes slow page loads, hang ups, and poor user experience, thus driving the users away from the site. JSON has come about to be the preferred Internet data interchange format, to avoid the issues mentioned earlier. Since JSON is used to transmit serialized data over the Internet, we will need to make a note of its MIME type. A MIME (Multipurpose Internet Mail Extensions) type is an Internet media type, which is a two-part identifier for content that is being transferred over the Internet. The MIME types are passed through the HTTP headers of an HTTP Request and an HTTP Response. The MIME type is the communication of content type between the server and the browser. In general, a MIME type will have two or more parts that give the browser information about the type of data that is being sent either in the HTTP Request or in the HTTP Response. The MIME type for JSON data is application/json. If the MIME type headers are not sent across the browser, it treats the incoming JSON as plain text. The Hello World program with JSON Now that we have a basic understanding of JSON, let us work on our Hello World program. This is shown in the screenshot that follows: The preceding program will alert World onto the screen when it is invoked from a browser. Let us pay close attention to the script between the <script> tags. In the first step, we are creating a JavaScript variable and initializing the variable with a JavaScript object. 
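The Hello World listing itself appears only as a screenshot in the original article, so here is a hedged reconstruction of what such a page might contain. The variable name hello_world and the key "Hello" are assumptions, but the behavior matches the description above: the page alerts World.

<script>
  // Hedged reconstruction: a JSON-style object assigned to a JavaScript variable.
  // Note that the key and the string value are both enclosed in double quotes.
  var hello_world = { "Hello": "World" };
  alert(hello_world.Hello); // alerts "World"
</script>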
Similar to how we retrieve data from a JavaScript object, we use the key-value pair to retrieve the value. Simply put, JSON is a collection of key and value pairs, where every key is a reference to the memory location where the value is stored on the computer. Now let us take a step back and analyze why we need JSON, if all we are doing is assigning JavaScript objects that are readily available. The answer is, JSON is a different format altogether, unlike JavaScript, which is a language. JSON keys and values have to be enclosed in double quotes, if either are enclosed in single quotes, we will receive an error. Now, let us take a quick look at the similarities and differences between JSON and a normal JavaScript object. If we were to create a similar JavaScript object like our hello_world JSON variable from the earlier example, it would look like the JavaScript object that follows: The big difference here is that the key is not wrapped in double quotes. Since a JSON key is a string, we can use any valid string for a key. We can use spaces, special characters, and hyphens in our keys, which is not valid in a normal JavaScript object. When we use special characters, hyphens, or spaces in our keys, we have to be careful while accessing them. The reason the preceding JavaScript statement doesn't work is that JavaScript doesn't accept keys with special characters, hyphens, or strings. So we have to retrieve the data using a method where we will handle the JSON object as an associative array with a string key. This is shown in the screenshot that follows: Another difference between the two is that a JavaScript object can carry functions within, while a JSON object cannot carry any functions. The example that follows has the property getName, which has a function that alerts the name John Doe when it is invoked: Finally, the biggest difference is that a JavaScript object was never intended to be a data interchange format, while the sole purpose of JSON was to use it as a data interchange format. Summary This article introduced us to JSON, took us through its history, and its advantages over XML. It focussed on how JSON can be used in web applications for data transfer Resources for Article: Further resources on this subject: Syntax Validation in JavaScript Testing [Article] Enhancing Page Elements with Moodle and JavaScript [Article] Making a Better Form using JavaScript [Article]