How-To Tutorials - Application Development

Python Multimedia: Video Format Conversion, Manipulations and Effects

Packt
10 Dec 2010
11 min read
Python Multimedia: learn how to develop multimedia applications using Python with this practical, step-by-step guide. Use the Python Imaging Library for digital image processing, create exciting 2D cartoon characters with the Pyglet multimedia framework, build GUI-based audio and video players using the QT Phonon framework, and get to grips with the GStreamer multimedia framework and use its API for audio and video processing.

Installation prerequisites

We will use the Python bindings of the GStreamer multimedia framework to process video data. See Python Multimedia: Working with Audios for instructions on installing GStreamer and the other dependencies. For video processing, we will use several GStreamer plugins not introduced earlier. Make sure these plugins are available in your GStreamer installation by running the gst-inspect-0.10 command from the console (gst-inspect-0.10.exe for Windows XP users). Otherwise, you will need to install them or use an alternative if one is available. The additional plugins used in this article are:

autoconvert: Determines an appropriate converter based on the capabilities (caps). It is used extensively throughout this article.
autovideosink: Automatically selects a video sink to display a streaming video.
ffmpegcolorspace: Transforms the color space into a color space format that can be displayed by the video sink.
capsfilter: The capabilities filter, used to restrict the type of media data passing downstream; discussed extensively in this article.
textoverlay: Overlays a text string on the streaming video.
timeoverlay: Adds a timestamp on top of the video buffer.
clockoverlay: Puts the current clock time on the streaming video.
videobalance: Adjusts the brightness, contrast, and saturation of the images; used in the Video manipulations and effects section.
videobox: Crops the video frames by a specified number of pixels; used in the Cropping section.
ffmux_mp4: Provides a muxer element for MP4 video muxing.
ffenc_mpeg4: Encodes data into MPEG-4 format.
ffenc_png: Encodes data into PNG format.

Playing a video

Earlier, we saw how to play audio. Like audio, a video can be streamed in different ways. The simplest of these methods is to use the playbin plugin. Another method is to go back to basics, creating a conventional pipeline and creating and linking the required pipeline elements. If we only want to play the 'video' track of a video file, then the latter technique is very similar to the one illustrated for audio playback. However, almost always, one would like to hear the audio track of the video being streamed, and there is additional work involved to accomplish this. The following diagram is a representative GStreamer pipeline that shows how the data flows in the case of video playback.

In this illustration, the decodebin uses an appropriate decoder to decode the media data from the source element. Depending on the type of data (audio or video), it is then streamed to the audio or video processing elements through the queue elements. The two queue elements, queue1 and queue2, act as media data buffers for audio and video data respectively. When the queue elements are added and linked in the pipeline, thread creation within the pipeline is handled internally by GStreamer.

Time for action – video player!

Let's write a simple video player utility. Here we will not use the playbin plugin; its use will be illustrated in a later sub-section.
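Before building the pipeline, you can confirm from Python that the plugins listed earlier are registered, instead of parsing gst-inspect-0.10 output by hand. The following is a small convenience sketch, not part of the book's utility; it assumes the gst.element_factory_find call is available in your gst-python 0.10 bindings.

import pygst
pygst.require("0.10")
import gst

# Element factories used later in this article.
required = ["autoconvert", "autovideosink", "ffmpegcolorspace", "capsfilter",
            "textoverlay", "timeoverlay", "clockoverlay", "videobalance",
            "videobox", "ffmux_mp4", "ffenc_mpeg4", "ffenc_png"]

for name in required:
    # element_factory_find returns None when the element is not registered.
    factory = gst.element_factory_find(name)
    status = "OK"
    if factory is None:
        status = "MISSING"
    print "%-18s %s" % (name, status)

Any element reported as MISSING needs to be installed (typically from one of the GStreamer plugin packages) before the examples that use it will run.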
We will develop this utility by constructing a GStreamer pipeline. The key here is to use the queue elements as data buffers, so that the audio and video data are directed to 'flow' through the audio or video processing sections of the pipeline respectively.

Download the file PlayingVideo.py from the Packt website. It contains the source code for this video player utility. The following code gives an overview of the VideoPlayer class and its methods.

import time
import thread
import gobject
import pygst
pygst.require("0.10")
import gst
import os

class VideoPlayer:
    def __init__(self):
        pass
    def constructPipeline(self):
        pass
    def connectSignals(self):
        pass
    def decodebin_pad_added(self, decodebin, pad):
        pass
    def play(self):
        pass
    def message_handler(self, bus, message):
        pass

# Run the program
player = VideoPlayer()
thread.start_new_thread(player.play, ())
gobject.threads_init()
evt_loop = gobject.MainLoop()
evt_loop.run()

As you can see, the overall structure of the code and the main program execution remain the same as in the audio processing examples. The thread module is used to create a new thread for playing the video; the method VideoPlayer.play is run on this thread. gobject.threads_init() is an initialization function that facilitates the use of Python threading within the gobject modules. The main event loop for executing this program is created using gobject, and the loop is started by the call evt_loop.run().

Instead of the thread module you can use the threading module as well. The code would look like this:

import threading
threading.Thread(target=player.play).start()

You will need to replace the line thread.start_new_thread(player.play, ()) in the earlier snippet with line 2 of the snippet in this note. Try it yourself!

Now let's discuss a few of the important methods, starting with self.constructPipeline:

1  def constructPipeline(self):
2      # Create the pipeline instance
3      self.player = gst.Pipeline()
4
5      # Define pipeline elements
6      self.filesrc = gst.element_factory_make("filesrc")
7      self.filesrc.set_property("location",
8                                self.inFileLocation)
9      self.decodebin = gst.element_factory_make("decodebin")
10
11     # audioconvert for audio processing pipeline
12     self.audioconvert = gst.element_factory_make(
13         "audioconvert")
14     # Autoconvert element for video processing
15     self.autoconvert = gst.element_factory_make(
16         "autoconvert")
17     self.audiosink = gst.element_factory_make(
18         "autoaudiosink")
19
20     self.videosink = gst.element_factory_make(
21         "autovideosink")
22
23     # As a precaution, add a video capability filter
24     # in the video processing pipeline.
25     videocap = gst.Caps("video/x-raw-yuv")
26     self.filter = gst.element_factory_make("capsfilter")
27     self.filter.set_property("caps", videocap)
28     # Converts the video from one colorspace to another
29     self.colorSpace = gst.element_factory_make(
30         "ffmpegcolorspace")
31
32     self.videoQueue = gst.element_factory_make("queue")
33     self.audioQueue = gst.element_factory_make("queue")
34
35     # Add elements to the pipeline
36     self.player.add(self.filesrc,
37                     self.decodebin,
38                     self.autoconvert,
39                     self.audioconvert,
40                     self.videoQueue,
41                     self.audioQueue,
42                     self.filter,
43                     self.colorSpace,
44                     self.audiosink,
45                     self.videosink)
46
47     # Link elements in the pipeline.
48     gst.element_link_many(self.filesrc, self.decodebin)
49
50     gst.element_link_many(self.videoQueue, self.autoconvert,
51                           self.filter, self.colorSpace,
52                           self.videosink)
53
54     gst.element_link_many(self.audioQueue, self.audioconvert,
55                           self.audiosink)

We have used several of the elements defined in this method in various audio processing applications. First, the pipeline object, self.player, is created. The self.filesrc element specifies the input video file; this element is connected to a decodebin. On line 15, the autoconvert element is created. It is a GStreamer bin that automatically selects a converter based on the capabilities (caps), translating the decoded data coming out of the decodebin into a format playable by the video device. Note that before reaching the video sink, this data travels through a capsfilter and an ffmpegcolorspace converter. The capsfilter element is defined on line 26. It is a filter that restricts the allowed capabilities, that is, the type of media data that will pass through it. In this case, the videocap object defined on line 25 instructs the filter to allow only video/x-raw-yuv data.

The ffmpegcolorspace plugin converts video frames to a different color space format. At this point it is worth explaining what a color space is. A range of colors can be created from a set of basic colors; such a set forms what we call a color space. A common example is the RGB color space, where a range of colors can be created from combinations of red, green, and blue. Color space conversion is the representation of a video frame or an image from one color space in another, done in such a way that the converted frame or image is a close representation of the original.

The video can be streamed even without the combination of capsfilter and ffmpegcolorspace; however, the video may appear distorted, so using them is recommended. Try linking the autoconvert element directly to the autovideosink to see if it makes any difference.

Notice that we have created two sinks, one for audio output and the other for video. The two queue elements are created on lines 32 and 33. As mentioned earlier, these act as media data buffers and are used to send data to the audio and video processing portions of the GStreamer pipeline. The code block on lines 35-45 adds all the required elements to the pipeline.

Next, the various elements in the pipeline are linked. As we already know, the decodebin is a plugin that determines the right type of decoder to use. This element uses dynamic pads. While developing the audio processing utilities, we connected the pad-added signal from decodebin to the method decodebin_pad_added. We will do the same thing here; however, the contents of this method will be different, as discussed shortly.

On lines 50-52, the video processing portion of the pipeline is linked. The self.videoQueue receives the video data from the decodebin. It is linked to the autoconvert element discussed earlier. The capsfilter allows only video/x-raw-yuv data to stream further. The capsfilter is linked to an ffmpegcolorspace element, which converts the data into a different color space. Finally, the data is streamed to the videosink, which, in this case, is an autovideosink element. This enables the 'viewing' of the input video.

Now we will review the decodebin_pad_added method.
1  def decodebin_pad_added(self, decodebin, pad):
2      compatible_pad = None
3      caps = pad.get_caps()
4      name = caps[0].get_name()
5      print "\n cap name is = %s" % name
6      if name[:5] == 'video':
7          compatible_pad = (
8              self.videoQueue.get_compatible_pad(pad, caps) )
9      elif name[:5] == 'audio':
10         compatible_pad = (
11             self.audioQueue.get_compatible_pad(pad, caps) )
12
13     if compatible_pad:
14         pad.link(compatible_pad)

This method captures the pad-added signal, emitted when the decodebin creates a dynamic pad. The media data can represent either audio or video; thus, when a dynamic pad is created on the decodebin, we must check what caps this pad has. The get_name method of the caps object returns the type of media data handled: for example, the name is of the form video/x-raw-rgb for video data or audio/x-raw-int for audio data. We check just the first five characters to see whether it is a video or audio media type; this is done by lines 4-11 of the code snippet. The decodebin pad with a video media type is linked with a compatible pad on the self.videoQueue element. Similarly, the pad with audio caps is linked with one on self.audioQueue.

Review the rest of the code in PlayingVideo.py. Make sure you specify an appropriate video file path for the variable self.inFileLocation and then run this program from the command prompt as:

$python PlayingVideo.py

This should open a GUI window where the video will be streamed, with the audio output synchronized with the playing video.

What just happened?

We created a command-line video player utility and learned how to create a GStreamer pipeline that plays synchronized audio and video streams. The example showed how the queue element can be used to buffer the audio and video data in a pipeline, and illustrated GStreamer plugins such as capsfilter and ffmpegcolorspace. The knowledge gained in this section will be applied in the upcoming sections of this article.

Playing video using 'playbin'

The goal of the previous section was to introduce the fundamental method of processing input video streams; we will use that method in one way or another in later discussions. If video playback is all you want, the simplest way to accomplish it is with the playbin plugin. The video can be played just by replacing the VideoPlayer.constructPipeline method in PlayingVideo.py with the following code. Here, self.player is a playbin element, and the uri property of playbin is set to the input video file path.

def constructPipeline(self):
    self.player = gst.element_factory_make("playbin")
    self.player.set_property("uri",
                             "file:///" + self.inFileLocation)
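If you want to try playbin outside the VideoPlayer class, the following self-contained sketch plays a file and exits cleanly at end-of-stream. It assumes the same pygst 0.10 environment used throughout this article; the video path is a placeholder you will need to change.

import gobject
import pygst
pygst.require("0.10")
import gst

VIDEO_PATH = "/path/to/your/video.avi"   # placeholder; point this at a real file

gobject.threads_init()

player = gst.element_factory_make("playbin")
player.set_property("uri", "file://" + VIDEO_PATH)

loop = gobject.MainLoop()

def on_message(bus, message):
    # Stop the main loop when the stream ends or an error is reported.
    if message.type == gst.MESSAGE_EOS:
        player.set_state(gst.STATE_NULL)
        loop.quit()
    elif message.type == gst.MESSAGE_ERROR:
        err, debug = message.parse_error()
        print "Error:", err
        player.set_state(gst.STATE_NULL)
        loop.quit()

bus = player.get_bus()
bus.add_signal_watch()
bus.connect("message", on_message)

player.set_state(gst.STATE_PLAYING)
loop.run()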

Microsoft Enterprise Library: Authorization and Security Cache

Packt
09 Dec 2010
6 min read
Microsoft Enterprise Library 5.0: develop enterprise applications using the reusable software components of Microsoft Enterprise Library 5.0. Develop enterprise applications using the Enterprise Library Application Blocks, set up the initial infrastructure configuration of the Application Blocks using the configuration editor, and follow a step-by-step tutorial to gradually configure each Application Block and implement its functions to develop the required enterprise application.

Understanding Authorization Providers

An Authorization Provider is simply a class that provides authorization logic; technically, it implements either the IAuthorizationProvider interface or an abstract class named AuthorizationProvider, and provides its authorization logic in the Authorize method. As mentioned previously, the Security Application Block provides two Authorization Providers out of the box, AuthorizationRuleProvider and AzManAuthorizationProvider, both implementing the abstract class AuthorizationProvider available in the Microsoft.Practices.EnterpriseLibrary.Security namespace. This abstract class in turn implements the IAuthorizationProvider interface, which defines the basic functionality of an Authorization Provider: it exposes a single method named Authorize, which accepts an instance of IPrincipal and the name of the rule to evaluate. Custom providers can be implemented either by implementing the IAuthorizationProvider interface or by deriving from the abstract AuthorizationProvider class.

An IPrincipal instance (GenericPrincipal, WindowsPrincipal, PassportPrincipal, and so on) represents the security context of the user on whose behalf the code is running; it also includes the user's identity, represented as an instance of IIdentity (GenericIdentity, FormsIdentity, WindowsIdentity, PassportIdentity, and so on). The following diagram shows the members and inheritance hierarchy of the respective class and interface:

Authorization Rule Provider

The AuthorizationRuleProvider class is an implementation that evaluates Boolean expressions to determine whether objects are authorized; these expressions, or rules, are stored in the configuration file. We can create authorization rules using the Rule Expression Editor, part of the Enterprise Library configuration tool, and validate them using the Authorize method of the Authorization Provider. This authorization provider is part of the Microsoft.Practices.EnterpriseLibrary.Security namespace.

Authorizing using Authorization Rule Provider

The Authorization Rule Provider stores authorization rules in the configuration, and this is one of the simplest ways to perform authorization. Basically, we need to configure the block to use the Authorization Rule Provider and supply the authorization rules on which authorization will be performed. Let us add the Authorization Rule Provider as our Authorization Provider: click on the plus symbol on the right side of Authorization Providers and navigate to the Add Authorization Rule Provider menu item. The following screenshot shows the configuration options of the Add Authorization Rule Provider menu item:

The following screenshot shows the default configuration of the newly added Authorization Provider, in this case the Authorization Rule Provider:

Now we have the Authorization Rule Provider added to the configuration, but we still need to add the authorization rules.
Imagine that we have a business scenario where:

We have to allow only users belonging to the administrator role to add or delete products.
We should allow all authenticated customers to view the products.

This scenario is quite common: certain operations can be performed only by specific roles; in other words, role-based authorization. To fulfill this requirement, we will have to add three different rules for the add, delete, and view operations.

Right-click on the Authorization Rule Provider and click on the Add Authorization Rule menu item as shown in the following screenshot. The following screenshot shows the newly added Authorization Rule:

Let us update the name of the rule to "Product.Add" to represent the operation for which the rule is configured. We will provide the rule using the Rule Expression Editor; click on the button in the right corner to open it. The requirement is to allow only the administrator role to perform this action. The following actions need to be performed to configure the rule:

Click on the Role button to add the role expression: R.
Enter the role name next to the role expression: R:Admin.
Select the checkbox Is Authenticated to allow only authenticated users.

The following screenshot displays the Rule Expression Editor dialog box with the expression configured to R:Admin. The following screenshot shows the Rule Expression property set to R:Admin.

Now let us add the rule for the product delete operation. This rule is configured in a similar fashion, and the resulting configuration will be similar. The following screenshot displays the added authorization rule named Product.Delete with the configured Rule Expression:

Next, we have to allow all authenticated customers to view the products. We want authorization to pass if the user is in either the Customer role or the Admin role; only then will the user be able to view products. We will add another rule called Product.View and configure the rule expression using the Rule Expression Editor as given next. While configuring the rule, use the OR operator to specify that either Admin or Customer can perform this operation. The following screenshot displays the added authorization rule named Product.View with the configured Rule Expression:

Now that we have the configuration ready, let us get our hands dirty with some code. Before authorizing, we need to authenticate the user; based on the authentication requirement, we could use either an out-of-the-box authentication mechanism or custom authentication. Assuming that we are using the current Windows identity, the following steps allow us to authorize specific operations by passing the Windows principal while invoking the Authorize method of the Authorization Provider.

The first step is to get the IIdentity and IPrincipal based on the authentication mechanism; we use the current Windows identity for this sample.

WindowsIdentity windowsIdentity = WindowsIdentity.GetCurrent();
WindowsPrincipal windowsPrincipal = new WindowsPrincipal(windowsIdentity);

Next, create an instance of the configured Authorization Provider using the AuthorizationFactory.GetAuthorizationProvider method; in our case we will get an instance of the Authorization Rule Provider.
IAuthorizationProvider authzProvider =
    AuthorizationFactory.GetAuthorizationProvider("Authorization Rule Provider");

Now use the Authorization Provider instance to authorize the operation by passing the IPrincipal instance and the rule name.

bool result = authzProvider.Authorize(windowsPrincipal, "Product.Add");

AuthorizationFactory.GetAuthorizationProvider also has an overload that takes no parameters and returns the default authorization provider set in the configuration.

AzMan Authorization Provider

The AzManAuthorizationProvider class provides the ability to define the individual operations of an application, which can then be grouped together to form a task. Each individual operation or task can then be assigned the roles allowed to perform it. The best part of Authorization Manager is that it provides an administration tool, as a Microsoft Management Console (MMC) snap-in, to manage users, roles, operations, and tasks. Policy administrators can configure an Authorization Manager policy store in Active Directory, in an Active Directory Application Mode (ADAM) store, or in an XML file. This authorization provider is part of the Microsoft.Practices.EnterpriseLibrary.Security namespace.
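Because the block also supports custom providers, the following is a minimal sketch of one, assuming the Authorize(IPrincipal, string) signature described earlier. The class name is hypothetical, and the configuration attributes Enterprise Library expects when registering a custom provider are omitted for brevity.

using System.Security.Principal;
using Microsoft.Practices.EnterpriseLibrary.Security;

public class RoleNameAuthorizationProvider : AuthorizationProvider
{
    // Treats the rule name as a role name: Authorize(principal, "Admin")
    // passes only for authenticated users in the Admin role.
    public override bool Authorize(IPrincipal principal, string context)
    {
        if (principal == null || !principal.Identity.IsAuthenticated)
        {
            return false;
        }
        return principal.IsInRole(context);
    }
}

Once registered in the configuration, such a provider is resolved through AuthorizationFactory.GetAuthorizationProvider just like the built-in providers.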

Microsoft Enterprise Library: Security Application Block

Packt
09 Dec 2010
5 min read
Microsoft Enterprise Library 5.0: develop enterprise applications using the reusable software components of Microsoft Enterprise Library 5.0. Develop enterprise applications using the Enterprise Library Application Blocks, set up the initial infrastructure configuration of the Application Blocks using the configuration editor, and follow a step-by-step tutorial to gradually configure each Application Block and implement its functions to develop the required enterprise application.

The first step is the process of validating an identity against a store (Active Directory, a database, and so on); this is commonly called Authentication. The second step is the process of verifying whether the validated identity is allowed to perform certain actions; this is commonly known as Authorization. These two security mechanisms ensure that only known identities can access the application and perform their respective actions.

Although, with the advent of new tools and technologies, it is not difficult to safeguard an application, using these authentication and authorization mechanisms and implementing security correctly across different types of applications, and across different layers, in a consistent manner is challenging for developers. Also, while security is an important factor, it is of no use if the application's performance is dismal, so a good design should also consider performance and cache the outcome of authentication and authorization for repeated use. The Security Application Block provides a very simple and consistent way to implement authorization and credential-caching functionality in our applications.

Authorization doesn't belong to one particular layer; it is a best practice to authorize user actions not only in the UI layer but also in the business logic layer. As the Enterprise Library application blocks are layer-agnostic, we can leverage the same authorization rules and expect the same outcome across different layers, bringing consistency.

Authorization of user actions can be performed using an Authorization Provider; the block provides the Authorization Rule Provider and the AzMan Authorization Provider, and it also offers the flexibility of implementing a custom authorization provider. Caching of security credentials is provided by the SecurityCacheProvider, which leverages the Caching Application Block; a custom caching provider can also be implemented using the extension points. Both the Authorization and Security Cache providers are configured in the configuration file, which allows the provider to be changed at any time without recompilation.

The following are the key features of the Security block:

The Security Application Block provides a simple and consistent API to implement authorization.
It abstracts the application code from the security providers through configuration.
It provides the Authorization Rule Provider to store rules in a configuration file, and the Windows Authorization Manager (AzMan) Authorization Provider to authorize against Active Directory, an XML file, or a database.
It offers the flexibility to implement custom Authorization Providers.
It provides token generation and caching of authenticated IIdentity, IPrincipal, and Profile objects.
It provides user identity cache management, which improves performance when repeatedly authenticating users using cached security credentials.
It offers the flexibility to extend and implement custom Security Cache Providers.

Developing an application

We will explore each individual Security block feature, and along the way we will understand the concepts behind the individual elements.
This will help us get up to speed with the basics. To get started, we will do the following:

Reference the Security block assemblies
Add the required namespaces
Set up the initial configuration

To complement the concepts and allow you to gain quick hands-on experience of the different features of the Security Application Block, we have created a sample web application project with three additional projects, DataProvider, BusinessLayer, and BusinessEntities, to demonstrate the features. The application leverages the SQL Membership, Role, and Profile providers for authentication, role management, and profiling needs. Before running the web application you will have to run the database generation script provided in the DBScript folder of the solution and update the connection string in web.config appropriately. You might have to open the solution in "Administrator" mode, depending on your development environment. Also, create an application pool with an identity that has the required privileges to access the development SQL Server database, and map the application pool to the website. A screenshot of the sample application is shown as follows:

Referencing required/optional assemblies

For the purposes of this demonstration we will reference the non-strong-named assemblies, but based on individual requirements the Microsoft strong-named assemblies, or a modified set of custom assemblies, can be referenced instead. The list of Enterprise Library assemblies required to leverage the Security Application Block functionality is given next; a few assemblies are optional, depending on the Authorization Provider and cache storage mechanism used.

The following table lists the required/optional assemblies:

Microsoft.Practices.EnterpriseLibrary.Common.dll: Required
Microsoft.Practices.ServiceLocation.dll: Required
Microsoft.Practices.Unity.dll: Required
Microsoft.Practices.Unity.Interception.dll: Required
Microsoft.Practices.Unity.Configuration.dll: Optional (useful while utilizing Unity configuration classes in our code)
Microsoft.Practices.EnterpriseLibrary.Security.dll: Required
Microsoft.Practices.EnterpriseLibrary.Security.AzMan.dll: Optional (used for the Windows Authorization Manager provider)
Microsoft.Practices.EnterpriseLibrary.Security.Cache.CachingStore.dll: Optional (used for caching the user identity)
Microsoft.Practices.EnterpriseLibrary.Data.dll: Optional (used for caching in database cache storage)

Open Visual Studio 2008/2010 and create a new ASP.NET Web Application project by selecting File | New | Project | ASP.NET Web Application; provide an appropriate name for the solution and the desired project location. Initially, the application will have a default web form and assembly references. In the Solution Explorer, right-click on the References section, click on Add Reference, and go to the Browse tab. Next, navigate to the Enterprise Library 5.0 installation location; the default install location is %Program Files%\Microsoft Enterprise Library 5.0\Bin. Now select all the assemblies listed in the previous table, excluding the AzMan-related assembly (Microsoft.Practices.EnterpriseLibrary.Security.AzMan.dll). The final assembly selection will look similar to the following screenshot:
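With the assemblies referenced, the corresponding namespaces can be imported and the provider resolved through AuthorizationFactory. The following is a hedged sketch: "Authorization Rule Provider" is assumed to be the provider name used in the configuration, and the optional namespaces are only needed when the matching providers are used.

using System.Security.Principal;
using Microsoft.Practices.EnterpriseLibrary.Security;
// Optional: Windows Authorization Manager (AzMan) provider
// using Microsoft.Practices.EnterpriseLibrary.Security.AzMan;
// Optional: caching the user identity via the Caching Application Block
// using Microsoft.Practices.EnterpriseLibrary.Security.Cache.CachingStore;

public static class SecurityBlockSample
{
    public static bool IsAllowed(IPrincipal principal, string ruleName)
    {
        // Resolve the provider configured under this name and evaluate the rule.
        IAuthorizationProvider provider =
            AuthorizationFactory.GetAuthorizationProvider("Authorization Rule Provider");
        return provider.Authorize(principal, ruleName);
    }
}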

Using Datastore Transactions in Google App Engine

Packt
30 Nov 2010
12 min read
Google App Engine Java and GWT Application Development: build powerful, scalable, and interactive web applications in the cloud. Comprehensive coverage of building scalable, modular, and maintainable applications with GWT and GAE using Java; leverage the Google App Engine services and enhance your app's functionality and performance; integrate your application with Google Accounts, Facebook, and Twitter; safely deploy, monitor, and maintain your GAE applications. A practical guide with a step-by-step approach that helps you build an application in stages.

As the App Engine documentation states, a transaction is a Datastore operation, or a set of Datastore operations, that either succeed completely or fail completely. If the transaction succeeds, then all of its intended effects are applied to the Datastore. If the transaction fails, then none of the effects are applied.

The use of transactions can be key to the stability of a multiprocess application (such as a web app) whose different processes share the same persistent Datastore. Without transactional control, the processes can overwrite each other's data updates midstream, essentially stomping all over each other's toes. Many database implementations support some form of transactions, and you may be familiar with RDBMS transactions. App Engine Datastore transactions have a different set of requirements and a different usage model than you may be used to.

First, it is important to understand that a "regular" Datastore write on a given entity is atomic, in the sense that if you are updating multiple fields in that entity, they will either all be updated or the write will fail and none of the fields will be updated. Thus, a single update can essentially be considered a small, implicit transaction, one that you as the developer do not explicitly declare. If one single update is initiated while another update on that entity is in progress, this can generate a "concurrency failure" exception. In more recent versions of App Engine, such failures on single writes are retried transparently by App Engine, so you rarely need to deal with them in application-level code.

However, your application often needs stronger control over the atomicity and isolation of its operations, as multiple processes may be trying to read and write the same objects at the same time. Transactions provide this control.

For example, suppose we are keeping a count of some value in a "counter" field of an object, which various methods can increment. It is important to ensure that if one Servlet reads the "counter" field and then updates it based on its current value, no other request updates the same field between the time its value is read and the time it is written back. Transactions let you ensure that this is the case: if a transaction succeeds, it is as if it were done in isolation, with no other concurrent processes 'dirtying' its data.

Another common scenario: you may be making multiple changes to the Datastore, and you may want to ensure that the changes either all go through atomically or none do. For example, when adding a new Friend to a UserAccount, we want to make sure that if the Friend is created, any related UserAccount object changes are also performed.

While a Datastore transaction is ongoing, no other transactions or operations can see the work being done in that transaction; it becomes visible only if the transaction succeeds.
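As a concrete illustration of the counter scenario described above, here is a hedged JDO sketch. It assumes a persistence-capable class (the hypothetical Counter, with a Long ID and an int count field with accessors) and the PMF helper used elsewhere in this article; it is a sketch of the pattern, not code from the book.

import javax.jdo.PersistenceManager;
import javax.jdo.Transaction;

public class CounterService {

    public void increment(Long counterId) {
        PersistenceManager pm = PMF.get().getPersistenceManager();
        Transaction tx = pm.currentTransaction();
        try {
            tx.begin();
            // The read and the write happen inside one transaction, so no
            // other request can update the counter between the two steps.
            Counter counter = pm.getObjectById(Counter.class, counterId);
            counter.setCount(counter.getCount() + 1);
            tx.commit();
        } finally {
            if (tx.isActive()) {
                tx.rollback();
            }
            pm.close();
        }
    }
}

If the commit fails because another request touched the same entity group, the finally block rolls the work back; the retry pattern shown later in this article can be wrapped around this method.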
Additionally, queries inside a transaction see a consistent "snapshot" of the Datastore as it was when the transaction was initiated. This consistent snapshot is preserved even after the in-transaction writes are performed. Unlike some other transaction models, with App Engine a within-transaction read after a write will still show the Datastore as it was at the beginning of the transaction.

Datastore transactions can operate only on entities that are in the same entity group. We discuss entity groups later in this article.

Transaction commits and rollbacks

To specify a transaction, we need the concepts of a transaction commit and rollback. A transaction must make an explicit "commit" call when all of its actions have been completed. On a successful transaction commit, all of the create, update, and delete operations performed during the transaction are applied atomically. If a transaction is rolled back, none of its Datastore modifications will be performed. If you do not commit a transaction, it will be rolled back automatically when its Servlet exits. However, it is good practice to wrap a transaction in a try/finally block and explicitly perform a rollback if the commit was not performed for some reason. This could occur, for example, if an exception was thrown. If a transaction commit fails, as would be the case if the objects under its control had been modified by some other process since the transaction was started, the transaction is automatically rolled back.

Example—a JDO transaction

With JDO, a transaction is initiated and terminated as follows:

import javax.jdo.PersistenceManager;
import javax.jdo.Transaction;
...
PersistenceManager pm = PMF.get().getPersistenceManager();
Transaction tx;
...
try {
    tx = pm.currentTransaction();
    tx.begin();
    // Do the transaction work
    tx.commit();
}
finally {
    if (tx.isActive()) {
        tx.rollback();
    }
}

A transaction is obtained by calling the currentTransaction() method of the PersistenceManager. Then, initiate the transaction by calling its begin() method. To commit the transaction, call its commit() method. The finally clause in the example above checks whether the transaction is still active, and does a rollback if that is the case.

While the preceding code is correct as far as it goes, it does not check whether the commit was successful and retry if it was not. We will add that next.

App Engine transactions use optimistic concurrency

In contrast to some other transactional models, the initiation of an App Engine transaction is never blocked. However, when the transaction attempts to commit, if there has been a modification in the meantime (by some other process) of any objects in the same entity group as the objects involved in the transaction, the transaction commit will fail. That is, the commit fails not only if the objects in the transaction have been modified by some other process, but also if any objects in its entity group have been modified. For example, if one request modifies a FeedInfo object while its FeedIndex child is involved in a transaction as part of another request, that transaction will not successfully commit, as those two objects share an entity group.

App Engine uses an optimistic concurrency model. This means that there is no check when the transaction initiates as to whether the transaction's resources are currently involved in some other transaction, and no blocking on transaction start.
The commit simply fails if it turns out that these resources have been modified elsewhere after the transaction was initiated. Optimistic concurrency tends to work well in scenarios where a quick response is valuable (as is the case with web apps) but contention is rare, and thus transaction failures are relatively rare.

Transaction retries

With optimistic concurrency, a commit can fail simply due to concurrent activity on the shared resource. In that case, if the transaction is retried, it is likely to succeed. So, one thing missing from the previous example is that it does not take any action if the transaction commit did not succeed. Typically, if a commit fails, it is worth simply retrying the transaction; if there is some contention for the objects in the transaction, it will probably be resolved when it is retried.

PersistenceManager pm = PMF.get().getPersistenceManager();
// ...
try {
    for (int i = 0; i < NUM_RETRIES; i++) {
        pm.currentTransaction().begin();
        // ...do the transaction work ...
        try {
            pm.currentTransaction().commit();
            break;
        }
        catch (JDOCanRetryException e1) {
            if (i == (NUM_RETRIES - 1)) {
                throw e1;
            }
        }
    }
}
finally {
    if (pm.currentTransaction().isActive()) {
        pm.currentTransaction().rollback();
    }
    pm.close();
}

As shown in the example above, you can wrap a transaction in a retry loop, where NUM_RETRIES is set to the number of times you want to re-attempt the transaction. If a commit fails, a JDOCanRetryException will be thrown; if the commit succeeds, the for loop is terminated by the break. If a transaction commit fails, this likely means that the Datastore has changed in the interim. So, the next time through the retry loop, be sure to start over in gathering any information required to perform the transaction.

Transactions and entity groups

An entity's entity group is determined by its key. When an entity is created, its key can be defined as a child of another entity's key, which becomes its parent; the child is then in the same entity group as the parent. That child's key could in turn be used to define another entity's key, which becomes its child, and so on. An entity's key can be viewed as a path of ancestor relationships, traced back to a root entity with no parent. Every entity with the same root is in the same entity group. If an entity has no parent, it is its own root. Because entity group membership is determined by an entity's key, and the key cannot be changed after the object is created, entity group membership cannot be changed either.

As introduced earlier, a transaction can only operate on entities from the same entity group. If you try to access entities from different groups within the same transaction, an error will occur and the transaction will fail.

In App Engine, JDO owned relationships place the parent and child entities in the same entity group. That is why, when constructing an owned relationship, you cannot explicitly persist the children ahead of time, but must let the JDO implementation create them for you when the parent is made persistent. JDO defines the keys of the children in an owned relationship such that they are child keys of the parent object's key. This means that the parent and children in a JDO owned relationship can always be safely used in the same transaction (the same holds for JPA owned relationships). So in the Connectr app, for example, you could create a transaction that encompasses work on a UserAccount object and its list of Friends—they will all be in the same entity group.
But you could not include a Friend from a different UserAccount in that same transaction—it will not be in the same entity group. This App Engine constraint on transactions—that they can only encompass members of the same entity group—is enforced in order to allow transactions to be handled in a scalable way across App Engine's distributed Datastore. Entity group members are always stored together, not distributed.

Creating entities in the same entity group

As discussed earlier, one way to place entities in the same entity group is to create a JDO owned relationship between them; JDO will manage the child key creation so that the parent and children are in the same entity group. To explicitly create an entity with an entity group parent, you can use the App Engine KeyFactory.Builder class. This is the approach used in the FeedIndex constructor example shown previously. Recall that you cannot change an object's key after it is created, so you have to make this decision when you are creating the object. Your "child" entity must use a primary key of type Key or String-encoded Key; these key types allow parent path information to be encoded in them. As you may recall, it is required to use one of these two key types for JDO owned relationship children, for the same reason.

If the data class of the object for which you want to create an entity group parent uses an app-assigned string ID, you can build its key as follows:

// you can construct a Builder as follows:
KeyFactory.Builder keyBuilder =
    new KeyFactory.Builder(Class1.class.getSimpleName(), parentIDString);
// alternatively, pass the parent Key object:
// Key pkey = ...;
// KeyFactory.Builder keyBuilder = new KeyFactory.Builder(pkey);

// Then construct the child key
keyBuilder.addChild(Class2.class.getSimpleName(), childIDString);
Key ckey = keyBuilder.getKey();

Create a new KeyFactory.Builder using the key of the desired parent. You may specify the parent key as either a Key object or via its entity name (the simple name of its class) and its app-assigned (String) or system-assigned (numeric) ID, as appropriate. Then, call the addChild method of the Builder with its arguments—the entity name and the app-assigned ID string that you want to use. Then, call the getKey() method of the Builder. The generated child key encodes parent path information. Assign the result to the child entity's key field. When the entity is persisted, its entity group parent will be the entity whose key was used as the parent. This is the approach we showed previously in the constructor of FeedIndex, creating its key using its parent FeedInfo key. See http://code.google.com/appengine/docs/java/javadoc/com/google/appengine/api/datastore/KeyFactory.Builder.html for more information on key construction.

If the data class of the object for which you want to create an entity group parent uses a system-assigned ID, then (because you don't know this ID ahead of time) you must create the key in a different way. Create an additional field in your data class for the parent key, of the appropriate type for the parent key, as shown in the following code:

@PrimaryKey
@Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
private Key key;
...
@Persistent
@Extension(vendorName="datanucleus", key="gae.parent-pk", value="true")
private String parentKey;

Assign the parent key to this field prior to creating the object. When the object is persisted, the data object's primary key field will be populated using the parent key as the entity group parent.
You can use this technique with any child key type.
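To make the system-assigned-ID case concrete, here is a hedged sketch. The Comment class and its fields are hypothetical; the gae.parent-pk extension and KeyFactory.keyToString are used as described above, and PMF is the helper used elsewhere in this article.

import javax.jdo.annotations.Extension;
import javax.jdo.annotations.IdGeneratorStrategy;
import javax.jdo.annotations.PersistenceCapable;
import javax.jdo.annotations.Persistent;
import javax.jdo.annotations.PrimaryKey;
import com.google.appengine.api.datastore.Key;
import com.google.appengine.api.datastore.KeyFactory;

@PersistenceCapable
public class Comment {

    @PrimaryKey
    @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
    private Key key;               // system-assigned child key

    @Persistent
    @Extension(vendorName = "datanucleus", key = "gae.parent-pk", value = "true")
    private String parentKey;      // entity group parent, String-encoded

    @Persistent
    private String text;

    public Comment(Key parent, String text) {
        // Must be set before makePersistent; the key cannot change later.
        this.parentKey = KeyFactory.keyToString(parent);
        this.text = text;
    }
}

// Usage (assuming the PMF helper used elsewhere in this article):
//   Comment c = new Comment(someParentEntityKey, "hello");
//   pm.makePersistent(c);   // c's key is generated as a child of the parent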

Data Modeling and Scalability in Google App Engine

Packt
30 Nov 2010
12 min read
Google App Engine Java and GWT Application Development: build powerful, scalable, and interactive web applications in the cloud. Comprehensive coverage of building scalable, modular, and maintainable applications with GWT and GAE using Java; leverage the Google App Engine services and enhance your app's functionality and performance; integrate your application with Google Accounts, Facebook, and Twitter; safely deploy, monitor, and maintain your GAE applications. A practical guide with a step-by-step approach that helps you build an application in stages.

In deciding how to design your application's data models, there are a number of ways in which your approach can increase the app's scalability and responsiveness. Here, we discuss several such approaches and how they are applied in the Connectr app. In particular, we describe how Datastore access latency can sometimes be reduced, ways to split data models across entities to increase the efficiency of data object access and use, and how property lists can be used to support "join-like" behavior with Datastore entities.

Reducing latency—read consistency and Datastore access deadlines

By default, when an entity is updated in the Datastore, all subsequent reads of that entity will see the update at the same time; this is called strong consistency. To achieve it, each entity has a primary storage location, and with a strongly consistent read, the read waits for a machine at that location to become available. Strong consistency is the default in App Engine.

However, App Engine allows you to change this default and use eventual consistency for a given Datastore read. With eventual consistency, the query may access a copy of the data from a secondary location if the primary location is temporarily unavailable. Changes to data will propagate to the secondary locations fairly quickly, but it is possible that an "eventually consistent" read may access a secondary location before the changes have been incorporated. However, eventually consistent reads are faster on average, so they trade consistency for availability. In many contexts, for example with web apps such as Connectr that display "activity stream" information, this is an acceptable tradeoff—completely up-to-date freshness of information is not required.

See http://googleappengine.blogspot.com/2010/03/read-consistency-deadlines-more-control.html, http://googleappengine.blogspot.com/2009/09/migration-to-better-datastore.html, and http://code.google.com/events/io/2009/sessions/TransactionsAcrossDatacenters.html for more background on this and related topics.

In Connectr, we will add the use of eventual consistency to some of our feed object reads; specifically, those for feed content updates. We are willing to take the small chance that a feed object is slightly out of date in order to have the advantage of quicker reads on these objects. The following code shows how to set eventual read consistency for a query, using server.servlets.FeedUpdateFriendServlet as an example.

Query q = pm.newQuery("select from " + FeedInfo.class.getName() +
                      " where urlstring == :keys");
// Use eventual read consistency for this query
q.addExtension("datanucleus.appengine.datastoreReadConsistency", "EVENTUAL");

App Engine also allows you to change the default Datastore access deadline. By default, the Datastore will retry access automatically for up to about 30 seconds. You can set this deadline to a smaller amount of time.
It can often be appropriate to set a shorter deadline if you are concerned with response latency and are willing to use a cached version of the data for which you got the timeout, or to do without it. The following code shows how to set an access timeout interval (in milliseconds) for a given JDO query.

Query q = pm.newQuery("...");
// Set a Datastore access timeout
q.setTimeoutMillis(10000);

Splitting big data models into multiple entities to make access more efficient

Often, the fields in a data model can be divided into two groups: the main and/or summary information that you need often or first, and the details—the data that you might not need or tend not to need immediately. If this is the case, it can be productive to split the data model into multiple entities and make the details entity a child of the summary entity, for instance by using JDO owned relationships. The child field is fetched lazily, so the child entity won't be pulled in from the Datastore unless needed.

In our app, the Friend model can be viewed like this: initially, only a certain amount of summary information about each Friend is sent over RPC to the app's frontend (the Friend's name). Only if there is a request to view or edit the details of a particular Friend is more information needed. So, we can make retrieval more efficient by defining a parent summary entity and a child details entity. We do this by keeping the "summary" information in Friend, and placing the "details" in a FriendDetails object, which is set as a child of Friend via a JDO bidirectional, one-to-one owned relationship, as shown in Figure 1. We store the Friend's e-mail address and its list of associated URLs in FriendDetails, and keep the name information in Friend. That way, when we construct the initial 'FriendSummaries' list displayed on application load and send it over RPC, we only need to access the summary object.

Figure 1: Splitting Friend data between a "main" Friend persistent class and a FriendDetails child class.

A details field of Friend points to the FriendDetails child, which we create when we create a Friend. In this way, the details will always be transparently available when we need them, but they are lazily fetched—the details child object is not initially retrieved from the database when we query Friend, and is not fetched unless we need that information. As you may have noticed, the Friend model is already set up in this manner—this is the rationale for that design.

Discussion

When splitting a data model like this, consider the queries your app will perform and how the design of the data objects will support those queries. For example, if your app often needs to query for property1 == x and property2 == y, and especially if both individual filters can produce large result sets, you are probably better off keeping both those properties on the same entity (for example, retaining both fields on the "main" entity rather than moving one to a "details" entity).

For persistent classes (that is, "data classes") that you often access and update, it is also worth considering whether any of their fields do not require indexes. This would be the case if you never perform a query that includes that field. The fewer the indexed fields of a persistent class, the quicker the writes of objects of that class.
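If you decide a field never needs to appear in a query filter or sort order, you can tell the Datastore not to index it. The following is a hedged sketch; the class and field names are hypothetical, and it assumes the DataNucleus App Engine extension key gae.unindexed is available in your SDK version.

import javax.jdo.annotations.Extension;
import javax.jdo.annotations.IdGeneratorStrategy;
import javax.jdo.annotations.PersistenceCapable;
import javax.jdo.annotations.Persistent;
import javax.jdo.annotations.PrimaryKey;
import com.google.appengine.api.datastore.Key;

@PersistenceCapable
public class Note {

    @PrimaryKey
    @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
    private Key key;

    @Persistent
    private String title;       // filtered on in queries, so leave it indexed

    @Persistent
    @Extension(vendorName = "datanucleus", key = "gae.unindexed", value = "true")
    private String body;        // never queried on, so skip the index on writes
}

Skipping indexes on such fields reduces the work done on every write of the entity.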
Splitting a model by creating an "index" and a "data" entity

You can also consider splitting a model if you identify fields that you access only when performing queries but don't require once you've actually retrieved the object. Often, this is the case with multi-valued properties. For example, in the Connectr app, this is the case with the friendKeys list of the server.domain.FeedIndex class. This multi-valued property is used to find relevant feed objects but is not used when displaying feed content information. With App Engine, there is no way for a query to retrieve only the fields that you need, so the full object must always be pulled in. If the multi-valued property lists are long, this is inefficient.

To avoid this inefficiency, we can split such a model into two parts and put each one in a different entity—an index entity and a data entity. The index entity holds only the multi-valued properties (or other data) used only for querying, and the data entity holds the information that we actually want to use once we've identified the relevant objects. The trick to this design is that the data entity's key is defined to be the parent of the index entity's key. More specifically, when an entity is created, its key can be defined as a "child" of another entity's key, which becomes its parent; the child is then in the same entity group as the parent. Because such a child key is based on the path of its parent key, it is possible to derive the parent key from the child key alone, using the getParent() method of Key, without requiring the child to be instantiated.

So with this design, we can first do a keys-only query on the index kind (which is faster than full object retrieval) to get a list of the keys of the relevant index entities. With that list, even though we've not actually retrieved the index objects themselves, we can derive the parent data entity keys from the index entity keys. We can then do a batch fetch with the list of relevant parent keys to grab all the data entities at once. This lets us retrieve the information we're interested in without having to retrieve the properties that we do not need. See Brett Slatkin's presentation, Building scalable, complex apps on App Engine (http://code.google.com/events/io/2009/sessions/BuildingScalableComplexApps.html), for more on this index/data design.

Figure 2: Splitting the feed model into an "index" part (server.domain.FeedIndex) and a "data" part (server.domain.FeedInfo).

Our feed model maps well to this design—we filter on the FeedIndex.friendKeys multi-valued property (which contains the list of keys of Friends that point to this feed) when we query for the feeds associated with a given Friend. But once we have retrieved those feeds, we don't need the friendKeys list further, so we would like to avoid retrieving it along with the feed content. With our app's sample data, these property lists will not comprise a lot of data, but they would be likely to do so if the app were scaled up. For example, many users might have the same friends, or many different contacts might include the same company blog in their associated feeds.

So, we split the feed model into an index part and a parent data part, as shown in Figure 2. The index class is server.domain.FeedIndex; it contains the friendKeys list for a feed. The data part, containing the actual feed content, is server.domain.FeedInfo. When a new FeedIndex object is created, its key is constructed so that its corresponding FeedInfo object's key is its parent key.
This construction must of course take place at object creation, as Datastore entity keys cannot be changed. For a small-scale app, the payoff from this split model would perhaps not be worth it; but for the sake of example, let's assume that we expect our app to grow significantly.

The FeedInfo persistent class—the parent class—simply uses an app-assigned String primary key, urlstring (the feed URL string). The server.domain.FeedIndex constructor, shown in the code below, uses the key of its FeedInfo parent—the URL string—to construct its key. This places the two entities in the same entity group and allows the parent FeedInfo key to be derived from the FeedIndex entity's key.

@PersistenceCapable(identityType = IdentityType.APPLICATION, detachable="true")
public class FeedIndex implements Serializable {

    @PrimaryKey
    @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
    private Key key;
    ...

    public FeedIndex(String fkey, String url) {
        this.friendKeys = new HashSet<String>();
        this.friendKeys.add(fkey);
        KeyFactory.Builder keyBuilder =
            new KeyFactory.Builder(FeedInfo.class.getSimpleName(), url);
        keyBuilder.addChild(FeedIndex.class.getSimpleName(), url);
        Key ckey = keyBuilder.getKey();
        this.key = ckey;
    }

The following code, from server.servlets.FeedUpdateFriendServlet, shows how this model is used to efficiently retrieve the FeedInfo objects associated with a given Friend. Given a Friend key, a query is performed for the keys of the FeedIndex entities that contain this Friend key in their friendKeys list. Because this is a keys-only query, it is much more efficient than returning the actual objects. Then, each FeedIndex key is used to derive the parent (FeedInfo) key. Using that list of parent keys, a batch fetch is performed to fetch the FeedInfo objects associated with the given Friend. We did this without needing to actually fetch the FeedIndex objects.

... imports ...
@SuppressWarnings("serial")
public class FeedUpdateFriendServlet extends HttpServlet {

    private static Logger logger =
        Logger.getLogger(FeedUpdateFriendServlet.class.getName());

    public void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {

        PersistenceManager pm = PMF.get().getPersistenceManager();
        Query q = null;
        try {
            String fkey = req.getParameter("fkey");
            if (fkey != null) {
                logger.info("in FeedUpdateFriendServlet, updating feeds for:"
                            + fkey);
                // query for matching FeedIndex keys
                q = pm.newQuery("select key from " + FeedIndex.class.getName()
                                + " where friendKeys == :id");
                List ids = (List) q.execute(fkey);
                if (ids.size() == 0) {
                    return;
                }
                // else, get the parent keys of the ids
                Key k = null;
                List<Key> parentlist = new ArrayList<Key>();
                for (Object id : ids) {
                    // cast to key
                    k = (Key) id;
                    parentlist.add(k.getParent());
                }
                // fetch the parents using the keys
                Query q2 = pm.newQuery("select from " + FeedInfo.class.getName()
                                       + " where urlstring == :keys");
                // allow eventual consistency on read
                q2.addExtension(
                    "datanucleus.appengine.datastoreReadConsistency",
                    "EVENTUAL");
                List<FeedInfo> results = (List<FeedInfo>) q2.execute(parentlist);
                if (results.iterator().hasNext()) {
                    for (FeedInfo fi : results) {
                        fi.updateRequestedFeed(pm);
                    }
                }
            }
        }
        catch (Exception e) {
            logger.warning(e.getMessage());
        }
        finally {
            if (q != null) {
                q.closeAll();
            }
            pm.close();
        }
    }
} // end class

article-image-web-services-microsoft-azure
Packt
29 Nov 2010
8 min read
Save for later

Web Services in Microsoft Azure

A web service is not one single entity and consists of three distinct parts: An endpoint, which is the URL (and related information) where client applications will find our service A host environment, which in our case will be Azure A service class, which is the code that implements the methods called by the client application A web service endpoint is more than just a URL. An endpoint also includes: The bindings, or communication and security protocols The contract (or promise) that certain methods exist, how these methods should be called, and what the data will look like when returned A simple way to remember the components of an endpoint is A/B/C, that is, address/bindings/contract. Web services can fill many roles in our Azure applications—from serving as a simple way to place messages into a queue, to being a complete replacement for a data access layer in a web application (also known as a Service Oriented Architecture or SOA). In Azure, web services serve as HTTP/HTTPS endpoints, which can be accessed by any application that supports REST, regardless of language or operating system. The intrinsic web services libraries in .NET are called Windows Communication Foundation (WCF). As WCF is designed specifically for programming web services, it's referred to as a service-oriented programming model. We are not limited to using WCF libraries in Azure development, but we expect it to be a popular choice for constructing web services being part of the .NET framework. A complete introduction to WCF can be found at http://msdn.microsoft.com/en-us/netframework/aa663324.aspx. When adding WCF services to an Azure web role, we can either create a separate web role instance, or add the web services to an existing web role. Using separate instances allows us to scale the web services independently of the web forms, but multiple instances increase our operating costs. Separate instances also allow us to use different technologies for each Azure instance; for example, the web form may be written in PHP and hosted on Apache, while the web services may be written in Java and hosted using Tomcat. Using the same instance helps keep our costs much lower, but in that case we have to scale both the web forms and the web services together. Depending on our application's architecture, this may not be desirable. Securing WCF Stored data are only as secure as the application used for accessing it. The Internet is stateless, and REST has no sense of security, so security information must be passed as part of the data in each request. If the credentials are not encrypted, then all requests should be forced to use HTTPS. If we control the consuming client applications, we can also control the encryption of the user credentials. Otherwise, our only choice may be to use clear text credentials via HTTPS. For an application with a wide or uncontrolled distribution (like most commercial applications want to be), or if we are to support a number of home-brewed applications, the authorization information must be unique to the user. Part of the behind-the-services code should check to see if the user making the request can be authenticated, and if the user is authorized to perform the action. This adds additional coding overhead, but it's easier to plan for this up front. There are a number of ways to secure web services—from using HTTPS and passing credentials with each request, to using authentication tokens in each request. 
As it happens, using authentication tokens is part of the AppFabric Access Control, and we'll look more into the security for WCF when we dive deeper into Access Control. Jupiter Motors web service In our corporate portal for Jupiter Motors, we included a design for a client application, which our delivery personnel will use to update the status of an order and to decide which customers will accept delivery of their vehicle. For accounting and insurance reasons, the order status needs to be updated immediately after a customer accepts their vehicle. To do so, the client application will call a web service to update the order status as soon as the Accepted button is clicked. Our WCF service is interconnected to other parts of our Jupiter Motors application, so we won't see it completely in action until it all comes together. In the meantime, it will seem like we're developing blind. In reality, all the components would probably be developed and tested simultaneously. Creating a new WCF service web role When creating a web service, we have a choice to add the web service to an existing web role or create a new web role. This helps us deploy and maintain our website application separately from our web services. And in order for us to scale the web role independently from the worker role, we'll create our web service in a role separate from our web application. Creating a new WCF service web role is very simple—Visual Studio will do the "hard work" for us and allow us to start coding our services. First, open the JupiterMotors project. Create the new web role by right-clicking on the Roles folder in our project, choosing Add, and then select the New Web Role Project… option. When we do this, we will be asked what type of web role we want to create. We will choose a WCF Service Web Role, call it JupiterMotorsWCFRole, and click on the Add button. Because different services must have unique names in our project, a good naming convention to use is the project name concatenated with the type of role. This makes the different roles and instances easily discernable and complies with the unique naming requirement. This is where Visual Studio does its magic. It creates the new role in the cloud project, creates a new web role for our WCF web services, and creates some template code for us. The template service created is called "Service1". You will see both, a Service1.svc file as well as an IService1.vb file. Also, a web.config file (as we would expect to see in any web role) is created in the web role and is already wired up for our Service1 web service. All of the generated code is very helpful if you are learning WCF web services. This is what we should see once Visual Studio finishes creating the new project: We are going to start afresh with our own services—we can delete Service1.svc and IService1.vb. Also, in the web.config file, the following boilerplate code can be deleted (we'll add our own code as needed): <system.serviceModel> <services> <service name="JupiterMotorsWCFRole.Service1" behaviorConfiguration="JupiterMotorsWCFRole. Service1Behavior"> <!-- Service Endpoints --> <endpoint address="" binding="basicHttpBinding" contract="JupiterMotorsWCFRole.IService1"> <!-- Upon deployment, the following identity element should be removed or replaced to reflect the identity under which the deployed service runs. If removed, WCF will infer an appropriate identity automatically. 
--> <identity> <dns value="localhost"/> </identity> </endpoint> <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange"/> </service> </services> <behaviors> <serviceBehaviors> <behavior name="JupiterMotorsWCFRole.Service1Behavior"> <!-- To avoid disclosing metadata information, set the value below to false and remove the metadata endpoint above before deployment --> <serviceMetadata httpGetEnabled="true"/> <!-- To receive exception details in faults for debugging purposes, set the value below to true. Set to false before deployment to avoid disclosing exception information --> <serviceDebug includeExceptionDetailInFaults="false"/> </behavior> </serviceBehaviors> </behaviors> </system.serviceModel> Let's now add a WCF service to the JupiterMotorsWCFRole project. To do so, right-click on the project, then Add, and select the New Item... option. We now choose a WCF service and will name it as ERPService.svc: Just like the generated code when we created the web role, ERPService.svc as well as IERPService.vb files were created for us, and these are now wired into the web.config file. There is some generated code in the ERPService.svc and IERPService.vb files, but we will replace this with our code in the next section. When we create a web service, the actual service class is created with the name we specify. Additionally, an interface class is automatically created. We can specify the name for the class; however, being an interface class, it will always have its name beginning with letter I. This is a special type of interface class, called a service contract. The service contract provides a description of what methods and return types are available in our web service.
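To make the service contract idea concrete, the following is a minimal sketch of a contract and its implementing class, shown in C# for brevity (the chapter's generated Jupiter Motors project uses VB). The IOrderStatusService name, the UpdateOrderStatus operation, and its parameters are illustrative assumptions, not the book's actual ERPService code.

using System.ServiceModel;

// Hypothetical contract: the operation name and parameters are illustrative only.
[ServiceContract]
public interface IOrderStatusService
{
    [OperationContract]
    void UpdateOrderStatus(int orderId, string newStatus);
}

// The service class implements the contract; WCF exposes it at the configured A/B/C endpoint.
public class OrderStatusService : IOrderStatusService
{
    public void UpdateOrderStatus(int orderId, string newStatus)
    {
        // Persist the status change here, for example by updating the orders table.
    }
}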
article-image-getting-started-enterprise-library
Packt
10 Nov 2010
6 min read
Save for later

Getting Started with Enterprise Library

Introducing Enterprise Library Enterprise Library (EntLib) is a collection of reusable software components or application blocks designed to assist software developers with common enterprise development challenges. Each application block addresses a specific cross-cutting concern and provides highly configurable features, which results in higher developer productivity. EntLib is implemented and provided by Microsoft patterns & practices group, a dedicated team of professionals who work on solving these cross-cutting concerns with active participation from the developer community. This is an open source project and thus freely available under the Microsoft Public License (Ms-PL) at the CodePlex open source community site (http://entlib.codeplex.com), basically granting us a royalty-free copyright license to reproduce its contribution, build derivative works, and distribute them. More information can be found at the Enterprise Library community site http://www.codeplex.com/entlib. Enterprise Library consists of nine application blocks; two are concerned with wiring up stuff together and the remaining seven are functional application blocks. The following is the complete list of application blocks; these are briefly discussed in the next sections. Wiring Blocks Unity Dependency Injection Policy Injection Application Block Functional Blocks Data Access Application Block Logging Application Block Exception Handling Application Block Caching Application Block Validation Application Block Security Application Block Cryptography Application Block Wiring Application Blocks Wiring blocks provide mechanisms to build highly flexible, loosely coupled, and maintainable systems. These blocks are mainly about wiring or plugging together different functionalities. The following two blocks fall under this category: Unity Dependency Injection Policy Injection Application Block Unity Application Block The Unity Application Block is a lightweight, flexible, and extensible dependency injection container that supports interception and various injection mechanisms such as constructor, property, and method call injection. The Unity Block is a standalone open source project, which can be leveraged in our application. This block allows us to develop loosely coupled, maintainable, and testable applications. Enterprise Library leverages this block for wiring the configured objects. More information on the Unity block is available at http://unity.codeplex.com. Policy Injection Application Block The Policy Injection Application Block is included in this release of Enterprise Library for backwards compatibility and policy injection is implemented using the Unity interception mechanism. This block provides a mechanism to change object behavior by inserting code between the client and the target object without modifying the code of the target object. Functional Application Blocks Enterprise Library consists of the following functional application blocks, which can be used individually or can be grouped together to address a specific cross-cutting concern. 
Data Access Application Block Logging Application Block Exception Handling Application Block Caching Application Block Validation Application Block Security Application Block Cryptography Application Block Data Access Application Block Developing an application that stores/ retrieve data in/from some kind of a relational database is quite common; this involves performing CRUD (Create, Read, Update, Delete) operations on the database by executing T-SQL or stored procedure commands. But we often end up writing the plumbing code over and over again to perform these operations: plumbing code such as creating a connection object, opening and closing a connection, parameter caching, and so on. The following are the key benefits of the Data Access block: The Data Access Application Block (DAAB) abstracts developers from the underlying database technology by providing a common interface to perform database operations. DAAB also takes care of the ordinary tasks like creating a connection object, opening and closing a connection, parameter caching, and so on. It helps in bringing consistency to the application and allows changing of database type by modifying the configuration. Logging Application Block Logging is an essential activity, which is required to understand what's happening behind the scene while the application is running. This is especially helpful in identifying issues and tracing the source of the problem without debugging. The Logging Application Block provides a very simple, flexible, standard, and consistent way to log messages. Administrators have the power to change the log destination (file, database, e-mail, and so on), modify message format, decide on which category is turned on/off, and so on. Exception Handling Application Block Handling exceptions appropriately and allowing the user to either continue or exit gracefully is essential for any application to avoid user frustration. The Exception Handling Application Block adapts the policy-driven approach to allow developers/administrators to define how to handle exceptions. The following are the key benefits of the Exception Handling Block: It provides the ability to log exception messages using the Logging Application Block. It provides a mechanism to replace the original exception with another exception, which prevents disclosure of sensitive information. It provides mechanism to wrap the original exception inside another exception to maintain the contextual information. Caching Application Block Caching in general is a good practice for data that has a long life span; caching is recommended if the possibility of data being changed at the source is low and the change doesn't have significant impact on the application. The Caching Application Block allows us to cache data locally in our application; it also gives us the flexibility to cache the data in-memory, in a database or in an isolated storage. Validation Application Block The Validation Application Block (VAB) provides various mechanisms to validate user inputs. As a rule of thumb always assume user input is not valid unless proven to be valid. The Validation block allows us to perform validation in three different ways; we can use configuration, attributes, or code to provide validation rules. Additionally it also includes adapters specifically targeting ASP.NET, Windows Forms, and Windows Communication Foundation (WCF). 
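As a quick, hedged illustration of the attribute-based validation style mentioned above, the sketch below shows one way the Validation Application Block might be used. The Customer class and its single rule are assumptions made for this example, not code from the book.

using System;
using Microsoft.Practices.EnterpriseLibrary.Validation;
using Microsoft.Practices.EnterpriseLibrary.Validation.Validators;

public class Customer
{
    // Name must contain between 1 and 50 characters.
    [StringLengthValidator(1, 50)]
    public string Name { get; set; }
}

public class ValidationExample
{
    public static void Run()
    {
        var customer = new Customer { Name = "" };

        // Validate the instance against its attribute-defined rules.
        ValidationResults results = Validation.Validate(customer);

        if (!results.IsValid)
        {
            foreach (ValidationResult result in results)
            {
                Console.WriteLine(result.Message);
            }
        }
    }
}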
Security Application Block

The Security Application Block simplifies authorization based on rules and provides caching of the user's authorization and authentication data. Authorization can be performed against Microsoft Active Directory Service, Authorization Manager (AzMan), Active Directory Application Mode (ADAM), and a custom authorization provider. Decoupling the authorization code from the authorization provider allows administrators to change the provider in the configuration without changing the code.

Cryptography Application Block

The Cryptography Application Block provides a common API to perform basic cryptography operations without tying the code to any specific cryptography provider; the provider is configurable. Using this application block we can perform encryption/decryption, hashing, and hash validation (checking whether a hash matches some text).
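A minimal sketch of the Cryptography Application Block facade follows. The provider instance names used here ("hashProvider" and "symmetricProvider") are assumed to be defined in the application's configuration; they are placeholders, not names taken from the book.

using System;
using Microsoft.Practices.EnterpriseLibrary.Security.Cryptography;

public class CryptographyExample
{
    public static void Run()
    {
        // Hash some text and verify it later; "hashProvider" must match a configured hash provider.
        string hash = Cryptographer.CreateHash("hashProvider", "my secret text");
        bool hashMatches = Cryptographer.CompareHash("hashProvider", "my secret text", hash);

        // Encrypt and decrypt; "symmetricProvider" must match a configured symmetric provider.
        string cipherText = Cryptographer.EncryptSymmetric("symmetricProvider", "my secret text");
        string plainText = Cryptographer.DecryptSymmetric("symmetricProvider", cipherText);

        Console.WriteLine(hashMatches && plainText == "my secret text");
    }
}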

article-image-nhibernate-30-using-linq-specifications-data-access-layer
Packt
21 Oct 2010
4 min read
Save for later

NHibernate 3.0: Using LINQ Specifications in the data access layer

  NHibernate 3.0 Cookbook Get solutions to common NHibernate problems to develop high-quality performance-critical data access applications Master the full range of NHibernate features Reduce hours of application development time and get better application architecture and performance Create, maintain, and update your database structure automatically with the help of NHibernate Written and tested for NHibernate 3.0 with input from the development team distilled in to easily accessible concepts and examples Part of Packt's Cookbook series: each recipe is a carefully organized sequence of instructions to complete the task as efficiently as possible         Read more about this book       (For more resources on NHibernate, see here.) Getting ready Download the LinqSpecs library from http://linqspecs.codeplex.com. Copy LinqSpecs.dll from the Downloads folder to your solution's libs folder. Complete the Setting up an NHibernate Repository recipe. How to do it... In Eg.Core.Data and Eg.Core.Data.Impl, add a reference to LinqSpecs.dll. Add these two methods to the IRepository interface. IEnumerable<T> FindAll(Specification<T> specification);T FindOne(Specification<T> specification); Add the following three methods to NHibernateRepository: public IEnumerable<T> FindAll(Specification<T> specification){ var query = GetQuery(specification); return Transact(() => query.ToList());}public T FindOne(Specification<T> specification){ var query = GetQuery(specification); return Transact(() => query.SingleOrDefault());}private IQueryable<T> GetQuery( Specification<T> specification){ return session.Query<T>() .Where(specification.IsSatisfiedBy());} Add the following specification to Eg.Core.Data.Queries: public class MoviesDirectedBy : Specification<Movie>{ private readonly string _director; public MoviesDirectedBy(string director) { _director = director; } public override Expression<Func<Movie, bool>> IsSatisfiedBy() { return m => m.Director == _director; }} Add another specification to Eg.Core.Data.Queries, using the following code: public class MoviesStarring : Specification<Movie>{ private readonly string _actor; public MoviesStarring(string actor) { _actor = actor; } public override Expression<Func<Movie, bool>> IsSatisfiedBy() { return m => m.Actors.Any(a => a.Actor == _actor); }} How it works... The specification pattern allows us to separate the process of selecting objects from the concern of which objects to select. The repository handles selecting objects, while the specification objects are concerned only with the objects that satisfy their requirements. In our specification objects, the IsSatisfiedBy method of the specification objects returns a LINQ expression to determine which objects to select. In the repository, we get an IQueryable from the session, pass this LINQ expression to the Where method, and execute the LINQ query. Only the objects that satisfy the specification will be returned. For a detailed explanation of the specification pattern, check out http://martinfowler.com/apsupp/spec.pdf. There's more... To use our new specifications with the repository, use the following code: var movies = repository.FindAll( new MoviesDirectedBy("Stephen Spielberg")); Specification composition We can also combine specifications to build more complex queries. 
For example, the following code will find all movies directed by Steven Spielberg and starring Harrison Ford: var movies = repository.FindAll( new MoviesDirectedBy("Steven Spielberg") & new MoviesStarring("Harrison Ford")); Combining specifications may result in expression trees that NHibernate is unable to parse, so be sure to test each combination. Summary In this article we covered: Using LINQ Specifications in the data access layer Further resources on this subject: NHibernate 3.0: Working with the Data Access Layer NHibernate 3.0: Using Named Queries in the Data Access Layer NHibernate 3.0: Using ICriteria and Paged Queries in the data access layer NHibernate 3.0: Testing Using NHibernate Profiler and SQLite Using the Fluent NHibernate Persistence Tester and the Ghostbusters Test

article-image-nhibernate-30-using-icriteria-and-paged-queries-data-access-layer
Packt
21 Oct 2010
4 min read
Save for later

NHibernate 3.0: Using ICriteria and Paged Queries in the Data Access Layer

NHibernate 3.0 Cookbook Get solutions to common NHibernate problems to develop high-quality performance-critical data access applications Master the full range of NHibernate features Reduce hours of application development time and get better application architecture and performance Create, maintain, and update your database structure automatically with the help of NHibernate Written and tested for NHibernate 3.0 with input from the development team distilled in to easily accessible concepts and examples Part of Packt's Cookbook series: each recipe is a carefully organized sequence of instructions to complete the task as efficiently as possible Using ICriteria in the data access layer For queries where the criteria are not known in advance, such as a website's advanced product search, ICriteria queries are more appropriate than named HQL queries. This article by Jason Dentler, author of NHibernate 3.0 Cookbook, shows how to use the same DAL infrastructure with ICriteria and QueryOver queries. In an effort to avoid overwhelming the user, and increase application responsiveness, large result sets are commonly broken into smaller pages of results. This article also shows how we can easily add paging to a QueryOver query object in our DAL. Getting ready Complete the previous recipe, Using Named Queries in the data access layer. How to do it... In Eg.Core.Data.Impl.Queries, add a new, empty, public interface named ICriteriaQuery. Add a class named CriteriaQueryBase with the following code: public abstract class CriteriaQueryBase<TResult> : NHibernateQueryBase<TResult>, ICriteriaQuery { public CriteriaQueryBase(ISessionFactory sessionFactory) : base(sessionFactory) { } public override TResult Execute() { var criteria = GetCriteria(); return Transact(() => Execute(criteria)); } protected abstract ICriteria GetCriteria(); protected abstract TResult Execute(ICriteria criteria); } In Eg.Core.Data.Queries, add the following enum: public enum AdvancedProductSearchSort { PriceAsc, PriceDesc, Name } Add a new interface named IAdvancedProductSearch with the following code: public interface IAdvancedProductSearch : IQuery<IEnumerable<Product>> { string Name { get; set; } string Description { get; set; } decimal? MinimumPrice { get; set; } decimal? MaximumPrice { get; set; } AdvancedProductSearchSort Sort { get; set; } } In Eg.Core.Data.Impl.Queries, add the following class: public class AdvancedProductSearch : CriteriaQueryBase<IEnumerable<Product>>, IAdvancedProductSearch { public AdvancedProductSearch(ISessionFactory sessionFactory) : base(sessionFactory) { } public string Name { get; set; } public string Description { get; set; } public decimal? MinimumPrice { get; set; } public decimal? 
MaximumPrice { get; set; } public AdvancedProductSearchSort Sort { get; set; } protected override ICriteria GetCriteria() { return GetProductQuery().UnderlyingCriteria; } protected override IEnumerable<Product> Execute( ICriteria criteria) { return criteria.List<Product>(); } private IQueryOver GetProductQuery() { var query = session.QueryOver<Product>(); AddProductCriterion(query); return query; } private void AddProductCriterion<T>( IQueryOver<T, T> query) where T : Product { if (!string.IsNullOrEmpty(Name)) query = query.WhereRestrictionOn(p => p.Name) .IsInsensitiveLike(Name, MatchMode.Anywhere); if (!string.IsNullOrEmpty(Description)) query.WhereRestrictionOn(p => p.Description) .IsInsensitiveLike(Description, MatchMode.Anywhere); if (MinimumPrice.HasValue) query.Where(p => p.UnitPrice >= MinimumPrice); if (MaximumPrice.HasValue) query.Where(p => p.UnitPrice <= MaximumPrice); switch (Sort) { case AdvancedProductSearchSort.PriceDesc: query = query.OrderBy(p => p.UnitPrice).Desc; break; case AdvancedProductSearchSort.Name: query = query.OrderBy(p => p.Name).Asc; break; default: query = query.OrderBy(p => p.UnitPrice).Asc; break; } } } How it works... In this recipe, we reuse the same repository and query infrastructure from the Using Named Queries in The Data Access Layer recipe. Our simple base class for ICriteria-based query objects splits query creation from query execution and handles transactions for us automatically. The example query we use is typical for an "advanced product search" use case. When a user fills in a particular field on the UI, the corresponding criterion is included in the query. When the user leaves the field blank, we ignore it. We check each search parameter for data. If the parameter has data, we add the appropriate criterion to the query. Finally, we set the order by clause based on the Sort parameter and return the completed ICriteria query. The query is executed inside a transaction, and the results are returned. There's more... For this type of query, typically, each query parameter would be set to the value of some field on your product search UI. On using this query, your code looks like this: var query = repository.CreateQuery<IAdvancedProductSearch>(); query.Name = searchCriteria.PartialName; query.Description = searchCriteria.PartialDescription; query.MinimumPrice = searchCriteria.MinimumPrice; query.MaximumPrice = searchCriteria.MaximumPrice; query.Sort = searchCriteria.Sort; var results = query.Execute();
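The excerpt above stops short of the paging mentioned in the introduction. As a hedged sketch (not the book's exact code), paging can be layered onto an ICriteria query with NHibernate's standard SetFirstResult and SetMaxResults calls:

using NHibernate;

public static class CriteriaPaging
{
    // Restrict an ICriteria query to a single page of results before it is executed.
    public static ICriteria Page(ICriteria criteria, int pageNumber, int pageSize)
    {
        return criteria
            .SetFirstResult((pageNumber - 1) * pageSize)
            .SetMaxResults(pageSize);
    }
}

The QueryOver API exposes the equivalent Skip() and Take() methods, so the same page restriction could instead be applied inside GetProductQuery() before the underlying criteria is returned.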

article-image-nhibernate-30-using-named-queries-data-access-layer
Packt
15 Oct 2010
4 min read
Save for later

NHibernate 3.0: Using named queries in the data access layer

Getting ready Download the latest release of the Common Service Locator from http://commonservicelocator.codeplex.com, and extract Microsoft.Practices.ServiceLocation.dll to your solution's libs folder. Complete the previous recipe, Setting up an NHibernate repository. Following the Fast testing with SQLite in-memory database recipe in the previous article, create a new NHibernate test project named Eg.Core.Data.Impl.Test. Include the Eg.Core.Data.Impl assembly as an additional mapping assembly in your test project's App.Config with the following xml: <mapping assembly="Eg.Core.Data.Impl"/> How to do it... In the Eg.Core.Data project, add a folder for the Queries namespace. Add the following IQuery interfaces: public interface IQuery { } public interface IQuery<TResult> : IQuery { TResult Execute(); } Add the following IQueryFactory interface: public interface IQueryFactory { TQuery CreateQuery<TQuery>() where TQuery : IQuery; } Change the IRepository interface to implement the IQueryFactory interface, as shown in the following code: public interface IRepository<T> : IEnumerable<T>, IQueryFactory where T : Entity { void Add(T item); bool Contains(T item); int Count { get; } bool Remove(T item); } In the Eg.Core.Data.Impl project, change the NHibernateRepository constructor and add the _queryFactory field, as shown in the following code: private readonly IQueryFactory _queryFactory; public NHibernateRepository(ISessionFactory sessionFactory, IQueryFactory queryFactory) : base(sessionFactory) { _queryFactory = queryFactory; } Add the following method to NHibernateRepository: public TQuery CreateQuery<TQuery>() where TQuery : IQuery { return _queryFactory.CreateQuery<TQuery>(); } In the Eg.Core.Data.Impl project, add a folder for the Queries namespace. To the Eg.Core.Data.Impl project, add a reference to Microsoft.Practices.ServiceLocation.dll. To the Queries namespace, add this QueryFactory class: public class QueryFactory : IQueryFactory { private readonly IServiceLocator _serviceLocator; public QueryFactory(IServiceLocator serviceLocator) { _serviceLocator = serviceLocator; } public TQuery CreateQuery<TQuery>() where TQuery : IQuery { return _serviceLocator.GetInstance<TQuery>(); } } Add the following NHibernateQueryBase class: public abstract class NHibernateQueryBase<TResult> : NHibernateBase, IQuery<TResult> { protected NHibernateQueryBase( ISessionFactory sessionFactory) : base(sessionFactory) { } public abstract TResult Execute(); } Add an INamedQuery interface, as shown in the following code: public interface INamedQuery { string QueryName { get; } } Add a NamedQueryBase class, as shown in the following code: public abstract class NamedQueryBase<TResult> : NHibernateQueryBase<TResult>, INamedQuery { protected NamedQueryBase(ISessionFactory sessionFactory) : base(sessionFactory) { } public override TResult Execute() { var nhQuery = GetNamedQuery(); return Transact(() => Execute(nhQuery)); } protected abstract TResult Execute(IQuery query); protected virtual IQuery GetNamedQuery() { var nhQuery = session.GetNamedQuery( ((INamedQuery) this).QueryName); SetParameters(nhQuery); return nhQuery; } protected abstract void SetParameters(IQuery nhQuery); public virtual string QueryName { get { return GetType().Name; } } } In Eg.Core.Data.Impl.Test, add a test fixture named QueryTests inherited from NHibernateFixture.
Add the following test and three helper methods: [Test] public void NamedQueryCheck() { var errors = new StringBuilder(); var queryObjectTypes = GetNamedQueryObjectTypes(); var mappedQueries = GetNamedQueryNames(); foreach (var queryType in queryObjectTypes) { var query = GetQuery(queryType); if (!mappedQueries.Contains(query.QueryName)) { errors.AppendFormat( "Query object {0} references non-existent " + "named query {1}.", queryType, query.QueryName); errors.AppendLine(); } } if (errors.Length != 0) Assert.Fail(errors.ToString()); } private IEnumerable<Type> GetNamedQueryObjectTypes() { var namedQueryType = typeof(INamedQuery); var queryImplAssembly = typeof(BookWithISBN).Assembly; var types = from t in queryImplAssembly.GetTypes() where namedQueryType.IsAssignableFrom(t) && t.IsClass && !t.IsAbstract select t; return types; } private IEnumerable<string> GetNamedQueryNames() { var nhCfg = NHConfigurator.Configuration; var mappedQueries = nhCfg.NamedQueries.Keys .Union(nhCfg.NamedSQLQueries.Keys); return mappedQueries; } private INamedQuery GetQuery(Type queryType) { return (INamedQuery) Activator .CreateInstance(queryType, new object[] { SessionFactory }); } For our example query, in the Queries namespace of Eg.Core.Data, add the following interface: public interface IBookWithISBN : IQuery<Book> { string ISBN { get; set; } } Add the implementation to the Queries namespace of Eg.Core.Data.Impl using the following code: public class BookWithISBN : NamedQueryBase<Book>, IBookWithISBN { public BookWithISBN(ISessionFactory sessionFactory) : base(sessionFactory) { } public string ISBN { get; set; } protected override void SetParameters( NHibernate.IQuery nhQuery) { nhQuery.SetParameter("isbn", ISBN); } protected override Book Execute(NHibernate.IQuery query) { return query.UniqueResult<Book>(); } } Finally, add the embedded resource mapping, BookWithISBN.hbm.xml, to Eg.Core.Data.Impl with the following xml code: <?xml version="1.0" encoding="utf-8" ?> <hibernate-mapping > <query name="BookWithISBN"> <![CDATA[ from Book b where b.ISBN = :isbn ]]> </query> </hibernate-mapping>
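To round off the recipe, here is a hypothetical usage sketch of the query object defined above, assuming a repository wired up with the QueryFactory shown earlier; the ISBN value is made up for illustration:

// Hypothetical usage of the IBookWithISBN query object defined above.
var query = repository.CreateQuery<IBookWithISBN>();
query.ISBN = "978-1-849510-98-0"; // made-up ISBN, for illustration only
Book book = query.Execute();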
article-image-nhibernate-30-working-data-access-layer
Packt
15 Oct 2010
3 min read
Save for later

NHibernate 3.0: Working with the Data Access Layer

Transaction Auto-wrapping for the data access layer This article by Jason Dentler, author of NHibernate 3.0 Cookbook, shows how we can set up the data access layer to wrap all data access in NHibernate transactions automatically. Getting ready Complete the Eg.Core model and mappings. Download code (ch:1) How to do it... Create a new class library named Eg.Core.Data. Add a reference to NHibernate.dll and the Eg.Core project. Add the following two DAO classes: public class DataAccessObject<T, TId> where T : Entity<TId> { private readonly ISessionFactory _sessionFactory; private ISession session { get { return _sessionFactory.GetCurrentSession(); } } public DataAccessObject(ISessionFactory sessionFactory) { _sessionFactory = sessionFactory; } public T Get(TId id) { return Transact(() => session.Get<T>(id)); } public T Load(TId id) { return Transact(() => session.Load<T>(id)); } public void Save(T entity) { Transact(() => session.SaveOrUpdate(entity)); } public void Delete(T entity) { Transact(() => session.Delete(entity)); } private TResult Transact<TResult>(Func<TResult> func) { if (!session.Transaction.IsActive) { // Wrap in transaction TResult result; using (var tx = session.BeginTransaction()) { result = func.Invoke(); tx.Commit(); } return result; } // Don't wrap; return func.Invoke(); } private void Transact(Action action) { Transact<bool>(() => { action.Invoke(); return false; }); } } public class DataAccessObject<T> : DataAccessObject<T, Guid> where T : Entity { } How it works... NHibernate requires that all data access occurs inside an NHibernate transaction and this can be easily accomplished with AOP. Remember, the ambient transaction created by TransactionScope is not a substitute for an NHibernate transaction. This recipe shows a more explicit approach. To ensure that at least all our data access layer calls are wrapped in transactions, we create a private Transact function that accepts a delegate, consisting of some data access methods, such as session.Save or session.Get. This Transact function first checks if the session has an active transaction. If it does, Transact simply invokes the delegate. If it doesn't, it creates an explicit NHibernate transaction, then invokes the delegate, and finally commits the transaction. If the data access method throws an exception, the transaction will be rolled back automatically as the exception bubbles up through the using block. There's more... This transactional auto-wrapping can also be set up using SessionWrapper from the unofficial NHibernate AddIns project at http://code.google.com/p/unhaddins. This class wraps a standard NHibernate session. By default, it will throw an exception when the session is used without an NHibernate transaction. However, it can be configured to check for and create a transaction automatically, much in the same way I've shown you here. See also Setting up an NHibernate repository
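Before moving on, here is a quick, hypothetical illustration of how the DAO above might be consumed, assuming a Product entity from the Eg.Core model, a Guid identifier, and an ISessionFactory configured with a current-session context:

var products = new DataAccessObject<Product>(sessionFactory);

// Each call below is automatically wrapped in an NHibernate transaction by Transact().
Product product = products.Get(productId);
product.Name = "Revised product name";
products.Save(product);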

article-image-using-fluent-nhibernate-persistence-tester-and-ghostbusters-test
Packt
06 Oct 2010
3 min read
Save for later

Using the Fluent NHibernate Persistence Tester and the Ghostbusters Test

NHibernate 3.0 Cookbook Get solutions to common NHibernate problems to develop high-quality performance-critical data access applications Master the full range of NHibernate features Reduce hours of application development time and get better application architecture and performance Create, maintain, and update your database structure automatically with the help of NHibernate Written and tested for NHibernate 3.0 with input from the development team distilled in to easily accessible concepts and examples Part of Packt's Cookbook series: each recipe is a carefully organized sequence of instructions to complete the task as efficiently as possible      The reader would benefit from reading the previous article on Testing Using NHibernate Profiler and SQLite. Using the Fluent NHibernate Persistence Tester Mappings are a critical part of any NHibernate application. In this recipe, I'll show you how to test those mappings using Fluent NHibernate's Persistence tester. Getting ready Complete the Fast testing with SQLite in-Memory database recipe mentioned in the previous article. How to do it... Add a reference to FluentNHibernate. In PersistenceTests.cs, add the following using statement: using FluentNHibernate.Testing; Add the following three tests to the PersistenceTests fixture: [Test] public void Product_persistence_test() { new PersistenceSpecification<Product>(Session) .CheckProperty(p => p.Name, "Product Name") .CheckProperty(p => p.Description, "Product Description") .CheckProperty(p => p.UnitPrice, 300.85M) .VerifyTheMappings(); } [Test] public void ActorRole_persistence_test() { new PersistenceSpecification<ActorRole>(Session) .CheckProperty(p => p.Actor, "Actor Name") .CheckProperty(p => p.Role, "Role") .VerifyTheMappings(); } [Test] public void Movie_persistence_test() { new PersistenceSpecification<Movie>(Session) .CheckProperty(p => p.Name, "Movie Name") .CheckProperty(p => p.Description, "Movie Description") .CheckProperty(p => p.UnitPrice, 25M) .CheckProperty(p => p.Director, "Director Name") .CheckList(p => p.Actors, new List<ActorRole>() { new ActorRole() { Actor = "Actor Name", Role = "Role" } }) .VerifyTheMappings(); } Run these tests with NUnit. How it works... The Persistence tester in Fluent NHibernate can be used with any mapping method. It performs the following four steps: Create a new instance of the entity (Product, ActorRole, Movie) using the values provided. Save the entity to the database. Get the entity from the database. Verify that the fetched instance matches the original. At a minimum, each entity type should have a simple Persistence test, such as the one shown previously. More information about the Fluent NHibernate Persistence tester can be found on their wiki at http://wiki.fluentnhibernate.org/Persistence_specification_testing See also Testing with the SQLite in-memory database Using the Ghostbusters test  

article-image-nhibernate-30-testing-using-nhibernate-profiler-and-sqlite
Packt
06 Oct 2010
6 min read
Save for later

NHibernate 3.0: Testing Using NHibernate Profiler and SQLite

  NHibernate 3.0 Cookbook Get solutions to common NHibernate problems to develop high-quality performance-critical data access applications Master the full range of NHibernate features Reduce hours of application development time and get better application architecture and performance Create, maintain, and update your database structure automatically with the help of NHibernate Written and tested for NHibernate 3.0 with input from the development team distilled in to easily accessible concepts and examples Part of Packt's Cookbook series: each recipe is a carefully organized sequence of instructions to complete the task as efficiently as possible Read more about this book (For more resources on NHibernate, see here.) Using NHibernate Profiler NHibernate Profiler from Hibernating Rhinos is the number one tool for analyzing and visualizing what is happening inside your NHibernate application, and for discovering issues you may have. In this recipe, I'll show you how to get up and running with NHibernate Profiler. Getting ready Download NHibernate Profiler from http://nhprof.com, and unzip it. As it is a commercial product, you will also need a license file. You may request a 30-day trial license from the NHProf website. Using our Eg.Core model, set up a new NHibernate console application with log4net. (Download code). How to do it... Add a reference to HibernatingRhinos.Profiler.Appender.dll from the NH Profiler download. In the session-factory element of App.config, set the property generate_statistics to true. Add the following code to your Main method: log4net.Config.XmlConfigurator.Configure();HibernatingRhinos.Profiler.Appender. NHibernate.NHibernateProfiler.Initialize();var nhConfig = new Configuration().Configure();var sessionFactory = nhConfig.BuildSessionFactory();using (var session = sessionFactory.OpenSession()){ var books = from b in session.Query<Book>() where b.Author == "Jason Dentler" select b; foreach (var book in books) Console.WriteLine(book.Name);} Run NHProf.exe from the NH Profiler download, and activate the license. Build and run your console application. Check the NH Profiler. It should look like the next screenshot. Notice the gray dots indicating alerts next to the Session #1 and Recent Statements. Select Session #1 from the Sessions list at the top left pane. Select the statement from the top right pane. Notice the SQL statement in the following screenshot: Click on See the 1 row(s) resulting from this statement. Enter your database connection string in the field provided, and click on OK. Close the query results window. Switch to the Alerts tab, and notice the alert: Use of implicit transaction is discouraged. Click on the Read more link for more information and suggested solutions to this particular issue. Switch to the Stack Trace tab, as shown in the next screenshot: Double-click on the NHProfTest.NHProfTest.Program.Main stack frame to jump to that location inside Visual Studio. Using the following code, wrap the foreach loop in a transaction and commit the transaction: using (var tx = session.BeginTransaction()){ foreach (var book in books) Console.WriteLine(book.Name); tx.Commit();} In NH Profiler, right-click on Sessions on the top left pane, and select Clear All Sessions. Build and run your application. Check NH Profiler for alerts. How it works... NHibernate Profiler uses a custom log4net appender to capture data about NHibernate activities inside your application and transmit that data to the NH Profiler application. 
Setting generate_statistics allows NHibernate to capture many key data points. These statistics are displayed in the lower, left-hand side of the pane of NHibernate Profiler. We initialize NHibernate Profiler with a call to NHibernateProfiler.Initialize(). For best results, do this when your application begins, just after you have configured log4net. There's more... NHibernate Profiler also supports offline and remote profiling, as well as command-line options for use with build scripts and continuous integration systems. In addition to NHibernate warnings and errors, NH Profiler alerts us to 12 common misuses of NHibernate, which are as follows: Transaction disposed without explicit rollback or commit: If no action is taken, transactions will rollback when disposed. However, this often indicates a missing commit rather than a desire to rollback the transaction Using a single session on multiple threads is likely a bug: A Session should only be used by one thread at a time. Sharing a session across threads is usually a bug, not an explicit design choice with proper locking. Use of implicit transaction is discouraged: Nearly all session activity should happen inside an NHibernate transaction. Excessive number of rows: In nearly all cases, this indicates a poorly designed query or bug. Large number of individual writes: This indicates a failure to batch writes, either because adonet.batch_size is not set, or possibly because an Identity-type POID generator is used, which effectively disables batching. Select N+1: This alert indicates a particular type of anti-pattern where, typically, we load and enumerate a list of parent objects, lazy-loading their children as we move through the list. Instead, we should eagerly fetch those children before enumerating the list Superfluous updates, use inverse="true": NH Profiler detected an unnecessary update statement from a bi-directional one-to-many relationship. Use inverse="true" on the many side (list, bag, set, and others) of the relationship to avoid this. Too many cache calls per session: This alert is targeted particularly at applications using a distributed (remote) second-level cache. By design, NHibernate does not batch calls to the cache, which can easily lead to hundreds of slow remote calls. It can also indicate an over reliance on the second-level cache, whether remote or local. Too many database calls per session: This usually indicates a misuse of the database, such as querying inside a loop, a select N+1 bug, or an excessive number of writes. Too many joins: A query contains a large number of joins. When executed in a batch, multiple simple queries with only a few joins often perform better than a complex query with many joins. This alert can also indicate unexpected Cartesian products. Unbounded result set: NH Profiler detected a query without a row limit. When the application is moved to production, these queries may return huge result sets, leading to catastrophic performance issues. As insurance against these issues, set a reasonable maximum on the rows returned by each query Different parameter sizes result in inefficient query plan cache usage: NH Profiler detected two identical queries with different parameter sizes. Each of these queries will create a query plan. This problem grows exponentially with the size and number of parameters used. Setting prepare_sql to true allows NHibernate to generate queries with consistent parameter sizes. See also Configuring NHibernate with App.config Configuring log4net logging
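For example, when NH Profiler raises the Select N+1 alert described above, one common fix with NHibernate 3.0's LINQ provider is to eagerly fetch the child collection before enumerating it. The sketch below uses the book's Movie/ActorRole sample model and assumes an open session:

using NHibernate.Linq;

// Fetch each movie's actors in the same round trip,
// instead of lazy-loading them one movie at a time inside the loop.
var movies = session.Query<Movie>()
    .FetchMany(m => m.Actors)
    .ToList();

foreach (var movie in movies)
    Console.WriteLine("{0} features {1} actors", movie.Name, movie.Actors.Count);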
article-image-getting-started-javafx
Packt
05 Oct 2010
11 min read
Save for later

Getting Started with JavaFX

  JavaFX 1.2 Application Development Cookbook Over 60 recipes to create rich Internet applications with many exciting features Easily develop feature-rich internet applications to interact with the user using various built-in components of JavaFX Make your application visually appealing by using various JavaFX classes—ListView, Slider, ProgressBar—to display your content and enhance its look with the help of CSS styling Enhance the look and feel of your application by embedding multimedia components such as images, audio, and video Part of Packt's Cookbook series: Each recipe is a carefully organized sequence of instructions to complete the task as efficiently as possible Read more about this book (For more resources on JavaFX, see here.) Using javafxc to compile JavaFX code While it certainly makes it easier to build JavaFX with the support of an IDE (see the NetBeans and Eclipse recipes), it is not a requirement. In some situations, having direct access to the SDK tools is preferred (automated build for instance). This recipe explores the build tools that are shipped with the JavaFX SDK and provides steps to show you how to manually compile your applications. Getting ready To use the SDK tools, you will need to download and install the JavaFX SDK. See the recipe Installing the JavaFX SDK, for instructions on how to do it. How to do it... Open your favorite text/code editor and type the following code. The full code is available from ch01/source-code/src/hello/HelloJavaFX.fx. package hello;import javafx.stage.Stage;import javafx.scene.Sceneimport javafx.scene.text.Text;import javafx.scene.text.Font;Stage { title: "Hello JavaFX" width: 250 height: 80 scene: Scene { content: [ Text { font : Font {size : 16} x: 10 y: 30 content: "Hello World!" } ] }} Save the file at location hello/Main.fx. To compile the file, invoke the JavaFX compiler from the command line from a directory up from the where the file is stored (for this example, it would be executed from the src directory): javafxc hello/Main.fx If your compilation command works properly, you will not get any messages back from the compiler. You will, however, see the file HelloJavaFX.class created by the compiler in the hello directory. If, however, you get a "file not found" error during compilation, ensure that you have properly specified the path to the HelloJavaFX.fx file. How it works... The javafxc compiler works in similar ways as your regular Java compiler. It parses and compiles the JavaFX script into Java byte code with the .class extension. javafxc accepts numerous command-line arguments to control how and what sources get compiled, as shown in the following command: javafxc [options] [sourcefiles] [@argfiles] where options are your command-line options, followed by one or more source files, which can be followed by list of argument files. Below are some of the more commonly javafxc arguments: classpath (-cp)—the classpath option specifies the locations (separated by a path separator character) where the compiler can find class files and/or library jar files that are required for building the application. javafxc -cp .:lib/mylibrary.jar MyClass.fx sourcepath—in more complicated project structure, you can use this option to specify one or more locations where the compiler should search for source file and satisfy source dependencies. javafxc -cp . -sourcepath .:src:src1:src2 MyClass.fx -d—with this option, you can set the target directory where compiled class files are to be stored. 
The compiler will create the package structure of the class under this directory and place the compiled JavaFX classes accordingly. javafxc -cp . -d build MyClass.fx The @argfiles option lets you specify a file which can contain javafxc command-line arguments. When the compiler is invoked and a @argfile is found, it uses the content of the file as an argument for javafxc. This can help shorten tediously long arguments into short, succinct commands. Assume file cmdargs has the following content: -d build-cp .:lib/api1.jar:lib/api2.jar:lib/api3.jar-sourcepath core/src:components/src:tools/src Then you can invoke javafxc as: $> javafxc @cmdargs See also Installing the JavaFX SDK Creating and using JavaFX classes JavaFX is an object-oriented scripting language. As such, object types, represented as classes, are part of the basic constructs of the language. This section shows how to declare, initialize, and use JavaFX classes. Getting ready If you have used other scripting languages such as ActionScript, JavaScript, Python, or PHP, the concepts presented in this section should be familiar. If you have no idea what a class is or what it should be, just remember this: a class is code that represents a logical entity (tree, person, organization, and so on) that you can manipulate programmatically or while using your application. A class usually exposes properties and operations to access the state or behavior of the class. How to do it... Let's assume we are building an application for a dealership. You may have a class called Vehicle to represent cars and other type of vehicles processed in the application. The next code example creates the Vehicle class. Refer to ch01/source-code/src/javafx/Vehicle.fx for full listing of the code presented here. Open your favorite text editor (or fire up your favorite IDE). Type the following class declaration: class Vehicle { var make; var model; var color; var year; function drive () : Void { println("You are driving a " "{year} {color} {make} {model}!") }} Once your class is properly declared, it is now ready to be used. To use the class, add the following (highlighted code) to the file: class Vehicle {...}var vehicle = Vehicle { year:2010 color: "Grey" make:"Mini" model:"Cooper"};vehicle.drive(); Save the file as Vehicle.fx. Now, from the command-line, compile it with: $> javafxc Vehicle.fx If you are using an IDE, you can simply right, click on the file to run it. When the code executes, you should see: $> You are driving a 2010 Grey Mini Cooper! How it works... The previous snippet shows how to declare a class in JavaFX. Albeit a simple class, it shows the basic structure of a JavaFX class. It has properties represented by variables declarations: var make;var model;var color;var year; and it has a function: function drive () : Void { println("You are driving a " "{year} {color} {make} {model}!")} which can update the properties and/or modify the behavior (for details on JavaFX functions, see the recipe Creating and Using JavaFX functions). In this example, when the function is invoked on a vehicle object, it causes the object to display information about the vehicle on the console prompt. Object literal initialization Another aspect of JavaFX class usage is object declaration. JavaFX supports object literal declaration to initialize a new instance of the class. 
This format lets developers declaratively create a new instance of a class using the class's literal representation and pass in property literal values directly into the initialization block to the object's named public properties. var vehicle = Vehicle { year:2010 color: "Grey" make:"Mini" model:"Cooper"}; The previous snippet declares variable vehicle and assigns to it a new instance of the Vehicle class with year = 2010, color = Grey, make = Mini, and model = Cooper. The values that are passed in the literal block overwrite the default values of the named public properties. There's more... JavaFX class definition mechanism does not support a constructor as in languages such as Java and C#. However, to allow developers to hook into the life cycle of the object's instance creation phase, JavaFX exposes a specialized code block called init{} to let developers provide custom code which is executed during object initialization. Initialization block Code in the init block is executed as one of the final steps of object creation after properties declared in the object literal are initialized. Developers can use this facility to initialize values and initialize resources that the new object will need. To illustrate how this works, the previous code snippet has been modified with an init block. You can get the full listing of the code at ch01/source-code/src/javafx/Vehicle2.fx. class Vehicle {... init { color = "Black"; } function drive () : Void { println("You are driving a " "{year} {color} {make} {model}!"); }}var vehicle = Vehicle { year:2010 make:"Mini" model:"Cooper"};vehicle.drive(); Notice that the object literal declaration of object vehicle no longer includes the color declaration. Nevertheless, the value of property color will be initialized to Black in the init{} code block during the object's initialization. When you run the application, it should display: You are driving a 2010 Black Mini Cooper! See also Declaring and using variables in JavaFX Creating and using JavaFX functions Creating and using variables in JavaFX JavaFX is a statically type-safe and type-strict scripting language. Therefore, variables (and anything which can be assigned to a variable, including functions and expressions) in JavaFX, must be associated with a type, which indicates the expected behavior and representation of the variable. This sections explores how to create, initialize, and update JavaFX variables. Getting ready Before we look at creating and using variables, it is beneficial to have an understanding of what is meant by data type and be familiar with some common data types such as String, Integer, Float, and Boolean. If you have written code in other scripting languages such as ActionScript, Python, and Ruby, you will find the concepts in this recipe easy to understand. How to do it... JavaFX provides two ways of declaring variables including the def and the var keywords. def X_STEP = 50;prntln (X_STEP);X_STEP++; // causes errorvar x : Number;x = 100;...x = x + X_LOC; How it works... In JavaFX, there are two ways of declaring a variable: def—The def keyword is used to declare and assign constant values. Once a variable is declared with the def keyword and assigned a value, it is not allowed be reassigned a new value. var—The var keyword declares variables which are able to be updated at any point after their declaration. There's more... All variables must have an associated type. The type can be declared explicitly or be automatically coerced by the compiler. 
Unlike Java (similar to ActionScript and Scala), the type of the variable follows the variable's name separated by a colon. var location:String; Explicit type declaration The following code specifies the type (class) that the variable will receive at runtime: var location:String;location = "New York"; The compiler also supports a short-hand notation that combines declaration and initialization. var location:String = "New York"; Implicit coercion In this format, the type is left out of the declaration. The compiler automatically converts the variable to the proper type based on the assignment. var location;location = "New York"; Variable location will automatically receive a type of String during compilation because the first assignment is a string literal. Or, the short-hand version: var location = "New York"; JavaFX types Similar to other languages, JavaFX supports a complete set of primitive types as listed: :String—this type represents a collection of characters contained within within quotes (double or single, see following). Unlike Java, the default value for String is empty (""). "The quick brown fox jumps over the lazy dog" or 'The quick brown fox jumps over the lazy dog' :Number—this is a numeric type that represents all numbers with decimal points. It is backed by the 64-bit double precision floating point Java type. The default value of Number is 0.0. 0.01234100.01.24e12 :Integer—this is a numeric type that represents all integral numbers. It is backed by the 32-bit integer Java type. The default value of an Integer is 0. -44700xFF :Boolean—as the name implies, this type represents the binary value of either true or false. :Duration—this type represent a unit of time. You will encounter its use heavily in animation and other instances where temporal values are needed. The supported units include ms, s, m, and h for millisecond, second, minute, and hour respectively. 12ms4s12h0.5m :Void—this type indicates that an expression or a function returns no value. Literal representation of Void is null. Variable scope Variables can have three distinct scopes, which implicitly indicates the access level of the variable when it is being used. Script level Script variables are defined at any point within the JavaFX script file outside of any code block (including class definition). When a script-level variable is declared, by default it is globally visible within the script and is not accessible from outside the script (without additional access modifiers). Instance level A variable that is defined at the top-level of a class is referred to as an instance variable. An instance level is visible within the class by the class members and can be accessed by creating an instance of the class. Local level The least visible scope are local variables. They are declared within code blocks such as functions. They are visible only to members within the block.

Installing and Setting up JavaFX for NetBeans and Eclipse IDE

Packt
17 Sep 2010
7 min read
(For more resources on JavaFX, see here.)

Introduction
Today, in the age of Web 2.0, AJAX, and the iPhone, users have come to expect their applications to provide a dynamic and engaging user interface that delivers rich graphical content, audio, and video, all wrapped in GUI controls with animated, cinematic-like interactions. They want their applications to be connected to the web of information and social networks available on the Internet. Developers, on the other hand, have become accustomed to tools such as AJAX/HTML5 toolkits, Flex/Flash, Google Web Toolkit, Eclipse/NetBeans RCP, and others that allow them to build and deploy rich and web-connected client applications quickly. They expect their development languages to be expressive (either through syntax or specialized APIs), with features that liberate them from the tyranny of verbosity and empower them with the ability to express their intents declaratively.

The Java proposition
During the early days of the Web, the Java platform was the first to introduce rich content and interactivity in the browser using the applet technology (predating JavaScript and even Flash). Not too long after applets appeared, Swing was introduced as the unifying framework to create feature-rich applications for the desktop and the browser. Over the years, Swing matured into an amazingly robust GUI technology used to create rich desktop applications. However powerful Swing is, its massive API stack lacks the lightweight, higher-level abstractions that application and content developers have been using in other development environments. Furthermore, the applet plugin technology was (as admitted by Sun) neglected and lost ground in browser-hosted rich applications to similar technologies such as Flash.

Enter JavaFX
JavaFX is Sun's (now part of Oracle) answer to the next generation of rich, web-enabled, deeply interactive applications. JavaFX is a complete platform that includes a new language, development tools, build tools, deployment tools, and new runtimes to target desktop, browser, mobile, and entertainment devices such as televisions. While JavaFX is itself built on the Java platform, that is where the commonalities end. The new JavaFX scripting language is designed as a lightweight, expressive, and dynamic language for creating web-connected, engaging, visually appealing, and content-rich applications.

The JavaFX platform will appeal to both technical designers and developers alike. Designers will find JavaFX Script to be a simple, yet expressive language, perfectly suited for the integration of graphical assets when creating visually rich client applications. Application developers, on the other hand, will find its lightweight, dynamic type inference system and script-like feel a productivity booster, allowing them to express GUI layout, object relationships, and powerful two-way data bindings using a declarative and easy syntax. Since JavaFX runs on the Java platform, developers are able to reuse existing Java libraries directly from within JavaFX, tapping into the vast community of existing Java developers, vendors, and libraries.

This is an introductory article to JavaFX. Use its recipes to get started with the platform. You will find instructions on how to install the SDK and directions on how to set up your IDE.

Installing the JavaFX SDK
The JavaFX software development kit (SDK) is a set of core tools needed to compile, run, and deploy JavaFX applications.
If you feel at home at the command line, then you can start writing code with your favorite text editor and interact with the SDK tools directly. However, if you want to see code-completion hints after each dot you type, then you can always use an IDE such as NetBeans or Eclipse to get started with JavaFX (see the other recipes on IDEs). This section outlines the necessary steps to set up the JavaFX SDK successfully on your computer. These instructions apply to JavaFX SDK version 1.2.x; future versions may vary slightly.

Getting ready
Before you can start building JavaFX applications, you must ensure that your development environment meets the minimum requirements. As of this writing, the following are the minimum requirements to run the current released version of the JavaFX 1.2 runtime.

Minimum system requirements

How to do it...
The first step for installing the SDK on your machine is to download it from http://javafx.com/downloads/. Select the appropriate SDK version as shown in the next screenshot.

Once you have downloaded the SDK for your corresponding system, follow these instructions for installation on Windows, Mac, Ubuntu, or OpenSolaris.

Installation on Windows
Find and double-click on the newly downloaded installation package (.exe file) to start. Follow the directions from the installer wizard to continue with your installation. Make sure to select the location for your installation. The installer will run a series of validations on your system before installation starts. If the installer finds no previously installed SDK (or an incorrect version), it will download an SDK that meets the minimum requirements (which lengthens your installation).

Installation on Mac OS
Prior to installation, ensure that your Mac OS meets the minimum requirements. Find and double-click on the newly downloaded installation package (.dmg file) to start. Follow the directions from the installer wizard to continue your installation. The Mac OS installer will place the installed files at the following location: /Library/Frameworks/JavaFX.framework/Versions/1.2.

Installation on Ubuntu Linux and OpenSolaris
Prior to installation, ensure that your Ubuntu or OpenSolaris environment meets the minimum requirements. Locate the newly downloaded installation package to start installation. For Linux, the file will end with *-linux-i586.sh. For OpenSolaris, the installation file will end with *-solaris-i586.sh. Move the file to the directory where you want to install the content of the SDK. Make the file executable (chmod 755) and run it. This will extract the content of the SDK in the current directory.

The installation will create a new directory, javafx-sdk1.2, which is your JavaFX home location ($JAVAFX_HOME). Now add the JavaFX binaries to your system's $PATH variable (export PATH=$PATH:$JAVAFX_HOME/bin).

When your installation steps are completed, open a command prompt and validate your installation by checking the version of the SDK:

$> javafx -version
javafx 1.2.3_b36

You should get the current version number for your installed JavaFX SDK displayed.

How it works...
Version 1.2.x of the SDK comes with several tools and other resources to help developers get started with JavaFX development right away. The major (and more interesting) directories in the SDK include:

Setting up JavaFX for the NetBeans IDE
The previous recipe shows you how to get started with JavaFX using the SDK directly.
However, if you are more of a syntax-highlight, code-completion, click-to-build person, you will be delighted to know that the NetBeans IDE fully supports JavaFX development. JavaFX has first-class support within NetBeans, with functionality similar to that found in Java development, including:

Syntax highlighting
Code completion
Error detection
Code block formatting and folding
In-editor API documentation
Visual preview panel
Debugging
Application profiling
Continuous background build
And more...

This recipe shows how to set up the NetBeans IDE for JavaFX development. You will learn how to configure NetBeans to create, build, and deploy your JavaFX projects.

Getting ready
Before you can start building JavaFX applications in the NetBeans IDE, you must ensure that your development environment meets the minimum requirements for JavaFX and NetBeans (see the previous recipe, Installing the JavaFX SDK, for minimum requirements). Version 1.2 of the JavaFX SDK requires NetBeans version 6.5.1 (or higher) to work properly.

How to do it...
As a new NetBeans user (or first-time installer), you can download NetBeans and JavaFX bundled and ready to use. The bundle contains the NetBeans IDE and all other required JavaFX SDK dependencies to start development immediately. No additional downloads are required with this option.

To get started with the bundled NetBeans, go to http://javafx.com/downloads/ and download the NetBeans + JavaFX bundle as shown in the next screenshot (versions will vary slightly as newer software becomes available).
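Whether you work from the plain SDK or the NetBeans bundle, a quick way to confirm that everything is wired up is to compile and run a minimal script with the SDK's command-line tools. The sketch below is illustrative only (the file name HelloJavaFX.fx and the window text are made up for this example); it assumes the SDK's bin directory is on your $PATH as described earlier:

// HelloJavaFX.fx
import javafx.stage.Stage;
import javafx.scene.Scene;
import javafx.scene.text.Text;

// Declare a window (Stage) whose Scene contains a single Text node
Stage {
    title: "Hello JavaFX"
    scene: Scene {
        content: [
            Text {
                x: 20
                y: 40
                content: "The JavaFX SDK is installed!"
            }
        ]
    }
}

Compile and run it from the directory containing the file:

$> javafxc HelloJavaFX.fx
$> javafx HelloJavaFX

If a small window appears with the message, both the compiler (javafxc) and the launcher (javafx) from the SDK's bin directory are working correctly.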