
How-To Tutorials - Application Development


Authorizations in SAP HANA

Packt
16 Jul 2013
28 min read
(For more resources related to this topic, see here.) Roles In SAP HANA, as in most of SAP's software, authorizations are grouped into roles. A role is a collection of authorization objects, with their associated privileges. It allows us, as developers, to define self-contained units of authorization. In the same way that at the start of this book we created an attribute view allowing us to have a coherent view of our customer data which we could reuse at will in more advanced developments, authorization roles allow us to create coherent developments of authorization data which we can then assign to users at will, making sure that users who are supposed to have the same rights always have the same rights. If we had to assign individual authorization objects to users, we could be fairly sure that sooner or later, we would forget someone in a department, and they would not be able to access the data they needed to do their everyday work. Worse, we might not give quite the same authorizations to one person, and have to spend valuable time correcting our error when they couldn't see the data they needed (or worse, more dangerous and less obvious to us as developers, if the user could see more data than was intended). It is always a much better idea to group authorizations into a role and then assign the role to users, than assign authorizations directly to users. Assigning a role to a user means that when the user changes jobs and needs a new set of privileges; we can just remove the first role, and assign a second one. Since, we're just starting out using authorizations in SAP HANA, let's get into this good habit right from the start. It really will make our lives easier later on. Creating a role Role creation is done, like all other SAP HANA development, in the Studio. If your Studio is currently closed, please open it, and then select the Modeler perspective. In order to create roles, privileges, and users, you will yourself need privileges. Your SAP HANA user will need the ROLE ADMIN, USER ADMIN, and CREATE STRUCTURED PRIVILEGE system privileges in order to do the development work in this article. You will see in the Navigator panel we have a Security folder, as we can see here: Please find the Security folder and then expand this folder. You will see a subfolder called Roles. Right-click on the Roles folder and select New Role to start creating a role. On the screen which will open, you will see a number of tabs representing the different authorization objects we can create, as we can see here: We'll be looking at each of these in turn, in the following sections, so for the moment just give your role Name (BOOKUSER might be appropriate, if not very original). Granted roles Like many other object types in SAP HANA, once you have created a role, you can then use it inside another role. This onion-like arrangement makes authorizations a lot easier to manage. If we had, for example, a company with two teams: Sales   Purchasing   And two countries, say: France   Germany   We could create a role giving access to sales analytic views, one giving purchasing analytic views, one giving access to data for France, and one giving access to data for Germany. We could then create new roles, say Sales-France, which don't actually contain any authorization objects themselves, but contain only the Sales and the France roles. The role definition is much simpler to understand and to maintain than if we had directly created the Sales-France role and a Sales-Germany role with all the underlying objects. 
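If you prefer to script this composition rather than click through the Studio, the same structure can be sketched in plain SQL from the console. This is a hedged illustration using the example role names from the paragraph above, not a step the exercise asks you to run:

    CREATE ROLE SALES;
    CREATE ROLE FRANCE;
    CREATE ROLE SALES_FRANCE;
    -- The combined role holds no privileges of its own;
    -- it simply aggregates the two building-block roles
    GRANT SALES TO SALES_FRANCE;
    GRANT FRANCE TO SALES_FRANCE;

Granting SALES_FRANCE to a user then pulls in both building blocks, which is exactly what the Granted Roles tab visualizes in the Studio.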
Once again, as with other development objects, creating small self-contained roles and reusing them when possible will make your (maintenance) life easier. In the Granted Roles tab we can see the list of subroles this main role contains. Note that this list is only a pointer, you cannot modify the actual authorizations and the other roles given here, you would need to open the individual role and make changes there. Part of roles The Part of Roles tab in the role definition screen is exactly the opposite of the Granted Roles tab. This tab lists all other roles of which this role is a subrole. It is very useful to track authorizations, especially when you find yourself in a situation where a user seems to have too many authorizations and can see data they shouldn't be able to see. You cannot manipulate this list as such, it exists for information only. If you want to make changes, you need to modify the main role of which this role is a subrole. SQL privileges An SQL privilege is the lowest level at which we can define restrictions for using database objects. SQL privileges apply to the simplest objects in the database such as schemas, tables and so on. No attribute, analytical, or calculation view can be seen by SQL privileges. This is not strictly true, though you can consider it so. What we have seen as an analytical view, for example, the graphical definition, the drag and drop, the checkboxes, has been transformed into a real database object in the _SYS_BIC schema upon activation. We could therefore define SQL privileges on this database object if we wanted, but this is not recommended and indeed limits the control we can have over the view. We'll see a little later that SAP HANA has much finer-grained authorizations for views than this. An important thing to note about SQL privileges is that they apply to the object on which they are defined. They restrict access to a given object itself, but do not at any point have any impact on the object's contents. For example, we can decide that one of our users can have access to the CUSTOMER table, but we couldn't restrict their access to only CUSTOMER values from the COUNTRY USA. SQL privileges can control access to any object under the Catalog node in the Navigator panel. Let's add some authorizations to our BOOK schema and its contents. At the top of the SQL Privileges tab is a green plus sign button. Now click on this button to get the Select Catalog Object dialog, shown here: As you can see in the screenshot, we have entered the two letters bo into the filter box at the top of the dialog. As soon as you enter at least two letters into this box, the Studio will attempt to find and then list all database objects whose name contains the two letters you typed. If you continue to type, the search will be refined further. The first item in the list shown is the BOOK schema we created right back at the start of the book in the Chapter 2, SAP HANA Studio - Installation and First Look . Please select the BOOK item, and then click on OK to add it to our new role: The first thing to notice is the warning icon on the SQL Privileges tab itself: This means that your role definition is incomplete, and the role cannot be activated and used as yet. On the right of the screen, a list of checkbox options has appeared. These are the individual authorizations appropriate to the SQL object you have selected. In order to grant rights to a user via a role, you need to decide which of these options to include in the role. 
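As a side note, the checkboxes in the SQL Privileges tab correspond to ordinary GRANT statements, so the same rights could also be scripted from the SQL console. A hedged sketch for the BOOK schema, assuming the BOOKUSER role has already been created:

    -- Rough equivalent of ticking the object privileges for the BOOK schema
    GRANT SELECT, INSERT, UPDATE, DELETE, CREATE ANY ON SCHEMA BOOK TO BOOKUSER;

The remaining checkboxes (ALTER, DROP, EXECUTE, and so on) follow the same pattern.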
The individual authorization names are self-explicit. For example, the CREATE ANY authorization allows creation of new objects inside a schema. The INSERT or SELECT authorization might at first seem unusual for a schema, as it's not an object which can support such instructions. However, the usage is actually quite elegant. If a user has INSERT rights on the schema BOOK, then they have INSERT rights on all objects inside the schema BOOK. Granting rights on the schema itself avoids having to specify the names of all objects inside the schema. It also future-proofs your authorization concept, since new objects created in the schema will automatically inherit from the existing authorizations you have defined. On the far right of the screen, alongside each authorization is a radio button which gives an additional privilege, the possibility for a given user to, in turn, give the rights to a second user. This is an option which should not be given to all users, and so should not be present in all roles you create; the right to attribute privileges to users should be limited to your administrators. If you give just any user the right to pass on their authorizations further, you will soon find that you are no longer able to determine who can do what in your database. For the moment we are creating a simple role to show the working of the authorization concept in SAP HANA, so we will check all the checkboxes, and leave the radio buttons at No : There are some SQL privileges which are necessary for any user to be able to do work in SAP HANA. These are listed below. They give access to the system objects describing the development models we create in SAP HANA, and if a user does not have these privileges, nothing will work at all, the user will not be authorized to do anything. The SQL privileges you will need to add to the role in order to give access to basic SAP HANA system objects are: The SELECT privilege on the _SYS_BI schema   The SELECT privilege on the _SYS_REPO schema   The EXECUTE privilege on the REPOSITORY_REST procedure   Please add these SQL privileges to your role now, in order to obtain the following result: As you can see with the configuration we have just done, SQL privileges allow a user to access a given object and allow specific actions on the object. They do not however allow us to specify particular authorizations to the contents of the object. In order to use such fine-grained rights, we need to create an analytic privilege, and then add it to our role, so let's do that now. Analytic privileges An analytic privilege is an artifact unique to SAP HANA, it is not part of the standard SQL authorization concept. Analytic privileges allow us to restrict access to certain values of a given attribute, analytic, or calculation view. This means that we can create one view, which by default shows all available data, and then restrict what is actually visible to different users. We could restrict visible data by company code, by country, or by region. For example, our users in Europe would be allowed to see and work with data from our customers in Europe, but not those in the USA. An analytic privilege is created through the Quick Launch panel of Modeler , so please open that view now (or switch to the Quick Launch tab if it's already open). You don't need to close the role definition tab that's already open, we can leave it for now, create our analytic privilege, and then come back to the role definition later. From the Quick Launch panel, select Analytic Privilege , and then Create . 
As usual with SAP HANA, we are asked to give Name , Description , and select a package for our object. We'll call it AP_EU (for analytic privilege, Europe), use the name as the description, and put it into our book package alongside our other developments. As is common in SAP HANA, we have the option of creating an analytic privilege from scratch (Create New ) or copying an existing privilege (Copy From ). We don't currently have any other analytic privileges in our development, so leave Create New selected, then click on Next to go to the second screen of the wizard, shown here: On this page of the dialog, we are prompted to add development models to the analytic privilege. This will then allow us to restrict access to given values of these models. In the previous screenshot, we have added the CUST_REV analytic view to the analytic privilege. This will allow us to restrict access to any value we specify of any of the fields visible in the view. To add a view to the analytic privilege, just find it in the left panel, click on its name and then click on the Add button. Once you have added the views you require for your authorizations, click on the Finish button at the bottom of the window to go to the next step. You will be presented with the analytic privilege development panel, reproduced here: This page allows us to define our analytic privilege completely. On the left we have the list of database views we have included in the analytic privilege. We can add more, or remove one, using the Add and Remove buttons. To the right, we can see the Associated Attributes Restrictions and Assign Restrictions boxes. These are where we define the restrictions to individual values, or sets of values. In the top box, Associated Attributes Restrictions , we define on which attributes we want to restrict access (country code or region, maybe). In the bottom box, Assign Restrictions , we define the individual values on which to restrict (for example, for company code, we could restrict to value 0001, or US22; for region, we could limit access to EU or USA). Let's add a restriction to the REGION field of our CUST_REV view now. Click on the Add button next to the Associated Attributes Restrictions box, to see the Select Object dialog: As can be expected, this dialog lists all the attributes in our analytic view. We just need to select the appropriate attribute and then click on OK to add it to the analytic privilege. Measures in the view are not listed in the dialog. We cannot restrict access to a view according to numeric values. We cannot therefore, make restrictions to customers with a revenue over 1 million Euros, for example. Please add the REGION field to the analytic privilege now. Once the appropriate fields have been added, we can define the restrictions to be applied to them. Click on the REGION field in the Associated Attributes Restrictions box, then on the Add button next to the Assign Restrictions box, to define the restrictions we want to apply. As we can see, restrictions can be defined according to the usual list of comparison operators. These are the same operators we used earlier to define a restricted column in our analytic views. In our example, we'll be restricting access to those lines with a REGION column equal to EU, so we'll select Equal . In the Value column, we can either type the appropriate value directly, or use the value help button, and the familiar Value Help Dialog which will appear, to select the value from those available in the view. 
Please add the EU value, either by typing it or by having SAP HANA find it for us, now. There is one more field which needs to be added to our analytic privilege, and the reason behind might seem at first a little strange. This point is valid for SAP HANA SP5, up to and including (at least) release 50 of the software. If this point turns out to be a bug, then it might not be necessary in later versions of the software. The field on which we want to restrict user actions (REGION) is not actually part of the analytic view itself. REGION, if you recall, is a field which is present in CUST_REV , thanks to the included attribute view CUST_ATTR . In its current state, the analytic privilege will not work, because no fields from the analytic view are actually present in the analytic privilege. We therefore need to add at least one of the native fields of the analytic view to the analytic privilege. We don't need to do any restriction on the field; however it needs to be in the privilege for everything to work as expected. This is hinted at in SAP Note 1809199, SAP HANA DB: debugging user authorization errors. Only if a view is included in one of the cube restrictions and at least one of its attribute is employed by one of the dimension restrictions, access to the view is granted by this analytical privilege. Not an explicit description of the workings of the authorization concept, but close. Our analytic view CUST_REV contains two native fields, CURRENCY and YEAR. You can add either of these to the analytic privilege. You do not need to assign any restrictions to the field; it just needs to be in the privilege. Here is the state of the analytic privilege when development work on it is finished: The Count column lists the number of restrictions in effect for the associated field. For the CURRENCY field, no restrictions are defined. We just need (as always) to activate our analytic privilege in order to be able to use it. The activation button is the same one as we have used up until now to activate the modeling views, the round green button with the right-facing white arrow at the top-right of the panel, which you can see on the preceding screenshot. Please activate the analytic privilege now. Once that has been done, we can add it to our role. Return to the Role tab (if you left it open) or reopen the role now. If you closed the role definition tab earlier, you can get back to our role by opening the Security node in the Navigator panel, then opening Roles, and double-clicking on the BOOKUSER role. In the Analytic Privileges tab of the role definition screen, click on the green plus sign at the top, to add an analytic privilege to our role. The analytic privilege we have just created is called AP_EU, so type ap_eu into the search box at the top of the dialog window which will open. As soon as you have typed at least two characters, SAP HANA will start searching for matching analytic privileges, and your AP_EU privilege will be listed, as we can see here: Click on OK to add the privilege to the role. We will see in a minute the effect our analytic privilege has on the rights of a particular user, but for the moment we can take a look at the second-to-last tab in the role definition screen, System Privileges . System privileges As its name suggests, system privileges gives to a particular user the right to perform specific actions on the SAP HANA system itself, not just on a given table or view. 
These are particular rights which should not be given to just any user, but should be reserved to those users who need to perform a particular task. We'll not be adding any of these privileges to our role, however we'll take a look at the available options and what they are used for. Click on the green plus-sign button at the top of the System Privileges tab to see a list of the available privileges. By default the dialog will do a search on all available values; there are only fifteen or so, but you can as usual filter them down if you require using the filter box at the top of the dialog: For a full list of the system privileges available and their uses, please refer to the SAP HANA SQL Reference, available on the help.sap.com website at http://help.sap.com/hana/html/sql_grant.html. Package privileges The last tab in the role definition screen concerns Package Privileges . These allow a given user to access those objects in a package. In our example, the package is called book, so if we add the book package to our role in the Package Privileges tab, we will see the following result: Assigning package privileges is similar to assigning SQL privileges we saw earlier. We first add the required object (here our book package), then we need to indicate exactly which rights we give to the role. As we can see in the preceding screenshot, we have a series of checkboxes on the right-hand side of the window. At least one of these checkboxes must be checked in order to save the role. The individual rights have names which are fairly self-explanatory. REPO.READ gives access to read the package, whereas REPO.EDIT_NATIVE_OBJECTS allows modification of objects, for example. The role we are creating is destined for an end user who will need to see the data in a role, but should not need to modify the data models in any way (and in fact we really don't want them to modify our data models, do we?). We'll just add the REPO.READ privilege, on our book package, to our role. Again we can decide whether the end user can in turn assign this privilege to others. And again, we don't need this feature in our role. At this point, our role is finished. We have given access to the SQL objects in the BOOK schema, created an analytic privilege which limits access to the Europe region in our CUST_REV model, and given read-only access to our book package. After activation (always) we'll be able to assign our role to a test user, and then see the effect our authorizations have on what the user can do and see. Please activate the role now. Users Users are probably the most important part of the authorization concept. They are where all our problems begin, and their attempts to do and see things they shouldn't are the main reason we have to spend valuable time defining authorizations in the first place. In technical terms, a user is just another database object. They are created, modified, and deleted in the same way a modeling view is. They have properties (their name and password, for example), and it is by modifying these properties that we influence the actions that the person who connects using the user can perform. Up until now we have been using the SYSTEM user (or the user that your database administrator assigned to you). This user is defined by SAP, and has basically the authorizations to do anything with the database. Use of this user is discouraged by SAP, and the author really would like to insist that you don't use it for your developments. 
Accidents happen, and one of the great things about authorizations is that they help to prevent accidents. If you try to delete an important object with the SYSTEM user, you will delete it, and getting it back might involve a database restore. If however you use a development user with less authorization, then you wouldn't have been allowed to do the deletion, saving a lot of tears. Of course, the question then arises, why have you been using the SYSTEM user for the last couple of hundred pages of development. The answer is simple: if the author had started the book with the authorizations article, not many readers would have gotten past page 10. Let's create a new user now, and assign the role we have just created. From the Navigator panel, open the Security node, right-click on User , and select New User from the menu to obtain the user creation screen as shown in the following screenshot: Defining a user requires remarkably little information: User Name : The login that the user will use. Your company might have a naming convention for users. Users might even already have a standard login they use to connect to other systems in your enterprise. In our example, we'll create a user with the (once again rather unimaginative) name of BOOKU.   Authentication : How will SAP HANA know that the user connecting with the name of ANNE really is Anne? There are three (currently) ways of authenticating a user with SAP HANA. Password : This is the most common authentication system, SAP HANA will ask Anne for her password when she connects to the system. Since Anne is the only person who knows her password, we can be sure that Anne really is ANNE, and let her connect and do anything the user ANNE is allowed to do. Passwords in SAP HANA have to respect a certain format. By default this format is one capital, one lowercase, one number, and at least eight characters. You can see and change the password policy in the system configuration. Double-click on the system name in the Navigator panel, click on the Configuration tab, type the word pass into the filter box at the top of the tab, and scroll down to indexserver.ini and then password policy . The password format in force on your system is listed as password_layout . By default this is A1a, meaning capitals, numbers, and lowercase letters are allowed. The value can also contain the # character, meaning that special characters must also be contained in the password. The only special characters allowed by SAP HANA are currently the underscore, dollar sign, and the hash character. Other password policy defaults are also listed on this screen, such as maximum_password_lifetime (the time after which SAP HANA will force you to change your password).   Kerberos and SAML : These authentication systems need to be set up by your network administrator and allow single sign-on in your enterprise. This means that SAP HANA will be able to see the Windows username that is connecting to the system. The database will assume that the authentication part (deciding whether Anne really is ANNE) has already been done by Windows, and let the user connect.     Session Client : As we saw when we created attribute and analytic views back at the start of the book, SAP HANA understands the notion of client, referring to a partition system of the SAP ERP database. In the SAP ERP, different users can work in different Clients. In our development, we filtered on Client 100. 
A much better way of handling filtering is to define the default client for a user when we define their account. The Session Client field can be filled with the ERP Client in which the user works. In this way we do not need to filter on the analytic models, we can leave their client value at Dynamic in the view, and the actual value to use will be taken from the user record. Once again this means maintenance of our developments is a lot simpler. If you like, you can take a few minutes at the end of this article to create a user with a session client value of 100, then go back and reset our attribute and analytic views' default client value to Dynamic, reactivate everything, and then do a data preview with your test user. The result should be identical to that obtained when the view was filtered on client 100. However, if you then create a second user with a session client of 200, this second user will see different data.   We'll create a user with a password login, so type a password for your user now. Remember to adhere to the password policy in force on your system. Also note that the user will be required to change their password on first login. At the bottom of the user definition screen, as we can see from the preceding screenshot, we have a series of tabs corresponding to the different authorizations we can assign to our user. These are the same tabs we saw earlier when defining a role. As explained at the beginning of this article, it is considered best practice to assign authorizations to a role and then the role to a user, rather than assign authorizations directly to a user; this makes maintenance easier. For this reason we will not be looking at the different tabs for assigning authorizations to our user, other than the first one, Granted Roles . The Granted Roles tab lists, and allows adding and removing roles from the list assigned to the user. By default when we create a user, they have no roles assigned, and hence have no authorizations at all in the system. They will be able to log in to SAP HANA but will be able to do no development work, and will see no data from the system. Please click on the green plus sign button in the Granted Roles tab of the user definition screen, to add a role to the user account. You will be provided with the Select Role dialog, shown in part here: This dialog has the familiar search box at the top, so typing the first few letters of a role name will bring up a list of matching roles. Here our role was called BOOKUSER, so please do a search for it, then select it in the list and click on OK to add it to the user account. Once that is done, we can test our user to verify that we can perform the necessary actions with the role and user we have just created. We just need, as with all objects in SAP HANA, to activate the user object first. As usual, this is done with the round green button with the right-facing white arrow at the top-right of the screen. Please do this now. Testing our user and role The only real way to check if the authorizations we have defined are appropriate to the business requirements is to create a user and then try out the role to see what the user can and cannot see and do in the system. The first thing to do is to add our new user to the Studio so we can connect to SAP HANA using this new user. To do this, in the Navigator panel, right click on the SAP HANA system name, and select Add Additional User from the menu which appears. 
This will give you the Add additional user dialog, shown in the following screenshot:     Enter the name of the user you just created (BOOKU) and the password you assigned to the user. You will be required to change the password immediately: Click on Finish to add the user to the Studio. You will see immediately in the Navigator panel that we can now work with either our SYSTEM user, or our BOOKU user: We can also see straight away that BOOKU is missing the privileges to perform or manage data backups; the Backup node is missing from the list for the BOOKU user. Let's try to do something with our BOOKU user and see how the system reacts. The way the Studio lets you handle multiple users is very elegant, since the tree structure of database objects is duplicated, one per user, you can see immediately how the different authorization profiles affect the different users. Additionally, if you request a data preview from the CUST_REV analytic view in the book package under the BOOKU user's node in the Navigator panel, you will see the data according to the BOOKU user's authorizations. Requesting the same data preview from the SYSTEM user's node will see the data according to SYSTEM's authorizations. Let's do a data preview on the CUST_REV view with the SYSTEM user, for reference: As we can see, there are 12 rows of data retrieved, and we have data from the EU and NAR regions. If we ask for the same data preview using our BOOKU user, we can see much less data: BOOKU can only see nine of the 12 data rows in our view, as no data from the NAR region is visible to the BOOKU user. This is exactly the result we aimed to achieve using our analytic privilege, in our role, assigned to our user. Summary In this article, we have taken a look at the different aspects of the authorization concept in SAP HANA. We examined the different authorization levels available in the system, from SQL privileges, analytic privileges, system privileges, and package privileges. We saw how to add these different authorization concepts to a role, a reusable group of authorizations. We went on to create a new user in our SAP HANA system, examining the different types of authentications available, and the assignment of roles to users. Finally, we logged into the Studio with our new user account, and found out the first-hand effect our authorizations had on what the user could see and do. In the next article, we will be working with hierarchical data, seeing what hierarchies can bring to our reporting applications, and how to make the best use of them. Resources for Article : Further resources on this subject: SAP Netweaver: Accessing the MDM System [Article] SAP HANA integration with Microsoft Excel [Article] Exporting SAP BusinessObjects Dashboards into Different Environments [Article]


Database, Active Record, and Model Tricks

Packt
11 Jul 2013
14 min read
(For more resources related to this topic, see here.) Getting data from a database Most applications today use databases. Be it a small website or a social network, at least some parts are powered by databases. Yii introduces three ways that allow you to work with databases: Active Record Query builder SQL via DAO We will use all these methods to get data from the film, film_actor, and actor tables and show it in a list. We will measure the execution time and memory usage to determine when to use these methods. Getting ready Create a new application by using yiic webapp as described in the official guide at the following URL:http://www.yiiframework.com/doc/guide/en/quickstart.first-app Download the Sakila database from the following URL:http://dev.mysql.com/doc/index-other.html Execute the downloaded SQLs; first schema then data. Configure the DB connection in protected/config/main.php to use the Sakila database. Use Gii to create models for the actor and film tables. How to do it... We will create protected/controllers/DbController.php as follows: <?php class DbController extends Controller { protected function afterAction($action) { $time = sprintf('%0.5f', Yii::getLogger() ->getExecutionTime()); $memory = round(memory_get_peak_usage()/(1024*1024),2)."MB"; echo "Time: $time, memory: $memory"; parent::afterAction($action); } public function actionAr() { $actors = Actor::model()->findAll(array('with' => 'films', 'order' => 't.first_name, t.last_name, films.title')); echo '<ol>'; foreach($actors as $actor) { echo '<li>'; echo $actor->first_name.' '.$actor->last_name; echo '<ol>'; foreach($actor->films as $film) { echo '<li>'; echo $film->title; echo '</li>'; } echo '</ol>'; echo '</li>'; } echo '</ol>'; } public function actionQueryBuilder() { $rows = Yii::app()->db->createCommand() ->from('actor') ->join('film_actor', 'actor.actor_id=film_actor.actor_id') ->leftJoin('film', 'film.film_id=film_actor.film_id') ->order('actor.first_name, actor.last_name, film.title') ->queryAll(); $this->renderRows($rows); } public function actionSql() { $sql = "SELECT * FROM actor a JOIN film_actor fa ON fa.actor_id = a.actor_id JOIN film f ON fa.film_id = f.film_id ORDER BY a.first_name, a.last_name, f.title"; $rows = Yii::app()->db->createCommand($sql)->queryAll(); $this->renderRows($rows); } public function renderRows($rows) { $lastActorName = null; echo '<ol>'; foreach($rows as $row) { $actorName = $row['first_name'].' '.$row['last_name']; if($actorName!=$lastActorName){ if($lastActorName!==null){ echo '</ol>'; echo '</li>'; } $lastActorName = $actorName; echo '<li>'; echo $actorName; echo '<ol>'; } echo '<li>'; echo $row['title']; echo '</li>'; } echo '</ol>'; } } Here, we have three actions corresponding to three different methods of getting data from a database. After running the preceding db/ar, db/queryBuilder and db/sql actions, you should get a tree showing 200 actors and 1,000 films they have acted in, as shown in the following screenshot: At the bottom there are statistics that give information about the memory usage and execution time. Absolute numbers can be different if you run this code, but the difference between the methods used should be about the same: Method Memory usage (megabytes) Execution time (seconds) Active Record 19.74 1.14109 Query builder 17.98 0.35732 SQL (DAO) 17.74 0.35038 How it works... Let's review the preceding code. The actionAr action method gets model instances by using the Active Record approach. 
We start with the Actor model generated with Gii to get all the actors and specify 'with' => 'films' to get the corresponding films using a single query or eager loading through relation, which Gii builds for us from InnoDB table foreign keys. We then simply iterate over all the actors and for each actor—over each film. Then for each item, we print its name. The actionQueryBuilder function uses query builder. First, we create a query command for the current DB connection with Yii::app()->db->createCommand(). We then add query parts one by one with from, join, and leftJoin. These methods escape values, tables, and field names automatically. The queryAll function returns an array of raw database rows. Each row is also an array indexed with result field names. We pass the result to renderRows, which renders it. With actionSql, we do the same, except we pass SQL directly instead of adding its parts one by one. It's worth mentioning that we should escape parameter values manually with Yii::app()->db->quoteValue before using them in the query string. The renderRows function renders the query builder. The DAO raw row requires you to add more checks and generally, it feels unnatural compared to rendering an Active Record result. As we can see, all these methods give the same result in the end, but they all have different performance, syntax, and extra features. We will now do a comparison and figure out when to use each method: Method Active Record Query Builder SQL (DAO) Syntax This will do SQL for you. Gii will generate models and relations for you. Works with models, completely OO-style, and very clean API. Produces array of properly nested models as the result. Clean API, suitable for building query on the fly. Produces raw data arrays as the result. Good for complex SQL. Manual values and keywords quoting. Not very suitable for building query on the fly. Produces raw data arrays as results. Performance Higher memory usage and execution time compared to SQL and query builder. Okay. Okay. Extra features Quotes values and names automatically. Behaviors. Before/after hooks. Validation. Quotes values and names automatically. None. Best for Prototyping selects. Update, delete, and create actions for single models (model gives a huge benefit when using with forms). Working with large amounts of data, building queries on the fly. Complex queries you want to do with pure SQL and have maximum possible performance. There's more... In order to learn more about working with databases in Yii, refer to the following resources: http://www.yiiframework.com/doc/guide/en/database.dao http://www.yiiframework.com/doc/guide/en/database.query-builder http://www.yiiframework.com/doc/guide/en/database.ar See also The Using CDbCriteria recipe Defining and using multiple DB connections Multiple database connections are not used very often for new standalone web applications. However, when you are building an add-on application for an existing system, you will most probably need another database connection. From this recipe you will learn how to define multiple DB connections and use them with DAO, query builder, and Active Record models. Getting ready Create a new application by using yiic webapp as described in the official guide at the following URL:http://www.yiiframework.com/doc/guide/en/quickstart.first-app Create two MySQL databases named db1 and db2. 
Create a table named post in db1 as follows: DROP TABLE IF EXISTS `post`; CREATE TABLE IF NOT EXISTS `post` ( `id` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT, `title` VARCHAR(255) NOT NULL, `text` TEXT NOT NULL, PRIMARY KEY (`id`) ); Create a table named comment in db2 as follows: DROP TABLE IF EXISTS `comment`; CREATE TABLE IF NOT EXISTS `comment` ( `id` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT, `text` TEXT NOT NULL, `postId` INT(10) UNSIGNED NOT NULL, PRIMARY KEY (`id`) ); How to do it... We will start with configuring the DB connections. Open protected/config/main.php and define a primary connection as described in the official guide: 'db'=>array( 'connectionString' => 'mysql:host=localhost;dbname=db1', 'emulatePrepare' => true, 'username' => 'root', 'password' => '', 'charset' => 'utf8', ), Copy it, rename the db component to db2, and change the connection string accordingly. Also, you need to add the class name as follows: 'db2'=>array( 'class'=>'CDbConnection', 'connectionString' => 'mysql:host=localhost;dbname=db2', 'emulatePrepare' => true, 'username' => 'root', 'password' => '', 'charset' => 'utf8', ), That is it. Now you have two database connections and you can use them with DAO and query builder as follows: $db1Rows = Yii::app()->db->createCommand($sql)->queryAll(); $db2Rows = Yii::app()->db2->createCommand($sql)->queryAll(); Now, if we need to use Active Record models, we first need to create Post and Comment models with Gii. Starting from Yii version 1.1.11, you can just select an appropriate connection for each model.Now you can use the Comment model as usual. Create protected/controllers/DbtestController.php as follows: <?php class DbtestController extends CController { public function actionIndex() { $post = new Post(); $post->title = "Post #".rand(1, 1000); $post->text = "text"; $post->save(); echo '<h1>Posts</h1>'; $posts = Post::model()->findAll(); foreach($posts as $post) { echo $post->title."<br />"; } $comment = new Comment(); $comment->postId = $post->id; $comment->text = "comment #".rand(1, 1000); $comment->save(); echo '<h1>Comments</h1>'; $comments = Comment::model()->findAll(); foreach($comments as $comment) { echo $comment->text."<br />"; } } } Run dbtest/index multiple times and you should see records added to both databases, as shown in the following screenshot: How it works... In Yii you can add and configure your own components through the configuration file. For non-standard components, such as db2, you have to specify the component class. Similarly, you can add db3, db4, or any other component, for example, facebookApi. The remaining array key/value pairs are assigned to the component's public properties respectively. There's more... Depending on the RDBMS used, there are additional things we can do to make it easier to use multiple databases. Cross-database relations If you are using MySQL, it is possible to create cross-database relations for your models. 
In order to do this, you should prefix the Comment model's table name with the database name as follows: class Comment extends CActiveRecord { //… public function tableName() { return 'db2.comment'; } //… } Now, if you have a comments relation defined in the Post model relations method, you can use the following code: $posts = Post::model()->with('comments')->findAll(); Further reading For further information, refer to the following URL: http://www.yiiframework.com/doc/api/CActiveRecord See also The Getting data from a database recipe Using scopes to get models for different languages Internationalizing your application is not an easy task. You need to translate interfaces, translate messages, format dates properly, and so on. Yii helps you to do this by giving you access to the Common Locale Data Repository ( CLDR ) data of Unicode and providing translation and formatting tools. When it comes to applications with data in multiple languages, you have to find your own way. From this recipe, you will learn a possible way to get a handy model function that will help to get blog posts for different languages. Getting ready Create a new application by using yiic webapp as described in the official guide at the following URL:http://www.yiiframework.com/doc/guide/en/quickstart.first-app Set up the database connection and create a table named post as follows: DROP TABLE IF EXISTS `post`; CREATE TABLE IF NOT EXISTS `post` ( `id` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT, `lang` VARCHAR(5) NOT NULL DEFAULT 'en', `title` VARCHAR(255) NOT NULL, `text` TEXT NOT NULL, PRIMARY KEY (`id`) ); INSERT INTO `post`(`id`,`lang`,`title`,`text`) VALUES (1,'en_us','Yii news','Text in English'), (2,'de','Yii Nachrichten','Text in Deutsch'); Generate a Post model using Gii. How to do it... Add the following methods to protected/models/Post.php as follows: class Post extends CActiveRecord { public function defaultScope() { return array( 'condition' => "lang=:lang", 'params' => array( ':lang' => Yii::app()->language, ), ); } public function lang($lang){ $this->getDbCriteria()->mergeWith(array( 'condition' => "lang=:lang", 'params' => array( ':lang' => $lang, ), )); return $this; } } That is it. Now, we can use our model. Create protected/controllers/ DbtestController.php as follows: <?php class DbtestController extends CController { public function actionIndex() { // Get posts written in default application language $posts = Post::model()->findAll(); echo '<h1>Default language</h1>'; foreach($posts as $post) { echo '<h2>'.$post->title.'</h2>'; echo $post->text; } // Get posts written in German $posts = Post::model()->lang('de')->findAll(); echo '<h1>German</h1>'; foreach($posts as $post) { echo '<h2>'.$post->title.'</h2>'; echo $post->text; } } } Now, run dbtest/index and you should get an output similar to the one shown in the following screenshot: How it works... We have used Yii's Active Record scopes in the preceding code. The defaultScope function returns the default condition or criteria that will be applied to all the Post model query methods. As we need to specify the language explicitly, we create a scope named lang, which accepts the language name. With $this->getDbCriteria(), we get the model's criteria in its current state and then merge it with the new condition. As the condition is exactly the same as in defaultScope, except for the parameter value, it overrides the default scope. In order to support chained calls, lang returns the model instance by itself. There's more... 
For further information, refer to the following URLs: http://www.yiiframework.com/doc/guide/en/database.ar and http://www.yiiframework.com/doc/api/CDbCriteria/

See also: The Getting data from a database recipe; The Using CDbCriteria recipe

Processing model fields with AR event-like methods

Active Record implementation in Yii is very powerful and has many features. One of these features is event-like methods, which you can use to preprocess model fields before putting them into the database or getting them from a database, as well as to delete data related to the model, and so on. In this recipe, we will linkify all URLs in the post text and we will list all existing Active Record event-like methods.

Getting ready

Create a new application by using yiic webapp as described in the official guide at the following URL: http://www.yiiframework.com/doc/guide/en/quickstart.first-app

Set up a database connection and create a table named post as follows:

    DROP TABLE IF EXISTS `post`;
    CREATE TABLE IF NOT EXISTS `post` (
      `id` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
      `title` VARCHAR(255) NOT NULL,
      `text` TEXT NOT NULL,
      PRIMARY KEY (`id`)
    );

Generate the Post model using Gii.

How to do it...

Add the following method to protected/models/Post.php (note that the replacement string uses the $1 and $2 back-references captured by the regular expression):

    protected function beforeSave()
    {
        // Wrap anything that looks like a URL in an anchor tag before saving
        $this->text = preg_replace(
            '~((?:https?|ftps?)://.*?)( |$)~iu',
            '<a href="$1">$1</a>$2',
            $this->text
        );
        return parent::beforeSave();
    }

That is it. Now, try saving a post containing a link. Create protected/controllers/TestController.php as follows:

    <?php
    class TestController extends CController
    {
        function actionIndex()
        {
            $post = new Post();
            $post->title = 'links test';
            $post->text = 'test http://www.yiiframework.com/ test';
            $post->save();
            print_r($post->text);
        }
    }

Run test/index. You should see the post text printed with the URL wrapped in an HTML link.

How it works...

The beforeSave method is implemented in the CActiveRecord class and executed just before saving a model. By using a regular expression, we replace everything that looks like a URL with a link that uses this URL, and we call the parent implementation so that the real events are raised properly. In order to prevent saving, you can return false.

There's more...

There are more event-like methods available, as shown in the following list:

- afterConstruct: called after a model instance is created by the new operator
- beforeDelete/afterDelete: called before/after deleting a record
- beforeFind/afterFind: invoked before/after each record is instantiated by a find method
- beforeSave/afterSave: invoked before/after saving a record successfully
- beforeValidate/afterValidate: invoked before/after validation ends

Further reading

In order to learn more about using event-like methods in Yii, you can refer to the following URLs: http://www.yiiframework.com/doc/api/CActiveRecord/ and http://www.yiiframework.com/doc/api/CModel

See also: The Using Yii events recipe; The Highlighting code with Yii recipe; The Automating timestamps recipe; The Setting up an author automatically recipe
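As a quick companion to the beforeSave() example above, here is a hedged sketch of one of the other hooks from the list, afterFind(); the trimming it performs is purely illustrative and not part of the original recipe:

    // In protected/models/Post.php (illustrative only)
    protected function afterFind()
    {
        // Invoked after each record is instantiated by a find method
        parent::afterFind();
        // Example post-processing: normalize whitespace around the title
        $this->title = trim($this->title);
    }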


Why MyBatis

Packt
10 Jul 2013
8 min read
(For more resources related to this topic, see here.) Eliminates a lot of JDBC boilerplate code Java has a Java DataBase Connectivity (JDBC) API to work with relational databases. But JDBC is a very low-level API, and we need to write a lot of code to perform database operations. Let us examine how we can implement simple insert and select operations on a STUDENTS table using plain JDBC. Assume that the STUDENTS table has STUD_ID, NAME, EMAIL, and DOB columns. The corresponding Student JavaBean is as follows: package com.mybatis3.domain; import java.util.Date; public class Student { private Integer studId; private String name; private String email; private Date dob; // setters and getters } The following StudentService.java program implements the SELECT and INSERT operations on the STUDENTS table using JDBC. public Student findStudentById(int studId) { Student student = null; Connection conn = null; try{ //obtain connection conn = getDatabaseConnection(); String sql = "SELECT * FROM STUDENTS WHERE STUD_ID=?"; //create PreparedStatement PreparedStatement pstmt = conn.prepareStatement(sql); //set input parameters pstmt.setInt(1, studId); ResultSet rs = pstmt.executeQuery(); //fetch results from database and populate into Java objects if(rs.next()) { student = new Student(); student.setStudId(rs.getInt("stud_id")); student.setName(rs.getString("name")); student.setEmail(rs.getString("email")); student.setDob(rs.getDate("dob")); } } catch (SQLException e){ throw new RuntimeException(e); }finally{ //close connection if(conn!= null){ try { conn.close(); } catch (SQLException e){ } } } return student; } public void createStudent(Student student) { Connection conn = null; try{ //obtain connection conn = getDatabaseConnection(); String sql = "INSERT INTO STUDENTS(STUD_ID,NAME,EMAIL,DOB) VALUES(?,?,?,?)"; //create a PreparedStatement PreparedStatement pstmt = conn.prepareStatement(sql); //set input parameters pstmt.setInt(1, student.getStudId()); pstmt.setString(2, student.getName()); pstmt.setString(3, student.getEmail()); pstmt.setDate(4, new java.sql.Date(student.getDob().getTime())); pstmt.executeUpdate(); } catch (SQLException e){ throw new RuntimeException(e); }finally{ //close connection if(conn!= null){ try { conn.close(); } catch (SQLException e){ } } } } protected Connection getDatabaseConnection() throws SQLException { try{ Class.forName("com.mysql.jdbc.Driver"); return DriverManager.getConnection ("jdbc:mysql://localhost:3306/test", "root", "admin"); } catch (SQLException e){ throw e; } catch (Exception e){ throw new RuntimeException(e); } } There is a lot of duplicate code in each of the preceding methods, for creating a connection, creating a statement, setting input parameters, and closing the resources, such as the connection, statement, and result set. MyBatis abstracts all these common tasks so that the developer can focus on the really important aspects, such as preparing the SQL statement that needs to be executed and passing the input data as Java objects. In addition to this, MyBatis automates the process of setting the query parameters from the input Java object properties and populates the Java objects with the SQL query results as well. Now let us see how we can implement the preceding methods using MyBatis: Configure the queries in a SQL Mapper config file, say StudentMapper.xml. 
    <select id="findStudentById" parameterType="int" resultType="Student">
        SELECT STUD_ID AS studId, NAME, EMAIL, DOB
        FROM STUDENTS
        WHERE STUD_ID = #{id}
    </select>
    <insert id="insertStudent" parameterType="Student">
        INSERT INTO STUDENTS(STUD_ID, NAME, EMAIL, DOB)
        VALUES(#{studId}, #{name}, #{email}, #{dob})
    </insert>

Create a StudentMapper interface:

    public interface StudentMapper
    {
        Student findStudentById(Integer id);
        void insertStudent(Student student);
    }

In Java code, you can invoke these statements as follows:

    SqlSession session = getSqlSessionFactory().openSession();
    StudentMapper mapper = session.getMapper(StudentMapper.class);
    // Select a Student by id
    Student student = mapper.findStudentById(1);
    // Insert a Student record
    mapper.insertStudent(student);

That's it! You don't need to create the Connection or PreparedStatement, extract and set parameters, or close the connection yourself for every database operation. Just configure the database connection properties and the SQL statements, and MyBatis will take care of all the ground work. Don't worry for now about what SqlSessionFactory, SqlSession, and Mapper XML files are. Along with this, MyBatis provides many other features that simplify the implementation of persistence logic:

- It supports the mapping of complex SQL result set data to nested object graph structures
- It supports the mapping of one-to-one and one-to-many results to Java objects
- It supports building dynamic SQL queries based on the input data

Low learning curve

One of the primary reasons for MyBatis' popularity is that it is very simple to learn and use because it depends on your knowledge of Java and SQL. If developers are familiar with Java and SQL, they will find it fairly easy to get started with MyBatis.

Works well with legacy databases

Sometimes we may need to work with legacy databases that are not in a normalized form. It is possible, but difficult, to work with these kinds of legacy databases with fully-fledged ORM frameworks such as Hibernate because they attempt to statically map Java objects to database tables. MyBatis works by mapping query results to Java objects; this makes it easy for MyBatis to work with legacy databases. You can create Java domain objects following the object-oriented model, execute queries against the legacy tables, and map the results onto those objects.

Embraces SQL

Fully-fledged ORM frameworks such as Hibernate encourage working with entity objects and generate SQL queries under the hood. Because of this SQL generation, we may not be able to take advantage of database-specific features. Hibernate allows you to execute native SQL, but that might defeat the promise of database-independent persistence. The MyBatis framework embraces SQL instead of hiding it from developers. As MyBatis won't generate any SQL and developers are responsible for preparing the queries, you can take advantage of database-specific features and prepare optimized SQL queries. Also, working with stored procedures is supported by MyBatis.

Supports integration with Spring and Guice frameworks

MyBatis provides out-of-the-box integration support for the popular dependency injection frameworks Spring and Guice; this further simplifies working with MyBatis.

Supports integration with third-party cache libraries

MyBatis has in-built support for caching SELECT query results within the scope of SqlSession-level ResultSets. In addition to this, MyBatis also provides integration support for various third-party cache libraries, such as EHCache, OSCache, and Hazelcast.
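To make the caching point concrete: the SqlSession-level (local) cache works out of the box, while the shared second-level cache is switched on per mapper. The following is a hedged sketch, not a required step; the two cache elements are alternatives (use one or the other), and the EHCache variant assumes the separate mybatis-ehcache adapter JAR is on the classpath:

    <!-- In StudentMapper.xml: enable the default second-level cache -->
    <cache/>

    <!-- Or, instead, delegate to EHCache via the mybatis-ehcache integration -->
    <cache type="org.mybatis.caches.ehcache.EhcacheCache"/>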
Better performance

Performance is one of the key factors for the success of any software application. There are lots of things to consider for better performance, but for many applications, the persistence layer is key to overall system performance. MyBatis supports database connection pooling, which eliminates the cost of creating a database connection on demand for every request. MyBatis has an in-built cache mechanism which caches the results of SQL queries at the SqlSession level; that is, if you invoke the same mapped select query, MyBatis returns the cached result instead of querying the database again. MyBatis doesn't use proxying heavily and hence yields better performance compared to other ORM frameworks that use proxies extensively.

There are no one-size-fits-all solutions in software development. Each application has a different set of requirements, and we should choose our tools and frameworks based on application needs. In the previous section, we have seen various advantages of using MyBatis, but there will be cases where MyBatis may not be the ideal or best solution. If your application is driven by an object model and you want to generate SQL dynamically, MyBatis may not be a good fit for you. Also, if you want a transitive persistence mechanism (saving the parent object should persist associated child objects as well) for your application, Hibernate will be better suited for it.

Installing and configuring MyBatis

We are assuming that JDK 1.6+ and a MySQL 5 database server have been installed on your system; the installation process of the JDK and MySQL is outside the scope of this article. At the time of writing this article, the latest version of MyBatis is MyBatis 3.2.2. Even though it is not mandatory to use IDEs such as Eclipse, NetBeans IDE, or IntelliJ IDEA for coding, they greatly simplify development with features such as handy autocompletion, refactoring, and debugging. You can use any of your favorite IDEs for this purpose. This section explains how to develop a simple Java project using MyBatis:

- By creating a STUDENTS table and inserting sample data
- By creating a Java project and adding mybatis-3.2.2.jar to the classpath
- By creating the mybatis-config.xml and StudentMapper.xml configuration files
- By creating the MyBatisSqlSessionFactory singleton class
- By creating the StudentMapper interface and the StudentService classes
- By creating a JUnit test for testing StudentService

Summary

In this article, we discussed MyBatis and the advantages of using MyBatis instead of plain JDBC for database access.

Resources for Article: Further resources on this subject: Building an EJB 3.0 Persistence Model with Oracle JDeveloper [Article], New Features in JPA 2.0 [Article], An Introduction to Hibernate and Spring: Part 1 [Article]

Hubs

Packt
28 Jun 2013
8 min read
(For more resources related to this topic, see here.) Moving up one level While PersistentConnection seems very easy to work with, it is the lowest level in SignalR. It does provide the perfect abstraction for keeping a connection open between a client and a server, but that's just about all it does provide. Working with different operations is not far from how you would deal with things in a regular socket connection, where you basically have to parse whatever is coming from a client and figure out what operation is being asked to be performed based on the input. SignalR provides a higher level of abstraction that removes this need and you can write your server-side code in a more intuitive manner. In SignalR, this higher level of abstraction is called a Hub. Basically, a Hub represents an abstraction that allows you to write classes with methods that take different parameters, as you would with any API in your application, and then makes it completely transparent on the client—at least for JavaScript. This resembles a concept called Remote Procedure Call (RPC), with many incarnations of it out there. For our chat application at this stage, we basically just want to be able to send a message from a client to the server and have it send the message to all of the other clients connected. To do this, we will now move away from the PersistentConnection and introduce a new class called Hub using the following steps: First, start off by deleting the ChatConnection class from your Web project. Now we want to add a Hub implementation instead. Right-click on the SignalRChat project and select Add | New Item. In the dialog, chose Class and give it a name Chat.cs. This is the class that will represent our Hub. Make it inherit from Hub: Public class Chat : Hub Add the necessary import statement at the top of the file: using Microsoft.AspNet.SignalR.Hubs; In the class we will add a simple method that the clients will call to send a message. We call the method Send and take one parameter into it; a string which contains the message being sent by the client: Public void Send(string message){} From the base class of Hub, we get a few things that we can use. For now we'll be using the Clients property to broadcast to all other clients connected to the Hub. On the Clients property, you'll find an All property which is dynamic; on this we can call anything and the client will just have to subscribe to the method we call, if the client is interested. It is possible to change the name of the Hub to not be the same as the class name. An attribute called HubName() can be placed in front of the class to give it a new name. The attribute takes one parameter; the name you want for your Hub. Similarly, for methods inside your Hub, you can use an attribute called HubMethodName() to give the method a different name. The next thing we need to do is to go into the Global.asax.cs file, and make some changes. Firstly, we remove the .MapConnection(…) line and replace it with a .MapHubs() line. This will make all Hubs in your application automatically accessible from a default URL. All Hubs in the application will be mapped to /signalr/<name of hub>; so more concretely the path will be: http:// <your-site>:port/signalr/<name of hub>. We're going with the defaults for now. It should cover the needs on the server-side code. Moving into the JavaScript/HTML part of things, SignalR comes with a JavaScript proxy generator that can generate JavaScript proxies from your Hubs mapped using .MapHubs(). 
This is also subject to the same default URL but will follow the configuration given to .MapHubs().We will need to include a script reference in the HTML code right after the line that references the SignalR JavaScript file. We add the following: <script src = "/signalr/hubs" type="text/javascript"></script> This will include the generated proxies for our JavaScript client. What this means is that we get whatever is exposed on a Hub generated for us and we can start using it straight away. Before we get started with the concrete implementation for our web client, we can move all of the custom code revitalizing the Rich Client, for PersistentConnection altogether. We then want to get to our proxy, and work with it. It sits on the connection object that SignalR adds to jQuery. So, for us, that means an object called chat will be there. On the the chat object, sit two important properties, one representing the client functions that get invoked when the server "calls" something on the client. And the second one is the property representing the server and all of the functionalities that we can call from the client. Let's start by hooking up the client and its methods. Earlier we implemented in the Hub sitting on the server a call to addMessage() with the message. This can be added to the client property inside the chat Hub instance: Basically, whenever the server calls that method, our client counterpart will be called. Now what we need to do is to start the Hub and print out when we are connected to the chat window: $.connection.hub.start().done(function() {$("#chatWindow").val("Connectedn");}); Then we need to hook up the click event on the button and call the server to send messages. Again, we use the server property sitting on the chat hub instance in the client, which corresponds to a method on the Hub: $("#sendButton").click(function() {chat.server.send($("#messageTextBox").val());$("#messageTextBox").val("");}); You should now have something that looks as follows: You may have noticed that the send function on the client is in camelCase and the server-side C# code has it in PascalCase. SignalR automatically translates between the two case types. In general, camelCase is the preferred and the most broadly used casing style in JavaScript—while Pascal being the most used in C#. You should now be having a full sample in HTML/JavaScript that looks like the following screenshot: Running it should produce the same result as before, with the exception of the .NET terminal client, which also needs alterations. In fact, let's just get rid of the code inside Program.cs and start over. The client API is a bit rougher in C#; this comes from the more statically typed nature of C#. Sure, it is possible—technically—to get pretty close to what has been done in JavaScript, but it hasn't been a focal point for the SignalR team. Basically, we need a different connection than the PersistentConnection class. We'll be needing a HubConnection class. From the HubConnection class we can create a proxy for the chat Hub: As with JavaScript, we can hook up client-side methods that get invoked when the server calls any client. Although as mentioned, not as elegantly as in JavaScript. On the chat Hub instance, we get a method called On(), which can be used to specify a client-side method corresponding to the client call from the server. So we set addMessage to point to a method which, in our case, is for now just an inline lambda expression. 
Now we need, as with PersistentConnection, to start the connection and wait until it's connected:
hubConnection.Start().Wait();
Now we can get user input and send it off to the server. Again, as with client methods called from the server, we have a slightly different approach than with JavaScript; we call the Invoke method, giving it the name of the method to call on the server and any arguments. The Invoke() method takes a params parameter, so you can specify any number of arguments, which will then be sent to the server. The finished result should look something like the following screenshot, and now work in full correspondence with the JavaScript chat:
Summary
Exposing our functionality through Hubs makes it easier to consume on the client, at least on JavaScript-based clients, due to the proxy generation. It basically brings the server's API to the client as if it were implemented on the client. With the Hub you also get the ability to call the client from the server in a more natural manner. One of the things often important for applications is the ability to filter out messages so you only get messages relevant for your context.
Resources for Article :
Further resources on this subject:
Working with Microsoft Dynamics AX and .NET: Part 1 [Article]
Working with Microsoft Dynamics AX and .NET: Part 2 [Article]
Deploying .NET-based Applications on to Microsoft Windows CE Enabled Smart Devices [Article]

Deploying HTML5 Applications with GNOME

Packt
28 May 2013
10 min read
(For more resources related to this topic, see here.)
Before we start
Most of the discussions in this article require a moderate knowledge of HTML5, JSON, and common client-side JavaScript programming. One particular exercise uses jQuery and jQuery Mobile to show how a real HTML5 application will be implemented.
Embedding WebKit
What we need to learn first is how to embed a WebKit layout engine inside our GTK+ application. Embedding WebKit means we can use HTML and CSS as our user interface instead of GTK+ or Clutter.
Time for action – embedding WebKit
With WebKitGTK+, this is a very easy task to do; just follow these steps:
Create an empty Vala project without GtkBuilder and no license. Name it hello-webkit.
Modify configure.ac to include WebKitGTK+ into the project. Find the following line of code in the file:
PKG_CHECK_MODULES(HELLO_WEBKIT, [gtk+-3.0])
Remove the previous line and replace it with the following one:
PKG_CHECK_MODULES(HELLO_WEBKIT, [gtk+-3.0 webkitgtk-3.0])
Modify Makefile.am inside the src folder to include WebKitGTK+ into the Vala compilation pipeline. Find the following lines of code in the file:
hello_webkit_VALAFLAGS = --pkg gtk+-3.0
Remove it and replace it completely with the following lines:
hello_webkit_VALAFLAGS = --vapidir . --pkg gtk+-3.0 --pkg webkit-1.0 --pkg libsoup-2.4
Fill the hello_webkit.vala file inside the src folder with the following lines:
using GLib;
using Gtk;
using WebKit;

public class Main : WebView
{
  public Main ()
  {
    load_html_string("<h1>Hello</h1>", "/");
  }

  static int main (string[] args)
  {
    Gtk.init (ref args);
    var webView = new Main ();
    var window = new Gtk.Window();
    window.add(webView);
    window.show_all ();
    Gtk.main ();
    return 0;
  }
}
Copy the accompanying webkit-1.0.vapi file into the src folder. We need to do this, unfortunately, because the webkit-1.0.vapi file distributed with many distributions is still using GTK+ Version 2.
Run it, and you will see a window with the message Hello, as shown in the following screenshot:
What just happened?
What we need to do first is to include WebKit into our namespace, so we can use all the functions and classes from it.
using WebKit;
Our class is derived from the WebView widget. It is an important widget in WebKit, which is capable of showing a web page. Showing it means not only parsing and displaying the DOM properly, but also being capable of running the scripts and handling the styles referred to by the document. The derivation declaration is put in the class declaration as shown next:
public class Main : WebView
In our constructor, we only load a string and parse it as an HTML document. The string is Hello, styled with a level 1 heading. After the execution of the following line, WebKit will parse and display the presentation of the HTML5 code inside its body:
public Main ()
{
  load_html_string("<h1>Hello</h1>", "/");
}
In our main function, what we need to do is create a window to put our WebView widget into. After adding the widget, we need to call the show_all() function in order to display both the window and the widget.
static int main (string[] args)
{
  Gtk.init (ref args);
  var webView = new Main ();
  var window = new Gtk.Window();
  window.add(webView);
The window content now only has a WebView widget as its sole displaying widget. At this point, we no longer use GTK+ to show our UI, but it is all written in HTML5.
Runtime with JavaScriptCore
An HTML5 application is, most of the time, accompanied by client-side scripts that are written in JavaScript and a set of styling definitions written in CSS3.
WebKit already provides the feature of running client-side JavaScript (running the script inside the web page) with a component called JavaScriptCore, so we don't need to worry about it. But how about the connection with the GNOME platform? How to make the client-side script access the GNOME objects? One approach is that we can expose our objects, which are written in Vala so that they can be used by the client-side JavaScript. This is where we will utilize JavaScriptCore. We can think of this as a frontend and backend architecture pattern. All of the code of business process which touch GNOME will reside in the backend. They are all written in Vala and run by the main process. On the opposite side, the frontend, the code is written in JavaScript and HTML5, and is run by WebKit internally. The frontend is what the user sees while the backend is what is going on behind the scene. Consider the following diagram of our application. The backend part is grouped inside a grey bordered box and run in the main process. The frontend is outside the box and run and displayed by WebKit. From the diagram, we can see that the frontend creates an object and calls a function in the created object. The object we create is not defined in the client side, but is actually created at the backend. We ask JavaScriptCore to act as a bridge to connect the object created at the backend to be made accessible by the frontend code. To do this, we wrap the backend objects with JavaScriptCore class and function definitions. For each object we want to make available to frontend, we need to create a mapping in the JavaScriptCore side. In the following diagram, we first map the MyClass object, then the helloFromVala function, then the intFromVala, and so on: Time for action – calling the Vala object from the frontend Now let's try and create a simple client-side JavaScript code and call an object defined at the backend: Create an empty Vala project, without GtkBuilder and no license. Name it hello-jscore. Modify configure.ac to include WebKitGTK+ exactly like our previous experiment. Modify Makefile.am inside the src folder to include WebKitGTK+ and JSCore into the Vala compilation pipeline. Find the following lines of code in the file: hello_jscore_VALAFLAGS = --pkg gtk+-3.0 Remove it and replace it completely with the following lines: hello_jscore_VALAFLAGS = --vapidir . 
--pkg gtk+-3.0 --pkg webkit-1.0 --pkglibsoup-2.4 --pkg javascriptcore Fill the hello_jscore.vala file inside the src folder with the following lines of code: using GLib;using Gtk;using WebKit;using JSCore;public class Main : WebView{public Main (){load_html_string("<h1>Hello</h1>" +"<script>alert(HelloJSCore.hello())</script>","/");window_object_cleared.connect ((frame, context) => {setup_js_class ((JSCore.GlobalContext) context);});}public static JSCore.Value helloFromVala (Context ctx,JSCore.Object function,JSCore.Object thisObject,JSCore.Value[] arguments,out JSCore.Value exception) {exception = null;var text = new String.with_utf8_c_string ("Hello fromJSCore");return new JSCore.Value.string (ctx, text);}static const JSCore.StaticFunction[] js_funcs = {{ "hello", helloFromVala, PropertyAttribute.ReadOnly },{ null, null, 0 }};static const ClassDefinition js_class = {0, // versionClassAttribute.None, // attribute"HelloJSCore", // classNamenull, // parentClassnull, // static valuesjs_funcs, // static functionsnull, // initializenull, // finalizenull, // hasPropertynull, // getPropertynull, // setPropertynull, // deletePropertynull, // getPropertyNamesnull, // callAsFunctionnull, // callAsConstructornull, // hasInstancenull // convertToType};void setup_js_class (GlobalContext context) {var theClass = new Class (js_class);var theObject = new JSCore.Object (context, theClass,context);var theGlobal = context.get_global_object ();var id = new String.with_utf8_c_string ("HelloJSCore");theGlobal.set_property (context, id, theObject,PropertyAttribute.None, null);}static int main (string[] args){Gtk.init (ref args);var webView = new Main ();var window = new Gtk.Window();window.add(webView);window.show_all ();Gtk.main ();return 0;}} Copy the accompanied webkit-1.0.vapi and javascriptcore.vapi files into the src folder. The javascriptcore.vapi file is needed because some distributions do not have this .vapi file in their repositories. Run the application. The following output will be displayed: What just happened? The first thing we do is include the WebKit and JavaScriptCore namespaces. Note, in the following code snippet, that the JavaScriptCore namespace is abbreviated as JSCore: using WebKit;using JSCore; In the Main function, we load HTML content into the WebView widget. We display a level 1 heading and then call the alert function. The alert function displays a string returned by the hello function inside the HelloJSCore class, as shown in the following code: public Main (){load_html_string("<h1>Hello</h1>" +"<script>alert(HelloJSCore.hello())</script>","/"); In the preceding code snippet, we can see that the client-side JavaScript code is as follows: alert(HelloJSCore.hello()) And we can also see that we call the hello function from the HelloJSCore class as a static function. It means that we don't instantiate the HelloJSCore object before calling the hello function. In WebView, we initialize the class defined in the Vala class when we get the window_object_cleared signal. This signal is emitted whenever a page is cleared. The initialization is done in setup_js_class and this is also where we pass the JSCore global context into. The global context is where JSCore keeps the global variables and functions. It is accessible by every code. window_object_cleared.connect ((frame, context) => {setup_js_class ((JSCore.GlobalContext)context);}); The following snippet of code contains the function, which we want to expose to the clientside JavaScript. 
The function just returns a Hello from JSCore string message: public static JSCore.Value helloFromVala (Context ctx,JSCore.Object function,JSCore.Object thisObject,JSCore.Value[] arguments,out JSCore.Value exception) {exception = null;var text = new String.with_utf8_c_string ("Hello from JSCore");return new JSCore.Value.string (ctx, text);} Then we need to put a boilerplate code that is needed to expose the function and other members of the class. The first part of the code is the static function index. This is the mapping between the exposed function and the name of the function defined in the wrapper. In the following example, we map the hello function, which can be used in the client side, with the helloFromVala function defined in the code. The index is then ended with null to mark the end of the array: static const JSCore.StaticFunction[] js_funcs = {{ "hello", helloFromVala, PropertyAttribute.ReadOnly },{ null, null, 0 }}; The next part of the code is the class definition. It is about the structure that we have to fill, so that JSCore would know about the class. All of the fields are filled with null, except for those we want to make use of. In this example, we use the static function for the hello function. So we fill the static function field with js_funcs, which we defined in the preceding code snippet: static const ClassDefinition js_class = {0, // versionClassAttribute.None, // attribute"HelloJSCore", // classNamenull, // parentClassnull, // static valuesjs_funcs, // static functionsnull, // initializenull, // finalizenull, // hasPropertynull, // getPropertynull, // setPropertynull, // deletePropertynull, // getPropertyNamesnull, // callAsFunctionnull, // callAsConstructornull, // hasInstancenull // convertToType}; After that, in the the setup_js_class function, we set up the class to be made available in the JSCore global context. First, we create JSCore.Class with the class definition structure we filled previously. Then, we create an object of the class, which is created in the global context. Last but not least, we assign the object with a string identifier, which is HelloJSCore. After executing the following code, we will be able to refer HelloJSCore on the client side: void setup_js_class (GlobalContext context) {var theClass = new Class (js_class);var theObject = new JSCore.Object (context, theClass,context);var theGlobal = context.get_global_object ();var id = new String.with_utf8_c_string ("HelloJSCore");theGlobal.set_property (context, id, theObject,PropertyAttribute.None, null);}

Developing a Web Project for JasperReports

Packt
27 May 2013
11 min read
(For more resources related to this topic, see here.) Setting the environment First, we need to install the required software, Oracle Enterprise Pack for Eclipse 12c, from http://www.oracle.com/technetwork/middleware/ias/ downloads/wls-main-097127.html using Installers with Oracle WebLogic Server, Oracle Coherence and Oracle Enterprise Pack for Eclipse, and download the Oracle Database 11g Express Edition from http://www.oracle.com/technetwork/products/express-edition/overview/index.html. Setting the environment requires the following tasks: Creating database tables Configuring a data source in WebLogic Server 12c Copying JasperReports required JAR files to the server classpath First create a database table, which shall be the data source for creating the reports, with the following SQL script. If a database table has already been created, the table may be used for this article too. CREATE TABLE OE.Catalog(CatalogId INTEGER PRIMARY KEY, Journal VARCHAR(25), Publisher VARCHAR(25),Edition VARCHAR(25), Title Varchar(45), Author Varchar(25)); INSERT INTO OE.Catalog VALUES('1', 'Oracle Magazine', 'Oracle Publishing', 'Nov-Dec 2004', 'Database Resource Manager', 'Kimberly Floss'); INSERT INTO OE.Catalog VALUES('2', 'Oracle Magazine', 'Oracle Publishing', 'Nov-Dec 2004', 'From ADF UIX to JSF', 'Jonas Jacobi'); INSERT INTO OE.Catalog VALUES('3', 'Oracle Magazine', 'Oracle Publishing', 'March-April 2005', 'Starting with Oracle ADF ', 'Steve Muench'); Next, configure a data source in WebLogic server with JNDI name jdbc/OracleDS. Next, we need to download some JasperReports JAR files including dependencies. Download the JAR/ZIP files listed below and extract the zip/tar.gz to a directory, c:/jasperreports for example.   JAR/ZIP Donwload URL jasperreports-4.7.0.jar http://sourceforge.net/projects/ jasperreports/files/jasperreports/JasperReports%204.7.0/ itext-2.1.0 http://mirrors.ibiblio.org/pub/mirrors/maven2/com/ lowagie/itext/2.1.0/itext-2.1.0.jar commons-beanutils-1.8.3-bin.zip http://commons.apache.org/beanutils/download_beanutils.cgi commons-digester-2.1.jar http://commons.apache.org/digester/download_digester.cgi commons-logging-1.1.1-bin http://commons.apache.org/logging/download_logging.cgi  poi-bin-3.8-20120326 zip or tar.gz http://poi.apache.org/download.html#POI-3.8 All the JasperReports libraries are open source. We shall be using the following JAR files to create a JasperReports report: JAR File Description commons-beanutils-1.8.3.jar JavaBeans utility classes commons-beanutils-bean-collections-1.8.3.jar Collections framework extension classes commons-beanutils-core-1.8.3.jar JavaBeans utility core classes commons-digester-2.1.jar Classes for processing XML documents. commons-logging-1.1.1.jar Logging classes iText-2.1.0.jar PDF library jasperreports-4.7.0.jar JasperReports API poi-3.8-20120326.jar, poi-excelant-3.8-20120326.jar, poi-ooxml-3.8-20120326.jar, poi-ooxml-schemas-3.8-20120326.jar, poi-scratchpad-3.8-20120326.jar Apache Jakarta POI  classes and dependencies. 
Add the Jasper Reports required by the JAR files to the user_projectsdomains base_domainbinstartWebLogic.bat script's CLASSPATH variable: set SAVE_CLASSPATH=%CLASSPATH%;C:jasperreportscommonsbeanutils- 1.8.3commons-beanutils-1.8.3.jar;C:jasperreportscommonsbeanutils- 1.8.3commons-beanutils-bean-collections-1.8.3.jar;C: jasperreportscommons-beanutils-1.8.3commons-beanutils-core- 1.8.3.jar;C:jasperreportscommons-digester-2.1.jar;C:jasperreports commons-logging-1.1.1commons-logging-1.1.1.jar;C:jasperreports itext-2.1.0.jar;C:jasperreportsjasperreports-4.7.0.jar;C: jasperreportspoi-3.8poi-3.8-20120326.jar;C:jasperreportspoi- 3.8poi-scratchpad-3.8-20120326.jar;C:jasperreportspoi-3.8poiooxml- 3.8-20120326.jar;C:jasperreportspoi-3.8.jar;C:jasperreports poi-3.8poi-excelant-3.8-20120326.jar;C:jasperreportspoi-3.8poiooxml- schemas-3.8-20120326.jar Creating a Dynamic Web project in Eclipse First, we need to create a web project for generating JasperReports reports. Select File | New | Other. In New wizard select Web | Dynamic Web Project. In Dynamic Web Project configuration specify a Project name (PDFExcelReports for example), select the Target Runtime as Oracle WebLogic Server 11g R1 ( 10.3.5). Click on Next. Select the default Java settings; that is, Default output folder as build/classes, and then click on Next. In WebModule, specify ContextRoot as PDFExcelReports and Content Directory as WebContent. Click on Finish. A web project for PDFExcelReports gets generated. Right-click on the project node in ProjectExplorer and select Project Properties. In Properties, select Project Facets. The Dynamic Web Module project facet should be selected by default as shown in the following screenshot: Next, create a User Library for JasperReports JAR files and dependencies. Select Java Build Path in Properties. Click on Add Library. In Add Library, select User Library and click on Next. In User Library, click on User Libraries. In User Libraries, click on New. In New User Library, specify a User library name (JasperReports) and click on OK. A new user library gets added to User Libraries. Click on Add JARs to add JAR files to the library. The following screenshot shows the JasperReports that are added: Creating the configuration file We require a JasperReports configuration file for generating reports. JasperReports XML configuration files are based on the jasperreport.dtd DTD, with a root element of jasperReport. We shall specify the JasperReports report design in an XML configuration bin file, which we have called config.xml. Create an XML file config.xml in the webContent folder by selecting XML | XML File in the New wizard. Some of the other elements (with commonly used subelements and attributes) in a JasperReports configuration XML file are listed in the following table: XML Element Description Sub-Elements Attributes jasperReport Root Element reportFont, parameter, queryString, field, variable, group, title, pageHeader, columnHeader, detail, columnFooter, pageFooter. name, columnCount, pageWidth, pageHeight, orientation, columnWidth, columnSpacing, leftMargin, rightMargin, topMargin, bottomMargin. reportFont Report level font definitions - name, isDefault, fontName, size, isBold, isItalic, isUnderline, isStrikeThrough, pdfFontName, pdfEncoding, isPdfEmbedded parameter Object references used in generating a report. Referenced with P${name} parameterDescription, defaultValueExpression name, class queryString Specifies the SQL query for retrieving data from a database. 
- - field Database table columns included in report. Referenced with F${name} fieldDescription name, class variable Variable used in the report XML file. Referenced with V${name} variableExpression, initialValueExpression name,class. title Report title band - pageHeader Page Header band - columnHeader Specifies the different columns in the report generated. band - detail Specifies the column values band - columnFooter Column footer band - A report section is represented with the band element. A band element includes staticText and textElement elements. A staticText element is used to add static text to a report (for example, column headers) and a textElement element is used to add dynamically generated text to a report (for example, column values retrieved from a database table). We won't be using all or even most of these element and attributes. Specify the page width with the pageWidth attribute in the root element jasperReport. Specify the report fonts using the reportFont element. The reportElement elements specify the ARIAL_NORMAL, ARIAL_BOLD, and ARIAL_ITALIC fonts used in the report. Specify a ReportTitle parameter using the parameter element. The queryString of the example JasperReports configuration XML file catalog.xml specifies the SQL query to retrieve the data for the report. <queryString><![CDATA[SELECT CatalogId, Journal, Publisher, Edition, Title, Author FROM OE.Catalog]]> </queryString> The PDF report has the columns CatalogId, Journal, Publisher, Edition, Title, and Author. Specify a report band for the report title. The ReportTitle parameter is invoked using the $P {ReportTitle} expression. Specify a column header using the columnHeader element. Specify static text with the staticText element. Specify the report detail with the detail element. A column text field is defined using the textField element. The dynamic value of a text field is defined using the textFieldExpression element: <textField> <reportElement x="0" y="0" width="100" height="20"/> <textFieldExpression class="java.lang.String"><![CDATA[$F{Cata logId}]]></textFieldExpression> </textField> Specify a page footer with the pageFooter element. Report parameters are defined using $P{}, report fields using $F{}, and report variables using $V{}. The config. xml file is listed as follows: <?xml version='1.0' encoding='utf-8'?> <!DOCTYPE jasperReport PUBLIC "-//JasperReports//DTD Report Design// EN" "http://jasperreports.sourceforge.net/dtds/jasperreport.dtd"> <jasperReport name="PDFReport" pageWidth="975"> The following code snippet specifies the report fonts: <reportFont name="Arial_Normal" isDefault="true" fontName="Arial" size="15" isBold="false" isItalic="false" isUnderline="false" isStrikeThrough="false" pdfFontName="Helvetica" pdfEncoding="Cp1252" isPdfEmbedded="false"/> <reportFont name="Arial_Bold" isDefault="false" fontName="Arial" size="15" isBold="true" isItalic="false" isUnderline="false" isStrikeThrough="false" pdfFontName="Helvetica-Bold" pdfEncoding="Cp1252" isPdfEmbedded="false"/> <reportFont name="Arial_Italic" isDefault="false" fontName="Arial" size="12" isBold="false" isItalic="true" isUnderline="false" isStrikeThrough="false" pdfFontName="Helvetica-Oblique" pdfEncoding="Cp1252" isPdfEmbedded="false"/> The following code snippet specifies the parameter for the report title, the SQL query to generate the report with, and the report fields. The resultset from the SQL query gets bound to the fields. 
<parameter name="ReportTitle" class="java.lang.String"/> <queryString><![CDATA[SELECT CatalogId, Journal, Publisher, Edition, Title, Author FROM Catalog]]></queryString> <field name="CatalogId" class="java.lang.String"/> <field name="Journal" class="java.lang.String"/> <field name="Publisher" class="java.lang.String"/> <field name="Edition" class="java.lang.String"/> <field name="Title" class="java.lang.String"/> <field name="Author" class="java.lang.String"/> Add the report title to the report as follows: <title> <band height="50"> <textField> <reportElement x="350" y="0" width="200" height="50" /> <textFieldExpression class="java.lang. String">$P{ReportTitle}</textFieldExpression> </textField> </band> </title> <pageHeader> <band> </band> </pageHeader> Add the column's header as follows: <columnHeader> <band height="20"> <staticText> <reportElement x="0" y="0" width="100" height="20"/> <textElement> <font isUnderline="false" reportFont="Arial_Bold"/> </textElement> <text><![CDATA[CATALOG ID]]></text> </staticText> <staticText> <reportElement x="125" y="0" width="100" height="20"/> <textElement> <font isUnderline="false" reportFont="Arial_Bold"/> </textElement> <text><![CDATA[JOURNAL]]></text> </staticText> <staticText> <reportElement x="250" y="0" width="150" height="20"/> <textElement> <font isUnderline="false" reportFont="Arial_Bold"/> </textElement> <text><![CDATA[PUBLISHER]]></text> </staticText> <staticText> <reportElement x="425" y="0" width="100" height="20"/> <textElement> <font isUnderline="false" reportFont="Arial_Bold"/> </textElement> <text><![CDATA[EDITION]]></text> </staticText> <staticText> <reportElement x="550" y="0" width="200" height="20"/> <textElement> <font isUnderline="false" reportFont="Arial_Bold"/> </textElement> <text><![CDATA[TITLE]]></text> </staticText> <staticText> <reportElement x="775" y="0" width="200" height="20"/> <textElement> <font isUnderline="false" reportFont="Arial_Bold"/> </textElement> <text><![CDATA[AUTHOR]]></text> </staticText> </band> </columnHeader> The following code snippet shows how to add the report detail, which consists of values retrieved using the SQL query from the Oracle database: <detail> <band height="20"> <textField> <reportElement x="0" y="0" width="100" height="20"/> <textFieldExpression class="java.lang.String"><![CDATA[$F{Cata logId}]]></textFieldExpression> </textField> <textField pattern="0.00"> <reportElement x="125" y="0" width="100" height="20"/> <textFieldExpression class="java.lang.String"><![CDATA[$F{Jour nal}]]></textFieldExpression> </textField> <textField pattern="0.00"> <reportElement x="250" y="0" width="150" height="20"/> <textFieldExpression class="java.lang.String"><![CDATA[$F{Publ isher}]]></textFieldExpression> </textField> <textField> <reportElement x="425" y="0" width="100" height="20"/> <textFieldExpression class="java.lang.String"><![CDATA[$F{Edit ion}]]></textFieldExpression> </textField> <textField pattern="0.00"> <reportElement x="550" y="0" width="200" height="20"/> <textFieldExpression class="java.lang. String"><![CDATA[$F{Title}]]></textFieldExpression> </textField> <textField> <reportElement x="775" y="0" width="200" height="20"/> <textFieldExpression class="java.lang. 
String"><![CDATA[$F{Author}]]></textFieldExpression> </textField> </band> </detail> Add the column and page footer including the page number as follows: <columnFooter> <band> </band> </columnFooter> <pageFooter> <band height="15"> <staticText> <reportElement x="0" y="0" width="40" height="15"/> <textElement> <font isUnderline="false" reportFont="Arial_Italic"/> </textElement> <text><![CDATA[Page #]]></text> </staticText> <textField> <reportElement x="40" y="0" width="100" height="15"/> <textElement> <font isUnderline="false" reportFont="Arial_Italic"/> </textElement> <textFieldExpression class="java.lang. Integer"><![CDATA[$V{PAGE_NUMBER}]]></textFieldExpression> </textField> </band> </pageFooter> <summary> <band> </band> </summary> </jasperReport> We need to create a JAR file for the config.xml file and add the JAR file to the WebLogic Server's domain's lib directory. Create a JAR file using the following command from the directory containing the config.xml as follows: >jar cf config.jar config.xml Add the config.jar file to the user_projectsdomainsbase_domainlib directory, which is in the classpath of the server.

Validating and Using the Model Data

Packt
07 May 2013
14 min read
(For more resources related to this topic, see here.) Declarative validation It's easy to set up declarative validation for an entity object to validate the data that is passed through the metadata file. Declarative validation is the validation added for an attribute or an entity object to fulfill a particular business validation. It is called declarative validation because we don't write any code to achieve the validation as all the business validations are achieved declaratively. The entity object holds the business rules that are defined to fulfill specific business needs such as a range check for an attribute value or to check if the attribute value provided by the user is a valid value from the list defined. The rules are incorporated to maintain a standard way to validate the data. Knowing the lifecycle of an entity object It is important to know the lifecycle of an entity object before knowing the validation that is applied to an entity object. The following diagram depicts the lifecycle of an entity: When a new row is created using an entity object, the status of the entity is set to NEW. When an entity is initialized with some values, the status is changed from NEW to INITIALIZED. At this time, the entity is marked invalid or dirty; this means that the state of the entity is changed from the value that was previously checked with the database value. The status of an entity is changed to UNMODIFIED, and the entity is marked valid after applying validation rules and committing to the database. When the value of an unmodified entity is changed, the status is changed to MODIFIED and the entity is marked dirty again. The modified entity again goes to an UNMODIFIED state when it is saved to the database. When an entity is removed from the database, the status is changed to DELETED. When the value is committed, the status changes to DEAD. Types of validation Validation rules are applied to an entity to make sure that only valid values are committed to the database and to prevent any invalid data from getting saved to the database. In ADF, we use validation rules for the entity object to make sure the row is valid all the time. There are three types of validation rules that can be set for the entity objects; they are as follows: Entity-level validation Attribute-level validation Transaction-level validation Entity-level validation As we know, an entity represents a row in the database table. Entity-level validation is the business rule that is added to the database row. For example, the validation rule that has to be applied to a row is termed as entity-level validation. There are two unique declarative validators that will be available only for entity-level validation—Collection and UniqueKey. The following diagram explains that entity-level validations are applied on a single row in the EMP table. The validated row is highlighted in bold. Attribute-level validation Attribute-level validations are applied to attributes. Business logic mostly involves specific validations to compare different attribute values or to restrict the attributes to a specific range. These kinds of validations are done in attribute-level validation. Some of the declarative validators available in ADF are Compare, Length, and Range. The Precision and Mandatory attribute validations are added, by default, to the attributes from the column definition in the underlying database table. We can only set the display message for the validation. 
The following diagram explains that the validation is happening on the attributes in the second row: There can be any number of validations defined on a single attribute or on multiple attributes in an entity. In the diagram, Empno has a validation that is different from the validation defined for Ename. Validation for the Job attribute is different from that for the Sal attribute. Similarly, we can define validations for attributes in the entity object. Transaction-level validation Transaction-level validations are done after all entity-level validations are completed. If you want to add any kind of validation at the end of the process, you can defer the validation to the transaction level to ensure that the validation is performed only once. Built-in declarative validators ADF Business Components includes some built-in validators to support and apply validations for entity objects. The following screenshot explains how a declarative validation will show up in the Overview tab: The Business Rules section for the EmpEO.xml file will list all the validations for the EmpEO entity. In the previous screenshot, we will see that the there are no entity-level validators defined and some of the attribute-level validations are listed in the Attributes folder. Collection validator A Collection validator is available only for entity-level validation. To perform operations such as average, min, max, count, and sum for the collection of rows, we use the collection validator. Collection validators are compared to the GROUP BY operation in an SQL query with a validation. The aggregate functions, such as count, sum, min, and max are added to validate the entity row. The validator is operated against the literal value, expression, query result, and so on. You must have the association accessor to add a collection validation. Time for action – adding a collection validator for the DeptEO file Now, we will add a Collection validator to DeptEO.xml for adding a count validation rule. Imagine a business rule that says that the number of employees added to department number 10 should be more than five. In this case, you will have a count operation for the employees added to department number 10 and show a message if the count is less than 5 for a particular department. We will break this action into the following three parts: Adding a declarative validation: In this case, the number of employees added to the department should be greater than five Specifying the execution rule: In our case, the execution of this validation should be fired only for department number 10 Displaying the error message: We have to show an error message to the user stating that the number of employees added to the department is less than five Adding the validation Following are the steps to add the validation: Go to the Business Rules section of DeptEO.xml. You will find the Business Rules section in the Overview tab. Select Entity Validators and click on the + button. You may right-click on the Entity Validators folder and then select New Validator to add a validator. Select Collection as Rule Type and move on to the Rule Definition tab. In this section, select Count for the Operation field; Accessor is the association accessor that gets added through a composition association relationship. Only the composition association accessor will be listed in the Accessor drop-down menu. Select the accessor for EmpEO listed in the dropdown, with Empno as the value for Attribute. 
In order to create a composition association accessor, you will have to create an association between DeptEO.xml and EmpEO.xml based on the Deptno attribute with cardinality of 1 to *. The Composition Association option has to be selected to enable a composition relationship between the two entities. The value of the Operator option should be selected as Greater Than. Compare with will be a literal value, which is 5 that can be entered in the Enter Literal Value section below. Specifying the execution rule Following are the steps to specify the execution: Now to set the execution rule, we will move to the Validation Execution tab. In the Conditional Execution section, add Deptno = '10' as the value for Conditional Execution Expression. In the Triggering Attribute section, select the Execute only if one of the Selected Attributes has been changed checkbox. Move the Empno attribute to the Selected Attributes list. This will make sure that the validation is fired only if the Empno attribute is changed: Displaying the error message Following are the steps to display the error message: Go to the Failure Handling section and select the Error option for Validation Failure Severity. In the Failure Message section, enter the following text: Please enter more than 5 Employees You can add the message stored in a resource bundle to Failure Message by clicking on the magnifying glass icon. What just happened? We have added a collection validation for our EmpEO.xml object. Every time a new employee is added to the department, the validation rule fires as we have selected Empno as our triggering attribute. The rule is also validated against the condition that we have provided to check if the department number is 10. If the department number is 10, the count for that department is calculated. When the user is ready to commit the data to the database, the rule is validated to check if the count is greater than 5. If the number of employees added is less than 5, the error message is displayed to the user. When we add a collection validator, the EmpEO.xml file gets updated with appropriate entries. The following entries get added for the aforementioned validation in the EmpEO.xml file: <validation:CollectionValidationBean Name="EmpEO_Rule_0" ResId= "com.empdirectory.model.entity.EmpEO_Rule_0" OnAttribute="Empno" OperandType="LITERAL" Inverse="false" CompareType="GREATERTHAN" CompareValue="5" Operation="count"> <validation:OnCondition> <![CDATA[Deptno = '10']]> </validation:OnCondition> </validation:CollectionValidationBean> <ResourceBundle> <PropertiesBundle PropertiesFile= "com.empdirectory.model.ModelBundle"/> </ResourceBundle> The error message that is added in the Failure Handling section is automatically added to the resource bundle. The Compare validator The Compare validator is used to compare the current attribute value with other values. The attribute value can be compared against the literal value, query result, expression, view object attribute, and so on. The operators supported are equal, not-equal, less-than, greater-than, less-than or equal to, and greater-than or equal to. The Key Exists validator This validator is used to check if the key value exists for an entity object. The key value can be a primary key, foreign key, or an alternate key. The Key Exists validator is used to find the key from the entity cache, and if the key is not found, the value is determined from the database. Because of this reason, the Key Exists validator is considered to give better performance. 
For example, when an employee is assigned to department deptNo 50, you want to make sure that deptNo 50 already exists in the DEPT table.
The Length validator
This validator is used to check the string length of an attribute value. The comparison is based on the character or byte length.
The List validator
This validator is used to create a validation for the attribute in a list. The operators included in this validation are In and NotIn. These two operators help the validation rule check if an attribute value is in a list.
The Method validator
Sometimes, we would like to add our own validation with some extra logic coded in our Java class file. For this purpose, ADF provides a declarative validator to map the validation rule against a method in the entity implementation class. The implementation class is generated in the Java section of the entity object. We need to create and select a method to handle method validation. The method is named validateXXX(), and the returned value will be of the Boolean type.
The Range validator
This validator is used to add a rule to validate a range for the attribute value. The operators included are Between and NotBetween. The range will have a minimum and maximum value that can be entered for the attribute.
The Regular Expression validator
For example, let us consider that we have a validation rule to check if the e-mail ID provided by the user is in the correct format. For the e-mail validation, we have some common rules such as the following:
The e-mail ID should start with a string and end with the @ character
The e-mail ID's last character cannot be the dot (.) character
Two @ characters are not allowed within an e-mail ID
For this purpose, ADF provides a declarative Regular Expression validator. We can use the regex pattern to check the value of the attribute. The e-mail address and the US phone number patterns are provided by default:
Email: [A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4}
Phone Number (US): [0-9]{3}-?[0-9]{3}-?[0-9]{4}
You should select the required pattern and then click on the Use Pattern button to use it. Matches and NotMatches are the two operators that are included with this validator.
The Script validator
If we want to include an expression and validate the business rule, the Script validator is the best choice. ADF supports Groovy expressions to provide Script validation for an attribute.
The UniqueKey validator
This validator is available for use only for entity-level validation. To check for uniqueness in the record, we would be using this validator. If we have a primary key defined for the entity object, the Uniqueness Check Definition section will list the primary keys defined to check for uniqueness, as shown in the following screenshot:
If we have to perform a uniqueness check against any attribute other than the primary key attributes, we will have to create an alternate key for the entity object.
Time for action – creating an alternate key for DeptEO
Currently, the DeptEO.xml file has Deptno as the primary key. We will add a business validation that states that there should not be a way to create a duplicate of a department name that is already available. The following steps show how to create an alternate key:
Go to the General section of the DeptEO.xml file and expand the Alternate Keys section. Alternate keys are keys that are not part of the primary key.
Click on the little + icon to add a new alternate key.
Move the Dname attribute from the Available list to the Selected list and click on the OK button.
What just happened? We have created an alternate key against the Dname attribute to prepare for a unique check validation for the department name. When the alternate key is added to an entity object, we will see the AltKey attribute listed in the Alternate Key section of the General tab. In the DeptEO.xml file, you will find the following code that gets added for the alternate key definition: <Key Name="AltKey" AltKey="true"> <DesignTime> <Attr Name="_isUnique" Value="true"/> <Attr Name="_DBObjectName" Value="HR.DEPT"/> </DesignTime> <AttrArray Name="Attributes"> <Item Value= "com.empdirectory.model.entity.DeptEO.Dname"/> </AttrArray> </Key> Have a go hero – compare the attributes For the first time, we have learned about the validations in ADF. So it's time for you to create your own validation for the EmpEO and DeptEO entity objects. Add validations for the following business scenarios: Continue with the creation of the uniqueness check for the department name in the DeptEO.xml file. The salary of the employees should not be greater than 1000. Display the following message if otherwise: Please enter Salary less than 1000. Display the message invalid date if the employee's hire date is after 10-10-2001. The length of the characters entered for Dname of DeptEO.xml should not be greater than 10. The location of a department can only be NEWYORK, CALIFORNIA, or CHICAGO. The department name should always be entered in uppercase. If the user enters a value in lowercase, display a message. The salary of an employee with the MANAGER job role should be between 800 and 1000. Display an error message if the value is not in this range. The employee name should always start with an uppercase letter and should end with any character other than special characters such as :, ;, and _. After creating all the validations, check the code and tags generated in the entity's XML file for each of the aforementioned validations.
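For the salary rule in the exercise list above, a Method validator is one option alongside the declarative Range validator. The following is a minimal sketch, not generated from the book's project, of what such a method could look like in the entity implementation class (assumed here to be called EmpEOImpl); the validateXXX naming and Boolean return type follow the Method validator convention described earlier, and the exact parameter type generated by JDeveloper depends on the attribute, so treat the signature as illustrative. The failure message itself is still defined declaratively on the rule.
import oracle.jbo.domain.Number;
import oracle.jbo.server.EntityImpl;

public class EmpEOImpl extends EntityImpl {

    // Attribute-level Method validator for the Sal attribute.
    // Returning true means the value is valid; returning false fails the
    // rule and the declaratively defined error message is shown to the user.
    public boolean validateSalBelowLimit(Number sal) {
        if (sal == null) {
            return true;            // let the Mandatory check handle missing values
        }
        return sal.doubleValue() < 1000;
    }
}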

Creating a Lazarus Component

Packt
22 Apr 2013
14 min read
(For more resources related to this topic, see here.)
Creating a new component package
We are going to create a custom-logging component and add it to the Misc tab of the component palette. To do this, we first need to create a new package and add our component to that package along with any other required resources, such as an icon for the component. To create a new package, do the following:
Select Package from the main menu.
Select New Package... from the submenu.
Select a directory that appears in the Save dialog and create a new directory called MyComponents.
Select the MyComponents directory.
Enter MyComponents as the filename and press the Save button.
Now, you have a new package that is ready to have components added to it. Follow these steps:
On the Package dialog window, click on the add (+) button.
Select the New Component tab.
Select TComponent as Ancestor Type.
Set New class name to TMessageLog.
Set Palette Page to Misc.
Leave all the other settings as they are.
You should now have something similar to the following screenshot. If so, click on the Create New Component button:
You should see messagelog.pas listed under the Files node in the Package dialog window. Let's open this file and see what the auto-generated code contains. Double-click on the file or choose Open file from the More menu in the Package dialog.
Do not name your component the same as the package. This will cause you problems when you compile the package later. If you were to do this, the .pas file would be overwritten, because the compile procedure creates a .pas file for the package automatically.
The code in the Source Editor window is given as follows:
unit TMessageLog;

{$mode objfpc}{$H+}

interface

uses
  Classes, SysUtils, LResources, Forms, Controls, Graphics, Dialogs, StdCtrls;

type
  TMessageLog = class(TComponent)
  private
    { Private declarations }
  protected
    { Protected declarations }
  public
    { Public declarations }
  published
    { Published declarations }
  end;

procedure Register;

implementation

procedure Register;
begin
  RegisterComponents('Misc', [TMessageLog]);
end;

end.
What should stand out in the auto-generated code is the global procedure RegisterComponents. RegisterComponents is contained in the Classes unit. The procedure registers the component (or components, if you create more than one in the unit) with the component page that is passed to it as the first parameter of the procedure.
Since everything is in order, we can now compile the package and install the component. Click the Compile button on the toolbar. Once the compile procedure has been completed, select Install, which is located in the menu under the Use button. You will be presented with a dialog telling you that Lazarus needs to be rebuilt. Click on the Yes button, as shown in the following screenshot:
The Lazarus rebuilding process will take some time. When it is complete, it will need to be restarted. If this does not happen automatically, then restart Lazarus yourself. On restarting Lazarus, select the Misc tab on the component palette. You should see the new component as the last component on the tab, as shown in the following screenshot:
You have now successfully created and installed a new component. You can now create a new application and add this component to a Lazarus form. The component in its current state does not perform any action. Let us now look at adding properties and events to the component that will be accessible in the Object Inspector window at design time.
Adding properties Properties of a component that you would like to have visible in the Object Inspector window must be declared as published. Properties are attributes that determine an object's status and behavior. A property is a name that is mapped to read and write methods or access data directly. This means, when you read or write a property, you are accessing a field or calling a method of the object. For example, let us add a FileName property to TMessageLog, which is the name of the file that messages will be written to. The actual field of the object that will store this data will be named fFileName. To the TMessageLog private declaration section, add: fFileName: String; To the TMessagLog published declaration section, add: property FileName: String read fFileName write fFileName; With these changes, when the packages are compiled and installed, the property FileName will be visible in the Object Inspector window when the TMessageLog declaration is added to a form in a project. You can do this now if you would like to verify this. Adding events Any interaction that a user has with a component, such as clicking it, generates an event. Events are also generated by the system in response to a method call or a change in a component's property, or if different component's property changes, such as the focus being set on one component causes the current component in focus to lose it, which triggers an event call. Event handlers are methods of the form containing the component; this technique is referred to as delegation. You will notice that when you double-click on a component's event in the object inspector it creates a new procedure of the form. Events are properties, and such methods are assigned to event properties, as we just saw with normal properties. Because events are the properties and use of delegation, multiple events can share the same event handler. The simplest way to create an event is to define a method of the type TNotifyEvent. For example, if we want to add an OnChange event to TMessageLog, we could add the following code: ...privateFonChange : TNotifyEvent;...publicproperty OnChange: TNotifyEvent read FOnChange write FOnChange;…end; When you double-click on the OnChange event in Object Inspector, the following method stub would be created in the form containing the TMessageLog component: procedure TForm.MessageLogChange(Sender: TObject);beginend; Some properties, such as OnChange or OnFocus, are sometimes called on the change of value of a component's property or the firing of another event. Traditionally, in this case, a method with the prefix of Do and with the suffix of the On event are called. So, in the case of our OnChange event, it would be called from the DoChange method (as called by some other method). Let us assume that, when a filename is set for the TMessageLog component, the procedure SetFileName is called, and that calls DoChange. The code would look as follows: procedure SetFileName(name : string);beginFFileName = name;//fire the eventDoChange;end;procedure DoChange;beginif Assigned(FOnChange) thenFOnChange(Self);end; The DoChange procedure checks to see if anything has been assigned to the FOnChange field. If it is assigned, then it executes what is assigned to it. What this means is that if you double-click on the OnChange event in Object Inspector, it assigns the method name you enter to FOnChange, and this is the method that is called by DoChange. 
Events with more parameters You probably noticed that the OnChange event only had one parameter, which was Sender and is of the type Object. Most of the time, this is adequate, but there may be times when we want to send other parameters into an event. In those cases, TNotifyEvent is not an adequate type, and we will need to define a new type. The new type will need to be a method pointer type, which is similar to a procedural type but has the keyword of object at the end of the declaration. In the case of TMessageLog, we may need to perform some action before or after a message is written to the file. To do this, we will need to declare two method pointers, TBeforeWriteMsgEvent and TAfterWriteMsgEvent, both of which will be triggered in another method named WriteMessage. The modification of our code will look as follows: typeTBeforeWriteMsgEvent = procedure(var Msg: String; var OKToWrite:Boolean) of Object;TAfterWriteMsgEvent = procedure(Msg: String) of Object;TmessageLog = class(TComponent)…publicfunction WriteMessage(Msg: String): Boolean;...publishedproperty OnBeforeWriteMsg: TBeforeWriteMsgEvent read fBeforeWriteMsgwrite fBeforeWriteMsg;property OnAfterWriteMsg: TAfterWriteMsgEvent read fAfterWriteMsgwrite fAfterWriteMsg;end;implementationfunction TMessageLog.WriteMessage(Msg: String): Boolean;varOKToWrite: Boolean;beginResult := FALSE;OKToWrite := TRUE;if Assigned(fBeforeWriteMsg) thenfBeforeWriteMsg(Msg, OKToWrite);if OKToWrite thenbegintryAssignFile(fLogFile, fFileName);if FileExists(fFileName) thenAppend(fLogFile)elseReWrite(fLogFile);WriteLn(fLogFile, DateTimeToStr(Now()) + ' - ' + Msg);if Assigned(fAfterWriteMsg) thenfAfterWriteMsg(Msg);Result := TRUE;CloseFile(fLogFile);exceptMessageDlg('Cannot write to log file, ' + fFileName + '!',mtError, [mbOK], 0);CloseFile(fLogFile);end; // try...exceptend; // ifend; // WriteMessage While examining the function WriteMessage, we see that, before the Msg parameter is written to the file, the FBeforeWriteMsg field is checked to see if anything is assigned to it, and, if so, the write method of that field is called with the parameters Msg and OKToWrite. The method pointer TBeforeWriteMsgEvent declares both of these parameters as var types. So if any changes are made to the method, the changes will be returned to WriteMessage function. If the Msg parameter is successfully written to the file, the FAfterWriteMsg parameter is checked for assigned and executed parameter (if it is). The file is then closed and the function's result is set to True. If the Msg parameter value is not able to be written to the file, then an error dialog is shown, the file is closed, and the function's result is set to False. With the changes that we have made to the TMessageLog unit, we now have a functional component. You can now save the changes, recompile, reinstall the package, and try out the new component by creating a small application using the TMessageLog component. Property editors Property editors are custom dialogs for editing special properties of a component. The standard property types, such as strings, images, or enumerated types, have default property editors, but special property types may require you to write custom property editors. Custom property editors must extend from the class TPropertyEditor or one of its descendant classes. Property editors must be registered in the Register procedure using the function RegisterPropertyEditor from the unit PropEdits. 
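To make that registration step concrete, the following sketch registers a custom editor for the FileName property of TMessageLog. The editor class name TLogFileNamePropertyEditor is invented for this illustration (a plain string property would normally be fine with the default editor), and the sketch assumes the classes provided by a stock Lazarus install: TStringPropertyEditor and paDialog from PropEdits, and TOpenDialog from Dialogs.

uses
  PropEdits, Dialogs;

type
  // Hypothetical editor: opens a file dialog for the FileName property
  TLogFileNamePropertyEditor = class(TStringPropertyEditor)
  public
    function GetAttributes: TPropertyAttributes; override;
    procedure Edit; override;
  end;

function TLogFileNamePropertyEditor.GetAttributes: TPropertyAttributes;
begin
  // paDialog adds the ellipsis (...) button in the Object Inspector
  Result := inherited GetAttributes + [paDialog];
end;

procedure TLogFileNamePropertyEditor.Edit;
var
  Dlg: TOpenDialog;
begin
  Dlg := TOpenDialog.Create(nil);
  try
    Dlg.FileName := GetValue;        // current property value
    if Dlg.Execute then
      SetValue(Dlg.FileName);        // write the chosen file name back
  finally
    Dlg.Free;
  end;
end;

procedure Register;
begin
  RegisterComponents('Misc', [TMessageLog]);
  // Register the editor for the String property named 'FileName' on TMessageLog
  RegisterPropertyEditor(TypeInfo(String), TMessageLog, 'FileName',
    TLogFileNamePropertyEditor);
end;

Because these base classes live in the Lazarus IDEIntf package, the package containing the component must list IDEIntf as a requirement for this registration code to compile.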
An example of property editor class declaration is given as follows: TPropertyEditor = classpublicfunction AutoFill: Boolean; Virtual;procedure Edit; Virtual; // double-clicking the property value toactivateprocedure ShowValue; Virtual; //control-clicking the propertyvalue to activatefunction GetAttributes: TPropertyAttributes; Virtual;function GetEditLimit: Integer; Virtual;function GetName: ShortString; Virtual;function GetHint(HintType: TPropEditHint; x, y: integer): String;Virtual;function GetDefaultValue: AnsiString; Virtual;function SubPropertiesNeedsUpdate: Boolean; Virtual;function IsDefaultValue: Boolean; Virtual;function IsNotDefaultValue: Boolean; Virtual;procedure GetProperties(Proc: TGetPropEditProc); Virtual;procedure GetValues(Proc: TGetStrProc); Virtual;procedure SetValue(const NewValue: AnsiString); Virtual;procedure UpdateSubProperties; Virtual;end; Having a class as a property of a component is a good example of a property that would need a custom property editor. Because a class has many fields with different formats, it is not possible for Lazarus to have the object inspector make these fields available for editing without a property editor created for a class property, as with standard type properties. For such properties, Lazarus shows the property name in parentheses followed by a button with an ellipsis (…) that activates the property editor. This functionality is handled by the standard property editor called TClassPropertyEditor, which can then be inherited to create a custom property editor, as given in the following code: TClassPropertyEditor = class(TPropertyEditor)publicconstructor Create(Hook: TPropertyEditorHook; APropCount: Integer);Override;function GetAttributes: TPropertyAttributes; Override;procedure GetProperties(Proc: TGetPropEditProc); Override;function GetValue: AnsiString; Override;property SubPropsTypeFilter: TTypeKinds Read FSubPropsTypeFilterWrite SetSubPropsTypeFilterDefault tkAny;end; Using the preceding class as a base class, all you need to do to complete a property editor is add a dialog in the Edit method as follows: TMyPropertyEditor = class(TClassPropertyEditor)publicprocedure Edit; Override;function GetAttributes: TPropertyAttributes; Override;end;procedure TMyPropertyEditor.Edit;varMyDialog: TCommonDialog;beginMyDialog := TCommonDialog.Create(NIL);try…//Here you can set attributes of the dialogMyDialog.Options := MyDialog.Options + [fdShowHelp];...finallyMyDialog.Free;end;end; Component editors Component editors control the behavior of a component when double-clicked or right-clicked in the form designer. Classes that define a component editor must descend from TComponentEditor or one of its descendent classes. The class should be registered in the Register procedure using the function RegisterComponentEditor. Most of the methods of TComponentEditor are inherited from it's ancestor TBaseComponentEditor, and, if you are going to write a component editor, you need to be aware of this class and its methods. 
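Before looking at the declaration of TBaseComponentEditor below, here is a brief sketch of how a finished component editor might be hooked up. The class TMessageLogComponentEditor and its single verb are invented for this illustration; the verb methods mirror the TCheckListBox example shown later, and the new part is the call to RegisterComponentEditor, which is assumed to come from the ComponentEditors unit of the Lazarus IDEIntf package with the signature shown.

uses
  ComponentEditors, Dialogs;

type
  // Hypothetical editor that adds one context-menu verb to TMessageLog
  TMessageLogComponentEditor = class(TComponentEditor)
  public
    function GetVerbCount: Integer; override;
    function GetVerb(Index: Integer): String; override;
    procedure ExecuteVerb(Index: Integer); override;
  end;

function TMessageLogComponentEditor.GetVerbCount: Integer;
begin
  Result := 1;                         // one menu item
end;

function TMessageLogComponentEditor.GetVerb(Index: Integer): String;
begin
  Result := 'Show Log File Name...';   // text of the menu item
end;

procedure TMessageLogComponentEditor.ExecuteVerb(Index: Integer);
begin
  // Invoked when the menu item is clicked in the form designer
  ShowMessage((GetComponent as TMessageLog).FileName);
end;

procedure Register;
begin
  // Associate the editor with the component class
  RegisterComponentEditor(TMessageLog, TMessageLogComponentEditor);
end;
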
Declaration of TBaseComponentEditor is as follows: TBaseComponentEditor = classprotectedpublicconstructor Create(AComponent: TComponent;ADesigner: TComponentEditorDesigner); Virtual;procedure Edit; Virtual; Abstract;procedure ExecuteVerb(Index: Integer); Virtual; Abstract;function GetVerb(Index: Integer): String; Virtual; Abstract;function GetVerbCount: Integer; Virtual; Abstract;procedure PrepareItem(Index: Integer; const AnItem: TMenuItem);Virtual; Abstract;procedure Copy; Virtual; Abstract;function IsInInlined: Boolean; Virtual; Abstract;function GetComponent: TComponent; Virtual; Abstract;function GetDesigner: TComponentEditorDesigner; Virtual;Abstract;function GetHook(out Hook: TPropertyEditorHook): Boolean;Virtual; Abstract;procedure Modified; Virtual; Abstract;end; Let us look at some of the more important methods of the class. The Edit method is called on the double-clicking of a component in the form designer. GetVerbCount and GetVerb are called to build the context menu that is invoked by right-clicking on the component. A verb is a menu item. GetVerb returns the name of the menu item. GetVerbCount gets the total number of items on the context menu. The PrepareItem method is called for each menu item after the menu is created, and it allows the menu item to be customized, such as adding a submenu or hiding the item by setting its visibility to False. ExecuteVerb executes the menu item. The Copy method is called when the component is copied to the clipboard. A good example of a component editor is the TCheckListBox component editor. It is a descendant from TComponentEditor so all the methods of the TBaseComponentEditor do not need to be implemented. TComponentEditor provides empty implementation for most methods and sets defaults for others. Using this, methods that are needed for the TCheckListBoxComponentEditor component are overwritten. An example of the TCheckListBoxComponentEditor code is given as follows: TCheckListBoxComponentEditor = class(TComponentEditor)protectedprocedure DoShowEditor;publicprocedure ExecuteVerb(Index: Integer); override;function GetVerb(Index: Integer): String; override;function GetVerbCount: Integer; override;end;procedure TCheckGroupComponentEditor.DoShowEditor;varDlg: TCheckGroupEditorDlg;beginDlg := TCheckGroupEditorDlg.Create(NIL);try// .. shortenedDlg.ShowModal;// .. shortenedfinallyDlg.Free;end;end;procedure TCheckGroupComponentEditor.ExecuteVerb(Index: Integer);begincase Index of0: DoShowEditor;end;end;function TCheckGroupComponentEditor.GetVerb(Index: Integer): String;beginResult := 'CheckBox Editor...';end;function TCheckGroupComponentEditor.GetVerbCount: Integer;beginResult := 1;end; Summary In this article, we learned how to create a new Lazarus package and add a new component to that using the New Package dialog window to create our own custom component, TMessageLog. We also learned about compiling and installing a new component into the IDE, which requires Lazarus to rebuild itself in order to do so. Moreover, we discussed component properties. Then, we became acquainted with the events, which are triggered by any interaction that a user has with a component, such as clicking it, or by a system response, which could be caused by the change in any component of a form that affects another component. We studied that Events are properties, and they are handled through a technique called delegation. 
We discovered that the simplest way to create an event is to declare it as a TNotifyEvent; if you need to send more parameters than the single Sender parameter that TNotifyEvent provides, you need to declare your own method pointer type. We learned that property editors are custom dialogs for editing special properties of a component that aren't of a standard type, such as string or integer, and that they must extend from TPropertyEditor. Then, we discussed component editors, which control the behavior of a component when it is right-clicked or double-clicked in the form designer, and that a component editor must descend from TComponentEditor or one of its descendant classes. Finally, we looked at an example of a component editor for the TCheckListBox. Resources for Article: Further resources on this subject: User Extensions and Add-ons in Selenium 1.0 Testing Tools [Article] 10 Minute Guide to the Enterprise Service Bus and the NetBeans SOA Pack [Article] Support for Developers of Spring Web Flow 2 [Article]

Testing your App

Packt
12 Apr 2013
15 min read
(For more resources related to this topic, see here.) Types of testing Testing can happen on many different levels. From the code level to integration and even testing individual functions of the user-facing implementation of an enterprise application, there are numerous tools and techniques to test your application. In particular, we will cover the following: Unit testing Functional testing Browser testing Black box versus white box testing Testing is often talked about within the context of black box versus white box testing. This is a useful metaphor in understanding testing at different levels. With black box testing, you look at your application as a black box knowing nothing of its internals—typically from the perspective of a user of the system. You simply execute functionality of the application and test whether the expected outcomes match the actual outcomes. White box differs from black box testing in that you know the internals of the application upfront and can thus pinpoint failures directly and test for specific conditions. In this case, you simply feed in data into specific parts of the system and test whether the expected output matches the actual output. Unit testing The first level of testing is at the code level. When you are te sting specific and individual units of code on whether they meet their stated goals, you are unit testing. Unit testing is often talked about in conjunction with test-driven development, the practice of writing unit tests first and then writing the minimal amount of code necessary to pass those tests. Having a suite of unit tests against your code and employing test-driven processes—when done right—can keep your code focused and help to ensure the stability of your enterprise application. Typically, unit tests are set up in a separate folder in your codebase. Each test case is composed of the following parts: Setup to build the test conditions under which the code or module is being tested An instantiation and invocation of the code or module being tested A verification of the results returned Setting up your unit test You usually start by setting up your test data. For example, if you are testing a piece of code that requires an authenticated account, you might consider creating a set of test users of your enterprise application. It is advisable that your test data be coupled with your test so that your tests are not dependent on your system being in a specific state. Invoking your target Once you have set up your test data and the conditions in which the code you are testing needs to run, you are ready to invoke it. This can be as simple as invoking a method. Mocking is a very important concept to understand when unit testing. Consider a set of unit tests for a business logic module that has a dependency on some external application programming interface (API). Now imagine if the API goes down. The tests would fail. While it is nice to get an indication that the API you are dependent upon is having issues, a failing unit test because of this is misleading because the goal of the unit test is to test the business logic rather than external resources on which you are dependent. This is where mock objects come into the picture. Mock objects are stubs that replicate the interface of a resource. They are set up to always return the same data the external resource would under normal conditions. This way you are isolating your test to just the unit of code you are testing. Mocking employs a pattern called dependency injection or inversion of control. 
Sure, the code you are testing may be dependent on an external resource. Yet how will you swap it in a mock resource? Code that is easy to unit test allows you to pass in or "inject" these dependencies when invoking it. Dependency injection is a design pattern where code that is dependent on an external resource has that dependency passed into it thereby decoupling your code from that dependency. The following code snippet is difficult to test since the dependency is encapsulated into the function being tested. We are at an impasse. var doSomething = function() {var api = getApi();//A bunch of codeapi.call();}var testOfDoSomething = function() {var mockApi = getMockApi();//What do I do now???} The following new code snippet uses dependency injection to circumvent the problem by instantiating the dependency and passing it into the function being tested: var doSomething = function(api) {//A bunch of codeapi.call();}var testOfDoSomething = function() {var mockApi = getMockApi();doSomething(mockApi);} In general, this is good practice not just for unit testing but for keeping your code clean and easy to manage. Instantiating a dependency once and injecting where it is needed makes it easier to change that dependency if the need occurs. There are many mocking frameworks available including JsMockito (http://jsmockito.org/ ) for JavaScript and Mockery (https://github.com/padraic/mockery)for PHP. Verifying the results Once you have invoked the code being tested, you need to capture the results and verify them. Verification comes in the form of assertions. Every unit testing framework comes with its own set of assertion methods, but the concept is the same: take a result and test it against an expectation. You can assert whether two things are equal. You can assert whether two things are not equal. You can assert whether a result is a valid number of a string. You can assert whether one value is greater than another. The general idea is you are testing actual data against your hypothesis. Assertions usually bubble up to the framework's reporting module and are manifested as a list of passed or failed tests. Frameworks and tools A bevy of tools have arisen in the past few years that aid in unit testing of JavaScript. What follows is a brief survey of notable frameworks and tools used to unit test JavaScript code. JsTestDriver JsTestDriver is a framework built at Google for unit testing. It has a server that runs on multiple browsers on a machine and will allow you to execute test cases in the Eclipse IDE. This screenshot shows the results of JsTestDriver. When run, it executes all tests configured to run and displays the results. More information about JsTestDriver can be found at http://code.google.com/p/js-test-driver/. QUnit QUnit is a JavaScript unit testing framework created by John Resig of jQuery fame. To use it, you need to create only a test harness web page and include the QUnit library as a script reference. There is even a hosted version of the library. Once included, you need to only invoke the test method, passing in a function and a set of assertions. It will then generate a nice report. Although QUnit has no dependencies and can test standard JavaScript code, it is oriented around jQuery. More information about QUnit can be found at http://qunitjs.com/. Sinon.JS Often coupled with QUnit, Sinon.JS introduces the concept of spying wherein it records function calls, the arguments passed in, the return value, and even the value of the this object. 
You can also create fake objects such as fake servers and fake timers to make sure your code tests in isolation and your tests run as quickly as possible. This is particularly useful when you need to make fake AJAX requests. More information about Sinon.JS can be found at http://sinonjs.org/. Jasmine Jasmine is a testing framework based on the concept of behavior-driven development. Much akin to test-driven development, it extends it by infusing domain-driven design principles and seeks to frame unit tests back to user-oriented behavior and business value. Jasmine as well as other behavior-driven design based frameworks build test cases—called specs—using as much English as possible so that when a report is generated, it reads more naturally than a conventional unit test report. More information about Jasmine can be found at http://pivotal.github.com/jasmine/. Functional testing Selenium has become the name in website functional testing. Its browser automation capabilities allow you to record test cases in your favorite web browser and run them across multiple browsers. When you have this, you can automate your browser tests, integrate them with your build and continuous integration server, and run them simultaneously to get quicker results when you need them. Selenium includes the Selenium IDE, a utility for recording and running Selenium scripts. Built as a Firefox add-on, it allows you to create Selenium test cases by loading and clicking on web pages in Firefox. You can easily record what you do in the browser and replay it. You can then add tests to determine whether actual behavior matches expected behavior. It is very useful for quickly creating simple test cases for a web application. Information on installing it can be found at http://seleniumhq.org/docs/02_selenium_ide.html. The following screenshot shows the Selenium IDE. Click on the red circle graphic on the right-hand side to set it to record, and then browse to http://google.com in the browser window and search for "html5". Click on the red circle graphic to stop recording. You can then add assertions to test whether certain properties of the page match expectations. In this case, we are asserting that the text of the first link in the search results is for the Wikipedia page for HTML5. When we run our test, we see that it passes (of course, if the search results for "html5" on Google change, then this particular test will fail). Selenium includes WebDriver, an API that allows you to drive a browser natively either locally or remotely. Coupled with its automation capabilities, WebDriver can run tests against browsers on multiple remote machines to achieve greater scale. For our MovieNow application, we will set up functional testing by using the following components: The Selenium standalone server The php-webdriver connector from Facebook PHPUnit The Selenium standalone server The Selenium standalone server routes requests to the HTML5 application. It needs to be started for the tests to run. It can be deployed anywhere, but by default it is accessed at http://localhost:4444/wd/hub. You can download the latest version of the standalone server at http://code.google.com/p/selenium/downloads/list or you can fire up the version included in the sample code under the test/lib folder. To start the server, execute the following line via the command line (you will need to have Java installed on your machine): java -jar lib/selenium-server-standalone-#.jar Here, # indicates the version number. 
You should see something akin to the following: At this point, it is listening for connections. You will see log messages here as you run your tests. Keep this window open. The php-webdriver connector from Facebook The php-webdriver connector serves as a library for WebDriver in PHP. It gives you the ability to make and inspect web requests using drivers for all the major web browsers as well as HtmlUnit. Thus it allows you to create test cases against any web browser. You can download it at https://github.com/facebook/php-webdriver.We have included the files in the webdriver folder. PHPUnit PHPUnit is a unit testing framework that provides the constructs necessary for running our tests. It has the plumbing necessary for building and validating test cases. Any unit testing framework will work with Selenium; we have chosen PHPUnit since it is lightweight and works well with PHP. You can download and install PHPUnit any number of ways (you can go to http://www.phpunit.de/manual/current/en/installation.html for more information on installing it). We have included the phpunit.phar file in the test/lib folder for your convenience. You can simply run it by executing the following via the command line: php lib/phpunit.phar <your test suite>.php To begin, we will add some PHP files to the test folder. The first file is webtest. php. Create this file and add the following code: <?phprequire_once "webdriver/__init__.php";class WebTest extends PHPUnit_Framework_TestCase {protected $_session;protected $_web_driver;public function __construct() {parent::__construct();$_web_driver = new WebDriver();$this->_session = $_web_driver->session('firefox');}public function __destruct() {$this->_session->close();unset($this->_session);}}?> The WebTest class integrated WebDriver into PHPUnit via the php-webdriver connector. This will serve as the base class for all of our test cases. As you can see, it starts with the following: require_once "webdriver/__init__.php"; This is a reference to __init__.php in the php-webdriver files. This brings in all the classes needed for WebDriver. In the constructor, WebTest initializes the driver and session objects used in all test cases. In the destructor, it cleans up its connections. Now that we have everything set up, we can create our first functional test. Add a file called generictest.php to the test folder. We will import WebTest and extend that class as follows: <?phprequire_once "webtest.php";class GenericTest extends WebTest {}?> Inside of the GenericTest class, add the following test case: public function testForData() {$this->_session->open('http://localhost/html5-book/Chapter%2010/');sleep(5); //Wait for AJAX data to load$result = $this->_session->element("id", "movies-near-me")->text();//May need to change settings to always allow sharing of location$this->assertGreaterThan(0, strlen($result));} We will open a connection to our application (feel free to change the URL to wherever you are running your HTML5 application), wait 5 seconds for the initial AJAX to load, and then test for whether the movies-near-me div is populated with data. To run this test, go to the command line and execute the following lines: chmod +x lib/phpunit.pharphp lib/phpunit.phar generictest.php You should see the following: This indicates that the test is passed. Congratulations! Now let us see it fail. 
Add the following test case: public function testForTitle() {$this->_session->open('http://localhost/html5-book/Chapter%2010/');$result = $this->_session->title();$this->assertEquals('Some Title', $result);} Rerun PHPUnit and you should see something akin to the following: As you can see, it was expecting 'Some Title' but actually found 'MovieNow'. Now that we have gotten you started, we will let you create your own tests. Refer to http://www.phpunit.de/manual/3.7/en/index.html for guidance on the different assertions you can make using PHPUnit. More information about Selenium can be found at http://seleniumhq.org/. Browser testing HTML5 enterprise applications must involve actually looking at the application on different web browsers. Thankfully, many web browsers are offered on multiple platforms. Google Chrome, Mozilla Firefox, and Opera all have versions that will install easily on Windows, Mac OSX, and flavors of Linux such as Ubuntu. Safari has versions for Windows and Mac OSX, and there are ways to install it on Linux with some tweaking. Nevertheless, Internet Explorer can only run on Windows. One way to work around this limitation is to install virtualization software. Virtualization allows you to run an entire operating system virtually within a host operating system. It allows you to run Windows applications on Mac OSX or Linux applications on Windows. There are a number of notable virtualization packages including VirtualBox, VMWare Fusion, Parallels, and Virtual PC. Although Virtual PC runs only on Windows, Microsoft does offer a set of prepackaged virtual hard drives that include specific versions of Internet Explorer for testing purposes. See the following URLs for details: http://www.microsoft. com/en-us/download/details.aspx?id=11575. Another common way to test for compatibility is to use web-based browser virtualization. There are a number of services such as BrowserStack (http://www.browserstack.com/), CrossBrowserTesting (http://crossbrowsertesting.com/), and Sauce Labs (https://saucelabs.com/) that offer a service whereby you can enter a URL and see it rendered in an assortment of web browsers and platforms (including mobile) virtually through the web. Many of them even work through a proxy to allow you to view, test, and debug web applications running on your local machine. Continuous integration With any testing solution, it is important to create and deploy your builds and run your tests in an automated fashion. Continuous integration solutions like Hudson, Jenkins, CruiseControl, and TeamCity allow you to accomplish this. They merge code from multiple developers, and run a number of automated functions from deploying modules to running tests. They can be invoked to run on a schedule basis or can be triggered by events such as a commitment of code to a code repository via a postcommit hook. Summary We covered several types of testing in this article including unit testing, functional testing, and browser testing. For each type of testing, there are many tools to help you make sure that your enterprise application runs in a stable way, most of which we covered bar a few. Because every minute change to your application code has the potential to destabilize it, we must assume that that every change does. To ensure that your enterprise applications remain stable and with minimal defect, having a testing strategy in place with a rich suite of tests—from unit to functional—combined with a continuous integration server running those tests is essential. 
One must, of course, weigh the investment in time for writing and executing tests against the time needed for writing production code, but the savings in long-term maintenance costs can make that investment worthwhile. Resources for Article: Further resources on this subject: Building HTML5 Pages from Scratch [Article] Blocking versus Non blocking scripts [Article] HTML5: Generic Containers [Article]

Getting Started with ZeroMQ

Packt
04 Apr 2013
5 min read
(For more resources related to this topic, see here.) The message queue A message queue, or technically a FIFO (First In First Out) queue is a fundamental and well-studied data structure. There are different queue implementations such as priority queues or double-ended queues that have different features, but the general idea is that the data is added in a queue and fetched when the data or the caller is ready. Imagine we are using a basic in-memory queue. In case of an issue, such as power outage or a hardware failure, the entire queue could be lost. Hence, another program that expects to receive a message will not receive any messages. However, adopting a message queue guarantees that messages will be delivered to the destination no matter what happens. Message queuing enables asynchronous communication between loosely-coupled components and also provides solid queuing consistency. In case of insufficient resources, which prevent you from immediately processing the data that is sent, you can queue them up in the message queue server that would store the data until the destination is ready to accept the messages. Message queuing has an important role in large-scaled distributed systems and enables asynchronous communication. Let's have a quick overview on the difference between synchronous and asynchronous systems. In ordinary synchronous systems, tasks are processed one at a time. A task is not processed until the task in-process is finished. This is the simplest way to get the job done. Synchronous system We could also implement this system with threads. In this case threads process each task in parallel. Threaded synchronous system In the threading model, threads are managed by the operating system itself on a single processor or multiple processors/cores. Asynchronous Input/Output (AIO) allows a program to continue its execution while processing input/output requests. AIO is mandatory in real-time applications. By using AIO, we could map several tasks to a single thread. Asynchronous system The traditional way of programming is to start a process and wait for it to complete. The downside of this approach is that it blocks the execution of the program while there is a task in progress. However, AIO has a different approach. In AIO, a task that does not depend on the process can still continue. You may wonder why you would use message queue instead of handling all processes with a single-threaded queue approach or multi-threaded queue approach. Let's consider a scenario where you have a web application similar to Google Images in which you let users type some URLs. Once they submit the form, your application fetches all the images from the given URLs. However: If you use a single-threaded queue, your application would not be able to process all the given URLs if there are too many users If you use a multi-threaded queue approach, your application would be vulnerable to a distributed denial of service attack (DDoS) You would lose all the given URLs in case of a hardware failure In this scenario, you know that you need to add the given URLs into a queue and process them. So, you would need a message queuing system. Introduction to ZeroMQ Until now we have covered what a message queue is, which brings us to the purpose of this article, that is, ZeroMQ. The community identifies ZeroMQ as "sockets on steroids". The formal definition of ZeroMQ is it is a messaging library that helps developers to design distributed and concurrent applications. 
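To make the "sockets on steroids" idea concrete, here is a minimal request-reply sketch using the Python binding, pyzmq, one of the many language bindings available. It is not part of this article's own examples; it assumes pyzmq is installed (pip install pyzmq) and that port 5555 is free on the local machine.

# server.py - replies "World" to every request
import zmq

context = zmq.Context()
socket = context.socket(zmq.REP)      # reply socket
socket.bind("tcp://*:5555")

while True:
    message = socket.recv()           # wait for a request
    print("Received:", message)
    socket.send(b"World")

# client.py - sends "Hello" and prints the reply
import zmq

context = zmq.Context()
socket = context.socket(zmq.REQ)      # request socket
socket.connect("tcp://localhost:5555")

socket.send(b"Hello")
print("Reply:", socket.recv())

Run the server in one terminal and the client in another; there is no broker process in between, which is the design point discussed next.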
The first thing we need to know about ZeroMQ is that it is not a traditional message queuing system, such as ActiveMQ, WebSphereMQ, or RabbitMQ. ZeroMQ is different. It gives us the tools to build our own message queuing system. It is a library. It runs on different architectures from ARM to Itanium, and has support for more than 20 programming languages. Simplicity ZeroMQ is simple. We can do some asynchronous I/O operations and ZeroMQ could queue the message in an I/O thread. ZeroMQ I/O threads are asynchronous when handling network traffic, so it can do the rest of the job for us. If you have worked on sockets before, you will know that it is quite painful to work on. However, ZeroMQ makes it easy to work on sockets. Performance ZeroMQ is fast. The website Second Life managed to get 13.4 microseconds end-to-end latencies and up to 4,100,000 messages per second. ZeroMQ can use multicast transport protocol, which is an efficient method to transmit data to multiple destinations. The brokerless design Unlike other traditional message queuing systems, ZeroMQ is brokerless. In traditional message queuing systems, there is a central message server (broker) in the middle of the network and every node is connected to this central node, and each node communicates with other nodes via the central broker. They do not directly communicate with each other. However, ZeroMQ is brokerless. In a brokerless design, applications can directly communicate with each other without any broker in the middle. ZeroMQ does not store messages on disk. Please do not even think about it. However, it is possible to use a local swap file to store messages if you set zmq.SWAP. Summary This article explained what a message queuing system is, discussed the importance of message queuing, and introduced ZeroMQ to the reader. Resources for Article : Further resources on this subject: RESTful Web Service Implementation with RESTEasy [Article] BizTalk Server: Standard Message Exchange Patterns and Types of Service [Article] AJAX Chat Implementation: Part 1 [Article]  

Collaborative Work with SVN and Git

Packt
19 Mar 2013
11 min read
(For more resources related to this topic, see here.) Working with SVN At first we will take a look at the SVN perspective.The SVN perspective provides a group of views that help us to work with a Subversion server. You can open this perspective by using the Perspective menu in the top-right of the Aptana Studio window. The important and most frequently used views related to SVN, which we will take a look at, are the SVN Repositories view, the Team | History view, and the SVN | Console view. These views are categorized as the views selection into the SVN and Team folder, as shown in the following screenshot:     The SVN Repositories view allows you to add new repositories and manage all available repositories. Additionally, you have the option to create new tags or branches of the Repository. These views belong to the SVN views, as shown in the following screenshot: The History view allows you to get an overview about the project and revisions history. This view is used by SVN and Git projects; for this reason the view is stored in the Team views group. The History view can be opened by the menu under Window | Show View | History. Here you can see all the revisions with their comments and data creation. Furthermore, you can get a view into all revisions of a file and you also have, the ability to compare all revisions. The following screenshot shows the History view: Within the SVN Console view, you will find the output from all the SVN actions that are executed by Aptana Studio. Therefore, if you have an SVN conflict or something else, you can take a look at this Console view's output and you might locate the problem a bit faster. The SVN Console view was automatically integrated in the Aptana Studio Console, while the SVN plugin was installed. So if you need the SVN Console view, just open the general Console view from Window | Show view | Console. If the Console view is open, just use the View menu to select the Console type, which in this case is the SVN Console entry. The following screenshot shows the Console view and how you can select SVN Console: However, before we can start work with SVN, we have to add the related SVN Repository. Time for action – adding an SVN Repository Open the SVN perspective by using the Perspective menu in the top-right corner of the Aptana Studio window. Now, you should see the SVN Repositories view on the left-hand side of the Aptana Studio window. If it does not open automatically, open it by selecting the view from the navigation Window | Show view | SVN Repositories. In order to add a new SVN Repository, click on the small SVN icon with the plus sign at the top of the SVN Repositories view. You will now have to enter the address of the Subversion server in the pop up that appears, for example, svn://219.199.99.99/svn_codeSnippets. After you have clicked on the Finish button, Aptana Studio tries to reach the Subversion server in order to complete the process of adding a new Repository. If the Subversion server was reached and the SVN Repository is password protected, you will have to enter the access data for reading the SVN data. If you don't have the required access data available currently, you can abort the process and Aptana Studio will ask you whether you want to keep the location. If you click on NO, the newly added SVN Repository will be deleted, but if you click on YES, the location will remain. This allows you to retrieve the required access data later, enter them, and begin to work with the SVN Repository. 
Regardless of whether you keep the location or enter the required access data, the new SVN Repository will be listed in the SVN Repository view. What just happened? We have added a new SVN Repository into Aptana Studio. The new Repository is now listed in our SVN Repositories view and we can check this out from there, or create new tags or branches. Checking out an SVN Repository After we have seen how to add a new SVN Repository to Aptana Studio, we also want to know how we can check this Repository in order to work with the contained source code. You can do this, like many other things are done in Aptana Studio, in different ways. We will take a look at how we can do this directly from the SVN Repositories view, because every time we add a new Repository to Aptana Studio, we will also want to check it and use it as a project. Time for action – checking out an SVN Repository Open the SVN Repositories view. Expand the SVN Repository that you wish to check out. We do this because we want to check out the trunk directory from the Repository, not the tags and branches directory. Now, right-click on the trunk directory and select the Check Out... entry. Aptana Studio will now read the properties of the SVN Repository directly from the Subversion server. When all the required properties are received, the following window will appear on your screen: First of all, we select the Check out as a project in the workspace option and enter the name of the new SVN project. After this, we select the revision that we want to check out. This is usually the head revision. This means that you want to check out the last committed one—called the head revision. But you can check out any revision number you want from the past. If this is so, just deselect the Check out HEAD revision checkbox and enter the number of the revision that you want to check out. In the last section, we select the Fully recursive option within the Depth drop-down list and uncheck the Ignore externals checkbox, but select the Allow unversioned obstructions checkbox. After you have selected these settings, click on the Next button. Finally, you can select the location where the project should be created. Normally, this is the current workspace, but sometimes the location is different from the workspace. Maybe you have a web server installed and want to place the source code directly into the web root, in order to run the web application directly on your local machine. Finally, whether you select a different location for the project or not, you have to click on the Finish button to finalize the "Check out" into a new project. What just happened? We have checked out an SVN Repository from the SVN Repositories view. In addition to that, we have seen how we can also check out the Repository source code into another location other than the workspace. Finally, you should now have a ready SVN project where you can start working. File states If you're now changing some lines within a source code file, the Project Explorer view and the App Explorer view change the files' icon, so that you see a small white star on the black background. This means the file has changed since the last commit/update. There are some more small icons, which give you information about the related files and directories. Let's take a closer look at the Label Decorations tab as shown in the following screenshot: Now, we will discuss the symbols in the order shown in the previous screenshot: The small rising arrow shows you that the file or directory is an external one. 
The small yellow cylinder shows you that the file or directory is already under version control. The red X shows you that this file or directory is marked for deletion. The next time you commit your changes, the file will be deleted. The small blue cylinder shows you that the file or directory is switched. These are files or directories that belong to a different working copy other than their local parent directory. The small blue plus symbol shows you that this already versioned file or directory needs to be added to the repository. These could be files or directories you may have renamed or moved to a different directory. The small cornered square shows you that these files have a conflict with the repository. The small white star on the black background shows you that these files or directories have been changed since the last commit. If the file's or directory's icon has no small symbol, it means the file is ignored by the SVN Repository. The small white hook on the black background shows you that this file or directory is locked. The small red stop sign shows you that this file or directory is read-only. The small yellow cylinder shows you that this file or directory is already under version control and unchanged since the last commit. The small question mark shows you that this new file or directory isn't currently under version control. If you didn't find your icons in this list, or your icons look different, no problem. Just navigate to Window | Preferences and select the Label Decorations entry under Team | SVN within the tree. Here you will find all of the icons which are used. Committing an SVN Repository If you have finished extending your web application with some new features, you can now commit these changes so that the changes are stored in the Repository, and other developers can also update their working copies and get the new features. But how can you simply commit the changed files? Unlike a Git Repository, SVN allows you to commit changes in a tree from the Repository. By using Git, you can only commit changes in the complete Repository at once. But for now, we want to commit our SVN Repository changes, therefore just follow the steps mentioned in the following Time for action – updating and committing an SVN Repository section. Time for action – updating and committing an SVN Repository The first step, before performing a commit, is to perform an update on your working copy. Therefore, we will start by doing this, Aptana Studio reads all new revisions from the Subversion server and merges them with your local working copy. In order to do this update, right-click on your project root and select Team | Update to HEAD. When your working copy is up to date, navigate to the App Explorer view or the Project Explorer view and right-click on the files or directories that you want to commit, and then select the Commit... entry in the Team option. If you select a few directories or the whole project, the Commit window lists only those files within the selection that have changed since the last commit. So, you are able to select just the files and directories that you want to commit. Compose the selected files and directories as you need, and enter a comment in the top of the window. Why do you have to enter a comment while committing a change? Because, by committing the SVN Repository, it automatically saves the date, time, and your username; with this data the revision history stores information about who has changed which file at what time. 
In addition to that comes the commenting part. The comment should describe what kind of changes were made and what is their purpose. To finalize the commit, you just have to click on the OK button and the commit process will start. As described previously, you can see the output from all your SVN processes within the SVN Console view. In the following screenshot you can see the result of our commit process: What just happened? We have updated our working copy in order to commit our changes. Now the other developers can update their working copies too and can then work with your extensions. It should be noted again that it's recommended to perform an update before every commit. You can perform an update in a single file tree node. You don't have to update your whole project every time, a single node can also be committed. Updating an SVN Repository Additionally, similar to the SVN check out, you have the option to update your working copy not only to the Head revision, but also to a special revision number. In order to do this, right-click on the project root within the Project Explorer view and select the Update to Head... option or the Update to Version... option under the Team tab. After selecting one of these entries, Aptana Studio determines all the new files and files to be updated, downloads them from the Repository, and merges them with your local working copy. Now you should have all the source code from your current project. But, how can you identify which parts of a file are new or have been changed? No problem! Aptana Studio allows you not only to compare two different local files, you can also compare files from different revisions in your Repository. Refer to the following Time for action section to understand how this works:

Painting – Multi-finger Paint

Packt
08 Mar 2013
19 min read
(For more resources related to this topic, see here.) What is multi-touch? The genesis of multi-touch on Mac OS X was the ability to perform two finger scrolling on a trackpad. The technology was further refined on mobile touch screen devices such as the iPod Touch, iPhone, and iPad. And it has also matured on the Mac OS X platform to allow the use of multi-touch or magic trackpad combined with one or more fingers and a motion to interact with the computer. Gestures are intuitive and allow us to control what is on the screen with fluid motions. Some of the things that we can do using multi-touch are as follows: Two finger scrolling: This is done by placing two fingers on the trackpad and dragging in a line Tap or pinch to zoom : This is done by tapping once with a single finger, or by placing two fingers on the trackpad and dragging them closer to each other Swipe to navigate: This is done by placing one or more fingers on the trackpad and quickly dragging in any direction followed by lifting all the fingers Rotate : This is done by placing two fingers on the trackpad and turning them in a circular motion while keeping them on the trackpad But these gestures just touch the surface of what is possible with multi-touch hardware. The magic trackpad can detect and track all 10 of our fingers with ease. There are plenty of new things that can be done with multi-touch — we are just waiting for someone to invent them. Implementing a custom view Multi-touch events are sent to the NSView objects. So before we can invent that great new multi-touch thing, we first need to understand how to implement a custom view. Essentially, a custom view is a subclass of NSView that overrides some of the behavior of the NSView object. Primarily, it will override the drawRect: method and some of the event handling methods. Time for action — creating a GUI with a custom view By now we should be familiar with creating new Xcode projects so some of the steps here are very high level. Let's get started! Create a new Xcode project with Automatic Reference Counting enabled and these options enabled as follows: Option Value Product Name Multi-Finger Paint Company Identifier com.yourdomain Class Prefix Your initials After Xcode creates the new project, design an icon and drag it in to the App Icon field on the TARGET Summary. Remember to set the Organization in the Project Document section of the File inspector. Click on the filename MainMenu.xib in the project navigator. Select the Multi-Finger Paint window and in the Size inspector change its Width and Height to 700 and 600 respectively. Enable both the Minimum Size and Maximum Size Constraints values. From the Object Library , drag a custom view into the window. In the Size inspector , change the Width and Height of the custom view to 400 and 300 respectively. Center the window using the guides that appear. From the File menu, select New>, then select the File…option. Select the Mac OS X Cocoa Objective-C class template and click on the Next button. Name the class BTSFingerView and select subclass of NSView. It is very important that the subclass is NSView. If we make a mistake and select the wrong subclass, our App won't work. Click on the button titled Create to create the .h and .m files. Click on the filename BTSFingerView.m and look at it carefully. It should look similar to the following code: // // BTSFingerView.m // Multi-Finger Paint // // Created by rwiebe on 12-05-23. // Copyright (c) 2012 BurningThumb Software. All rights reserved. 
// #import "BTSFingerView.h" @implementation BTSFingerView - (id)initWithFrame:(NSRect)frame { self = [super initWithFrame:frame]; if (self) { // Initialization code here. } return self; } - (void)drawRect:(NSRect)dirtyRect { // Drawing code here. } @end By default, custom views do not receive events (keyboard, mouse, trackpad, and so on) but we need our custom view to receive events. To ensure our custom view will receive events, add the following code to the BTSFingerView.m file to accept the first responder: /* ** - (BOOL) acceptsFirstResponder ** ** Make sure the view will receive ** events. ** ** Input: none ** ** Output: YES to accept, NO to reject */ - (BOOL) acceptsFirstResponder { return YES; } And, still in the BTSFingerView.m file, modify the initWithFrame method to allow the view to accept touch events from the trackpad as follows: - (id)initWithFrame:(NSRect)frame { self = [super initWithFrame:frame]; if (self) { // Initialization code here. // Accept trackpad events [self setAcceptsTouchEvents: YES]; } return self; } Once we are sure our custom view will receive events, we can start the process of drawing its content. This is done in the drawRect: method. Add the following code to the drawRect: method to clear it with a transparent color and draw a focus ring if the view is first responder: /* ** - (void)drawRect:(NSRect)dirtyRect ** ** Draw the view content ** ** Input: dirtyRect - the rectangle to draw ** ** Output: none */ - (void)drawRect:(NSRect)dirtyRect { // Drawing code here. // Preserve the graphics content // so that other things we draw // don't get focus rings [NSGraphicsContext saveGraphicsState]; // color the background transparent [[NSColor clearColor] set]; // If this view has accepted first responder // it should draw the focus ring if ([[self window] firstResponder] == self) { NSSetFocusRingStyle(NSFocusRingAbove); } // Fill the view with fully transparent // color so that we can see through it // to whatever is below [[NSBezierPath bezierPathWithRect:[self bounds]] fill]; // Restore the graphics content // so that other things we draw // don't get focus rings [NSGraphicsContext restoreGraphicsState]; } Next, we need to go back into the .xib file, and select our custom view, and then select the Identity Inspector where we will see that in the section titled Custom Class, the Class field contains NSView as the class. Finally, to connect this object to our new custom view program code, we need to change the Class to BTSFingerView as shown in the following screenshot: What just happened? We created our Xcode project and implemented a custom NSView object that will receive events. When we run the project we notice that the focus ring is drawn so that we can be confident the view has accepted the firstResponder status. How to receive multi-touch events Because our custom view accepts first responder, the Mac OS will automatically send events to it. We can override the methods that process the events that we want to handle in our view. Specifically, we can override the following events and process them to handle multi-touch events in our custom view: - (void)touchesBeganWithEvent:(NSEvent *)event - (void)touchesMovedWithEvent:(NSEvent *)event - (void)touchesEndedWithEvent:(NSEvent *)event - (void)touchesCancelledWithEvent:(NSEvent *)event Time for action — drawing our fingers When the multi-touch or magic trackpad is touched, our custom view methods will be invoked and we will be able to draw the placement of our fingers on the trackpad in our custom view. 
In Xcode, click on the filename BTSFingerView.h in the project navigator and add the following highlighted property: // // BTSFingerView.h // Multi-Finger Paint // // Created by rwiebe on 12-05-23. // Copyright (c) 2012 BurningThumb Software. All rights reserved. // #import <Cocoa/Cocoa.h> @interface BTSFingerView : NSView // A reference to the object that will // store the currently active touches @property (strong) NSMutableDictionary *m_activeTouches; @end In Xcode, click on the file BTSFingerView.m in the project navigator and add the following program code to synthesize the property: // // BTSFingerView.m // Multi-Finger Paint // // Created by rwiebe on 12-05-23. // Copyright (c) 2012 BurningThumb Software. All rights reserved. // #import "BTSFingerView.h" @implementation BTSFingerView // Synthesize the object that will // store the currently active touches @synthesize m_activeTouches; Add the following code to the initWithFrame: method in the BTSFingerView.m file to create the dictionary object that will be used to store the active touch objects: - (id)initWithFrame:(NSRect)frame { self = [super initWithFrame:frame]; if (self) { // Initialization code here. // Create the mutable dictionary that // will hold the list of currently active // touch events m_activeTouches = [[NSMutableDictionary alloc] init]; } return self; } Add the following code to the BTSFingerView.m file to add BeganWith touch events to the dictionary of active touches: /** ** - (void)touchesBeganWithEvent:(NSEvent *)event ** ** Invoked when a finger touches the trackpad ** ** Input: event - the touch event ** ** Output: none */ - (void)touchesBeganWithEvent:(NSEvent *)event { // Get the set of began touches NSSet *l_touches = [event touchesMatchingPhase:NSTouchPhaseBegan inView:self]; // For each began touch, add the touch // to the active touches dictionary // using its identity as the key for (NSTouch *l_touch in l_touches) { [m_activeTouches setObject:l_touch forKey:l_touch. 
identity]; } // Redisplay the view [self setNeedsDisplay:YES]; } Add the following code to the BTSFingerView.m file to add moved touch events to the dictionary of active touches: /** ** - (void)touchesMovedWithEvent:(NSEvent *)event ** ** Invoked when a finger moves on the trackpad ** ** Input: event - the touch event ** ** Output: none */ - (void)touchesMovedWithEvent:(NSEvent *)event { // Get the set of move touches NSSet *l_touches = [event touchesMatchingPhase:NSTouchPhaseMoved inView:self]; // For each move touch, update the touch // in the active touches dictionary // using its identity as the key for (NSTouch *l_touch in l_touches) { // Update the touch only if it is found // in the active touches dictionary if ([m_activeTouches objectForKey:l_touch.identity]) { [m_activeTouches setObject:l_touch forKey:l_touch.identity]; } } // Redisplay the view [self setNeedsDisplay:YES]; } Add the following code to the BTSFingerView.m file to remove the touch from the dictionary of active touches when the touch ends: /** ** - (void)touchesEndedWithEvent:(NSEvent *)event ** ** Invoked when a finger lifts off the trackpad ** ** Input: event - the touch event ** ** Output: none */ - (void)touchesEndedWithEvent:(NSEvent *)event { // Get the set of ended touches NSSet *l_touches = [event touchesMatchingPhase:NSTouchPhaseEnded inView:self]; // For each ended touch, remove the touch // from the active touches dictionary // using its identity as the key for (NSTouch *l_touch in l_touches) { [m_activeTouches removeObjectForKey:l_touch.identity]; } // Redisplay the view [self setNeedsDisplay:YES]; } Add the following code to the BTSFingerView.m file to remove the touch from the dictionary of active touches when the touch is cancelled: /** ** - (void)touchesCancelledWithEvent:(NSEvent *)event ** ** Invoked when a touch is cancelled ** ** Input: event - the touch event ** ** Output: none */ - (void)touchesCancelledWithEvent:(NSEvent *)event { // Get the set of cancelled touches NSSet *l_touches = [event touchesMatchingPhase:NSTouchPhaseCancelled inView:self]; // For each cancelled touch, remove the touch // from the active touches dictionary // using its identity as the key for (NSTouch *l_touch in l_touches) { [m_activeTouches removeObjectForKey:l_touch.identity]; } // Redisplay the view [self setNeedsDisplay:YES]; } When we touch the trackpad we are going to draw a "finger cursor" in our custom view. We need to decide how big we want that cursor to be and the color that we want the cursor to be. Then we can add a series of #define to the file named BTSFingerView.h to define that value: // Define the size of the cursor that // will be drawn in the view for each // finger on the trackpad #define D_FINGER_CURSOR_SIZE 20 // Define the color values that will // be used for the finger cursor #define D_FINGER_CURSOR_RED 1.0 #define D_FINGER_CURSOR_GREEN 0.0 #define D_FINGER_CURSOR_BLUE 0.0 #define D_FINGER_CURSOR_ALPHA 0.5 Now we can add the program code to our drawRect: implementation that will draw the finger cursors in the custom view. 
// For each active touch for (NSTouch *l_touch in m_activeTouches.allValues) { // Create a rectangle reference to hold the // location of the cursor NSRect l_cursor; // Determine where the touch point NSPoint l_touchNP = [l_touch normalizedPosition]; // Calculate the pixel position of the touch point l_touchNP.x = l_touchNP.x * [self bounds].size.width; l_touchNP.y = l_touchNP.y * [self bounds].size.height; // Calculate the rectangle around the cursor l_cursor.origin.x = l_touchNP.x - (D_FINGER_CURSOR_SIZE / 2); l_cursor.origin.y = l_touchNP.y - (D_FINGER_CURSOR_SIZE / 2); l_cursor.size.width = D_FINGER_CURSOR_SIZE; l_cursor.size.height = D_FINGER_CURSOR_SIZE; // Set the color of the cursor [[NSColor colorWithDeviceRed: D_FINGER_CURSOR_RED green: D_FINGER_CURSOR_GREEN blue: D_FINGER_CURSOR_BLUE alpha: D_FINGER_CURSOR_ALPHA] set]; // Draw the cursor as a circle [[NSBezierPath bezierPathWithOvalInRect: l_cursor] fill]; } What just happened? We implemented the methods required to keep track of the touches and to draw the location of the touches in our custom view. If we run the App now, and move the mouse pointer over the view area, and then touch the trackpad, we will see red circles that track our fingers being drawn in the view as shown in the following screenshot: What is an NSBezierPath? A Bezier Path consists of straight and curved line segments that can be used to draw recognizable shapes. In our program code, we use Bezier Paths to draw a rectangle and a circle but a Bezier Path can be used to draw many other shapes. How to manage the mouse cursor One of the interesting things about the trackpad and the mouse is the association between a single finger touch and the movement of the mouse cursor. Essentially, Mac OS X treats a single finger movement as if it was a mouse movement. The problem with this is that when we move just a single finger on the trackpad, the mouse cursor will move away from our NSView causing it to lose focus so that when we lift our finger we need to move the mouse cursor back to our NSView to receive touch events. Time for action — detaching the mouse cursor from the mouse hardware The solution to this problem is to detach the mouse cursor from the mouse hardware (typically called capturing the mouse) whenever a touch event is active so that the cursor is not moved by touch events. In addition, since a "stuck" mouse cursor may be cause for concern to our App user, we can hide the mouse cursor when touches are active. In Xcode, click on the file named BTSFingerView.h in the project navigator and add the following flag to the interface: @interface BTSFingerView : NSView { // Define a flag so that touch methods can behave // differently depending on the visibility of // the mouse cursor BOOL m_cursorIsHidden; } In Xcode, click on the file named BTSFingerView.m in the project navigator. Add the following code to the beginning of the touchesBeganWithEvent: method to detach and hide the mouse cursor when a touch begins. We only want to do this one time so it is guarded by a BOOL flag and an if statement to make sure we don't do it for every touch that begins. 
- (void)touchesBeganWithEvent:(NSEvent *)event { // If the mouse cursor is not already hidden, if (NO == m_cursorIsHidden) { // Detach the mouse cursor from the mouse // hardware so that moving the mouse (or a // single finger) will not move the cursor CGAssociateMouseAndMouseCursorPosition(false); // Hide the mouse cursor [NSCursor hide]; // Remember that we detached and hid the // mouse cursor m_cursorIsHidden = YES; } Add the following code to the end of the touchesEndedWithEvent: method to attach and unhide the mouse cursor when all touches end. We use a BOOL flag to remember the state of the cursor so that the touchesBeganWithEvent: method will re-hide it when the next touch begins. // If there are no remaining active touches if (0 == [m_activeTouches count]) { // Attach the mouse cursor to the mouse // hardware so that moving the mouse (or a // single finger) will move the cursor CGAssociateMouseAndMouseCursorPosition(true); // Show the mouse cursor [NSCursor unhide]; // Remember that we attached and unhid the // mouse cursor so that the next touch that // begins will detach and hide it m_cursorIsHidden = NO; } // Redisplay the view [self setNeedsDisplay:YES]; } Add the following code to the end of the touchesCancelledWithEvent: method to attach and unhide the mouse cursor when all touches end. We use a BOOL flag to remember the state of the cursor so that the touchesBeganWithEvent: method will re-hide it when the next touch begins. // If there are no remaining active touches if (0 == [m_activeTouches count]) { // Attach the mouse cursor to the mouse // hardware so that moving the mouse (or a // single finger) will move the cursor CGAssociateMouseAndMouseCursorPosition(true); // Show the mouse cursor [NSCursor unhide]; // Remember that we attached and unhid the // mouse cursor so that the next touch that // begins will detach and hide it m_cursorIsHidden = NO; } // Redisplay the view [self setNeedsDisplay:YES]; } While we are looking at the movement of the mouse, we also notice that the focus ring for our custom view is being drawn regardless of whether or not the mouse cursor is over our view. Since touch events will only be sent to our view if the mouse cursor is over it, we want to change the program code so that the focus ring only appears when the mouse cursor is over the custom view. This is something we can do with another BOOL flag. Add the following code to the file to define a BOOL flag that will allow us to determine if the mouse cursor is over our custom view: // Define a flag so that view methods can behave // differently depending on the position of the // mouse cursor BOOL m_mouseIsInFingerView; In the file named BTSFingerView.m, add the following code to create a tracking rectangle that matches the bounds of our custom view. Once the tracking rectangle is active, the methods mouseEntered: and mouseExited: will be automatically invoked as the mouse cursor enters and exits our custom view. /** ** - (void)viewDidMoveToWindow ** ** Informs the receiver that it has been added to ** a new view hierarchy. 
** ** We need to make sure the view window is valid ** and when it is, we can add the tracking rect ** ** Once the tracking rect is added the mouseEntered: ** and mouseExited: events will be sent to our view ** */ - (void)viewDidMoveToWindow { // Is the views window valid if ([self window] != nil) { // Add a tracking rect such that the // mouseEntered; and mouseExited: methods // will be automatically invoked [self addTrackingRect:[self bounds] owner:self userData:NULL assumeInside:NO]; } } In the file named BTSFingerView.m, add the following code to implement the mouseEntered: and mouseExited: methods. In those methods, we set the BOOL flag so that the drawRect: method knows whether or not to draw the focus ring. /** ** - (void)mouseEntered: ** ** Informs the receiver that the mouse cursor ** entered a tracking rectangle ** ** Since we only have a single tracking rect ** we know the mouse is over our custom view ** */ - (void)mouseEntered:(NSEvent *)theEvent { // Set the flag so that other methods know // the mouse cursor is over our view m_mouseIsInFingerView = YES; // Redraw the view so that the focus ring // will appear [self setNeedsDisplay:YES]; } /** ** - (void)mouseExited: ** ** Informs the receiver that the mouse cursor ** exited a tracking rectangle ** ** Since we only have a single tracking rect ** we know the mouse is not over our custom view ** */ - (void)mouseExited:(NSEvent *)theEvent { // Set the flag so that other methods know // the mouse cursor is not over our view m_mouseIsInFingerView = NO; // Redraw the view so that the focus ring // will not appear [self setNeedsDisplay:YES]; } Finally, in the drawRect: method, change the program code that draws the focus ring to only do so if the mouse cursor is in the tracking rectangle: // If this view has accepted first responder // it should draw the focus ring but only if // the mouse cursor is over this view if ( ([[self window] firstResponder] == self) && (YES == m_mouseIsInFingerView) ) { NSSetFocusRingStyle(NSFocusRingAbove); } What just happened? We implemented the program code that will prevent the mouse cursor from moving out of our custom view when touch events are active. In doing so we noticed that our focus ring behavior could be improved. Therefore we added additional program code to ensure the focus ring is visible only when the mouse pointer is over our view. Performing 2D drawing in a custom view Mac OS X provides a number of ways to perform drawing. The methods provided range from very simple methods to very complex methods. For our multi-finger painting program we are going to use the core graphics APIs designed to draw a path. We are going to collect each stroke as a series of points and construct a path from those points so that we can draw the stroke. Each active touch event will have a corresponding active stroke object that needs to be drawn in our custom view. When a stroke is finished, and the App user lifts the finger, we are going to send the finished stroke to another custom view so that it is drawn only one time and not each time fingers move. The optimization of using the second view will ensure our finger tracking is not slowed down too much by drawing. Before we can begin drawing, we need to create two new objects that will be used to store individual points and strokes. The program code for these two objects is not shown but the objects are included in the Multi-Finger Paint Xcode project. 
The two objects are as follows: BTSPoint BTSStroke The BTSPoint object is a wrapper for an NSPoint structure. The NSPoint structure needs to be wrapped in an object so that it can be stored in an NSArray object. It has a single instance variable: NSPoint m_point; It implements the following methods which allows it to be initialized: return the point (x and y), return just the x value, or return just the y value. For more information on the object, we can read the source code file in the project: - (id) initWithNSPoint:(NSPoint)a_point; - (NSPoint) point; - (CGFloat)x; - (CGFloat)y; The BTSStroke object is a wrapper for an array of BTSPoint objects, a color, and a stroke width. It is used to store strokes that are drawn in our custom NSView. It has the following instance variables and properties: float m_red; float m_green; float m_blue; float m_alpha; float m_width; @property (strong) NSMutableArray *m_points; It implements the following methods which allows it to be initialized: a new point to be added, return the array of points, return any of the color components, and return the stroke width. For more information on the object, we can read the source code file in the project: - (id) initWithWidth:(float)a_width red:(float)a_red green:(float)a_green blue:(float)a_blue alpha:(float)a_alpha; - (void) addPoint:(BTSPoint *)a_point; - (NSMutableArray *) points; - (float)red; - (float)green; - (float)blue; - (float)alpha; - (float)width;
Asynchrony in Action

Packt
06 Mar 2013
17 min read
(For more resources related to this topic, see here.) Asynchrony When we talk about C# 5.0, the primary topic of conversation is the new asynchronous programming features. What does asynchrony mean? Well, it can mean a few different things, but in our context, it is simply the opposite of synchronous. When you break up execution of a program into asynchronous blocks, you gain the ability to execute them side-by-side, in parallel. As you can see in the following diagram, executing multiple actions concurrently can bring various positive qualities to your programs: Parallel execution can bring performance improvements to the execution of a program. The best way to put this into context is by way of an example, one that has been experienced all too often in the world of desktop software. Let's say you have an application that you are developing, and this software should fulfill the following requirements: When the user clicks on a button, initiate a call to a web service. Upon completion of the web service call, store the results into a database. Finally, bind the results and display them to the user. There are a number of problems with the naïve way of implementing this solution. The first is that many developers write code in such a way that the user interface will be completely unresponsive while we are waiting to receive the results of these web service calls. Then, once the results finally arrive, we continue to make the user wait while we store the results in a database, an operation that the user does not care about in this case. The primary vehicle for mitigating these kinds of problems in the past has been writing multithreaded code. This is of course nothing new, as multi-threaded hardware has been around for many years, along with software capabilities to take advantage of it. Most programming languages did not provide a very good abstraction layer on top of this hardware, often letting (or requiring) you program directly against the hardware threads. Thankfully, Microsoft introduced a new library to simplify the task of writing highly concurrent programs, which is explained in the next section. Task Parallel Library The Task Parallel Library (TPL) was introduced in .NET 4.0 (along with C# 4.0). Two things are worth noting about it up front. Firstly, it is a huge topic that cannot be examined properly in such a small space. Secondly, it is highly relevant to the new asynchrony features in C# 5.0, so much so that it is the literal foundation upon which the new features are built. So, in this section, we will cover the basics of the TPL, along with some of the background information about how and why it works. TPL introduces a new type, the Task type, which abstracts away the concept of something that must be done into an object. At first glance, you might think that this abstraction already exists in the form of the Thread class. While there are some similarities between Task and Thread, the implementations have quite different implications. With a Thread class, you can program directly against the lowest level of parallelism supported by the operating system, as shown in the following code: Thread thread = new Thread(new ThreadStart(() => { Thread.Sleep(1000); Console.WriteLine("Hello, from the Thread"); })); thread.Start(); Console.WriteLine("Hello, from the main thread"); thread.Join(); In the previous example, we create a new Thread object, which when started will sleep for a second and then write out the text Hello, from the Thread.
After we call thread.Start(), the code on the main thread immediately continues and writes Hello, from the main thread. After a second, we see the text from the background thread printed to the screen. In one sense, this example of using the Thread class shows how easy it is to branch off the execution to a background thread, while allowing execution of the main thread to continue, unimpeded. However, the problem with using the Thread class as your "concurrency primitive" is that the class itself is an indication of the implementation, which is to say, an operating system thread will be created. As far as abstractions go, it is not really an abstraction at all; your code must manage the lifecycle of the thread, while at the same time dealing with the task the thread is executing. If you have multiple tasks to execute, spawning multiple threads can be disastrous, because the operating system can only spawn a finite number of them. For performance-intensive applications, a thread should be considered a heavyweight resource, which means you should avoid using too many of them, and keep them alive for as long as possible. As you might imagine, the designers of the .NET Framework did not simply leave you to program against this without any help. The early versions of the framework had a mechanism to deal with this in the form of the ThreadPool, which lets you queue up a unit of work, and have the thread pool manage the lifecycle of a pool of threads. When a thread becomes available, your work item is then executed. The following is a simple example of using the thread pool: int[] numbers = { 1, 2, 3, 4 }; foreach (var number in numbers) { ThreadPool.QueueUserWorkItem(new WaitCallback(o => { Thread.Sleep(500); string tabs = new String('\t', (int)o); Console.WriteLine("{0}processing #{1}", tabs, o); }), number); } This sample simulates multiple tasks, which should be executed in parallel. We start with an array of numbers, and for each number we want to queue a work item that will sleep for half a second, and then write to the console. This works much better than trying to manage multiple threads yourself because the pool will take care of spawning more threads if there is more work. When the configured limit of concurrent threads is reached, it will hold work items until a thread becomes available to process them. This is all work that you would have done yourself if you were using threads directly. However, the thread pool is not without its complications. First, it offers no way of synchronizing on completion of the work item. If you want to be notified when a job is completed, you have to code the notification yourself, whether by raising an event, or using a thread synchronization primitive, such as ManualResetEvent. You also have to be careful not to queue too many work items, or you may run into system limitations with the size of the thread pool. With the TPL, we now have a concurrency primitive called Task. Consider the following code: Task task = Task.Factory.StartNew(() => { Thread.Sleep(1000); Console.WriteLine("Hello, from the Task"); }); Console.WriteLine("Hello, from the main thread"); task.Wait(); Upon first glance, the code looks very similar to the sample using Thread, but they are very different. One big difference is that with Task, you are not committing to an implementation.
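Before we look deeper at Task, it is worth making the earlier point about thread pool completion concrete. The following sketch is not part of the original samples; it shows the kind of hand-rolled signalling you need if you stay with ThreadPool, using a CountdownEvent (a close relative of the ManualResetEvent mentioned above) because there are several work items. The class name and console messages are illustrative choices only.

using System;
using System.Threading;

class ThreadPoolCompletionSketch
{
    static void Main()
    {
        int[] numbers = { 1, 2, 3, 4 };

        // One CountdownEvent initialized to the number of work items
        // replaces juggling a ManualResetEvent per item
        using (var countdown = new CountdownEvent(numbers.Length))
        {
            foreach (var number in numbers)
            {
                ThreadPool.QueueUserWorkItem(o =>
                {
                    try
                    {
                        Thread.Sleep(500);
                        Console.WriteLine("processing #{0}", o);
                    }
                    finally
                    {
                        // Hand-rolled completion notification
                        countdown.Signal();
                    }
                }, number);
            }

            // Block until every queued item has signalled
            countdown.Wait();
        }

        Console.WriteLine("all work items finished");
    }
}

With Task, all of this plumbing collapses into a single Wait(), WaitAll(), or continuation, as the examples that follow show.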
The TPL uses some very interesting algorithms behind the scenes to manage the workload and system resources, and in fact, allows you to customize those algorithms through the use of custom schedulers and synchronization contexts. This gives you a high degree of control over the parallel execution of your programs. Dealing with multiple tasks, as we did with the thread pool, is also easier because each task has synchronization features built in. To demonstrate how simple it is to quickly parallelize an arbitrary number of tasks, we start with the same array of integers, as shown in the previous thread pool example: int[] numbers = { 1, 2, 3, 4 }; Because Task can be thought of as a primitive type that represents an asynchronous task, we can think of it as data. This means that we can use things such as Linq to project the numbers array to a list of tasks as follows: var tasks = numbers.Select(number => Task.Factory.StartNew(() => { Thread.Sleep(500); string tabs = new String('\t', number); Console.WriteLine("{0}processing #{1}", tabs, number); })); And finally, if we wanted to wait until all of the tasks were done before continuing on, we could easily do that by calling the following method: Task.WaitAll(tasks.ToArray()); Once the code reaches this method, it will wait until every task in the array completes before continuing on. This level of control is very convenient, especially when you consider that, in the past, you would have had to depend on a number of different synchronization techniques to achieve the very same result that was accomplished in just a few lines of TPL code. With the usage patterns that we have discussed so far, there is still a big disconnect between the code that spawns a task and the work the task performs. It is very easy to pass values into a background task, but the tricky part comes when you want to retrieve a value and then do something with it. Consider the following requirements: Make a network call to retrieve some data. Query the database for some configuration data. Process the results of the network data, along with the configuration data. The following diagram shows the logic: Both the network call and the query to the database can be done in parallel. With what we have learned so far about tasks, this is not a problem. However, acting on the results of those tasks would be slightly more complex, if it were not for the fact that the TPL provides support for exactly that scenario. There is an additional kind of task, called Task<T>, which is especially useful in cases like this. This generic version of a task expects the running task to ultimately return a value, whenever it is finished. Clients of the task can access the value through the .Result property of the task. When you call that property, it will return immediately if the task is completed and the result is available. If the task is not done, however, it will block execution in the current thread until it is. Using this kind of task, which promises you a result, you can write your programs such that you can plan for and initiate the parallelism that is required, and handle the response in a very logical manner.
Look at the following code: var webTask = Task.Factory.StartNew(() => { WebClient client = new WebClient(); return client.DownloadString("http://bing.com"); }); var dbTask = Task.Factory.StartNew(() => { // do a lengthy database query return new { WriteToConsole = true }; }); if (dbTask.Result.WriteToConsole) { Console.WriteLine(webTask.Result); } else { ProcessWebResult(webTask.Result); } In the previous example, we have two tasks, webTask and dbTask, which will execute at the same time. The webTask is simply downloading the HTML from http://bing.com. Accessing things over the Internet can be notoriously flaky due to the dynamic nature of the network, so you never know how long that is going to take. With the dbTask task, we are simulating accessing a database to return some stored settings. Although in this simple example we are just returning a static anonymous type, database access will usually access a different server over the network; again, this is an I/O bound task just like downloading something over the Internet. Rather than waiting for both of them to execute like we did with Task.WaitAll, we can simply access the .Result property of the task. If the task is done, the result will be returned and execution can continue, and if not, the program will simply wait until it is. This ability to write your code without having to manually deal with task synchronization is great because the fewer concepts a programmer has to keep in his/her head, the more resources he/she can devote to the program. If you are curious about where this concept of a task that returns a value comes from, you can look for resources pertaining to "Futures" and "Promises" at: http://en.wikipedia.org/wiki/Promise_%28programming%29 At the simplest level, this is a construct that "promises" to give you a result in the "future", which is exactly what Task<T> does. Task composability Having a proper abstraction for asynchronous tasks makes it easier to coordinate multiple asynchronous activities. Once the first task has been initiated, the TPL allows you to compose a number of tasks together into a cohesive whole using what are called continuations. Look at the following code: Task<string> task = Task.Factory.StartNew(() => { WebClient client = new WebClient(); return client.DownloadString("http://bing.com"); }); task.ContinueWith(webTask => { Console.WriteLine(webTask.Result); }); Every task object has the .ContinueWith method, which lets you chain another task to it. This continuation task will begin execution once the first task is done. Unlike the previous example, where we relied on the .Result property to wait until the task was done (thus potentially holding up the main thread while it completed), the continuation will run asynchronously. This is a better approach for composing tasks because you can write tasks that will not block the UI thread, which results in very responsive applications. Task composability does not stop at continuations, though; the TPL also provides for scenarios where a task must launch a number of subtasks. You have the ability to control how completion of those child tasks affects the parent task.
In the following example, we will start a task, which will in turn launch a number of subtasks: int[] numbers = { 1, 2, 3, 4, 5, 6 }; var mainTask = Task.Factory.StartNew(() => { // create a new child task for each number foreach (int num in numbers) { int n = num; Task.Factory.StartNew(() => { Thread.SpinWait(1000); int multiplied = n * 2; Console.WriteLine("Child Task #{0}, result {1}", n, multiplied); }); } }); mainTask.Wait(); Console.WriteLine("done"); Each child task will write to the console, so that you can see how the child tasks behave along with the parent task. When you execute the previous program, it results in the following output: Child Task #1, result 2 Child Task #2, result 4 done Child Task #3, result 6 Child Task #6, result 12 Child Task #5, result 10 Child Task #4, result 8 Notice how even though you have called the .Wait() method on the outer task before writing done, the execution of the child tasks continues a bit longer after the outer task has concluded. This is because, by default, child tasks are detached, which means their execution is not tied to the task that launched them. An unrelated but important bit in the previous example code is that we assigned the loop variable to an intermediary variable before using it in the task. int n = num; Task.Factory.StartNew(() => { int multiplied = n * 2; This is actually related to the way closures work, and is a common misconception when trying to "pass in" values in a loop. Because the closure actually captures a reference to the variable, rather than copying the value in, the value used will end up changing every time the loop iterates, and you will not get the behavior you expect. As you can see, an easy way to mitigate this is to copy the value to a local variable before passing it into the lambda expression. That way, it will not be a reference to an integer that changes before it is used. You do however have the option to mark a child task as attached, as follows: Task.Factory.StartNew( () => DoSomething(), TaskCreationOptions.AttachedToParent); The TaskCreationOptions enumeration has a number of different options. Specifically in this case, the ability to attach a task to its parent task means that the parent task will not complete until all child tasks are complete. Other options in TaskCreationOptions let you give hints and instructions to the task scheduler. From the documentation, the following are the descriptions of all these options: None: This specifies that the default behavior should be used. PreferFairness: This is a hint to a TaskScheduler class to schedule a task in as fair a manner as possible, meaning that tasks scheduled sooner will be more likely to be run sooner, and tasks scheduled later will be more likely to be run later. LongRunning: This specifies that a task will be a long-running, coarse-grained operation. It provides a hint to the TaskScheduler class that oversubscription may be warranted. AttachedToParent: This specifies that a task is attached to a parent in the task hierarchy. DenyChildAttach: This specifies that an exception of the type InvalidOperationException will be thrown if an attempt is made to attach a child task to the created task. HideScheduler: This prevents the ambient scheduler from being seen as the current scheduler in the created task. This means that operations such as StartNew or ContinueWith that are performed in the created task will see Default as the current scheduler. The best part about these options, and the way the TPL works, is that most of them are merely hints.
So you can suggest that a task you are starting is long running, or that you would prefer tasks scheduled sooner to run first, but that does not guarantee this will be the case. The framework will take the responsibility of completing the tasks in the most efficient manner, so if you prefer fairness, but a task is taking too long, it will start executing other tasks to make sure it keeps using the available resources optimally. Error handling with tasks Error handling in the world of tasks needs special consideration. In summary, when an exception is thrown, the CLR will unwind the stack frames looking for an appropriate try/catch handler that wants to handle the error. If the exception reaches the top of the stack, the application crashes. With asynchronous programs, though, there is not a single linear stack of execution. So when your code launches a task, it is not immediately obvious what will happen to an exception that is thrown inside of the task. For example, look at the following code: Task t = Task.Factory.StartNew(() => { throw new Exception("fail"); }); This exception will not bubble up as an unhandled exception, and your application will not crash if you leave it unhandled in your code. It was in fact handled, but by the task machinery. However, if you call the .Wait() method, the exception will bubble up to the calling thread at that point. This is shown in the following example: try { t.Wait(); } catch (Exception ex) { Console.WriteLine(ex.Message); } When you execute that, it will print out the somewhat unhelpful message One or more errors occurred, rather than the fail message that is the actual message contained in the exception. This is because unhandled exceptions that occur in tasks will be wrapped in an AggregateException exception, which you can handle specifically when dealing with task exceptions. Look at the following code: catch (AggregateException ex) { foreach (var inner in ex.InnerExceptions) { Console.WriteLine(inner.Message); } } If you think about it, this makes sense, because of the way that tasks are composable with continuations and child tasks, this is a great way to represent all of the errors raised by this task. If you would rather handle exceptions on a more granular level, you can also pass a special TaskContinuationOptions parameter as follows: Task.Factory.StartNew(() => { throw new Exception("Fail"); }).ContinueWith(t => { // log the exception Console.WriteLine(t.Exception.ToString()); }, TaskContinuationOptions.OnlyOnFaulted); This continuation task will only run if the task that it was attached to is faulted (for example, if there was an unhandled exception). Error handling is, of course, something that is often overlooked when developers write code, so it is important to be familiar with the various methods of handling exceptions in an asynchronous world.
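To tie these error-handling techniques together, the following self-contained sketch (not taken from the article's samples; the task bodies, the messages, and the decision to fail the even-numbered tasks are purely illustrative) waits on several tasks at once, flattens the resulting AggregateException, and then reads the results of the tasks that succeeded.

using System;
using System.Linq;
using System.Threading.Tasks;

class AggregateHandlingSketch
{
    static void Main()
    {
        // Start a few tasks; the even-numbered ones will fail
        var tasks = Enumerable.Range(1, 4)
            .Select(i => Task.Factory.StartNew(() =>
            {
                if (i % 2 == 0)
                {
                    throw new InvalidOperationException("Task " + i + " failed");
                }
                return i * 10;
            }))
            .ToArray();

        try
        {
            // WaitAll gathers every unhandled task exception
            // into a single AggregateException
            Task.WaitAll(tasks);
        }
        catch (AggregateException ex)
        {
            // Flatten() unwraps nested AggregateExceptions that can
            // appear when tasks have attached children or continuations
            foreach (var inner in ex.Flatten().InnerExceptions)
            {
                Console.WriteLine("Caught: {0}", inner.Message);
            }
        }

        // The results of the tasks that did succeed are still available
        foreach (var t in tasks.Where(t => t.Status == TaskStatus.RanToCompletion))
        {
            Console.WriteLine("Result: {0}", t.Result);
        }
    }
}

Flatten() is worth knowing about because attached child tasks and continuations can nest one AggregateException inside another; flattening gives you a single list of inner exceptions to log or inspect.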
Getting started with Kinect for Windows SDK Programming

Packt
20 Feb 2013
12 min read
(For more resources related to this topic, see here.) System requirements for the Kinect for Windows SDK While developing applications for any device using an SDK, compatibility plays a pivotal role. It is really important that your development environment must fulfill the following set of requirements before starting to work with the Kinect for Windows SDK. Downloading the example code You can download the example code files for all Packt books you have purchased from your account at http://www.PacktPub.com. If you purchased this book elsewhere, you can visit http://www.PacktPub.com/support. and register to have the files e-mailed directly to you. Supported operating systems The Kinect for Windows SDK, as its name suggests, runs only on the Windows operating system. The following are the supported operating systems for development: Windows 7 Windows Embedded 7 Windows 8 The Kinect for Windows sensor will also work on Windows operating systems running in a virtual machine such as Microsoft HyperV, VMWare, and Parallels. System configuration The hardware requirements are not as stringent as the software requirements. It can be run on most of the hardware available in the market. The following are the minimum configurations required for development with Kinect for Windows: A 32- (x86) or 64-bit (x64) processor Dual core 2.66 GHz or faster processor Dedicated USB 2.0 bus 2 GB RAM The Kinect sensor It goes without saying, you need a Kinect sensor for your development. You can use the Kinect for Windows or the Kinect for Xbox sensor for your development. Before choosing a sensor for your development, make sure you are clear about the limitations of the Kinect for Xbox sensor over the Kinect for Windows sensor, in terms of features, API supports, and licensing mechanisms. The Kinect for Windows sensor By now, you are already familiar with the Kinect for Windows sensor and its different components. The Kinect for Windows sensor comes with an external power supply, which supplies the additional power, and a USB adapter to connect with the system. For the latest updates and availability of the Kinect for Windows sensor, you can refer to http://www.microsoft.com/en-us/kinectforwindows/site. The Kinect for Xbox sensor If you already have a Kinect sensor with your Xbox gaming console, you may use it for development. Similar to the Kinect for Windows sensor, you will require a separate power supply for the device so that it can power up the motor, camera, IR sensor, and so on. If you have bought a Kinect sensor with an Xbox as a bundle, you will need to buy the adapter / power supply separately. You can check out the external power supply adapter at http://www.microsoftstore.com. If you have bought only the Kinect for Xbox sensor, you will have everything that is required for a connection with a PC and external power cable. Development tools and software The following are the software that are required for development with Kinect SDK: Microsoft Visual Studio 2010 Express or higher editions of Visual Studio Microsoft .NET Framework 4.0 or higher Kinect for Windows SDK Kinect for Windows SDK uses the underlying speech capability of a Windows operating system to interact with the Kinect audio system. This will require Microsoft Speech Platform – Server Runtime, the Microsoft Speech Platform SDK, and a language pack to be installed in the system, and these will be installed along with the Kinect for Windows SDK. The system requirements for SDK may change with upcoming releases. 
Refer to http://www.microsoft.com/en-us/ kinectforwindows/. for the latest system requirements. Evaluation of the Kinect for Windows SDK   Though the Kinect for Xbox sensor has been in the market for quite some time, Kinect for Windows SDK is still fairly new in the developer paradigm, and it's evolving. The book is written on Kinect for Windows SDK v1.6. The Kinect for Windows SDK was first launched as a Beta 1 version in June 2011, and after a thunderous response from the developer community, the updated version of Kinect for Windows SDK Beta 2 version was launched in November 2011. Initially, both the SDK versions were a non-commercial release and were meant only for hobbyists. The first commercial version of Kinect for Windows SDK (v1.0) was launched in February 2012 along with a separate commercial hardware device. SDK v1.5 was released on May 2012 with bunches of new features, and the current version of Kinect for Windows SDK (v1.6) was launched in October 2012. The hardware hasn't changed since its first release. It was initially limited to only 12 countries across the globe. Now the new Kinect for Windows sensor is available in more than 40 countries. The current version of SDK also has the support of speech recognition for multiple languages.   Downloading the SDK and the Developer Toolkit The Kinect SDK and the Developer Toolkit are available for free and can be downloaded from http://www.microsoft.com/en-us/kinectforwindows/. The installer will automatically install the 64- or 32-bit version of SDK depending on your operating system. The Kinect for Windows Developer Toolkit is an additional installer that includes samples, tools, and other development extensions. The following diagram shows these components:   The main reason behind keeping SDK and Developer Toolkit in two different installers is to update the Developer Toolkit independently from the SDK. This will help to keep the toolkit and samples updated and distributed to the community without changing or updating the actual SDK version. The version of Kinect for Windows SDK and that for the Kinect for Windows Developer Toolkit might not be the same. Installing Kinect for Windows SDK Before running the installation, make sure of the following: You have uninstalled all the previous versions of Kinect for Windows SDK The Kinect sensor is not plugged into the USB port on the computer There are no Visual Studio instances currently running Start the installer, which will display the start screen as End User License Agreement. You need to read and accept this agreement to proceed with the installation. The following screenshot shows the license agreement:   Accept the agreement by selecting the checkbox and clicking on the Install option, which will do the rest of the job automatically. Before the installation, your computer may pop out the User Access Control (UAC) dialog, to get a confirmation from you that you are authorizing the installer to make changes in your computer. Once the installation is over, you will be notified along with an option for installing the Developer Toolkit, as shown in the next screenshot: Is it mandatory to uninstall the previous version of SDK before we install the new one? The upgrade will happen without any hassles if your current version is a non-Beta version. As a standard procedure, it is always recommended to uninstall the older SDK prior to installing the newer one, if your current version is a Beta version. 
Installing the Developer Toolkit If you didn't downloaded the Developer Toolkit installer earlier, you can click on the Download the Developer Toolkit option of the SDK setup wizard (refer to the previous screenshot); this will first download and then install the Developer Toolkit setup. If you have already downloaded the setup, you can close the current window and execute the standalone Toolkit installer. The installation process for Developer Toolkit is similar to the process for the SDK installer. Components installed by the SDK and the Developer Toolkit The Kinect for Windows SDK and Kinect for Windows Developer Toolkit install the drivers, assemblies, samples, and the documentation. To check which components are installed, you can navigate to the Install and Uninstall Programs section of Control Panel and search for Kinect. The following screenshot shows the list of components that are installed with the SDK and Toolkit installer: The default location for the SDK and Toolkit installation is %ProgramFiles%/Microsoft SDKs/Kinect. Kinect management service The Kinect for Windows SDK also installs Kinect Management, which is a Windows service that runs in the background while your PC communicates with the device. This service is responsible for the following tasks: Listening to the Kinect device for any status changes Interacting with the COM Server for any native support Managing the Kinect audio components by interacting with Windows audio drivers You can view this service by launching Services by navigating to Control Panel |Administrative Tools, or by typing Services.msc in the Run command. Is it necessary to install the Kinect SDK to end users' systems? The answer is No. When you install the Kinect for Windows SDK, it creates a Redist directory containing an installer that is designed to be deployed with Kinect applications, which install the runtime and drivers. This is the path where you can find the setup file after the SDK is installed: %ProgramFiles%/Microsoft SDKsKinectv1.6Redist KinectRuntime-v1.6-Setup.exe This can be used with your application deployment package, which will install only the runtime and necessary drivers. Connecting the sensor with the system Now that we have installed the SDK, we can plug the Kinect device into your PC. The very first time you plug the device into your system, you will notice the LED indicator of the Kinect sensor turning solid red and the system will start installing the drivers automatically. The default location of the driver is %Program Files%Microsoft Kinect DriversDrivers. The drivers will be loaded only after the installation of SDK is complete and it's a one-time job. This process also checks for the latest Windows updates on USB Drivers, so it is good to be connected to the Internet if you don't have the latest updates of Windows. The check marks in the dialog box shown in the next screenshot indicate successful driver software installation: When the drivers have finished loading and are loaded properly, the LED light on your Kinect sensor will turn solid green. This indicates that the device is functioning properly and can communicate with the PC as well. Verifying the installed drivers This is typically a troubleshooting procedure in case you encounter any problems. Also, the verification procedure will help you to understand how the device drivers are installed within your system. In order to verify that the drivers are installed correctly, open Control Panel and select Device Manager; then look for the Kinect for Windows node. 
You will find the Kinect for Windows Device option listed as shown in the next screenshot: Not able to view all the device components At some point of time, it may happen that you are able to view only the Kinect for Windows Device node (refer to the following screenshot). At this point of time, it looks as if the device is ready. However, a careful examination reveals a small hitch. Let's see whether you can figure it out or not! The Kinect device LED is on and Device Manager has also detected the device, which is absolutely fine, but we are still missing something here. The device is connected to the PC using the USB port, and the system prompt shows the device installed successfully—then where is the problem? The default USB port that is plugged into the system doesn't have the power capabilities required by the camera, sensor, and motor. At this point, if you plug it into an external power supplier and turn the power on, you will find all the driver nodes in Device Manager loaded automatically. This is one of the most common mistakes made by the developers. While working with Kinect SDK, make sure your Kinect device is connected with the computer using the USB port, and the external power adapter is plugged in and turned on. The next picture shows the Kinect sensor with USB connector and power adapter, and how they have been used: With the aid of the external power supply, the system will start searching for Windows updates for the USB components. Once everything is installed properly, the system will prompt you as shown in the next screenshot: All the check marks in the screenshot indicate that the corresponding components are ready to be used and the same components are also reflected in Device Manager. The messages prompting for the loading of drivers, and the prompts for the installation displaying during the loading of drivers, may vary depending upon the operating system you are using. You might also not receive any of them if the drivers are being loaded in the background. Detecting the loaded drivers in Device Manager Navigate to Control Panel | Device Manager, look for the Kinect for Windows node, and you will find the list of components detected. Refer to the next screenshot: The Kinect for Windows Audio Array Control option indicates the driver for the Kinect audio system whereas the Kinect for Windows Camera option controls the camera sensor. The Kinect for Windows Security Control option is used to check whether the device being used is a genuine Microsoft Kinect for Windows or not. In addition to appearing under the Kinect for Windows node, the Kinect for Windows USB Audio option should also appear under the Sound, Video and Game Controllers node, as shown in the next screenshot: Once the Kinect sensor is connected, you can identify the Kinect microphone like any other microphone connected to your PC in the Audio Device Manager section. Look at the next screenshot:  
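Once the drivers and the audio components show up correctly in Device Manager, you can also confirm from code that the SDK sees the sensor. The following is a minimal console sketch, not part of the original walkthrough, that assumes a C# project referencing Microsoft.Kinect.dll from the SDK; the class name and the console messages are illustrative only.

using System;
using System.Linq;
using Microsoft.Kinect;

class SensorCheck
{
    static void Main()
    {
        // Look through the sensors the SDK can see and pick the first
        // one whose drivers report it as fully connected
        KinectSensor sensor = KinectSensor.KinectSensors
            .FirstOrDefault(s => s.Status == KinectStatus.Connected);

        if (sensor == null)
        {
            // Typically means the drivers are missing or the external
            // power supply is not plugged in, as discussed above
            Console.WriteLine("No connected Kinect sensor was found.");
            return;
        }

        sensor.Start();
        Console.WriteLine("Sensor started. Status: {0}", sensor.Status);

        Console.WriteLine("Press Enter to stop the sensor.");
        Console.ReadLine();
        sensor.Stop();
    }
}

If the sensor is plugged into the USB port but the external power adapter is switched off, the sensor will usually report a status other than Connected, which matches the Device Manager behaviour described above.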
Applying LINQ to Entities to a WCF Service

Packt
05 Feb 2013
15 min read
(For more resources related to this topic, see here.) Creating the LINQNorthwind solution The first thing we need to do is create a test solution. In this article, we will start from the data access layer. Perform the following steps: Start Visual Studio. Create a new class library project LINQNorthwindDAL with solution name LINQNorthwind (make sure the Create directory for the solution is checked to specify the solution name). Delete the Class1.cs file. Add a new class ProductDAO to the project. Change the new class ProductDAO to be public. Now you should have a new solution with the empty data access layer class. Next, we will add a model to this layer and create the business logic layer and the service interface layer. Modeling the Northwind database In the previous section, we created the LINQNorthwind solution. Next, we will apply LINQ to Entities to this new solution. For the data access layer, we will use LINQ to Entities instead of the raw ADO.NET data adapters. As you will see in the next section, we will use one LINQ statement to retrieve product information from the database and the update LINQ statements will handle the concurrency control for us easily and reliably. As you may recall, to use LINQ to Entities in the data access layer of our WCF service, we first need to add an entity data model to the project. In the Solution Explorer, right-click on the project item LINQNorthwindDAL, select menu options Add | New Item..., and then choose Visual C# Items | ADO.NET Entity Data Model as Template and enter Northwind.edmx as the name. Select Generate from database, choose the existing Northwind connection, and add the Products table to the model. Click on the Finish button to add the model to the project. The new column RowVersion should be in the Product entity . If it is not there, add it to the database table with a type of Timestamp and refresh the entity data model from the database In the EMD designer, select the RowVersion property of the Product entity and change its Concurrency Mode from None to Fixed. Note that its StoreGeneratedPattern should remain as Computed. This will generate a file called Northwind.Context.cs, which contains the Db context for the Northwind database. Another file called Product.cs is also generated, which contains the Product entity class. You need to save the data model in order to see these two files in the Solution Explorer. In Visual Studio Solution Explorer, the Northwind.Context.cs file is under the template file Northwind.Context.tt and Product.cs is under Northwind.tt. However, in Windows Explorer, they are two separate files from the template files. Creating the business domain object project During Implementing a WCF Service in the Real World, we create a business domain object (BDO) project to hold the intermediate data between the data access objects and the service interface objects. In this section, we will also add such a project to the solution for the same purpose. In the Solution Explorer, right-click on the LINQNorthwind solution. Select Add | New Project... to add a new class library project named LINQNorthwindBDO. Delete the Class1.cs file. Add a new class file ProductBDO.cs. Change the new class ProductBDO to be public. 
Add the following properties to this class: ProductID ProductName QuantityPerUnit UnitPrice Discontinued UnitsInStock UnitsOnOrder ReorderLevel RowVersion The following is the code list of the ProductBDO class: using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Threading.Tasks; namespace LINQNorthwindBDO { public class ProductBDO { public int ProductID { get; set; } public string ProductName { get; set; } public string QuantityPerUnit { get; set; } public decimal UnitPrice { get; set; } public int UnitsInStock { get; set; } public int ReorderLevel { get; set; } public int UnitsOnOrder { get; set; } public bool Discontinued { get; set; } public byte[] RowVersion { get; set; } } } As noted earlier, in this article we will use BDO to hold the intermediate data between the data access objects and the data contract objects. Besides this approach, there are some other ways to pass data back and forth between the data access layer and the service interface layer, and two of them are listed as follows: The first one is to expose the Entity Framework context objects from the data access layer up to the service interface layer. In this way, both the service interface layer and the business logic layer—we will implement them soon in following sections—can interact directly with the Entity Framework. This approach is not recommended as it goes against the best practice of service layering. Another approach is to use self-tracking entities. Self-tracking entities are entities that know how to do their own change tracking regardless of which tier those changes are made on. You can expose self-tracking entities from the data access layer to the business logic layer, then to the service interface layer, and even share the entities with the clients. Because self-tracking entities are independent of entity context, you don't need to expose the entity context objects. The problem of this approach is, you have to share the binary files with all the clients, thus it is the least interoperable approach for a WCF service. Now this approach is not recommended by Microsoft, so in this book we will not discuss it. Using LINQ to Entities in the data access layer Next we will modify the data access layer to use LINQ to Entities to retrieve and update products. We will first create GetProduct to retrieve a product from the database and then create UpdateProduct to update a product in the database. Adding a reference to the BDO project Now we have the BDO project in the solution, we need to modify the data access layer project to reference it. In the Solution Explorer, right-click on the LINQNorthwindDAL project. Select Add Reference.... Select the LINQNorthwindBDO project from the Projects tab under Solution. Click on the OK button to add the reference to the project. Creating GetProduct in the data access layer We can now create the GetProduct method in the data access layer class ProductDAO, to use LINQ to Entities to retrieve a product from the database. We will first create an entity DbContext object and then use LINQ to Entities to get the product from the DbContext object. The product we get from DbContext will be a conceptual entity model object. However, we don't want to pass this product object back to the upper-level layer because we don't want to tightly couple the business logic layer with the data access layer. 
Therefore, we will convert this entity model product object to a ProductBDO object and then pass this ProductBDO object back to the upper-level layers. To create the new method, first add the following using statement to the ProductBDO class: using LINQNorthwindBDO; Then add the following method to the ProductBDO class: public ProductBDO GetProduct(int id) { ProductBDO productBDO = null; using (var NWEntities = new NorthwindEntities()) { Product product = (from p in NWEntities.Products where p.ProductID == id select p).FirstOrDefault(); if (product != null) productBDO = new ProductBDO() { ProductID = product.ProductID, ProductName = product.ProductName, QuantityPerUnit = product.QuantityPerUnit, UnitPrice = (decimal)product.UnitPrice, UnitsInStock = (int)product.UnitsInStock, ReorderLevel = (int)product.ReorderLevel, UnitsOnOrder = (int)product.UnitsOnOrder, Discontinued = product.Discontinued, RowVersion = product.RowVersion }; } return productBDO; } Within the GetProduct method, we had to create an ADO.NET connection, create an ADO. NET command object with that connection, specify the command text, connect to the Northwind database, and send the SQL statement to the database for execution. After the result was returned from the database, we had to loop through the DataReader and cast the columns to our entity object one by one. With LINQ to Entities, we only construct one LINQ to Entities statement and everything else is handled by LINQ to Entities. Not only do we need to write less code, but now the statement is also strongly typed. We won't have a runtime error such as invalid query syntax or invalid column name. Also, a SQL Injection attack is no longer an issue, as LINQ to Entities will also take care of this when translating LINQ expressions to the underlying SQL statements. Creating UpdateProduct in the data access layer In the previous section, we created the GetProduct method in the data access layer, using LINQ to Entities instead of ADO.NET. Now in this section, we will create the UpdateProduct method, using LINQ to Entities instead of ADO.NET. Let's create the UpdateProduct method in the data access layer class ProductBDO, as follows: public bool UpdateProduct( ref ProductBDO productBDO, ref string message) { message = "product updated successfully"; bool ret = true; using (var NWEntities = new NorthwindEntities()) { var productID = productBDO.ProductID; Product productInDB = (from p in NWEntities.Products where p.ProductID == productID select p).FirstOrDefault(); // check product if (productInDB == null) { throw new Exception("No product with ID " + productBDO.ProductID); } NWEntities.Products.Remove(productInDB); // update product productInDB.ProductName = productBDO.ProductName; productInDB.QuantityPerUnit = productBDO.QuantityPerUnit; productInDB.UnitPrice = productBDO.UnitPrice; productInDB.Discontinued = productBDO.Discontinued; productInDB.RowVersion = productBDO.RowVersion; NWEntities.Products.Attach(productInDB); NWEntities.Entry(productInDB).State = System.Data.EntityState.Modified; int num = NWEntities.SaveChanges(); productBDO.RowVersion = productInDB.RowVersion; if (num != 1) { ret = false; message = "no product is updated"; } } return ret; } Within this method, we first get the product from database, making sure the product ID is a valid value in the database. Then, we apply the changes from the passed-in object to the object we have just retrieved from the database, and submit the changes back to the database. 
Let's go through a few notes about this method: You have to save productID in a new variable and then use it in the LINQ query. Otherwise, you will get an error saying Cannot use ref or out parameter 'productBDO' inside an anonymous method, lambda expression, or query expression. If Remove and Attach are not called, RowVersion from database (not from the client) will be used when submitting to database, even though you have updated its value before submitting to the database. An update will always succeed, but without concurrency control. If Remove is not called and you call the Attach method, you will get an error saying The object cannot be attached because it is already in the object context. If the object state is not set to be Modified, Entity Framework will not honor your changes to the entity object and you will not be able to save any change to the database. Creating the business logic layer Now let's create the business logic layer. Right click on the solution item and select Add | New Project.... Add a class library project with the name LINQNorthwindLogic. Add a project reference to LINQNorthwindDAL and LINQNorthwindBDO to this new project. Delete the Class1.cs file. Add a new class file ProductLogic.cs. Change the new class ProductLogic to be public. Add the following two using statements to the ProductLogic.cs class file: using LINQNorthwindDAL; using LINQNorthwindBDO; Add the following class member variable to the ProductLogic class: ProductDAO productDAO = new ProductDAO(); Add the following new method GetProduct to the ProductLogic class: public ProductBDO GetProduct(int id) { return productDAO.GetProduct(id); } Add the following new method UpdateProduct to the ProductLogic class: public bool UpdateProduct( ref ProductBDO productBDO, ref string message) { var productInDB = GetProduct(productBDO.ProductID); // invalid product to update if (productInDB == null) { message = "cannot get product for this ID"; return false; } // a product cannot be discontinued // if there are non-fulfilled orders if (productBDO.Discontinued == true && productInDB.UnitsOnOrder > 0) { message = "cannot discontinue this product"; return false; } else { return productDAO.UpdateProduct(ref productBDO, ref message); } } Build the solution. We now have only one more step to go, that is, adding the service interface layer. Creating the service interface layer The last step is to create the service interface layer. Right-click on the solution item and select Add | New Project.... Add a WCF service library project with the name of LINQNorthwindService. Add a project reference to LINQNorthwindLogic and LINQNorthwindBDO to this new service interface project. Change the service interface file IService1.cs, as follows: Change its filename from IService1.cs to IProductService.cs. Change the interface name from IService1 to IProductService, if it is not done for you. 
   - Remove the original two service operations and add the following two new operations:

     [OperationContract]
     [FaultContract(typeof(ProductFault))]
     Product GetProduct(int id);

     [OperationContract]
     [FaultContract(typeof(ProductFault))]
     bool UpdateProduct(ref Product product, ref string message);

   - Remove the original CompositeType and add the following data contract classes:

     [DataContract]
     public class Product
     {
         [DataMember]
         public int ProductID { get; set; }
         [DataMember]
         public string ProductName { get; set; }
         [DataMember]
         public string QuantityPerUnit { get; set; }
         [DataMember]
         public decimal UnitPrice { get; set; }
         [DataMember]
         public bool Discontinued { get; set; }
         [DataMember]
         public byte[] RowVersion { get; set; }
     }

     [DataContract]
     public class ProductFault
     {
         public ProductFault(string msg)
         {
             FaultMessage = msg;
         }
         [DataMember]
         public string FaultMessage;
     }

   The following is the content of the IProductService.cs file:

   using System;
   using System.Collections.Generic;
   using System.Linq;
   using System.Runtime.Serialization;
   using System.ServiceModel;
   using System.Text;

   namespace LINQNorthwindService
   {
       [ServiceContract]
       public interface IProductService
       {
           [OperationContract]
           [FaultContract(typeof(ProductFault))]
           Product GetProduct(int id);

           [OperationContract]
           [FaultContract(typeof(ProductFault))]
           bool UpdateProduct(ref Product product, ref string message);
       }

       [DataContract]
       public class Product
       {
           [DataMember]
           public int ProductID { get; set; }
           [DataMember]
           public string ProductName { get; set; }
           [DataMember]
           public string QuantityPerUnit { get; set; }
           [DataMember]
           public decimal UnitPrice { get; set; }
           [DataMember]
           public bool Discontinued { get; set; }
           [DataMember]
           public byte[] RowVersion { get; set; }
       }

       [DataContract]
       public class ProductFault
       {
           public ProductFault(string msg)
           {
               FaultMessage = msg;
           }
           [DataMember]
           public string FaultMessage;
       }
   }

4. Change the service implementation file Service1.cs, as follows:
   - Change its filename from Service1.cs to ProductService.cs.
   - Change its class name from Service1 to ProductService, if it is not done for you.
   - Add the following two using statements to the ProductService.cs file:

     using LINQNorthwindLogic;
     using LINQNorthwindBDO;

   - Add the following class member variable:

     ProductLogic productLogic = new ProductLogic();

   - Remove the original two methods and add the following two methods:

     public Product GetProduct(int id)
     {
         ProductBDO productBDO = null;
         try
         {
             productBDO = productLogic.GetProduct(id);
         }
         catch (Exception e)
         {
             string msg = e.Message;
             string reason = "GetProduct Exception";
             throw new FaultException<ProductFault>
                 (new ProductFault(msg), reason);
         }

         if (productBDO == null)
         {
             string msg = string.Format("No product found for id {0}", id);
             string reason = "GetProduct Empty Product";
             throw new FaultException<ProductFault>
                 (new ProductFault(msg), reason);
         }

         Product product = new Product();
         TranslateProductBDOToProductDTO(productBDO, product);
         return product;
     }

     public bool UpdateProduct(ref Product product, ref string message)
     {
         bool result = true;

         // first check to see if it is a valid price
         if (product.UnitPrice <= 0)
         {
             message = "Price cannot be <= 0";
             result = false;
         }
         // ProductName can't be empty
         else if (string.IsNullOrEmpty(product.ProductName))
         {
             message = "Product name cannot be empty";
             result = false;
         }
         // QuantityPerUnit can't be empty
         else if (string.IsNullOrEmpty(product.QuantityPerUnit))
         {
             message = "Quantity cannot be empty";
             result = false;
         }
         else
         {
             try
             {
                 var productBDO = new ProductBDO();
                 TranslateProductDTOToProductBDO(product, productBDO);
                 result = productLogic.UpdateProduct(
                     ref productBDO, ref message);
                 product.RowVersion = productBDO.RowVersion;
             }
             catch (Exception e)
             {
                 string msg = e.Message;
                 throw new FaultException<ProductFault>
                     (new ProductFault(msg), msg);
             }
         }
         return result;
     }

   - Because we have to convert between the data contract objects and the business domain objects, we need to add the following two methods:

     private void TranslateProductBDOToProductDTO(
         ProductBDO productBDO,
         Product product)
     {
         product.ProductID = productBDO.ProductID;
         product.ProductName = productBDO.ProductName;
         product.QuantityPerUnit = productBDO.QuantityPerUnit;
         product.UnitPrice = productBDO.UnitPrice;
         product.Discontinued = productBDO.Discontinued;
         product.RowVersion = productBDO.RowVersion;
     }

     private void TranslateProductDTOToProductBDO(
         Product product,
         ProductBDO productBDO)
     {
         productBDO.ProductID = product.ProductID;
         productBDO.ProductName = product.ProductName;
         productBDO.QuantityPerUnit = product.QuantityPerUnit;
         productBDO.UnitPrice = product.UnitPrice;
         productBDO.Discontinued = product.Discontinued;
         productBDO.RowVersion = product.RowVersion;
     }

5. Change the config file App.config, as follows:
   - Change Service1 to ProductService.
   - Remove the word Design_Time_Addresses.
   - Change the port to 8080. Now, BaseAddress should be as follows:

     http://localhost:8080/LINQNorthwindService/ProductService/

   - Copy the connection string from the App.config file in the LINQNorthwindDAL project to the App.config file of this service project:

     <connectionStrings>
       <add name="NorthwindEntities"
            connectionString="metadata=res://*/Northwind.csdl|res://*/Northwind.ssdl|res://*/Northwind.msl;provider=System.Data.SqlClient;provider connection string=&quot;data source=localhost;initial catalog=Northwind;integrated security=True;MultipleActiveResultSets=True;App=EntityFramework&quot;"
            providerName="System.Data.EntityClient" />
     </connectionStrings>

You should leave the original connection string untouched in the App.config file of the data access layer project. That connection string is used by the Entity Model Designer at design time.
It is not used at all at runtime, but if you remove it, whenever you open the Entity Model Designer in Visual Studio you will be prompted to specify a connection to your database.

Now build the solution and there should be no errors.

Testing the service with the WCF Test Client

Now we can run the program to test the GetProduct and UpdateProduct operations with the WCF Test Client. You may need to run Visual Studio as an administrator to start the WCF Test Client.

First set LINQNorthwindService as the startup project and then press Ctrl + F5 to start the WCF Test Client. Double-click on the GetProduct operation, enter a valid product ID, and click on the Invoke button. The detailed product information should be retrieved and displayed on the screen, as shown in the following screenshot:

Now double-click on the UpdateProduct operation, enter a valid product ID, specify a name, price, and quantity per unit, and then click on Invoke. This time you will get an exception, as shown in the following screenshot:

From this screenshot we can see that the update failed. The error details, shown in HTML View in the preceding screenshot, tell us that it is a concurrency error. This is because the WCF Test Client cannot supply a row version, as it is not a simple datatype parameter, so we did not pass in the original RowVersion for the object to be updated. When updating the object in the database, Entity Framework therefore assumes this product has been updated by some other user.
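A real client, unlike the WCF Test Client, can round-trip the RowVersion and so take part in the optimistic concurrency check. The following console client is a minimal sketch, assuming a service reference has been added for the endpoint http://localhost:8080/LINQNorthwindService/ProductService/; the reference namespace ProductServiceRef, the generated proxy class name ProductServiceClient, and the product ID 23 are assumptions for illustration and are not part of the original walkthrough.

using System;
using ProductServiceRef; // assumed service reference namespace

class TestConcurrency
{
    static void Main()
    {
        // Assumed proxy class generated by Add Service Reference.
        var client = new ProductServiceClient();

        // Retrieve the product first so that we have its current RowVersion.
        Product product = client.GetProduct(23); // assumed existing product ID
        product.UnitPrice = product.UnitPrice + 1m;

        // Send the product back with the original RowVersion, so Entity
        // Framework can detect whether another user changed the row meanwhile.
        string message = "";
        bool result = client.UpdateProduct(ref product, ref message);
        Console.WriteLine("Update result: {0}, message: {1}", result, message);

        client.Close();
    }
}

With this flow, the update should succeed unless another client modified the same product between the GetProduct and UpdateProduct calls; if you run two copies of this client against the same product, the second update should fail with the same kind of concurrency fault you saw in the WCF Test Client.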