How-To Tutorials - Application Development

357 Articles

Customization using ADF Meta Data Services

Packt
15 Jun 2011
8 min read
Oracle ADF Enterprise Application Development—Made Simple
Successfully plan, develop, test and deploy enterprise applications with Oracle ADF

Why customization?

The reason ADF has customization features built in is that Oracle Fusion Applications needs them. Oracle Fusion Applications is a suite of programs capable of handling every aspect of a large organization—personnel, finance, project management, manufacturing, logistics, and much more. Because organizations are different, Oracle has to offer a way for each customer organization to fit Oracle Fusion Applications to its requirements.

This customization functionality can also be very useful for organizations that do not use Oracle Fusion Applications. If you have two screens that work with the same data, but one of the screens must show more fields than the other, you can create one screen with all the fields and use customization to create another version of the same screen with fewer fields for other users. For example, the destination management application might have a data entry screen showing all details of a task to a dispatcher, but only the relevant details to an airport transfer guide.

Companies such as DMC Solutions that produce software for sale realize additional benefit from the customization features in ADF. DMC Solutions can build a base application, sell it to different customers, and customize each installation of the application for that customer without changing the base application.

How does an ADF customization work?

More and more Oracle products use something called Meta Data Services to store metadata. Metadata is data that describes other pieces of information—where it came from, what it means, or how it is intended to be used. An image captured by a digital camera might include metadata about where and when the picture was taken, which camera settings were used, and so on. In the case of an ADF application, the metadata describes how the application is intended to be used.

There are three kinds of customizations in ADF:

- Seeded customizations: Customizations defined in advance (before the user runs the application) by customization developers.
- User customizations (sometimes called personalizations): Changes to aspects of the user interface by application end users. The ADF framework offers a few user customization features, but you need additional software such as Oracle WebCenter for most user customizations. User customizations are outside the scope of this article.
- Design time at runtime: Advanced customization of the application by application administrators and/or properly authorized end users. This requires that application developers have prepared the possible customizations as part of application development—it is complicated to program using only ADF, but Oracle WebCenter provides advanced components that make this easier. This is also outside the scope of this article.

Your customization metadata is stored in either files or a database repository. If you are only planning to use seeded customizations, a file-based repository is fine. However, if you plan to allow user customizations or design time at runtime, you should set up your production server to store customizations in a metadata database. Refer to the Fusion Middleware Administrator's Guide for information about setting up a metadata database.

Applying the customization layers

When an ADF application is customized, the ADF framework applies one or more customization layers on top of the base application.
Each layer has a value, and customizations are assigned to a specific customization layer and value. The concept of multiple layers makes it possible to apply, for example:

- Industry customization (customizing the application for, say, the travel industry: industry=travel)
- Organization customization (customizing the application for a specific travel company: org=xyztravel)
- Site customization (customizing the application for the Berlin office)
- Role-based customization (customizing the application for casual, normal, and advanced users)

The XDM application that DMC Solutions is building could be customized in one way for ABC Travel and in another way for XYZ Travel, and XYZ Travel might decide to further customize the application for different types of users. You can have as many layers as you need—Oracle Fusion Applications is reported to use 12 layers, but your applications are not likely to be that complex.

For each customization layer, the developer of the base application must provide a customization class that will be executed at runtime, returning a value for each customization layer. The ADF framework will then apply the customizations that the customization developer has specified for that layer/value combination. This means that the same application can look many different ways, depending on the values returned by the customization classes and the customizations registered. For example:

- org=qrstravel, role=any: Base application, because there are no customizations defined for QRS Travel.
- org=abctravel, role=any: The application customized for ABC Travel; because there are no role layer customizations for ABC Travel, the value of the role layer does not change the application.
- org=xyztravel, role=normal: The application customized for XYZ Travel and further customized for normal users in XYZ Travel.
- org=xyztravel, role=superuser: The application customized for XYZ Travel and further customized for super users in XYZ Travel.

Making an application customizable

To make an application customizable, you need to do three things:

1. Develop a customization class for each layer of customization.
2. Enable seeded customization in the application.
3. Link the customization classes to the application.

The customization developer, who will be developing the customizations, will additionally have to set up JDeveloper correctly so that all customization levels can be accessed. This setup is described later in the article.

Developing the customization classes

For each layer of customization, you need to develop a customization class with a specific format—technically, it has to extend the Oracle-supplied abstract class oracle.mds.cust.CustomizationClass. A customization class has a name (returned by the getName() method) and a value (returned by the getValue() method). At runtime, the ADF framework will execute the customization classes for all layers to determine the customization value at each level. Additionally, the customization class has to return a short unique prefix to use for all customized items, and a cache hint telling ADF whether this is a static or dynamic customization.

Building the classes

Your customization classes should go in your Common Code workspace. A customization class is a normal Java class, that is, it is created with File | New | General | Java Class. In the Create Java Class dialog, give your class a name (OrgLayerCC) and place it into a customization package (for example, com.dmcsol.xdm.customization).
Choose to extend oracle.mds.cust.CustomizationClass and check the Implement Abstract Methods checkbox. Create a similar class called RoleLayerCC.

Implementing the methods

Because you asked JDeveloper to implement the abstract methods, your classes already contain three methods:

- getCacheHint()
- getName()
- getValue(RestrictedSession, MetadataObject)

The getCacheHint() method must return an oracle.mds.cust.CacheHint constant that tells ADF whether the value of this layer is static (common for all users) or dynamic (depending on the user). The normal values here are ALL_USERS for static customizations or MULTI_USER for customizations that apply to multiple users. In the XDM application, you will use:

- ALL_USERS for OrgLayerCC, because this customization layer will apply to all users in the organization
- MULTI_USER for RoleLayerCC, because the role-based customization will apply to multiple users, but not necessarily to all

Refer to the chapter on customization with MDS in the Fusion Developer's Guide for Oracle Application Development Framework for information on other possible values.

The getName() method simply returns the name of the customization layer.

The getValue() method must return an array of String objects. It will normally make most sense to return just one value—the application is running for exactly one organization, and you are either a normal user or a super user. For advanced scenarios, it is possible to return multiple values; in that case, multiple customizations will be applied at the same layer.

Each customization that a customization developer defines will be tied to a specific layer and value—there might be a customization that happens when org has the value xyztravel. For the OrgLayerCC class, the value is static and is defined when DMC Solutions installs the application for XYZ Travel—for example, in a property file. For the RoleLayerCC class, the value is dynamic, depending on the current user, and can be retrieved from the ADF security context. The RoleLayerCC class could look like the following:

```java
package com.dmcsol.xdm.customization;

import ...; // MDS and ADF security imports elided in the original

public class RoleLayerCC extends CustomizationClass {

    public CacheHint getCacheHint() {
        return CacheHint.MULTI_USER;
    }

    public String getName() {
        return "role";
    }

    public String[] getValue(RestrictedSession restrictedSession,
            MetadataObject metadataObject) {
        String[] roleValue = new String[1];
        SecurityContext sec = ADFContext.getCurrent().getSecurityContext();
        if (sec.isUserInRole("superuser")) {
            roleValue[0] = "superuser";
        } else {
            roleValue[0] = "normal";
        }
        return roleValue;
    }
}
```

The getCacheHint() method returns MULTI_USER because this is a dynamic customization—it will return different values for different users. The getName() method simply returns the name of the layer. The getValue() method uses oracle.adf.share.security.SecurityContext to look up whether the user has the super user role, and returns the value superuser or normal.

Deploying the customization classes

Because you place your customization classes in the Common Code project, you need to deploy the Common Code project to an ADF library and have the build/configuration manager copy it to your common library directory.
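The article does not show the static organization class. As a rough sketch, OrgLayerCC might read its value from a property file along the following lines. The file name xdm.properties, the property key org, the fallback value, and the oracle.mds.core import paths are illustrative assumptions, not taken from the book:

```java
package com.dmcsol.xdm.customization;

import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

import oracle.mds.core.MetadataObject;    // assumed import path
import oracle.mds.core.RestrictedSession; // assumed import path
import oracle.mds.cust.CacheHint;
import oracle.mds.cust.CustomizationClass;

public class OrgLayerCC extends CustomizationClass {

    public CacheHint getCacheHint() {
        // Static customization: the same value applies to all users.
        return CacheHint.ALL_USERS;
    }

    public String getName() {
        return "org";
    }

    public String[] getValue(RestrictedSession restrictedSession,
            MetadataObject metadataObject) {
        Properties props = new Properties();
        FileInputStream in = null;
        try {
            // Hypothetical property file written at installation time.
            in = new FileInputStream("xdm.properties");
            props.load(in);
        } catch (IOException e) {
            // No property file found; fall back to the default below.
        } finally {
            if (in != null) {
                try { in.close(); } catch (IOException ignore) { }
            }
        }
        // "base" is an arbitrary default with no customizations attached,
        // so the base application is shown if nothing is configured.
        return new String[] { props.getProperty("org", "base") };
    }
}
```

At installation time for XYZ Travel, the file would then contain org=xyztravel, matching the layer values used in the table above.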


The ADF Proof of Concept

Packt
10 Jun 2011
12 min read
Oracle ADF Enterprise Application Development—Made Simple
Successfully plan, develop, test and deploy enterprise applications with Oracle ADF

You can compare the situation at the start of a project to standing in front of a mountain with the task of excavating a tunnel. The mountainsides are almost vertical, and there is no way for you to climb the mountain to figure out how wide it is. You can take two approaches:

- You can start blasting and drilling in the full width of the tunnel you need
- You can start drilling a very small pilot tunnel all through the mountain, and then expand it to full width later

It's probably more efficient to build in the full width of the tunnel straight from the beginning, but this approach has some serious disadvantages as well. You don't know how wide the mountain is, so you can't tell how long it will take to build the tunnel. In addition, you don't know what kind of surprises might lurk in the mountain—porous rock, aquifers, or any number of other obstacles to your tunnel building. That's why you should build the pilot tunnel first—so you know the size of the task and have an idea of the obstacles you might meet on the way. The Proof of Concept is that pilot tunnel.

The very brief ADF primer

Since you have decided to evaluate ADF for your enterprise application, you probably already have a pretty good idea of its architecture and capabilities. Therefore, this section will only give a very brief overview of ADF—there are many whitepapers, tutorials, and demonstrations available at the Oracle Technology Network website. Your starting point for ADF information is http://otn.oracle.com/developer-tools/jdev/overview.

Enterprise architecture

A modern enterprise application typically consists of a frontend, user-facing part and a backend business service part.

Frontend

The frontend part is constructed from several layers. In a web-based application, these are normally arranged in the common Model-View-Controller (MVC) pattern. The View layer interacts with the user, displaying data as well as receiving updates and user actions. The Controller layer is in charge of interpreting user actions and deciding which screens are presented to the user in which order. And the Model layer represents the backend business services to the View and Controller, hiding the complexity of storing and retrieving data. This architecture implements a clean separation of duties—the page doesn't have to worry about where to go next, because that is the task of the controller. And the controller doesn't have to worry about how to store data in the data service, because that is the task of the model.

Other frontends

An enterprise application could also have a desktop application frontend, and might have additional frontends for mobile users, or even use existing desktop applications like Microsoft Excel to interact with data. In the ADF technology stack, all of these alternative frontends interact with the same model, making it easy to develop multiple frontend applications against the same data services.

Backend

The backend part consists of a business service layer that implements the business logic and provides some way of accessing the underlying data services. Business services can be implemented as API code written in Java, PL/SQL, or other languages, as web services, or using a business service framework such as ADF Business Components. Under the business services layer there will be a data service layer actually storing persistent data.
Typically, this is based on relational tables, but it could also be XML files in a file system or data in other systems accessed through an interface.

ADF architecture

There are many different ways of building applications with Oracle Application Development Framework, but Oracle has chosen a modern SOA-based architecture for Oracle Fusion Applications. This brand new product has been built from the ground up as the successor to Oracle E-Business Suite, Siebel, PeopleSoft, J.D. Edwards, and many other applications Oracle has acquired over the last couple of years. If it is good enough for Oracle Fusion Applications, arguably the biggest enterprise application development effort ever undertaken by mankind, it is probably good enough for you, too.

Oracle Fusion Applications uses the following parts of the ADF framework:

- ADF Faces Rich Client (ADFv), a very rich set of user interface components implementing advanced functionality in a web application.
- ADF Controller (ADFc), implementing the features of a normal JSF controller, but extended with the possibility to define modular, reusable page flows. ADFc also allows you to declare transaction boundaries so one database transaction can span many pages.
- ADF binding layer (ADFm), a standard defining a common backend model that the user interface can communicate with.
- ADF Business Components (ADFbc), a highly productive, declarative way of defining business services based on relational tables.

There are many ways of getting from A to B—this article is about travelling the straight and well-paved road Oracle has built for Fusion Applications. However, other routes might be appropriate in some situations: You could build the user interface as a desktop application using ADF Swing components, you could use ADF for a mobile device, or you could use ADF Desktop Integration to access your data directly from within Microsoft Excel. Your business services could be based on web services, EJBs, or many other technologies, using the ADF binding layer to connect to the user interface.

Entity objects and associations

Entity objects (EOs) take care of object-relational mapping: making your relational tables available to the application as Java objects. Entity objects are the base that view objects are built on, and all data modifications go through the entity object. You will normally have one entity object for every database table or database view your application uses, and this object is responsible for producing the correct SQL statements to insert, update, or delete in the underlying relational tables. The entity objects help you build scalable and well-performing applications by intelligently caching records on the application server in order to minimize the load the application places on the database.

Just as entity objects are the middle-tier reflection of database tables and database views, associations are the reflection of foreign key relationships between tables. An association represents a connection between two entity objects and allows ADF to relate data in one entity object with data in another. JDeveloper is normally able to create these automatically by simply inspecting the database, but in case your database does not contain foreign keys, you can build associations by hand to tell ADF about the relationships in your data.
View objects and view links

While you do not really need to make any major decisions when building the entity objects for the Proof of Concept, you do need to consider the consumers of your business services when you start building view objects—for example, what information you would display on a screen. View objects are typically based on entity objects, and you will be using them for two purposes:

- To provide data for your screens
- To provide data for lists of values (LOVs)

The data handling view objects are normally specific to each screen or business service. One screen can use multiple view objects—in general, you need to create one view object for each master-detail level you wish to display on your screen. One view object can pull together data from several entity objects, so if you just need to retrieve a reference value from another table, you do not need to create a separate view object for this.

The LOV view objects are used for drop-down lists and other selections in your user interface. They will typically be defined as read-only, and because they are reusable, you will define them once and re-use them everywhere you need a drop-down list on a specific data set.

View links are used to define the relationships between the view objects and are typically based on associations (again, often based on foreign keys in the database). Consider two ways to display the data from the familiar EMP and DEPT tables. If you wish to display a department with all the employees of the department in a master-detail screen, you create two view objects connected by a view link. If you wish to display all employees together with the name of the department where they work, you only need one view object, pulling together data from both the EMP and DEPT tables through the entity objects.

Application modules

Application modules encapsulate the view object instances and business service methods necessary to perform a unit of work. Each application module has its own transactional context and holds its own database connection. This means that all of the work a user performs using view objects from one application module is part of one database transaction.

Application modules can have different granularity, but typically, you will have one application module for each major piece of functionality. If your requirements are specified with use cases, there will often be one application module for each major use case. However, multiple use cases can also be grouped together into one application module—indeed, it is possible to build a small application using just one application module.

Application modules for Oracle Forms

If you come from an Oracle Forms background and are developing a replacement for an Oracle Forms application, your application will often have a relatively small number of complex, major Forms and a larger number of simple data maintenance Forms. You will often create one application module per major Form, and a few application modules that each provide data for a number of simple Forms.

If you wish, you can combine multiple application modules inside one root application module. This is called nesting and allows several application modules to participate in the transaction of the root application module. This also saves database connections, because only the root application module needs a connection.
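Although you will mostly work with application modules declaratively, it can help to see the moving parts in code. The following is a minimal sketch of a standalone client accessing a view object through a root application module; the module and view object names (com.dmcsol.xdm.model.XdmServiceAM, XdmServiceAMLocal, EmployeesView, Ename) are illustrative assumptions, not names from the book:

```java
import oracle.jbo.ApplicationModule;
import oracle.jbo.Row;
import oracle.jbo.ViewObject;
import oracle.jbo.client.Configuration;

public class ViewObjectClient {

    public static void main(String[] args) {
        // Create a root application module from its definition name
        // and a named configuration (both hypothetical here).
        ApplicationModule am = Configuration.createRootApplicationModule(
                "com.dmcsol.xdm.model.XdmServiceAM", "XdmServiceAMLocal");

        // Look up a view object instance exposed by the application module
        // and run its query against the database.
        ViewObject vo = am.findViewObject("EmployeesView");
        vo.executeQuery();

        // Iterate over the result set; any changes made through these rows
        // would all belong to the application module's single transaction.
        while (vo.hasNext()) {
            Row row = vo.next();
            System.out.println(row.getAttribute("Ename"));
        }

        // Release the application module and its database connection.
        Configuration.releaseRootApplicationModule(am, true);
    }
}
```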
The ADF user interface

The preferred way to build the user interface in an ADF enterprise application is with JavaServer Faces (JSF). JSF is a component-based framework for building web-based user interfaces that overcomes many of the limitations of earlier technologies like JavaServer Pages (JSP). In a JSF application, the user interface does not contain any code, but is instead built from configurable components from a component library. For your application, you will want to use the sophisticated ADF 11g JSF component library, known as the ADF Faces Rich Client.

There are other JSF component libraries—for example, the previous version of the ADF Faces components (version 10g) has been released by Oracle as open source and is now part of the Apache MyFaces Trinidad project. But for a modern enterprise application, use the ADF Faces Rich Client.

ADF Task Flows

One of the great improvements in ADF 11g was the addition of ADF Task Flows. It had long been clear to web developers that in a web application, you cannot just let each page decide where to go next—you need the controller from the MVC architecture. Various frameworks and technologies have implemented controllers (both the popular Struts framework and JSF have this), but the controller in ADF Task Flows is the first controller capable of handling large enterprise applications.

An ADF web application has one unbounded task flow, where you place all the publicly accessible pages and define the navigation between them. This corresponds to other controller architectures. But ADF also has bounded task flows, which are complete, reusable mini-applications that can be called from the unbounded task flow or from another bounded task flow. A bounded task flow has a well-defined entry point, accepts input parameters, and can deliver an outcome back to the caller. For example, you might build a customer management task flow to handle customer data. In this way, your application can be built in a modular fashion—the developers in charge of implementing each use case can define their own bounded task flow with a well-defined interface for others to call. The team building the customer management task flow is thus free to add new pages or change the navigation flow without affecting the rest of the application.

ADF pages and fragments

In your task flows, you can define either pages or page fragments. Pages are complete web pages that you can run on their own, while page fragments are reusable components that you place inside regions on pages. An enterprise application will often have a small number of pages (possibly only one), and a larger number of page fragments that dynamically replace each other inside a region. This design means that the user does not see the whole browser window redraw itself—only parts of the page change as one fragment is replaced with another. It is this technique that makes an ADF application seem more like a desktop application than a traditional web application.

On your pages or page fragments, you add content using layout components, data components, and control components:

- The layout components are containers for other components and control the screen layout. Often, multiple layout components are nested inside each other to achieve the desired layout.
- The data components are the fields, drop-down lists, radio buttons, and so on that the user interacts with to create and modify data.
- The control components are the buttons and links used to perform actions in an ADF application.


Java Refactoring in NetBeans

Packt
08 Jun 2011
7 min read
NetBeans IDE 7 Cookbook
Over 70 highly focused practical recipes to maximize your output with NetBeans

Introduction

Be warned that many of the refactoring techniques presented in this article might break some code. NetBeans, and other IDEs for that matter, make it easy to revert changes, but of course be wary of things going wrong. With that in mind, let's dig in.

Renaming elements

This recipe focuses on how the IDE handles the renaming of all elements of a project: the project itself, classes, methods, variables, and packages.

How to do it...

Let's create the code to be renamed:

1. Create a new project by either clicking File and then New Project, or pressing Ctrl+Shift+N.
2. In the New Project window, choose Java on the Categories side and Java Application on the Projects side. Then click Next.
3. Under Name and Location, name the project RenameElements and click Finish.
4. With the project created, clear the RenameElements.java class of the main method and insert the following code:

```java
package renameelements;

import java.io.File;

public class RenameElements {

    private void printFiles(String string) {
        File file = new File(string);
        if (file.isFile()) {
            System.out.println(file.getPath());
        } else if (file.isDirectory()) {
            for (String directory : file.list()) {
                printFiles(string + File.separator + directory);
            }
        }
        if (!file.exists()) {
            System.out.println(string + " does not exist.");
        }
    }
}
```

The next step is to rename the package:

1. Place the cursor on top of the package name, renameelements, and press Ctrl+R.
2. A Rename dialog pops up with the package name. Type util under New Name and click on Refactor.

Our class contains several variables we can rename:

1. Place the cursor on top of the String parameter named string and press Ctrl+R.
2. Type path and press Enter.
3. Rename the other variable, file, into filePath the same way.

To rename methods, perform the steps below:

1. Place the cursor on top of the method declaration, printFiles, right-click it, then select Refactor and Rename....
2. In the Rename Method dialog, under New Name, enter recursiveFilePrinting and press Refactor.

Then let's rename classes:

1. To rename a class, navigate to the Projects window and press Ctrl+R on the RenameElements.java file.
2. In the Rename Class dialog, enter FileManipulator and press Enter.

And finally, renaming an entire project:

1. Navigate to the Projects window, right-click on the project name, RenameElements, and choose Rename....
2. Under Project Name, enter FileSystem and tick Also Rename Project Folder; after that, click on Rename.

How it works...

Renaming a project works a bit differently from renaming a variable, since in this action NetBeans needs to rename the folder where the project is placed. The Ctrl+R shortcut is not enough in itself, so NetBeans shows the Rename Project dialog. This emphasizes to the developer that something deeper is happening. When renaming a project, NetBeans gives the developer the possibility of renaming the folder where the project is contained to the same name as the project. This is a good practice and, more often than not, is followed.

Moving elements

NetBeans enables the developer to easily move classes around different projects and packages. No more breaking compatibility when moving those classes around, since it is all seamlessly handled by the IDE.

Getting ready

For this recipe we will need a Java project and a Java class so we can demonstrate how moving elements really works. The existing code, created in the previous recipe, is going to be enough.
You can also try doing this with your own code, since moving classes is not such a complicated step that it can't be undone. Let's create a project:

1. Create a new project by either clicking File and then New Project, or pressing Ctrl+Shift+N.
2. In the New Project window, choose Java on the Categories side and Java Application on the Projects side, then click Next.
3. Under Name and Location, name the project MovingElements and click Finish.
4. Now right-click on the movingelements package, select New... and Java Class....
5. In the New Java Class dialog, enter the class name as Person. Leave all the other fields with their default values and click Finish.

How to do it...

1. Place the cursor inside Person.java and press Ctrl+M.
2. Select a working project from the Project field.
3. Select Source Packages in the Location field.
4. Under the To Package field, enter classextraction.

How it works...

When clicking the Refactor button, the class is removed from the current project and placed in the project that was selected in the dialog. The package declaration in that class is then updated to match.

Extracting a superclass

Extracting superclasses enables NetBeans to add different levels of hierarchy even after the code is written. Requirements usually change in the middle of development, and rewriting classes to support inheritance would be quite complicated and time-consuming. NetBeans enables the developer to create those superclasses in a few clicks and, by understanding how this mechanism works, even to create superclasses that extend other superclasses.

Getting ready

We will need to create a project based on the Getting ready section of the previous recipe, since it is very similar. The only change from the previous recipe is that this recipe's project name will be SuperClassExtraction. After project creation:

1. Right-click on the superclassextraction package, select New... and Java Class....
2. In the New Java Class dialog, enter the class name as DataAnalyzer. Leave all the other fields with their default values and click Finish.
3. Replace the entire content of DataAnalyzer.java with the following code:

```java
package superclassextraction;

import java.util.ArrayList;

public class DataAnalyzer {

    ArrayList<String> data;
    static final boolean CORRECT = true;
    static final boolean INCORRECT = false;

    private void fetchData() {
        //code
    }

    void saveData() {
    }

    public boolean parseData() {
        return CORRECT;
    }

    public String analyzeData(ArrayList<String> data, int offset) {
        //code
        return "";
    }
}
```

Now let's extract our superclass.

How to do it...

1. Right-click inside the DataAnalyzer.java class, select Refactor and Extract Superclass....
2. When the Extract Superclass dialog appears, enter the Superclass Name as Analyzer.
3. On Members to Extract, select all members, but leave saveData out.
4. Under the Make Abstract column, select analyzeData() and leave parseData(), saveData(), and fetchData() out. Then click Refactor.

How it works...

When the Refactor button is pressed, NetBeans copies the marked methods from DataAnalyzer.java and re-creates them in the superclass. NetBeans deals intelligently with methods marked as abstract. The abstract methods are moved up in the hierarchy and the implementation is left in the concrete class. In our example, analyzeData is declared in the abstract class but marked as abstract; the real implementation is left in DataAnalyzer. NetBeans also supports the moving of fields, in our case the CORRECT and INCORRECT fields.
The following is the code in DataAnalyzer.java:

```java
public class DataAnalyzer extends Analyzer {

    public void saveData() {
        //code
    }

    public String analyzeData(ArrayList<String> data, int offset) {
        //code
        return "";
    }
}
```

The following is the code in Analyzer.java:

```java
public abstract class Analyzer {

    static final boolean CORRECT = true;
    static final boolean INCORRECT = false;
    ArrayList<String> data;

    public Analyzer() {
    }

    public abstract String analyzeData(ArrayList<String> data, int offset);

    public void fetchData() {
        //code
    }

    public boolean parseData() {
        //code
        return DataAnalyzer.CORRECT;
    }
}
```

There's more...

Let's learn how to implement parent class methods.

Implementing parent class methods

Let's add a method to the parent class:

1. Open Analyzer.java and enter the following code:

```java
public void clearData() {
    data.clear();
}
```

2. Save the file.
3. Open DataAnalyzer.java, press Alt+Insert and select Override Method....
4. In the Generate Override Methods dialog, select the clearData() option and click Generate.

NetBeans will then override the method and add the implementation to DataAnalyzer.java:

```java
@Override
public void clearData() {
    super.clearData();
}
```


Working with User Defined Values in SAP Business One

Packt
03 Jun 2011
8 min read
Mastering SQL Queries for SAP Business One
Utilize the power of SQL queries to bring Business Intelligence to your small to medium-sized business

The User-Defined Values function enables SAP Business One users to enter values, originated by a predefined search process, for any field in the system (including user-defined fields). This function enables the user to enter data more efficiently and, perhaps most importantly, more accurately. In fact, the concept is a sort of "workflow light" implementation. It can both save user time and reduce double entry of data. In this article by Gordon Du, author of Mastering SQL Queries for SAP Business One, we will see how to work with User-Defined Values.

How to work with User-Defined Values

To access the User-Defined Values, you can choose the menu item Tools | User-Defined Values. You can also use the shortcut key Shift+Alt+F2 instead. Another option is to access it directly from a non-assigned field by using Shift+F2; this will be discussed later. Note that the option will not be available until you have brought up at least one form. This is because a UDV has to be associated with a form; it can't stand alone.

The following examples are taken from the A/R Down Payment Invoice, one of the standard marketing documents. From the UDV point of view, there is no big difference between this and the other types of documents, namely Sales Order, Purchase Order, Invoice, and so on. After a form is opened, a UDV can be defined.

We will start from an empty screen to show you the first step: bringing up a form. When a form is opened, you can define or change any UDV. In this case, we stop our cursor on the Due Date field and then press Shift+F2. A system message pops up; if you click on Yes, it brings up the same window as when you select the menu item mentioned earlier from the Tools menu or press Shift+Alt+F2.

When you get the User-Defined Values - Setup screen, you have three options. Apart from the default option, Without Search in User-Defined Values, you actually have only two choices:

- Search in Existing User-Defined Values
- Search in Existing User-Defined Values according to Saved Query

Let's go through the last option first: Search in Existing User-Defined Values according to Saved Query. Topics related to queries always get the top priority here.

Search in existing User-Defined Values according to the saved queries

The goal for this example is to input the due date as the current date automatically. The first thing to do for this option is to click on the bottom radio button of the three options.

After you have clicked the Search in Existing User-Defined Values according to Saved Query radio button, you will find a long empty textbox in a grey color and a checkbox for Auto Refresh When Field Changes underneath. Don't get confused by the color: even though in other functions throughout SAP Business One a gray colored field normally means that you cannot enter information into the field, that is not the case here. When you double-click the empty text box, you bring up the Query Manager window to select a query.
You can then browse the query category that relates to Formatted Searches and find the query you need. The query called Auto Date Today in the showcase is very simple. The query script is just:

```sql
SELECT GetDate()
```

This query returns the current date as the result. You double-click to select the query and then go back to the previous screen, which now shows the query name.

It may not be good enough to select only the query, because if you stop here you will always have to trigger the FMS query manually by pressing Shift+F2. To automate the FMS query process, you can tick the Auto Refresh When Field Changes checkbox under the selected query. After you check this box, another long text box is displayed with a drop-down list button. Under the text box, there are two radio buttons:

- Refresh Regularly
- Display Saved User-Defined Values

Display Saved User-Defined Values is the default selection if you do not change it.

When you click on the drop-down list arrow button, you will get a list of fields that are associated with the current form. In our example, the Customer/Vendor Code field has been selected. For a document header UDV, this field is often the most useful field to auto refresh on. In theory, you can select any field from the list. However, in reality only a few fields are good candidates for the task. These include Customer/Vendor Code, Document Currency, Document Number, and Document Total for the document header, and Item Code and Quantity for document lines. Choosing the correct data field from this drop-down list is always the most difficult step in Formatted Search, and you should test your data field selection fully.

Now the text box is filled with Customer/Vendor Code for automatically refreshing the UDV. Between the two options, this query can only use the default option, Display Saved User-Defined Values. Otherwise, the date would always change to the date you last updated the document on, which would invalidate the purpose of this UDV. The Refresh Regularly option is only suitable for a value that is closely related to the changed field that you have selected. In general, Display Saved User-Defined Values is a better option than Refresh Regularly; at the least, it puts less of a burden on the system. Selecting Refresh Regularly means you want the UDV to change whenever the base field changes.

The last step to set up this UDV is clicking Update. As soon as you click the button, the User-Defined Values - Setup window closes. You will find a green message on the bottom-left of the screen saying Operation Completed Successfully, and a small "magnifying glass" added to the right corner of the Due Date field. This means the Formatted Search has been set up successfully. You can try it for yourself.

Sometimes this "magnifying glass" disappears for no reason. Actually, there are reasons, but they are not easy to understand. The main reason is that you may have assigned different values to the same field on different forms. Other reasons may be related to add-ons, and so on.

To test this FMS, the first thing to try is the menu function or the key combination Shift+F2. The other option is to just click on the "magnifying glass". Both have the same result: forcing the query to run. You will find that the date is filled with the same date as the posting date and document date.
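The saved query does not have to return a single scalar value like GetDate(). As a hedged illustration (the query and use case are ours, not from the book), a saved query that returns several rows and columns would instead present the user with a pick list to choose from. For example, to offer a controlled list of sales employees from the OSLP table:

```sql
-- Illustrative saved query for a pick-list UDV (not from the book).
-- When an FMS query returns multiple rows, SAP Business One shows
-- a selection window instead of filling the field directly.
SELECT T0.SlpCode, T0.SlpName
FROM OSLP T0
ORDER BY T0.SlpName
```

Attached to a field in the same way as above, this lets the user fill the field from a list rather than typing a value, which is exactly the kind of accuracy gain the User-Defined Values function is meant to provide.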
You may find some interesting date definitions in SAP Business One. Posting Date, for example, is held by the field DocDate, while Document Date is saved under TaxDate. Be careful when dealing with dates: you must follow the system's definition of those terms to get the correct result.

A better way to use this FMS query is to enter the customer code directly, without forcing the FMS query to run first. Suppose the customer code OneTime has been entered while the Due Date field is still empty. Is there anything wrong? No. That is the system's expected behavior: only when your cursor leaves the Customer Code field can the FMS query be triggered. That is a perfect example of When Field Value Changes; the system can only know that the field value has changed when you tab out of the field.

Be careful to follow system requirements while entering data. Never press Enter in most of the forms unless you are ready for the last step to add or update data. If you do, you may add wrong documents to the system, and they are irrevocable.

That is the complete process of setting up Search in Existing User-Defined Values according to Saved Query. Now it is time to discuss the $ sign field.


NetBeans IDE 7: Building an EJB Application

Packt
01 Jun 2011
10 min read
NetBeans IDE 7 Cookbook
Over 70 highly focused practical recipes to maximize your output with NetBeans

Introduction

Enterprise JavaBeans (EJB) is a framework of server-side components that encapsulates business logic. These components adhere to strict specifications on how they should behave, which ensures that vendors who wish to implement EJB-compliant code must follow conventions, protocols, and classes, ensuring portability. The EJB components are then deployed in EJB containers, also called application servers, which manage persistence, transactions, and security on behalf of the developer. If you wish to learn more about EJBs, visit http://jcp.org/en/jsr/detail?id=318 or https://www.packtpub.com/developer-guide-for-ejb3/book.

For our EJB application to run, we need an application server. Application servers are responsible for implementing the EJB specifications and creating the perfect environment for our EJBs to run in. Some of the capabilities supported by EJB and enforced by application servers are:

- Remote access
- Transactions
- Security
- Scalability

NetBeans 6.9, or higher, supports the new Java EE 6 platform, making it the only IDE so far to bring the full power of EJB 3.1 to a simple IDE interface for easy development. NetBeans makes it easy to develop an EJB application and deploy it on different application servers without the need to over-configure and mess with different configuration files. It's as easy as a right-click on a project node.

Creating an EJB project

In this recipe, we will see how to create an EJB project using the wizards provided by NetBeans.

Getting ready

It is required to have NetBeans with Java EE support installed to continue with this recipe. If this particular NetBeans version is not available on your machine, you can download it from http://download.netbeans.org. There are two application servers in this installation package, Apache Tomcat and GlassFish; either one can be chosen, but at least one is necessary. In this recipe, we will use the GlassFish version that comes together with the NetBeans 7.0 installation package.

How to do it...

1. Let's create a new project by either clicking File and then New Project, or by pressing Ctrl+Shift+N.
2. In the New Project window, on the Categories side, choose Java Web, and on the Projects side, select Web Application, then click Next.
3. In Name and Location, under Project Name, enter EJBApplication.
4. Tick the Use Dedicated Folder for Storing Libraries option box. Now either type the folder path or select one by clicking on Browse.
5. After choosing the folder, proceed by clicking Next.
6. In Server and Settings, under Server, choose GlassFish Server 3.1.
7. Tick Enable Contexts and Dependency Injection.
8. Leave the other values with their default values and click Finish.

The new project structure is created.

How it works...

NetBeans creates a complete file structure for our project. It automatically configures the compiler and test libraries and creates the GlassFish deployment descriptor. The deployment descriptor filename specific to the GlassFish web server is glassfish-web.xml.

Adding JPA support

The Java Persistence API (JPA) is one of the frameworks that equips Java with object/relational mapping. Within JPA, a query language is provided that supports the developers in abstracting the underlying database.
With the release of JPA 2.0, many areas were improved, such as:

- Domain modeling
- The EntityManager and Query interfaces
- The JPA query language
- and others

We are not going to study the inner workings of JPA in this recipe. If you wish to know more about JPA, visit http://jcp.org/en/jsr/detail?id=317 or http://download.oracle.com/javaee/5/tutorial/doc/bnbqa.html.

NetBeans provides very good support for enabling your application to quickly create entities annotated with JPA. In this recipe, we will see how to configure your application to use JPA. We will continue to expand the previously-created project.

Getting ready

We will use GlassFish Server in this recipe, since it is the only server that supports Java EE 6 at the moment. We also need to have Java DB configured. GlassFish already includes a copy of Java DB in its installation folder. Another source of an installed Java DB is the JDK installation directory. It is not necessary to build on top of the previous recipe, but it is imperative to have a database schema. Feel free to create your own entities by following the steps presented in this recipe.

How to do it...

1. Right-click on the EJBApplication node and select New and Entity Classes from Database....
2. In Database Tables: under Data Source, select jdbc/sample and let the IDE initialize Java DB. When Available Tables is populated, select MANUFACTURER, click Add, and then click Next.
3. In Entity Classes: leave all the fields with their default values, except Package, where you enter entities; then click Finish.

How it works...

NetBeans imports and creates our Java class from the database schema, in our case the Manufacturer.java file placed under the entities package. Besides that, NetBeans makes it easy to import and start using the entity straightaway. Many of the most common queries, for example find by name, find by zip, and find all, are already built into the class itself. The JPA queries, which are akin to normal SQL queries, are defined in the entity class itself. Listed below are some of the queries defined in the entity class Manufacturer.java:

```java
@Entity
@Table(name = "MANUFACTURER")
@NamedQueries({
    @NamedQuery(name = "Manufacturer.findAll",
        query = "SELECT m FROM Manufacturer m"),
    @NamedQuery(name = "Manufacturer.findByManufacturerId",
        query = "SELECT m FROM Manufacturer m WHERE m.manufacturerId = :manufacturerId"),
    // ... more generated queries follow
```

The @Entity annotation defines that this class, Manufacturer.java, is an entity; the @Table annotation that follows, with its name parameter, points out the table in the database where the information is stored. The @NamedQueries annotation is the place where all the NetBeans-generated JPA queries are stored. There can be as many named queries as the developer feels necessary. One of the named queries we are using in our example is Manufacturer.findAll, which is a simple select query. When invoked, the query is translated to:

```sql
SELECT m FROM Manufacturer m
```

On top of that, NetBeans implements the equals, hashCode, and toString methods—very useful if the entities need to be used straight away with collections such as HashMap. Below is the NetBeans-generated code for both the hashCode and equals methods:

```java
@Override
public int hashCode() {
    int hash = 0;
    hash += (manufacturerId != null ? manufacturerId.hashCode() : 0);
    return hash;
}

@Override
public boolean equals(Object object) {
    // TODO: Warning - this method won't work in the case the id fields are not set
    if (!(object instanceof Manufacturer)) {
        return false;
    }
    Manufacturer other = (Manufacturer) object;
    if ((this.manufacturerId == null && other.manufacturerId != null)
            || (this.manufacturerId != null
                && !this.manufacturerId.equals(other.manufacturerId))) {
        return false;
    }
    return true;
}
```

NetBeans also creates a persistence.xml and provides a visual editor, simplifying the management of different persistence units (in case our project needs to use more than one), thereby making it possible to manage the persistence.xml without even touching the XML code. A persistence unit, defined in persistence.xml, is the configuration file in JPA; it is placed under Configuration Files when the NetBeans view is in Projects mode. This file defines the data source and the name of the persistence unit. In our example:

```xml
<persistence-unit name="EJBApplicationPU" transaction-type="JTA">
    <jta-data-source>jdbc/sample</jta-data-source>
    <properties/>
</persistence-unit>
```

Our persistence unit is named EJBApplicationPU and uses jdbc/sample as the data source. To add more persistence units, click on the Add button in the uppermost right corner of the persistence visual editor.

Creating a Stateless Session Bean

A Session Bean encapsulates business logic in methods, which in turn are executed by a client. This way, the business logic is separated from the client. Stateless Session Beans do not maintain state. This means that when a client invokes a method in a stateless bean, the bean is ready to be reused by another client. The information stored in the bean is generally discarded when the client stops accessing the bean. This type of bean is mainly used for persistence purposes, since persistence does not require a conversation with the client.

It is not in the scope of this recipe to learn how stateless beans work in detail. If you wish to learn more, please visit http://jcp.org/en/jsr/detail?id=318 or https://www.packtpub.com/developer-guide-for-ejb3/book.

In this recipe, we will see how to use NetBeans to create a Stateless Session Bean that retrieves information from the database, passes it through a servlet, and prints this information on a page that is created on the fly by our servlet.

Getting ready

It is required to have NetBeans with Java EE support installed to continue with this recipe. If this particular NetBeans version is not available on your machine, please visit http://download.netbeans.org. We will use GlassFish Server in this recipe, since it is the only server that supports Java EE 6 at the moment. We also need to have Java DB configured. GlassFish already includes a copy of Java DB in its installation folder. It is possible to follow the steps in this recipe without the previous code, but for better understanding we will continue to build on top of the previous recipes' source code.

How to do it...

1. Right-click on the EJBApplication node and select New and Session Bean....
2. For Name and Location: name the EJB ManufacturerEJB, and under Package, enter beans.
3. Leave Session Type as Stateless.
4. Leave Create Interface with nothing marked and click Finish.
Here are the steps for us to create business methods:

1. Open ManufacturerEJB and inside the class body, enter:

```java
@PersistenceUnit
EntityManagerFactory emf;

public List findAll() {
    return emf.createEntityManager()
              .createNamedQuery("Manufacturer.findAll")
              .getResultList();
}
```

2. Press Ctrl+Shift+I to resolve the following imports:

- java.util.List
- javax.persistence.EntityManagerFactory
- javax.persistence.PersistenceUnit

Creating the servlet:

1. Right-click on the EJBApplication node and select New and Servlet....
2. For Name and Location: name the servlet ManufacturerServlet, and under Package, enter servlets. Leave all the other fields with their default values and click Next.
3. For Configure Servlet Deployment: leave all the default values and click Finish.

With the ManufacturerServlet open:

1. After the class declaration and before the processRequest method, add:

```java
@EJB
ManufacturerEJB manufacturerEJB;
```

2. Then, inside the processRequest method, on the first line after the try statement, add:

```java
List<Manufacturer> l = manufacturerEJB.findAll();
```

3. Remove the /* TODO output your page here comment, including the closing */.
4. Finally, replace:

```java
out.println("<h1>Servlet ManufacturerServlet at " + request.getContextPath() + "</h1>");
```

with:

```java
for (int i = 0; i < 10; i++)
    out.println("<b>City</b> " + l.get(i).getCity() + ", <b>State</b> "
            + l.get(i).getState() + "<br>");
```

5. Resolve all the import errors and save the file.

How it works...

To execute the code produced in this recipe, right-click on the EJBApplication node and select Run. When the browser launches, append /ManufacturerServlet to the end of the URL and hit Enter. Our application will return city and state names.

One of the coolest features in Java EE 6 is that usage of web.xml can be avoided by annotating the servlet. The following code does exactly that:

```java
@WebServlet(name = "ManufacturerServlet", urlPatterns = {"/ManufacturerServlet"})
```

Since we are working with Java EE 6, our stateless bean does not need the daunting work of creating interfaces; the @Stateless annotation takes care of that, making it easier to develop EJBs. We then add the persistence unit, represented by the EntityManagerFactory and injected by the @PersistenceUnit annotation. Finally, we have our business method that is used from the servlet. The findAll method uses one of the named queries from our entity to fetch information from the database.
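Pulling the recipe's fragments together, the finished servlet might look roughly like the following sketch. The surrounding boilerplate (content type, doGet delegating to processRequest) follows what the NetBeans servlet template generates in substance, though the exact generated code may differ:

```java
package servlets;

import java.io.IOException;
import java.io.PrintWriter;
import java.util.List;
import javax.ejb.EJB;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import beans.ManufacturerEJB;
import entities.Manufacturer;

// The annotation replaces the web.xml servlet mapping, as noted above.
@WebServlet(name = "ManufacturerServlet", urlPatterns = {"/ManufacturerServlet"})
public class ManufacturerServlet extends HttpServlet {

    // The container injects the stateless session bean.
    @EJB
    ManufacturerEJB manufacturerEJB;

    protected void processRequest(HttpServletRequest request,
            HttpServletResponse response) throws ServletException, IOException {
        response.setContentType("text/html;charset=UTF-8");
        PrintWriter out = response.getWriter();
        try {
            // Fetch all manufacturers via the EJB's named query.
            List<Manufacturer> l = manufacturerEJB.findAll();
            out.println("<html><body>");
            // Print the city and state of the first ten manufacturers.
            for (int i = 0; i < 10; i++) {
                out.println("<b>City</b> " + l.get(i).getCity()
                        + ", <b>State</b> " + l.get(i).getState() + "<br>");
            }
            out.println("</body></html>");
        } finally {
            out.close();
        }
    }

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        processRequest(request, response);
    }
}
```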


SQL Query Basics in SAP Business One

Packt
18 May 2011
7 min read
Mastering SQL Queries for SAP Business One
Utilize the power of SQL queries to bring Business Intelligence to your small to medium-sized business

Who can benefit from using SQL Queries in SAP Business One?

There are many different groups of SAP Business One users who may need this tool. To my knowledge, there is no standard organization chart for small and midsized enterprises; most of them are different, and you may often find one person who handles more than one role. Check the following list to see if anything applies to you:

- Do you need to check specific sales results over certain time periods, for certain areas, or for certain customers?
- Do you want to know who the top vendors from certain locations for certain materials are?
- Do you have a dynamically updated view of your sales force's performance in real time?
- Do you often check whether approval procedures exactly match your expectations?
- Have you tried to start building your SQL query but could not get it done properly?
- Have you experienced writing SQL queries whose results are not always correct or up to your expectations?

Consultant

If you are an SAP Business One consultant, you have probably mastered SQL query already. However, if that is not the case, this would be a great help to extend your consulting power. It will probably become a mandatory skill in the future that any SAP Business One consultant should be able to use SQL query.

Developer

If you are an SAP Business One add-on developer, these skills will be a good addition to your capabilities. You may find this useful even in some other development work like coding or programming. Very often you need to embed SQL queries in your code to complete your Software Development Kit (SDK) project.

SAP Business One end user

If you are simply a normal SAP Business One end user, you may need this even more. This is because SQL query usage is best applied by companies that have live SAP Business One data. Only you, as the end user, know better than anyone else what you are looking for to make Business Intelligence a daily routine job. It is very important for you to be able to create a query report so that you can map your requirements to a query in a timely manner.

SQL query and related terms

Before going into the details of SQL query, I would like to briefly introduce some basic database concepts, because SQL is a database language for managing data in Relational Database Management Systems (RDBMS).

RDBMS

An RDBMS is a Database Management System that is based on the relational model. Relational is the key word for RDBMS: data is stored in the form of tables, and the relationships among the data are also stored in the form of tables.

Table

A table is a key component within a database. One table or a group of tables represents one kind of data. For example, the table OSLP within SAP Business One holds all Sales Employee data. Tables are two-dimensional data storage placeholders. You need to be familiar with their usage and their relationships with each other. If you are familiar with Microsoft Excel, a worksheet in Excel is a kind of two-dimensional table. The relationships between tables may be more important than the tables themselves, because without relations nothing could be of any value. One important function within SAP Business One is allowing User-Defined Tables (UDTs). All UDT names start with "@".

Field

A field is the lowest unit holding data within a table. A table can have many fields. It is also called a column.
Field and column are interchangeable. A table is comprised of records, and all records have the same structure with specific fields. One important concept in SAP Business One is the User Defined Field (UDF); all UDFs start with U_.

SQL SQL stands for Structured Query Language. It is pronounced S-Q-L or as the word "Sequel". There are many different revisions and extensions of SQL; the current revision is SQL:2008, and the first major revision was SQL-92. Most SQL extensions are built on top of SQL-92.

T-SQL Since SAP Business One is built on the Microsoft SQL Server database, SQL here means Transact-SQL, or T-SQL in brief. It is Microsoft's and Sybase's extension of standard SQL.

Subsets of SQL There are three main subsets of the SQL language:

Data Control Language (DCL)
Data Definition Language (DDL)
Data Manipulation Language (DML)

Each subset has a special purpose. DCL is used to control access to data in a database, such as granting or revoking a specified user's right to perform specified tasks. DDL is used to define data structures, such as creating, altering, or dropping tables. DML is used to retrieve and manipulate data in tables, such as inserting, deleting, and updating data. SELECT, however, is a special statement belonging to this subset, even though it is a read-only command that does not manipulate data at all.

Query The query is the most common operation in SQL, and it can draw on all three SQL subsets. You have to understand the risk of running any insert, delete, or update query that could potentially alter system tables, even when only User Defined Fields are involved. Only SELECT queries are legitimate against SAP Business One system tables.

Data dictionary In order to create working SQL queries, you not only need to know how to write them, but also need a clear view of the relationships between tables and where to find the information required. A good data dictionary is therefore essential. Fortunately, there is a very good reference called SAP Business One Database Tables Reference, readily available through the SAP Business One SDK Help Center. You can find the details in the following section.

SAP Business One—Database tables reference The database tables reference file, named REFDB.CHM, is the one we are looking for. The SDK is usually installed on the same server as the SAP Business One database server. Normally, the file path is X:\Program Files\SAP\SAP Business One SDK\Help, where "X" is the drive on which your SAP Business One SDK is installed. In this help file, we will find the same categories as in the SAP Business One menu, with all 11 modules. The tables related to each module are listed one by one, with tree structures where header tables have row tables. Each table provides a list of all its fields along with their description, type, size, related tables, default value, and constraints.

Naming convention of tables for SAP Business One To help you understand the previously mentioned data dictionary quickly, we will go through the naming conventions for tables in SAP Business One.

Three-letter words Most tables in SAP Business One have four-letter names; the only exceptions are number-ending tables, where numbers greater than nine push the name to five letters.
To make table names easy to understand, SAP Business One builds them around three-letter abbreviations. Some of the commonly used abbreviations are listed as follows:

ADM: Administration
ATC: Attachments
CPR: Contact Persons
CRD: Business Partners
DLN: Delivery Notes
HEM: Employees
INV: Sales Invoices
ITM: Items
ITT: Product Trees (Bill of Materials)
OPR: Sales Opportunities
PCH: Purchase Invoices
PDN: Goods Receipt PO
POR: Purchase Orders
QUT: Sales Quotations
RDR: Sales Orders
RIN: Sales Credit Notes
RPC: Purchase Credit Notes
SLP: Sales Employees
USR: Users
WOR: Production Orders
WTR: Stock Transfers
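To see these conventions in action, here is a small illustrative query. It is a sketch rather than a listing from the article, and the table and field names used (ORDR and OSLP with SlpCode, SlpName, DocNum, DocDate, DocTotal, and DocStatus) should be verified against REFDB.CHM for your own version:

-- Open sales orders per sales employee: ORDR is the Sales Orders
-- header table ("O" prefix plus RDR), OSLP holds Sales Employees.
SELECT T1.SlpName  AS [Sales Employee],
       T0.DocNum   AS [Order No.],
       T0.DocDate  AS [Posting Date],
       T0.DocTotal AS [Order Total]
FROM   ORDR T0
       INNER JOIN OSLP T1 ON T0.SlpCode = T1.SlpCode
WHERE  T0.DocStatus = 'O'   -- 'O' = open documents
ORDER  BY T1.SlpName, T0.DocDate

Because this is a pure SELECT, it is safe to run against the system tables, in line with the warning above.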
article-image-tcltk-handling-string-expressions
Packt
02 Mar 2011
11 min read
Save for later

Tcl/Tk: Handling String Expressions

Tcl/Tk 8.5 Programming Cookbook Over 100 great recipes to effectively learn Tcl/Tk 8.5 The quickest way to solve your problems with Tcl/Tk 8.5 Understand the basics and fundamentals of the Tcl/Tk 8.5 programming language Learn graphical User Interface development with the Tcl/Tk 8.5 Widget set Get a thorough and detailed understanding of the concepts with a real-world address book application Each recipe is a carefully organized sequence of instructions to efficiently learn the features and capabilities of the Tcl/Tk 8.5 language

When I first started using Tcl, everything I read or researched stressed the mantra "Everything is a string". Coming from a hard-typed coding environment, I was used to declaring variable types, and in Tcl this was not needed. A set command could—and still does—create the variable and assign the type on the fly. For example, set variable "7" and set variable 7 will both create a variable containing 7. However, with Tcl, you can still print the variable containing a numeric 7 and add 1 to the variable containing a string representation of 7. It still holds true today that everything in Tcl is a string. When we explore the Tk Toolkit and widget creation, you will rapidly see that widgets themselves have a set of string values that determine their appearance and/or behavior.

As a prerequisite for the recipes in this article, launch the Tcl shell as appropriate for your operating system. You can access Tcl from the command line to execute the commands.

As with everything else we have seen, Tcl provides a full suite of commands to assist in handling string expressions. However, due to the sheer number of commands and subsets, I won't be listing every item individually in the following section. Instead, we will be creating numerous recipes and examples to explore them in the following sections. A general list of the commands is as follows:

string: Contains multiple keywords allowing for manipulation and data-gathering functions.
append: Appends to a string variable.
format: Formats a string in the same manner as C sprintf.
regexp: Regular expression matching.
regsub: Performs substitution based on regular expression matching.
scan: Parses a string using conversion specifiers in the same manner as C sscanf.
subst: Performs backslash, command, and variable substitution on a string.

Using the commands listed above, a developer can address all their needs as they apply to strings. In the following sections, we will explore these commands as well as many subsets of the string command.

Appending to a string

Creating a string in Tcl using the set command is the starting point for all string commands, and it will be the first command for most, if not all, of the following recipes. As we have seen previously, entering set variable value on the command line does this. However, to fully use strings within a Tcl script, we need to interact with them over time, for example, with an open channel to a file or an HTTP pipe. To do so, we read from the channel and append to the original string. To accomplish this, Tcl provides the append command, which is as follows:

append variable value value value...

How to do it… In the following example, we will create a string of comma-delimited numbers using the for control construct. Return values from the commands are provided for clarity.
Enter the following command:

% set var 0
0
% for {set x 1} {$x<=10} {incr x} { append var , $x }
% puts $var
0,1,2,3,4,5,6,7,8,9,10

How it works… The append command accepts a named variable to contain the resulting string and a space-delimited list of strings to append. As you can see, the append command accepted our variable argument and a string containing the comma. These values were used to append to the original variable (containing a starting value of 0). The resulting string output with the puts command displays our newly appended variable, complete with commas.

Formatting a string

Strings, as we all know, are our primary way of interacting with the end user. Whether presented in a message box or simply directed to the Tcl shell, they need to be as fluid as possible in the values they present. To accomplish this, Tcl provides the format command. This command allows us to format a string with variable substitution in the same manner as the ANSI C sprintf procedure. The format command is as follows:

format string argument argument argument...

The format command accepts a string containing the value to be formatted as well as % conversion specifiers. The arguments contain the values to be substituted into the final string. Each conversion specifier may contain up to six sections—an XPG2 position specifier, a set of flags, a minimum field width, a numeric precision specifier, a size modifier, and a conversion character. The conversion specifiers are as follows:

d or i: Converts an integer to a signed decimal string.
u: Converts an integer to an unsigned decimal string.
o: Converts an integer to an unsigned octal string.
x or X: Converts an integer to an unsigned hexadecimal string. The lowercase x produces lowercase hexadecimal notation; the uppercase X produces uppercase hexadecimal notation.
c: Converts an integer to the Unicode character it represents.
s: No conversion is performed.
f: Converts the number provided to a signed decimal string of the form xxx.yyy, where the number of y's is determined by the precision (6 decimal places by default).
e or E: Scientific notation; if the uppercase E is used, it appears in the string in place of the lowercase e.
g or G: If the exponent is less than -4 or greater than or equal to the precision, converts the number in the same manner as %e or %E; otherwise, converts in the same manner as %f.
%: Performs no conversion; it merely inserts a % character into the string.

There are three differences between the Tcl format command and the ANSI C sprintf procedure:

The %p and %n conversion switches are not supported.
The %c conversion accepts only an integer value.
Size modifiers are ignored when formatting floating-point values.

How to do it… In the following example, we format a long date string for output on the command line. Return values from the commands are provided for clarity. Enter the following command:

% set month May
May
% set weekday Friday
Friday
% set day 5
5
% set extension th
th
% set year 2010
2010
% puts [format "Today is %s, %s %d%s %d" $weekday $month $day $extension $year]
Today is Friday, May 5th 2010

How it works… The format command successfully replaced the conversion-flag-delimited regions with the variables assigned.

Matching a regular expression within a string

Regular expressions provide us with a powerful method to locate an arbitrarily complex pattern within a string. The regexp command is similar to a Find function in a text editor.
You search a defined string for a character or pattern of characters; the command returns a Boolean value that indicates success or failure and populates a list of optional variables with any matched strings. The -indices and -inline options modify this behavior. But it doesn't stop there; by providing switches, you can control the behavior of regexp. The switches are as follows:

-about: No actual matching is made. Instead, regexp returns a list containing information about the regular expression, where the first element is a subexpression count and the second is a list of property names describing various attributes of the expression.
-expanded: Allows the use of expanded regular expressions, wherein whitespace and comments are ignored.
-indices: Returns a list of two decimal strings containing the indices in the string of the first and last characters in the matched range.
-line: Enables newline-sensitive matching, similar to passing the -linestop and -lineanchor switches.
-linestop: Changes the behavior of [^] bracket expressions and the "." character so that they stop at newline characters.
-lineanchor: Changes the behavior of ^ and $ (anchors) so that they match the beginning and end of a line.
-nocase: Treats uppercase characters in the search string as lowercase.
-all: Causes the command to match as many times as possible and returns the count of the matches found.
-inline: Causes regexp to return a list of the data that would otherwise have been placed in match variables. Match variables may NOT be used if -inline is specified.
-start: Allows us to specify a character index from which searching should start.
--: Denotes the end of the switches being passed to regexp. Any argument following this switch will be treated as part of the expression, even if it starts with a "-".

Now that we have a background in the switches, let's look at the command itself:

regexp switches expression string submatchvar submatchvar...

The regexp command determines whether the expression matches part or all of the string and returns 1 if the match exists or 0 if it is not found. If variables (submatchvar) (for example, myNumber or myData) are passed after the string, they are used to store the returned submatches. Keep in mind that if the -inline switch has been passed, no return variables should be included in the command.

Getting ready To complete the following example, we will need to create a Tcl script file in your working directory. Open the text editor of your choice and follow the next set of instructions.

How to do it… A common use for regexp is to accept a string containing multiple words and split it into its constituent parts. In the following example, we will create a string containing an IP address and assign the octet values to named variables. Enter the following commands:

% set ip 192.168.1.65
192.168.1.65
% regexp {([0-9]{1,3})\.([0-9]{1,3})\.([0-9]{1,3})\.([0-9]{1,3})} $ip all first second third fourth
1
% puts "$all \n$first \n$second \n$third \n$fourth"
192.168.1.65
192
168
1
65

How it works… As you can see, the IP address has been split into its individual octet values. What regexp has done is match the groupings of decimal characters [0-9] of a varying length of 1 to 3 characters {1,3}, delimited by a "." character. The original IP address is assigned to the first variable (all), while the octet values are assigned to the remaining variables (first, second, third, and fourth).
Performing character substitution on a string

If regexp is a Find function, then regsub is the equivalent of Find and Replace. The regsub command accepts a string and, using regular expression pattern matching, locates and, if desired, replaces the pattern with the desired value. The syntax of regsub is similar to regexp, as are the switches; however, additional control over the substitution is added. The switches are as listed next:

-all: Causes the command to perform substitution for each match found. The \& and \n sequences are handled for each substitution.
-expanded: Allows the use of expanded regular expressions, wherein whitespace and comments are ignored.
-line: Enables newline-sensitive matching, similar to passing the -linestop and -lineanchor switches.
-linestop: Changes the behavior of [^] bracket expressions so that they stop at newline characters.
-lineanchor: Changes the behavior of ^ and $ (anchors) so that they match the beginning and end of a line.
-nocase: Treats uppercase characters in the search string as lowercase.
-start: Allows specification of a character offset in the string from which to start matching.

Now that we have a background in the switches as they apply to the regsub command, let's look at the command:

regsub switches expression string substitution variable

The regsub command matches the expression against the string provided and either copies the string to the variable or returns the string if a variable is not provided. If a match is located, the portion of the string that matched is replaced by substitution. Whenever the substitution contains an & or \0, that sequence is replaced with the portion of the string that matched the expression. If the substitution contains \n (where n represents a numeric value between 1 and 9), it is replaced with the portion of the string that matched the nth subexpression of the expression. Additional backslashes may be used in the substitution to prevent interpretation of the &, \0, and \n sequences and of the backslashes themselves. As both the regsub command and the Tcl interpreter perform backslash substitution, you should enclose the string in curly braces to prevent unintended substitution.

How to do it… In the following example, we will substitute every instance of the word one, where it is a word by itself, with the word three. Return values from the commands are provided for clarity. Enter the following command:

% set original "one two one two one two"
one two one two one two
% regsub -all {one} $original three new
3
% puts $new
three two three two three two

How it works… As you can see, the value returned from the regsub command lists the number of matches found. The string original has been copied into the string new with the substitutions completed. With additional switches, you can easily parse a lengthy string variable and perform bulk updates. I have used this to rapidly parse a large text file prior to importing the data into a database.
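The command table at the start of this article also lists scan and subst, which the recipes above do not demonstrate. Here is a brief interactive sketch of both; the IP string is reused from the regexp recipe purely for illustration:

% scan "192.168.1.65" "%d.%d.%d.%d" a b c d
4
% puts "$a $b $c $d"
192 168 1 65
% set name World
World
% subst {Hello, $name! [string toupper $name]}
Hello, World! WORLD

The scan command returns the number of conversions performed, while subst applies variable and command substitution to a braced string that would otherwise be taken literally.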

article-image-overview-tcl-shell
Packt
15 Feb 2011
10 min read
Save for later

An Overview of the Tcl Shell

Tcl/Tk 8.5 Programming Cookbook Over 100 great recipes to effectively learn Tcl/Tk 8.5 The quickest way to solve your problems with Tcl/Tk 8.5 Understand the basics and fundamentals of the Tcl/Tk 8.5 programming language Learn graphical User Interface development with the Tcl/Tk 8.5 Widget set Get a thorough and detailed understanding of the concepts with a real-world address book application Each recipe is a carefully organized sequence of instructions to efficiently learn the features and capabilities of the Tcl/Tk 8.5 language

Introduction

So, you've installed Tcl, written some scripts, and now you're ready to get a deeper understanding of Tcl and all that it has to offer. So, why are we starting with the shell when it is the most basic tool in the Tcl toolbox?

When I started using Tcl, I needed to rapidly deliver a Graphical User Interface (GUI) to display video from IP-based network cameras. The solution had to run on Windows and Linux, and it could not be browser-based due to the end user's security concerns. The client needed it quickly, and our sales team had, as usual, committed to a delivery date without speaking to the developer in advance. So, with the requirement document in hand, I researched the open source tools available at the time, and Tcl/Tk was the only language that met the challenge. The original solution quickly evolved into a full-featured IP video security system with the ability to record and display historic video as well as attach to live video feeds from the cameras. Next, search capabilities were added to review the stored video, along with a method to navigate to specific dates and times. The final version included configuring advanced recording settings such as resolution, color levels, frame rate, and variable-speed playback. All of it was accomplished with Tcl.

Due to the time constraints, I was not able to get a full appreciation of the capabilities of the shell. I saw it as a basic tool to interact with the interpreter, to run commands, and to access the file system. When I had the time, I returned to the shell and realized just how valuable a tool it is and how many capabilities I had failed to make use of. When used to its fullest, the shell provides much more than an interface to the Tcl interpreter, especially in the early stages of the development process. Need to isolate and test a procedure in a program? Need a quick debugging tool? Need real-time notification of the values stored in a variable? The Tcl shell is the place to go.

Since then, I have learned countless uses for the shell that would not only have sped up the development process, but also saved me several headaches in debugging the GUI and video collection. I relied on numerous dialog boxes to pop up values, or turned to writing debugging information to error logs. While this was an excellent way to get what I needed, I could have minimized the coding overhead by simply relying on the shell to display the desired information in the early stages. While dialog windows and error logs are irreplaceable, I now add quick debugging using the commands the shell has to offer. If something isn't proceeding as expected, I drop in a command to write to standard out and voila! I have my answer. The shell continues to provide me with a reliable method to isolate issues with a minimum investment of time.

The Tcl shell

The Tcl Shell (Tclsh) provides an interface to the Tcl interpreter that accepts commands from both standard input and text files.
Much like the Windows Command Line or a Linux terminal, the Tcl shell allows a developer to rapidly invoke a command and observe the return value or error messages on standard output. The shell differs based on the operating system in use: on Unix/Linux systems it runs in the standard terminal console, while on a Windows system the shell is launched separately via an executable.

If invoked with no arguments, the shell interface runs interactively, accepting commands from the native command line. The input line is marked with a percent sign (%), with the prompt located at the start position. If the shell is invoked from the command line (Windows DOS or a Unix/Linux terminal) and arguments are passed, the interpreter will accept the first as the name of a file to be read; any additional arguments are processed as variables. The shell will run until the exit command is invoked or until it has reached the end of the text file.

When invoked with arguments, the shell sets several Tcl variables that may be accessed within your program, much like the C family of languages. These variables are:

argc: Contains the number of arguments passed in, excluding the script file name. A value of 0 is returned if no arguments were passed in.
argv: Contains a Tcl list whose elements are the arguments passed in. An empty string is returned if no arguments were provided.
argv0: Contains the filename (if specified) or the name used to invoke the Tcl shell.
tcl_interactive: Contains 1 if Tclsh is running in interactive mode, otherwise 0.
env: Maintained automatically as an array in Tcl; it is created at startup to hold the environment variables of your system.

Writing to the Tcl console

The following recipe illustrates a basic command invocation. In this example, we will use the puts command to output a "Hello World" message to the console.

Getting ready To complete the following example, launch your Tcl shell as appropriate for your operating platform. For example, on Windows you would launch the executable contained in the bin directory of the Tcl installation, while on a Unix/Linux installation you would enter tclsh at the command line, provided this is the executable name for your particular system. To check the name, locate the executable in the bin directory of your installation.

How to do it… Enter the following command:

% puts "Hello World"
Hello World

How it works… As you can see, the puts command writes what it was passed as an argument to standard out. Although this is a basic "Hello World" recipe, you can easily see how this 'simple' command can be used to rapidly track the location within a procedure where a problem may have arisen. Add in variable values and some error handling, and you can isolate issues and correct them without the additional effort of creating a dialog window or writing to an error log.

Mathematical expressions

The expr command is used to evaluate mathematical expressions. This command can address everything from simple addition and subtraction to advanced computations, such as sine and cosine, eliminating the need to make system calls to perform advanced mathematical functions. The expr command evaluates the input and arguments, and returns an integer or floating-point value. A Tcl expression consists of a combination of operators, operands, and parenthetical containers (parentheses, braces, or brackets).
There are no strict typing requirements, and any white space is stripped by the command automatically. Tcl supports non-numeric and string comparisons as well as Tcl-specific operators.

Tcl expr operands

Tcl operands are treated as integers where feasible. They may be specified as decimal, binary (the first two characters must be 0b), hexadecimal (the first two characters must be 0x), or octal (the first two characters must be 0o). Care should be taken when passing integers with a leading 0, for example 08, as the interpreter would evaluate 08 as an illegal octal value. If no integer format applies, the command will evaluate the operand as a floating-point numeric value; for scientific notation, the character e (or E) is inserted as appropriate. If no numeric interpretation is feasible, the value will be evaluated as a string, in which case it must be enclosed within double quotes or braces. Please note that not all operands are accepted by all operators. To avoid inadvertent variable substitution, it is always best to enclose the operands within braces. For example, take a look at the following:

expr 1+1*3 will return a value of 4.
expr (1+1)*3 will return a value of 6.

Operands may be presented in any of the following forms:

Numeric: Integer and floating-point values may be passed directly to the command.
Boolean: All standard Boolean values (true, false, yes, no, 0, or 1) are supported.
Tcl variable: Any referenced variable (in Tcl, a variable is referenced using the $ notation; for example, myVariable is a named variable, whereas $myVariable is the referenced variable).
Strings in double quotes: Strings contained within double quotes may be passed with no need to include backslash, variable, or command substitution, as these are handled automatically.
Strings in braces: Strings contained within braces will be used with no substitution.
Tcl commands: Tcl commands must be enclosed within square brackets. The command will be executed and the mathematical function performed on the return value.
Named functions: Functions such as sine, cosine, and so on.

Tcl supports a subset of the C programming language's math operators and treats them with the same meaning and precedence. If a named function (such as sine) is encountered, expr automatically makes a call to the mathfunc namespace to minimize the syntax required to obtain the value. Tcl expr operators are listed below in descending order of precedence:

- + ~ !: Unary minus, unary plus, bitwise NOT, and logical NOT. These cannot be applied to string operands, and bitwise NOT may be applied only to integers.
**: Exponentiation. Numeric operands only.
* / %: Multiply, divide, and remainder. Numeric operands only.
+ -: Add and subtract. Numeric operands only.
<< >>: Left shift and right shift. Integer operands only. A right shift always propagates the sign bit.
< > <= >=: Boolean less, Boolean greater, Boolean less than or equal to, and Boolean greater than or equal to (a value of 1 is returned if the condition is true, otherwise 0). If applied to strings, string comparison is used.
== !=: Boolean equal and Boolean not equal (a value of 1 is returned if the condition is true, otherwise 0).
eq ne: Boolean string equal and Boolean string not equal (a value of 1 is returned if the condition is true, otherwise 0).
Any operand provided will be interpreted as a string.
in ni: List containment and negated list containment (a value of 1 is returned if the condition is true, otherwise 0). The first operand is treated as a string value, the second as a list.
&: Bitwise AND. Integers only.
^: Bitwise exclusive OR. Integers only.
|: Bitwise OR. Integers only.
&&: Logical AND (a value of 1 is returned if both operands are non-zero, otherwise 0). Boolean and numeric (integer or floating-point) operands only.
||: Logical OR (a value of 1 is returned if either operand is non-zero, otherwise 0). Boolean and numeric operands only.
x?y:z: If-then-else (if x evaluates to non-zero, then the return is the value of y, otherwise the value of z). The x operand must have a Boolean or a numeric value.
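As a quick illustrative sketch (not from the original recipe), the following interactive session exercises a few of the operators above; note the braces around each expression to prevent unintended substitution:

% set x 4
4
% expr {($x + 2) * 3}
18
% expr {$x > 3 ? "big" : "small"}
big
% expr {2**10}
1024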

article-image-aspnet-mvc-2-validating-mvc
Packt
21 Jan 2011
5 min read
Save for later

ASP.NET MVC 2: Validating MVC

ASP.NET MVC 2 Cookbook A fast-paced cookbook with recipes covering all that you wanted to know about developing with ASP.NET MVC Solutions to the most common problems encountered with ASP.NET MVC development Build and maintain large applications with ease using ASP.NET MVC Recipes to enhance the look, feel, and user experience of your web applications Expand your MVC toolbox with an introduction to lots of open source tools Part of Packt's Cookbook series: Each recipe is a carefully organized sequence of instructions to complete the task as efficiently as possible

Introduction

ASP.NET MVC provides a simple, but powerful, framework for validating forms. In this article, we'll start by creating a simple form, and then incrementally extend the functionality of our project to include client-side validation, custom validators, and remote validation.

Basic input validation

The moment you create an action to consume a form post, you're validating. Or at least the framework is. Whether it is a textbox validating to a DateTime, or a checkbox to a Boolean, we can start making assumptions about what should be received and making provisions for what shouldn't. Let's create a form.

How to do it...

Create an empty ASP.NET MVC 2 project and add a master page called Site.Master to Views/Shared. In the Models folder, create a new model called Person; this is a simple Person class extended with display-name annotations.

Models/Person.cs:

public class Person
{
    [DisplayName("First Name")]
    public string FirstName { get; set; }
    [DisplayName("Middle Name")]
    public string MiddleName { get; set; }
    [DisplayName("Last Name")]
    public string LastName { get; set; }
    [DisplayName("Birth Date")]
    public DateTime BirthDate { get; set; }
    public string Email { get; set; }
    public string Phone { get; set; }
    public string Postcode { get; set; }
    public string Notes { get; set; }
}

Create a controller called HomeController and amend the Index action to return a new instance of Person as the view model.

Controllers/HomeController.cs:

public ActionResult Index()
{
    return View(new Person());
}

Build, and then right-click on the action to create an Index view. Make it an empty view that is strongly typed to our Person class. Create a basic form in the Index view.

Views/Home/Index.aspx:

<% using (Html.BeginForm()) {%>
    <%: Html.EditorForModel() %>
    <input type="submit" name="submit" value="Submit" />
<% } %>

We'll go back to the home controller now to capture the form submission. Create a second action called Index, which accepts only POSTs.

Controllers/HomeController.cs:

[HttpPost]
public ActionResult Index(...

At this point, we have options; we can consume our form in a few different ways. Let's have a look at a couple of them now:

Controllers/HomeController.cs (Example):

// Individual Parameters
public ActionResult Index(string firstName, DateTime birthdate...

// Model
public ActionResult Index(Person person) {

Whatever technique you choose, the resolution of the parameters is roughly the same. The technique that I'm going to demonstrate relies on a method called UpdateModel. But first we need to differentiate our POST action from our first catch-all action. Remember, actions are just methods, and overrides need to take sufficiently different parameters to prevent ambiguity.
We will do this by taking a single parameter of type FormCollection, though we won't necessarily make use of it.

Controllers/HomeController.cs:

[HttpPost]
public ActionResult Index(FormCollection form)
{
    var person = new Person();
    UpdateModel(person);
    return View(person);
}

The UpdateModel technique is a touch more long-winded, but it comes with advantages. The first is that if you add a breakpoint on the UpdateModel line, you can see the exact point at which an empty model becomes populated with the form collection, which is great for demonstration purposes. The main reason I come back to UpdateModel time and again is its optional second parameter, includeProperties. This parameter allows you to selectively update the model, thereby bypassing validation on certain properties that you might want to handle independently.

Build, run, and submit your form. If your page validates, your info should be returned to you. However, enter your birth date in an unrecognized format and watch it bomb. UpdateModel is a temperamental beast. Switch your UpdateModel for TryUpdateModel and see what happens. TryUpdateModel returns a Boolean indicating the success or failure of the submission. However, the most interesting thing is happening in the browser.

How it works...

With ASP.NET MVC, it sometimes feels like you're stripping the development process back to basics. I think this is a good thing; more control to render the page you want is good. But there is a lot of clever stuff going on in the background, starting with model binders. When you send a request (GET, POST, and so on) to an ASP.NET MVC application, the query string, route values, and form collection are passed through model binding classes, which produce usable structures (for example, your action's input parameters). These model binders can be overridden and extended to deal with more complex scenarios, though since ASP.NET MVC 2, I've rarely needed to do so. A good starting point for further investigation would be DefaultModelBinder and IModelBinder.

What about that validation message in the last screenshot: where did it come from? Alongside LabelFor and EditorFor, we also have ValidationMessageFor. If the model binder fails at any point to build our input parameters, it adds an error message to the model state. The model state is picked up and displayed by the ValidationMessageFor method, but more on that later.
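To make the UpdateModel versus TryUpdateModel distinction concrete, here is a minimal sketch (not a listing from the book) of the POST action rewritten with TryUpdateModel, which reports failure through its return value and ModelState instead of throwing:

[HttpPost]
public ActionResult Index(FormCollection form)
{
    var person = new Person();
    // TryUpdateModel returns false when binding fails (for example,
    // an unparseable birth date) and records the errors in ModelState.
    if (!TryUpdateModel(person))
    {
        // ValidationMessageFor helpers in the view will render the
        // ModelState errors next to the offending fields.
        return View(person);
    }
    return View(person);
}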

article-image-manage-sql-azure-databases-web-interface-houston
Packt
21 Jan 2011
2 min read
Save for later

Manage SQL Azure Databases with the Web Interface 'Houston'

Microsoft SQL Azure Enterprise Application Development Build enterprise-ready applications and projects with SQL Azure Develop large scale enterprise applications using Microsoft SQL Azure Understand how to use the various third party programs such as DB Artisan, RedGate, ToadSoft etc developed for SQL Azure Master the exhaustive Data migration and Data Synchronization aspects of SQL Azure. Includes SQL Azure projects in incubation and more recent developments including all 2010 updates

In order to use this program and follow the article, you should have an account on the Windows Azure Platform, preferably with an SQL Azure server already provisioned. This also implies that you have a Windows Live ID to access the portal. As mentioned, in this article we look at some of the features of this web-based tool and carry out a few tasks.

Click the Launch Houston button on the Project Houston CTP1 page of the SQLAzureLabs portal, shown here. This brings up a world map displaying the currently available Windows Azure data centers, and you have to choose the data center on which you have an account. For the present article we will use the Southeast Asia data center and sometimes the North Central US data center. Click on the Southeast Asia location. The Silverlight application gets launched from the URL https://manage-sgp.cloudapp.net/, displaying the license information that you need to agree to before going forward. When you click OK, the login page is displayed as shown. You need to enter the server information for the Southeast Asia data center as shown. Click Connect. The connection gets established to the above SQL Azure server, as shown in the next image. This is much better looking than the somewhat 'drab' (albeit fully mature) SSMS interface shown here for comparison.

Changing the database

If you need to work with a different database, click on Connect DB at the top left of the 'Houston' user interface, as shown in the next image. The connection interface comes up again, where you indicate the name of the database; here the database has been changed to master. Clicking Connect now connects you to the master database as shown.
article-image-python-multimedia-enhancing-images
Packt
20 Jan 2011
5 min read
Save for later

Python Multimedia: Enhancing Images

Adjusting brightness and contrast

One often needs to tweak the brightness and contrast level of an image. For example, you may have a photograph that was taken with a basic camera when there was insufficient light. How would you correct that digitally? The brightness adjustment makes the image brighter or darker, whereas the contrast adjustment emphasizes differences between the color and brightness levels within the image data. The image can be made lighter or darker using the ImageEnhance module in PIL. The same module provides a class that can auto-contrast an image.

Time for action – adjusting brightness and contrast

Let's learn how to modify the image brightness and contrast. First, we will write code to adjust brightness. The ImageEnhance module makes our job easier by providing the Brightness class. Download the image 0165_3_12_Before_BRIGHTENING.png and rename it to Before_BRIGHTENING.png. Use the following code:

1 import Image
2 import ImageEnhance
3
4 brightness = 3.0
5 peak = Image.open("C:/images/Before_BRIGHTENING.png")
6 enhancer = ImageEnhance.Brightness(peak)
7 bright = enhancer.enhance(brightness)
8 bright.save("C:/images/BRIGHTENED.png")
9 bright.show()

On line 6 of the code snippet, we created an instance of the Brightness class, which takes an Image instance as an argument. Line 7 creates a new image, bright, using the specified brightness value. A value between 0.0 and 1.0 gives a darker image, whereas a value greater than 1.0 makes it brighter; a value of 1.0 keeps the brightness unchanged. The original and resultant images are shown in the next illustration (comparison of images before and after brightening).

Let's move on and adjust the contrast of the brightened image. We will append the following lines to the code snippet that brightened the image:

10 contrast = 1.3
11 enhancer = ImageEnhance.Contrast(bright)
12 con = enhancer.enhance(contrast)
13 con.save("C:/images/CONTRAST.png")
14 con.show()

Thus, similar to what we did to brighten the image, the image contrast was tweaked using the ImageEnhance.Contrast class. A contrast value of 0.0 creates a black image, while a value of 1.0 keeps the current contrast. The resultant image is compared with the original in the following illustration (the original image alongside the image with increased contrast).

In the preceding code snippet, we were required to specify a contrast value. If you would rather let PIL decide an appropriate contrast level, there is a way to do this: the ImageOps.autocontrast function sets an appropriate contrast level by normalizing the image contrast. Let's use this functionality now. Use the following code:

import ImageOps
bright = Image.open("C:/images/BRIGHTENED.png")
con = ImageOps.autocontrast(bright, cutoff = 0)
con.show()

The highlighted line in the code is where the contrast is automatically set. The autocontrast function computes a histogram of the input image. The cutoff argument represents the percentage of lightest and darkest pixels to be trimmed from this histogram; the image is then remapped.

What just happened? Using the classes and functionality in the ImageEnhance module, we learned how to increase or decrease the brightness and contrast of an image. We also wrote code to auto-contrast an image using the functionality provided in the ImageOps module.

Tweaking colors

Another useful operation performed on images is adjusting the colors within an image. An image may contain one or more bands of image data.
The image mode contains information about the depth and type of the image pixel data. The most common modes we will use are RGB (true color, 3x8-bit pixel data), RGBA (true color with a transparency mask, 4x8-bit), and L (black and white, 8-bit). In PIL, you can easily get information about the band data within an image. To get the names and number of bands, the getbands() method of the Image class can be used. Here, img is an instance of the Image class:

>>> img.getbands()
('R', 'G', 'B', 'A')

Time for action – swap colors within an image!

To understand some basic concepts, let's write code that simply swaps the image band data. Download the image 0165_3_15_COLOR_TWEAK.png and rename it to COLOR_TWEAK.png. Type the following code:

1 import Image
2
3 img = Image.open("C:/images/COLOR_TWEAK.png")
4 img = img.convert('RGBA')
5 r, g, b, alpha = img.split()
6 img = Image.merge("RGBA", (g, r, b, alpha))
7 img.show()

Let's analyze this code now. On line 3, the Image instance is created as usual. Then, on line 4, we change the mode of the image to RGBA. Here we should check whether the image already has that mode, or whether this conversion is possible; you can add that check as an exercise! Next, the call to Image.split() creates separate instances of the Image class, each containing a single band of data. Thus, we have four Image instances—r, g, b, and alpha—corresponding to the red, green, and blue bands and the alpha channel respectively. The code on line 6 does the main image processing: Image.merge takes the mode as its first argument and a tuple of Image instances containing the band data as its second argument, and all of the bands must be the same size. As you can see, we have swapped the order of the band data for the Image instances r and g while specifying the second argument.

The original and resultant images thus obtained are compared in the next illustration: the color of the flower now has a shade of green, and the grass behind the flower is rendered with a shade of red. Please download and refer to the supplementary PDF file Chapter 3 Supplementary Material.pdf, where color images are provided that will help you see the difference (original on the left, color-swapped image on the right).

What just happened? We created an image with its band data swapped, learning how to use PIL's Image.split() and Image.merge() to achieve this. However, this operation was performed on the whole image. In the next section, we will learn how to apply color changes to a specific color region.
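As a minimal sketch of the mode check left as an exercise above, using the same classic PIL imports as the article's snippets and the same hypothetical file path:

import Image

img = Image.open("C:/images/COLOR_TWEAK.png")
# Convert only when the image is not already in RGBA mode.
if img.mode != 'RGBA':
    img = img.convert('RGBA')
r, g, b, alpha = img.split()
img = Image.merge("RGBA", (g, r, b, alpha))
img.show()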

article-image-working-master-pages-aspnet-mvc-2
Packt
17 Jan 2011
6 min read
Save for later

Working with Master Pages in ASP.NET MVC 2

ASP.NET MVC 2 Cookbook A fast-paced cookbook with recipes covering all that you wanted to know about developing with ASP.NET MVC Solutions to the most common problems encountered with ASP.NET MVC development Build and maintain large applications with ease using ASP.NET MVC Recipes to enhance the look, feel, and user experience of your web applications Expand your MVC toolbox with an introduction to lots of open source tools Part of Packt's Cookbook series: Each recipe is a carefully organized sequence of instructions to complete the task as efficiently as possible

How to create a master page

In this recipe, we will take a look at how to create a master page and associate it with our view. Part of creating a master page is defining placeholders for use in the view. We will then see how to utilize the content placeholders that we defined in the master page.

How to do it...

Start by creating a new ASP.NET MVC application. Then add a new master page to your solution called Custom.Master. Place it in the Views/Shared directory. Notice that there is a placeholder already placed in the middle of our page. Let's wrap that placeholder with a table, putting a column to the left and the right of it.

Views/Shared/Custom.Master:

<table>
  <tr>
    <td>
    </td>
    <td>
      <asp:ContentPlaceHolder ID="ContentPlaceHolder1" runat="server"></asp:ContentPlaceHolder>
    </td>
    <td>
    </td>
  </tr>
</table>

Next, we will rename the middle placeholder to MainContent and add a placeholder to each of the first and third columns.

Views/Shared/Custom.Master:

<table>
  <tr>
    <td>
      <asp:ContentPlaceHolder ID="ContentPlaceHolder1" runat="server"></asp:ContentPlaceHolder>
    </td>
    <td>
      <asp:ContentPlaceHolder ID="MainContent" runat="server"></asp:ContentPlaceHolder>
    </td>
    <td>
      <asp:ContentPlaceHolder ID="ContentPlaceHolder2" runat="server"></asp:ContentPlaceHolder>
    </td>
  </tr>
</table>

Next, we need to add a new action to the HomeController.cs file, from which we will create a new view. Do this by opening the HomeController.cs file and adding a new action named CustomMasterDemo.

Controllers/HomeController.cs:

public ActionResult CustomMasterDemo()
{
    return View();
}

Then right-click on CustomMasterDemo, choose Add View, and select the new Custom.Master page that we created. Next, set the ContentPlaceHolderID box to the center placeholder name, MainContent. Then hit Add, and you should see a new view. Populate its four placeholders as follows.

Views/Home/CustomMasterDemo.aspx:

<asp:Content ID="Content1" ContentPlaceHolderID="MainContent" runat="server">
  <h2>Custom Master Demo</h2>
</asp:Content>
<asp:Content ID="Content2" ContentPlaceHolderID="head" runat="server">
  <meta name="description" content="Here are some keywords for our page description.">
</asp:Content>
<asp:Content ID="Content3" ContentPlaceHolderID="ContentPlaceHolder1" runat="server">
  <div style="width:200px;height:200px;border:1px solid #ff0000;">
    <ul>
      <li>Home</li>
      <li>Contact Us</li>
      <li>About Us</li>
    </ul>
  </div>
</asp:Content>
<asp:Content ID="Content4" ContentPlaceHolderID="ContentPlaceHolder2" runat="server">
  <div style="width:200px;height:200px;border:1px solid #000000;">
    <b>News</b><br/>
    Here is a blurb of text on the right!
  </div>
</asp:Content>

You should now see a page similar to this:

How it works...

This particular feature is a server-side carry-over from Web Forms, and it works just as it always has.
Before being sent down to the client, the view is merged into the master file and processed according to the matching placeholder IDs.

Determining the master page in the ActionResult

In the previous recipe, we saw how to build a master page. In this recipe, we are going to look at how to control programmatically which master page is used. There are all sorts of reasons for using different master pages: for example, you might want different master pages based on the time of day, on whether a user is logged in, or for different areas of your site (blog, shopping, forum, and so on).

How to do it...

We will get started by first creating a new MVC web application. Next, we need to create a second master page. We can do this quickly by making a copy of the default master page that is provided; name it Site2.Master. We then need to make sure we can tell these two master pages apart. The easiest way to do this is to change the contents of the H1 tag to say Master 1 and Master 2 in the respective master pages.

Now we can take a look at the HomeController. We will check whether we are in an even or an odd second and, based on that, return one master page or the other. We do this by specifying the master page name when we return the view.

Controllers/HomeController.cs:

public ActionResult Index()
{
    ViewData["Message"] = "Welcome to ASP.NET MVC!";
    string masterName = "";
    if (DateTime.Now.Second % 2 == 0)
        masterName = "Site2";
    else
        masterName = "Site";
    return View("Index", masterName);
}

Now you can run the application. Refreshing the home page should alternate between the two master pages now and then. (Remember that the choice is based on the current second, so this is not a strictly alternating scheme.)

How it works...

This method of controlling which master page is used by the view is built into the MVC framework and is the easiest way of performing this type of control. However, having to dictate this type of logic in every single action would create quite a bit of fluff code in our controller. This option might be appropriate for certain needs, though! A sketch of one way to centralize the logic follows.
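As a hedged sketch (not from the book) of how the per-action fluff could be centralized, an action filter override on a base controller can assign the master name whenever an action has not chosen one explicitly; MasterAwareController is a hypothetical name:

public abstract class MasterAwareController : Controller
{
    protected override void OnActionExecuted(ActionExecutedContext filterContext)
    {
        // Only touch ViewResults that have not set a master explicitly.
        var view = filterContext.Result as ViewResult;
        if (view != null && string.IsNullOrEmpty(view.MasterName))
        {
            view.MasterName = DateTime.Now.Second % 2 == 0 ? "Site2" : "Site";
        }
        base.OnActionExecuted(filterContext);
    }
}

Controllers deriving from MasterAwareController can then return plain View(...) calls and still get the alternating master pages.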

article-image-introduction-cloud-computing-microsoft-azure
Packt
13 Jan 2011
6 min read
Save for later

Introduction to cloud computing with Microsoft Azure

What is an enterprise application? Before we hop into the cloud, let's talk about who this book is for. Who are "enterprise developers"? In the United States, over half of the economy comes from small businesses, usually privately owned, with a couple dozen employees and revenues up to the millions of dollars. The applications that run these businesses have lower requirements because of smaller data volumes and a low number of application users; a single server may host several applications. Many of the business needs of these companies can be met with off-the-shelf software requiring little to no modification.

The minority of the United States economy is made up of huge publicly owned corporations—think Microsoft, Apple, McDonald's, Coca-Cola, Best Buy, and so on. These companies have thousands of employees and revenues in the billions of dollars, and because they are publicly owned, they are subject to tight regulatory scrutiny. The applications utilized by these companies must faithfully keep track of an immense amount of data to be utilized by hundreds or thousands of users, and must comply with all manner of regulations. The infrastructure for a single application may involve dozens of servers. A team of consultants is often retained to install and maintain the critical systems of a business, and there is often an ecosystem of internal applications built around the enterprise systems that is just as critical. These are the applications we consider to be "enterprise applications", and the people who develop and extend them are "enterprise developers". The high availability of cloud platforms makes them attractive for hosting these critical applications, and there are many options available to the enterprise developer.

What is cloud computing? At its most basic, cloud computing is moving applications accessible from our internal network onto an internet (cloud)-accessible space. We're essentially renting virtual machines in someone else's data center, with the capabilities for immediate scale-out, failover, and data synchronization. In the past, having an Internet-accessible application meant we were building a website with a hosted database. Cloud computing changes that paradigm—our application could be a website, or it could be a client installed on a local PC accessing a common data store from anywhere in the world. The data store could be internal to our network or itself hosted in the cloud.

The following diagram outlines three ways in which cloud computing can be utilized for an application. In option 1, both the data and the application are hosted in the cloud; in option 2, the application is hosted in the cloud and the data locally; and in option 3, the data is hosted in the cloud and the application locally.

The expense (or cost) model is also very different. On our local network, we have to buy the hardware and software licenses, install and configure the servers, and finally maintain them—all in addition to building and maintaining the application! In cloud computing, the host usually handles the installation, configuration, and maintenance of the servers, allowing us to focus mostly on the application. The direct costs of running our application in the cloud are only for each machine-hour of use and for storage utilization.

The individual pieces of cloud computing have all been around for some time. Shared mainframes and supercomputers have long billed end users based on their resource consumption.
Space for websites can be rented on a monthly basis, providers offer specialized application hosting, and, relatively recently, leased virtual machines have also become available. If there is anything revolutionary about cloud computing, it is its ability to combine the best features of these different components into a single affordable service offering.

Some benefits of cloud computing

Cloud computing sounds great so far, right? So, what are some of the tangible benefits of cloud computing? Does it merit all the attention? Let's have a look at some of the advantages:

Low up-front cost: At the top of the benefits list is probably the low up-front cost. With cloud computing, someone else is buying and installing the servers, switches, and firewalls, among other things. In addition to the hardware, software licenses and assurance plans are expensive at the enterprise level, even with a purchasing agreement. In most cloud services, including Microsoft's Azure platform, we do not need to purchase separate licenses for operating systems or databases; in Azure, the costs include the licenses for the Windows Azure OS and SQL Azure. As a corollary, someone else is responsible for the maintenance and upkeep of the servers—no more tape backups that must be rotated and sent to off-site storage, no extensive strategies and lost weekends bringing servers up to the current release level, and no more counting the minutes until the early-morning delivery of a hot-swap fan to replace the one that burned out the previous afternoon.

Easier disaster recovery and storage management: With synchronized storage across multiple data centers, located in different regions of the same country or even in different countries, disaster recovery planning becomes significantly easier. If capacity needs to be increased, it can be done quite easily by logging into a control panel and turning on an additional VM. It would be a rare instance indeed when our provider doesn't sell us additional capacity. When the need for capacity passes, we can simply turn off the VMs we no longer need and pay only for the uptime and storage utilization.

Simplified migration: Migration from a test to a production environment is greatly simplified. In Windows Azure, we can test an updated version of our application in a local sandbox environment. When we're ready to go live, we deploy our application to a staging environment in the cloud and, with a few mouse clicks in the control panel, we turn off the live virtual machine and activate the staging environment as the live machine—we barely miss a beat! The migration can be performed well in advance of the cut-over, so daytime migrations and midnight cut-overs can become routine. Should something go wrong, the environments can be easily reversed and the issues analyzed the following day.

Familiar environment: Finally, the environment we're working in is very familiar. In Azure's case, the environment can include the capabilities of IIS and .NET (or Java, or PHP and Apache), with Windows and SQL Server or MySQL. One of the great features of Windows is that it can be configured in so many ways, and, to an extent, Azure can also be configured in many ways, supporting a rich and familiar application environment.
article-image-microsoft-azure-blob-storage
Packt
07 Jan 2011
5 min read
Save for later

Microsoft Azure Blob Storage

Microsoft Azure: Enterprise Application Development Straight talking advice on how to design and build enterprise applications for the cloud Build scalable enterprise applications using Microsoft Azure The perfect fast-paced case study for developers and architects wanting to enhance core business processes Packed with examples to illustrate concepts Written in the context of building an online portal for the case-study application

Blobs in the Azure ecosystem

Blobs are one of the three simple storage options for Windows Azure, and are designed to store large files in binary format. There are two types of blobs—block blobs and page blobs. Block blobs are designed for streaming, and each can have a size of up to 200 GB. Page blobs are designed for read/write access, and each can store up to 1 TB.

If we're going to store images or video for use in our application, we'd store them in blobs. On our local systems, we would probably store these files in different folders. In our Azure account, we place blobs into containers, and just as a local hard drive can contain any number of folders, each Azure account can have any number of containers. Similar to folders on a hard drive, access to blobs is set at the container level, where permissions can be either "public read" or "private". In addition to permission settings, each container can have 8 KB of metadata used to describe or categorize it (metadata are stored as name/value pairs). Each blob can be up to 1 TB depending on its type, and can also have up to 8 KB of metadata. For data protection and scalability, each blob is replicated at least three times, and "hot blobs" are served from multiple servers. Even though the cloud can accept blobs of up to 1 TB in size, Development Storage can accept blobs only up to 2 GB. This typically is not an issue for development, but it is still something to remember when developing locally.

Page blobs form the basis for Windows Azure Drive—a service that allows Azure storage to be mounted as a local NTFS drive on the Azure instance, allowing existing applications to run in the cloud and take advantage of Azure-based storage while requiring fewer changes to adapt to the Azure environment. Azure drives are individual virtual hard drives (VHDs) that can range in size from 16 MB to 1 TB. Each Windows Azure instance can mount up to 16 Azure drives, and these drives can be mounted or dismounted dynamically. Also, a Windows Azure Drive can be mounted as readable/writable from a single instance of an Azure service, or as a read-only drive for multiple instances. At the time of writing, there was no driver that allowed direct access to the page blobs forming Azure drives, but the page blobs can be downloaded, used locally, and uploaded again using the standard blob API.

Creating Blob Storage

Blob Storage can be used independently of other Azure services, and even if we've set up a Windows Azure or SQL Azure account, Blob Storage is not automatically created for us. To create a Blob Storage service, we need to follow these steps: Log in to the Windows Azure Developer portal and select our project. After we select our project, we should see the project page, as shown in the next screenshot. Clicking the New Service link on the project page takes us to the service creation page, as shown next. Selecting Storage Account allows us to choose a name and description for our storage service; this information is used to identify our services in menus and listings.
4. Next, we choose a unique name for our storage account. This name must be unique across all of Azure; it can include only lowercase letters and numbers, and must be at least three characters long.
5. If our account name is available, we then choose how to localize our data. Localization is handled by "affinity groups", which tie our storage service to the data centers in different geographic regions. For some applications, it may not matter where we locate our data. For other applications, we may want multiple affinity groups to provide timely content delivery. And for a few applications, regulatory requirements may mean we have to bind our data to a particular region.
6. Clicking the Create button creates our storage service; when the process is complete, a summary page is shown.
The top half of the summary page reiterates the description of our service and provides the endpoints and 256-bit access keys. These access keys are very important: they are the authentication keys we need to pass in our request if we want to access private storage or add/update a blob. The bottom portion of the confirmation page reiterates the affinity group the storage service belongs to. We can also enable a content delivery network and a custom domain for our Blob Storage account.
Once we create a service, it's shown on the portal menu and in the project summary once we select a project. That's it! We now have our storage service created, and we're ready to look at blobs in a little more depth.
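To make the blob addressing and permission model concrete, here is a minimal sketch of downloading a blob anonymously. It is written in Groovy (the language used elsewhere in this collection) rather than the .NET SDK, and the account, container, and blob names are hypothetical; the only assumption is that the container was created with "public read" access, in which case no access key is required.

// Minimal sketch: anonymous download from a public-read container.
// The account, container, and blob names here are hypothetical.
def account   = 'dmcmedia'
def container = 'images'
def blobName  = 'logo.png'

// Blobs are addressed as http://<account>.blob.core.windows.net/<container>/<blob>.
// Because the container permission is "public read", no signed request is needed.
def blobUrl = new URL("http://${account}.blob.core.windows.net/${container}/${blobName}")

// Groovy adds a bytes property to both URL and File, so the download is one line each.
def bytes = blobUrl.bytes
new File(blobName).bytes = bytes
println "Downloaded ${bytes.length} bytes from ${blobUrl}"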
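A container created with container-level public read access can also be listed anonymously through the blob REST API. The sketch below again uses Groovy and hypothetical names; the XML element names follow the 2009-09-19 service version and may differ in other versions. Private containers, and any write operation, additionally require a request signed with one of the access keys shown on the summary page, which is beyond the scope of this sketch.

// Sketch: anonymous listing of a public container via the blob REST API.
// The restype/comp query parameters select the "List Blobs" operation.
def account   = 'dmcmedia'
def container = 'images'
def listUrl = "http://${account}.blob.core.windows.net/${container}?restype=container&comp=list"

// Parse the XML response and walk the returned blob entries.
def result = new XmlSlurper().parse(listUrl.toString())
result.Blobs.Blob.each { blob ->
    println "${blob.Name}: ${blob.Properties.'Content-Length'} bytes"
}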
article-image-using-groovy-closures-instead-template-method
Packt
23 Dec 2010
3 min read
Save for later

Using Groovy Closures Instead of Template Method

Groovy for Domain-Specific Languages
Extend and enhance your Java applications with Domain Specific Languages in Groovy
Build your own Domain Specific Languages on top of Groovy
Integrate your existing Java applications using Groovy-based Domain Specific Languages (DSLs)
Develop a Groovy scripting interface to Twitter
A step-by-step guide to building Groovy-based Domain Specific Languages that run seamlessly in the Java environment
(For more resources on Groovy, see here.)

Template Method Pattern Overview
The template method pattern often grows out of the thought: "I have a piece of code that I want to use again, but I can't use it 100%. I want to change a few lines to make it useful." In general, using this pattern involves creating an abstract class and varying its implementation through abstract hook methods. Subclasses implement these abstract hook methods to solve their specific problem. This approach is very effective and is used extensively in frameworks. However, closures provide a more elegant solution.

Sample HttpBuilder Request
It is best to illustrate the closure approach with an example. Recently I was developing a consumer of REST web services with HttpBuilder. With HttpBuilder, the client simply creates the class and issues an HTTP call. The framework waits for a response and provides hooks for processing. Many of the requests being made were very similar to one another; only the URI was different. In addition, each request needed to process the returned XML differently, as the XML received would vary. I wanted to use the same request code, but vary the XML processing. To summarize the problem:
HttpBuilder code should be reused
Different URIs should be sent out with the same HttpBuilder code
Different XML should be processed with the same HttpBuilder code
Here is my first draft of HttpBuilder code. Note the call to convertXmlToCompanyDomainObject(xml).

import groovyx.net.http.HTTPBuilder
import groovyx.net.http.Method
import groovyx.net.http.ContentType
import groovy.util.slurpersupport.GPathResult

static String URI_PREFIX = '/someApp/restApi/'

private List issueHttpBuilderRequest(RequestObject requestObj, String uriPath) {
    def http = new HTTPBuilder("http://localhost:8080/")
    def parsedObjectsFromXml = []
    http.request(Method.POST, ContentType.XML) { req ->
        // set the URI path on the delegate
        uri.path = URI_PREFIX + uriPath
        uri.query = [
            company: requestObj.company,
            date: requestObj.date,
            type: requestObj.type
        ]
        headers.'User-Agent' = 'Mozilla/5.0'

        // when the response is a success, parse the GPath XML
        response.success = { resp, xml ->
            assert resp.statusLine.statusCode == 200
            // store the list
            parsedObjectsFromXml = convertXmlToCompanyDomainObject(xml)
        }

        // called only for a 404 (not found) status code:
        response.'404' = { resp ->
            log.info 'HTTP status code: 404 Not found'
        }
    }
    parsedObjectsFromXml
}

private List convertXmlToCompanyDomainObject(GPathResult xml) {
    def list = []
    // .. implementation to parse the XML and turn it into domain objects
}

As you can see, the URI is passed as a parameter to issueHttpBuilderRequest. This solves the problem of sending different URIs, but what about parsing the different XML formats that are returned?

Using Template Method Pattern
The following diagram illustrates applying the template method pattern to this problem. In summary, we need to move the issueHttpBuilderRequest code to an abstract class and provide an abstract method convertXmlToDomainObjects(). Subclasses would provide the appropriate XML conversion implementation.
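A minimal sketch may help make that refactoring concrete. The class and method names below (AbstractRestClient, CompanyRestClient, fetch) are hypothetical, and the HTTP plumbing is reduced to a stand-in; only convertXmlToDomainObjects() comes from the text above.

import groovy.util.slurpersupport.GPathResult

// Sketch of the template method refactoring; names are hypothetical.
abstract class AbstractRestClient {

    // Template method: the invariant request logic lives here. The real version
    // would contain the HttpBuilder code from the first draft, with the success
    // hook calling the abstract method:
    //   response.success = { resp, xml ->
    //       parsedObjectsFromXml = convertXmlToDomainObjects(xml)
    //   }
    List fetch(String uriPath) {
        def xml = new XmlSlurper().parseText('<companies/>') // stand-in for the HTTP call
        convertXmlToDomainObjects(xml)
    }

    // Abstract hook: each subclass supplies its own XML conversion.
    protected abstract List convertXmlToDomainObjects(GPathResult xml)
}

// One subclass per XML format that needs handling.
class CompanyRestClient extends AbstractRestClient {
    protected List convertXmlToDomainObjects(GPathResult xml) {
        // company-specific parsing: one map per <company> element
        xml.company.collect { node -> [name: node.@name.toString()] }
    }
}

def companies = new CompanyRestClient().fetch('companies')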
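For contrast, here is a sketch of the closure-based direction the article's title points toward: rather than an abstract hook method implemented by subclasses, the varying XML conversion is passed in as a closure argument. Again the names are hypothetical and the HTTP plumbing is elided.

// Sketch of the closure-based alternative; names are hypothetical.
List issueHttpBuilderRequest(String uriPath, Closure convertXmlToDomainObjects) {
    def parsedObjectsFromXml = []
    // ... same HttpBuilder request code as the first draft; the success hook
    // simply delegates to the closure:
    //   response.success = { resp, xml ->
    //       parsedObjectsFromXml = convertXmlToDomainObjects(xml)
    //   }
    parsedObjectsFromXml
}

// Call site: each request supplies its own conversion logic inline,
// with no abstract class or subclass hierarchy required.
def companies = issueHttpBuilderRequest('companies') { xml ->
    xml.company.collect { node -> [name: node.@name.toString()] }
}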