
How-To Tutorials - Application Development

357 Articles

Delving Deep into Application Design

Packt
11 Jan 2013
14 min read
Before we get started, please note the folder structure that we'll be using. This will help you quickly find the files referred to in each recipe. Executables for every sample project will be output in the bin/debug or bin/release folders, depending on the project's build configuration. These folders also contain the following required DLLs and configuration files:

- OgreMain.dll: Main Ogre DLL.
- RenderSystem_Direct3D9.dll: DirectX 9 Ogre render system DLL. This is necessary only if you want Ogre to use the DirectX 9 graphics library.
- RenderSystem_GL.dll: OpenGL Ogre render system DLL. This is necessary only if you want Ogre to use the OpenGL graphics library.
- Plugin_OctreeSceneManager.dll: Octree scene manager Ogre plugin DLL.
- Plugin_ParticleFX.dll: Particle effects Ogre plugin DLL.
- ogre.cfg: Ogre main configuration file that includes render system settings.
- resources.cfg: Ogre resource configuration file that contains paths to all resource locations. Resources include graphics files, shaders, material files, mesh files, and so on.
- plugins.cfg: Ogre plugin configuration file that contains a list of all the plugins we want Ogre to use. Typical plugins include Plugin_OctreeSceneManager, RenderSystem_Direct3D9, RenderSystem_GL, and so on.

In the bin/debug folder, you'll notice that the debug versions of the Ogre plugin DLLs all have a _d appended to the filename. For example, the debug version of OgreMain.dll is OgreMain_d.dll. This is the standard method for naming debug versions of Ogre DLLs. The media folder contains all the Ogre resource files, and the OgreSDK_vc10_v1-7-1 folder contains the Ogre header and library files.

Creating a Win32 Ogre application

The Win32 application is the leanest and meanest of windowed applications, which makes it a good candidate for graphics. In this recipe, we will create a simple Win32 application that displays, in a window, the 3D robot model that comes with Ogre. Because these steps are identical for all Win32 Ogre applications, you can use the completed project as a starting point for new Win32 applications.

Getting ready

To follow along with this recipe, open the solution located in the Recipes/Chapter01/OgreInWin32 folder in the code bundle available on the Packt website.

How to do it...

We'll start off by creating a new Win32 application using the Visual C++ Win32 application wizard.

1. Create a new project by clicking on File | New | Project. In the New Project dialog box, expand Visual C++, and click on Win32 Project. Name the project OgreInWin32. For Location, browse to the Recipes folder and append Chapter_01_Examples, then click on OK.
2. In the Win32 Application Wizard that appears, click on Next. For Application type, select Windows application, and then click on Finish to create the project.

At this point, we have everything we need for a bare-bones Win32 application without Ogre. Next, we need to adjust our project properties, so that the compiler and linker know where to put our executable and where to find the Ogre header and library files.

3. Open the Property Pages dialog box by selecting the Project menu and clicking on Properties.
4. Expand Configuration Properties and click on General. Set Character Set to Not Set.
5. Next, click on Debugging. Select the Local Windows Debugger as the Debugger to launch, then specify the Command for starting the application as ..\..\bin\debug\$(TargetName)$(TargetExt).
Each project property setting is automatically written to a per-user file with the extension .vcxproj.user whenever you save the solution.

Next we'll specify our VC++ Directories, so that they match our Cookbook folder structure.

6. Select VC++ Directories to bring up the property page where we'll specify the general Include Directories and Library Directories.
7. Click on Include Directories, then click on the down arrow button that appears on the right of the property value, and click on <edit>. In the Include Directories dialog box that appears, click on the first line of the text area, and enter the relative path to the Boost header files: ..\..\OgreSDK_vc10_v1-7-1\boost_1_42. Click on the second line, enter the relative path to the Ogre header files, ..\..\OgreSDK_vc10_v1-7-1\include\OGRE, and click OK.
8. Edit the Library Directories property in the same way. Add the library directory ..\..\OgreSDK_vc10_v1-7-1\boost_1_42\lib for Boost, and ..\..\OgreSDK_vc10_v1-7-1\lib\debug for Ogre, then click OK.
9. Next, expand the Linker section, and select General. Change the Output File property to ..\..\bin\debug\$(TargetName)$(TargetExt). Then, change the Additional Library Directories property to ..\..\OgreSDK_vc10_v1-7-1\lib\debug.
10. Finally, provide the linker with the location of the main Ogre code library. Select the Input properties section, and prepend OgreMain_d.lib; at the beginning of the line. Note that if we were setting properties for the release configuration, we would use OgreMain.lib instead of OgreMain_d.lib.

Now that the project properties are set, let's add the code necessary to integrate Ogre into our Win32 application.

11. Copy the Engine.cpp and Engine.h files from the Cookbook sample files to your new project folder, and add them to the project. These files contain the CEngine wrapper class that we'll be using to interface with Ogre.
12. Open the OgreInWin32.cpp file, include Engine.h, then declare a global instance of the CEngine class and a forward declaration of our InitEngine() function with the other globals at the top of the file:

    CEngine *m_Engine = NULL;
    void InitEngine(HWND hWnd);

13. Next, create a utility function called InitEngine() to instantiate our CEngine class:

    void InitEngine(HWND hWnd){
        m_Engine = new CEngine(hWnd);
    }

14. Then, call InitEngine() from inside the InitInstance() function, just after the window handle has been created successfully, as follows:

    hWnd = CreateWindow(szWindowClass, szTitle, WS_OVERLAPPEDWINDOW,
        CW_USEDEFAULT, 0, CW_USEDEFAULT, 0, NULL, NULL, hInstance, NULL);
    if (!hWnd){
        return FALSE;
    }
    InitEngine(hWnd);

15. Our last task is to render the 3D scene and display it in the window when we receive a WM_PAINT message. Add a call to renderOneFrame() to the WndProc() function, as follows:

    case WM_PAINT:
        hdc = BeginPaint(hWnd, &ps);
        m_Engine->m_Root->renderOneFrame();
        EndPaint(hWnd, &ps);
        break;

And that's it!

How it works...

Let's look at the CEngine class to see how we create and initialize an instance of the Ogre engine, and add a camera and robot model to the scene. Open Engine.cpp, and look at the constructor for CEngine. In the constructor, we create an instance of the Ogre engine, and store it in the m_Root class member variable.

    m_Root = new Ogre::Root("", "",
        Ogre::String(ApplicationPath + Ogre::String("OgreInWin32.log")));

An instance of Ogre::Root must exist before any other Ogre functions are called.
The first parameter to the constructor is the plugins configuration filename, which defaults to plugins.cfg; we pass it an empty string because we are going to load that file manually later. The second parameter is the main configuration filename, which defaults to ogre.cfg; we also pass it an empty string, because we'll be loading that file manually as well. The third parameter is the name of the log file where Ogre will write the debugging and hardware information. Once the Ogre::Root instance has been created, it can be globally accessed through Root::getSingleton(), which returns a reference, or Root::getSingletonPtr(), which returns a pointer.

Next, we manually load the configuration file ogre.cfg, which resides in the same directory as our application executable.

    OgreConfigFile.load(Ogre::String(ApplicationPath + Ogre::String("ogre.cfg")), "\t:=", false);

The ogre.cfg configuration file contains the Ogre 3D engine graphics settings and typically looks as follows:

    # Render System indicates which of the render systems
    # in this configuration file we'll be using.
    Render System=Direct3D9 Rendering Subsystem

    [Direct3D9 Rendering Subsystem]
    Allow NVPerfHUD=No
    Anti aliasing=None
    Floating-point mode=Fastest
    Full Screen=Yes
    Rendering Device=NVIDIA GeForce 7600 GS (Microsoft Corporation - WDDM)
    VSync=No
    Video Mode=800 x 600 @ 32-bit colour

    [OpenGL Rendering Subsystem]
    Colour Depth=32
    Display Frequency=60
    FSAA=0
    Full Screen=Yes
    RTT Preferred Mode=FBO
    VSync=No
    Video Mode=1024 x 768

Once the main configuration file is loaded, we manually load the correct render system plugin and tell Ogre which render system to use.

    Ogre::String RenderSystemName;
    RenderSystemName = OgreConfigFile.getSetting("Render System");
    m_Root->loadPlugin("RenderSystem_Direct3D9_d");
    Ogre::RenderSystemList RendersList = m_Root->getAvailableRenderers();
    m_Root->setRenderSystem(RendersList[0]);

There's actually a little more code in Engine.cpp for selecting the correct render system plugin to load, but for our render system settings the RenderSystem_Direct3D9_d plugin is all we need.

Next, we load the resources.cfg configuration file.

    Ogre::ConfigFile cf;
    Ogre::String ResourcePath = ApplicationPath + Ogre::String("resources.cfg");
    cf.load(ResourcePath);

The resources.cfg file contains a list of all the paths where Ogre should search for graphics resources. Then, we go through all the sections and settings in the resource configuration file, and add every location to the Ogre resource manager.

    Ogre::ConfigFile::SectionIterator seci = cf.getSectionIterator();
    Ogre::String secName, typeName, archName;
    while (seci.hasMoreElements()){
        secName = seci.peekNextKey();
        Ogre::ConfigFile::SettingsMultiMap *settings = seci.getNext();
        Ogre::ConfigFile::SettingsMultiMap::iterator i;
        for(i = settings->begin(); i != settings->end(); ++i){
            typeName = i->first;
            archName = i->second;
            archName = ApplicationPath + archName;
            Ogre::ResourceGroupManager::getSingleton().addResourceLocation(archName, typeName, secName);
        }
    }

Now, we are ready to initialize the engine.

    m_Root->initialise(false);

We pass false to the initialise() function to indicate that we don't want Ogre to create a render window for us. We'll be creating a render window manually later, using the hWnd window handle from our Win32 application.

Every graphics object in the scene, including all meshes, lights, and cameras, is managed by the Ogre scene manager. There are several scene managers to choose from, and each specializes in managing certain types of scenes of varying sizes.
Some scene managers support rendering vast landscapes, while others are best for enclosed spaces. We'll use the generic scene manager for this recipe, because we don't need any extra features.

    m_SceneManager = m_Root->createSceneManager(Ogre::ST_GENERIC, "Win32Ogre");

Remember when we initialized Ogre::Root and specifically told it not to auto-create a render window? We did that because we create a render window manually using the externalWindowHandle parameter.

    Ogre::NameValuePairList params;
    params["externalWindowHandle"] = Ogre::StringConverter::toString((long)hWnd);
    params["vsync"] = "true";
    RECT rect;
    GetClientRect(hWnd, &rect);
    Ogre::RenderTarget *RenderWindow = NULL;
    try{
        m_RenderWindow = m_Root->createRenderWindow("Ogre in Win32",
            rect.right - rect.left, rect.bottom - rect.top, false, &params);
    }
    catch(...){
        MessageBox(hWnd,
            "Failed to create the Ogre::RenderWindow\nCheck that your graphics card driver is up-to-date",
            "Initialize Render System", MB_OK | MB_ICONSTOP);
        exit(EXIT_SUCCESS);
    }

As you have probably guessed, the createRenderWindow() method creates a new RenderWindow instance. The first parameter is the name of the window. The second and third parameters are the width and height of the window, respectively. The fourth parameter is set to false to indicate that we don't want to run in full-screen mode. The last parameter is our NameValuePair list, in which we provide the external window handle for embedding the Ogre renderer in our application window.

If we want to see anything, we need to create a camera and add it to our scene. The next bit of code does just that.

    m_Camera = m_SceneManager->createCamera("Camera");
    m_Camera->setNearClipDistance(0.5);
    m_Camera->setFarClipDistance(5000);
    m_Camera->setCastShadows(false);
    m_Camera->setUseRenderingDistance(true);
    m_Camera->setPosition(Ogre::Vector3(200.0, 50.0, 100.0));
    Ogre::SceneNode *CameraNode = NULL;
    CameraNode = m_SceneManager->getRootSceneNode()->createChildSceneNode("CameraNode");

First, we tell the scene manager to create a camera, and give it the highly controversial name Camera. Next, we set some basic camera properties, such as the near and far clip distances, whether to cast shadows or not, and where to put the camera in the scene. Now that the camera is created and configured, we still have to attach it to a scene node for Ogre to consider it a part of the scene graph, so we create a new child scene node named CameraNode, and attach our camera to that node.

The last bit of the camera-related code involves telling Ogre that we want the content from our camera to end up in our render window. We do this by defining a viewport that gets its content from the camera and displays it in the render window.

    Ogre::Viewport* Viewport = NULL;
    if (0 == m_RenderWindow->getNumViewports()){
        Viewport = m_RenderWindow->addViewport(m_Camera);
        Viewport->setBackgroundColour(Ogre::ColourValue(0.8f, 1.0f, 0.8f));
    }
    m_Camera->setAspectRatio(Ogre::Real(rect.right - rect.left) /
        Ogre::Real(rect.bottom - rect.top));

The first line of code checks whether we have already created a viewport for our render window; if not, it creates one with a greenish background color. We also set the aspect ratio of the camera to match the aspect ratio of our viewport. Without setting the aspect ratio, we could end up with some really squashed or stretched-looking scenes. You may wonder why you might want to have multiple viewports for a single render window.
Consider a car racing game where you want to display the rear view mirror in the top portion of your render window. One way to accomplish this would be to define a viewport that draws to the entire render window and gets its content from a camera facing out the front windshield of the car, and another viewport that draws to a small subsection of the render window and gets its content from a camera facing out the back windshield.

The last lines of code in the CEngine constructor load and create the 3D robot model that comes with the Ogre SDK.

    Ogre::Entity *RobotEntity = m_SceneManager->createEntity("Robot", "robot.mesh");
    Ogre::SceneNode *RobotNode = m_SceneManager->getRootSceneNode()->createChildSceneNode();
    RobotNode->attachObject(RobotEntity);
    Ogre::AxisAlignedBox RobotBox = RobotEntity->getBoundingBox();
    Ogre::Vector3 RobotCenter = RobotBox.getCenter();
    m_Camera->lookAt(RobotCenter);

We tell the scene manager to create a new entity named Robot, and to load the robot.mesh resource file for this new entity. The robot.mesh file is a model file in the Ogre .mesh format that describes the triangles, textures, and texture mappings for the robot model. We then create a new scene node just like we did for the camera, and attach our robot entity to this new scene node, making our killer robot visible in our scene graph. Finally, we tell the camera to look at the center of our robot's bounding box.

Finally, we tell Ogre to render the scene.

    m_Root->renderOneFrame();

We also tell Ogre to render the scene in OgreInWin32.cpp whenever our application receives a WM_PAINT message. The WM_PAINT message is sent when the operating system, or another application, requests that our application paint a portion of its window. Let's take a look at the WM_PAINT specific code in the WndProc() function again.

    case WM_PAINT:
        hdc = BeginPaint(hWnd, &ps);
        m_Engine->m_Root->renderOneFrame();
        EndPaint(hWnd, &ps);
        break;

The BeginPaint() function prepares the window for painting, and the corresponding EndPaint() function denotes the end of painting. In between those two calls is the Ogre call to renderOneFrame(), which draws the contents of our viewport in our application window. During the renderOneFrame() call, Ogre gathers from the scene manager all the objects, lights, and materials that are to be drawn, based on the camera's frustum or visible bounds. It then passes that information to the render system, which executes the 3D library function calls that run on your system's graphics hardware to do the actual drawing on a render surface. In our case, the 3D library is DirectX and the render surface is the hdc, or handle to the device context, of our application window. The result of all our hard work can be seen in the following screenshot:

Flee in terror, earthling!

There's more...

If you want to use the release configuration instead of debug, change the Configuration type to Release in the project properties, substitute the word release wherever you see the word debug in this recipe, and link OgreMain.lib instead of OgreMain_d.lib in the linker settings.

It is likely that at some point you will want to use a newer version of the Ogre SDK. If you download a newer version and extract it to the Recipes folder, you will need to change the paths in the project settings so that they match the paths for the version of the SDK you downloaded.


More on ADF Business Components and Fusion Page Runtime

Packt
09 Jan 2013
11 min read
Lifecycle of an ADF Fusion web page with region

When a client requests a page with a region, the ADF runtime at the server intercepts and pre-processes the request before passing it to the page lifecycle handler. The pre-processing tasks include the security check, initialization of the Trinidad runtime, and setting up the binding context and ADF context. This is shown in the following diagram:

After setting up the context for processing the request, the ADF framework starts the page lifecycle for the page. During the Before Restore View phase of the page, the framework will try to synchronize the controller state with the request, using the state token sent by the client. If this is a new request, a new root view port is created for the top-level page. In simple words, a view port maps to a page or page fragment in the current view. During view port initialization, the framework will build a data control frame for holding the data controls. During this phase, the runtime also builds the binding containers used in the current page. The data control frame will then be added to the binding context object for future use.

After setting up the basic infrastructure required for processing the request, the page lifecycle moves to the Restore View phase. During the Restore View phase, the framework generates a component tree for the page. Note that the UI component tree, at this stage, contains only metadata for instantiating the UI components. The component instantiation happens only during the Render Response phase, which happens later in the page lifecycle. If this is a fresh request, the lifecycle moves to the Render Response phase. Note that, in this article, we are not discussing how the framework handles post back requests from the client.

During the Render Response phase, the framework instantiates the UI components for each node in the component tree by traversing the tree hierarchy. The completed component tree is appended to UIViewRoot, which represents the root of the UI component tree. Once the UI components are created, the runtime walks through the component tree and performs the pre-rendering tasks. The pre-rendering event is used by components with lazy initialization abilities, such as region, to keep themselves ready for rendering if they are added to the component tree during the page cycle. While processing a region, the framework creates a new child view port and controller state for the region, and starts processing the associated task flow. The following is the algorithm used by the framework while initializing the task flow:

- If the task flow is configured not to share the data control, the framework creates a new data control frame for the task flow and adds it to the parent data control frame (the data control frame for the parent view port).
- If the task flow is configured to start a new transaction, the framework calls beginTransaction() on the control frame.
- If the task flow is configured to use an existing transaction, the framework asks the data control frame to create a save point and to associate it to the page flow stack.
- If the task flow is configured to "use existing transaction if possible", the framework will start a new transaction on the data control if there is no transaction opened on it. If a transaction is already opened on the data control, the framework will use the existing one.

Once the pre-render processing is over, each component will be asked to write out its value into the response object.
During this action, the framework will evaluate the EL expressions specified for the component properties, whenever they are referred to in the page lifecycle. If the EL expressions contain binding expressions referring to properties of the business components, evaluation of the EL will end up instantiating the corresponding model components. The framework performs the following tasks during the evaluation of the model-bound EL expressions:

- It instantiates the data control if it is missing from the current data control frame.
- It performs a check out of the application module.
- It attaches the transaction object to the application module. Note that it is the transaction object that manages all database transactions for an application module. The runtime uses the following algorithm for attaching transactions to the application module:
  - If the application module is nested under a root application module, or if it is used in a task flow that has been configured to use an existing transaction, the framework will identify the existing DBTransaction object that has been created for the root application module or for the calling task flow, and attach it to the current application module. Under the cover, the framework uses the jbo.shared.txn parameter (named transaction) to share the transaction between the application modules. In other words, if an application module needs to share a transaction with another module, the framework assigns the same jbo.shared.txn value to both application modules at runtime. While attaching the transaction to the application module, the runtime will look up the transaction object by using the jbo.shared.txn value set for the application module, and if any transaction object is found for this key, it re-uses the same.
  - If the application module is a regular one, and not part of a task flow that shares a transaction with the caller, the framework will generate a new DBTransaction object and attach it to the application module.
- After initializing the data control, the framework adds it to the data control frame. The data control frame holds all the data controls used in the current view port. Remember that a view port maps to a page or a page fragment.
- It executes the appropriate view object instance, which is bound to the iterator.

At the end of the Render Response phase, the framework will output the DOM content to the client. Before finishing the request, the ADF binding filter will call endRequest() on each data control instance participating in the request. Data controls use this callback to clean up resources and check the application modules back into the pool.

Transaction management in Fusion web applications

A transaction for a business application may be thought of as a unit of work resulting in changes to the application state. Oracle ADF simplifies transaction handling by abstracting the micro-level management of transactions from the developers. This section discusses the internal aspects of transaction management in Fusion web applications. In this discussion, we will consider only ADF Business Component-based applications.

What happens when the task flow commits a transaction

Oracle ADF allows you to define transactional boundaries using task flows. Each task flow can be configured to define the transactional unit of work. Let us see what happens when a task flow return activity tries to commit the currently opened transaction.
The following is the algorithm used by the framework when the task flow commits the transaction:

- When you action a task flow return activity, a check is carried out to see whether the task flow is configured to commit the current transaction or not. If it is, the runtime will identify the data control frame associated with the current view port and call the commit operation on it.
- The data control frame delegates the "commit" call to the transaction handler instance for further processing. The transaction handler iterates over all the data controls added to the data control frame and invokes commitTransaction on each root data control. It is the transaction handler that engages all the data controls added to the data control frame in the transaction commit cycle.
- The data control delegates the commit call to the transaction object that is attached to the application module. Note that if you have a child task flow participating in the transaction started by the caller, or application modules nested under a root application module, they all share the same transaction object. The commit call on a transaction object will commit the changes done by all application modules attached to it.

The following diagram illustrates how the transaction objects that are attached to the application modules are engaged when a client calls commit on the data control frame:

Programmatically managing a transaction in a task flow

If the declarative solution provided by the task flow for managing the transaction is not flexible enough to meet your use case, you can handle the transaction programmatically by calling the beginTransaction(), commit(), and rollback() methods exposed by oracle.adf.model.DataControlFrame. The data control frame acts as a bucket for holding the data controls used in a binding container (page definition file). A data control frame may also hold child data control frames, if the page or page fragment has regions sharing the transaction with the parent. When you call beginTransaction(), commit(), or rollback() on a data control frame, all the data controls added to the data control frame will participate in the appropriate transaction cycle. In plain words, the data control frame provides a mechanism to manage the transaction seamlessly, freeing you from the pain of managing transactions separately for each data control present in the page definition. Note that you can use the DataControlFrame APIs for managing a transaction only in the context of a bounded task flow with an appropriate transaction setting (in the context of a controller transaction). The following example illustrates the APIs for programmatically managing a transaction, using the data control frame:

    //In managed bean class
    public void commit(){
        //Get the binding context
        BindingContext bindingContext = BindingContext.getCurrent();
        //Gets the name of the current (root) DataControlFrame
        String currentFrame = bindingContext.getCurrentDataControlFrame();
        //Finds the DataControlFrame instance
        DataControlFrame dcFrame = bindingContext.findDataControlFrame(currentFrame);
        try {
            //Commit the transaction
            dcFrame.commit();
            //Open a new transaction allowing the user to continue
            //editing data
            dcFrame.beginTransaction(null);
        } catch (Exception e) {
            //Report the error through the binding container
            ((DCBindingContainer)bindingContext.getCurrentBindingsEntry()).reportException(e);
        }
    }

Programmatically managing a transaction in the business components

The preceding solution of calling the commit method on the data control frame is ideal for use in the client tier, in the context of bounded task flows. What if you need to programmatically commit the transaction from the business service layer, which does not have any binding context? To commit or roll back transactions in business service layer logic where there is no binding context, you can call commit() or rollback() on the oracle.jbo.Transaction object associated with the root application module. The following example shows a method defined in an application module, which invokes commit on the Transaction object attached to the root application module:

    //In application module implementation class
    /**
     * This method calls commit on the transaction object
     */
    public void commit(){
        this.getRootApplicationModule().getTransaction().commit();
    }

Sharing a transaction between application modules at runtime

An application module, nested under a root application module, shares the same transaction context with the root. This solution will fit well if you know that the application module needs to be nested during the development phase of the application. What if an application module needs to invoke business methods from various application modules whose names are known only at runtime, and all the method calls are required to happen in the same transaction? You can use the following API in such scenarios to create the required application module at runtime:

    DBTransaction::createApplicationModule(defName);

The following method, defined in a root application module, creates a nested application module on the fly. Both the calling and the called application modules share the same transaction context.

    //In application module implementation class
    /**
     * Caller passes the AM definition name of the application
     * module that requires to participate in the existing
     * transaction. This method creates a new AM if no instance is
     * found for the supplied amName and invokes the required service
     * on it.
     * @param amName
     * @param defName
     */
    public void nestAMIfRequiredAndInvokeMethod(String amName, String defName) {
        //TxnAppModule is a generic interface implemented by all
        //transactional AMs used in this example
        TxnAppModule txnAM = null;
        boolean generatedLocally = false;
        try {
            //Check whether the TxnAppModuleImpl is already nested
            txnAM = (TxnAppModule)getDBTransaction().getRootApplicationModule().findApplicationModule(amName);
            //Create a new nested instance of the TxnAppModuleImpl,
            //if not nested already
            if(txnAM == null) {
                txnAM = (TxnAppModule)this.getDBTransaction().createApplicationModule(defName);
                generatedLocally = true;
            }
            //Invoke business methods
            if (txnAM != null) {
                txnAM.updateEmployee();
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            //Remove the locally created AM once its use is over
            if (generatedLocally && txnAM != null) {
                txnAM.remove();
            }
        }
    }


Prepare and Build

Packt
10 Dec 2012
13 min read
Let's take a look at the history and background of APEX.

History and background

APEX is a very powerful development tool, which is used to create web-based, database-centric applications. The tool itself consists of a schema in the database with a lot of tables, views, and PL/SQL code. It's available for every edition of the database. The techniques that are used with this tool are PL/SQL, HTML, CSS, and JavaScript. Before APEX there was WebDB, which was based on the same techniques. WebDB became part of Oracle Portal and disappeared in silence. The difference between APEX and WebDB is that WebDB generates packages that generate the HTML pages, while APEX generates the HTML pages at runtime from the repository. Despite this approach, APEX is amazingly fast. Because the database is doing all the hard work, the architecture is fairly simple. We only have to add a web server. We can choose one of the following web servers:

- Oracle HTTP Server (OHS)
- Embedded PL/SQL Gateway (EPG)
- APEX Listener

APEX became available to the public in 2004, when it was part of version 10g of the database. At that time it was called HTMLDB and the first version was 1.5. Before HTMLDB, it was called Oracle Flows, Oracle Platform, and Project Marvel. Throughout the years many versions have come out, and at the time of writing the current version is 4.1.1. These many versions prove that Oracle has continuously invested in the development and support of APEX. This is important for the developers and companies who have to make a decision about which techniques to use in the future. According to Oracle, as written in their statement of direction, new versions of APEX will be released at least annually. The following screenshot shows the home screen of the current version of APEX:

Home screen of APEX

For the last few years, there has been an increasing interest in the use of APEX from developers. The popularity came mainly from developers who found themselves comfortable with PL/SQL and wanted an easy way into the world of web-based applications. Oracle gave ADF a higher priority, because APEX was a no-cost option of the database, and with ADF (and all the related techniques and frameworks from Java) additional licenses could be sold. Especially now that Oracle has pointed out APEX as one of the important tools for building applications in their Oracle Database Cloud Service, this interest will only grow. APEX shared a lot of the characteristics of cloud computing, even before cloud computing became popular. These characteristics include:

- Elasticity
- Roles and authorization
- Browser-based development and runtime
- RESTful web services (REST stands for Representational State Transfer)
- Multi-tenancy
- Simple and fast to join

APEX has outstanding community support, witnessed by the number of posts and threads on the Oracle forum. This forum is the most popular after the database and PL/SQL forums. Oracle itself has some websites based on APEX. Among others, there are the following:

- http://asktom.oracle.com
- http://shop.oracle.com
- http://cloud.oracle.com

Oracle uses quite a few internal APEX applications. Oracle also provides a hosted version of APEX at http://apex.oracle.com. Users can sign up for free for a workspace to evaluate and experiment with the latest version of APEX. This environment is for evaluations and demonstrations only; there are no guarantees! Apex.oracle.com is a very popular service: more than 16,000 workspaces are active.
To give an idea of the performance of APEX, the server used for this service used to be a Dell PowerEdge 1950 with two Dual Core Xeon processors and 16 GB of memory.

Installing APEX

In this section, we will discuss some additional considerations to take care of while installing APEX. The best source for the installation process is the APEX Installation Guide.

Runtime or full development environment

On a production database, the runtime environment of APEX should be installed. This installation lacks the Application Builder and the SQL Workshop. Users can run applications, but the applications cannot be modified. The runtime environment of APEX can be administered using SQL*Plus and SQL Developer. The (web interface) options for importing an application, which are only available in a full development environment, can be used manually with the APEX_INSTANCE_ADMIN API. Using a runtime environment for production is recommended for security purposes, so that we can be certain that installed applications cannot be modified by anyone. On a development environment, the full development environment can be installed, with all the features available to the developers.

Build status

Besides the environment of APEX itself, the applications can also be installed in a similar way. When importing or exporting an application, the Run Application Only or Run and Build Application options can be selected. Changing an application to Run Application Only can be done in the Application Builder by choosing Edit Application Properties. Changing the Build Status to Run and Build Application can only be done as the admin user of the workspace internal. In the APEX Administration Services, choose Manage Workspaces and then select Manage Applications | Build Status. Another setting related to the Runtime Only option can be used in the APEX Administration Services at the instance level. Select Manage Instance and then select Security. Setting the property Disable Workspace Login to Yes acts as setting a Runtime Only environment, while still allowing instance administrators to log in to the APEX Administration Services.

Tablespaces

Following the install guide for the full development environment, at a certain moment we have to run the following command on the command line, when logged in as SYS with the SYSDBA role:

    @apexins tablespace_apex tablespace_files tablespace_temp images

The command is explained as follows:

- tablespace_apex is the name of the tablespace that contains all the objects for the APEX application user.
- tablespace_files is the name of the tablespace that contains all the objects for the APEX files user.
- tablespace_temp is the name of the temporary tablespace of the database.
- images will be the virtual directory for APEX images. Oracle recommends using /i/ to support future APEX upgrades.

For the runtime environment, the command is as follows:

    @apxrtins tablespace_apex tablespace_files tablespace_temp images

In the documentation, SYSAUX is given as an example for both tablespace_apex and tablespace_files.
There are several reasons for not using SYSAUX for these tablespaces, but to use our own instead:

- SYSAUX is an important tablespace of the database itself
- We have more control over sizing and growth
- It is easier for a DBA to manage tablespace placement
- There is less contention in the SYSAUX tablespace
- It's easier to clean up older versions of APEX
- And last but not least, it's only an example

Converting a runtime environment into a full development environment and vice versa

It's always possible to switch from a runtime to a full development environment and vice versa. If you want to convert a runtime environment to a full development environment, log in as SYS with the SYSDBA role and on the command line type @apxdvins.sql. For converting a full development environment to a runtime environment, type @apxdevrm (but export websheet applications first).

Another way to restrict user access is by logging in to the APEX Administration Services, where we can (among other things) manage the APEX instance settings and all the workspaces. We can do that in two ways:

- http://server:port/apex/apex_admin: Log in with the administrator credentials
- http://server:port/apex/: Log in to the workspace internal, with the administrator credentials

After logging in, perform the following steps:

1. Go to Manage Instance.
2. Select Security.
3. Select the appropriate settings for Disable Administrator Login and Disable Workspace Login.

These settings can also be set manually with the APEX_INSTANCE_ADMIN API.

Choosing a web server

When using a web-based development and runtime environment, we have to use a web server.

Architecture of APEX

The choice of a web server and the underlying architecture of the system has a direct impact on performance and scalability. Oracle provides us with three choices:

- Oracle HTTP Server (OHS)
- Embedded PL/SQL Gateway (EPG)
- APEX Listener

Simply put, the web server maps the URL in a web browser to a procedure in the database. Everything the procedure prints with the sys.htp package is sent to the browser of the user. This is the concept used by tools such as WebDB and APEX.

OHS

The OHS is the oldest of the three. It's based on the Apache HTTP Server and uses a custom Apache module named mod_plsql:

Oracle HTTP Server

In release 10g of the database, OHS was installed with the database on the same machine. From release 11g onwards, this is not the case anymore. If you want to install the OHS, you have to install the web tier part of WebLogic. If you install it on the same machine as the database, it's free of extra licence costs. This installation takes up a lot of space and is rather complex, compared with the other two. On the other hand, it's very flexible and it has a proven track record. Configuration is done with text files.

EPG

The EPG is part of XML DB and lives inside the database. Because everything is in the database, we have to use the dbms_xdb and dbms_epg PL/SQL packages to configure the EPG. Another implication is that all images and other files are stored inside the database, which can be accessed with PL/SQL or FTP, for example:

Embedded PL/SQL gateway

The architecture is very simple. It's not possible to install the EPG on a different machine than the database. From a security point of view, this is not the recommended architecture for real-life Internet applications, and in most cases the EPG is used in development, test, or other internal environments with few users.
APEX Listener

APEX Listener is the newest of the three; it is still in development, and with every new release more features are added to it. In the latest version, RESTful APIs can be created by configuring resource templates. APEX Listener is a Java application with a very small footprint. APEX Listener can be installed in standalone mode, which is ideal for development and testing purposes. For production environments, the APEX Listener can be deployed on a J2EE-compliant application server such as GlassFish, WebLogic, or Oracle Containers for J2EE:

APEX Listener

Configuration of the APEX Listener is done in a browser. With some extra configuration, uploading of Excel files into APEX collections can be achieved. For future releases, other functionality, such as OAuth 2.0 and ICAP virus scanner integration, has been announced.

Configuration options of the APEX Listener

As with OHS, an architectural choice can be made as to whether we want to install APEX Listener on the same machine as the database. For large public applications, it's better to use a separate web server. Many documents and articles have been written about choosing the right web server. If you read between the lines, you'll see that Oracle more or less recommends the use of APEX Listener. Its functionality, enhanced security, file caching, flexibility of deployment possibilities, and the announced features make it the best choice.

Creating a second administrator

When installing APEX, by default the workspace Internal with the administrator user Admin is created. Some users know more than the average end user, and developers have more knowledge than the average user. Imagine that such users try to log in to either the APEX Administration Services or the normal login page with the workspace Internal and administrator Admin, and consequently use the wrong password. As a consequence, the Admin account would be locked after a number of login attempts. This is a very annoying situation, especially when it happens often. Big companies and APEX hosting companies with many workspaces and a lot of anonymous users or developers may suffer from this. Fortunately there is an easy solution: creating a second administrator account.

Login attempt in workspace Internal as Admin

If the account is already locked, we have to unlock it first. This can easily be done by running the apxchpwd.sql script, which can be found in the main apex directory of the unzipped installation file of APEX:

1. Start SQL*Plus and connect as SYS with the SYSDBA role.
2. Run the script by entering @apxchpwd.sql.
3. Follow the instructions and enter a new password.

Now we are ready to create a second administrator account. This can be done in two ways: using the web interface or the command line.

APEX web interface

Follow these steps to create a new administrator using the browser. First, we need to log in to the APEX Administration Services at http://server:port/apex/. Log in to the workspace Internal, with the administrator credentials. After logging in, perform the following steps:

1. Go to Manage Workspaces.
2. Select Existing Workspaces.
3. You can also select the edit icon of the workspace Internal to inspect the settings. You cannot change them. Select Cancel to return to the previous screen.
4. Select the workspace Internal by clicking on the name.
5. Select Manage Users. Here you can see the user Admin. You can also select the user Admin to change the password. Other settings cannot be changed. Select Cancel or Apply Changes to return to the previous screen.
6. Select Create User.
7. Make sure that Internal is selected in the Workspace field and APEX_xxxxxx is selected in Default Schema, and that the new user is an administrator. xxxxxx has to match the APEX schema version in your database, for instance APEX_040100.
8. Click on Create to finish.

Settings for the new administrator

Command line

When we still have access, we can use the web interface of APEX. If not, we can use the command line:

1. Start SQL*Plus and connect as SYS with the SYSDBA role.
2. Unlock the APEX_xxxxxx account by issuing the following command:

    alter user APEX_xxxxxx account unlock;

3. Connect to the APEX_xxxxxx account. If you don't remember your password, you can just reset it, without impacting the APEX instance.
4. Execute the following (use your own username, e-mail, and password):

    BEGIN
      wwv_flow_api.set_security_group_id (p_security_group_id=>10);
      wwv_flow_fnd_user_api.create_fnd_user(
        p_user_name => 'second_admin',
        p_email_address => 'email@company.com',
        p_web_password => 'second_admin_password');
    END;
    /
    COMMIT
    /

5. The new administrator is created. Connect again as SYS with the SYSDBA role and lock the account again with the following command:

    alter user APEX_xxxxxx account lock;

Now you can log in to the Internal workspace with your newly created account and you'll be asked to change your password.

Other accounts

When an administrator of a developer workspace loses his/her password or has a locked account, you can bring that account back to life by following these steps:

1. Log in to the APEX Administration Services.
2. Go to Manage Workspaces.
3. Select Existing Workspaces.
4. Select the workspace.
5. Select Manage Users.
6. Select the user, change the password, and unlock the user.

A developer or an APEX end user account can be managed by the administrator of the workspace, from the workspace itself. Follow these steps to do so:

1. Log in to the workspace.
2. Go to Administration.
3. Select the user, change the password, and unlock the user.


Building Applications with Spring Data Redis

Packt
03 Dec 2012
9 min read
Designing a Redis data model

The most important rule of designing a Redis data model is this: Redis does not support ad hoc queries, and it does not support relations in the same way as relational databases. Thus, designing a Redis data model is a totally different ballgame than designing the data model of a relational database. The basic guidelines of Redis data model design are given as follows:

- Instead of simply modeling the information stored in our data model, we also have to think about how we want to search for information in it. This often leads to a situation where we have to duplicate data in order to fulfill the requirements given to us. Don't be afraid to do this.
- We should not concentrate on normalizing our data model. Instead, we should combine the data that we need to handle as a unit into an aggregate.
- Since Redis does not support relations, we have to design and implement these relations by using the supported data structures. This means that we have to maintain these relations manually when they are changed. Because this might require a lot of effort and code, it could be wise to simply duplicate the information instead of using relations.
- It is always wise to spend a moment to verify that we are using the correct tool for the job.

NoSQL Distilled, by Martin Fowler, contains explanations of different NoSQL databases and their use cases, and can be found at http://martinfowler.com/books/nosql.html.

Redis supports multiple data structures. However, one question remains unanswered: which data structure should we use for our data? This question is addressed in the following list:

- String: A string is a good choice for storing information that is already converted to a textual form. For instance, if we want to store HTML, JSON, or XML, a string should be our weapon of choice.
- List: A list is a good choice if we will access it only near the start or end. This means that we should use it for representing queues or stacks.
- Set: We should use a set if we need to get the size of a collection or check whether a certain item belongs to it. Also, if we want to represent relations, a set is a good choice (for example, "who are John's friends?").
- Sorted set: Sorted sets should be used in the same situations as sets, when the ordering of items is important to us.
- Hash: A hash is a perfect data structure for representing complex objects.

Key components

Spring Data Redis provides certain components that are the cornerstones of each application that uses it. This section provides a brief introduction to the components that we will later use to implement our example applications.

Atomic counters

Atomic counters are for Redis what sequences are for relational databases. Atomic counters guarantee that the value received by a client is unique. This makes these counters a perfect tool for creating unique IDs for the data that is stored in Redis. At the moment, Spring Data Redis offers two atomic counters: RedisAtomicInteger and RedisAtomicLong. These classes provide atomic counter operations for integers and longs.

RedisTemplate

The RedisTemplate<K,V> class is the central component of Spring Data Redis. It provides methods that we can use to communicate with a Redis instance. This class requires that two type parameters are given during its instantiation: the type of the Redis key and the type of the Redis value.
Operations

The RedisTemplate class provides two kinds of operations that we can use to store, fetch, and remove data from our Redis instance:

- Operations that require that the key and the value are given every time an operation is performed. These operations are handy when we have to execute a single operation by using a key and a value.
- Operations that are bound to a specific key that is given only once. We should use this approach when we have to perform multiple operations by using the same key.

The methods that require that a key and a value are given every time an operation is performed are described in the following list:

- HashOperations<K,HK,HV> opsForHash(): This method returns the operations that are performed on hashes
- ListOperations<K,V> opsForList(): This method returns the operations performed on lists
- SetOperations<K,V> opsForSet(): This method returns the operations performed on sets
- ValueOperations<K,V> opsForValue(): This method returns the operations performed on simple values
- ZSetOperations<K,V> opsForZSet(): This method returns the operations performed on sorted sets

The methods of the RedisTemplate class that allow us to execute multiple operations by using the same key are described in the following list:

- BoundHashOperations<K,HK,HV> boundHashOps(K key): This method returns hash operations that are bound to the key given as a parameter
- BoundListOperations<K,V> boundListOps(K key): This method returns list operations bound to the key given as a parameter
- BoundSetOperations<K,V> boundSetOps(K key): This method returns set operations that are bound to the given key
- BoundValueOperations<K,V> boundValueOps(K key): This method returns operations performed on simple values that are bound to the given key
- BoundZSetOperations<K,V> boundZSetOps(K key): This method returns operations performed on sorted sets that are bound to the key given as a parameter

The differences between these operations become clear to us when we start building our example applications.

Serializers

Because the data is stored in Redis as bytes, we need a method for converting our data to bytes and vice versa. Spring Data Redis provides an interface called RedisSerializer<T>, which is used in the serialization process. This interface has one type parameter that describes the type of the serialized object. Spring Data Redis provides several implementations of this interface. These implementations are described in the following list:

- GenericToStringSerializer<T>: Serializes strings to bytes and vice versa. Uses the Spring ConversionService to transform objects to strings and vice versa.
- JacksonJsonRedisSerializer<T>: Converts objects to JSON and vice versa.
- JdkSerializationRedisSerializer: Provides Java-based serialization for objects.
- OxmSerializer: Uses the Object/XML mapping support of Spring Framework 3.
- StringRedisSerializer: Converts strings to bytes and vice versa.

We can customize the serialization process of the RedisTemplate class by using the described serializers. The RedisTemplate class provides flexible configuration options that can be used to set the serializers that are used to serialize keys, values, hash keys, hash values, and string values. The default serializer of the RedisTemplate class is JdkSerializationRedisSerializer. However, the string serializer is an exception to this rule: StringRedisSerializer is the serializer that is used by default to serialize string values.
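To make the difference between the two kinds of operations concrete, the following is a minimal sketch. It is not taken from the example application of this article; the class name, method names, and key used here are only illustrative, and it assumes a RedisTemplate<String, String> bean like the one configured later in this article.

    import org.springframework.data.redis.core.BoundValueOperations;
    import org.springframework.data.redis.core.RedisTemplate;
    import org.springframework.data.redis.core.ValueOperations;

    public class OperationsExample {

        private final RedisTemplate<String, String> redisTemplate;

        public OperationsExample(RedisTemplate<String, String> redisTemplate) {
            this.redisTemplate = redisTemplate;
        }

        public String singleOperations() {
            // The key is passed in with every call.
            ValueOperations<String, String> ops = redisTemplate.opsForValue();
            ops.set("greeting", "hello");
            return ops.get("greeting");
        }

        public String boundOperations() {
            // The key is given once; every further call is bound to it.
            BoundValueOperations<String, String> boundOps =
                    redisTemplate.boundValueOps("greeting");
            boundOps.set("hello");
            return boundOps.get();
        }
    }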
Implementing a CRUD application

This section describes two different ways of implementing a CRUD application that is used to manage contact information. First, we will learn how we can implement a CRUD application by using the default serializers of the RedisTemplate class. Second, we will learn how we can use value serializers and implement a CRUD application that stores our data in JSON format. Both of these applications will also share the same domain model. This domain model consists of two classes: Contact and Address. Note the following points about the domain model:

- We removed the JPA-specific annotations from the classes
- We use these classes in our web layer as form objects, and they no longer have any methods other than getters and setters

The domain model is not the only thing that is shared by these examples. They also share the interface that declares the service methods for the Contact class. The source code of the ContactService interface is given as follows:

    public interface ContactService {
        public Contact add(Contact added);
        public Contact deleteById(Long id) throws NotFoundException;
        public List<Contact> findAll();
        public Contact findById(Long id) throws NotFoundException;
        public Contact update(Contact updated) throws NotFoundException;
    }

Both of these applications will communicate with the used Redis instance by using the Jedis connector. Regardless of the approach used, we can implement a CRUD application with Spring Data Redis by following these steps:

1. Configure the application context.
2. Implement the CRUD functions.

Let's get started and find out how we can implement the CRUD functions for contact information.

Using default serializers

This subsection describes how we can implement a CRUD application by using the default serializers of the RedisTemplate class. This means that StringRedisSerializer is used to serialize string values, and JdkSerializationRedisSerializer serializes other objects.

Configuring the application context

We can configure the application context of our application by making the following changes to the ApplicationContext class:

1. Configure the Redis template bean.
2. Configure the Redis atomic long bean.

Configuring the Redis template bean

We can configure the Redis template bean by adding a redisTemplate() method to the ApplicationContext class and annotating this method with the @Bean annotation. We can implement this method by following these steps:

1. Create a new RedisTemplate object.
2. Set the used connection factory to the created RedisTemplate object.
3. Return the created object.

The source code of the redisTemplate() method is given as follows:

    @Bean
    public RedisTemplate redisTemplate() {
        RedisTemplate<String, String> redis = new RedisTemplate<String, String>();
        redis.setConnectionFactory(redisConnectionFactory());
        return redis;
    }

Configuring the Redis atomic long bean

We start the configuration of the Redis atomic long bean by adding a method called redisAtomicLong() to the ApplicationContext class and annotating the method with the @Bean annotation. Our next task is to implement this method by following these steps:

1. Create a new RedisAtomicLong object.
2. Pass the name of the used Redis counter and the Redis connection factory as constructor parameters.
3. Return the created object.

The source code of the redisAtomicLong() method is given as follows:

    @Bean
    public RedisAtomicLong redisAtomicLong() {
        return new RedisAtomicLong("contact", redisConnectionFactory());
    }

If we need to create IDs for instances of different classes, we can use the same Redis counter.
Thus, we have to configure only one Redis atomic long bean.
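Before moving on, the following sketch hints at how the configured template and counter could be combined in a ContactService implementation. The class name RedisContactService, the contact: key prefix, and the RedisTemplate<String, Contact> generic types are assumptions made for illustration (and the default JdkSerializationRedisSerializer would additionally require Contact to implement Serializable); this is not the book's actual repository code:

import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.support.atomic.RedisAtomicLong;

public class RedisContactService {

    private static final String KEY_PREFIX = "contact:";

    private final RedisTemplate<String, Contact> redisTemplate;
    private final RedisAtomicLong contactIdCounter;

    public RedisContactService(RedisTemplate<String, Contact> redisTemplate,
                               RedisAtomicLong contactIdCounter) {
        this.redisTemplate = redisTemplate;
        this.contactIdCounter = contactIdCounter;
    }

    public Contact add(Contact added) {
        // Obtain a new ID from the Redis counter and store the contact
        // under a key derived from that ID.
        added.setId(contactIdCounter.incrementAndGet());
        redisTemplate.opsForValue().set(KEY_PREFIX + added.getId(), added);
        return added;
    }

    public Contact findById(Long id) throws NotFoundException {
        Contact found = redisTemplate.opsForValue().get(KEY_PREFIX + id);
        if (found == null) {
            throw new NotFoundException("No contact found with id: " + id);
        }
        return found;
    }
}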
Basic Coding with HornetQ: Creating and Consuming Messages

Packt
28 Nov 2012
4 min read
(For more resources related to this topic, see here.) Installing Eclipse on Windows You can download the Eclipse IDE for Java EE developers (in our case, the ZIP file eclipse-jee-indigo-SR1-win32.zip) from http://www.eclipse.org/downloads/. Once downloaded, you have to unzip the eclipse folder inside the archive to the destination folder, so that you have a folder structure like the one illustrated in the following screenshot: Now a double-click on the eclipse.exe file will fire the first run of Eclipse. Installing NetBeans on Windows NetBeans is one of the most frequently used IDEs for Java development purposes. It mimics the Eclipse plugin module's installation, so you can download the Java EE version from http://netbeans.org/downloads/. But remember that this version also comes with an integrated GlassFish application server and a Tomcat server. Even in this case, you only need to download the .exe file (java_ee_sdk-6u3-jdk7-windows.exe, in our case) and launch the installer. Once finished, you should be able to run the IDE by clicking on the NetBeans icon in your Windows Start menu. Installing NetBeans on Linux If you are using a Debian-based version of Linux such as Ubuntu, installing both NetBeans and Eclipse is nothing more than typing a command in the bash shell and waiting for the installation process to finish. As we are using Ubuntu version 11, we will type the following command from a non-root user account to install Eclipse:
sudo apt-get install eclipse
The NetBeans installation procedure is slightly different, because the Ubuntu repositories do not have a package for a NetBeans installation. So, to install NetBeans, you have to download a script and then run it. If you are using a non-root user account, you need to type the following commands in a terminal:
sudo wget http://download.netbeans.org/netbeans/7.1.1/final/bundles/netbeans-7.1.1-ml-javaee-linux.sh
sudo chmod +x netbeans-7.1.1-ml-javaee-linux.sh
./netbeans-7.1.1-ml-javaee-linux.sh
During its first run, Eclipse will ask which default workspace new projects should be stored in. Choose the one suggested and, if you are not planning to change it, check the Use this as the default and do not ask again checkbox so that the question is not asked again, as shown in the following screenshot: The same happens with NetBeans, but during the installation procedure. Post installation Both Eclipse and NetBeans have an integrated system for upgrading to the latest version, so once you have completed the first run, keep your IDE updated. For Eclipse, you can access the Update window through the menu Help | Check for updates. This will pop up the window, as shown in this screenshot: NetBeans has the same functionality, which can be launched from the menu. A 10,000-foot view of HornetQ Before moving on to the coding phase, it is worth reviewing some concepts that will help the user and the coder better understand how HornetQ manages messages. HornetQ is only a set of Plain Old Java Objects (POJOs) compiled and grouped into JAR files. The software developer can easily grasp that this characteristic means HornetQ has no dependency on third-party libraries. It is possible to use and even start HornetQ from any Java class; this is a great advantage over other frameworks. HornetQ deals internally only with its own set of classes, called the HornetQ core, avoiding any dependency on the JMS dialect and specifications.
Nevertheless, the client that connects to the HornetQ server can speak the JMS language, so the HornetQ server also includes a JMS-to-core-API translator. This means that when you send a JMS message to a HornetQ server, it is received as JMS and then translated into the core API dialect to be managed internally by HornetQ. The following figure illustrates this concept: The core messaging concepts of HornetQ are somewhat simpler than those of JMS: Message: This is a unit of data that is sent by a producer and delivered to a consumer. A message can have several attributes, among them durability, priority, an expiry time, a timestamp, and a size. Address: This is the logical destination to which messages are sent. HornetQ maintains an association between an address and the queues bound to that address, so a message sent to an address is routed to the queues bound to it. Queue: This is nothing more than a set of messages. Like messages, queues have attributes, such as durability, whether they are temporary, and filter expressions.
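To make the core API concrete, here is a minimal sketch of a producer written directly against the HornetQ core classes. The address and queue names are invented for the example, and the exact factory methods can vary slightly between HornetQ 2.x releases:

import org.hornetq.api.core.TransportConfiguration;
import org.hornetq.api.core.client.ClientMessage;
import org.hornetq.api.core.client.ClientProducer;
import org.hornetq.api.core.client.ClientSession;
import org.hornetq.api.core.client.ClientSessionFactory;
import org.hornetq.api.core.client.HornetQClient;
import org.hornetq.api.core.client.ServerLocator;
import org.hornetq.core.remoting.impl.netty.NettyConnectorFactory;

public class CoreProducerExample {
    public static void main(String[] args) throws Exception {
        // Locate the server through the Netty connector (default host and port apply).
        ServerLocator locator = HornetQClient.createServerLocatorWithoutHA(
                new TransportConfiguration(NettyConnectorFactory.class.getName()));
        ClientSessionFactory factory = locator.createSessionFactory();
        ClientSession session = factory.createSession();
        try {
            // Bind a durable queue to an address, then send a message to that address.
            session.createQueue("example.address", "example.queue", true);
            ClientProducer producer = session.createProducer("example.address");
            ClientMessage message = session.createMessage(true); // durable message
            message.getBodyBuffer().writeString("Hello from the core API");
            producer.send(message);
        } finally {
            session.close();
            factory.close();
            locator.close();
        }
    }
}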
Dispatchers and Routers

Packt
12 Nov 2012
5 min read
(For more resources related to this topic, see here.) Dispatchers In the real world, dispatchers are the communication coordinators that are responsible for receiving and passing on messages. For the emergency services (for example, 911 in the U.S.), the dispatchers are the people responsible for taking the call and passing the message on to the other departments (medical, police, fire station, or others). The dispatcher coordinates the route and activities of all these departments to make sure that the right help reaches the destination as early as possible. Another example is how an airport manages airplanes taking off. The air traffic controllers (ATCs) coordinate the use of the runways between the various planes taking off and landing. On one side, air traffic controllers manage the runways (usually ranging from 1 to 3); on the other, aircraft of different sizes and capacities from different airlines are ready to take off and land. An air traffic controller coordinates the various airplanes, gets them lined up, and allocates the runways for take-off and landing: As we can see, there are multiple runways available and multiple airlines, each having a different set of airplanes needing to take off. It is the responsibility of the air traffic controller(s) to coordinate the take-off and landing of planes from each airline and to do this activity as fast as possible. Dispatcher as a pattern Dispatcher is a well-recognized and widely used pattern in the Java world. Dispatchers are used to control the flow of execution. Based on the dispatching policy, dispatchers route the incoming message or request to the appropriate business process. Dispatchers as a pattern provide the following advantages:
Centralized control: Dispatchers provide a central place from which various messages/requests are dispatched. The word "centralized" means code is reused, leading to improved maintainability and reduced duplication of code.
Application partitioning: There is a clear separation between the business logic and the display logic. There is no need to intermingle business logic with display logic.
Reduced inter-dependencies: Separation of the display logic from the business logic means there are reduced inter-dependencies between the two. Reduced inter-dependencies mean less contention on the same resources, leading to a scalable model.
Dispatcher as a concept provides a centralized control mechanism that decouples different processing logic within the application, which in turn reduces inter-dependencies. Executor in Java In Akka, dispatchers are based on the Java Executor framework (part of java.util.concurrent). Executor provides the framework for the execution of asynchronous tasks. It is based on the producer–consumer model, meaning the act of task submission (producer) is decoupled from the act of task execution (consumer): the threads that submit tasks are different from the threads that execute the tasks. Two important implementations of the Executor framework are as follows:
ThreadPoolExecutor: It executes each submitted task using a thread from a predefined and configured thread pool.
ForkJoinPool: It uses the same thread pool model, but supplemented with work stealing. Threads in the pool will find and execute tasks (work stealing) created by other active tasks or tasks allocated to other threads in the pool that are pending execution. Fork/join is based on a fine-grained, parallel, divide-and-conquer style of parallelism.
The idea is to break down large data chunks into smaller chunks and process them in parallel to take advantage of the underlying processor cores. Executor is backed by constructs that allow you to define and control how the tasks are executed. Using these Executor constructs, one can specify the following:
How many threads will be running? (thread pool size)
How are the tasks queued until they come up for processing?
How many tasks can be executed concurrently?
What happens when the system is overloaded and tasks have to be rejected? Which tasks are selected for rejection?
What is the order of execution of tasks? (LIFO, FIFO, and so on)
Which pre- and post-task execution actions can be run?
In the book Java Concurrency in Practice, Addison-Wesley Publishing, the authors have described the Executor framework and its usage very nicely. It is useful to read that book for more details on the concurrency constructs provided by the Java language. Dispatchers in Akka In the Akka world, the dispatcher controls and coordinates the dispatching of messages to the actors mapped onto the underlying threads. Dispatchers make sure that the resources are used optimally and messages are processed as fast as possible. Akka provides multiple dispatch policies that can be customized according to the underlying hardware resources (number of cores or memory available) and the type of application workload. If we take our example of the airport and map it to the Akka world, we can see that the runways are mapped to the underlying resources, that is, the threads. The airlines with their planes are analogous to the mailboxes with their messages. The ATC tower employs a dispatch policy to make sure the runways are optimally utilized and the planes spend minimum time waiting for clearance to take off or land: For Akka, the dispatchers, actors, mailboxes, and threads look like the following diagram: The dispatchers run on their own threads; they dispatch the actors and messages from the attached mailboxes and allocate them to the executor threads. The executor threads are configured and tuned to the underlying processor cores that are available for processing the messages.
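Because Akka dispatchers sit on top of this Executor machinery, the producer–consumer decoupling is easiest to see in plain Java first. The following self-contained sketch submits tasks from the main thread and lets a fixed pool execute them; the pool size and the task body are arbitrary choices for illustration:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ExecutorExample {
    public static void main(String[] args) throws InterruptedException {
        // The thread pool plays the role of the runways: a fixed resource
        // shared by every task that is submitted.
        ExecutorService executor = Executors.newFixedThreadPool(3);

        // Task submission (producer side) is decoupled from task execution
        // (consumer side): the main thread only submits work.
        for (int i = 0; i < 10; i++) {
            final int taskId = i;
            executor.submit(new Runnable() {
                public void run() {
                    System.out.println("Task " + taskId + " executed by "
                            + Thread.currentThread().getName());
                }
            });
        }

        executor.shutdown();
        executor.awaitTermination(10, TimeUnit.SECONDS);
    }
}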
OData on Mobile Devices

Packt
02 Aug 2012
8 min read
With the continuous evolution of mobile operating systems, smart mobile devices (such as smartphones or tablets) play increasingly important roles in everyone's daily work and life. The iOS (from Apple Inc., for iPhone, iPad, and iPod Touch devices), Android (from Google) and Windows Phone 7 (from Microsoft) operating systems have shown us the great power and potential of modern mobile systems. In the early days of the Internet, web access was mostly limited to fixed-line devices. However, with the rapid development of wireless network technology (such as 3G), Internet access has become a common feature for mobile or portable devices. Modern mobile OSes, such as iOS, Android, and Windows Phone have all provided rich APIs for network access (especially Internet-based web access). For example, it is quite convenient for mobile developers to create a native iPhone program that uses a network API to access remote RSS feeds from the Internet and present the retrieved data items on the phone screen. And to make Internet-based data access and communication more convenient and standardized, we often leverage some existing protocols, such as XML or JSON, to help us. Thus, it is also a good idea if we can incorporate OData services in mobile application development so as to concentrate our effort on the main application logic instead of the details about underlying data exchange and manipulation. In this article, we will discuss several cases of building OData client applications for various kinds of mobile device platforms. The first four recipes will focus on how to deal with OData in applications running on Microsoft Windows Phone 7. And they will be followed by two recipes that discuss consuming an OData service in mobile applications running on the iOS and Android platforms. Although this book is .NET developer-oriented, since iOS and Android are the most popular and dominating mobile OSes in the market, I think the last two recipes here would still be helpful (especially when the OData service is built upon WCF Data Service on the server side). Accessing OData service with OData WP7 client library What is the best way to consume an OData service in a Windows Phone 7 application? The answer is, by using the OData client library for Windows Phone 7 (OData WP7 client library). Just like the WCF Data Service client library for standard .NET Framework based applications, the OData WP7 client library allows developers to communicate with OData services via strong-typed proxy and entity classes in Windows Phone 7 applications. Also, the latest Windows Phone SDK 7.1 has included the OData WP7 client library and the associated developer tools in it. In this recipe, we will demonstrate how to use the OData WP7 client library in a standard Windows Phone 7 application. Getting ready The sample WP7 application we will build here provides a simple UI for users to view and edit the Categories data by using the Northwind OData service. The application consists of two phone screens, shown in the following screenshot: Make sure you have installed Windows Phone SDK 7.1 (which contains the OData WP7 client library and tools) on the development machine. You can get the SDK from the following website: http://create.msdn.com/en-us/home/getting_started The source code for this recipe can be found in the ch05ODataWP7ClientLibrarySln directory. How to do it... Create a new ASP.NET web application that contains the Northwind OData service. 
Add a new Windows Phone Application project in the same solution (see the following screenshot). Select Windows Phone OS 7.1 as the Target Windows Phone OS Version in the New Windows Phone Application dialog box (see the following screenshot). Click on the OK button, to finish the WP7 project creation. The following screenshot shows the default WP7 project structure created by Visual Studio: Create a new Windows Phone Portrait Page (see the following screenshot) and name it EditCategory.xaml. Create the OData client proxy (against the Northwind OData service) by using the Visual Studio Add Service Reference wizard. Add the XAML content for the MainPage.xaml page (see the following XAML fragment). <Grid x_Name="ContentPanel" Grid.Row="1" Margin="12,0,12,0"> <ListBox x_Name="lstCategories" ItemsSource="{Binding}"> <ListBox.ItemTemplate>> <DataTemplate> <Grid> <Grid.ColumnDefinitions> <ColumnDefinition Width="60" /> <ColumnDefinition Width="260" /> <ColumnDefinition Width="140" /> </Grid.ColumnDefinitions> <TextBlock Grid.Column="0" Text="{Binding Path=CategoryID}" FontSize="36" Margin="5"/> <TextBlock Grid.Column="1" Text="{Binding Path=CategoryName}" FontSize="36" Margin="5" TextWrapping="Wrap"/> <HyperlinkButton Grid.Column="2" Content="Edit" HorizontalAlignment="Right" NavigateUri="{Binding Path=CategoryID, StringFormat='/EditCategory.xaml? ID={0}'}" FontSize="36" Margin="5"/> <Grid> <DataTemplate> <ListBox.ItemTemplate> <ListBox> <Grid> Add the code for loading the Category list in the code-behind file of the MainPage. xaml page (see the following code snippet). public partial class MainPage : PhoneApplicationPage { ODataSvc.NorthwindEntities _ctx = null; DataServiceCollection _categories = null; ...... private void PhoneApplicationPage_Loaded(object sender, RoutedEventArgs e) { Uri svcUri = new Uri("http://localhost:9188/NorthwindOData.svc"); _ctx = new ODataSvc.NorthwindEntities(svcUri); _categories = new DataServiceCollection(_ctx); _categories.LoadCompleted += (o, args) => { if (_categories.Continuation != null) _categories.LoadNextPartialSetAsync(); else { this.Dispatcher.BeginInvoke( () => { ContentPanel.DataContext = _categories; ContentPanel.UpdateLayout(); } ); } }; var query = from c in _ctx.Categories select c; _categories.LoadAsync(query); } } Add the XAML content for the EditCategory.xamlpage (see the following XAML fragment). <Grid x_Name="ContentPanel" Grid.Row="1" Margin="12,0,12,0"> <StackPanel> <TextBlock Text="{Binding Path=CategoryID, StringFormat='Fields of Categories({0})'}" FontSize="40" Margin="5" /> <Border> <StackPanel> <TextBlock Text="Category Name:" FontSize="24" Margin="10" /> <TextBox x_Name="txtCategoryName" Text="{Binding Path=CategoryName, Mode=TwoWay}" /> <TextBlock Text="Description:" FontSize="24" Margin="10" /> <TextBox x_Name="txtDescription" Text="{Binding Path=Description, Mode=TwoWay}" /> </StackPanel> </Border> <StackPanel Orientation="Horizontal" HorizontalAlignment="Center"> <Button x_Name="btnUpdate" Content="Update" HorizontalAlignment="Center" Click="btnUpdate_Click" /> <Button x_Name="btnCancel" Content="Cancel" HorizontalAlignment="Center" Click="btnCancel_Click" /> </StackPanel> </StackPanel> </Grid> Add the code for editing the selected Category item in the code-behind file of the EditCategory.xaml page. In the PhoneApplicationPage_Loaded event, we will load the properties of the selected Category item and display them on the screen (see the following code snippet). 
private void PhoneApplicationPage_Loaded(object sender, RoutedEventArgs e) { EnableControls(false); Uri svcUri = new Uri("http://localhost:9188/NorthwindOData. svc"); _ctx = new ODataSvc.NorthwindEntities(svcUri); var id = int.Parse(NavigationContext.QueryString["ID"]); var query = _ctx.Categories.Where(c => c.CategoryID == id); _categories = new DataServiceCollection(_ctx); _categories.LoadCompleted += (o, args) => { if (_categories.Count <= 0) { MessageBox.Show("Failed to retrieve Category item."); NavigationService.GoBack(); } else { EnableControls(true); ContentPanel.DataContext = _categories[0]; ContentPanel.UpdateLayout(); } }; _categories.LoadAsync(query); } The code for updating changes (against the Category item) is put in the Click event of the Update button (see the following code snippet). private void btnUpdate_Click(object sender, RoutedEventArgs e) { EnableControls(false); _ctx.UpdateObject(_categories[0]); _ctx.BeginSaveChanges( (ar) => { this.Dispatcher.BeginInvoke( () => { try { var response = _ctx.EndSaveChanges(ar); NavigationService.Navigate(new Uri("/MainPage.xaml", UriKind.Relative)); } catch (Exception ex) { MessageBox.Show("Failed to save changes."); EnableControls(true); } } ); }, null ); } Select the WP7 project and launch it in Windows Phone Emulator (see the following screenshot). Depending on the performance of the development machine, it might take a while to start the emulator. Running a WP7 application in Windows Phone Emulator is very helpful especially when the phone application needs to access some web services (such as WCF Data Service) hosted on the local machine (via the Visual Studio test web server). How it works... Since the OData WP7 client library (and tools) has been installed together with Windows Phone SDK 7.1, we can directly use the Visual Studio Add Service Reference wizard to generate the OData client proxy in Windows Phone applications. And the generated OData proxy is the same as what we used in standard .NET applications. Similarly, all network access code (such as the OData service consumption code in this recipe) has to follow the asynchronous programming pattern in Windows Phone applications. There's more... In this recipe, we use the Windows Phone Emulator for testing. If you want to deploy and test your Windows Phone application on a real device, you need to obtain a Windows Phone developer account so as to unlock your Windows Phone device. Refer to the walkthrough: App Hub - windows phone developer registration walkthrough,available at http://go.microsoft.com/fwlink/?LinkID=202697
Combining Silverlight and Windows Azure projects

Packt
22 Mar 2012
6 min read
Combining Silverlight and Windows Azure projects Standard Silverlight applications require that they be hosted on HTML pages, so that they can be loaded in a browser. Developers who work with the .Net framework will usually host this page within an ASP.Net website. The easiest way to host a Silverlight application on Azure is to create a single web role that contains an ASP.Net application to host the Silverlight application. Hosting the Silverlight application in this way enables you, as a developer, to take advantage of the full .Net framework to support your Silverlight application. Supporting functionalities can be provided such as hosting WCF services, RIA services, Entity Framework, and so on. In the upcoming chapters, we will explore ways by which RIA services, OData, Entity Framework, and a few other technologies can be used together. For the rest of this chapter, we will focus on the basics of hosting a Silverlight application within Azure and integrating a hosted WCF service. Creating a Silverlight or Azure solution Your system should already be fully configured with all Silverlight and Azure tools. In this section, we are going to create a simple Silverlight application that is hosted inside an Azure web role. This will be the basic template that is used throughout the book as we explore different ways in which we can integrate the technologies together: Start Visual Studio as an administrator. You can do this by opening the Start Menu and finding Visual Studio, then right-clicking on it, and selecting Run as Administrator. This is required for the Azure compute emulator to run successfully. Create a new Windows Azure Cloud Service. The solution name used in the following example screenshot is Chapter3Exercise1: (Move the mouse over the image to enlarge.) Add a single ASP.Net Web Role as shown in the following screenshot. For this exercise, the default name of WebRole1 will be used. The name of the role can be changed by clicking on the pencil icon next to the WebRole1 name: Visual Studio should now be loaded with a single Azure project and an ASP. Net project. In the following screenshot, you can see that Visual Studio is opened with a solution named Chapter3Exercise1. The solution contains a Windows Azure Cloud project, also called Chapter3Exercise1. Finally, the ASP.Net project can be seen named as WebRole1: Right-click on the ASP.Net project named WebRole1 and select Properties. In the WebRole1 properties screen, click on the Silverlight Applications tab. Click on Add to add a new Silverlight project into the solution. The Add button has been highlighted in the following screenshot: For this exercise, rename the project to HelloWorldSilverlightProject. Click on Add to create the Silverlight project. The rest of the options can be left to their default settings, as shown in the following screenshot. Visual Studio will now create the Silverlight project and add it to the solution. The resulting solution should now have three projects as shown in the following screenshot. These include the original Azure project, Chapter3Exercise1; the ASP.Net web role, WebRole1; and the third new project HelloWorldSilverlightProject: Open MainPage.xaml in design view, if not already open. Change the grid to a StackPanel. Inside the StackPanel, add a button named button1 with a height of 40 and a content that displays Click me!. 
Inside the StackPanel, underneath button1, add a text block named textBlock1 with a height of 20 The final XAML should look similar to this code snippet: <UserControl> <StackPanel x_Name="LayoutRoot" Background="White"> <Button x_Name="button1" Height="40" Content="Click me!" /> <TextBlock x_Name="textBlock1" Height="20" /> </StackPanel> </UserControl> Double-click on button1 in the designer to have Visual Studio automatically create a click event. The final XAML in the designer should look similar to the following screenshot: Open the MainPage.xaml.cs code behind the file and find the button1_Click method. Add a code that will update textBlock1 to display Hello World and the current time as follows: private void button1_Click(object sender, RoutedEventArgs e) { textBlock1.Text = "Hello World at " + DateTime.Now.ToLongTimeString(); } Build the project to ensure that everything compiles correctly.Now that the solution has been built, it is ready to be run and debugged within the Windows Azure compute emulator. The next section will explore what happens while running an Azure application on the compute emulator. Running an Azure application on the Azure compute emulator With the solution built, it is ready to run on the Azure simulation: the compute emulator. The compute emulator is the local simulation of the Windows Azure compute emulator which Microsoft runs on the Azure servers it hosts. When you start debugging by pressing F5 (or by selecting Debug | Start Debugging from the menu), Visual Studio will automatically package the Azure project, then start the Azure compute emulator simulation. The package will be copied to a local folder used by the compute emulator. The compute emulator will then start a Windows process to host or execute the roles, one of which will be started as per the instance request for each role. Once the compute emulator has been successfully initialized, Visual Studio will then launch the browser and attach the debugger to the correct places. This is similar to the way Visual Studio handles debugging of an ASP.Net application with the ASP. Net Development Server. The following steps will take you through the process of running and debugging applications on top of the compute emulator: In Solution Explorer, inside the HelloWorldSilverlightProject, right-click on HelloWorldSilverlightProjectTestPage.aspx, and select Set as startup page. Ensure that the Azure project (Chapter3Exercise1) is still set as the start-up project. In Visual Studio, press F5 to start debugging (or from the menu select Debug | Start Debugging). Visual Studio will compile the project, and if successful, begins to launch the Azure compute emulator as shown in the following screenshot: Once the compute emulator has been started and the Azure package deployed to it, Visual Studio will launch Internet Explorer. Internet Explorer will display the page set as the start-up page (which was set to in an earlier step HelloWorldSilverlightProjectTestPage.aspx). Once the Silverlight application has been loaded, click on the Click me! button. The TextBlock should be updated with the current time, as shown in the following screenshot: Upon this completion, you should now have successfully deployed a Silverlight application on top of the Windows Azure compute emulator. You can now use this base project to build more advanced features and integration with other services. 
Consuming an Azure-hosted WCF service within a Silverlight application A standalone Silverlight application will not be able to do much by itself. Most applications will require that they consume data from a data source, such as to get a list of products or customer orders. A common way to send data between .Net applications is through WCF services.
Introduction to Logging in Tomcat 7

Packt
21 Mar 2012
9 min read
(For more resources on Apache, see here.) JULI Previous versions of Tomcat (till 5.x) use Apache common logging services for generating logs. A major disadvantage with this logging mechanism is that it can handle only single JVM configuration and makes it difficult to configure separate logging for each class loader for independent application. In order to resolve this issue, Tomcat developers have introduced a separate API for Tomcat 6 version, that comes with the capability of capturing each class loader activity in the Tomcat logs. It is based on java.util.logging framework. By default, Tomcat 7 uses its own Java logging API to implement logging services. This is also called as JULI. This API can be found in TOMCAT_HOME/bin of the Tomcat 7 directory structures (tomcat-juli.jar). The following screenshot shows the directory structure of the bin directory where tomcat-juli.jar is placed. JULI also provides the feature for custom logging for each web application, and it also supports private per-application logging configurations. With the enhanced feature of separate class loader logging, it also helps in detecting memory issues while unloading the classes at runtime. For more information on JULI and the class loading issue, please refer to http://tomcat.apache.org/tomcat-7.0-doc/logging.html and http://tomcat.apache.org/tomcat-7.0-doc/class-loader-howto.html respectively. Loggers, appenders, and layouts There are some important components of logging which we use at the time of implementing the logging mechanism for applications. Each term has its individual importance in tracking the events of the application. Let's discuss each term individually to find out their usage: Loggers:It can be defined as the logical name for the log file. This logical name is written in the application code. We can configure an independent logger for each application. Appenders: The process of generation of logs are handled by appenders. There are many types of appenders, such as FileAppender, ConsoleAppender, SocketAppender, and so on, which are available in log4j. The following are some examples of appenders for log4j: log4j.appender.CATALINA=org.apache.log4j.DailyRollingFileAppender log4j.appender.CATALINA.File=${catalina.base}/logs/catalina.out log4j.appender.CATALINA.Append=true log4j.appender.CATALINA.Encoding=UTF-8 The previous four lines of appenders define the DailyRollingFileAppender in log4j, where the filename is catalina.out . These logs will have UTF-8 encoding enabled. If log4j.appender.CATALINA.append=false, then logs will not get updated in the log files. # Roll-over the log once per day log4j.appender.CATALINA.DatePattern='.'dd-MM-yyyy'.log' log4j.appender.CATALINA.layout = org.apache.log4j.PatternLayout log4j.appender.CATALINA.layout.ConversionPattern = %d [%t] %-5p %c- %m%n The previous three lines of code show the roll-over of log once per day. Layout: It is defined as the format of logs displayed in the log file. The appender uses layout to format the log files (also called as patterns). The highlighted code shows the pattern for access logs: <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs" prefix="localhost_access_log." suffix=".txt" pattern="%h %l %u %t &quot;%r&quot; %s %b" resolveHosts="false"/> Loggers, appenders, and layouts together help the developer to capture the log message for the application event. Types of logging in Tomcat 7 We can enable logging in Tomcat 7 in different ways based on the requirement. 
There are a total of five types of logging that we can configure in Tomcat, such as application, server, console, and so on. The following figure shows the different types of logging for Tomcat 7. These methods are used in combination with each other based on environment needs. For example, if you have issues where Tomcat services are not displayed, then console logs are very helpful to identify the issue, as we can verify the real-time boot sequence. Let's discuss each logging method briefly. Application log These logs are used to capture the application event while running the application transaction. These logs are very useful in order to identify the application level issues. For example, suppose your application performance is slow on a particular transition, then the details of that transition can only be traced in application log. The biggest advantage of application logs is we can configure separate log levels and log files for each application, making it very easy for the administrators to troubleshoot the application. Log4j is used in 90 percent of the cases for application log generation. Server log Server logs are identical to console logs. The only advantage of server logs is that they can be retrieved anytime but console logs are not available after we log out from the console. Console log This log gives you the complete information of Tomcat 7 startup and loader sequence. The log file is named as catalina.out and is found in TOMCAT_HOME/logs. This log file is very useful in checking the application deployment and server startup testing for any environment. This log is configured in the Tomcat file catalina.sh, which can be found in TOMCAT_HOME/bin. The previous screenshot shows the definition for Tomcat logging. By default, the console logs are configured as INFO mode. There are different levels of logging in Tomcat such as WARNING, INFORMATION, CONFIG, and FINE. The previous screenshot shows the Tomcat log file location, after the start of Tomcat services. The previous screenshot shows the output of the catalina.out file, where Tomcat services are started in 1903 ms. Access log Access logs are customized logs, which give information about the following: Who has accessed the application What components of the application are accessed Source IP and so on These logs play a vital role in traffic analysis of many applications to analyze the bandwidth requirement and also helps in troubleshooting the application under heavy load. These logs are configured in server.xml in TOMCAT_HOME/conf. The following screenshot shows the definition of access logs. You can customize them according to the environment and your auditing requirement. Let's discuss the pattern format of the access logs and understand how we can customize the logging format: <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs" prefix="localhost_access_log." suffix=".txt" pattern="%h %l %u %t &quot;%r&quot; %s %b" resolveHosts="false"/> Class Name: This parameter defines the class name used for generation of logs. By default, Apache Tomcat 7 uses the org.apache.catalina.valves.AccessLogValve class for access logs. Directory: This parameter defines the directory location for the log file. All the log files are generated in the log directory—TOMCAT_HOME/logs—but we can customize the log location based on our environment setup and then update the directory path in the definition of access logs. 
Prefix: This parameter defines the prefix of the access log filename, that is, by default, access log files are generated by the name localhost_access_log.yy-mm-dd.txt. Suffix: This parameter defines the file extension of the log file. Currently it is in .txt format. Pattern: This parameter defines the format of the log file. The pattern is a combination of values defined by the administrator, for example, %h = remote host address. The following screenshot shows the default log format for Tomcat 7. Access logs show the remote host address, date/time of request, method used for response, URI mapping, and HTTP status code. In case you have installed the web traffic analysis tool for application, then you have to change the access logs to a different format. Host manager These logs define the activity performed using Tomcat Manager, such as various tasks performed, status of application, deployment of application, and lifecycle of Tomcat. These configurations are done on the logging.properties, which can be found in TOMCAT_HOME/conf. The previous screenshot shows the definition of host, manager, and host-manager details. If you see the definitions, it defines the log location, log level, and prefix of the filename. In logging.properties, we are defining file handlers and appenders using JULI. The log file for manager looks similar to the following: I28 Jun, 2011 3:36:23 AM org.apache.catalina.core.ApplicationContext log INFO: HTMLManager: list: Listing contexts for virtual host 'localhost' 28 Jun, 2011 3:37:13 AM org.apache.catalina.core.ApplicationContext log INFO: HTMLManager: list: Listing contexts for virtual host 'localhost' 28 Jun, 2011 3:37:42 AM org.apache.catalina.core.ApplicationContext log INFO: HTMLManager: undeploy: Undeploying web application at '/sample' 28 Jun, 2011 3:37:43 AM org.apache.catalina.core.ApplicationContext log INFO: HTMLManager: list: Listing contexts for virtual host 'localhost' 28 Jun, 2011 3:42:59 AM org.apache.catalina.core.ApplicationContext log INFO: HTMLManager: list: Listing contexts for virtual host 'localhost' 28 Jun, 2011 3:43:01 AM org.apache.catalina.core.ApplicationContext log INFO: HTMLManager: list: Listing contexts for virtual host 'localhost' 28 Jun, 2011 3:53:44 AM org.apache.catalina.core.ApplicationContext log INFO: HTMLManager: list: Listing contexts for virtual host 'localhost' Types of log levels in Tomcat 7 There are seven levels defined for Tomcat logging services (JULI). They can be set based on the application requirement. The following figure shows the sequence of log levels for JULI: Every log level in JULI had its own functionality. The following table shows the functionality of each log level in JULI: Log level Description SEVERE(highest) Captures exception and Error WARNING Warning messages INFO Informational message, related to server activity CONFIG Configuration message FINE Detailed activity of server transaction (similar to debug) FINER More detailed logs than FINE FINEST(least) Entire flow of events (similar to trace) For example, let's take an appender from logging.properties and find out the log level used; the first log appender for localhost is using FINE as the log level, as shown in the following code snippet: localhost.org.apache.juli.FileHandler.level = FINE localhost.org.apache.juli.FileHandler.directory = ${catalina.base}/logs localhost.org.apache.juli.FileHandler.prefix = localhost. The following code shows the default file handler configuration for logging in Tomcat 7 using JULI. 
The properties are mentioned and log levels are highlighted: ############################################################ # Facility specific properties. # Provides extra control for each logger. ############################################################ org.apache.catalina.core.ContainerBase.[Catalina].[localhost].level = INFO org.apache.catalina.core.ContainerBase.[Catalina].[localhost].handlers = 2localhost.org.apache.juli.FileHandler org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/manager] .level = INFO org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/manager] .handlers = 3manager.org.apache.juli.FileHandler org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/host- manager].level = INFO org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/host- manager].handlers = 4host-manager.org.apache.juli.FileHandler
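Note that application code does not talk to JULI directly; it simply uses the standard java.util.logging API, and Tomcat's JULI handlers route the output according to logging.properties. The following small sketch shows a web application class writing at several of the levels listed above; the class name and the messages are only examples:

import java.util.logging.Level;
import java.util.logging.Logger;

public class OrderService {

    // JULI routes java.util.logging calls through the handlers
    // configured in logging.properties.
    private static final Logger LOG = Logger.getLogger(OrderService.class.getName());

    public void placeOrder(String orderId) {
        LOG.info("Placing order " + orderId);                          // INFO level
        LOG.log(Level.FINE, "Order details loaded for {0}", orderId);  // FINE, similar to debug
        try {
            // ... business logic goes here ...
        } catch (RuntimeException e) {
            LOG.log(Level.SEVERE, "Failed to place order " + orderId, e); // SEVERE captures the exception
        }
    }
}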
Silverlight 5 LOB Development : Validation, Advanced Topics, and MVVM

Packt
09 Mar 2012
8 min read
(For more resources on Silverlight, see here.) Validation One of the most important parts of the Silverlight application is the correct implementation of validations in our business logic. These can be simple details, such as the fact that the client must provide their name and e-mail address to sign up, or that before selling a book, it must be in stock. In RIA Services, validations can be defined on two levels: In entities, via DataAnnotations. In our Domain Service, server or asynchronous validations via Invoke. DataAnnotations The space named System.ComponentModel.DataAnnotations implements a series of attributes allowing us to add validation rules to the properties of our entities. The following table shows the most outstanding ones: Validation Attribute Description DataTypeAttribute Specifies a particular type of data such as date or an e-mail EnumDataTypeAttribute Ensures that the value exists in an enumeration RangeAttribute Designates minimum and maximum constraints RegularExpressionAttribute Uses a regular expression to determine valid values RequiredAttribute Specifies that a value must be provided StringLengthAttribute Designates a maximum and minimum number of characters CustomValidationAttribute Uses a custom method for validation The following code shows us how to add a field as "required": [Required()]public string Name{ get { return this._name; } set { (...) }} In the UI layer, the control linked to this field (a TextBox, in this case), automatically detects and displays the error. It can be customized as follows: These validations are based on the launch of exceptions. They are captured by user controls and bound to data elements. If there are errors, these are shown in a friendly way. When executing the application in debug mode with Visual Studio, it is possible to find that IDE captures exceptions. To avoid this, refer to the following link, where the IDE configuration is explained: http://bit.ly/riNdmp. Where can validations be added? The answer is in the metadata definition, entities, in our Domain Service, within the server project. Going back to our example, the server project is SimpleDB.Web and the Domain Service is MyDomainService. medatada.cs. These validations are automatically copied to the entities definition file and the context found on the client side. In the Simple.DB.Web.g.cs file, when the hidden folder Generated Code is opened, you will be surprised to find that some validations are already implemented. For example, the required field, field length, and so on. These are inferred from the Entity Framework model. Simple validations For validations that are already generated, let's see a simple example on how to implement those of the "required" field and "maximum length": [Required()][StringLength(60)]public string Name{ get { return this._name; } set { (...) }} Now, we will implement the syntactic validation for credit cards (format dddddddd- dddd-dddd). To do so, use the regular expression validator and add the server file MyDomainService.metadata.cs, as shown in the following code: [RegularExpression(@"d{4}-d{4}-d{4}-d{4}",ErrorMessage="Credit card not valid format should be: 9999-9999-9999-9999")]public string CreditCard { get; set; } To know how regular expressions work, refer to the following link: http://bit.ly/115Td0 and refer to this free tool to try them in a quick way: http://bit.ly/1ZcGFC. 
Custom and shared validations Basic validations are acceptable for 70 percent of validation scenarios, but there are still 30 percent of validations which do not fit in these patterns. What do you do then? RIA Services offers CustomValidatorAttribute. It permits the creation of a method which makes a validation defined by the developer. The benefits are listed below: Its code: The necessary logic can be implemented to make validations. It can be oriented for validations to be viable in other modules (for instance, the validation of an IBAN [International Bank Account]). It can be chosen if a validation is executed on only the server side (for example, a validation requiring data base readings) or if it is also copied to the client. To validate the checksum of the CreditCard field, follow these steps: Add to the SimpleDB.Web project, the class named ClientCustomValidation. Within this class, define a static model, ValidationResult, which accepts the value of the field to evaluate as a parameter and returns the validation result. public class ClientCustomValidation{ public static ValidationResult ValidMasterCard(string strcardNumber)} Implement the summarized validation method (the part related to the result call back is returned). public static ValidationResult ValidMasterCard(string strcardNumber){ // Let us remove the "-" separator string cardNumber = strcardNumber.Replace("-", ""); // We need to keep track of the entity fields that are // affected, so the UI controls that have this property / bound can display the error message when applies List<string> AffectedMembers = new List<string>(); AffectedMembers.Add("CreditCard"); (...) // Validation succeeded returns success // Validation failed provides error message and indicates // the entity fields that are affected return (sum % 10 == 0) ? ValidationResult.Success : new ValidationResult("Failed to validate", AffectedMembers);} To make validation simpler, only the MasterCard has been covered. To know more and cover more card types, refer to the page http://bit.ly/aYx39u. In order to find examples of valid numbers, go to http://bit.ly/gZpBj. Go to the file MyDomainService.metadata.cs and, in the Client entity, add the following to the CreditCard field: [CustomValidation(typeof(ClientCustomValidation),"ValidMasterCard")]public string CreditCard { get; set; } If it is executed now and you try to enter an invalid field in the CreditCard field, it won't be marked as an error. What happens? Validation is only executed on the server side. If it is intended to be executed on the client side as well, rename the file called ClientCustomValidation.cs to ClientCustomValidation.shared. cs. In this way, the validation will be copied to the Generated_code folder and the validation will be launched. In the code generated on the client side, the entity validation is associated. /// <summary>/// Gets or sets the 'CreditCard' value./// </summary>[CustomValidation(typeof(ClientCustomValidation), "ValidMasterCard")][DataMember()][RegularExpression("d{4}-d{4}-d{4}-d{4}", ErrorMessage="Creditcard not valid format should be: 9999-9999-9999-9999")][StringLength(30)]public string CreditCard{ This is quite interesting. However, what happens if more than one field has to be checked in the validation? In this case, one more parameter is added to the validation method. It is ValidationContext, and through this parameter, the instance of the entity we are dealing with can be accessed. 
public static ValidationResult ValidMasterCard( string strcardNumber, ValidationContext validationContext){ client currentClient = (client)validationContext.ObjectInstance; Entity-level validations Fields validation is quite interesting, but sometimes, rules have to be applied in a higher level, that is, entity level. RIA Services implements some machinery to perform this kind of validation. Only a custom validation has to be defined in the appropriate entity class declaration. Following the sample we're working upon, let us implement one validation which checks that at least one of the two payment methods (PayPal or credit card) is informed. To do so, go to the ClientCustomValidation.shared.cs (SimpleDB web project) and add the following static function to the ClientCustomValidation class: public static ValidationResult ValidatePaymentInformed(clientCurrentClient){ bool atLeastOnePaymentInformed = ((CurrentClient.PayPalAccount != null && CurrentClient.PayPalAccount != string.Empty) || (CurrentClient.CreditCard != null && CurrentClient.CreditCard != string.Empty)); return (atLeastOnePaymentInformed) ? ValidationResult.Success : new ValidationResult("One payment method must be informed at least");} Next, open the MyDomainService.metadata file and add, in the class level, the following annotation to enable that validation: [CustomValidation(typeof(ClientCustomValidation), ValidatePaymentInformed")][MetadataTypeAttribute(typeof(client.clientMetadata))]public partial class client When executing and trying the application, it will be realized that the validation is not performed. This is due to the fact that, unlike validations in the field level, the entity validations are only launched client-side when calling EndEdit or TryValidateObject. The logic is to first check if the fields are well informed and then make the appropriate validations. In this case, a button will be added, making the validation and forcing it to entity level. To know more about validation on entities, go to http://bit.ly/qTr9hz. Define the command launching the validation on the current entity in the ViewModel as the following code: private RelayCommand _validateCommand;public RelayCommand ValidateCommand{ get { if (_validateCommand == null) { _validateCommand = new RelayCommand(() => { // Let us clear the current validation list CurrentSelectedClient.ValidationErrors.Clear(); var validationResults = new List<ValidationResult>(); ValidationContext vcontext = new ValidationContext(CurrentSelectedClient, null, null); // Let us run the validation Validator.TryValidateObject(CurrentSelectedClient, vcontext, validationResults); // Add the errors to the entities validation error // list foreach (var res in validationResults) { CurrentSelectedClient.ValidationErrors.Add(res); } },(() => (CurrentSelectedClient != null)) ); } return _validateCommand; }} Define the button in the window and bind it to the command: <Button Content="Validate" Command="{Binding Path=ValidateCommand}"/> While executing, it will be appreciated that the fields be blank, even if we click the button. Nonetheless, when adding a breaking point, the validation is shown. What happens is, there is a missing element showing the result of that validation. In this case, the choice will be to add a header whose DataContext points to the current entity. If entity validations fail, they will be shown in this element. For more information on how to show errors, check the link http://bit.ly/ad0JyD. The TextBox added will show the entity validation errors. 
The final result will look as shown in the following screenshot:
Introduction to Enterprise Business Messages

Packt
27 Feb 2012
4 min read
(For more resources on Oracle, see here.) Before we jump into the AIA Enterprise Business Message (EBM) standards, let us understand a little more about Business Messages. In general, Business Message is information shared between people, organizations, systems, or processes. Any information communicated to any object in a standard understandable format are called messages. In the application integration world, there are various information-sharing approaches that are followed. Therefore, we need not go through it again, but in a service-oriented environment, message-sharing between systems is the fundamental characteristic. There should be a standard approach followed across an enterprise, so that every existing or new business system could understand and follow the uniform method. XML technology is a widely-accepted message format by all the technologies and tools. Oracle AIA framework provides a standard messaging format to share the information between AIA components. Overview of Enterprise Business Message (EBM) Enterprise Business Messages (EBMs) are business information exchanged between enterprise business systems as messages. EBMs define the elements that are used to form the messages in service-oriented operations. EBM payloads represent specific content of an EBO that is required to perform a specific service. In an AIA infrastructure, EBMs are messages exchanged between all components in the Canonical Layer. Enterprise Business Services (EBS) accepts EBM as a request message and responds back to EBM as an output payload. However, in Application Business Connector Service (ABCS), the provider ABCS accepts messages in the EBM format and translates them into the application provider's Application Business Message (ABM) format. Alternatively, the requester ABCS receives ABM as a request message, transforms it into an EBS, and calls the EBS to submit the EBM message. Therefore, EBM has been a widely-accepted message standard within AIA components. The context-oriented EBMs are built using a set of common components and EBO business components. Some EBMs may require more than one EBO to fulfill the business integration needs. The following diagram describes the role of an EBM in the AIA architecture: EBM characteristics The fundamentals of EBM and its characteristics are as follows: Each business service request and response should be represented in an EBM format using a unique combination of an action and an EBO instance. One EBM can support only one action or verb. EBM component should import the common component to make use of metadata and data types across the EBM structure. EBMs are application interdependencies. Any requester application that invokes Enterprise Business Services (EBS) through ABCS should follow the EBM format standards to pass as payload in integration. The action that is embedded in the EBM is the only action that sender or requester application can execute to perform integration. The action in the EBM may also carry additional data that has to be done as part of service execution. For example, the update action may carry information about whether the system should notify after successful execution of update. The information that exists in the EBM header is common to all EBMs. However, information existing in the data area and corresponding actions are specific to only one EBM. EBM headers may carry tracking information, auditing information, source and target system information, and error-handling information. 
EBM components do not rely on the underlying transport protocol. Any service protocols such as HTTP, HTTPs, SMTP, SOAP, and JMS should carry EBM payload documents. Exploring AIA EBMs We explored the physical structure of the Oracle AIA EBO in the previous chapter; EBMs do not have a separate structure. EBMs are also part of the EBO's physical package structure. Every EBO is bound with an EBM. The following screenshot will show the physical structure of the EBM groups as directories: As EBOs are grouped as packages based on the business model, EBMs are also a part of that structure and can be located along with the EBO schema under the Core EBO package.
Oracle JDeveloper 11gR2: Application Modules

Packt
20 Jan 2012
12 min read
(For more resources on JDeveloper, see here.) Creating and using generic extension interfaces In this recipe, we will go over how to expose any of that common functionality as a generic extension interface. By doing so, this generic interface becomes available to all derived business components, which in turn can be exposed to its own client interface and make it available to the ViewController layer through the bindings layer. How to do it… Open the shared components workspace in JDeveloper Create an interface called ExtApplicationModule as follows: public interface ExtApplicationModule { // return some user authority level, based on // the user's name public int getUserAuthorityLevel();} Locate and open the custom application module framework extension class ExtApplicationModuleImpl. Modify it so that it implements the ExtApplicationModule interface. Then, add the following method to it: public int getUserAuthorityLevel() { // return some user authority level, based on the user's name return ("anonymous".equalsIgnoreCase(this.getUserPrincipalName()))? AUTHORITY_LEVEL_MINIMAL : AUTHORITY_LEVEL_NORMAL;} Rebuild the SharedComponents workspace and deploy it as an ADF Library JAR. Now, open the HRComponents workspace Locate and open the HrComponentsAppModule application module defnition. Go to the Java section and click on the Edit application module client interface button (the pen icon in the Client Interface section). On the Edit Client Interface dialog, shuttle the getUserAuthorityLevel() interface from the Available to the Selected list. How it works… In steps 1 and 2, we have opened the SharedComponents workspace and created an interface called HrComponentsAppModule. This interface contains a single method called getUserAuthorityLevel(). Then, we updated the application module framework extension class HrComponentsAppModuleImpl so that it implements the HrComponentsAppModule interface (step 3). We also implemented the method getUserAuthorityLevel() required by the interface (step 4). For the sake of this recipe, this method returns a user authority level based on the authenticated user's name. We retrieve the authenticated user's name by calling getUserPrincipal().getName() on the SecurityContext, which we retrieve from the current ADF context (ADFContext.getCurrent().getSecurityContext()). If security is not enabled for the ADF application, the user's name defaults to anonymous. In this example, we return AUTHORITY_LEVEL_MINIMAL for anonymous users, and for all others we return AUTHORITY_LEVEL_NORMAL. We rebuilt and redeployed the SharedComponents workspace in step 5. In steps 6 through 9, we opened the HRComponents workspace and added the getUserAuthorityLevel() method to the HrComponentsAppModuleImpl client interface. By doing this, we exposed the getUserAuthorityLevel() generic extension interface to a derived application module, while keeping its implementation in the base framework extension class ExtApplicationModuleImpl. There's more… Note that the steps followed in this recipe to expose an application module framework extension class method to a derived class' client interface can be followed for other business components framework extension classes as well. Exposing a custom method as a web service Service-enabling an application module allows you, among others, to expose custom application module methods as web services. This is one way for service consumers to consume the service-enabled application module. 
The other possibilities are accessing the application module from another application module, and accessing it through a Service Component Architecture (SCA) composite. Service-enabling an application module allows access to the same application module both through web service clients and interactive web user interfaces. In this recipe, we will go over the steps involved in service-enabling an application module by exposing a custom application module method to its service interface.

Getting ready

The HRComponents workspace requires a database connection to the HR schema.

How to do it…

1. Open the HRComponents project in JDeveloper.
2. Double-click on the HrComponentsAppModule application module in the Application Navigator to open its definition.
3. Go to the Service Interface section and click on the Enable support for Service Interface button (the green plus sign icon in the Service Interface section). This will start the Create Service Interface wizard.
4. In the Service Interface page, accept the defaults and click Next.
5. In the Service Custom Methods page, locate the adjustCommission() method and shuttle it from the Available list to the Selected list. Click on Finish.
6. Observe that the adjustCommission() method is shown in the Service Interface Custom Methods section of the application module's Service Interface. The service interface files are generated in the serviceinterface package under the application module and are shown in the Application Navigator.
7. Double-click on the weblogic-ejb-jar.xml file under the META-INF package in the Application Navigator to open it.
8. In the Beans section, select the com.packt.jdeveloper.cookbook.hr.components.model.application.common.HrComponentsAppModuleServiceBean bean and click on the Performance tab. For the Transaction timeout field, enter 120.

How it works…

In steps 1 through 6, we exposed the adjustCommission() custom application module method to the application module's service interface. This is a custom method that adjusts all the Sales department employees' commissions by the percentage specified. As a result of exposing the adjustCommission() method to the application module service interface, JDeveloper generates the following files:

- HrComponentsAppModuleService.java: Defines the service interface
- HrComponentsAppModuleServiceImpl.java: The service implementation class
- HrComponentsAppModuleService.xsd: The service schema file describing the input and output parameters of the service
- HrComponentsAppModuleService.wsdl: The Web Services Description Language (WSDL) file, describing the web service
- ejb-jar.xml: The EJB deployment descriptor, located in the src/META-INF directory
- weblogic-ejb-jar.xml: The WebLogic-specific EJB deployment descriptor, located in the src/META-INF directory

In steps 7 and 8, we adjust the service Java Transaction API (JTA) transaction timeout to 120 seconds (the default is 30 seconds). This will avoid any exceptions related to transaction timeouts when invoking the service. This is an optional step added specifically for this recipe, as the process of adjusting the commission for all sales employees might take longer than the default 30 seconds, causing the transaction to time out.

To test the service using the JDeveloper integrated WebLogic application server, right-click on the HrComponentsAppModuleServiceImpl.java service implementation file in the Application Navigator and select Run or Debug from the context menu. This will build and deploy the HrComponentsAppModuleService web service into the integrated WebLogic server.
Once the deployment process is completed successfully, you can click on the service URL in the Log window to test the service. This will open a test window in JDeveloper and also enable the HTTP Analyzer. Otherwise, copy the target service URL from the Log window and paste it into your browser's address field. This will bring up the service's endpoint page. On this page, select the adjustCommission method from the Operation drop-down, specify the commissionPctAdjustment parameter amount, and click on the Invoke button to execute the web service. Observe how the employees' commissions are adjusted in the EMPLOYEES table in the HR schema.

There's more…

For more information on service-enabling application modules, consult the chapter Integrating Service-Enabled Application Modules in the Fusion Developer's Guide for Oracle Application Development Framework, which can be found at http://docs.oracle.com/cd/E24382_01/web.1112/e16182/toc.htm.

Accessing a service interface method from another application module

In the recipe Exposing a custom method as a web service in this article, we went through the steps required to service-enable an application module and expose a custom application module method as a web service. We will continue in this recipe by explaining how to invoke the custom application module method, exposed as a web service, from another application module.

Getting ready

This recipe calls the adjustCommission() custom application module method that was exposed as a web service in the Exposing a custom method as a web service recipe in this article. It requires that the web service is deployed in WebLogic and that it is accessible. The recipe also requires that both the SharedComponents and HRComponents workspaces are deployed as ADF Library JARs and that they are added to the workspace used by this specific recipe. Additionally, a database connection to the HR schema is required.

How to do it…

1. Ensure that you have built and deployed both the SharedComponents and HRComponents workspaces as ADF Library JARs.
2. Create a File System connection in the Resource Palette to the directory path where the SharedComponents.jar and HRComponents.jar ADF Library JARs are located.
3. Create a new Fusion Web Application (ADF) called HRComponentsCaller using the Create Fusion Web Application (ADF) wizard.
4. Create a new application module called HRComponentsCallerAppModule using the Create Application Module wizard. In the Java page, check the Generate Application Module Class checkbox to generate a custom application module implementation class. JDeveloper will ask you for a database connection during this step, so make sure that a new database connection to the HR schema is created.
5. Expand the File System | ReUsableJARs connection in the Resource Palette and add both the SharedComponents and HRComponents libraries to the project. You do this by right-clicking on the JAR file and selecting Add to Project… from the context menu.
6. Bring up the business components Project Properties dialog and go to the Libraries and Classpath section. Click on the Add Library… button and add the BC4J Service Client and JAX-WS Client extensions.
7. Double-click on the HRComponentsCallerAppModuleImpl.java custom application module implementation file in the Application Navigator to open it in the Java editor.
8. Add the following method to it:

    public void adjustCommission(BigDecimal commissionPctAdjustment) {
        // get the service proxy
        HrComponentsAppModuleService service =
            (HrComponentsAppModuleService) ServiceFactory
                .getServiceProxy(HrComponentsAppModuleService.NAME);
        // call the adjustCommission() service
        service.adjustCommission(commissionPctAdjustment);
    }

9. Expose adjustCommission() to the HRComponentsCallerAppModule client interface.
10. Finally, in order to be able to test the HRComponentsCallerAppModule application module with the ADF Model Tester, locate the connections.xml file in the Application Resources section of the Application Navigator under the Descriptors | ADF META-INF node, and add the following configuration to it:

    <Reference name="{/com/packt/jdeveloper/cookbook/hr/components/model/application/common/}HrComponentsAppModuleService"
               className="oracle.jbo.client.svc.Service">
      <Factory className="oracle.jbo.client.svc.ServiceFactory"/>
      <RefAddresses>
        <StringRefAddr addrType="serviceInterfaceName">
          <Contents>com.packt.jdeveloper.cookbook.hr.components.model.application.common.serviceinterface.HrComponentsAppModuleService</Contents>
        </StringRefAddr>
        <StringRefAddr addrType="serviceEndpointProvider">
          <Contents>ADFBC</Contents>
        </StringRefAddr>
        <StringRefAddr addrType="jndiName">
          <Contents>HrComponentsAppModuleServiceBean#com.packt.jdeveloper.cookbook.hr.components.model.application.common.serviceinterface.HrComponentsAppModuleService</Contents>
        </StringRefAddr>
        <StringRefAddr addrType="serviceSchemaName">
          <Contents>HrComponentsAppModuleService.xsd</Contents>
        </StringRefAddr>
        <StringRefAddr addrType="serviceSchemaLocation">
          <Contents>com/packt/jdeveloper/cookbook/hr/components/model/application/common/serviceinterface/</Contents>
        </StringRefAddr>
        <StringRefAddr addrType="jndiFactoryInitial">
          <Contents>weblogic.jndi.WLInitialContextFactory</Contents>
        </StringRefAddr>
        <StringRefAddr addrType="jndiProviderURL">
          <Contents>t3://localhost:7101</Contents>
        </StringRefAddr>
      </RefAddresses>
    </Reference>

How it works…

In steps 1 and 2, we made sure that both the SharedComponents and HRComponents ADF Library JARs are deployed and that a file system connection was created, so that both of these libraries could be added to the newly created project (in step 5). Then, in steps 3 and 4, we created a new Fusion web application based on ADF, and an application module called HRComponentsCallerAppModule. It is from this application module that we intend to call the adjustCommission() custom application module method, exposed as a web service by the HrComponentsAppModule service-enabled application module in the HRComponents library JAR. For this reason, in step 4, we generated a custom application module implementation class.

We proceeded by adding the necessary libraries to the new project in steps 5 and 6. Specifically, the following libraries were added: SharedComponents.jar, HRComponents.jar, BC4J Service Client, and JAX-WS Client.

In steps 7 through 9, we created a custom application module method called adjustCommission(), in which we wrote the necessary glue code to call our web service. In it, we first retrieve the web service proxy, as a HrComponentsAppModuleService interface, by calling ServiceFactory.getServiceProxy() and specifying the name of the web service, which is indicated by the constant HrComponentsAppModuleService.NAME in the service interface. Then we call the web service through the retrieved interface.
In the last step, we provided the necessary configuration in connections.xml so that we are able to call the web service from an RMI client (the ADF Model Tester). This file is used by the web service client to locate the web service. For the most part, the Reference information that was added to it was generated automatically by JDeveloper in the Exposing a custom method as a web service recipe, so it was copied from there. The extra configuration that had to be added consists of the JNDI context properties jndiFactoryInitial and jndiProviderURL, which are needed to resolve the web service on the deployed server. You should change these appropriately for your deployment. Note that these parameters are the same as the initial context parameters used to look up the service when running in a managed environment.

To test calling the web service, ensure that you have first deployed it and that it is running. You can then use the ADF Model Tester, select the adjustCommission method, and execute it.

There's more…

For additional information related to topics such as securing the ADF web service, enabling support for binary attachments, deploying to WebLogic, and more, refer to the Integrating Service-Enabled Application Modules section in the Fusion Developer's Guide for Oracle Application Development Framework, which can be found at http://docs.oracle.com/cd/E24382_01/web.1112/e16182/toc.htm.


Setting Up a Development Environment

Packt
13 Jan 2012
14 min read
Selecting your virtual environment

Prisoners serving life sentences in Canada have what is known as a faint hope clause, which gives them a glimmer of a chance of getting parole after 15 years. Those waiting for Microsoft to provide a version of Virtual PC that can run Virtual Hard Drives (VHDs) hosting 64-bit operating systems (such as Windows Server 2008 R2), however, have no such hope of ever seeing that piece of software. But miracles do happen, and I hope that the release of a 64-bit capable Virtual PC renders this section of the article obsolete. If this has in fact happened, go with it and proceed to the following section.

Getting ready

Head into your computer's BIOS settings and enable the virtualization setting. The exact setting you are looking for varies widely, so please consult your manufacturer's documentation. This setting seems universally defaulted to off, so I am very sure you will need to perform this action. (A quick scripted check of your host appears at the end of this section.)

How to do it...

Since you are still reading, however, it is safe to say that a miracle has not yet happened. Your first task is to select a suitable virtualization technology that can support a 64-bit guest operating system. The recipe here is to consider the choices in this order, with the outcome being your selected virtual environment:

1. Microsoft Virtualization: Hyper-V certainly has the ability to create and run Virtual Hard Disks (VHDs) with 64-bit operating systems. It's free (that is, you can install the Hyper-V role), but it requires the base operating system to be Windows Server 2008 R2, and it can be brutal to get that running properly on something like a laptop, primarily because of driver issues. If your laptop is running Windows 7, I recommend looking at creating a dual boot with a boot-to-VHD option, where the other boot partition is Windows Server 2008 R2. The main disadvantage is coming up with a (preferably licensed) installation of Windows Server 2008 R2 as the main computer operating system (or as a dual boot option). Or perhaps your company runs Hyper-V on their server farm and would be willing to host your development environment for you? Either way, if you have managed to get access to a Hyper-V server, you are good to go!
2. VMware Workstation: Go to http://www.vmware.com and download my absolute favorite virtualization technology, VMware Workstation. It is fully featured, powerful, and runs on Windows 7. I have used it for years and love it. You must of course pay for a license, but please believe me, it is a worthwhile investment. You can sign up for a 30-day trial to explore the benefits. Note that you only need one copy of VMware Workstation to create a virtual machine. Once you have created it, you can run it anywhere using the freely available VMware Player.
3. Oracle VirtualBox: Go to http://www.virtualbox.org/ and download this free software that will run on Windows 7 and create and host 64-bit guest operating systems. The reason that this is at the bottom of the list is that I personally do not have experience using this software. However, I have colleagues who have used it and have had no problems with it. Give it a try and see if it works as well as the paid VMware option.

With your selected virtualization technology in hand, head to the next section to install and configure Windows Server 2008 R2, which is the base operating system required for an installation of SharePoint Server 2010.
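Before creating the virtual machine, it can be worth confirming from the host that your processor and operating system are 64-bit capable. The following is only a rough PowerShell sketch run on the host, not part of the original recipe; the VirtualizationFirmwareEnabled property is not exposed on every Windows version, so treat it as optional and rely on the BIOS setting itself.

    # Check that the host CPU and OS can support a 64-bit guest
    Get-WmiObject Win32_Processor | Select-Object Name, AddressWidth, DataWidth
    Get-WmiObject Win32_OperatingSystem | Select-Object Caption, OSArchitecture

    # On newer Windows versions this also reports whether VT is enabled in firmware;
    # on Windows 7 the property may be missing, in which case check the BIOS directly.
    Get-WmiObject Win32_Processor | Select-Object VirtualizationFirmwareEnabled

If AddressWidth or OSArchitecture reports 64-bit and virtualization is enabled in the BIOS, any of the three products above should be able to host a Windows Server 2008 R2 guest.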
Installing and configuring Windows Server 2008 R2

SharePoint 2010 requires the Windows Server 2008 R2 operating system in order to run. In this recipe, we will configure the components of Windows Server 2008 R2 necessary to get ready to install SQL Server 2008 and SharePoint 2010.

Getting ready

Download Windows Server 2008 R2 from your MSDN subscription, or type windows server 2008 R2 trial download into your favorite search engine to download the 180-day trial from the Microsoft site. This article does not cover actually installing the base operating system; the specific instructions to do so depend on the virtualization software. Generally, the operating system will be provided as an ISO image (the file extension will be .iso). An ISO is a disk image, and all virtualization software that I am aware of will let you mount (attach) an ISO image to the virtual machine as a CD drive. This means that when you elect to create a new virtual machine, you will normally be prompted for the ISO image, and the installation of the operating system should proceed in a familiar and relatively automated fashion. So for this recipe, ready means that you have your virtualization software up and running, the Windows Server 2008 R2 base operating system is installed, and you are able to log in as the Administrator (and that you are effectively logging in for the first time).

How to do it...

1. Log in as the Administrator. You will be prompted to change the password the first time. I suggest choosing a very commonly used Microsoft password, Password1; however, feel free to select a password of your choice, but use it consistently throughout.
2. The Initial Configuration Tasks screen will come up automatically. On this screen:
   - Activate Windows using your 180-day trial key or your MSDN key.
   - Select Provide computer name and domain. Change the computer name to a simpler one of your choice; in my case, I named the machine OPENHIGHWAY. Leave the Member of option as Workgroup. The computer will require a reboot.
3. In the Update this server section, choose Download and install updates. Click on the Change settings link, select the option Never check for updates, and click OK. Click the Check for updates link; the important updates will be selected. Click on Install Updates. Now is a good time for a coffee break! You will need to reboot the server when the updates complete.
4. In the Customize this server section, click on Add Features. Select the Desktop Experience and Windows PowerShell Integrated Scripting Environment (ISE) options. Choose Add Required Features when prompted to do so. Reboot the server when prompted to do so.
5. If the Initial Configuration Tasks screen appears now, or in the future, you may now select the checkbox for Do not show this window at logon. We will continue configuration from the Server Manager, which should be displayed on your screen. If not, launch the Server Manager using the icon on the taskbar.
6. We return to Server Manager to continue the configuration. OPTIONAL: Click on Configure Remote Desktop if you have a preference for accessing your virtual machine using Remote Desktop (RDP) instead of using the virtual machine's console software.
7. In the Security Information section, click Go to Windows Firewall. Click on the Windows Firewall Properties link. From the dialog, go to each of the tabs, namely Domain Profile, Private Profile, and Public Profile, set the Firewall State to Off on each tab, and click OK.
8. Click on the Server Manager node, and from the main screen, click on the Configure IE ESC link. Set both options to Off and click OK.
9. From the Server Manager, expand the Configuration node, expand the Local Users and Groups node, and then click on the Users folder. Right-click on the Administrator account and select Properties. Select the option Password never expires and click OK.
10. From the Server Manager, click the Roles node and click the Add Roles link. Click Next on the introductory screen and select the checkbox for Active Directory Domain Services. Click Next, again click Next, and then click Install. After completion, click the Close this wizard and launch the Active Directory Domain Services Installation Wizard (dcpromo.exe) link. Now, carry out the following steps:
    - From the welcome screen of the wizard that pops up, select the checkbox Use advanced mode installation, click Next, and again click Next on the Operating System Compatibility screen.
    - Select the option Create a new domain in a new forest and click Next.
    - Choose your domain (FQDN)! This is completely internal to your development server and does not have to be real. For article purposes, I am using theopenhighway.net, as shown in the following screenshot. Then click Next.
    - From the Set Forest Functional Level drop-down, choose Windows Server 2008 R2 and click Next.
    - Click Next on the Additional Domain Controller Options screen.
    - Select Yes on the Static IP assignment screen.
    - Click Yes on the DNS Delegation Warning screen.
    - Click Next on the Location for Database, Log Files, and SYSVOL screen.
    - On the Directory Services Restore Mode Administrator Password screen, enter the same password that you used for the Administrator account (in my case, Password1). Click Next.
    - Click Next on the Summary screen.
    - Select the Reboot on completion option; otherwise, reboot the server after the installation completes.
11. You will now configure a user account that will run the application pools for the SharePoint web applications in IIS. From the Server Manager, expand the Roles node. Keep expanding Active Directory Domain Services until you see the Users folder. Click on the Users folder. Now carry out the following:
    - Right-click on the Users folder and select New | User. Enter SP_AppPool in the Full name field and also enter SP_AppPool in the User logon name field, then click Next.
    - Enter the password as Password1 (or the same as you had selected for the Administrator account). Deselect the option User must change password at next logon and select the option Password never expires. Click Next and then click Finish.
12. A loopback check is a security feature, introduced in Windows Server 2003 SP1, that mitigates reflection attacks. You will likely encounter connection issues with your local websites, and it is therefore universally recommended that you disable the loopback check on a development server. This is done from the registry editor (a scripted equivalent of this change and the earlier firewall change appears after these steps):
    - Click the Start menu button, choose Run…, enter Regedit, and click OK to bring up the registry editor.
    - Navigate to HKEY_LOCAL_MACHINE | SYSTEM | CurrentControlSet | Control | Lsa.
    - Right-click the Lsa node and select New | DWORD (32-bit) Value.
    - In place of New Value #1, type DisableLoopbackCheck.
    - Right-click DisableLoopbackCheck, select Modify, change the value to 1, and click OK.

Congratulations! You have successfully configured Windows Server 2008 R2.
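If you prefer to script the firewall and loopback-check changes rather than clicking through the dialogs, a rough PowerShell equivalent, run from an elevated prompt on the development server, might look like the following. This is a sketch of the same steps, not part of the original recipe, and is only appropriate on an isolated development machine.

    # Turn off all Windows Firewall profiles (development server only)
    netsh advfirewall set allprofiles state off

    # Create the DisableLoopbackCheck registry value and set it to 1
    New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\Lsa" -Name "DisableLoopbackCheck" -PropertyType DWord -Value 1 -Force

A reboot is not strictly required for the registry change, but signing out and back in ensures the new setting is picked up consistently.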
There's more...

The Windows Shutdown Event Tracker is simply annoying on a development machine. To turn this feature off, click the Start button, select Run…, enter gpedit.msc, and click OK. Scroll down, right-click on Display Shutdown Event Tracker, and select Edit. Select the Disabled option and click OK, as shown in the following screenshot.

Installing and configuring SQL Server 2008 R2

SharePoint 2010 requires Microsoft SQL Server as a fundamental component of the overall SharePoint architecture. The content that you plan to manage in SharePoint, including web content and documents, literally is stored within and served from SQL Server databases. The SharePoint 2010 architecture itself relies on information stored in SQL Server databases, such as configuration and the many service applications. In this recipe, we will install and configure the components of SQL Server 2008 R2 necessary to install SharePoint 2010.

Getting ready

I do not recommend SQL Server Express for your development environment, although it is a possible, free, and valid choice for the installation of SharePoint 2010. In my personal experience, I have valued the full power and flexibility of the full version of SQL Server, as well as not having to live with the constraints and limitations of SQL Express. Besides, there is another little reason too! The Enterprise edition of SQL Server is either readily available with your MSDN subscription or downloadable as a trial from the Microsoft site.

Download SQL Server 2008 R2 Enterprise from your MSDN subscription, or type sql server 2008 enterprise R2 trial download into your favorite search engine to download the 180-day trial from the Microsoft site. For SQL Server 2008 R2 Enterprise, if you have MSDN software, then you will be provided with an ISO image that you can attach to the virtual machine. If you download your SQL Server from the Microsoft site as a trial, extract the software (it is a self-extracting EXE) on your local machine, and then share the folder with your virtual machine. Finally, run the Setup.exe file.

How to do it...

Here is your recipe for installing SQL Server 2008 R2 Enterprise. Carry out the following steps to complete this recipe:

1. You will be presented with the SQL Server Installation Center; on the left side of the screen, select Installation, as shown in the following screenshot.
2. For the choices presented on the Installation screen, select New installation or add features to an existing installation.
3. The Setup Support Rules (shown in the following screenshot) will run to identify any possible problems that might occur when installing SQL Server. All rules should pass. Click OK to continue.
4. You will be presented with the SQL Server 2008 R2 Setup screen. On the first screen, you can select an evaluation or use your product key (from, for example, MSDN), and then click Next.
5. Accept the terms in the license, but do not check the Send feature usage data to Microsoft checkbox, and click Next.
6. On the Setup Support Files screen, click Install. All tests will pass except for one warning that you can safely ignore (the one noting that we are installing on a domain controller); click Next, as shown in the following screenshot.
7. On the Setup Role screen, select SQL Server Feature Installation and click Next.
8. On the Feature Selection screen, as shown in the following screenshot, carry out the following tasks:
   - In Instance Features, select Database Engine Services (including both SQL Server Replication and Full-Text Search), Analysis Services, and Reporting Services.
   - In Shared Features, select Business Intelligence Development Studio, Management Tools - Basic (and Management Tools - Complete), and Microsoft Sync Framework.
   - Finally, click Next.
9. On the Installation Rules screen, click Next.
10. On the Instance Configuration screen, click Next.
11. On the Disk Space Requirements screen, click Next.
12. On the Server Configuration screen:
    - Set the Startup Type for SQL Server Agent to Automatic.
    - Click on the button Use the same account for all SQL Server services. Select the account NT AUTHORITY\SYSTEM and click OK.
    - Finally, click Next.
13. On the Database Engine Configuration screen, go to the Account Provisioning tab and click the Add Current User button under Specify SQL Server administrators. Finally, click Next.
14. On the Analysis Services Configuration screen, go to the Account Provisioning tab and click the Add Current User button under Specify which users have administrative permissions for Analysis Services. Finally, click Next.
15. On the Reporting Services Configuration screen, select the option Install, but do not configure the report server. Now, click Next.
16. On the Error Reporting screen, click Next.
17. On the Installation Configuration Rules screen, click Next.
18. On the Ready to Install screen, click Install. Your patience will be rewarded with the Complete screen! Finally, click Close. The Complete screen is shown in the following screenshot.
19. You can close the SQL Server Installation Center.
20. Configure SQL Server security for the SP_AppPool account (a scripted equivalent appears after this recipe):
    - Click Start | All Programs | SQL Server 2008 R2 | SQL Server Management Studio.
    - On Connect to Server, type a period (.) in the Server name field and click Connect.
    - Expand the Security node. Right-click Logins and select New Login.
    - Use the Search function and enter SP_AppPool in the Enter the object name to select box. Click the Check Names button and then click OK. In my case, you see the properly formatted THEOPENHIGHWAY\SP_AppPool in the Login name text box.
    - On the Server Roles tab, ensure that the dbcreator and securityadmin roles are selected (in addition to the already selected public role). Finally, click OK.

Congratulations! You have successfully installed and configured SQL Server 2008 R2 Enterprise.
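As an aside, the SP_AppPool login and its server role assignments can also be created from the command line instead of Management Studio. The following PowerShell sketch, which is an assumption-laden alternative rather than part of the original recipe, uses sqlcmd against the local default instance; THEOPENHIGHWAY is the domain chosen earlier, so substitute your own.

    $login = "THEOPENHIGHWAY\SP_AppPool"   # adjust to the domain you created with dcpromo
    $query = @"
    CREATE LOGIN [$login] FROM WINDOWS;
    EXEC sp_addsrvrolemember '$login', 'dbcreator';
    EXEC sp_addsrvrolemember '$login', 'securityadmin';
    "@
    # -S . targets the local default instance, -E uses Windows authentication
    & sqlcmd -S . -E -Q $query

Either route ends with the same result: a Windows login for SP_AppPool holding the dbcreator and securityadmin server roles, which is what SharePoint 2010 expects of its setup account.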

Google Apps: Surfing the Web

Packt
13 Dec 2011
8 min read
Browsing and using websites

Back in the day, when I first started writing, there were two ways to research: own a lot of books (I did and still do), and/or spend a lot of time at the library (I did, don't have to now). This all changed in the early 90s when Sir Tim Berners-Lee invented the World Wide Web (WWW). The web used the Internet, which was already there, although Al Gore did not invent the Internet. Although earlier experiments took place, Vinton Cerf's development of the basic transmission protocols making all we enjoy today possible gives him the title "Father of the Internet". All of us reading this book use the Internet and the web almost every day. App Inventor itself relies extensively on the web; you have to use the web in designing apps and downloading the Blocks Editor to power them up. Adding web browsing to our apps gives us:

- Access to literally trillions of web pages (no one knows how many; Google said it was over a trillion in 2008, and growth has been tremendous since then).
- A way to stretch the limited resources and storage capacity of the user's phone many times over, by drawing on the millions of dollars invested in server farms (vast collections of interconnected computers) and thousands of web pages on commercial sites (which they want us to use, even beg us to use, because it promotes their products or services).
- The ability to write powerful and useful apps with really little effort on our part.

Take, for example, what I referred to earlier in this book as a link app. This is an app that, when called by the user, simply uses ActivityStarter to open a browser and load a web page. Let's whip one up.

Time for action – building an eBay link app

Yes, eBay loves us, especially if we buy or sell items on this extensive online auction site. And they want you to do it not just at home but when you're out with only your smartphone. To encourage use from your phone, eBay has spent a gazillion dollars (that's a lot) in providing a mobile site (http://m.ebay.com) that makes the entire eBay site (hundreds of thousands, probably millions of pages) available and useful to you. Back to that word, leverage: the ancient Greek mathematician Archimedes said about levers, "Give me a place to stand on and I will move the Earth." Our app's leverage won't move the Earth, but we can sure bid on just about everything on it.

Okay, the design of our eBay app is very simple, since the screen will be there for only a second or so; I'll explain that in a moment. So, we need just a label and two non-visual components: ActivityStarter and Clock. In the Properties column for the ActivityStarter, put android.intent.action.VIEW in the Action field (again, this is how your app calls the phone's web browser, and the way it's entered is case-sensitive). This gives us something as shown in the following screenshot (with a little formatting added).

Now, the reason for the LOADING...\nInternet connection required text (in the label, \n being a line break) is a basic notifier and error trap. First, if the Internet connection is slow, it lets the user know something is happening. Second, if there is no 3G or Wi-Fi connection, it tells the user why nothing will happen until they get a connection. Simple additions like this (basically thinking of the user's experience in using our apps) define the difference between amateur and professional apps. Always try to anticipate, because if we leave any way possible at all to screw it up, some user will find it. And who gets blamed? Right.
You, me, or whatever idiot wrote that defective app. And they will zing you on Android Market.

Okay, now to our buddy: the Blocks Editor. We need only one small block group. Everything is in the Clock1.Timer block to connect to the eBay mobile site and then, job done, gracefully exit. Inside the clock timer frame, we put four blocks, which accomplish the following logic:

1. Into ActivityStarter1.DataUri goes the web address of the eBay mobile website home: http://m.ebay.com.
2. The ActivityStarter1.StartActivity block calls the phone's web browser and passes the address to it.
3. Clock1.TimerAlwaysFires set to false tells AI, "Stop, don't do anything else."
4. Finally, the close application block (which we should always be nice and include somewhere in our apps) removes the app from the phone or other Android device's memory, releasing those always scarce resources.

Make a nice icon, and it's ready to go onto Android Market (I put mine on as eBay for Droids).

What just happened?

That's it: a complete application giving access to a few million great bargains on eBay. This is how it will look when the eBay mobile site home page opens on the user's phone:

Pretty neat, huh? And there are lots of other major sites out there that offer mobile sites, easily converted into link apps such as this eBay example. But what if you want to make the app more complex, with a lot of choices? One where, after the user finishes browsing a site, they come back to your app and choose yet another site to visit? No problem. Here's an example of that. I recently published an app called AVLnews (AVL is the airport code for Asheville, North Carolina, where I live). This app actually lets the user drill down into local news for all 19 counties in the Western North Carolina mountains. Here's what the front page of the app looks like:

Even with several pages of choices, this type of app is truly easy to create. You need only a few buttons, labels, and (as ever) horizontal and vertical arrangements for formatting. And, of course, add a single ActivityStarter for calling the phone's web browser.

Now, here's the cool part! No matter how long and how many pages your user reads on the site a button sends him to, when he or she hits the hardware Back button on their phone, they return to your app.

The preceding is the only way the Back button works for an App Inventor app! If a user is within an AI app and hits the Back button, it immediately kills your app (except when a listpicker is displayed, in which case it returns you to the screen that invoked it). This is doubleplusungood, so to speak. So, always supply navigation buttons for changing pages and maybe a note somewhere not to use the hardware button.

Okay, back to my AVLnews app as an example. The first button I built was for the Citizen-Times, our major newspaper in the area. They have a mobile site such as the one we saw earlier for eBay. It looks nice and is easy to read on a Droid or other smartphone, like the following:

And it's equally easy to program; this is all you need:

I then moved on to other news sources for the area. The small weekly newspapers in the more isolated, outlying mountain counties have no expensive mobile sites. The websites they do have are pretty conventional. But when I linked to them, the page would come up with tiny, unreadable type. I had to move it around, do the double-tap trick to fit the content to the page, turn the phone on its side, and more, and still had trouble reading stuff.
Folks, you do not do things like that to your users. Not if you want them to think you are one cool app inventor, as I know you to be by now. So, here's the trick to making almost all web pages readable on a phone: Google Mobilizer! It's a free service of Google at http://www.google.com/gwt/n. Use it to construct links that return a nicely formatted and readable page, as shown in the next screenshot. People will thank you for it and think you expended many hours of genius-level programming effort to achieve it. And, remember, it's not polite to disagree with people, eh?

Often, the search terms and mobilizing of your link make it long, and it goes on for some time (the following screenshot is only a part of it). However, that's where your real genius comes into play: building a search term that gets the articles fitting whatever is the topic on the button.

Summing up: using the power of the web lets us build powerful apps that use few resources on the phone yet return thousands upon thousands of pages. Enjoy.

Now, we will look at yet another Google service: Fusion Tables. And see how our App Inventor apps can take advantage of these online data sources.


Microsoft SharePoint : Creating Various Content Types

Packt
02 Dec 2011
7 min read
(For more resources on Microsoft SharePoint, see here.)

SharePoint content types are used to make it simpler for site managers to standardize what content and associated metadata gets uploaded to lists and libraries on the site. In this article, we'll take a look at how you can create various content types and assign them to be used in site containers.

As a subset of more complex content types, a document set will allow your users to store related items in libraries as a set of documents sharing common metadata. This approach will allow your users to run business processes on a batch of items in the document set as well as on the whole set. In this article, we'll take a look at how you can define a document set to be used on your site.

Since users mostly interact with your SharePoint site through pages and views, the ability to modify SharePoint pages to accommodate business user requirements becomes an important part of site management. In this article, we'll take a look at how you can create and modify pages and content related to them. We will also take a look at how you can provision simple out-of-the-box web parts to your SharePoint publishing pages and configure their properties.

Creating basic and complex content types

SharePoint lists and libraries can store a variety of content on the site. SharePoint also has a user interface to customize what information you collect from users to be attached as item metadata. In the scenario where the entire intranet or a department site within your organization requires a standard set of metadata to be collected with list and library items, content types are the easiest way to implement the requirement. With content types, you can define the type of business content your users will be interacting with, add metadata fields and any applicable validation to it, and then attach the newly created content type to the library or list of your choice so that newly uploaded or modified content conforms to the rules you defined on the site.

Getting ready

Considering you have already set up your virtual development environment, we'll get right into authoring our script. It's assumed you are familiar with how to interact with SharePoint lists and libraries using PowerShell. In this recipe, we'll be using PowerGUI to author the script, which means you will be required to be logged in with an administrator's role on the target virtual machine.

How to do it...

Let's take a look at how we can provision site content types using PowerShell:

1. Click Start | All Programs | PowerGUI | PowerGUI Script Editor.
2. In the main script editing window of PowerGUI, add the following script:

    # Defining script variables
    $SiteUrl = "http://intranet.contoso.com"
    $ListName = "Shared Documents"

    # Loading Microsoft.SharePoint.PowerShell
    $snapin = Get-PSSnapin | Where-Object {$_.Name -eq 'Microsoft.SharePoint.Powershell'}
    if ($snapin -eq $null) {
        Write-Host "Loading SharePoint Powershell Snapin"
        Add-PSSnapin "Microsoft.SharePoint.Powershell"
    }

    $SPSite = Get-SPSite | Where-Object {$_.Url -eq $SiteUrl}
    if ($SPSite -ne $null) {
        Write-Host "Connecting to the site" $SiteUrl ",list " $ListName
        $RootWeb = $SPSite.RootWeb
        $SPList = $RootWeb.Lists[$ListName]

        Write-Host "Creating new content type from base type"
        $DocumentContentType = $RootWeb.AvailableContentTypes["Document"]
        $ContentType = New-Object Microsoft.SharePoint.SPContentType -ArgumentList @($DocumentContentType, $RootWeb.ContentTypes, "Org Document")

        Write-Host "Adding content type to site"
        $ct = $RootWeb.ContentTypes.Add($ContentType)

        Write-Host "Creating new fields"
        $OrgDocumentContentType = $RootWeb.ContentTypes[$ContentType.Id]
        $OrgFields = $RootWeb.Fields
        $choices = New-Object System.Collections.Specialized.StringCollection
        $choices.Add("East")
        $choices.Add("West")
        $OrgDivision = $OrgFields.Add("Division", [Microsoft.SharePoint.SPFieldType]::Choice, $false, $false, $choices)
        $OrgBranch = $OrgFields.Add("Branch", [Microsoft.SharePoint.SPFieldType]::Text, $false)

        Write-Host "Adding fields to content type"
        $OrgDivisionObject = $OrgFields.GetField($OrgDivision)
        $OrgBranchObject = $OrgFields.GetField($OrgBranch)
        $OrgDocumentContentType.FieldLinks.Add($OrgDivisionObject)
        $OrgDocumentContentType.FieldLinks.Add($OrgBranchObject)
        $OrgDocumentContentType.Update()

        Write-Host "Associating content type to list" $ListName
        $association = $SPList.ContentTypes.Add($OrgDocumentContentType)
        $SPList.ContentTypesEnabled = $true
        $SPList.Update()

        Write-Host "Content type provisioning complete"
    }
    $SPSite.Dispose()

3. Click File | Save to save the script to your development machine's desktop. Set the filename of the script to CreateAssociateContentType.ps1.
4. Open the PowerShell console window and call CreateAssociateContentType.ps1 using the following command:

    PS C:\Users\Administrator\Desktop> .\CreateAssociateContentType.ps1

5. As a result, your PowerShell script will create a site structure as shown in the following screenshot.
6. Now, from your browser, let's switch to our SharePoint intranet: http://intranet.contoso.com.
7. From the home page's Quick launch, click the Shared Documents link.
8. On the ribbon, click the Library tab and select Settings | Library Settings.
9. Take note of the newly associated content type added to the Content Types area of the library settings, as shown in the following screenshot.
10. Navigate back to the Shared Documents library from the Quick launch menu on your site and select any of the existing documents in the library.
11. From the ribbon's Documents tab, click Manage | Edit Properties.
12. Take note of how the item now has the Content Type option available, where you can pick the newly provisioned Org Document content type.
13. Pick the Org Document content type and take note of the associated metadata showing up for the new content type, as shown in the following screenshot.

How it works...

First, we defined the script variables.
In this recipe, the variables include the URL of the site where the content type is provisioned, http://intranet.contoso.com, and the document library to which the content type is associated:

    $ListName = "Shared Documents"

Once the PowerShell snap-in has been loaded, we get a hold of the instance of the current site and its root web. Since we want our content type to inherit from a parent rather than being defined from scratch, we get a hold of the existing parent content type first, using the following command:

    $DocumentContentType = $RootWeb.AvailableContentTypes["Document"]

Next, we created an instance of a new content type inheriting from our parent content type and provisioned it to the root site using the following command:

    $ContentType = New-Object Microsoft.SharePoint.SPContentType -ArgumentList @($DocumentContentType, $RootWeb.ContentTypes, "Org Document")

Here, the new object takes the following parameters: the content type representing the parent, the web to which the new content type will be provisioned, and the display name for the content type. Once our content type object has been created, we add it to the list of existing content types on the site:

    $ct = $RootWeb.ContentTypes.Add($ContentType)

Since most content types are unique by the fields they use, we add some business-specific fields to our content type. First, we get a hold of the collection of all of the available fields on the site:

    $OrgFields = $RootWeb.Fields

Next, we create a string collection to hold the values for the choice field we are going to add to our content type:

    $choices = New-Object System.Collections.Specialized.StringCollection

The field with the list of choices was called Division, representing a company division. We provision the field to the site using the following command:

    $OrgDivision = $OrgFields.Add("Division", [Microsoft.SharePoint.SPFieldType]::Choice, $false, $false, $choices)

In the preceding command, the first parameter is the name of the field, followed by the type of the field, which in our case is a choice field. We then specify whether the field will be a required field, followed by a parameter indicating whether the field name will be truncated to eight characters. The last parameter specifies the list of choices for the choice field.

Another field we add, representing a company branch, is simpler since it's a text field. We define the text field using the following command:

    $OrgBranch = $OrgFields.Add("Branch", [Microsoft.SharePoint.SPFieldType]::Text, $false)

We add both fields to the content type using the following commands:

    $OrgDocumentContentType.FieldLinks.Add($OrgDivisionObject)
    $OrgDocumentContentType.FieldLinks.Add($OrgBranchObject)

The last part is to associate the newly created content type to a library, in our case Shared Documents. We use the following command to associate the content type to the library:

    $association = $SPList.ContentTypes.Add($OrgDocumentContentType)

To ensure the content types on the list are enabled, we set the ContentTypesEnabled property of the list to $true.
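To double-check what the script produced, a short follow-up snippet can dump the content types now attached to the library and the fields linked to the new site content type. This is a quick verification sketch using the same site URL and list name assumed above, not part of the original recipe.

    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

    $site = Get-SPSite "http://intranet.contoso.com"
    $web = $site.RootWeb
    $list = $web.Lists["Shared Documents"]

    # Content types now associated with the library (Org Document should be listed)
    $list.ContentTypes | Select-Object Name, Id

    # Fields linked to the new site content type
    $web.ContentTypes["Org Document"].FieldLinks | Select-Object Name, DisplayName

    $site.Dispose()

If Org Document shows up with the Division and Branch field links, the provisioning and association both completed as expected.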