
How-To Tutorials - Application Development


Using XML Facade for DOM

Packt
13 Sep 2013
25 min read
(For more resources related to this topic, see here.)

The Business Process Execution Language (BPEL) is based on XML, which means that all the internal variables and data are presented in XML. BPEL and Java technologies are complementary, so we seek ways to ease the integration of the two. In order to handle the XML content from BPEL variables in Java resources (classes), we have a couple of possibilities:

Use the DOM (Document Object Model) API for Java, where we handle the XML content directly through API calls. An example of such a call would be reading from the input variable:

oracle.xml.parser.v2.XMLElement input_cf = (oracle.xml.parser.v2.XMLElement) getVariableData("inputVariable", "payload", "/client:Cashflows");

We receive the XMLElement class, which we need to handle further, be it by assignment, reading of content, iteration, or something else.

As an alternative, we can use XML facade through Java Architecture for XML Binding (JAXB). JAXB provides a convenient way of transforming XML to Java and vice versa. The creation of XML facade is supported through the xjc utility and, of course, via the JDeveloper IDE. The example code for accessing XML through XML facade is:

java.util.List<org.packt.cashflow.facade.PrincipalExchange> princEx = cf.getPrincipalExchange();

We can see that there is neither XML content nor DOM API anymore. Furthermore, we have access to the whole XML structure, represented by Java classes. The latest specification of JAXB at the time of writing is 2.2.7, and it can be found at the following location: https://jaxb.java.net/.

The purpose of XML facade operations is the marshalling and un-marshalling of Java classes. When the originating content is presented in XML, we use un-marshalling methods in order to generate the corresponding Java classes. In cases where we have content stored in Java classes and we want to present the content in XML, we use the marshalling methods. JAXB provides the ability to create XML facade from an XML schema definition or from WSDL (Web Service Definition/Description Language). The latter method provides a useful approach as we, in most cases, orchestrate web services whose operations are defined in WSDL documents.

Throughout this article, we will work on a sample from the banking world. On top of this sample, we will show how to build the XML facade. The sample contains simple XML types, complex types, elements, and cardinality, so we cover all the essential elements of XML facade functionality.

Setting up an XML facade project

We start generating XML facade by setting up a project in the JDeveloper environment, which provides convenient tools for building XML facades. This recipe will describe how to set up a JDeveloper project in order to build XML facade.

Getting ready

To complete the recipe, we need the XML schema of the BPEL process variables on which we build the XML facade. Explore the XML schema of our banking BPEL process.
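Before we turn to that schema, the following standalone sketch may help make the marshalling and un-marshalling terminology concrete. It is only an illustration: it assumes the org.packt.cashflow.facade classes that we generate later in this article and a hypothetical cashflows.xml input file.

```java
import javax.xml.bind.JAXBContext;
import javax.xml.bind.Marshaller;
import javax.xml.bind.Unmarshaller;
import java.io.File;

public class FacadeRoundTrip {
    public static void main(String[] args) throws Exception {
        // One JAXB context for the whole generated facade package
        JAXBContext ctx = JAXBContext.newInstance("org.packt.cashflow.facade");

        // Un-marshalling: XML content -> generated Java classes
        Unmarshaller unmarshaller = ctx.createUnmarshaller();
        Object cashflows = unmarshaller.unmarshal(new File("cashflows.xml"));

        // Marshalling: Java classes -> XML content
        Marshaller marshaller = ctx.createMarshaller();
        marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);
        marshaller.marshal(cashflows, System.out);
    }
}
```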
We are interested in the structure of the BPEL request message: <xsd:complexType name="PrincipalExchange"><xsd:sequence><xsd:element minOccurs="0"name="unadjustedPrincipalExchangeDate" type="xsd:date"/><xsd:element minOccurs="0"name="adjustedPrincipalExchangeDate" type="xsd:date"/><xsd:element minOccurs="0" name="principalExchangeAmount"type="xsd:decimal"/><xsd:element minOccurs="0" name="discountFactor"type="xsd:decimal"/></xsd:sequence><xsd:attribute name="id" type="xsd:int"/></xsd:complexType><xsd:complexType name="CashflowsType"><xsd:sequence><xsd:element maxOccurs="unbounded" minOccurs="0"name="principalExchange" type="prc:PrincipalExchange"/></xsd:sequence></xsd:complexType><xsd:element name="Cashflows" type="prc:CashflowsType"/> The request message structure presents just a small fragment of cash flows modeled in the banks. The concrete definition of a cash flow is much more complex. However, our definition contains all the right elements so that we can show the advantages of using XML facade in a BPEL process. How to do it... The steps involved in setting up a JDeveloper project for XML façade are as follows: We start by opening a new Java Project in JDeveloper and naming it CashflowFacade. Click on Next. In the next window of the Create Java Project wizard, we select the default package name org.packt.cashflow.facade. Click on Finish. We now have the following project structure in JDeveloper: We have created a project that is ready for XML facade creation. How it works... After the wizard has finished, we can see the project structure created in JDeveloper. Also, the corresponding file structure is created in the filesystem. Generating XML facade using ANT This recipe explains how to generate XML facade with the use of the Apache ANT utility. We use the ANT scripts when we want to build or rebuild the XML facade in many iterations, for example, every time during nightly builds. Using ANT to build XML façade is very useful when XML definition changes are constantly in phases of development. With ANT, we can ensure continuous synchronization between XML and generated Java code. The official ANT homepage along with detailed information on how to use it can be found at the following URL: http://ant.apache.org/. Getting ready By completing our previous recipe, we built up a JDeveloper project ready to create XML facade out of XML schema. To complete this recipe, we need to add ANT project technology to the project. We achieve this through the Project Properties dialog: How to do it... The following are the steps we need to take to create a project in JDeveloper for building XML façade with ANT: Create a new ANT build file by right-clicking on the CashflowFacade project node, select New, and choose Buildfile from Project (Ant): The ANT build file is generated and added into the project under the Resources folder. Now we need to amend the build.xml file with the code to build XML facade. We will first define the properties for our XML facade: <property name="schema_file" location="../Banking_BPEL/xsd/Derivative_Cashflow.xsd"/><property name="dest_dir" location="./src"/><property name="package" value="org.packt.cashflow.facade"/> We define the location of the source XML schema (it is located in the BPEL process). Next, we define the destination of the generated Java files and the name of the package. Now, we define the ANT target in order to build XML facade classes. The ANT target presents one closed unit of ANT work. 
We define the build task for the XML façade as follows:

<target name="xjc">
  <delete dir="src"/>
  <mkdir dir="src"/>
  <echo message="Compiling the schema..." />
  <exec executable="xjc">
    <arg value="-xmlschema"/>
    <arg value="${schema_file}"/>
    <arg value="-d"/>
    <arg value="${dest_dir}"/>
    <arg value="-p"/>
    <arg value="${package}"/>
  </exec>
</target>

Now we have XML facade packaged and ready to be used in BPEL processes.

How it works…

ANT is used as a build tool and performs various tasks. As such, we can easily use it to build XML facade. Java Architecture for XML Binding provides the xjc utility, which can help us in building XML facade. We have provided the following parameters to the xjc utility:

-xmlschema: This treats the input schema as an XML schema
-d: This specifies the destination directory of the generated classes
-p: This specifies the package name of the generated classes

There are a number of other parameters; however, we will not go into detail about them here. Based on the parameters we provided to the xjc utility, the Java representation of the XML schema is generated. If we examine the generated classes, we can see that there exists a Java class for every type defined in the XML schema. Also, we can see that the ObjectFactory class is generated, which eases the generation of Java class instances.

There's more...

There is a difference in creating XML facade between Versions 10g and 11g of Oracle SOA Suite. In Oracle SOA Suite 10g, there was a convenient utility named schema, which was used for building XML facade. However, in Oracle SOA Suite 11g, the schema utility is not available anymore. To provide a similar solution, we create a template class, which is later copied to a real code package when needed to provide functionality for XML facade. We create a new class, Facade, in a package called facade. The only method in the class is static and serves as the creation point of the facade:

public static Object createFacade(String context, XMLElement doc) throws Exception {
  JAXBContext jaxbContext;
  Object zz = null;
  try {
    jaxbContext = JAXBContext.newInstance(context);
    Unmarshaller unmarshaller = jaxbContext.createUnmarshaller();
    zz = unmarshaller.unmarshal(doc);
    return zz;
  } catch (JAXBException e) {
    throw new Exception("Cannot create facade from the XML content. " + e.getMessage());
  }
}

The class code implementation is simple and consists of creating the JAXB context. Further, we un-marshall the document and return the resulting class to the client. In case of problems, we either throw an exception or return a null object. Now the calling code is trivial. For example, to create XML facade for the XML content, we call it as follows:

Object zz = facade.Facade.createFacade("org.packt.cashflow.facade", document.getSrcRoot());

Creating XML facade from XSD

This recipe describes how to create XML facade classes from XSD. Usually, the necessity to access XML content out of Java classes comes from already defined XML schemas in BPEL processes.

How to do it...

We have already defined the BPEL process and the XML schema (Derivative_Cashflow.xsd) in the project. The following steps will show you how to create the XML facade from the XML schema:

1. Select the CashflowFacade project, right-click on it, and select New.
2. Select JAXB 2.0 Content Model from XML Schema.
3. Select the schema file from the Banking_BPEL project.
4. Select the Package Name for Generated Classes checkbox and click on the OK button.

The corresponding Java classes for the XML schema were generated.

How it works...
Now compare the classes generated via the ANT utility in the Generating XML facade using ANT recipe with this one. In essence, the generated files are the same. However, we see the additional file jaxb.properties, which holds the configuration of the JAXB factory used for the generation of Java classes. It is recommended to create the same access class (Facade.java) in order to simplify further access to XML facade. Creating XML facade from WSDL It is possible to include the definitions of schema elements into WSDL. To overcome the extraction of XML schema content from the WSDL document, we would rather take the WSDL document and create XML facade for it. This recipe explains how to create XML facade out of the WSDL document. Getting ready To complete the recipe, we need the WSDL document with the XML schema definition. Luckily, we already have one automatically generated WSDL document, which we received during the Banking_BPEL project creation. We will amend the already created project, so it is recommended to complete the Generating XML facade using ANT recipe before continuing with this recipe. How to do it... The following are the steps involved in creating XML façade from WSDL: Open the ANT configuration file (build.xml) in JDeveloper. We first define the property which identifies the location of the WSDL document: <property name="wsdl_file" location="../Banking_BPEL/Derivative_Cashflow.wsdl"/> Continue with the definition of a new target inside the ANT configuration file in order to generate Java classes from the WSDL document: <target name="xjc_wsdl"><delete dir="src/org"/><mkdir dir="src/org"/><echo message="Compiling the schema..." /><exec executable="xjc"><arg value="-wsdl"/><arg value="${schema_file}"/><arg value="-d"/><arg value="${dest_dir}"/><arg value="-p"/><arg value="${package}"/></exec></target> From the configuration point of view, this step completes the recipe. To run the newly defined ANT task, we select the build.xml file in the Projects pane. Then, we select the xjc_wsdl task in the Structure pane of JDeveloper, right-click on it, and select Run Target "xjc_wsdl": How it works... The generation of Java representation classes from WSDL content works similar to the generation of Java classes from XSD content. Only the source of the XML input content is different from the xjc utility. In case we execute the ANT task with the wrong XML or WSDL content, we receive a kind notification from the xjc utility. For example, if we run the utility xjc with the parameter –xmlschema over the WSDL document, we get a warning that we should use different parameters for generating XML façade from WSDL. Note that generation of Java classes from the WSDL document via JAXB is only available through ANT task definition or the xjc utility. If we try the same procedure with JDeveloper, an error is reported. Packaging XML facade into JAR This recipe explains how to prepare a package containing XML facade to be used in BPEL processes and in Java applications in general. Getting ready To complete this recipe, we need the XML facade created out of the XML schema. Also, the generated Java classes need to be compiled. How to do it... The steps involved for packaging XML façade into JAR are as follows: We open the Project Properties by right-clicking on the CashflowFacade root node. From the left-hand side tree, select Deployment and click on the New button. The Create Deployment Profile window opens where we set the name of the archive. Click on the OK button. 
The Edit JAR Deployment Profile Properties dialog opens, where you can configure what is going into the JAR archive. We confirm the dialog and deployment profile as we don't need any special configuration. Now, we right-click on the project root node (CashflowFacade), then select Deploy and CFacade. The window requesting the deployment action appears. We simply confirm it by pressing the Finish button. As a result, we can see the generated JAR file created in the deploy folder of the project.

There's more...

In this article, we also cover the building of XML facade with the ANT tool. To support an automatic build process, we can also define an ANT target to build the JAR file. We open the build.xml file and define a new target for packaging purposes. With this target, we first recreate the deploy directory and then prepare the package to be utilized in the BPEL process:

<target name="pack" depends="compile">
  <delete dir="deploy"/>
  <mkdir dir="deploy"/>
  <jar destfile="deploy/CFacade.jar" basedir="./classes" excludes="**/*data*"/>
</target>

To automate the process even further, we define the target to copy generated JAR files to the location of the BPEL process. Usually, this means copying the JAR files to the SCA-INF/lib directory:

<target name="copyLib" depends="pack">
  <copy file="deploy/CFacade.jar" todir="../Banking_BPEL/SCA-INF/lib"/>
</target>

The task depends on the successful creation of a JAR package, and when the JAR package is created, it is copied over to the BPEL process library folder.

Generating Java documents for XML facade

Well-prepared documentation is an important aspect of further XML facade integration. Suppose we only receive the JAR package containing XML facade. It is virtually impossible to use XML facade if we don't know what the purpose of each data type is and how we can utilize it. With documentation, we receive a well-defined XML facade capable of integrating the XML and Java worlds together. This recipe explains how to document the XML facade generated Java classes.

Getting ready

To complete this recipe, we only need the XML schema defined. We already have the XML schema in the Banking_BPEL project (Derivative_Cashflow.xsd).

How to do it...

The following are the steps we need to take in order to generate Java documents for XML facade:

We open the Derivative_Cashflow.xsd XML schema file. Initially, we need to add the JAXB binding declarations to the <xsd:schema> element of the XML schema file:

<xsd:schema attributeFormDefault="unqualified"
            elementFormDefault="qualified"
            targetNamespace="http://..."
            xmlns:jxb="http://java.sun.com/xml/ns/jaxb"
            jxb:version="2.1">
  ...
</xsd:schema>

In order to put documentation at the package level, we put the following code immediately after the <xsd:schema> tag in the XML schema file:

<xsd:annotation>
  <xsd:appinfo>
    <jxb:schemaBindings>
      <jxb:package name="org.packt.cashflow.facade">
        <jxb:javadoc>This package represents the XML facade of the cashflows in the financial derivatives structure.</jxb:javadoc>
      </jxb:package>
    </jxb:schemaBindings>
  </xsd:appinfo>
</xsd:annotation>

In order to add documentation at the complexType level, we need to put the following lines into the XML schema file. The code goes immediately after the complexType definition:

<xsd:annotation>
  <xsd:appinfo>
    <jxb:class>
      <jxb:javadoc>This class defines the data for the events when principal exchange occurs.</jxb:javadoc>
    </jxb:class>
  </xsd:appinfo>
</xsd:annotation>

The elements of the complexType definition are annotated in a similar way.
We put the annotation data immediately after the element definition in the XML schema file: <xsd:annotation><xsd:appinfo><jxb:property><jxb:javadoc>Raw principal exchangedate.</jxb:javadoc></jxb:property></xsd:appinfo></xsd:annotation> In JDeveloper, we are now ready to build the javadoc documentation. So, select the project CashflowFacade root node. Then, from the main menu, select the Build and Javadoc CashflowFacade.jpr option. The javadoc content will be built in the javadoc directory of the project. How it works... During the conversion from XML schema to Java classes, JAXB is also processing possible annotations inside the XML schema file. When the conversion utility (xjc or execution through JDeveloper) finds the annotation in the XML schema file, it decorates the generated Java classes according to the specification. The XML schema file must contain the following declarations. In the <xsd:schema> element, the following declaration of the JAXB schema namespace must exist: jxb:version="2.1" Note that the xjb:version attribute is where the Version of the JAXB specification is defined. The most common Version declarations are 1.0, 2.0, and 2.1. The actual definition of javadoc resides within the <xsd:annotation> and <xsd:appinfo> blocks. To annotate at package level, we use the following code: <jxb:schemaBindings><jxb:package name="PKG_NAME"><jxb:javadoc>TEXT</jxb:javadoc></jxb:package></jxb:schemaBindings> We define the package name to annotate and a javadoc text containing the documentation for the package level. The annotation of javadoc at class or attribute level is similar to the following code: <jxb:class|property><jxb:javadoc>TEXT</jxb:javadoc></jxb:class|property> If we want to annotate the XML schema at complexType level, we use the <jaxb:class> element. To annotate the XML schema at element level, we use the <jaxb:property> element. There's more... In many cases, we need to annotate the XML schema file directly for various reasons. The XML schema defined by different vendors is automatically generated. In such cases, we would need to annotate the XML schema each time we want to generate Java classes out of it. This would require additional work just for annotation decoration tasks. In such situations, we can separate the annotation part of the XML schema to a separate file. With such an approach, we separate the annotating part from the XML schema content itself, over which we usually don't have control. For that purpose, we create a binding file in our CashflowFacade project and name it extBinding.xjb. We put the annotation documentation into this file and remove it from the original XML schema. We start by defining the binding file header declaration: <jxb:bindings version="1.0"><jxb:bindings schemaLocation="file:/D:/delo/source_code/Banking_BPEL/xsd/Derivative_Cashflow.xsd" node="/xs:schema"> We need to specify the name of the schema file location and the root node of the XML schema which corresponds to our mapping. We continue by declaring the package level annotation: <jxb:schemaBindings><jxb:package name="org.packt.cashflow.facade"><jxb:javadoc><![CDATA[<body>This package representsthe XML facade of the cashflows in the financialderivatives structure.</body>]]></jxb:javadoc></jxb:package><jxb:nameXmlTransform><jxb:elementName suffix="Element"/></jxb:nameXmlTransform></jxb:schemaBindings> We notice that the structure of the package level annotation is identical to those in the inline XML schema annotation. 
To annotate the class and its attribute, we use the following declaration: <jxb:bindings node="//xs:complexType[@name='CashflowsType']"><jxb:class><jxb:javadoc><![CDATA[This class defines the data for the events, whenprincipal exchange occurs.]]></jxb:javadoc></jxb:class><jxb:bindingsnode=".//xs:element[@name='principalExchange']"><jxb:property><jxb:javadoc>TEST prop</jxb:javadoc></jxb:property></jxb:bindings></jxb:bindings> Notice the indent annotation of attributes inside the class annotation that naturally correlates to the object programming paradigm. Now that we have the external binding file, we can regenerate the XML facade. Note that external binding files are not used only for the creation of javadoc. Inside the external binding file, we can include various rules to be followed during conversion. One such rule is aimed at data type mapping; that is, which Java data type will match the XML data type. In JDeveloper, if we are building XML facade for the first time, we follow either the Creating XML facade from XSD or the Creating XML facade from WSDL recipe. To rebuild XML facade, we use the following procedure: Select the XML schema file (Cashflow_Facade.xsd) in the CashflowFacade project. Right-click on it and select the Generate JAXB 2.0 Content Model option. The configuration dialog opens with some already pre-filled fields. We enter the location of the JAXB Customization File (in our case, the location of the extBinding.xjb file) and click on the OK button. Next, we build the javadoc part to get the documentation. Now, if we open the generated documentation in the web browser, we can see our documentation lines inside. Invoking XML facade from BPEL processes This recipe explains how to use XML facade inside BPEL processes. We can use XML façade to simplify access of XML content from Java code. When using XML façade, the XML content is exposed over Java code. Getting ready To complete the recipe, there are no special prerequisites. Remember that in the Packaging XML facade into JAR recipe, we defined the ANT task to copy XML facade to the BPEL process library directory. This task basically presents all the prerequisites for XML facade utilization. How to do it... Open a BPEL process (Derivative_Cashflow.bpel) in JDeveloper and insert the Java Embedding activity into it: We first insert a code snippet. The whole code snippet is enclosed by a try catch block: try { Read the input cashflow variable data: oracle.xml.parser.v2.XMLElement input_cf= (oracle.xml.parser.v2.XMLElement)getVariableData("inputVariable","payload","/client:Cashflows"); Un-marshall the XML content through the XML facade: Object obj_cf = facade.Facade.createFacade("org.packt.cashflow.facade", input_cf); We must cast the serialized object to the XML facade class: javax.xml.bind.JAXBElement<org.packt.cashflow.facade.CashflowsType> cfs = (javax.xml.bind.JAXBElement<org.packt.cashflow.facade.CashflowsType>)obj_cf; Retrieve the Java class out of the JAXBElement content class: org.packt.cashflow.facade.CashflowsType cf= cfs.getValue(); Finally, we close the try block and handle any exceptions that may occur during processing: } catch (Exception e) {e.printStackTrace();addAuditTrailEntry("Error in XML facade occurred: " +e.getMessage());} We close the Java Embedding activity dialog. Now, we are ready to deploy the BPEL process and test the XML facade. Actually, the execution of the BPEL process will not produce any output, since we have no output lines defined. 
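Putting the pieces above together, the complete Java Embedding snippet reads roughly as follows (a sketch assembled from the steps of this recipe; getVariableData and addAuditTrailEntry are supplied by the Java Embedding activity's execution context):

```java
try {
    // Read the input cashflow variable data
    oracle.xml.parser.v2.XMLElement input_cf =
        (oracle.xml.parser.v2.XMLElement) getVariableData(
            "inputVariable", "payload", "/client:Cashflows");

    // Un-marshall the XML content through the XML facade
    Object obj_cf = facade.Facade.createFacade(
        "org.packt.cashflow.facade", input_cf);

    // Cast the un-marshalled object to the facade type and retrieve the payload class
    javax.xml.bind.JAXBElement<org.packt.cashflow.facade.CashflowsType> cfs =
        (javax.xml.bind.JAXBElement<org.packt.cashflow.facade.CashflowsType>) obj_cf;
    org.packt.cashflow.facade.CashflowsType cf = cfs.getValue();
} catch (Exception e) {
    e.printStackTrace();
    addAuditTrailEntry("Error in XML facade occurred: " + e.getMessage());
}
```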
In case some exception occurs, we will receive information about the exception in the audit trail as well as on the BPEL server console.

How it works...

We add the XML facade JAR file to the BPEL process library directory (<BPEL_process_home>/SCA-INF/lib). Before we are able to access the XML facade classes, we need to extract the XML content from the BPEL process. To create the Java representation classes, we transform the XML content through the JAXB context. As a result, we receive an un-marshalled Java class ready to be used further in Java code.

Accessing complex types through XML facade

The advantage of using XML facade is that it provides the ability to access the XML content via Java classes and methods. This recipe explains how to access complex types through XML facade.

Getting ready

To complete the recipe, we will amend the example BPEL process from the Invoking XML facade from BPEL processes recipe.

How to do it...

The steps involved in accessing the complex types through XML façade are as follows: Open the Banking_BPEL process and double-click on the XML_facade_node Java Embedding activity. We amend the code snippet with the following code to access the complex type:

java.util.List<org.packt.cashflow.facade.PrincipalExchange> princEx = cf.getPrincipalExchange();

We receive a list of principal exchange cash flows that contain various data.

How it works...

In the previous example, we receive a list of cash flows. The corresponding XML content definition states:

<xsd:complexType name="PrincipalExchange">
  <xsd:sequence>
  </xsd:sequence>
  <xsd:attribute name="id" type="xsd:int"/>
</xsd:complexType>

We can conclude that each of the principal exchange cash flows is modeled as an individual Java class. Depending on the hierarchy level of the complex type, it is modeled either as a Java class or as a Java class member. Complex types are organized in the Java object hierarchy according to the XML schema definition. Mostly, complex types can be modeled as a Java class and at the same time as a member of another Java class.

Accessing simple types through XML facade

This recipe explains how to access simple types through XML facade.

Getting ready

To complete the recipe, we will amend the example BPEL process from our previous recipe, Accessing complex types through XML facade.

How to do it...

Open the Banking_BPEL process and double-click on the XML_facade_node Java Embedding activity. We amend the code snippet with the code to access the XML simple types:

for (org.packt.cashflow.facade.PrincipalExchange pe : princEx) {
  addAuditTrailEntry("Received cashflow with id: " + pe.getId() + "\n" +
    " Unadj. Principal Exch. Date ...: " + pe.getUnadjustedPrincipalExchangeDate() + "\n" +
    " Adj. Principal Exch. Date .....: " + pe.getAdjustedPrincipalExchangeDate() + "\n" +
    " Discount factor ...............: " + pe.getDiscountFactor() + "\n" +
    " Principal Exch. Amount ........: " + pe.getPrincipalExchangeAmount() + "\n");
}

With the preceding code, we output all Java class members to the audit trail. Now if we run the BPEL process, we can see the corresponding output in the BPEL flow trace.

How it works...

The XML schema simple types are mapped to Java classes as members.
If we check our example, we have three simple types in the XML schema:

<xsd:complexType name="PrincipalExchange">
  <xsd:sequence>
    <xsd:element minOccurs="0" name="unadjustedPrincipalExchangeDate" type="xsd:date"/>
    <xsd:element minOccurs="0" name="adjustedPrincipalExchangeDate" type="xsd:date"/>
    <xsd:element minOccurs="0" name="principalExchangeAmount" type="xsd:decimal"/>
    <xsd:element minOccurs="0" name="discountFactor" type="xsd:decimal"/>
  </xsd:sequence>
  <xsd:attribute name="id" type="xsd:int"/>
</xsd:complexType>

The simple types defined in the XML schema are <xsd:date>, <xsd:decimal>, and <xsd:int>. Let us find the corresponding Java class member definitions. Open the PrincipalExchange.java file. The definition of the members is as follows:

@XmlSchemaType(name = "date")
protected XMLGregorianCalendar unadjustedPrincipalExchangeDate;
@XmlSchemaType(name = "date")
protected XMLGregorianCalendar adjustedPrincipalExchangeDate;
protected BigDecimal principalExchangeAmount;
protected BigDecimal discountFactor;
@XmlAttribute
protected Integer id;

We can see that the mapping between the XML content and the Java classes was performed as shown in the following table:

XML schema simple type | Java class member
<xsd:date>             | javax.xml.datatype.XMLGregorianCalendar
<xsd:decimal>          | java.math.BigDecimal
<xsd:int>              | java.lang.Integer

Also, we can identify that the XML simple type definitions as well as the XML attributes are always mapped as members in corresponding Java class representations.

Summary

In this article, we have learned how to set up an XML facade project, generate XML facade using ANT, create XML facade from XSD and WSDL, package XML facade into a JAR file, generate Java documents for XML facade, invoke XML facade from BPEL processes, and access complex and simple types through XML facade.

Resources for Article:

Further resources on this subject: BPEL Process Monitoring [Article] Human Interactions in BPEL [Article] Business Processes with BPEL [Article]


StyleCop analysis

Packt
12 Sep 2013
6 min read
(For more resources related to this topic, see here.)

Integrating StyleCop analysis results in Jenkins/Hudson (Intermediate)

In this article we will see how to build and display StyleCop errors in Jenkins/Hudson jobs. To do so, we will need to see how to configure the Jenkins job with a full analysis of the C# files in order to display the technical debt of the project. As we want it to diminish, we will also set in the job an automatic recording of the last number of violations. Finally, we will return an error if we add any violations when compared to the previous build.

Getting ready

For this article you will need to have:
- StyleCop 4.7 installed with the option MSBuild integration checked
- A Subversion server
- A working Jenkins server including the MSBuild plugin for Jenkins and the Violation plugin for Jenkins
- A C# project followed in a Subversion repository

How to do it...

The first step is to build a working build script for your project. All solutions have their advantages and drawbacks. I will use MSBuild in this article. The only difference here will be that I won't separate files on a project basis but take the "whole" solution:

<?xml version="1.0" encoding="utf-8" ?>
<Project DefaultTargets="StyleCop" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <UsingTask TaskName="StyleCopTask" AssemblyFile="$(MSBuildExtensionsPath)\..\StyleCop 4.7\StyleCop.dll" />
  <PropertyGroup>
    <!-- Set a default value of 1000000 as maximum StyleCop violations found -->
    <StyleCopMaxViolationCount>1000000</StyleCopMaxViolationCount>
  </PropertyGroup>
  <Target Name="StyleCop">
    <!-- Get the last violation count from the file if it exists -->
    <ReadLinesFromFile Condition="Exists('violationCount.txt')" File="violationCount.txt">
      <Output TaskParameter="Lines" PropertyName="StyleCopMaxViolationCount" />
    </ReadLinesFromFile>
    <!-- Create a collection of files to scan -->
    <CreateItem Include=".\**\*.cs">
      <Output TaskParameter="Include" ItemName="StyleCopFiles" />
    </CreateItem>
    <!-- Launch the StyleCop task itself -->
    <StyleCopTask ProjectFullPath="$(MSBuildProjectFile)"
                  SourceFiles="@(StyleCopFiles)"
                  ForceFullAnalysis="true"
                  TreatErrorsAsWarnings="true"
                  OutputFile="StyleCopReport.xml"
                  CacheResults="true"
                  OverrideSettingsFile="StylecopCustomRuleSettings.Stylecop"
                  MaxViolationCount="$(StyleCopMaxViolationCount)">
      <!-- Set the returned number of violations -->
      <Output TaskParameter="ViolationCount" PropertyName="StyleCopViolationCount" />
    </StyleCopTask>
    <!-- Write the number of violations found in the last build -->
    <WriteLinesToFile File="violationCount.txt" Lines="$(StyleCopViolationCount)" Overwrite="true" />
  </Target>
</Project>

After that, we prepare the files that will be scanned by the StyleCop engine and we launch the StyleCop task on them. We redirect the current number of violations to the StyleCopViolationCount property. Finally, we write the result in the violationCount.txt file to find out the level of technical debt remaining. This is done with the WriteLinesToFile element. Now that we have our build script for our job, let's see how to use it with Jenkins. First, we have to create the Jenkins job itself. We will create a Build a free-style software project. After that, we have to set how the subversion repository will be accessed, as shown in the following screenshot: We also set it to check for changes on the subversion repository every 15 minutes. Then, we have to launch our MSBuild script using the MSBuild task.
The task is quite simple to configure and lets you fill in three fields:
- MSBuild Version: You need to select one of the MSBuild versions you configured in Jenkins (Jenkins | Manage Jenkins | Configure System)
- MSBuild Build File: Here we will provide the Stylecop.proj file we previously made
- Command Line Arguments: In our case, we don't have any to provide, but it might be useful when you have multiple targets in your MSBuild file

Finally, we have to configure the display of StyleCop errors. This is where we will use the violation plugin of Jenkins. It permits the display of multiple quality tools' results on the same graphic. In order to make it work, you have to provide an XML file containing the violations. As you can see in the preceding screenshot, Jenkins is again quite simple to configure. After providing the XML filename for StyleCop, you have to fix thresholds to build health and the maximum number of violations you want to display in the detail screen of each file in violation.

How it works...

In the first part of the How to do it… section, we presented a build script. Let's explain what it does: First, as we don't use the premade MSBuild integration, we have to declare in which assembly the StyleCop task is defined and how we will call it. This is achieved through the use of the UsingTask element. Then we try to retrieve the previous count of violations and set the maximum number of violations that are acceptable at this stage of our project. This is the role of the ReadLinesFromFile element, which reads the content of a file. As we added a condition to ascertain the existence of the violationCount.txt file, it will only be executed if the file exists. We redirect the output to the property StyleCopMaxViolationCount.

After that we have configured the Jenkins job to follow our project with StyleCop. We have configured some strict rules to ensure nobody will add new violations over time, and with the violation plugin and the way we addressed StyleCop, we are able to follow the technical debt of the project regarding StyleCop violations in the Violations page. A summary of each file is also present and if we click on one of them, we will be able to follow the violations of the file.

How to address multiple projects with their own StyleCop settings

As far as I know, this is the limit of the MSBuild StyleCop task. When I need to address multiple projects with their own settings, I generally switch to StyleCopCmd using NAnt or a simple batch script and process the stylecop-report.violations.xml file with an XSLT to get the number of violations.

Summary

This article talked about integrating StyleCop analysis in Jenkins/Hudson. It helped in building an analysis job for our project.

Resources for Article:

Further resources on this subject: Organizing, Clarifying and Communicating the R Data Analyses [Article] Generating Reports in Notebooks in RStudio [Article] Data Profiling with IBM Information Analyzer [Article]


The architecture of JavaScriptMVC

Packt
10 Sep 2013
2 min read
(For more resources related to this topic, see here.)

DocumentJS

DocumentJS is an independent JavaScript documentation application and provides the following:
- Inline demos with source code and HTML panels
- Adds tags to the documentation
- Adds documentation as favorite
- Auto suggest search
- Test result page
- Comments
- Extends the JSDoc syntax
- Adds undocumented code because it understands JavaScript

FuncUnit

FuncUnit is an independent web testing framework and provides the following:
- Test clicking, typing, moving mouse cursor, and drag-and-drop utility
- Follows users between pages
- Multi browser and operating system support
- Continuous integration solution
- Writes and debugs tests in the web browser
- Chainable API that parallels jQuery

jQueryMX

jQueryMX is the MVC part of JavaScriptMVC and provides the following:
- Encourages logically separated, deterministic code
- MVC layer
- Uniform client-side template interface (supports jq-tmpl, EJS, JAML, Micro, and Mustache)
- Ajax fixtures
- Useful DOM utilities
- Language helpers
- JSON utilities
- Class system
- Custom events

StealJS

StealJS is an independent code manager and build tool and provides the following powerful features:

Dependency management:
- Loads JavaScript and CoffeeScript
- Loads CSS, Less, and Sass files
- Loads client-side templates such as TODO
- Loads individual files only once
- Loads files from a different domain

Concatenation and compression:
- Google Closure compressor
- Makes multi-page build
- Pre processes TODO
- Can conditionally remove specified code from the production build
- Builds standalone jQuery plugins

Logger:
- Logs messages in a development mode

Code generator:
- Generates an application skeleton
- Adds the possibility to create your own generator

Package management:
- Downloads and installs plugins from SVN and Git repositories
- Installs the dependencies
- Runs install scripts

Code cleaner:
- Runs JavaScript beautifier against your codebase
- Runs JSLint against your codebase

Resources for Article:

Further resources on this subject: YUI Test [Article] From arrays to objects [Article] Writing Your First Lines of CoffeeScript [Article]


Scratching the Tip of the Iceberg

Packt
03 Sep 2013
15 min read
Boost is a huge collection of libraries. Some of those libraries are small and meant for everyday use and others require a separate article to describe all of their features. This article is devoted to some of those big libraries and to give you some basics to start with. The first two recipes will explain the usage of Boost.Graph. It is a big library with an insane number of algorithms. We'll see some basics and probably the most important part of it visualization of graphs. We'll also see a very useful recipe for generating true random numbers. This is a very important requirement for writing secure cryptography systems. Some C++ standard libraries lack math functions. We'll see how that can be fixed using Boost. But the format of this article leaves no space to describe all of the functions. Writing test cases is described in the Writing test cases and Combining multiple test cases in one test module recipes. This is important for any production-quality system. The last recipe is about a library that helped me in many courses during my university days. Images can be created and modified using it. I personally used it to visualize different algorithms, hide data in images, sign images, and generate textures. Unfortunately, even this article cannot tell you about all of the Boost libraries. Maybe someday I'll write another book... and then a few more. Working with graphs Some tasks require a graphical representation of data. Boost.Graph is a library that was designed to provide a flexible way of constructing and representing graphs in memory. It also contains a lot of algorithms to work with graphs, such as topological sort, breadth first search, depth first search, and Dijkstra shortest paths. Well, let's perform some basic tasks with Boost.Graph! Getting ready Only basic knowledge of C++ and templates is required for this recipe. How to do it... In this recipe, we'll describe a graph type, create a graph of that type, add some vertexes and edges to the graph, and search for a specific vertex. That should be enough to start using Boost.Graph. 
We start with describing the graph type:

#include <boost/graph/adjacency_list.hpp>
#include <string>

typedef std::string vertex_t;
typedef boost::adjacency_list<
    boost::vecS,
    boost::vecS,
    boost::bidirectionalS,
    vertex_t
> graph_type;

Now we construct it:

graph_type graph;

Let's use a non portable trick that speeds up graph construction:

static const std::size_t vertex_count = 5;
graph.m_vertices.reserve(vertex_count);

Now we are ready to add vertexes to the graph:

typedef boost::graph_traits<graph_type>::vertex_descriptor descriptor_t;
descriptor_t cpp = boost::add_vertex(vertex_t("C++"), graph);
descriptor_t stl = boost::add_vertex(vertex_t("STL"), graph);
descriptor_t boost = boost::add_vertex(vertex_t("Boost"), graph);
descriptor_t guru = boost::add_vertex(vertex_t("C++ guru"), graph);
descriptor_t ansic = boost::add_vertex(vertex_t("C"), graph);

It is time to connect vertexes with edges:

boost::add_edge(cpp, stl, graph);
boost::add_edge(stl, boost, graph);
boost::add_edge(boost, guru, graph);
boost::add_edge(ansic, guru, graph);

We make a function that searches for a vertex:

template <class GraphT>
void find_and_print(const GraphT& g, boost::string_ref name) {

Now we will write code that gets iterators to all vertexes:

typedef typename boost::graph_traits<graph_type>::vertex_iterator vert_it_t;
vert_it_t it, end;
boost::tie(it, end) = boost::vertices(g);

It's time to run a search for the required vertex:

typedef boost::graph_traits<graph_type>::vertex_descriptor desc_t;
for (; it != end; ++it) {
    desc_t desc = *it;
    if (boost::get(boost::vertex_bundle, g)[desc] == name.data()) {
        break;
    }
}
assert(it != end);
std::cout << name << '\n';
} /* find_and_print */

How it works...

In step 1, we are describing what our graph must look like and upon what types it must be based. boost::adjacency_list is a class that represents graphs as a two-dimensional structure, where the first dimension contains vertexes and the second dimension contains edges for that vertex. boost::adjacency_list must be the default choice for representing a graph; it suits most cases. The first template parameter of boost::adjacency_list describes the structure used to represent the edge list for each of the vertexes; the second one describes a structure to store vertexes. We can choose different STL containers for those structures using specific selectors, as listed in the following table:

Selector         | STL container
boost::vecS      | std::vector
boost::listS     | std::list
boost::slistS    | std::slist
boost::setS      | std::set
boost::multisetS | std::multiset
boost::hash_setS | std::hash_set

The third template parameter is used to make an undirected, directed, or bidirectional graph. Use the boost::undirectedS, boost::directedS, and boost::bidirectionalS selectors respectively. The fifth template parameter describes the datatype that will be used as the vertex. In our example, we chose std::string. We can also support a datatype for edges and provide it as a template parameter. Steps 2 and 3 are trivial, but at step 4 you will see a non portable way to speed up graph construction. In our example, we use std::vector as a container for storing vertexes, so we can force it to reserve memory for the required amount of vertexes. This leads to less memory allocations/deallocations and copy operations during insertion of vertexes into the graph. This step is non-portable because it is highly dependent on the current implementation of boost::adjacency_list and on the chosen container type for storing vertexes.
At step 4, we see how vertexes can be added to the graph. Note how boost::graph_traits<graph_type> has been used. The boost::graph_traits class is used to get types that are specific for a graph type. We'll see its usage and the description of some graph-specific types later in this article. Step 5 shows what we need do to connect vertexes with edges. If we had provided a datatype for the edges, adding an edge would look as follows: boost::add_edge(ansic, guru, edge_t(initialization_parameters), graph) Note that at step 6 the graph type is a template parameter. This is recommended to achieve better code reusability and make this function work with other graph types. At step 7, we see how to iterate over all of the vertexes of the graph. The type of vertex iterator is received from boost::graph_traits. The function boost::tie is a part of Boost.Tuple and is used for getting values from tuples to the variables. So calling boost::tie(it, end) = boost::vertices(g) will put the begin iterator into the it variable and the end iterator into the end variable. It may come as a surprise to you, but dereferencing a vertex iterator does not return vertex data. Instead, it returns the vertex descriptor desc, which can be used in boost::get(boost::vertex_bundle, g)[desc] to get vertex data, just as we have done in step 8. The vertex descriptor type is used in many of the Boost.Graph functions; we saw its use in the edge construction function in step 5. As already mentioned, the Boost.Graph library contains the implementation of many algorithms. You will find many search policies implemented, but we won't discuss them in this article. We will limit this recipe to only the basics of the graph library. There's more... The Boost.Graph library is not a part of C++11 and it won't be a part of C++1y. The current implementation does not support C++11 features. If we are using vertexes that are heavy to copy, we may gain speed using the following trick: vertex_descriptor desc = boost::add_vertex(graph); boost::get(boost::vertex_bundle, g_)[desc] = std::move(vertex_data); It avoids copy constructions of boost::add_vertex(vertex_data, graph) and uses the default construction with move assignment instead. The efficiency of Boost.Graph depends on multiple factors, such as the underlying containers types, graph representation, edge, and vertex datatypes. Visualizing graphs Making programs that manipulate graphs was never easy because of issues with visualization. When we work with STL containers such as std::map and std::vector, we can always print the container's contents and see what is going on inside. But when we work with complex graphs, it is hard to visualize the content in a clear way: too many vertexes and too many edges. In this recipe, we'll take a look at the visualization of Boost.Graph using the Graphviz tool. Getting ready To visualize graphs, you will need a Graphviz visualization tool. Knowledge of the preceding recipe is also required. How to do it... Visualization is done in two phases. In the first phase, we make our program output the graph's description in a text format; in the second phase, we import the output from the first step to some visualization tool. The numbered steps in this recipe are all about the first phase. 
Let's write the std::ostream operator for graph_type as done in the preceding recipe:

#include <boost/graph/graphviz.hpp>

std::ostream& operator<<(std::ostream& out, const graph_type& g) {
  detail::vertex_writer<graph_type> vw(g);
  boost::write_graphviz(out, g, vw);
  return out;
}

The detail::vertex_writer structure, used in the preceding step, must be defined as follows:

namespace detail {
  template <class GraphT>
  class vertex_writer {
    const GraphT& g_;
  public:
    explicit vertex_writer(const GraphT& g) : g_(g) {}
    template <class VertexDescriptorT>
    void operator()(std::ostream& out, const VertexDescriptorT& d) const {
      out << " [label=\"" << boost::get(boost::vertex_bundle, g_)[d] << "\"]";
    }
  }; // vertex_writer
} // namespace detail

That's all. Now, if we visualize the graph from the previous recipe using the std::cout << graph; command, we get the following DOT description:

digraph G {
  0 [label="C++"];
  1 [label="STL"];
  2 [label="Boost"];
  3 [label="C++ guru"];
  4 [label="C"];
  0->1 ;
  1->2 ;
  2->3 ;
  4->3 ;
}

This output can be used to create graphical pictures using the dot command-line utility:

$ dot -Tpng -o dot.png

The output of the preceding command is depicted in the following figure. We can also use the Gvedit or XDot programs for visualization if the command line frightens you.

How it works...

The Boost.Graph library contains a function to output graphs in Graphviz (DOT) format. If we write boost::write_graphviz(out, g) with two parameters in step 1, the function will output a graph picture with vertexes numbered from 0. That's not very useful, so we provide an instance of the vertex_writer class that outputs vertex names. As we can see in step 2, the format of output must be DOT, which is understood by the Graphviz tool. You may need to read the Graphviz documentation for more info about the DOT format. If you wish to add some data to the edges during visualization, we need to provide an instance of the edge visualizer as a fourth parameter to boost::write_graphviz.

There's more...

C++11 does not contain Boost.Graph or the tools for graph visualization. But you do not need to worry—there are a lot of other graph formats and visualization tools, and Boost.Graph can work with plenty of them.

Using a true random number generator

I know of many examples of commercial products that use incorrect methods for getting random numbers. It's a shame that some companies still use rand() in cryptography and banking software. Let's see how to get a fully random uniform distribution using Boost.Random that is suitable for banking software.

Getting ready

Basic knowledge of C++ is required for this recipe. Knowledge of different types of distributions will also be helpful. The code in this recipe requires linking against the boost_random library.

How to do it...

To create a true random number, we need some help from the operating system or processor. This is how it can be done using Boost:

We'll need to include the following headers:

#include <boost/config.hpp>
#include <boost/random/random_device.hpp>
#include <boost/random/uniform_int_distribution.hpp>

Advanced random number providers have different names under different platforms:

static const std::string provider =
#ifdef BOOST_WINDOWS
  "Microsoft Strong Cryptographic Provider"
#else
  "/dev/urandom"
#endif
;

Now we are ready to initialize the generator with Boost.Random:

boost::random_device device(provider);

Let's get a uniform distribution that returns a value between 1000 and 65535:

boost::random::uniform_int_distribution<unsigned short> random(1000);

That's it.
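Assembled into a single translation unit, the steps above amount to roughly the following sketch (link the program against the boost_random library, as noted in the Getting ready section):

```cpp
#include <boost/config.hpp>
#include <boost/random/random_device.hpp>
#include <boost/random/uniform_int_distribution.hpp>
#include <iostream>
#include <string>

int main() {
    // Platform-specific entropy provider, as chosen above
    static const std::string provider =
#ifdef BOOST_WINDOWS
        "Microsoft Strong Cryptographic Provider"
#else
        "/dev/urandom"
#endif
    ;

    boost::random_device device(provider);
    boost::random::uniform_int_distribution<unsigned short> random(1000);

    // Each call draws a fresh hardware-backed random number in [1000, 65535]
    std::cout << random(device) << '\n';
}
```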
Now we can get true random numbers using the random(device) call. How it works... Why does the rand() function not suit banking? Because it generates pseudo-random numbers, which means that the hacker could predict the next generated number. This is an issue with all pseudo-random number algorithms. Some algorithms are easier to predict and some harder, but it's still possible. That's why we are using boost::random_device in this example (see step 3). That device gathers information about random events from all around the operating system to construct an unpredictable hardware-generated number. The examples of such events are delays between pressed keys, delays between some of the hardware interruptions, and the internal CPU random number generator. Operating systems may have more than one such type of random number generators. In our example for POSIX systems, we used /dev/urandom instead of the more secure /dev/random because the latter remains in a blocked state until enough random events have been captured by the OS. Waiting for entropy could take seconds, which is usually unsuitable for applications. Use /dev/random to create long-lifetime GPG/SSL/SSH keys. Now that we are done with generators, it's time to move to step 4 and talk about distribution classes. If the generator just generates numbers (usually uniformly distributed), the distribution class maps one distribution to another. In step 4, we made a uniform distribution that returns a random number of unsigned short type. The parameter 1000 means that distribution must return numbers greater or equal to 1000. We can also provide the maximum number as a second parameter, which is by default equal to the maximum value storable in the return type. There's more... Boost.Random has a huge number of true/pseudo random generators and distributions for different needs. Avoid copying distributions and generators; this could turn out to be an expensive operation. C++11 has support for different distribution classes and generators. You will find all of the classes from this example in the <random> header in the std:: namespace. The Boost.Random libraries do not use C++11 features, and they are not really required for that library either. Should you use Boost implementation or STL? Boost provides better portability across systems; however, some STL implementations may have assembly-optimized implementations and might provide some useful extensions. Using portable math functions Some projects require specific trigonometric functions, a library for numerically solving ordinary differential equations, and working with distributions and constants. All of those parts of Boost.Math would be hard to fit into even a separate book. A single recipe definitely won't be enough. So let's focus on very basic everyday-use functions to work with float types. We'll write a portable function that checks an input value for infinity and not-a-number (NaN) values and changes the sign if the value is negative. Getting ready Basic knowledge of C++ is required for this recipe. Those who know C99 standard will find a lot in common in this recipe. How to do it... 
Perform the following steps to check the input value for infinity and NaN values and change the sign if the value is negative: We'll need the following headers: #include <boost/math/special_functions.hpp> #include <cassert> Asserting for infinity and NaN can be done like this: template <class T> void check_float_inputs(T value) { assert(!boost::math::isinf(value)); assert(!boost::math::isnan(value)); Use the following code to change the sign: if (boost::math::signbit(value)) { value = boost::math::changesign(value); } // ... } // check_float_inputs That's it! Now we can check that check_float_inputs(std::sqrt(-1.0)) and check_float_inputs(std::numeric_limits<double>::max() * 2.0) will cause asserts. How it works... Real types have specific values that cannot be checked using equality operators. For example, if the variable v contains NaN, assert(v!=v) may or may not pass depending on the compiler. For such cases, Boost.Math provides functions that can reliably check for infinity and NaN values. Step 3 contains the boost::math::signbit function, which requires clarification. This function returns a signed bit, which is 1 when the number is negative and 0 when the number is positive. In other words, it returns true if the value is negative. Looking at step 3 some readers might ask, "Why can't we just multiply by -1 instead of calling boost::math::changesign?". We can. But multiplication may work slower than boost::math::changesign and won't work for special values. For example, if your code can work with nan, the code in step 3 will be able to change the sign of -nan and write nan to the variable. The Boost.Math library maintainers recommend wrapping math functions from this example in round parenthesis to avoid collisions with C macros. It is better to write (boost::math::isinf)(value) instead of boost::math::isinf(value). There's more... C99 contains all of the functions described in this recipe. Why do we need them in Boost? Well, some compiler vendors think that programmers do not need them, so you won't find them in one very popular compiler. Another reason is that the Boost.Math functions can be used for classes that behave like numbers. Boost.Math is a very fast, portable, reliable library.
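As a usage sketch, the checks from this recipe can be driven from a small self-contained program; the helper is adapted slightly here to return the adjusted value so that the result can be printed:

```cpp
#include <boost/math/special_functions.hpp>
#include <cassert>
#include <iostream>

// Same checks as in the recipe, adapted to return the (possibly sign-flipped) value.
template <class T>
T check_float_inputs(T value) {
    assert(!(boost::math::isinf)(value));
    assert(!(boost::math::isnan)(value));
    if ((boost::math::signbit)(value)) {
        value = (boost::math::changesign)(value);
    }
    return value;
}

int main() {
    std::cout << check_float_inputs(-2.5) << '\n';   // prints 2.5
    std::cout << check_float_inputs(100.0) << '\n';  // prints 100
    // check_float_inputs(std::sqrt(-1.0)) would trigger the NaN assert
}
```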


Exploring the Top New Features of the CLR

Packt
30 Aug 2013
10 min read
(For more resources related to this topic, see here.) One of its most important characteristics is that it is an in-place substitution of the .NET 4.0 and only runs on Windows Vista SP2 or later systems. .NET 4.5 breathes asynchronous features and makes writing async code even easier. It also provides us with the Task Parallel Library (TPL) Dataflow Library to help us create parallel and concurrent applications. Another very important addition is the portable libraries, which allow us to create managed assemblies that we can refer through different target applications and platforms, such as Windows 8, Windows Phone, Silverlight, and Xbox. We couldn't avoid mentioning Managed Extensibility Framework (MEF), which now has support for generic types, a convention-based programming model, and multiple scopes. Of course, this all comes together with a brand-new tooling, Visual Studio 2012, which you can find at http://msdn.microsoft.com/en-us/vstudio. Just be careful if you have projects in .NET 4.0 since it is an in-place install. For this article I'd like to give a special thanks to Layla Driscoll from the Microsoft .NET team who helped me summarize the topics, focus on what's essential, and showcase it to you, dear reader, in the most efficient way possible. Thanks, Layla. There are some features that we will not be able to explore through this article as they are just there and are part of the CLR but are worth explaining for better understanding: Support for arrays larger than 2 GB on 64-bit platforms, which can be enabled by an option in the app config file. Improved performance on the server's background garbage collection, which must be enabled in the <gcServer> element in the runtime configuration schema. Multicore JIT: Background JIT (Just In Time) compilation on multicore CPUs to improve app performance. This basically creates profiles and compiles methods that are likely to be executed on a separate thread. Improved performance for retrieving resources. The culture-sensitive string comparison (sorting, casing, normalization, and so on) is delegated to the operating system when running on Windows 8, which implements Unicode 6.0. On other platforms, the .NET framework will behave as in the previous versions, including its own string comparison data implementing Unicode 5.0. Next we will explore, in practice, some of these features to get a solid grasp on what .NET 4.5 has to offer and, believe me, we will have our hands full! Creating a portable library Most of us have often struggled and hacked our code to implement an assembly that we could use in different .NET target platforms. Portable libraries are here to help us to do exactly this. Now there is an easy way to develop a portable assembly that works without modification in .NET Framework, Windows Store apps style, Silverlight, Windows Phone, and XBOX 360 applications. The trick is that the Portable Class Library project supports a subset of assemblies from these platforms, providing us a Visual Studio template. This article will show you how to implement a basic application and help you get familiar with Visual Studio 2012. Getting ready In order to use this section you should have Visual Studio 2012 installed. Note that you will need a Visual Studio 2012 SKU higher than Visual Studio Express for it to fully support portable library projects. How to do it... Here we will create a portable library and see how it works: First, open Visual Studio 2012 and create a new project. 
We will select the Portable Class Library template from the Visual C# category. Now open the Properties dialog box of our newly created portable application and, in the library we will see a new section named Target frameworks. Note that, for this type of project, the dialog box will open as soon as the project is created, so opening it will only be necessary when modifying it afterwards. If we click on the Change button, we will see all the multitargeting possibilities for our class. We will see that we can target different versions of a framework. There is also a link to install additional frameworks. The one that we could install right now is XNA but we will click on Cancel and let the dialog box be as it is. Next, we will click on the show all files icon at the top of the Solution Explorer window (the icon with two papers and some dots behind them), right-click on the References folder, and click on Add Reference. We will observe on doing so that we are left with a .NET subset of assemblies that are compatible with the chosen target frameworks. We will add the following lines to test the portable assembly: using System;using System.Collections.Generic;using System.Linq;using System.Text;namespace pcl_myFirstPcl{public static class MyPortableClass {public static string GetSomething() {return "I am a portable class library"; } }} Build the project. Next, to try this portable assembly we could add, for example, a Silverlight project to the solution, together with an ASP.NET Web application project to wrap the Silverlight. We just need to add a reference to the portable library project and add a button to the MainPage.xaml page that calls the portable library static method we created. The code behind it should look as follows. Remember to add a using reference to our portable library namespace. using System.Windows.Documents;using System.Windows.Input;using System.Windows.Media;using System.Windows.Media.Animation;using System.Windows.Shapes;using pcl_myFirstPcl;namespace SilverlightApplication_testPCL{public partial class MainPage : UserControl {public MainPage() {InitializeComponent(); }private void Button_Click_1(object sender, RoutedEventArgs e) { String something = MyPortableClass.GetSomething();MessageBox.Show("Look! - I got this string from my portable class library: " + something); } }} We can execute the code and check if it works. In addition, we could add other types of projects, reference the Portable Library Class, and ensure that it works properly. How it works... We created a portable library from the Portable Class Library project template and selected the target frameworks. We saw the references; note that it reinforces the visibility of the assemblies that break the compatibility with the targeted platforms, helping us to avoid mistakes. Next we added some code, a target reference application that referenced the portable class, and used it. There's more... We should be aware that when deploying a .NET app that references a Portable Class Library assembly, we must specify its dependency to the correct version of the .NET Framework, ensuring that the required version is installed. A very common and interesting usage of the Portable Class Library would be to implement MVVM. For example, we could put the View Model and Model classes inside a portable library and share it with Windows Store apps, Silverlight, and Windows Phone applications. 
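To make the MVVM idea concrete, here is a minimal sketch of the kind of View Model class that could live inside such a portable library. The class and property names (CustomerViewModel, Name) are purely illustrative and are not part of the recipe's code; the point is that it only depends on portable types such as INotifyPropertyChanged, so it can be referenced from Windows Store, Silverlight, and Windows Phone projects alike:

using System.ComponentModel;

namespace pcl_myFirstPcl
{
    // Hypothetical View Model relying only on portable types.
    public class CustomerViewModel : INotifyPropertyChanged
    {
        private string _name;

        public string Name
        {
            get { return _name; }
            set
            {
                if (_name != value)
                {
                    _name = value;
                    OnPropertyChanged("Name");
                }
            }
        }

        public event PropertyChangedEventHandler PropertyChanged;

        protected void OnPropertyChanged(string propertyName)
        {
            var handler = PropertyChanged;
            if (handler != null)
            {
                handler(this, new PropertyChangedEventArgs(propertyName));
            }
        }
    }
}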
The architecture is described in the following diagram, which has been taken from MSDN (http://msdn.microsoft.com/en-us/library/hh563947%28v=vs.110%29.aspx): It is really interesting that the list of target frameworks is not limited and we even have a link to install additional frameworks, so I guess that the number of target frameworks will eventually grow.

Controlling the timeout in regular expressions
.NET 4.5 gives us improved control over the resolution of regular expressions so we can react when they don't resolve on time. This is extremely useful if we don't control the regular expressions/patterns, such as the ones provided by the users. A badly formed pattern can have bad performance due to excessive backtracking, and this new feature is really a lifesaver.

How to do it...
Next we are going to control the timeout in the regular expression, where we will react if the operation takes more than 1 millisecond:
Create a new Visual Studio project of type Console Application, named caRegexTimeout.
Open the Program.cs file and add a using clause for regular expressions:
using System.Text.RegularExpressions;
Add the following method and call it from the Main function:
private static void ExecuteRegexExpression() {
    bool RegExIsMatch = false;
    string testString = "One Tile to rule them all, One Tile to find them… ";
    string RegExPattern = @"([a-z ]+)*!";
    TimeSpan tsRegexTimeout = TimeSpan.FromMilliseconds(1);
    try {
        RegExIsMatch = Regex.IsMatch(testString, RegExPattern, RegexOptions.None, tsRegexTimeout);
    }
    catch (RegexMatchTimeoutException ex) {
        Console.WriteLine("Timeout!!");
        Console.WriteLine("- Timeout specified: " + ex.MatchTimeout);
    }
    catch (ArgumentOutOfRangeException ex) {
        Console.WriteLine("ArgumentOutOfRangeException!!");
        Console.WriteLine(ex.Message);
    }
    Console.WriteLine("Finished successfully: " + RegExIsMatch.ToString());
    Console.ReadLine();
}
If we execute it, we will see that it doesn't finish successfully, showing us some details in the console window. Next, we will change testString and RegExPattern to:
String testString = "jose@brainsiders.com";
String RegExPattern = @"^([\w\-\.]+)@([\w\-\.]+)\.[a-zA-Z]{2,4}$";
If we run it, we will now see that it runs and finishes successfully.

How it works...
The Regex.IsMatch() method now accepts a matchTimeout parameter of type TimeSpan, indicating the maximum time that we allow for the matching operation. If the execution time exceeds this amount, a RegexMatchTimeoutException is thrown. In our code, we have captured it with a try-catch statement to provide a custom message and of course to react upon a badly formed regex pattern taking too much time. We have tested it with an expression that takes more time to validate and we got the timeout. When we changed the expression to a good one with a better execution time, the timeout was not reached. Additionally, we also watched out for the ArgumentOutOfRangeException, which is thrown when the TimeSpan is zero, negative, or greater than about 24 days.

There's more...
We could also set a global matchTimeout for the application through the "REGEX_DEFAULT_MATCH_TIMEOUT" property with the AppDomain.SetData method:
AppDomain.CurrentDomain.SetData("REGEX_DEFAULT_MATCH_TIMEOUT", TimeSpan.FromMilliseconds(200));
Anyway, if we specify the matchTimeout parameter, we will override the global value.

Defining the culture for an application domain
With .NET 4.5, we have in our hands a way of specifying the default culture for all of our application threads in a quick and efficient way.

How to do it...
We will now define the default culture for our application domain as follows: Create a new Visual Studio project of type Console Application named caCultureAppDomain. Open the Program.cs file and add the using clause for globalization: using System.Globalization; Next, add the following methods: static void DefineAppDomainCulture() {String CultureString = "en-US";DisplayCulture();CultureInfo.DefaultThreadCurrentCulture = CultureInfo.CreateSpecificCulture(CultureString);CultureInfo.DefaultThreadCurrentUICulture = CultureInfo.CreateSpecificCulture(CultureString);DisplayCulture();Console.ReadLine();}static void DisplayCulture() {Console.WriteLine("App Domain........: {0}", AppDomain.CurrentDomain.Id);Console.WriteLine("Default Culture...: {0}", CultureInfo.DefaultThreadCurrentCulture);Console.WriteLine("Default UI Culture: {0}", CultureInfo.DefaultThreadCurrentUICulture);} Then add a call to the DefineAppDomainCulture() method. If we execute it, we will observe that the initial default cultures are null and we specify them to become the default for the App Domain. How it works... We used the CultureInfo class to specify the culture and the UI of the application domain and all its threads. This is easily done through the DefaultThreadCurrentCulture and DefaultThreadCurrentUICulture properties. There's more... We must be aware that these properties affect only the current application domain, and if it changes we should control them.
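As a quick illustration of why this matters, the following snippet (our own example, not part of the recipe) shows that a thread started after setting the defaults picks up the new culture automatically, whereas before .NET 4.5 each new thread defaulted to the system culture:

using System;
using System.Globalization;
using System.Threading;

class CultureDefaultsDemo
{
    static void Main()
    {
        // Set the application-domain-wide defaults once...
        CultureInfo.DefaultThreadCurrentCulture = CultureInfo.CreateSpecificCulture("en-US");
        CultureInfo.DefaultThreadCurrentUICulture = CultureInfo.CreateSpecificCulture("en-US");

        // ...and any thread created afterwards inherits them.
        var worker = new Thread(() =>
            Console.WriteLine("Worker culture: {0}", Thread.CurrentThread.CurrentCulture));
        worker.Start();
        worker.Join();
    }
}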

Writing Your First Lines of CoffeeScript

Packt
29 Aug 2013
9 min read
(For more resources related to this topic, see here.) Following along with the examples I implore you to open up a console as you read this article and try out the examples for yourself. You don't strictly have to; I'll show you any important output from the example code. However, following along will make you more comfortable with the command-line tools, give you a chance to write some CoffeeScript yourself, and most importantly, will give you an opportunity to experiment. Try changing the examples in small ways to see what happens. If you're confused about a piece of code, playing around and looking at the outcome will often help you understand what's really going on. The easiest way to follow along is to simply open up a CoffeeScript console. Just run this from the command line to get an interactive console: coffee If you'd like to save all your code to return to later, or if you wish to work on something more complicated, you can create files instead and run those. Give your files the .coffee extension , and run them like this: coffee my_sample_code.coffee Seeing the compiled JavaScript The golden rule of CoffeeScript, according to the CoffeeScript documentation, is: It's just JavaScript. This means that it is a language that compiles down to JavaScript in a simple fashion, without any complicated extra moving parts. This also means that it's easy, with a little practice, to understand how the CoffeeScript you are writing will compile into JavaScript. Your JavaScript expertise is still applicable, but you are freed from the tedious parts of the language. You should understand how the generated JavaScript will work, but you do not need to actually write the JavaScript. To this end, we'll spend a fair amount of time, especially in this article, comparing CoffeeScript code to the compiled JavaScript results. It's like peeking behind the wizard's curtain! The new language features won't seem so intimidating once you know how they work, and you'll find you have more trust in CoffeeScript when you can check in on the code it's generating. After a while, you won't even need to check in at all. I'll show you the corresponding JavaScript for most of the examples in this article, but if you write your own code, you may want to examine the output. This is a great way to experiment and learn more about the language! Unfortunately, if you're using the CoffeeScript console to follow along, there isn't a great way to see the compiled output (most of the time, it's nice to have all that out of sight—just not right now!). You can see the compiled JavaScript in several other easy ways, though. The first is to put your code in a file and compile it. The other is to use the Try CoffeeScript tool on http://coffeescript.org/. It brings up an editor right in the browser that updates the output as you type. CoffeeScript basics Let's get started! We'll begin with something simple: x = 1 + 1 You can probably guess what JavaScript this will compile to: var x;x = 1 + 1; Statements One of the very first things you will notice about CoffeeScript is that there are no semicolons. Statements are ended by a new line. The parser usually knows if a statement should be continued on the next line. 
You can explicitly tell it to continue to the next line by using a backslash at the end of the first line: x = 1+ 1 It's also possible to stretch function calls across multiple lines, as is common in "fluent" JavaScript interfaces: "foo" .concat("barbaz") .replace("foobar", "fubar") You may occasionally wish to place more than one statement on a single line (for purely stylistic purposes). This is the one time when you will use a semicolon in CoffeeScript: x = 1; y = 2 Both of these situations are fairly rare. The vast majority of the time, you'll find that one statement per line works great. You might feel a pang of loss for your semicolons at first, but give it time. The calluses on your pinky finger will fall off, your eyes will adjust to the lack of clutter, and soon enough you won't remember what good you ever saw in semicolons. Variables CoffeeScript variables look a lot like JavaScript variables, with one big difference: no var! CoffeeScript puts all variables in the local scope by default. x = 1y = 2z = x + y compiles to: var x, y, z;x = 1;y = 2;z = x + y; Believe it or not, this is one of my absolute top favorite things about CoffeeScript. It's so easy to accidentally introduce variables to the global scope in JavaScript and create subtle problems for yourself. You never need to worry about that again; from now on, it's handled automatically. Nothing is getting into the global scope unless you want it there. If you really want to put a variable in the global scope and you're really sure it's a good idea, you can easily do this by attaching it to the top-level object. In the CoffeeScript console, or in Node.js programs, this is the global object: global.myGlobalVariable = "I'm so worldly!" In a browser, we use the window object instead: window.myGlobalVariable = "I'm so worldly!" Comments Any line that begins with a # is a comment. Anything after a # in the middle of a line will also be a comment. # This is a comment."Hello" # This is also a comment Most of the time, CoffeeScripters use only this style, even for multiline comments. # Most multiline comments simply wrap to the# next line, each begun with a # and a space. It is also possible (but rare in the CoffeeScript world) to use a block comment, which begins and ends with ###. The lines in between these characters do not need to begin with a #. ###This is a block comment. You can get artistic in here.<(^^)>### Regular comments are not included in the compiled JavaScript, but block comments are, delineated by /* */. Calling functions Function invocation can look very familiar in CoffeeScript: console.log("Hello, planet!") Other than the missing semicolon, that's exactly like JavaScript, right? But function invocation can also look different: console.log "Hello, planet!" Whoa! Now we're in unfamiliar ground. This will work exactly the same as the previous example, though. Any time you call a function with arguments, the parentheses are optional. This also works with more than one argument: Math.pow 2, 3 While you might be a little nervous writing this way at first, I encourage you to try it and give yourself time to become comfortable with it. Idiomatic CoffeeScript style eliminates parentheses whenever it's sensible to do so. What do I mean by "sensible"? Well, imagine you're reading your code for the first time, and ask yourself which style makes it easiest to comprehend. Usually it's most readable without parentheses, but there are some occasions when your code is complex enough that judicious use of parentheses will help. 
Use your best judgment, and everything will turn out fine. There is one exception to the optional parentheses rule. If you are invoking a function with no arguments, you must use parentheses: Date.now() Why? The reason is simple. CoffeeScript preserves JavaScript's treatment of functions as first-class citizens.
myFunc = Date.now #=> myFunc holds a function object that hasn't been executed
myDate = Date.now() #=> myDate holds the result of the function's execution
CoffeeScript's syntax is looser, but it must still be unambiguous. When no arguments are present, it's not clear whether you want to access the function object or execute the function. Requiring parentheses makes it clear which one you want, and still allows both kinds of functionality. This is part of CoffeeScript's philosophy of not deviating from the fundamentals of the JavaScript language. If functions were always executed instead of returned, CoffeeScript would no longer act like JavaScript, and it would be hard for you, the seasoned JavaScripter, to know what to expect. This way, once you understand a few simple concepts, you will know exactly what your code is doing. From this discussion, we can extract a more general principle: parentheses are optional, except when necessary to avoid ambiguity. Here's another situation in which you might encounter ambiguity: nested function calls.
Math.max 2, 3, Math.min 4, 5, 6
Yikes! What's happening there? Well, you can easily clear this up by adding parentheses. You may add parentheses to all the function calls, or you may add just enough to resolve the ambiguity:
# These two calls are equivalent
Math.max(2, 3, Math.min(4, 5, 6))
Math.max 2, 3, Math.min(4, 5, 6)
This makes it clear that you wish min to take 4, 5, and 6 as arguments. If you wished 6 to be an argument to max instead, you would place the parentheses differently.
# These two calls are equivalent
Math.max(2, 3, Math.min(4, 5), 6)
Math.max 2, 3, Math.min(4, 5), 6
Precedence Actually, the original version I showed you is valid CoffeeScript too! You just need to understand the precedence rules that CoffeeScript uses for functions. Arguments are assigned to functions from the inside out. Another way to think of this is that an argument belongs to the function that it's nearest to. So our original example is equivalent to the first variation we used, in which 4, 5, and 6 are arguments to min:
# These two calls are equivalent
Math.max 2, 3, Math.min 4, 5, 6
Math.max 2, 3, Math.min(4, 5, 6)
The parentheses are only absolutely necessary if our desired behavior doesn't match CoffeeScript's precedence—in this case, if we wanted 6 to be an argument to max. This applies to an unlimited level of nesting: threeSquared = Math.pow 3, Math.floor Math.min 4, Math.sqrt 5 Of course, at some point the elimination of parentheses turns from a question of whether you can to whether you should. You are now a master of the intricacies of CoffeeScript function-call parsing, but the other programmers reading your code might not be (and even if they are, they might prefer not to puzzle out what your code is doing). Avoid parentheses in simple cases, and use them judiciously in the more complicated situations.
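Putting the pieces from this article together, here is a small snippet of our own (not from the examples above) that you can paste into the coffee console; it uses local variables, comments, one-line multiple statements, and parenthesis-free calls resolved by the precedence rules just described:

# Everything covered so far in a few lines
base = 2
exponent = 10
result = Math.pow base, exponent
console.log result                                 # prints 1024
console.log Math.min result, Math.pow(base, 5)     # inner parentheses resolve the nesting; prints 32
x = 1; y = 2                                       # two statements on one line need a semicolon
console.log x + y                                  # prints 3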

Getting Started with Mule

Packt
26 Aug 2013
10 min read
(For more resources related to this topic, see here.) Mule ESB is a lightweight Java programming language. Through ESB, you can integrate or communicate with multiple applications. Mule ESB enables easy integration of existing systems, regardless of the different technologies that the applications use, including JMS, web services, JDBC, and HTTP. Understanding Mule concepts and terminologies Enterprise Service Bus (ESB) is an application that gives access to other applications and services. Its main task is to be the messaging and integration backbone of an enterprise. An ESB is a distributed middleware system to integrate different applications. All these applications communicate through the ESB. It consists of a set of service containers that integrate various types of applications. The containers are interconnected with a reliable messaging bus. Getting ready An ESB is used for integration using a service-oriented approach. Its main features are as follows: Polling JMS Message transformation and routing services Tomcat hot deployment Web service security We often use the abbreviation, VETRO, to summarize the ESB functionality: V– validate the schema validation E– enrich T– transform R– route (either itinerary or content based) O– operate (perform operations; they run at the backend) Before introducing any ESB, developers and integrators must connect different applications in a point-to-point fashion. How to do it... After the introduction of an ESB, you just need to connect each application to the ESB so that every application can communicate with each other through the ESB. You can easily connect multiple applications through the ESB, as shown in the following diagram: Need for the ESB You can integrate different applications using ESB. Each application can communicate through ESB: To integrate more than two or three services and/or applications To integrate more applications, services, or technologies in the future To use different communication protocols To publish services for composition and consumption For message transformation and routing   What is Mule ESB? Mule ESB is a lightweight Java-based enterprise service bus and integration platform that allows developers and integrators to connect applications together quickly and easily, enabling them to exchange data. There are two editions of Mule ESB: Community and Enterprise. Mule ESB Enterprise is the enterprise-class version of Mule ESB, with additional features and capabilities that are ideal for clustering and performance tuning, DataMapper, and the SAP connector. Mule ESB Community and Enterprise editions are built on a common code base, so it is easy to upgrade from Mule ESB Community to Mule ESB Enterprise. Mule ESB enables easy integration of existing systems, regardless of the different technologies that the applications use, including JMS, web services, JDBC, and HTTP. The key advantage of an ESB is that it allows different applications to communicate with each other by acting as a transit system for carrying data between applications within your enterprise or across the Internet. 
Mule ESB includes powerful capabilities that include the following: Service creation and hosting: It exposes and hosts reusable services using Mule ESB as a lightweight service container Service mediation: It shields services from message formats and protocols, separate business logic from messaging, and enables location-independent service calls Message routing: It routes, filters, aggregates, and re-sequences messages based on content and rules Data transformation: It exchanges data across varying formats and transport protocols Mule ESB is lightweight but highly scalable, allowing you to start small and connect more applications over time. Mule provides a Java-based messaging framework. Mule manages all the interactions between applications and components transparently. Mule provides transformation, routing, filtering, Endpoint, and so on. How it works... When you examine how a message flows through Mule ESB, you can see that there are three layers in the architecture, which are listed as follows: Application Layer Integration Layer Transport Layer Likewise, there are three general types of tasks you can perform to configure and customize your Mule deployment. Refer to the following diagram: The following list talks about Mule and its configuration: Service component development: This involves developing or re-using the existing POJOs, which is a class with attributes and it generates the get and set methods, Cloud connectors, or Spring Beans that contain the business logic and will consume, process, or enrich messages. Service orchestration: This involves configuring message processors, routers, transformers, and filters that provide the service mediation and orchestration capabilities required to allow composition of loosely coupled services using a Mule flow. New orchestration elements can be created also and dropped into your deployment. Integration: A key requirement of service mediation is decoupling services from the underlying protocols. Mule provides transport methods to allow dispatching and receiving messages on different protocol connectors. These connectors are configured in the Mule configuration file and can be referenced from the orchestration layer. Mule supports many existing transport methods and all the popular communication protocols, but you may also develop a custom transport method if you need to extend Mule to support a particular legacy or proprietary system. Spring beans: You can construct service components from Spring beans and define these Spring components through a configuration file. If you don't have this file, you will need to define it manually in the Mule configuration file. Agents: An agent is a service that is created in Mule Studio. When you start the server, an agent is created. When you stop the server, this agent will be destroyed. Connectors: The Connector is a software component. Global configuration: Global configuration is used to set the global properties and settings. Global Endpoints: Global Endpoints can be used in the Global Elements tab. We can use the global properties' element as many times in a flow as we want. For that, we must pass the global properties' reference name. Global message processor: A global message processor observes a message or modifies either a message or the message flow; examples include transformers and filters. Transformers: A transformer converts data from one format to another. You can define them globally and use them in multiple flows. Filters: Filters decide which Mule messages should be processed. 
Filters specify the conditions that must be met for a message to be routed to a service or continue progressing through a flow. There are several standard filters that come with Mule ESB, which you can use, or you can create your own filters. Models: It is a logical grouping of services, which are created in Mule Studio. You can start and stop all the services inside a particular model. Services: You can define one or more services that wrap your components (business logic) and configure Routers, Endpoints, transformers, and filters specifically for that service. Services are connected using Endpoints. Endpoints: Services are connected using Endpoints. It is an object on which the services will receive (inbound) and send (outbound) messages. Flow: Flow is used for a message processor to define a message flow between a source and a target. Setting up the Mule IDE The developers who were using Mule ESB over other technologies such as Liferay Portal, Alfresco ECM, or Activiti BPM can use Mule IDE in Eclipse without configuring the standalone Mule Studio in the existing environment. In recent times, MuleSoft (http://www.mulesoft.org/) only provides Mule Studio from Version 3.3 onwards, but not Mule IDE. If you are using the older version of Mule ESB, you can get Mule IDE separately from http://dist.muleforge.org/mule-ide/releases/. Getting ready To set Mule IDE, we need Java to be installed on the machine and its execution path should be set in an environment variable. We will now see how to set up Java on our machine. Firstly, download JDK 1.6 or a higher version from the following URL: http://www.oracle.com/technetwork/java/javase/downloads/jdk6downloads-1902814.html. In your Windows system, go to Start | Control Panel | System | Advanced. Click on Environment Variables under System Variables, find Path, and click on it. In the Edit window, modify the path by adding the location of the class to its value. If you do not have the item Path, you may select the option of adding a new variable and adding Path as the name and the location of the class as its value. Close the window, reopen the command prompt window, and run your Java code. How to do it... If you go with Eclipse, you have to download Mule IDE Standalone 3.3. Download Mule ESB 3.3 Community edition from the following URL: http://www.mulesoft.org/extensions/mule-ide. Unzip the downloaded file and set MULE_HOME as the environment variable. Download the latest version of Eclipse from http://www.eclipse.org/downloads/. After installing Eclipse, you now have to integrate Mule IDE in the Eclipse. If you are using Eclipse Version 3.4 (Galileo), perform the following steps to install Mule IDE. If you are not using Version 3.4 (Galileo), the URL for downloading will be different. Open Eclipse IDE. Go to Help | Install New Software…. Write the URL in the Work with: textbox: http://dist.muleforge.org/muleide/updates/3.4/ and press Enter. Select the Mule IDE checkbox. Click on the Next button. Read and accept the license agreement terms. Click on the Finish button. This will take some time. When it prompts for a restart, shut it down and restart Eclipse. Mule configuration After installing Mule IDE, you will now have to configure Mule in Eclipse. Perform the following steps: Open Eclipse IDE. Go to Window | Preferences. Select Mule, add the distribution folder mule as standalone 3.3; click on the Apply button and then on the OK button. This way you can configure Mule with Eclipse. 
Installing Mule Studio Mule Studio is a powerful, user-friendly Eclipse-based tool. Mule Studio has three main components: a package tree, a palette, and a canvas. Mule ESB easily creates flows as well as edits and tests them in a few minutes. Mule Studio is currently in public beta. It is based on drag-and-drop elements and supports two-way editing. Getting ready To install Mule Studio, download Mule Studio from http://www.mulesoft.org/download-mule-esb-community-edition. How to do it... Unzip the Mule Studio folder. Set the environment variable for Mule Studio. While starting with Mule Studio, the config.xml file will be created automatically by Mule Studio. The three main components of Mule Studio are as follows: A package tree A palette A canvas A package tree A package tree contains the entire structure of your project. In the following screenshot, you can see the package explorer tree. In this package explorer tree, under src/main/java, you can store the custom Java class. You can create a graphical flow from src/main/resources. In the app folder you can store the mule-deploy.properties file. The folders src, main, and app contain the flow of XML files. The folders src, main, and test contain flow-related test files. The Mule-project.xml file contains the project's metadata. You can edit the name, description, and server runtime version used for a specific project. JRE System Library contains the Java runtime libraries. Mule Runtime contains the Mule runtime libraries. A palette The second component is palette. The palette is the source for accessing Endpoints, components, transformers, and Cloud connectors. You can drag them from the palette and drop them onto the canvas in order to create flows. The palette typically displays buttons indicating the different types of Mule elements. You can view the content of each button by clicking on them. If you do not want to expand elements, click on the button again to hide the content. A canvas The third component is canvas; canvas is a graphical editor. In canvas you can create flows. The canvas provides a space that facilitates the arrangement of Studio components into Mule flows. In the canvas area you can configure each and every component, and you can add or remove components on the canvas.
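To give a feel for what ends up in the flow XML files that Studio generates on the canvas, the following is a rough sketch of a minimal Mule 3.3-style flow; the endpoint address, flow name, and logger message are invented for illustration, and the namespace and schema-location declarations are abbreviated, so treat it as a sketch rather than a ready-to-deploy configuration. It receives an HTTP request, logs the payload, and echoes it back:

<mule xmlns="http://www.mulesoft.org/schema/mule/core"
      xmlns:http="http://www.mulesoft.org/schema/mule/http">
    <!-- Receive an HTTP request, log the payload, and echo it back to the caller -->
    <flow name="echoFlow">
        <http:inbound-endpoint host="localhost" port="8081" path="echo"
                               exchange-pattern="request-response"/>
        <logger message="Received: #[payload]" level="INFO"/>
        <echo-component/>
    </flow>
</mule>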

BizTalk: The ESB Management Portal

Packt
19 Aug 2013
6 min read
(For more resources related to this topic, see here.) Registering services in UDDI Thanks to the ESB Toolkit, we can easily populate our organization's services registry in UDDI with the services that interact with the ESB, either because the ESB exposes them or because they can be consumed through it. Before we can register services in UDDI, we must first configure the registry settings.

Registry settings
The registry settings change how the UDDI registration functionality mentioned in the preceding section behaves:
UDDI Server: This sets the URL of the UDDI server.
Auto Publish: When enabled, any registry request will be automatically published. If it's disabled, the requests will require administrative approval.
Anonymous: This setting indicates whether to use anonymous access to connect to the UDDI server or to use the UDDI Publisher Service account.
Notification Enabled: This enables or disables the delivery of notifications when any registry activity occurs on the portal.
SMTP Server: This is the address of the SMTP server that will send notification e-mail messages.
Notification E-Mail: This is the e-mail address to which to send endpoint update notification e-mail messages.
E-Mail From Address: This is the address that will show up as sender in notification messages sent.
E-Mail Subject: This is the text to display in the subject line of notification e-mail messages.
E-Mail Body: This is the text for the body of notification e-mail messages.
Contact Name: This setting is the name of the UDDI administrator to notify of endpoint update requests.
Contact E-Mail: This setting is used for the e-mail address of the UDDI administrator for notifications of endpoint update requests.
The following screenshot shows all of the settings mentioned in the preceding list: In the ESB Management Portal, we can see in the top menu an entry that takes us to the Registry functionality, shown in the following screenshot: On this view, we can directly register a service into UDDI. To do this, first we have to search for the endpoint that we want to publish. These can be endpoints of services that the ESB consumes through Send ports, or endpoints of services that the ESB exposes through receive locations. As an example, we will publish one of the services exposed by the ESB through the GlobalBank.ESB sample application that comes with the ESB Toolkit. First, we will search on the New Registry Entry page for the endpoints in the GlobalBank.ESB application, as shown in the following screenshot: Once we get the results, we will click on the Publish link of the DynamicResolutionReqResp_SOAP endpoint that actually exposes the /ESB.NorthAmericanServices/CustomerOrder.asmx service. We will be presented with a screen where we can fill in further details about the service registry entry, such as the service provider under which we want to publish the service (or we can even create a new service provider that will get registered in UDDI as well). After clicking on the Publish button at the bottom of the page, we will be directed back to the New Registry Entry screen, where we can filter again and see how our new registry entry is in Pending status, as it needs to be approved by an administrator. We can access the Manage Pending Requests module through the corresponding submenu under the top-level Registry menu. There we can see if there are any new registry entries that might be pending for approval. By using the buttons to the left of each item, we can view the details of the request, edit them, and approve or delete the request.
Once we approve the request, we will receive a confirmation message on the portal telling us that it got approved. Then, we can go to the UDDI portal and look for the service provider that we just created, where we will see that our service got registered. The following screenshot shows how the service provider of the service we just published is shown in the UDDI portal: In the following screenshot we can see the actual service published, with its corresponding properties. With these simple steps, we can easily build our own services registry in UDDI based on the services our organization already has, so they can be used by the ESB or any other systems to discover services and know how to consume them.

Understanding the Audit Log
The Audit Log is a small reporting feature that is meant to provide information about the status of messages that have been resubmitted to the ESB through the resubmission module. We can access this module through the Manage Audit Log menu. We will be presented with a list of the messages that were resubmitted, whether they were resubmitted successfully or not, and we can even check the actual message that was resubmitted, as the message could have been modified before being resubmitted.

Fault Settings
On the Fault Settings page we can specify:
Audit Options: This includes the types of events that we want to audit:
Audit Save: When a message associated with a fault is saved.
Audit Successful Resubmit: When a message is successfully resubmitted.
Audit Unsuccessful Resubmit: When the resubmission of a message fails.
Alert Queue Options: Here we can enable or disable the queuing of the notifications generated when a fault message is published to the portal.
Alert Email Options: Here we can enable and configure the service that will actually send e-mail notifications once fault messages are published to the portal. The three most important settings in this section are:
Email Server: The e-mail server that will actually be used to send the e-mails.
Email From Address: The address that will show up as sender in the e-mails sent.
Email XSLT File Absolute Path: The XSLT transformation sheet that will be used to format the e-mails. The ESB Toolkit provides one, but we could customize it or create our own sheet according to our requirements.

Summary
In this article, we discussed the additional features of the ESB Management Portal. We learned about the registry settings, which are used for configuring the UDDI and setting up the e-mail notifications. We also learned how to configure fault settings and how to utilize the Audit Log features.
Resources for Article:
Further resources on this subject: Microsoft Biztalk server 2010 patterns: Operating Biztalk [Article] Setting up a BizTalk Server Environment [Article] Communicating from Dynamics CRM to BizTalk Server [Article]

Ext.NET – Understanding Direct Methods and Direct Events

Packt
08 Aug 2013
4 min read
(For more resources related to this topic, see here.)

How to do it...
The steps to handle events raised by different controls are as follows:
Open the Pack.Ext2.Examples solution.
Press F5 or click on the Start button to run the solution.
Click on the Direct Methods & Events hyperlink. This will run the example code for this recipe.
Familiarize yourself with the code behind and the client-side markup.

How it works...
Applying the [DirectMethod(namespace="ExtNetExample")] attribute to the server-side method GetDateTime(int timeDiff) has exposed this method to our client-side code with the namespace of ExtNetExample, which we prefix to the method name when calling it on the client side. As we can see in the example code, we call this server method in the markup using the Ext.NET button btnDateTime and the code ExtNetExamples.GetDateTime(3). When the call hits the server, we update the Text property of the Ext.NET control lblDateTime, which updates the control related to the property. Adding namespace="ExtNetExample" allows us to neatly group server-side methods and the JavaScript calls in our code. A good notation is CompanyName.ProjectName.BusinessDomain.MethodName. Without applying the namespace attribute, we would access our server-side method using the default namespace of App.direct. So, to call the GetDateTime method without the namespace attribute, we would use App.direct.GetDateTime(3). We can also see how to return a response from a Direct Method to the client-side JavaScript. If a Direct Method returns a value, it is sent back to the success function defined in a configuration object. This configuration object contains a number of functions, properties, and objects. We have dealt with the two most common functions in our example, the success and failure responses. The server-side method GetCar() returns a custom object called Car. If the btnReturnResponse button is clicked on and GetCar() successfully returns a response, we can access the value when Ext.NET calls the JavaScript function named in the success configuration object, CarResponseSuccess. This JavaScript function accepts the response parameter from the method and we can process it accordingly. The response parameter is serialized into JSON, and so object values can be accessed using the JavaScript object notation of object.propertyValue. Note that we alert the FirstRegistered property of the Car object returned. Likewise, if a failure response is received, we call the client-side method CarResponseFailure, alerting the response, which is a string value. There are a number of other properties that form a part of the configuration object, which can be accessed as part of the callback, for example, failure to return a response. Please refer to the Direct Methods Overview example on the Ext.NET examples website (http://examples.ext.net/#/Events/DirectMethods/Overview/). To demonstrate DirectEvent in action, we've declared a button called btnFireEvent and a checkbox called chkFireEvent. Note that each control points to the same DirectEvent method called WhoFiredMe. You'll notice that in the markup we declare the WhoFiredMe method using the OnEvent property of the controls. This means that when the Click event is fired on the btnFireEvent button, or the Change event is fired on the chkFireEvent checkbox, a request to the server is made where we call the WhoFiredMe method. From this, we can get the control that invoked the request via the object sender parameter, and the arguments of the event from the DirectEventArgs e parameter.
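The article does not reproduce the server-side code for WhoFiredMe, but based on the description it would look roughly like the following sketch. Only the signature pattern (an object sender and a DirectEventArgs parameter) is what Ext.NET expects; the page class name and the body are our own illustration:

using Ext.Net;

public partial class DirectMethodsAndEvents : System.Web.UI.Page
{
    // Wired up via OnEvent on both btnFireEvent (Click) and chkFireEvent (Change).
    protected void WhoFiredMe(object sender, DirectEventArgs e)
    {
        // The sender tells us which control raised the event.
        var control = sender as System.Web.UI.Control;
        string id = control != null ? control.ID : "unknown";

        X.Msg.Alert("DirectEvent", "The event was fired by: " + id).Show();
    }
}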
Note that we don't have to decorate the DirectEvent method, WhoFiredMe, with any attributes. Ext.NET takes care of all the plumbing. We just need to specify the method that needs to be called on the server.

There's more...
Raising Direct Methods is far more flexible in terms of being able to specify the parameters you want to send to the server. You also have the ability to send control objects to the server or to client-side functions using the #{controlId} notation. It is generally not a good idea, though, to send the whole control to the server from a Direct Method, as Ext.NET controls can contain references to themselves. Therefore, when Ext.NET encodes the control, it can end up in an infinite loop, and you will end up breaking your code. With a DirectEvent method, you can send extra parameters to the server using the ExtraParams property inside the control's event element. These can then be accessed using the e parameter on the server.

Summary
In this article, we discussed how to connect client-side and server-side code.
Resources for Article:
Further resources on this subject: Working with Microsoft Dynamics AX and .NET: Part 1 [Article] Working with Microsoft Dynamics AX and .NET: Part 2 [Article] Dynamically enable a control (Become an expert) [Article]

Interacting with the User

Packt
08 Aug 2013
23 min read
(For more resources related to this topic, see here.) Creating actions, commands, and handlers The first few releases of the Eclipse framework provided Action as a means of contributing to menu items. These were defined declaratively via actionSets in the plugin.xml file, and many tutorials still reference those today. At the programming level, when creating views, Actions are still used to provide context menus programmatically. They were replaced with commands in Eclipse 3, as a more abstract way of decoupling the operation of a command from its representation in the menu. To connect these two together, a handler is used. E4: Eclipse 4.x uses the commands model, and decouples it further using the @Execute annotation on the handler class. Commands and views are hooked up with entries on the application's model.

Time for action – adding context menus
A context menu can be added to the TimeZoneTableView class and responded to dynamically during the view's creation. The typical pattern for Eclipse 3 applications is to create a hookContextMenu() method, which is used to wire up the context menu operation with displaying the menu. A default implementation can be seen by creating an example view, or one can be created from first principles. Eclipse menus are managed by a MenuManager. This is a specialized subclass of a more general ContributionManager, which looks after a dynamic set of contributions that can be made from other sources. When the menu manager is connected to a control, it responds in the standard ways for the platform for showing the menu (typically a context-sensitive click or shortcut key). Menus can also be displayed in other locations, such as a view's or the workspace's coolbar (toolbar). The same MenuManager approach works in these different locations.
Open the TimeZoneTableView class and go to the createPartControl() method. At the bottom of the method, add a new MenuManager with the ID #PopupMenu and associate it with the viewer's control.
MenuManager manager = new MenuManager("#PopupMenu");
Menu menu = manager.createContextMenu(tableViewer.getControl());
tableViewer.getControl().setMenu(menu);
If the Menu is empty, the MenuManager won't show any content, so this currently has no effect. To demonstrate this, an Action will be added to the Menu. An Action has text (for rendering in the pop-up menu, or the menu at the top of the screen), as well as a state (enabled/disabled, selected) and a behavior. These are typically created as subclasses with (although Action doesn't strictly require it) an implementation of the run() method. Add this to the bottom of the createPartControl() method.
Action deprecated = new Action() {
    public void run() {
        MessageDialog.openInformation(null, "Hello", "World");
    }
};
deprecated.setText("Hello");
manager.add(deprecated);
Run the Eclipse instance, open the Time Zone Table View, and right-click on the table. The Hello menu can be seen, and when selected, an informational dialog is shown.

What just happened?
The MenuManager (with the ID #PopupMenu) was bound to the control, which means when that particular control's context-sensitive menu is invoked, the manager will be able to ask to display a menu. The manager is associated with a single Menu object (which is also stamped on the underlying control itself) and is responsible for updating the status of the menu. Actions are deprecated.
They are included here since examples on the Internet may still refer to them, but it's important to note that while they still work, the way to build user interfaces is with commands and handlers, shown in the next section. When the menu is shown, the actions that the menu contains are rendered in the order in which they are added. Actions are usually subclasses that implement a run() method, which performs a certain operation, and have text which is displayed. Action instances also have other metadata, such as whether they are enabled or disabled. Although it is tempting to override the accessor methods, this doesn't work—the setters cause an event to be sent out to registered listeners, which causes side effects, such as updating any displayed controls.

Time for action – creating commands and handlers
Since the Action class is deprecated, the supported mechanism is to create a command, a handler, and a menu to display the command in the menu bar.
Open the plug-in manifest for the project, or double-click on the plugin.xml file. Edit the source on the plugin.xml tab, and add a definition of a Hello command as follows:
<extension point="org.eclipse.ui.commands">
  <command name="Hello" description="Says Hello World"
           id="com.packtpub.e4.clock.ui.command.hello"/>
</extension>
This creates a command, which is just an identifier and a name. To specify what it does, it must be connected to a handler, which is done by adding the following extension:
<extension point="org.eclipse.ui.handlers">
  <handler class="com.packtpub.e4.clock.ui.handlers.HelloHandler"
           commandId="com.packtpub.e4.clock.ui.command.hello"/>
</extension>
The handler joins the processing of the command to a class that implements IHandler, typically AbstractHandler. Create a class HelloHandler in a new com.packtpub.e4.clock.ui.handlers package, which extends AbstractHandler (from the org.eclipse.core.commands package).
public class HelloHandler extends AbstractHandler {
    public Object execute(ExecutionEvent event) {
        MessageDialog.openInformation(null, "Hello", "World");
        return null;
    }
}
The command's ID com.packtpub.e4.clock.ui.command.hello is used to refer to it from menus or other locations. To place the contribution in an existing menu structure, it needs to be specified by its locationURI, which is a URL that begins with menu:, such as menu:window?after=additions or menu:file?after=additions. To place it in the Help menu, add this to the plugin.xml file.
<extension point="org.eclipse.ui.menus">
  <menuContribution allPopups="false" locationURI="menu:help?after=additions">
    <command commandId="com.packtpub.e4.clock.ui.command.hello"
             label="Hello" style="push">
    </command>
  </menuContribution>
</extension>
Run the Eclipse instance, and there will be a Hello menu item under the Help menu. When selected, it will pop up the Hello World message. If the Hello menu is disabled, verify that the handler extension point is defined, which connects the command to the handler class.
This allows a generic command (such as Copy) to be overridden by specific views. Unlike the traditional command design pattern, which provides implementation as subclasses, the command in Eclipse 3.x uses a final class and then a retargetable IHandler to perform the actual execution. E4: In Eclipse 4.x, the concepts of commands and handlers are used extensively to provide the components of the user interface. The key difference is in their definition; for Eclipse 3.x, this typically occurs in the plugin.xml file, whereas in E4 it is part of the application model. In the example, a specific handler was defined for the command, which is valid in all contexts. The handler's class is the implementation; the command ID is the reference. The org.eclipse.ui.menus extension point allows menuContributions to be added anywhere in the user interface. To address where the menu can be contributed to, the location URIobject defines where the menu item can be created. The syntax for the URI is as follows: menu: Menus begin with the menu: protocol (can also be toolbar:or popup:) identifier: This can be a known short name (such as file, window, and help), the global menu (org.eclipse.ui.main.menu), the global toolbar (org.eclipse.ui.main.toolbar), a view identifier (org.eclipse. ui.views.ContentOutline), or an ID explicitly defined in a pop-up menu's registerContextMenu()call. ?after(or before)=key: This is the placement instruction to put this after or before other items; typically additions is used as an extensible location for others to contribute to. The locationURIallows plug-ins to contribute to other menus, regardless of where they are ultimately located. Note, that if the handler implements the IHandler interface directly instead of subclassing AbstractHandler, the isEnabled() method will need to be overridden as otherwise the command won't be enabled, and the menu won't have any effect. Time for action – binding commands to keys To hook up the command to a keystroke a binding is used. This allows a key (or series of keys) to be used to invoke the command, instead of only via the menu. Bindings are set up via an extension point org.eclipse.ui.bindings, and connect a sequence of keystrokes to a command ID. Open the plugin.xml in the clock.uiproject. In the plugin.xml tab, add the following: <extension point="org.eclipse.ui.bindings"> <key commandId="com.packtpub.e4.clock.ui.command.hello" sequence="M1+9" contextId="org.eclipse.ui.contexts.window" schemeId= "org.eclipse.ui.defaultAcceleratorConfiguration"/></extension> Run the Eclipse instance, and press Cmd+ 9(for OS X) or Ctrl+ 9 (for Windows/Linux). The same Hello dialog should be displayed, as if it was shown from the menu. The same keystroke should be displayed in the Help menu. What just happened? The M1 key is the primary meta key, which is Cmd on OS X and Ctrl on Windows/Linux. This is typically used for the main operations; for example M1+ C is copy and M1+ V is paste on all systems. The sequence notation M1+ 9 is used to indicate pressing both keys at the same time. The command that gets invoked is referenced by its commandId. This may be defined in the same plug-in, but does not have to be; it is possible for one application to provide a set of commands and another plug-in to provide keystrokes that bind them. It is also possible to set up a sequence of key presses; for example, M1+ 9 8 7would require pressing Cmd+ 9 or Ctrl+ 9 followed by 8 and then 7 before the command is executed. 
This allows a set of keystrokes to be used to invoke a command; for example, it's possible to emulate an Emacs quit operation by binding Ctrl + X Ctrl + C to the quit command. Other modifier keys include M2 (Shift), M3 (Alt/Option), and M4 (Ctrl on OS X). It is possible to use CTRL, SHIFT, or ALT as long names, but the meta names are preferred, since M1 tends to be bound to different keys on different operating systems. The non-modifier keys themselves can either be single characters (A to Z), numbers (0 to 9), or one of a set of longer-named key codes, such as F12, ARROW_UP, TAB, and PAGE_UP. Certain common variations are allowed; for example, ESC/ESCAPE, ENTER/RETURN, and so on. Finally, bindings are associated with a scheme, which in the default case should be org.eclipse.ui.defaultAcceleratorConfiguration. Schemes exist to allow the user to switch in and out of keybindings and replace them with others, which is how tools like "vrapper" (a vi emulator) and the Emacs bindings that come with Eclipse by default can be used. (This can be changed via the Window | Preferences | Keys menu in Eclipse.)

Time for action – changing contexts
The context is the location in which this binding is valid. For commands that are visible everywhere—typically the kind of options in the default menu—they can be associated with the org.eclipse.ui.contexts.window context. If the command should be invoked from dialogs as well, then the org.eclipse.ui.contexts.dialogAndWindow context would be used instead.
Open the plugin.xml file of the clock.ui project. To enable the command only for Java editors, go to the plugin.xml tab, and modify the contextId as follows:
<extension point="org.eclipse.ui.bindings">
  <key commandId="com.packtpub.e4.clock.ui.command.hello"
       sequence="M1+9"
       contextId="org.eclipse.jdt.ui.javaEditorScope"
       schemeId="org.eclipse.ui.defaultAcceleratorConfiguration"/>
</extension>
Run the Eclipse instance, and create a Java project, a test Java class, and an empty text file. Open both of these in editors. When the focus is on the Java editor, the Cmd+9 or Ctrl+9 operation will run the command, but when the focus is on the text editor, the keybinding will have no effect. Unfortunately, it also highlights the fact that restricting the keybinding to the Java editor scope doesn't disable the underlying command. If there is no change in behavior, try cleaning the workspace of the test instance at launch, by going to the Run | Run... menu and choosing Clear on the workspace. This is sometimes necessary when making changes to the plugin.xml file, as some extensions are cached and may lead to strange behavior.

What just happened?
Context scopes allow bindings to be valid for certain situations, such as when a Java editor is open. This allows the same keybinding to be used for different situations, such as a Format operation—which may have a different effect in a Java editor than an XML editor, for instance. Since scopes are hierarchical, they can be specifically targeted for the contexts in which they may be used. The Java editor context is a subcontext of the general text editor, which in turn is a subcontext of the window context, which in turn is a subcontext of the dialogAndWindow context.
The available contexts can be seen by editing the plugin.xml file in the plug-in editor; in the extensions tab, the binding shows an editor window with a form. Clicking on the Browse… button next to the contextId brings up a dialog, which presents the available contexts. It's also possible to find out all the contexts programmatically or via the running OSGi instance, by navigating to Window | Show View | Console, then using New Host OSGi Console in the drop-down menu, and then running the following code snippet:
osgi> pt -v org.eclipse.ui.contexts
Extension point: org.eclipse.ui.contexts [from org.eclipse.ui]
Extension(s):
-------------------
null [from org.eclipse.ant.ui]
  <context>
    name = Editing Ant Buildfiles
    description = Editing Ant Buildfiles Context
    parentId = org.eclipse.ui.textEditorScope
    id = org.eclipse.ant.ui.AntEditorScope
  </context>
null [from org.eclipse.compare]
  <context>
    name = Comparing in an Editor
    description = Comparing in an Editor
    parentId = org.eclipse.ui.contexts.window
    id = org.eclipse.compare.compareEditorScope
  </context>

Time for action – enabling and disabling the menu's items
The previous section showed how to hide or show a specific keybinding depending on the open editor type. However, it doesn't stop the command being called via the menu, or stop it showing up in the menu itself. Instead of just hiding the keybinding, the menu can be hidden as well by adding a visibleWhen block to the command. The expressions framework provides a number of variables, including activeContexts, which contains a list of the active contexts at the time. Since many contexts can be active simultaneously, activeContexts is a list (for example, [dialogAndWindow, window, textEditor, javaEditor]). So, to find an entry (in effect, a contains operation), an iterate operator with the equals expression is used.
Open up the plugin.xml file, and update the Hello command by adding a visibleWhen expression.
<extension point="org.eclipse.ui.menus">
  <menuContribution allPopups="false" locationURI="menu:help?after=additions">
    <command commandId="com.packtpub.e4.clock.ui.command.hello"
             label="Hello" style="push">
      <visibleWhen>
        <with variable="activeContexts">
          <iterate operator="or">
            <equals value="org.eclipse.jdt.ui.javaEditorScope"/>
          </iterate>
        </with>
      </visibleWhen>
    </command>
  </menuContribution>
</extension>
Run the Eclipse instance, and verify that the menu is hidden until a Java editor is opened. If this behavior is not seen, run the Eclipse application with the clean argument to clear the workspace. After clearing, it will be necessary to create a new Java project with a Java class, as well as an empty text file, to verify that the menu's visibility is correct.

What just happened?
Menus have a visibleWhen guard that is evaluated when the menu is shown. If it is false, the menu is hidden. The expressions syntax is based on nested XML elements with certain conditions. For example, an <and> block is true if all of its children are true, whereas an <or> block is true if one of its children is true. Variables can also be used with a property test using a combination of a <with> block (which binds the specified variable to the stack) and an <equals> block or other comparison. In the case of variables that have lists, an <iterate> can be used to step through elements using either operator="or" or operator="and" to dynamically calculate enablement. To find out if a list contains an element, a combination of <iterate> and <equals> operators is the standard pattern.
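As an aside, these expression elements compose. For example, if we wanted the menu to show up for either the Java editor scope or the plain text editor scope, the visibleWhen body could nest an <or> inside the iterate. This variant is our own illustration rather than a step in the example (the text editor scope ID is taken from the console output above):
<visibleWhen>
  <with variable="activeContexts">
    <iterate operator="or">
      <or>
        <equals value="org.eclipse.jdt.ui.javaEditorScope"/>
        <equals value="org.eclipse.ui.textEditorScope"/>
      </or>
    </iterate>
  </with>
</visibleWhen>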
There are a number of variables that can be used in tests; they include the following:

activeContexts: List of context IDs that are active at the time
activeShell: The active shell (dialog or window)
activeWorkbenchWindow: The active window
activeEditor: The current or last active editor
activePart: The active part (editor or view)
selection: The current selection
org.eclipse.core.runtime.Platform: The Platform object

The Platform object is useful for performing dynamic tests using the test element, such as the following:

<test value="ACTIVE" property="org.eclipse.core.runtime.bundleState" args="org.eclipse.core.expressions"/>
<test property="org.eclipse.core.runtime.isBundleInstalled" args="org.eclipse.core.expressions"/>

Knowing if a bundle is installed is often useful, but it's better to only enable functionality if a bundle is started (or in OSGi terminology, ACTIVE). As a result, use of isBundleInstalled has largely been replaced by the bundleState=ACTIVE test.

Time for action – reusing expressions

Although it's possible to copy and paste expressions between the places where they are used, it is preferable to re-use an identical expression.

Declare an expression using the expression definitions extension point, by opening the plugin.xml file of the clock.ui project:

<extension point="org.eclipse.core.expressions.definitions">
 <definition id="when.hello.is.active">
 <with variable="activeContexts">
 <iterate operator="or">
 <equals value="org.eclipse.jdt.ui.javaEditorScope"/>
 </iterate>
 </with>
 </definition>
</extension>

If defined via the extension wizard, it will prompt to add a dependency on the org.eclipse.core.expressions bundle. This isn't strictly necessary for this example to work.

To use the definition, the enablement expression needs to refer to it by reference; replace the inline <with> block added in the previous section with a <reference> element:

<extension point="org.eclipse.ui.menus">
 <menuContribution allPopups="false" locationURI="menu:help?after=additions">
 <command commandId="com.packtpub.e4.clock.ui.command.hello" label="Hello" style="push">
 <visibleWhen>
 <reference definitionId="when.hello.is.active"/>
 </visibleWhen>
 </command>
 </menuContribution>
</extension>

Now that the reference has been defined, it can be used to modify the handler as well, so that the handler and menu become active and visible together. Add the following to the Hello handler in the plugin.xml file:

<extension point="org.eclipse.ui.handlers">
 <handler class="com.packtpub.e4.clock.ui.handlers.Hello"
 commandId="com.packtpub.e4.clock.ui.command.hello">
 <enabledWhen>
 <reference definitionId="when.hello.is.active"/>
 </enabledWhen>
 </handler>
</extension>

Run the Eclipse application and exactly the same behavior will occur; but should the enablement need to change, it can now be done in one place.

What just happened?

The org.eclipse.core.expressions extension point defined a virtual condition that can be evaluated when the user's context changes, so both the menu and the handler can be made visible and enabled at the same time. The reference was bound in the enabledWhen condition for the handler, and the visibleWhen condition for the menu. Since references can be used anywhere, expressions can also be defined in terms of other expressions. As long as the expressions aren't recursive, they can be built up in any manner.

Time for action – contributing commands to pop-up menus

It's useful to be able to add contributions to pop-up menus so that they can be used in different places.
Fortunately, this can be done fairly easily with the menuContribution element and a combination of enablement tests. This allows the Action introduced in the first part of this article to be replaced with a more generic command and handler pairing. There is a deprecated extension point—which still works in Eclipse 4.2 today—called objectContribution, which is a single specialized hook for contributing a pop-up menu to an object. This has been deprecated for some time, but older tutorials or examples may still refer to it.

Open the TimeZoneTableView class and add the hookContextMenu() method as follows:

private void hookContextMenu(Viewer viewer) {
 MenuManager manager = new MenuManager("#PopupMenu");
 Menu menu = manager.createContextMenu(viewer.getControl());
 viewer.getControl().setMenu(menu);
 getSite().registerContextMenu(manager, viewer);
}

Add the same hookContextMenu() method to the TimeZoneTreeView class.

In the TimeZoneTreeView class, at the end of the createPartControl() method, call hookContextMenu(), passing in the tree viewer.

In the TimeZoneTableView class, at the end of the createPartControl() method, replace the existing menu and action code:

MenuManager manager = new MenuManager("#PopupMenu");
Menu menu = manager.createContextMenu(tableViewer.getControl());
tableViewer.getControl().setMenu(menu);
Action deprecated = new Action() {
 public void run() {
 MessageDialog.openInformation(null, "Hello", "World");
 }
};
deprecated.setText("Hello");
manager.add(deprecated);

with a call to hookContextMenu() instead:

hookContextMenu(tableViewer);

Running the Eclipse instance now and showing the menu results in nothing being displayed, because no menu items have been added to it yet.

Create a command and a handler for Show the Time:

<extension point="org.eclipse.ui.commands">
 <command name="Show the Time" description="Shows the Time"
 id="com.packtpub.e4.clock.ui.command.showTheTime"/>
</extension>
<extension point="org.eclipse.ui.handlers">
 <handler class="com.packtpub.e4.clock.ui.handlers.ShowTheTime"
 commandId="com.packtpub.e4.clock.ui.command.showTheTime"/>
</extension>

Create a class ShowTheTime, in the com.packtpub.e4.clock.ui.handlers package, which extends org.eclipse.core.commands.AbstractHandler, to show the time in a specific time zone:

public class ShowTheTime extends AbstractHandler {
 public Object execute(ExecutionEvent event) {
 ISelection sel = HandlerUtil.getActiveWorkbenchWindow(event)
 .getSelectionService().getSelection();
 if (sel instanceof IStructuredSelection && !sel.isEmpty()) {
 Object value = ((IStructuredSelection) sel).getFirstElement();
 if (value instanceof TimeZone) {
 SimpleDateFormat sdf = new SimpleDateFormat();
 sdf.setTimeZone((TimeZone) value);
 MessageDialog.openInformation(null, "The time is",
 sdf.format(new Date()));
 }
 }
 return null;
 }
}

Finally, to hook it up, a menu needs to be added to the special locationURI popup:org.eclipse.ui.popup.any:

<extension point="org.eclipse.ui.menus">
 <menuContribution allPopups="false"
 locationURI="popup:org.eclipse.ui.popup.any">
 <command label="Show the Time" style="push"
 commandId="com.packtpub.e4.clock.ui.command.showTheTime">
 <visibleWhen checkEnabled="false">
 <with variable="selection">
 <iterate ifEmpty="false">
 <adapt type="java.util.TimeZone"/>
 </iterate>
 </with>
 </visibleWhen>
 </command>
 </menuContribution>
</extension>

Run the Eclipse instance, and open the Time Zone Table view or the Time Zone Tree view. Right-click on a TimeZone (that is, one of the leaves of the tree or one of the rows of the table), and the Show the Time command will be displayed.
Select the command and a dialog should show the time.

What just happened?

The views, together with the knowledge of how to wire up commands covered in this article, provide a unified means of adding commands based on the selected object type. This approach of registering commands is powerful, because any time a time zone is exposed as a selection in the future, it will automatically have a Show the Time menu added to it. The commands define a generic operation, and handlers bind those commands to implementations. The context-sensitive menu is provided by the pop-up menu extension point using the locationURI popup:org.eclipse.ui.popup.any. This allows the menu to be added to any pop-up menu that uses a MenuManager and whose selection contains a TimeZone. The MenuManager is responsible for listening to the mouse gestures to show a menu, and filling it with details when it is shown. In the example, the command was enabled when the object was an instance of a TimeZone, and also if it could be adapted to a TimeZone. This would allow another object type (say, a contact card) to have an adapter to convert it to a TimeZone, and thus show the time in that contact's location.

Have a go hero – using view menus and toolbars

The way to add a view menu is similar to adding a pop-up menu; the locationURI used is the view's ID rather than that of the menu itself. Add a Show the Time menu to the TimeZone view as a view menu. Another way of adding the menu is as a toolbar contribution, which appears as an icon in the main Eclipse window. Add the Show the Time icon by adding it to the global toolbar instead. To facilitate testing of views, add a menu item that allows you to show the TimeZone views with PlatformUI.getWorkbench().getActiveWorkbenchWindow().getActivePage().showView(id).

Jobs and progress

Since the user interface is single-threaded, a command that takes a long time will block the user interface from being redrawn or from processing input. As a result, it is necessary to run long-running operations in a background thread to prevent the UI from hanging. Although the core Java library contains java.util.Timer, the Eclipse Jobs API provides a mechanism to both run jobs and report progress. It also allows jobs to be grouped together and paused or joined as a whole.
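As an illustration of the shape of the API, the following is a minimal, hypothetical sketch of a background job that reports progress and honors cancellation. Job, IProgressMonitor, IStatus, and Status are the standard org.eclipse.core.runtime classes; the work being simulated here is made up purely for the example:

Job job = new Job("Counting time zones") {
 @Override
 protected IStatus run(IProgressMonitor monitor) {
 // Ten notional units of work for the progress reporting.
 monitor.beginTask("Scanning time zones", 10);
 for (int i = 0; i < 10; i++) {
 // Stop promptly if the user cancels the job.
 if (monitor.isCanceled()) {
 return Status.CANCEL_STATUS;
 }
 // ... perform one unit of background work here ...
 monitor.worked(1);
 }
 monitor.done();
 return Status.OK_STATUS;
 }
};
job.schedule(); // runs asynchronously on a worker thread

Because the run method executes on a worker thread, any updates to SWT widgets from inside it still need to be posted back to the UI thread, for example with Display.asyncExec.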
article-image-net-45-parallel-extensions-async
Packt
07 Aug 2013
13 min read
Save for later

.NET 4.5 Parallel Extensions – Async

Packt
07 Aug 2013
13 min read
(For more resources related to this topic, see here.) Creating an async method The TAP is a new pattern for asynchronous programming in .NET Framework 4.5. It is based on a task, but in this case a task doesn't represent work which will be performed on another thread. In this case, a task is used to represent arbitrary asynchronous operations. Let's start learning how async and await work by creating a Windows Presentation Foundation (WPF) application that accesses the web using HttpClient. This kind of network access is ideal for seeing TAP in action. The application will get the contents of a classic book from the web, and will provide a count of the number of words in the book. How to do it… Let's go to Visual Studio 2012 and see how to use the async and await keywords to maintain a responsive UI by doing the web communications asynchronously. Start a new project using the WPF Application project template and assign WordCountAsync as the Solution name. Begin by opening MainWindow.xaml and adding the following XAML to create a simple user interface containing Button and TextBlock: <Window x:Class="WordCountAsync.MainWindow" Title="WordCountAsync" Height="350" Width="525"> <Grid> <Button x:Name="StartButton" Content="Start" HorizontalAlignment="Left" Margin="219,195,0,0" VerticalAlignment="Top" Width="75" RenderTransformOrigin="-0.2,0.45" Click="StartButton_Click"/> <TextBlock x:Name="TextResults" HorizontalAlignment="Left" Margin="60,28,0,0" TextWrapping="Wrap" VerticalAlignment="Top" Height="139" Width="411"/> </Grid> </Window> Next, open up MainWindow.xaml.cs. Go to the project references and add a reference to System.Net.Http. Add the following using directives to the top of your MainWindow class: using System; using System.Linq; using System.Net.Http; using System.Threading.Tasks; using System.Windows; At the top of the MainWindow class, add a character array constant that will be used to split the contents of the book into a word array. char[] delimiters = { ' ', ',', '.', ';', ':', '-', '_', '/', '\u000A' }; Add a button click event for the StartButton and add the async modifier to the method signature to indicate that this will be an async method. Please note that async methods that return void are normally only used for event handlers, and should be avoided. private async void StartButton_Click(object sender, RoutedEventArgs e) { } Next, let's create an async method called GetWordCountAsync that returns Task<int>. This method will create HttpClient and call its GetStringAsync method to download the book contents as a string. It will then use the Split method to split the string into a wordArray. We can return the count of the wordArray as our return value. public async Task<int> GetWordCountAsync() { TextResults.Text += "Getting the word count for Origin of Species...\n"; var client = new HttpClient(); var bookContents = await client.GetStringAsync(@"http://www.gutenberg.org/files/2009/2009.txt"); var wordArray = bookContents.Split(delimiters, StringSplitOptions.RemoveEmptyEntries); return wordArray.Count(); } Finally, let's complete the implementation of our button click event. The Click event handler will just call GetWordCountAsync with the await keyword and display the results to TextBlock. private async void StartButton_Click(object sender, RoutedEventArgs e) { var result = await GetWordCountAsync(); TextResults.Text += String.Format("Origin of Species word count: {0}", result); } In Visual Studio 2012, press F5 to run the project.
Click on the Start button, and your application should appear as shown in the following screenshot: How it works… In the TAP, asynchronous methods are marked with an async modifier. The async modifier on a method does not mean that the method will be scheduled to run asynchronously on a worker thread. It means that the method contains control flow that involves waiting for the result of an asynchronous operation, and will be rewritten by the compiler to ensure that the asynchronous operation can resume this method at the right spot. Let me try to put this a little more simply. When you add the async modifier to a method, it indicates that the method will wait on asynchronous code to complete. This is done with the await keyword. The compiler actually takes the code that follows the await keyword in an async method and turns it into a continuation that will run after the result of the async operation is available. In the meantime, the method is suspended, and control returns to the method's caller. If you add the async modifier to a method, and then don't await anything, it won't cause an error. The method will simply run synchronously. An async method can have one of three return types: void, Task, or Task<TResult>. As mentioned before, a task in this context doesn't mean that this is something that will execute on a separate thread. In this case, task is just a container for the asynchronous work, and in the case of Task<TResult>, it is a promise that a result value of type TResult will show up after the asynchronous operation completes. In our application, we use the async keyword to mark the button click event handler as asynchronous, and then we wait for the GetWordCountAsync method to complete by using the await keyword. private async void StartButton_Click(object sender, RoutedEventArgs e) { StartButton.IsEnabled = false; var result = await GetWordCountAsync(); TextResults.Text += String.Format("Origin of Species word count: {0}", result); StartButton.IsEnabled = true; } The code that follows the await keyword, in this case the same line of code that updates TextBlock, is turned by the compiler into a continuation that will run after the integer result is available. If the Click event is fired again while this asynchronous task is in progress, another asynchronous task is created and awaited. To prevent this, it is a common practice to disable the button that is clicked. It is a convention to name an asynchronous method with an Async postfix, as we have done with GetWordCountAsync. Handling Exceptions in asynchronous code So how would you add Exception handling to code that is executed asynchronously? In previous asynchronous patterns, this was very difficult to achieve. In C# 5.0 it is much more straightforward because you just have to wrap the asynchronous function call with a standard try/catch block. On the surface this sounds easy, and it is, but there is more going on behind the scenes that will be explained right after we build our next example application. For this recipe, we will return to our classic books word count scenario, and we will be handling an Exception thrown by HttpClient when it tries to get the book contents using an incorrect URL. How to do it… Let's build another WPF application and take a look at how to handle Exceptions when something goes wrong in one of our asynchronous methods. Start a new project using the WPF Application project template and assign AsyncExceptions as the Solution name.
Begin by opening MainWindow.xaml and adding the following XAML to create a simple user interface containing Button and a TextBlock: <Window x:Class="WordCountAsync.MainWindow" Title="WordCountAsync" Height="350" Width="525"> <Grid> <Button x:Name="StartButton" Content="Start" HorizontalAlignment="Left" Margin="219,195,0,0" VerticalAlignment="Top" Width="75" RenderTransformOrigin="-0.2,0.45" Click="StartButton_Click"/> <TextBlock x:Name="ResultsTextBlock" HorizontalAlignment="Left" Margin="60,28,0,0" TextWrapping="Wrap" VerticalAlignment="Top" Height="139" Width="411"/> </Grid> </Window> Next, open up MainWindow.xaml.cs. Go to the Project Explorer, right-click on References, click on Framework from the menu on the left side of the Reference Manager, and then add a reference to System.Net.Http. Add the following using directives to the top of your MainWindow class: using System; using System.Linq; using System.Net.Http; using System.Threading.Tasks; using System.Windows; At the top of the MainWindow class, add a character array constant that will be used to split the contents of the book into a word array. char[] delimiters = { ' ', ',', '.', ';', ':', '-', '_', '/', '\u000A' }; Now let's create our GetWordCountAsync method. This method will be very similar to the last recipe, but it will be trying to access the book on an incorrect URL. The asynchronous code will be wrapped in a try/catch block to handle Exception. We will also use a finally block to dispose of HttpClient. public async Task<int> GetWordCountAsync() { ResultsTextBlock.Text += "Getting the word count for Origin of Species...\n"; var client = new HttpClient(); try { var bookContents = await client.GetStringAsync(@"http://www.gutenberg.org/files/2009/No_Book_Here.txt"); var wordArray = bookContents.Split(delimiters, StringSplitOptions.RemoveEmptyEntries); return wordArray.Count(); } catch (Exception ex) { ResultsTextBlock.Text += String.Format("An error has occurred: {0} \n", ex.Message); return 0; } finally { client.Dispose(); } } Finally, let's create the Click event handler for our StartButton. This is pretty much the same as the last recipe, just wrapped in a try/catch block. Don't forget to add the async modifier to the method signature. private async void StartButton_Click(object sender, RoutedEventArgs e) { try { var result = await GetWordCountAsync(); ResultsTextBlock.Text += String.Format("Origin of Species word count: {0}", result); } catch(Exception ex) { ResultsTextBlock.Text += String.Format("An error has occurred: {0} \n", ex.Message); } } Now, in Visual Studio 2012, press F5 to run the project. Click on the Start button. Your application should appear as shown in the following screenshot: How it works… Wrapping your asynchronous code in a try/catch block is pretty easy. In fact, it hides some of the complex work Visual Studio 2012 is doing for us. To understand this, you need to think about the context in which your code is running. When the TAP is used in Windows Forms or WPF applications, there's already a context that the code is running in, such as the message loop UI thread. When async calls are made in those applications, the awaited code goes off to do its work asynchronously and the async method exits back to its caller. In other words, the program execution returns to the message loop UI thread. Console applications don't have the concept of a context. When the code hits an awaited call inside the try block, it will exit back to its caller, which in this case is Main.
If there is no more code after the awaited call, the application ends without the async method ever finishing. To alleviate this issue, Microsoft included an async-compatible context with the TAP that is used for Console apps or unit test apps to prevent this inconsistent behavior. This new context is called GeneralThreadAffineContext. Do you really need to understand these context issues to handle async Exceptions? No, not really. That's part of the beauty of the Task-based Asynchronous Pattern. Cancelling an asynchronous operation In .NET 4.5, asynchronous operations can be cancelled in the same way that parallel tasks can be cancelled, by passing in CancellationToken and calling the Cancel method on CancellationTokenSource. In this recipe, we are going to create a WPF application that gets the contents of a classic book over the web and performs a word count. This time though we are going to set up a Cancel button that we can use to cancel the async operation if we don't want to wait for it to finish. How to do it… Let's create a WPF application to show how we can add cancellation to our asynchronous methods. Start a new project using the WPF Application project template and assign AsyncCancellation as the Solution name. Begin by opening MainWindow.xaml and adding the following XAML to create our user interface. In this case, the UI contains TextBlock, StartButton, and CancelButton. <Window x:Class="AsyncCancellation.MainWindow" Title="AsyncCancellation" Height="400" Width="599"> <Grid Width="600" Height="400"> <Button x:Name="StartButton" Content="Start" HorizontalAlignment="Left" Margin="142,183,0,0" VerticalAlignment="Top" Width="75" RenderTransformOrigin="-0.2,0.45" Click="StartButton_Click"/> <Button x:Name="CancelButton" Content="Cancel" HorizontalAlignment="Left" Margin="379,185,0,0" VerticalAlignment="Top" Width="75" Click="CancelButton_Click"/> <TextBlock x:Name="TextResult" HorizontalAlignment="Left" Margin="27,24,0,0" TextWrapping="Wrap" VerticalAlignment="Top" Height="135" Width="540"/> </Grid> </Window> Next, open up MainWindow.xaml.cs, click on the Project Explorer, and add a reference to System.Net.Http. Add the following using directives to the top of your MainWindow class: using System; using System.Linq; using System.Net.Http; using System.Threading; using System.Threading.Tasks; using System.Windows; At the top of the MainWindow class, add a character array constant that will be used to split the contents of the book into a word array. char[] delimiters = { ' ', ',', '.', ';', ':', '-', '_', '/', '\u000A' }; Next, let's create the GetWordCountAsync method. This method is very similar to the method explained before. It needs to be marked as asynchronous with the async modifier and it returns Task<int>. This time however, the method takes a CancellationToken parameter. We also need to use the GetAsync method of HttpClient instead of the GetStringAsync method, because the former supports cancellation, whereas the latter does not. We will add a small delay in the method so we have time to cancel the operation before the download completes.
public async Task<int> GetWordCountAsync(CancellationToken ct) { TextResult.Text += "Getting the word count for Origin of Species...\n"; var client = new HttpClient(); await Task.Delay(500); try { HttpResponseMessage response = await client.GetAsync(@"http://www.gutenberg.org/files/2009/2009.txt", ct); var words = await response.Content.ReadAsStringAsync(); var wordArray = words.Split(delimiters, StringSplitOptions.RemoveEmptyEntries); return wordArray.Count(); } finally { client.Dispose(); } } Now, let's create the Click event handler for our CancelButton. This method just needs to check if CancellationTokenSource is null, and if not, it calls the Cancel method. private void CancelButton_Click(object sender, RoutedEventArgs e) { if (cts != null) { cts.Cancel(); } } Ok, let's finish up by adding a Click event handler for StartButton. This method is the same as explained before, except that it creates the CancellationTokenSource (stored in a cts field of the MainWindow class) and also has a catch block that specifically handles OperationCanceledException. Don't forget to mark the method with the async modifier. private async void StartButton_Click(object sender, RoutedEventArgs e) { cts = new CancellationTokenSource(); try { var result = await GetWordCountAsync(cts.Token); TextResult.Text += String.Format("Origin of Species word count: {0}", result); } catch (OperationCanceledException) { TextResult.Text += "The word count operation was cancelled.\n"; } catch (Exception ex) { TextResult.Text += String.Format("An error has occurred: {0} \n", ex.Message); } } In Visual Studio 2012, press F5 to run the project. Click on the Start button, then the Cancel button. Your application should appear as shown in the following screenshot: How it works… Cancellation is an aspect of user interaction that you need to consider to build a professional async application. In this example, we implemented cancellation by using a Cancel button, which is one of the most common ways to surface cancellation functionality in a GUI application. In this recipe, cancellation follows a very common flow. The caller (the start button click event handler) creates a CancellationTokenSource object. private async void StartButton_Click(object sender, RoutedEventArgs e) { cts = new CancellationTokenSource(); ... } The caller calls a cancelable method, and passes CancellationToken from CancellationTokenSource (CancellationTokenSource.Token). public async Task<int> GetWordCountAsync(CancellationToken ct) { ... HttpResponseMessage response = await client.GetAsync(@"http://www.gutenberg.org/files/2009/2009.txt", ct); ... } The cancel button click event handler requests cancellation using the CancellationTokenSource object (CancellationTokenSource.Cancel()). private void CancelButton_Click(object sender, RoutedEventArgs e) { if (cts != null) { cts.Cancel(); } } The task acknowledges the cancellation by throwing OperationCanceledException, which we handle in a catch block in the start button click event handler.
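In this recipe the cancellation is observed inside HttpClient.GetAsync, which monitors the token for us. When the cancelable work is code of our own, the token has to be checked explicitly. The following is a small illustrative sketch, not part of the recipe, showing how a CPU-bound version of the word count might acknowledge cancellation; ThrowIfCancellationRequested is the standard CancellationToken API, while the method itself is hypothetical: public Task<int> CountWordsAsync(string text, CancellationToken ct) { return Task.Run(() => { int count = 0; foreach (var word in text.Split(delimiters, StringSplitOptions.RemoveEmptyEntries)) { // Acknowledge a pending cancellation request by throwing // OperationCanceledException at a safe point in the loop. ct.ThrowIfCancellationRequested(); count++; } return count; }, ct); } The caller would await this method inside the same try/catch structure as before, with the OperationCanceledException catch block reporting that the operation was cancelled.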

article-image-classification-and-regression-trees
Packt
02 Aug 2013
23 min read
Save for later

Classification and Regression Trees

Packt
02 Aug 2013
23 min read
(For more resources related to this topic, see here.) Recursive partitions The name of the library package rpart, shipped along with R, stands for Recursive Partitioning. The package was first created by Terry M Therneau and Beth Atkinson, and is currently maintained by Brian Ripley. We will first have a peek at what recursive partitions are. A complex and contrived relationship is generally not identifiable by linear models. In the previous chapter, we saw the extensions of the linear models in piecewise, polynomial, and spline regression models. It is also well known that if the order of a model is larger than 4, then interpretation and usability of the model become more difficult. We consider a hypothetical dataset, where we have two classes for the output Y and two explanatory variables in X1 and X2. The two classes are indicated by filled-in green circles and red squares. First, we will focus only on the left display of Figure 1: A complex classification dataset with partitions, as it is the actual depiction of the data. At the outset, it is clear that a linear model is not appropriate, as there is quite an overlap of the green and red indicators. Now, there is a clear demarcation of the classification problem according to whether X1 is greater than 6 or not. In the area on the left side of X1=6, the mid-third region contains a majority of green circles and the rest are red squares. The red squares are predominantly identifiable accordingly, as the X2 values are either lesser than or equal to 3 or greater than 6. The green circles are the majority values in the region of X2 being greater than 3 and lesser than 6. A similar story can be built for the points on the right side of X1 greater than 6. Here, we first partitioned the data according to X1 values, and then in each of the partitioned regions, we obtained partitions according to X2 values. This is the act of recursive partitioning. Figure 1: A complex classification dataset with partitions Let us obtain the preceding plot in R. Time for action – partitioning the display plot We first visualize the CART_Dummy dataset and then look in the next subsection at how CART gets the patterns, which are believed to exist in the data. Obtain the dataset CART_Dummy from the RSADBE package by using data(CART_Dummy). Convert the binary output Y to a factor variable, and attach the data frame with CART_Dummy$Y <- as.factor(CART_Dummy$Y). attach(CART_Dummy) In Figure 1: A complex classification dataset with partitions, the red squares refer to 0 and the green circles to 1. Initialize the graphics window for the two plots by using par(mfrow=c(1,2)). Create a blank scatter plot: plot(c(0,12),c(0,10),type="n",xlab="X1",ylab="X2"). Plot the green circles and red squares: points(X1[Y==0],X2[Y==0],pch=15,col="red") points(X1[Y==1],X2[Y==1],pch=19,col="green") title(main="A Difficult Classification Problem") Repeat the previous two steps to obtain the identical plot on the right side of the graphics window. First, partition according to X1 values by using abline(v=6,lwd=2). Add segments on the graph with the segments function: segments(x0=c(0,0,6,6),y0=c(3.75,6.25,2.25,5),x1=c(6,6,12,12),y1=c(3.75,6.25,2.25,5),lwd=2) title(main="Looks a Solvable Problem Under Partitions") What just happened? A complex problem is simplified through partitioning! A more generic function, segments, has nicely slipped into our program, which you may use for many other scenarios. Now, this approach of recursive partitioning is not feasible all the time! Why?
We seldom deal with explanatory variables and data points as few as in the preceding hypothetical example. The question is how one creates a recursive partitioning of the dataset. Breiman, et. al. (1984) and Quinlan (1988) have invented tree building algorithms, and we will follow the Breiman, et. al. approach in the rest of the book. The CART discussion in this book is heavily influenced by Berk (2008). Splitting the data In the earlier discussion, we saw that partitioning the dataset can benefit a lot in reducing the noise in the data. The question is how does one begin with it? The explanatory variables can be discrete or continuous. We will begin with the continuous (numeric objects in R) variables. For a continuous variable, the task is a bit simpler. First, identify the unique distinct values of the numeric object. Let us say, for example, that the distinct values of a numeric object, say height in cms, are 160, 165, 170, 175, and 180. The data partitions are then obtained as follows: data[Height<=160,], data[Height>160,] data[Height<=165,], data[Height>165,] data[Height<=170,], data[Height>170,] data[Height<=175,], data[Height>175,] The reader should try to understand the rationale behind the code, and certainly this is just an indicative one. Now, we consider the discrete variables. Here, we have two types of variables, namely categorical and ordinal. In the case of ordinal variables, we have an order among the distinct values. For example, in the case of the economic status variable, the order may be among the classes Very Poor, Poor, Average, Rich, and Very Rich. Here, the splits are similar to the case of a continuous variable, and if there are m distinct orders, we consider m-1 distinct splits of the overall data. In the case of a categorical variable with m categories, for example the departments A to F of the UCBAdmissions dataset, the number of possible splits becomes 2^(m-1)-1. However, the benefit of using software like R is that we do not have to worry about these issues. The first tree In the CART_Dummy dataset, we can easily visualize the partitions for Y as a function of the inputs X1 and X2. Obviously, we have a classification problem, and hence we will build the classification tree. Time for action – building our first tree The rpart function from the library rpart will be used to obtain the first classification tree. The tree will be visualized by using the plot options of rpart, and we will follow this up with extracting the rules of a tree by using the asRules function from the rattle package. Load the rpart package by using library(rpart). Create the classification tree with CART_Dummy_rpart <- rpart(Y~X1+X2,data=CART_Dummy). Visualize the tree with appropriate text labels by using plot(CART_Dummy_rpart); text(CART_Dummy_rpart). Figure 2: A classification tree for the dummy dataset Now, the classification tree flows as follows. Obviously, the tree built using the rpart function does not partition as simply as we did in Figure 1: A complex classification dataset with partitions; the working of rpart will be dealt with in the third section of this chapter. First, we check if the value of the second variable X2 is less than 4.875. If the answer is an affirmation, we move to the left side of the tree; the right side otherwise. Let us move to the right side. A second question asked is whether X1 is less than 4.5 or not; if the answer is yes, the observation is identified as a red square, and otherwise as a green circle.
You are now asked to interpret the left side of the first node. Let us look at the summary of CART_Dummy_rpart. Apply the summary, an S3 method, for the classification tree with summary(CART_Dummy_rpart). That one is a lot of output! Figure 3: Summary of a classification tree Our interest is in the nodes numbered 5 to 9! Why? The terminal nodes, of course! A terminal node is one in which we can't split the data any further, and for the classification problem, we arrive at a class assignment as the class that has a majority count at the node. The summary shows that there are indeed some misclassifications too. Now, wouldn't it be great if R gave us the rules behind the terminal nodes? The function asRules from the rattle package extracts the rules from an rpart object. Let's do it! Invoke the rattle package with library(rattle) and, using the asRules function, extract the rules from the terminal nodes with asRules(CART_Dummy_rpart). The result is the following set of rules: Figure 4: Extracting "rules" from a tree! We can see that the classification tree is not according to our "eye-bird" partitioning. However, as a final aspect of our initial understanding, let us plot the segments using the naïve way. That is, we will partition the data display according to the terminal nodes of the CART_Dummy_rpart tree. The R code is given right away, though you should make an effort to find the logic behind it. Of course, it is very likely that by now you need to re-run some of the code that was given previously. abline(h=4.875,lwd=2) segments(x0=4.5,y0=4.875,x1=4.5,y1=10,lwd=2) abline(h=1.75,lwd=2) segments(x0=3.5,y0=1.75,x1=3.5,y1=4.875,lwd=2) title(main="Classification Tree on the Data Display") It can be easily seen from the following that rpart works really well: Figure 5: The terminal nodes on the original display of the data What just happened? We obtained our first classification tree, which is a good thing. Given the actual data display, the classification tree gives satisfactory answers. We have understood the "how" part of a classification tree. The "why" aspect is vital in science, and the next section explains the science behind the construction of a regression tree; it will be followed later by a detailed explanation of the working of a classification tree. The construction of a regression tree In the CART_Dummy dataset, the output is a categorical variable, and we built a classification tree for it. The same distinction is required in CART: we build classification trees for categorical random variables, whereas regression trees are for continuous random variables. Recall the rationale behind the estimation of regression coefficients for the linear regression model. The main goal was to find the estimates of the regression coefficients, which minimize the error sum of squares between the actual regressand values and the fitted values. A similar approach is followed here, in the sense that we need to split the data at the points that keep the residual sum of squares to a minimum. That is, for each unique value of a predictor, which is a candidate for the node value, we find the sum of squares of y's within each partition of the data, and then add them up. This step is performed for each unique value of the predictor, and the value which leads to the least sum of squares among all the candidates is selected as the best split point for that predictor.
In the next step, we find the best split points for each of the predictors, and then the best split is selected across the best split points across the predictors. Easy! Now, the data is partitioned into two parts according to the best split. The process of finding the best split within each partition is repeated in the same spirit as for the first split. This process is carried out in a recursive fashion until the data can't be partitioned any further. What is happening here? The residual sum of squares at each child node will be lesser than that in the parent node. At the outset, we record that the rpart function does the exact same thing. However, as a part of cleaner understanding of the regression tree, we will write raw R codes and ensure that there is no ambiguity in the process of understanding CART. We will begin with a simple example of a regression tree, and use the rpart function to plot the regression function. Then, we will first define a function, which will extract the best split given by the covariate and dependent variable. This action will be repeated for all the available covariates, and then we find the best overall split. This will be verified with the regression tree. The data will then be partitioned by using the best overall split, and then the best split will be identified for each of the partitioned data. The process will be repeated until we reach the end of the complete regression tree given by the rpart. First, the experiment! The cpus dataset available in the MASS package contains the relative performance measure of 209 CPUs in the perf variable. It is known that the performance of a CPU depends on factors such as the cycle time in nanoseconds (syct), minimum and maximum main memory in kilobytes (mmin and mmax), cache size in kilobytes (cach), and minimum and maximum number of channels (chmin and chmax). The task in hand is to model the perf as a function of syct, mmin, mmax, cach, chmin, and chmax. The histogram of perf—try hist(cpus$perf)—will show a highly skewed distribution, and hence we will build a regression tree for the logarithm transformation log10(perf). Time for action – the construction of a regression tree A regression tree is first built by using the rpart function. The getNode function is introduced, which helps in identifying the split node at each stage, and using it we build a regression tree and verify that we had the same tree as returned by the rpart function. Load the MASS library by using library(MASS). Create the regression tree for the logarithm (to the base 10) of perf as a function of the covariates explained earlier, and display the regression tree: cpus.ltrpart <- rpart(log10(perf)~syct+mmin+mmax+cach+chmin+chmax, data=cpus) plot(cpus.ltrpart); text(cpus.ltrpart) The regression tree will be indicated as follows: Figure 6: Regression tree for the "perf" of a CPU We will now define the getNode function. Given the regressand and the covariate, we need to find the best split in the sense of the sum of squares criterion. The evaluation needs to be done for every distinct value of the covariate. If there are m distinct points, we need m -1 evaluations. At each distinct point, the regressand needs to be partitioned accordingly, and the sum of squares should be obtained for each partition. The two sums of squares (in each part) are then added to obtain the reduced sum of squares. Thus, we create the required function to meet all these requirements. 
Create the getNode function in R by running the following code: getNode <- function(x,y) { xu <- sort(unique(x),decreasing=TRUE) ss <- numeric(length(xu)-1) for(i in 1:length(ss)) { partR <- y[x>xu[i]] partL <- y[x<=xu[i]] partRSS <- sum((partR-mean(partR))^2) partLSS <- sum((partL-mean(partL))^2) ss[i]<-partRSS + partLSS } return(list(xnode=xu[which(ss==min(ss,na.rm=TRUE))], minss = min(ss,na.rm=TRUE),ss,xu)) } The getNode function gives the best split for a given covariate. It returns a list consisting of four objects: xnode, which is a datum of the covariate x that gives the minimum residual sum of squares for the regressand y The value of the minimum residual sum of squares The vector of the residual sum of squares for the distinct points of the vector x The vector of the distinct x values We will run this function for each of the six covariates, and find the best overall split. The argument na.rm=TRUE is required, as at the maximum value of x we won't get a numeric value. We will first execute the getNode function on the syct covariate, and look at the output we get as a result: > getNode(cpus$syct,log10(cpus$perf))$xnode [1] 48 > getNode(cpus$syct,log10(cpus$perf))$minss [1] 24.72 > getNode(cpus$syct,log10(cpus$perf))[[3]] [1] 43.12 42.42 41.23 39.93 39.44 37.54 37.23 36.87 36.51 36.52 35.92 34.91 [13] 34.96 35.10 35.03 33.65 33.28 33.49 33.23 32.75 32.96 31.59 31.26 30.86 [25] 30.83 30.62 29.85 30.90 31.15 31.51 31.40 31.50 31.23 30.41 30.55 28.98 [37] 27.68 27.55 27.44 26.80 25.98 27.45 28.05 28.11 28.66 29.11 29.81 30.67 [49] 28.22 28.50 24.72 25.22 26.37 28.28 29.10 33.02 34.39 39.05 39.29 > getNode(cpus$syct,log10(cpus$perf))[[4]] [1] 1500 1100 900 810 800 700 600 480 400 350 330 320 300 250 240 [16] 225 220 203 200 185 180 175 167 160 150 143 140 133 125 124 [31] 116 115 112 110 105 100 98 92 90 84 75 72 70 64 60 [46] 59 57 56 52 50 48 40 38 35 30 29 26 25 23 17 The least sum of squares at a split for the best split value of the syct variable is 24.72, and it occurs at a value of syct greater than 48. The third and fourth list objects given by getNode, respectively, contain the details of the sum of squares for the potential candidates and the unique values of syct. The values of interest are highlighted. Thus, we will first look at the second object from the list output for all the six covariates to find the best split among the best split of each of the variables, by the residual sum of squares criteria. Now, run the getNode function for the remaining five covariates: getNode(cpus$syct,log10(cpus$perf))[[2]] getNode(cpus$mmin,log10(cpus$perf))[[2]] getNode(cpus$mmax,log10(cpus$perf))[[2]] getNode(cpus$cach,log10(cpus$perf))[[2]] getNode(cpus$chmin,log10(cpus$perf))[[2]] getNode(cpus$chmax,log10(cpus$perf))[[2]] getNode(cpus$cach,log10(cpus$perf))[[1]] sort(getNode(cpus$cach,log10(cpus$perf))[[4]],decreasing=FALSE) The output is as follows: Figure 7: Obtaining the best "first split" of regression tree The sum of squares for cach is the lowest, and hence we need to find the best split associated with it, which is 24. However, the regression tree shows that the best split is for the cach value of 27. The getNode function says that the best split occurs at a point greater than 24, and hence we take the average of 24 and the next unique point at 30. Having obtained the best overall split, we next obtain the first partition of the dataset. 
Partition the data by using the best overall split point: cpus_FS_R <- cpus[cpus$cach>=27,] cpus_FS_L <- cpus[cpus$cach<27,] The new names of the data objects are clear with _FS_R indicating the dataset obtained on the right side for the first split, and _FS_L indicating the left side. In the rest of the section, the nomenclature won't be further explained. Identify the best split in each of the partitioned datasets: getNode(cpus_FS_R$syct,log10(cpus_FS_R$perf))[[2]] getNode(cpus_FS_R$mmin,log10(cpus_FS_R$perf))[[2]] getNode(cpus_FS_R$mmax,log10(cpus_FS_R$perf))[[2]] getNode(cpus_FS_R$cach,log10(cpus_FS_R$perf))[[2]] getNode(cpus_FS_R$chmin,log10(cpus_FS_R$perf))[[2]] getNode(cpus_FS_R$chmax,log10(cpus_FS_R$perf))[[2]] getNode(cpus_FS_R$mmax,log10(cpus_FS_R$perf))[[1]] sort(getNode(cpus_FS_R$mmax,log10(cpus_FS_R$perf))[[4]], decreasing=FALSE) getNode(cpus_FS_L$syct,log10(cpus_FS_L$perf))[[2]] getNode(cpus_FS_L$mmin,log10(cpus_FS_L$perf))[[2]] getNode(cpus_FS_L$mmax,log10(cpus_FS_L$perf))[[2]] getNode(cpus_FS_L$cach,log10(cpus_FS_L$perf))[[2]] getNode(cpus_FS_L$chmin,log10(cpus_FS_L$perf))[[2]] getNode(cpus_FS_L$chmax,log10(cpus_FS_L$perf))[[2]] getNode(cpus_FS_L$mmax,log10(cpus_FS_L$perf))[[1]] sort(getNode(cpus_FS_L$mmax,log10(cpus_FS_L$perf))[[4]], decreasing=FALSE) The following screenshot gives the results of running the preceding R code: Figure 8: Obtaining the next two splits Thus, for the first right partitioned data, the best split is for the mmax value as the mid-point between 24000 and 32000; that is, at mmax = 28000. Similarly, for the first left-partitioned data, the best split is the average value of 6000 and 6200, which is 6100, for the same mmax covariate. Note the important step here. Even though we used cach as the criteria for the first partition, it is still used with the two partitioned data. The results are consistent with the display given by the regression tree, Figure 6: Regression tree for the "perf" of a CPU . The next R program will take care of the entire first split's right side's future partitions. 
Partition the first right part cpus_FS_R as follows: cpus_FS_R_SS_R <- cpus_FS_R[cpus_FS_R$mmax>=28000,] cpus_FS_R_SS_L <- cpus_FS_R[cpus_FS_R$mmax<28000,] Obtain the best split for cpus_FS_R_SS_R and cpus_FS_R_SS_L by running the following code: cpus_FS_R_SS_R <- cpus_FS_R[cpus_FS_R$mmax>=28000,] cpus_FS_R_SS_L <- cpus_FS_R[cpus_FS_R$mmax<28000,] getNode(cpus_FS_R_SS_R$syct,log10(cpus_FS_R_SS_R$perf))[[2]] getNode(cpus_FS_R_SS_R$mmin,log10(cpus_FS_R_SS_R$perf))[[2]] getNode(cpus_FS_R_SS_R$mmax,log10(cpus_FS_R_SS_R$perf))[[2]] getNode(cpus_FS_R_SS_R$cach,log10(cpus_FS_R_SS_R$perf))[[2]] getNode(cpus_FS_R_SS_R$chmin,log10(cpus_FS_R_SS_R$perf))[[2]] getNode(cpus_FS_R_SS_R$chmax,log10(cpus_FS_R_SS_R$perf))[[2]] getNode(cpus_FS_R_SS_R$cach,log10(cpus_FS_R_SS_R$perf))[[1]] sort(getNode(cpus_FS_R_SS_R$cach,log10(cpus_FS_R_SS_R$perf))[[4]], decreasing=FALSE) getNode(cpus_FS_R_SS_L$syct,log10(cpus_FS_R_SS_L$perf))[[2]] getNode(cpus_FS_R_SS_L$mmin,log10(cpus_FS_R_SS_L$perf))[[2]] getNode(cpus_FS_R_SS_L$mmax,log10(cpus_FS_R_SS_L$perf))[[2]] getNode(cpus_FS_R_SS_L$cach,log10(cpus_FS_R_SS_L$perf))[[2]] getNode(cpus_FS_R_SS_L$chmin,log10(cpus_FS_R_SS_L$perf))[[2]] getNode(cpus_FS_R_SS_L$chmax,log10(cpus_FS_R_SS_L$perf))[[2]] getNode(cpus_FS_R_SS_L$cach,log10(cpus_FS_R_SS_L$perf))[[1]] sort(getNode(cpus_FS_R_SS_L$cach,log10(cpus_FS_R_SS_L$perf))[[4]],decreasing=FALSE) For the cpus_FS_R_SS_R part, the final division is according to cach being greater than 56 or not (average of 48 and 64). If the cach value in this partition is greater than 56, then perf (actually log10(perf)) ends in the terminal leaf 3, else 2. However, for the region cpus_FS_R_SS_L, we partition the data further by the cach value being greater than 96.5 (average of 65 and 128). In the right side of the region, log10(perf) is found as 2, and a third level split is required for cpus_FS_R_SS_L with cpus_FS_R_SS_L_TS_L. Note that though the final terminal leaves of the cpus_FS_R_SS_L_TS_L region shows the same 2 as the final log10(perf), this may actually result in a significant variability reduction of the difference between the predicted and the actual log10(perf) values. We will now focus on the first main split's left side. Figure 9: Partitioning the right partition after the first main split Partition cpus_FS_L accordingly, as the mmax value being greater than 6100 or otherwise: cpus_FS_L_SS_R <- cpus_FS_L[cpus_FS_L$mmax>=6100,] cpus_FS_L_SS_L <- cpus_FS_L[cpus_FS_L$mmax<6100,] The rest of the partition for cpus_FS_L is completely given next. 
The details will be skipped and the R program is given right away: cpus_FS_L_SS_R <- cpus_FS_L[cpus_FS_L$mmax>=6100,] cpus_FS_L_SS_L <- cpus_FS_L[cpus_FS_L$mmax<6100,] getNode(cpus_FS_L_SS_R$syct,log10(cpus_FS_L_SS_R$perf))[[2]] getNode(cpus_FS_L_SS_R$mmin,log10(cpus_FS_L_SS_R$perf))[[2]] getNode(cpus_FS_L_SS_R$mmax,log10(cpus_FS_L_SS_R$perf))[[2]] getNode(cpus_FS_L_SS_R$cach,log10(cpus_FS_L_SS_R$perf))[[2]] getNode(cpus_FS_L_SS_R$chmin,log10(cpus_FS_L_SS_R$perf))[[2]] getNode(cpus_FS_L_SS_R$chmax,log10(cpus_FS_L_SS_R$perf))[[2]] getNode(cpus_FS_L_SS_R$syct,log10(cpus_FS_L_SS_R$perf))[[1]] sort(getNode(cpus_FS_L_SS_R$syct,log10(cpus_FS_L_SS_R$perf))[[4]], decreasing=FALSE) getNode(cpus_FS_L_SS_L$syct,log10(cpus_FS_L_SS_L$perf))[[2]] getNode(cpus_FS_L_SS_L$mmin,log10(cpus_FS_L_SS_L$perf))[[2]] getNode(cpus_FS_L_SS_L$mmax,log10(cpus_FS_L_SS_L$perf))[[2]] getNode(cpus_FS_L_SS_L$cach,log10(cpus_FS_L_SS_L$perf))[[2]] getNode(cpus_FS_L_SS_L$chmin,log10(cpus_FS_L_SS_L$perf))[[2]] getNode(cpus_FS_L_SS_L$chmax,log10(cpus_FS_L_SS_L$perf))[[2]] getNode(cpus_FS_L_SS_L$mmax,log10(cpus_FS_L_SS_L$perf))[[1]] sort(getNode(cpus_FS_L_SS_L$mmax,log10(cpus_FS_L_SS_L$perf))[[4]],decreasing=FALSE) cpus_FS_L_SS_R_TS_R <- cpus_FS_L_SS_R[cpus_FS_L_SS_R$syct<360,] getNode(cpus_FS_L_SS_R_TS_R$syct,log10(cpus_FS_L_SS_R_TS_R$ perf))[[2]] getNode(cpus_FS_L_SS_R_TS_R$mmin,log10(cpus_FS_L_SS_R_TS_R$ perf))[[2]] getNode(cpus_FS_L_SS_R_TS_R$mmax,log10(cpus_FS_L_SS_R_TS_R$ perf))[[2]] getNode(cpus_FS_L_SS_R_TS_R$cach,log10(cpus_FS_L_SS_R_TS_R$ perf))[[2]] getNode(cpus_FS_L_SS_R_TS_R$chmin,log10(cpus_FS_L_SS_R_TS_R$ perf))[[2]] getNode(cpus_FS_L_SS_R_TS_R$chmax,log10(cpus_FS_L_SS_R_TS_R$ perf))[[2]] getNode(cpus_FS_L_SS_R_TS_R$chmin,log10(cpus_FS_L_SS_R_TS_R$ perf))[[1]] sort(getNode(cpus_FS_L_SS_R_TS_R$chmin,log10(cpus_FS_L_SS_R_TS_R$perf))[[4]],decreasing=FALSE) We will now see how the : Figure 10: Partitioning the left partition after the first main split We leave it to you to interpret the output arising from the previous action. What just happened? Using the rpart function from the rpart library we first built the regression tree for log10(perf). Then, we explored the basic definitions underlying the construction of a regression tree and defined the getNode function to obtain the best split for a pair of regressands and a covariate. This function is then applied for all the covariates, and the best overall split is obtained; using this we get our first partition of the data, which will be in agreement with the tree given by the rpart function. We then recursively partitioned the data by using the getNode function and verified that all the best splits in each partitioned data are in agreement with the one provided by the rpart function. The reader may wonder if the preceding tedious task was really essential. However, it has been the experience of the author that users/readers seldom remember the rationale behind using direct code/functions for any software after some time. Moreover, CART is a difficult concept and it is imperative that we clearly understand our first tree, and return to the preceding program whenever the understanding of a science behind CART is forgotten. Summary We began with the idea of recursive partitioning and gave a legitimate reason as to why such an approach is practical. The CART technique is completely demystified by using the getNode function, which has been defined appropriately depending upon whether we require a regression or a classification tree. 

article-image-sharepoint-2013-search
Packt
30 Jul 2013
11 min read
Save for later

SharePoint 2013 Search

Packt
30 Jul 2013
11 min read
(For more resources related to this topic, see here.) Features of SharePoint 2013 – Result types and design templates Both result types and design templates are new concepts introduced in SharePoint 2013. Kate Dramstad, a program manager from the SharePoint search team at Microsoft, describes both concepts in a single, easy-to-remember formula: result types + design templates = rich search experience . When we perform a search we get back results. Some results are documents, others are pictures, SharePoint items, or just about anything else. Up until SharePoint 2010, all results, no matter which type they were, looked quite the same. Take a look at the following screenshot showing a results page from FAST for SharePoint 2010: The results are dull looking, can't be told apart, and in order to find what you are looking for, you have to scan the results up and down with your eyes and zero in on your desired result. Now let's look at how results are displayed in SharePoint 2013: What a difference! The page looks much more alive and vibrant, with easy distinguishing of different result types and a whole new hover panel, which provides information about the hovered item and is completely customizable. Display templates Search, and its related web parts, makes heavy use of display templates instead of plain old XSLT ( Extensible Stylesheet Language Transformations ). Display templates are basically snippets of HTML and JavaScript, which control the appearance and behavior of search results. SharePoint ships with a bunch of display templates that we can use out of the box, but we can also create our own custom ones. Similar to master pages, it is recommended to copy an existing display template that is close in nature to what we strive to achieve and start our customization from it. Customizing a display template can be done on any HTML editor, or if you choose, even Notepad. Once we upload the HTML template, SharePoint takes care of creating the companion JavaScript file all by itself. If we tear apart the results page, we can distinguish four different layers of display templates: The layers are as follows: Filters layer : In the preceding screenshot they are highlighted with the green border on the left and numbered 1. This layer shows the new refinement panel area that is not limited to text alone, but also enables the use of UX elements such as sliders, sliders with graphs, and so on. Control layer : In the preceding screenshot they are highlighted with the red border in the middle and numbered 2. This layer shows that not only results but also controls can be templated. Item layer : In the preceding screenshot they are highlighted with the orange border in the middle and numbered 3. This layer shows that each result type can be templated to look unique. For example, in the screenshot we see how a site result (the first result), conversation results (next three results), and image result (last one) looks like. Each result type has its own display template. Hover panel layer : In the preceding screenshot, they are highlighted with the blue border on the right and numbered 4. They are introduced in SharePoint 2013, the hover panel shows information on a hovered result. The extra information can be a preview of the document (using Office Web Apps), a bigger version of an image or just about anything we like, as we can template the hover panel just like any other layer. Display templates are stored in a site's master page gallery under the Display templates folder. 
Each one of these layers is controlled by display templates. But if design templates are the beauty, what are the brains? Well, that is result types. Result types Result types are the glue between design templates (UX—user experience) and the type of search result they template. You can think of result types as the brain behind the templating engine. Using result types enables administrators to create display templates to be displayed based upon the type of content that is returned from the search engine. Each result type is defined by a rule and is bound to a result source. In addition, each result type is associated with a single display template. Just like display templates, SharePoint ships with it a set of out of the box result types that match popular content. For example, SharePoint renders Word document results using the Item_Word.html display templates within any result source if the item matches the Microsoft Word type of content. However, if an item matches the PDF type of content, the result will be rendered using the Item_PDF.html display template. Defining a result type is a process much like creating a query rule. We will create our first result type and display template towards the end of the article. Both result types and display templates are used not only for search results, but also for other web parts as well, such as the Content Search Web Part. Styling results in a Content Search Web Part The Content Search Web Part (CSWP) comes in handy when we wish to show search-driven content to users quickly and without any interaction on their side. When adding a CSWP we have two sections to set: Search Criteria and Display Templates . Each section has its unique settings, explained as follows: The search criteria section is equivalent to the result type. Using the Query Builder we tell the web part which result type it should get. The Query Builder enables us to either choose one of the built-in queries (latest documents, items related to current user, and so on) or build our own. In addition, we can set the scope of the search. It can either be the current site, current site collection, or a URL. For our example, we will set the query to be Documents(System) , meaning it searches for the latest documents, and the scope to Current site collection : Next, we set the display template for the control holding the results. This is equivalent to the Control layer we mentioned earlier. The CSWP provides three control templates: List , List with Paging , and Slideshow . The control templates change the way the container of the items looks. To compare the different templates, take a look at how the container looks when the List template is chosen: And the following screenshot displays how the exact same list looks when the Slideshow template is chosen: Since our content is not images, rendering the control as Slideshow makes no sense. Last but not least, we set the Item display template. As usual, SharePoint comes with a set of built-in item templates, each designated for different item types. By default, the Picture on left, 3 lines on right item display template is selected. By looking at the preceding screenshot we can see it's not right for our results. Since we are searching for documents, we don't have a picture representing them so the left area looks quite dull. If we change the Item display template to Two lines we will get a much more suitable result: Display templates allow us to change the look of our results instantly. 
While playing around with the out-of-the-box display templates is fun, extending them is even better. If you look at the Two lines template that we chose for the CSWP, it seems kind of empty. All we have is the document type, represented by an icon, and the name of the document. Let's extend this display template and add the last modified date and the author of the document to the display.

Creating a custom display template

As we mentioned earlier, the best way to extend a display template is to copy a template that is close in nature to what we wish to achieve, and customize it. So, as we wish to extend the Two lines template, open SharePoint Designer, navigate to Master Page Gallery | Display Templates | Content Web Parts of the site where you previously added the CSWP, and copy and paste the Item_TwoLines.html file into the same folder. Rename the newly created file to Item_TwoLinesWithExtraInfo.html. As soon as you save the new filename, refresh the folder. You'll notice that SharePoint automatically created a new file named Item_TwoLinesWithExtraInfo.js. The combination of the HTML and JavaScript files is what makes the magic of display templates come to life. Edit the Item_TwoLinesWithExtraInfo.html file, and change its title to Two Lines with Extra Info.

Getting the new properties

The first code block we should discuss is the CustomDocumentProperties block. Let's examine what it holds between its tags:

<mso:CustomDocumentProperties>
  <mso:TemplateHidden msdt_dt="string">0</mso:TemplateHidden>
  <mso:ManagedPropertyMapping msdt_dt="string">'Link URL'{Link URL}:'Path','Line 1'…</mso:ManagedPropertyMapping>
  <mso:MasterPageDescription msdt_dt="string">This Item Display Template will show a small thumbnail…</mso:MasterPageDescription>
  <mso:ContentTypeId msdt_dt="string">0x0101002039C03B61C64EC4A04F5361F385106603</mso:ContentTypeId>
  <mso:TargetControlType msdt_dt="string">;#Content Web Parts;#</mso:TargetControlType>
  <mso:HtmlDesignAssociated msdt_dt="string">1</mso:HtmlDesignAssociated>
  <mso:HtmlDesignConversionSucceeded msdt_dt="string">True</mso:HtmlDesignConversionSucceeded>
  <mso:HtmlDesignStatusAndPreview msdt_dt="string">https://hippodevssp.sharepoint.com/search/_catalogs/masterpage/Display%20Templates/Content%20Web%20Parts/Item_TwoLinesWithExtraInfo.html, Conversion successful.</mso:HtmlDesignStatusAndPreview>
</mso:CustomDocumentProperties>

The most important properties from this block are:

ManagedPropertyMapping: This property holds all the managed properties that our display template will have access to. The properties are organized in the key:value format. For example, if we wish to make use of the Author property, we declare it as 'Author':'Author'. The value can be a list of managed properties, so if the first one is null, the mapping is done using the second one, and so on.

ContentTypeId: This property sets the content type of the display template. The specific value identifies the file as a display template.

TargetControlType: This property sets the target of the display template. In our example it is set to Content Web Parts, which means the Content Search Web Part and any other related search content web part. Other possible values are SearchBox, SearchHoverPanel, SearchResults, and so on.

Since we wish to display the author and the last modified date of the document, let's add these two managed properties to the ManagedPropertyMapping property.
Add the following snippet at the beginning of the property, as follows:

<mso:ManagedPropertyMapping msdt_dt="string">'Author':'Author','LastModified':'LastModifiedTime',…</mso:ManagedPropertyMapping>

We mapped the Author managed property to the Author key, and the LastModifiedTime managed property to the LastModified key. Next, we will discuss how to actually use the new properties.

Getting the values of the new properties

Using the newly added properties is done using plain old JavaScript. Scroll down a bit until you see the following opening div statement:

<div id="TwoLines">

The div tag begins with what seems to be comment markup (<!--), but if you look closer you should recognize that it is actually JavaScript. By using built-in methods and client object model code, display templates can get any information out of SharePoint, and from the outside world. The $getItemValue method is in charge of getting content back based on a given managed property. That means that if we wish to get the author of a result, and we set the key of the managed property to Author, the following line of code will get it:

var author = $getItemValue(ctx, "Author");

The same goes for the last modified date. We used the key LastModified for the managed property, and hence the following line of code will be used:

var last = $getItemValue(ctx, "LastModified");

Add these two lines just above the closing comment markup (_#-->). Remember that each result is rendered using this display template, so the author and last variables are unique to the one item that is being rendered.

Vaadin project with Spring and Handling login with Spring

Packt
22 Jul 2013
16 min read
(For more resources related to this topic, see here.) Setting up a Vaadin project with Spring in Maven We will set up a new Maven project for Vaadin application that will use the Spring framework. We will use a Java annotation-driven approach for Spring configuration instead of XML configuration files. This means that we will eliminate the usage of XML to the necessary minimum (for XML fans, don't worry there will be still enough XML to edit). In this recipe, we will set up a Spring project where we define a bean that will be obtainable from the Spring application context in the Vaadin code. As the final result, we will greet a lady named Adela, so we display Hi Adela! text on the screen. The brilliant thing about this is that we get the greeting text from the bean that we define via Spring. Getting ready First, we create a new Maven project. mvn archetype:generate -DarchetypeGroupId=com.vaadin -DarchetypeArtifactId=vaadin-archetype-application -DarchetypeVersion=LATEST -Dpackaging=war -DgroupId=com.packtpub.vaadin -DartifactId=vaadin-with-spring -Dversion=1.0 More information about Maven and Vaadin can be found at https://vaadin.com/book/-/page/getting-started.maven.html. How to do it... Carry out the following steps, in order to set up a Vaadin project with Spring in Maven: First, we need to add the necessary dependencies. Just add the following Maven dependencies into the pom.xml file: dependencies into the pom.xml file: <dependency> <groupId>org.springframework</groupId> <artifactId>spring-core</artifactId> <version>${spring.version}</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-context</artifactId> <version>${spring.version}</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-web</artifactId> <version>${spring.version}</version> </dependency> <dependency> <groupId>cglib</groupId> <artifactId>cglib</artifactId> <version>2.2.2</version> </dependency> In the preceding code, we are referring to the spring.version property. Make sure we have added the Spring version inside the properties tag in the pom.xml file. <properties> … <spring.version>3.1.2.RELEASE</spring.version> </properties> At the time of writing, the latest version of Spring was 3.1.2. Check the latest version of the Spring framework at http://www.springsource.org/spring-framework. The last step in the Maven configuration file is to add the new repository into pom.xml. Maven needs to know where to download the Spring dependencies. <repositories> … <repository> <id>springsource-repo</id> <name>SpringSource Repository</name> <url>http://repo.springsource.org/release</url> </repository> </repositories> Now we need to add a few lines of XML into the src/main/webapp/WEB-INF/web.xml deployment descriptor file. At this point, we make the first step in connecting Spring with Vaadin. The location of the AppConfig class needs to match the full class name of the configuration class. <context-param> <param-name>contextClass</param-name> <param-value> org.springframework.web.context.support.Annotation ConfigWebApplicationContext </param-value> </context-param> <context-param> <param-name>contextConfigLocation</param-name> <param-value>com.packtpub.vaadin.AppConfig </param-value> </context-param> <listener> <listener-class> org.springframework.web.context.ContextLoaderListener </listener-class> </listener> Create a new class AppConfig inside the com.packtpub.vaadin package and annotate it with the @Configuration annotation. 
Then create a new @Bean definition as shown: package com.packtpub.vaadin; import org.springframework.context.annotation.Bean; import org.springframework.context.annotation.Configuration; @Configuration public class AppConfig { @Bean(name="userService") public UserService helloWorld() { return new UserServiceImpl(); } } In order to have the recipe complete, we need to make a class that will represent a domain class. Create a new class called User. public class User { private String name; // generate getters and setters for name field } UserService is a simple interface defining a single method called getUser(). When the getUser() method is called in this recipe, we always create and return a new instance of the user (in the future, we could add parameters, for example login, and fetch user from the database). UserServiceImpl is the implementation of this interface. As mentioned, we could replace that implementation by something smarter than just returning a new instance of the same user every time the getUser() method is called. public interface UserService { public User getUser(); } public class UserServiceImpl implements UserService { @Override public User getUser() { User user = new User(); user.setName("Adela"); return user; } } Almost everything is ready now. We just make a new UI and get the application context from which we get the bean. Then, we call the service and obtain a user that we show in the browser. After we are done with the UI, we can run the application. public class AppUI extends UI { private ApplicationContext context; @Override protected void init(VaadinRequest request) { UserService service = getUserService(request); User user = service.getUser(); String name = user.getName(); Label lblUserName = new Label("Hi " + name + " !"); VerticalLayout layout = new VerticalLayout(); layout.setMargin(true); setContent(layout); layout.addComponent(lblUserName); } private UserService getUserService (VaadinRequest request) { WrappedSession session = request.getWrappedSession(); HttpSession httpSession = ((WrappedHttpSession) session).getHttpSession(); ServletContext servletContext = httpSession.getServletContext(); context = WebApplicationContextUtils.getRequired WebApplicationContext(servletContext); return (UserService) context.getBean("userService"); } } Run the following Maven commands in order to compile the widget set and run the application: mvn package mvn jetty:run How it works... In the first step, we have added dependencies to Spring. There was one additional dependency to cglib, Code Generation Library. This library is required by the @ Configuration annotation and it is used by Spring for making the proxy objects. More information about cglib can be found at http://cglib.sourceforge.net Then, we have added contextClass, contextConfigLocation and ContextLoaderListener into web.xml file. All these are needed in order to initialize the application context properly. Due to this, we are able to get the application context by calling the following code: WebApplicationContextUtils.getRequiredWebApplicationContext (servletContext); Then, we have made UserService that is actually not a real service in this case (we did so because it was not in the scope of this recipe). We will have a look at how to declare Spring services in the following recipes. In the last step, we got the application context by using the WebApplicationContextUtils class from Spring. 
WrappedSession session = request.getWrappedSession(); HttpSession httpSession = ((WrappedHttpSession) session).getHttpSession(); ServletContext servletContext = httpSession.getServletContext(); context = WebApplicationContextUtils.getRequired WebApplicationContext(servletContext); Then, we obtained an instance of UserService from the Spring application context. UserService service = (UserService) context.getBean("userService"); User user = service.getUser(); We can obtain a bean without knowing the bean name because it can be obtained by the bean type, like this context.getBean(UserService.class). There's more... Using the @Autowire annotation in classes that are not managed by Spring (classes that are not defined in AppConfig in our case) will not work, so no instances will be set via the @Autowire annotation. Handling login with Spring We will create a login functionality in this recipe. The user will be able to log in as admin or client. We will not use a database in this recipe. We will use a dummy service where we just hardcode two users. The first user will be "admin" and the second user will be "client". There will be also two authorities (or roles), ADMIN and CLIENT. We will use Java annotation-driven Spring configuration. Getting ready Create a new Maven project from the Vaadin archetype. mvn archetype:generate -DarchetypeGroupId=com.vaadin -DarchetypeArtifactId=vaadin-archetype-application -DarchetypeVersion=LATEST -Dpackaging=war -DgroupId=com.app -DartifactId=vaadin-spring-login -Dversion=1.0 Maven archetype generates the basic structure of the project. We will add the packages and classes, so the project will have the following directory and file structure: How to do it... Carry out the following steps, in order to create login with Spring framework: We need to add Maven dependencies in pom.xml to spring-core, spring-context, spring-web, spring-security-core, spring-security-config, and cglib (cglib is required by the @Configuration annotation from Spring). <dependency> <groupId>org.springframework</groupId> <artifactId>spring-core</artifactId> <version>${spring.version}</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-context</artifactId> <version>${spring.version}</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-web</artifactId> <version>${spring.version}</version> </dependency> <dependency> <groupId>org.springframework.security</groupId> <artifactId>spring-security-core</artifactId> <version>${spring.version}</version> </dependency> <dependency> <groupId>org.springframework.security</groupId> <artifactId>spring-security-config</artifactId> <version>${spring.version}</version> </dependency> <dependency> <groupId>cglib</groupId> <artifactId>cglib</artifactId> <version>2.2.2</version> </dependency> Now we edit the web.xml file, so Spring knows we want to use the annotation-driven configuration approach. The path to the AppConfig class must match full class name (together with the package name). <context-param> <param-name>contextClass</param-name> <param-value> org.springframework.web.context.support.Annotation ConfigWebApplicationContext </param-value> </context-param> <context-param> <param-name>contextConfigLocation</param-name> <param-value>com.app.config.AppConfig</param-value> </context-param> <listener> <listener-class> org.springframework.web.context.ContextLoaderListener </listener-class> </listener> We are referring to the AppConfig class in the previous step. 
Let's implement that class now. AppConfig needs to be annotated by the @Configuration annotation, so Spring can accept it as the context configuration class. We also add the @ComponentScan annotation, which makes sure that Spring will scan the specified packages for Spring components. The package names inside the @ComponentScan annotation need to match our packages that we want to include for scanning. When a component (a class that is annotated with the @Component annotation) is found and there is a @Autowire annotation inside, the auto wiring will happen automatically. package com.app.config; import com.app.auth.AuthManager; import com.app.service.UserService; import com.app.ui.LoginFormListener; import com.app.ui.LoginView; import com.app.ui.UserView; import org.springframework.context.annotation.Bean; import org.springframework.context. annotation.ComponentScan; import org.springframework.context. annotation.Configuration; import org.springframework.context. annotation.Scope; @Configuration @ComponentScan(basePackages = {"com.app.ui" , "com.app.auth", "com.app.service"}) public class AppConfig { @Bean public AuthManager authManager() { AuthManager res = new AuthManager(); return res; } @Bean public UserService userService() { UserService res = new UserService(); return res; } @Bean public LoginFormListener loginFormListener() { return new LoginFormListener(); } } We are defining three beans in AppConfig. We will implement them in this step. AuthManager will take care of the login process. package com.app.auth; import com.app.service.UserService; import org.springframework.beans.factory. annotation.Autowired; import org.springframework.security.authentication. AuthenticationManager; import org.springframework.security.authentication. BadCredentialsException; import org.springframework.security.authentication. UsernamePasswordAuthenticationToken; import org.springframework.security.core.Authentication; import org.springframework.security.core. AuthenticationException; import org.springframework.security.core. GrantedAuthority; import org.springframework.security.core. userdetails.UserDetails; import org.springframework.stereotype.Component; import java.util.Collection; @Component public class AuthManager implements AuthenticationManager { @Autowired private UserService userService; public Authentication authenticate (Authentication auth) throws AuthenticationException { String username = (String) auth.getPrincipal(); String password = (String) auth.getCredentials(); UserDetails user = userService.loadUserByUsername(username); if (user != null && user.getPassword(). equals(password)) { Collection<? extends GrantedAuthority> authorities = user.getAuthorities(); return new UsernamePasswordAuthenticationToken (username, password, authorities); } throw new BadCredentialsException("Bad Credentials"); } } UserService will fetch a user based on the passed login. UserService will be used by AuthManager. package com.app.service; import org.springframework.security.core. GrantedAuthority; import org.springframework.security.core. authority.GrantedAuthorityImpl; import org.springframework.security.core. authority.SimpleGrantedAuthority; import org.springframework.security.core. userdetails.UserDetails; import org.springframework.security.core. userdetails.UserDetailsService; import org.springframework.security.core. userdetails.UsernameNotFoundException; import org.springframework.security.core. 
userdetails.User; import org.springframework.stereotype.Service; import java.util.ArrayList; import java.util.List; public class UserService implements UserDetailsService { @Override public UserDetails loadUserByUsername (String username) throws UsernameNotFoundException { List<GrantedAuthority> authorities = new ArrayList<GrantedAuthority>(); // fetch user from e.g. DB if ("client".equals(username)) { authorities.add (new SimpleGrantedAuthority("CLIENT")); User user = new User(username, "pass", true, true, false, false, authorities); return user; } if ("admin".equals(username)) { authorities.add (new SimpleGrantedAuthority("ADMIN")); User user = new User(username, "pass", true, true, false, false, authorities); return user; } else { return null; } } } LoginFormListener is just a listener that will initiate the login process, so it will cooperate with AuthManager. package com.app.ui; import com.app.auth.AuthManager; import com.vaadin.navigator.Navigator; import com.vaadin.ui.*; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.security.authentication. UsernamePasswordAuthenticationToken; import org.springframework.security.core.Authentication; import org.springframework.security.core. AuthenticationException; import org.springframework.security.core.context. SecurityContextHolder; import org.springframework.stereotype.Component; @Component public class LoginFormListener implements Button.ClickListener { @Autowired private AuthManager authManager; @Override public void buttonClick(Button.ClickEvent event) { try { Button source = event.getButton(); LoginForm parent = (LoginForm) source.getParent(); String username = parent.getTxtLogin().getValue(); String password = parent.getTxtPassword().getValue(); UsernamePasswordAuthenticationToken request = new UsernamePasswordAuthenticationToken (username, password); Authentication result = authManager.authenticate(request); SecurityContextHolder.getContext(). setAuthentication(result); AppUI current = (AppUI) UI.getCurrent(); Navigator navigator = current.getNavigator(); navigator.navigateTo("user"); } catch (AuthenticationException e) { Notification.show("Authentication failed: " + e.getMessage()); } } } The login form will be made as a separate Vaadin component. We will use the application context and that way we get bean from the application context by ourselves. So, we are not using auto wiring in LoginForm. package com.app.ui; import com.vaadin.ui.*; import org.springframework.context.ApplicationContext; public class LoginForm extends VerticalLayout { private TextField txtLogin = new TextField("Login: "); private PasswordField txtPassword = new PasswordField("Password: "); private Button btnLogin = new Button("Login"); public LoginForm() { addComponent(txtLogin); addComponent(txtPassword); addComponent(btnLogin); LoginFormListener loginFormListener = getLoginFormListener(); btnLogin.addClickListener(loginFormListener); } public LoginFormListener getLoginFormListener() { AppUI ui = (AppUI) UI.getCurrent(); ApplicationContext context = ui.getApplicationContext(); return context.getBean(LoginFormListener.class); } public TextField getTxtLogin() { return txtLogin; } public PasswordField getTxtPassword() { return txtPassword; } } We will use Navigator for navigating between different views in our Vaadin application. We make two views. The first is for login and the second is for showing the user detail when the user is logged into the application. Both classes will be in the com.app.ui package. 
LoginView will contain just the components that enable a user to log in (text fields and button). public class LoginView extends VerticalLayout implements View { public LoginView() { LoginForm loginForm = new LoginForm(); addComponent(loginForm); } @Override public void enter(ViewChangeListener.ViewChangeEvent event) { } }; UserView needs to identify whether the user is logged in or not. For this, we will use SecurityContextHolder that obtains the SecurityContext that holds the authentication data. If the user is logged in, then we display some data about him/her. If not, then we navigate him/her to the login form. public class UserView extends VerticalLayout implements View { public void enter(ViewChangeListener.ViewChangeEvent event) { removeAllComponents(); SecurityContext context = SecurityContextHolder.getContext(); Authentication authentication = context.getAuthentication(); if (authentication != null && authentication.isAuthenticated()) { String name = authentication.getName(); Label labelLogin = new Label("Username: " + name); addComponent(labelLogin); Collection<? extends GrantedAuthority> authorities = authentication.getAuthorities(); for (GrantedAuthority ga : authorities) { String authority = ga.getAuthority(); if ("ADMIN".equals(authority)) { Label lblAuthority = new Label("You are the administrator. "); addComponent(lblAuthority); } else { Label lblAuthority = new Label("Granted Authority: " + authority); addComponent(lblAuthority); } } Button logout = new Button("Logout"); LogoutListener logoutListener = new LogoutListener(); logout.addClickListener(logoutListener); addComponent(logout); } else { Navigator navigator = UI.getCurrent().getNavigator(); navigator.navigateTo("login"); } } } We have mentioned LogoutListener in the previous step. Here is how that class could look: public class LogoutListener implements Button.ClickListener { @Override public void buttonClick(Button.ClickEvent clickEvent) { SecurityContextHolder.clearContext(); UI.getCurrent().close(); Navigator navigator = UI.getCurrent().getNavigator(); navigator.navigateTo("login"); } } Everything is ready for the final AppUI class. In this class, we put in to practice all that we have created in the previous steps. We need to get the application context. That is done in the first lines of code in the init method. In order to obtain the application context, we need to get the session from the request, and from the session get the servlet context. Then, we use the Spring utility class, WebApplicationContextUtils, and we find the application context by using the previously obtained servlet context. After that, we set up the navigator. @PreserveOnRefresh public class AppUI extends UI { private ApplicationContext applicationContext; @Override protected void init(VaadinRequest request) { WrappedSession session = request.getWrappedSession(); HttpSession httpSession = ((WrappedHttpSession) session).getHttpSession(); ServletContext servletContext = httpSession.getServletContext(); applicationContext = WebApplicationContextUtils. getRequiredWebApplicationContext(servletContext); Navigator navigator = new Navigator(this, this); navigator.addView("login", LoginView.class); navigator.addView("user", UserView.class); navigator.navigateTo("login"); setNavigator(navigator); } public ApplicationContext getApplicationContext() { return applicationContext; } } Now we can run the application. The password for usernames client and admin is pass. mvn package mvn jetty:run How it works... 
There are two tricky parts, from a development point of view, in making this application:

The first is how to get the Spring application context in Vaadin. For this, we need to make sure that contextClass, contextConfigLocation, and ContextLoaderListener are defined in the web.xml file. Then we need to know how to get the Spring application context from the VaadinRequest. We certainly need a reference to the application context in the UI, so we define the applicationContext class field together with a public getter (because we need access to the application context from other classes in order to get Spring beans).

The second part, which is a bit tricky, is the AppConfig class. That class represents the annotated Spring application configuration (which is referenced from the web.xml file). We needed to define which packages Spring should scan for components. For this, we have used the @ComponentScan annotation. The important thing to keep in mind is that the @Autowired annotation will work only for Spring-managed beans that we have defined in AppConfig. When we try to add the @Autowired annotation to a simple Vaadin component, the autowired reference will remain empty because no auto wiring happens. It is up to us to decide which instances should be managed by Spring and where we use the Spring application context to retrieve the beans. A minimal sketch of this lookup pattern is shown after the resources list below.

Summary

In this article, we saw how to add Spring into the Maven project. We also took a look at handling login with Spring.

Resources for Article:

Further resources on this subject:

Vaadin Portlets in Liferay User Interface Development [Article]
Creating a Basic Vaadin Project [Article]
Getting Started with Ext GWT [Article]
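The bean lookup described in the How it works section above can be summarized in a short, hedged Java sketch. The StatusPanel class below is hypothetical (it is not part of the recipe) and only illustrates the two cases: an @Autowired field that stays empty because the component is not Spring-managed, and the working lookup through the getApplicationContext() getter that the recipe adds to AppUI.

package com.app.ui;

import com.app.auth.AuthManager;
import com.vaadin.ui.UI;
import com.vaadin.ui.VerticalLayout;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.ApplicationContext;

// Hypothetical component, created with "new" by our own code, so Spring never sees it.
public class StatusPanel extends VerticalLayout {

    // This field stays null: auto wiring only happens for beans that Spring itself creates.
    @Autowired
    private AuthManager ignoredByAutowiring;

    public StatusPanel() {
        // Working alternative: reach the application context through AppUI
        // (which exposes it via the public getter added in the recipe)
        // and fetch the bean by type.
        AppUI ui = (AppUI) UI.getCurrent();
        ApplicationContext context = ui.getApplicationContext();
        AuthManager authManager = context.getBean(AuthManager.class);
        // authManager is now usable, for example inside a custom click listener
    }
}

If a class genuinely needs @Autowired to work, it has to be registered as a Spring bean itself, for example as a @Bean in AppConfig or through @ComponentScan, which is exactly what the recipe does for LoginFormListener.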

Java Development

Packt
18 Jul 2013
16 min read
(For more resources related to this topic, see here.) Creating a Java project To create a new Java project, navigate to File | New | Project . You will be presented with the New Project wizard window that is shown in the following screenshot: Choose the Java Project option, and click on Next . The next page of the wizard contains the basic configuration of the project that you will create. The JRE section allows you to use a specific JRE to compile and run your project. The Project layout section allows you to choose if both source and binary files are created in the project's root folder or if they are to be separated into different folders (src and bin by default). The latter is the default option. You can create your project inside a working set. This is a good idea if you have too many projects in your workspace and want to keep them organized. Check the Creating working sets section of this article for more information on how to use and manage working sets. The next page of the wizard contains build path options. In the Managing the project build path section of this article , we will talk more about these options. You can leave everything as the default for now, and make the necessary changes after the project is created. Creating a Java class To create a new Java class, right-click on the project in the Package Explorer view and navigate to New | Class . You will be presented with the New Java Class window, where you will input information about your class. You can change the class's superclass, and add interfaces that it implements, as well as add stubs for abstract methods inherited from interfaces and abstract superclasses, add constructors from superclasses, and add the main method. To create your class inside a package, simply enter its name in the appropriate field, or click on the Browse button beside it and select the package. If you input a package name that doesn't exist, Eclipse will create it for you. New packages can also be created by right-clicking on the project in the Package Explorer and navigating to New | Package . Right-clicking on a package instead of a project in the Project Explorer and navigating to New | Class will cause the class to be created inside that package. Creating working sets Working sets provide a way to organize your workspace's projects into subsets. When you have too many projects in your workspace, it gets hard to find the project you're looking for in the Package Explorer view. Projects you are not currently working on, for example, can be kept in separate working sets. They won't get in the way of your current work but will be there in case you need them. To create a new working set, open the Package Explorer's view menu (white triangle in the top-right corner of the view), and choose Select Working Set . Click on New and select the type of projects that the working set will contain (Java , in this case). On the next page, insert the name of the working set, and choose which projects it will contain. Once the working set is created, choose the Selected Working Sets option, and mark your working set. Click on OK , and the Package Explorer will only display the projects inside the working set you've just created. Once your working sets are created, they are listed in the Package Explorer's view menu. Selecting one of them will make it the only working set visible in the Package Explorer. To view more than one working set at once, choose the Select Working Set option and mark the ones you want to show. 
To view the whole workspace again, choose Deselect Workspace in the view menu. You can also view all the working sets with their nested projects by selecting working sets as the top-level element of the Package Explorer view. To do this, navigate to Top Level Elements | Working Sets in the view menu. Although you don't see projects that belong to other working sets when a working set is selected, they are still loaded in your workspace, and therefore utilize resources of your machine. To avoid wasting these resources, you can close unrelated projects by right-clicking on them and selecting Close Project . You can select all the projects in a working set by using the Ctrl + A keyboard shortcut. If you have a big number of projects, but you never work with all of them at the same time (personal/business projects, different clients' projects, and so on), you can also create a specific workspace for each project set. To create a new workspace, navigate to File | Switch Workspace | Other in the menu, enter the folder name of your new workspace, and click on OK . You can choose to copy the current workspace's layout and working sets in the Copy Settings section. Importing a Java project If you are going to work on an existing project, there are a number of different ways you can import it into Eclipse, depending on how you have obtained the project's source code. To open the Import wizard, navigate to File | Import . Let's go through the options under the General category: Archive file : Select this option if the project you are working on already exists in your workspace, and you just want to import an archive file containing new resources to it. The Import wizard will list all the resources inside the archive file so that you can select the ones you wish to import. To select to which project the resources will be imported click on the Browse button. You can also select in which folder the resources are to be included. Click on Finish when you are done. The imported resources will be decompressed and copied into the project's folder. Existing Projects into Workspace : If you want to import a new project, select this option from the Import wizard. If the project's source file has been compressed into an archive file (the .zip, .tar, .jar, or .tgz format), there's no need to decompress it; just mark the Select archive file option on the following page of the wizard, and point to the archive file. If you have already decompressed the code, mark Select root directory and point to the project. The wizard will list all the Eclipse projects found in the folder or archive file. Select the ones you wish to import and click on Finish . You can add the imported projects to a specific working set and choose whether the projects are to be copied into your workspace folder or not. It's highly recommended to do so for both simplicity and portability; you know where all your Eclipse projects are, and it's easy to backup or move all of them to a different machine. File System : Use this wizard if you already have a project in your workspace and want to add new existing resources in your filesystem. On the next page, select the resources you wish to import by checking them. Click on the Browse button to select the project and the folder where the resources will be imported. When you click on the Finish button, the resources will be copied to the project's folder inside your workspace. Preferences : You can import Eclipse preferences files to your workspace by selecting this option. 
Preferences file contains code style and compiler preferences, the list of installed JREs, and the Problems view configurations. You can choose which of these preferences you wish to import from the selected configuration file. Importing a project from Version Control Servers Projects that are stored in Version Control Servers can be imported directly into Eclipse. There's a number of version control softwares, each with its pros and cons, and most of them are supported by Eclipse via plugins. GIT is one of the most used softwares for version control. CVS is the only version control system supported by default. To import a project managed by it, navigate to CVS | Projects from CVS in the Import wizard. Fill in the server information on the following page, and click on Finish . Introducing Java views Eclipse's user interface consists of elements called views. The following sections will introduce the main views related to Java development. The Package Explorer view The Package Explorer view is the default view used to display a project's contents. As the name implies, it uses the package hierarchy of the project to display its classes, regardless of the actual file hierarchy. This view also displays the project's build path. The following screenshot shows how the Package Explorer view looks: The Java Editor view The Java Editor is the Eclipse component that will be used to edit Java source files. It is the main view in the Java perspective and is located in the middle of the screen. The following screenshot shows the Java Editor view: The Java Editor is much more than an ordinary text editor. It contains a number of features that makes it easy for newcomers to start writing Java code and increases the productivity of experienced Java programmers. Let's talk about some of these features. Compiling errors and warnings annotations As you will see in the Building and running section with more details, Eclipse builds your code automatically after every saved modification by default. This allows Eclipse to get the Java Compiler output and mark errors and warnings through the code, making it easier to spot and correct them. Warnings are underlined in yellow and errors in red. Content assist This is probably the most used Java Editor feature both by novice and experienced Java programmers. It allows you to list all the methods callable by a given instance, along with their documentation. This feature will work by default for all Java classes and for the ones in your workspace. To enable it for external libraries, you will have to configure the build path for your project. We'll talk more about build paths further in this article in the Managing the project build path section. To see this feature in action, open a Java Editor, and create a new String instance. String s = new String(); Now add a reference to this String instance, followed by a dot, and press Ctrl + Space bar . You will see a list of all the String() and Object() methods. This is way more practical than searching for the class's API in the Java documentation or memorizing it. The following screenshot shows the content assist feature in action: This list can be filtered by typing the beginning of the method's name after the dot. Let's suppose you want to replace some characters in this String instance. As a novice Java programmer, you are not sure if there's a method for that; and if there is, you are not sure which parameters it receives. It's a fair guess that the method's name probably starts with replace, right? 
So go ahead and type: s.replace When you press Ctrl along with the space bar, you will get a list of all the String() methods whose name starts with replace. By choosing one of them and pressing Enter , the editor completes the code with the rest of the method's name and its parameters. It will even suggest some variables in your code that you might want to use as parameters, as shown in the following screenshot: Content assist will work with all classes in the project's classpath. You can disable content assist's automatic activation by unmarking Enable auto activation inside the Preferences window and navigating to Java | Editor | Content Assist . Code navigation When the project you are working on is big enough, finding a class in the Package Explorer can be a pain. You will frequently find yourself asking, "In which package is that class again?". You can leave the source code of the classes you are working on open in different tabs, but soon enough you will have more open tabs than you would like to have. Eclipse has an easy solution for this. In the toolbar, select Navigate | Open Type . Now, just type in the class's name, and click on OK . If you don't remember the full name of the class, you can use the wildcard characters, ? (matches one character) and * (matches any number of characters). You can also use only the uppercase letters for the CamelCase names (for example, SIOOBE for StringIndexOutOfBoundsException). The shortcut for the Open Type dialog is Ctrl + Shift + T . There's also an equivalent feature for finding and opening resources other than Java classes, such as HTML files, images, and plain text files. The shortcut for the Open Resource dialog is Ctrl + Shift + R . You can also navigate to a class' source file by holding Ctrl and clicking on a reference to that class in the code. To navigate to a method's implementation or definition directly, hold Ctrl and click on the method's call. Another useful feature that makes it easy to browse through your project's source files is the Link With Editor feature in the Package Explorer view, as shown in the following screenshot: By enabling it, the selected resource in the Package Explorer will always be the one that's open in the editor. Using this feature together with OpenType is certainly the easiest way of finding a resource in the Package Explorer. Quick fix Whenever there's an error or warning marker in your code, Eclipse might have some suggestions on how to get rid of it. To open the Quick Fix menu containing the suggestions, place the caret on the marked piece of code related to the error or warning, right-click on it, and choose Quick Fix . You can also use the shortcut by pressing Ctrl + 1 with the caret placed on the marked piece of code. The following screenshot shows the quick fix feature suggesting you to either get rid of the unused variable, create getters and setters for it, or add a SuppressWarnings annotation: Let's see some of the most used quick fixes provided by Eclipse. You can take advantage of these quick fixes to speed up your code writing. You can for example, deliberately call a method that throws an exception without the try/catch block, and use the quick fix to generate it instead of writing the try/catch block yourself. Unhandled exceptions : When a method that throws an exception is called, and the exception is not caught or thrown, Eclipse will mark the call with an error. You can use the quick fix feature to surround the code with a proper try/catch block automatically. 
Just open the Quick Fix menu, and choose Surround with Try/Catch . It will generate a catch block that will then call the printStackTrace() method of the thrown exception. If the method is already inside a try block, you can also choose the Add catch clause to the surrounding try option. If the exception shouldn't be handled in the current method, you can also use the Add throws declaration quick fix. References to nonexisting methods and variables : Eclipse can create a stub for methods referenced through the code that doesn't exist with quick fix. To illustrate this feature's usefulness, let's suppose you are working on a class's code, and you realize that you will need a method that performs some specific operation with two integers, returning another integer value. You can simply use the method, pretending that it exists: int b = 4; int c = 5; int a = performOperation(b,c); The method call will be marked with an error that says performOperation is undefined. To create a stub for this method, place the caret over the method's name, open the Quick Fix menu, and choose create method performOperation(int, int) . A private method will be created with the correct parameters and return type as well as a TODO marker inside it, reminding you that you have to implement the method. You can also use a quick fix to create methods in other classes. Using the same previous example, you can create the performOperation() method in a different class, such as the following: OperationPerformer op = new OperationPerformer(); int a = op.performOperation(b,c); Speaking of classes, quick fix can also create one if you add a call to a non-existing class constructor. Non-existing variables can also be created with quick fix. Like with the method creation, just refer to a variable that still doesn't exist, place the caret over it, and open the Quick Fix menu. You can create the variable either as a local variable, a field, or a parameter. Remove dead code : Unused methods, constructors and fields with private visibility are all marked with warnings. While the quick fix provided for unused methods and constructors is the most evident one (remove the dead code), it's also possible to generate getters and setters for unused private fields with a quick fix. Customizing the editor Like almost everything in Eclipse, you can customize the Java Editor's appearance and behavior. There are plenty of configurations in the Preferences window (Window | Preferences ) that will certainly allow you to tailor the editor to suit your needs. Appearance-related configurations are mostly found in General | Appearance | Colors and Fonts and behavior and feature configurations are mostly under General | Editors | Text Editors . Since there are lots of different categories and configurations, the filter text in the Preferences window might help you find what you want. A short list of the preferences you will most likely want to change is as follows: Colors and fonts : Navigate to General | Appearance . In the Colors and Fonts configuration screen, you can see that options are organized by categories. The ones inside the Basic and Java categories will affect the Java Editor. Enable/Disable spell checking : The Eclipse editor comes with a spellchecker. While in some cases it can be useful, in many others you won't find much use for it. To disable or configure it, navigate to General | Editors | Text Editors | Spelling . 
Annotations : You can edit the way annotations (warnings and errors, among others) are shown in the editor by navigating to General | Editors | Text Editors | Annotations inside the Preferences window. You can change colors, the way annotations are highlighted in the code (underline, squiggly line, box, among others), and whether they are shown in the vertical bar before the code. Show Line Numbers : To show line numbers on the left-hand side of the editor, mark the corresponding checkbox by navigating to General | Editors | Text Editors . Right-clicking on the bar on the editor's left-hand side brings a dialog in which you can also enable/disable line numbers.
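Returning to the Quick fix section earlier in this article, the following Java sketch shows roughly what the two quick fixes described there produce. The class and method names are invented for illustration, and the exact code Eclipse generates can vary slightly between versions.

public class QuickFixExamples {

    public void readFile(java.io.File file) {
        // "Surround with Try/Catch" wraps the offending call and adds a catch
        // block that simply calls printStackTrace() on the caught exception.
        try {
            java.io.FileReader reader = new java.io.FileReader(file);
        } catch (java.io.FileNotFoundException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
    }

    public void useOperation() {
        int b = 4;
        int c = 5;
        // performOperation() did not exist when this call was written; the
        // "create method" quick fix generated the private stub below.
        int a = performOperation(b, c);
    }

    // Stub created by the quick fix, with a TODO marker reminding us to implement it.
    private int performOperation(int i, int j) {
        // TODO Auto-generated method stub
        return 0;
    }
}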