Oracle SOA Suite 11g R1 Developer's Guide

Chapter 1. Introduction to Oracle SOA Suite

Service-Oriented Architecture (SOA) may consist of many interconnected components. As a result, the Oracle SOA Suite is a large piece of software that can initially seem overwhelmingly complex. In this chapter, we will provide a roadmap for understanding the SOA Suite and a reference architecture that shows how to apply SOA principles with the SOA Suite. After a review of the basic principles of SOA, we will look at how the SOA Suite supports those principles through its many different components. Following this journey through the components of the SOA Suite, we will introduce Oracle JDeveloper as the primary development tool used to build applications for deployment into the SOA Suite.

Service-oriented architecture in short

Service-oriented architecture has evolved to allow greater flexibility in adapting the IT infrastructure to satisfy the needs of business. Let's examine what SOA means by looking at each part of its title in turn.

Service

Service is a term that is understood by both business and IT. A service has the following key characteristics:

  • Encapsulation: A service creates a clear delineation between the service provider and the service consumer. It identifies what will be provided.
  • Interface: A service is defined in terms of its inputs and outputs. How the service is provided is of no concern to the consumer, only to the provider. The service is defined by its interface.
  • Contract or service level agreements: There may be quality of service attributes associated with the service, such as performance characteristics, availability constraints, or cost.

The break-out box below uses the example of a laundry service to make the characteristics of a service more concrete. Later, we will map these characteristics onto specific technologies.

Tip

A clean example

Consider a laundry service. The service provider is a laundry company, and the service consumer is a corporation or individual with washing to be done.

The input to the company is a basket of dirty laundry. Additional input parameters may be a request to iron the laundry as well as wash it or to starch the collars. The output is a basket of clean washing with whatever optional, additional services such as starching or ironing were specified. This defines the interface.

Quality of service may specify that the washing must be returned within 24 or 48 hours. Additional quality of service attributes may specify that the service is unavailable from 5PM Friday until 8AM Monday. These service level agreements may be characterized as policies to be applied to the service.

An important property of services is that they can be understood by both business analysts and IT implementers. This leads to the first key benefit of service-oriented architecture.

Note

SOA makes it possible for IT and the business to speak the same language, that is, the language of services.

Services allow us to have a common vocabulary between IT and the business.

Orientation

When we are building our systems, we are looking at them from a service point of view or orientation. This implies that we are oriented or interested in the following:

  • Granularity: The level of the service interface, or the number of interactions required with the service, is typically characterized as coarse-grained or fine-grained.
  • Collaboration: Services may be combined together to create higher level or composite services.
  • Universality: All components can be approached from a service perspective. For example, a business process may also be considered a service that, despite its complexity, provides inputs and outputs.

Thinking of everything as a service leads us to another key benefit of service-oriented architecture, namely composability, which is the ability to compose a service out of other services.

Note

Composing new services out of existing services allows easy reasoning about the availability and performance characteristics of the composite service.

By building composite services out of existing services, we can reduce the amount of effort required to provide new functionality, and we can build something whose availability and scalability characteristics are known in advance. The latter can be derived from the availability and performance characteristics of the component services.

Architecture

Architecture implies a consistent and coherent design approach: it requires us to understand the inter-relationships between components in the design and to ensure consistency of approach. Architecture suggests that we adopt some of the following principles:

  • Consistency: The same challenges should be addressed in a uniform way. For example, the application of security constraints needs to be enforced in the same way across the design. Patterns or proven design approaches can assist with maintaining consistency of design.
  • Reliability: The structures created must be fit for purpose and meet the demands for which they are designed.
  • Extensibility: A design must provide a framework that can be expanded in ways both foreseen and unforeseen. See the break-out-box on extensions.
  • Scalability: The implementation must be capable of being scaled to accommodate increasing load by adding hardware to the solution.

    Tip

    Extending Antony's house

    My wife and I designed our house in England. We built in the ability to convert the loft into extra rooms and also allowed for a conservatory to be added. This added to the cost of the build, but these were foreseen extensions. The costs of actually adding the conservatory and two extra loft rooms were low because the architecture allowed for them. In a similar way, it is relatively easy to architect for foreseen extensions, such as additional related services and processes that must be supported by the business.

    When we wanted to add a playroom and another bathroom, this was more complex and costly because we had not allowed for it in the original architecture. Fortunately, our original design was sufficiently flexible to allow for these additions, but the cost was higher. In a similar way, the measure of the strength of a service-oriented architecture is the way in which it copes with unforeseen demands, such as new types of business process and services that were not foreseen when the architecture was laid down. A well-architected solution will be able to accommodate unexpected extensions at a manageable cost.

A consistent architecture, coupled with an implementation based on SOA standards, gives us another key benefit, that is, inter-operability.

Note

SOA allows us to build more inter-operable systems as it is based on standards agreed by all the major technology vendors.

SOA is not about any specific technology. The principles of service orientation can be applied equally well in assembler as in a high-level language. However, as with all development, it is easiest to use a model that is supported by tools and is both inter-operable and portable across vendors. SOA is widely associated with the web service, or WS-*, standards presided over by groups such as OASIS (http://www.oasis.org). This use of common standards allows SOA to be inter-operable between vendor technology stacks.

Why SOA is different

A few years ago, distributed object technology, in the guise of CORBA and COM+, was going to provide the benefits of reuse. Prior to that, third- and fourth-generation languages such as C++ and Smalltalk, based on object technology, were to provide the same benefit. Even earlier, the same claims were made for structured programming. So why is SOA different?

Terminology

The use of terms such as services and processes allows business and IT to talk about items in the same way, improving communication and reducing the impedance mismatch between the two. The importance of this is greater than it first appears, because it drives IT to build and structure its systems around the business rather than the other way around.

Interoperability

In the past, there have been competing platforms for the latest software development fad. This manifested itself as CORBA versus COM+, Smalltalk versus C++, Pascal versus C. This time around, however, the standards are based not upon the physical implementation, but upon the service interfaces and wire protocols. In addition, these standards are generally text-based to avoid issues around conversion between binary forms. This allows services implemented in C# under Windows to inter-operate with Java or PL/SQL services running on Oracle SOA Suite under Windows, Linux, or Unix. The major players, Oracle, Microsoft, IBM, SAP, and others, have agreed on how to inter-operate with each other. This agreement has always been missing in the past.

Note

WS-I Basic Profile

There is an old IT joke that standards are great because there are so many to choose from! Fortunately, the SOA vendors have recognized this and have collaborated to create a basic profile, a collection of standards that focuses on interoperability. This is known as the WS-I Basic Profile, and it details the key web service standards that all vendors should implement to allow for interoperability. SOA Suite supports this basic profile as well as additional standards.

Extension and evolution

SOA recognizes that there are existing assets in the IT landscape and does not force these to be replaced, preferring instead to encapsulate and later extend these resources. SOA may be viewed as a boundary technology that reverses many of the earlier development trends. Instead of specifying how systems are built at the lowest level, it focuses on how services are described and how they inter-operate in a standards-based world.

Reuse in place

A final major distinguishing feature for SOA is the concept of reuse in place. Most reuse technologies in the past have focused on reuse through libraries, at best sharing a common implementation on a single machine through the use of dynamic link libraries. SOA focuses not only on reuse of the code functionality, but also upon the reuse of existing machine resources to execute that code. When a service is reused, the same physical servers with their associated memory and CPU are shared across a larger client base. This is good from the perspective of providing a consistent location to enforce code changes, security constraints, and logging policies, but it does mean that the performance of existing users may be impacted if care is not taken in how services are reused.

Note

Client responsibility in service contracts

As SOA is about reuse in place of existing machine resources as well as software resources, it is important that part of the service contract specifies the expected usage a client will make of a service. Imposing this constraint on the client is important for efficient sizing of the services being used by the client.

Service Component Architecture (SCA)

We have spoken a lot about service reuse and composing new services out of existing services, but we have yet to indicate how this may be done. The Service Component Architecture (SCA) used by the SOA Suite is a standard that defines how services in a composite application are connected and how a service may interact with other services.

An SCA composite consists of several different parts, which are described in the following sections.

Component

A component represents a piece of business logic. It may be process logic, such as a BPEL process, routing logic, such as a mediator, or some other SOA Suite component. In the next section, we will discuss the components of the SOA Suite. SCA also supports writing custom components in Java or other languages, but we will not cover that in this book.

Service

A service represents the interface provided by a component or by the SCA Assembly itself. This is the interface to be used by clients of the assembly or component. A service that is available from outside the composite is referred to as an External Service.

Reference

A reference is a dependency on a service provided by another component, another SCA Assembly, or by some external entity such as a remote web service. References to services outside the composite are referred to as External References.

Wire

Services and references are joined together by wires. A wire indicates a dependency between components or between a component and an external entity.

Note

It is important to note that wires show dependencies, not flow of control. For example, a Mediator component wired to both a BPEL process and a FileWriteService may call the FileWriteService before or after invoking the BPEL process, or it may not invoke it at all.

Composite.xml

An SCA Assembly is described in a file named composite.xml. The format of this file is defined by the SCA standard and consists of the elements described above.

Properties

The components in an SCA Assembly may have properties associated with them that can be customized as part of the deployment of the assembly. These properties are also described in the composite.xml.
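
To make the parts of an SCA Assembly more concrete, the following is a minimal, illustrative sketch of a composite.xml. The component, service, reference, and file names are invented for the example, and the namespace declarations and many attributes that JDeveloper would normally generate have been omitted, so this is not a deployable file; it simply shows how the elements described above fit together.

    <composite name="OrderComposite">
      <!-- External Service: the interface the composite exposes to clients -->
      <service name="SubmitOrder">
        <interface.wsdl interface="..."/>
      </service>

      <!-- Components containing the business logic -->
      <component name="OrderMediator">
        <implementation.mediator src="OrderMediator.mplan"/>
      </component>
      <component name="OrderProcess">
        <implementation.bpel src="OrderProcess.bpel"/>
      </component>

      <!-- External Reference: a dependency on a service outside the composite -->
      <reference name="FileWriteService">
        <interface.wsdl interface="..."/>
      </reference>

      <!-- Wires: dependencies between the parts above, not flow of control -->
      <wire source="SubmitOrder" target="OrderMediator"/>
      <wire source="OrderMediator" target="OrderProcess"/>
      <wire source="OrderMediator" target="FileWriteService"/>

      <!-- Property: a value that can be customized at deployment time -->
      <property name="outputDirectory">/tmp/orders</property>
    </composite>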

SOA Suite components

SOA Suite has a number of component parts, some of which may be licensed separately.

Services and adapters

The most basic unit of service-oriented architecture is the service. This may be provided directly by a web service-enabled piece of code or it may be exposed by encapsulating an existing resource.

The only way to access a service is through its defined interface. This interface may actually be part of the service or it may be a wrapper that provides a standard-based service interface on top of a more implementation-specific interface. Accessing the service in a consistent fashion isolates the client of the service from any details of its physical implementation.

Services are defined by a specific interface, usually specified in a Web Service Description Language (WSDL) file. A WSDL file specifies the operations supported by the service. Each operation describes the expected format of the input message and, if a message is returned, the format of that message. Services are often surfaced through adapters that take an existing piece of functionality and "adapt" it to the SOA world so that it can interact with other SOA Suite components. An example of an adapter is the file adapter, which allows a file to be read or written. The act of reading or writing the file is encapsulated into a service interface. This service interface can then be used to receive service requests by reading a file or to create service requests by writing a file.
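
As an illustration of how an interface is captured in WSDL, the fragment below sketches a single operation with an input and an output message, using the laundry service from earlier in the chapter. All of the names (LaundryService, cleanLaundry, and the message parts) are invented for the example, and a complete WSDL would also declare bindings and service endpoints.

    <definitions name="LaundryService"
                 targetNamespace="http://example.com/laundry"
                 xmlns="http://schemas.xmlsoap.org/wsdl/"
                 xmlns:tns="http://example.com/laundry"
                 xmlns:xsd="http://www.w3.org/2001/XMLSchema">

      <!-- Messages define the format of the data exchanged -->
      <message name="WashRequest">
        <part name="basket"  type="xsd:string"/>
        <part name="ironing" type="xsd:boolean"/>
      </message>
      <message name="WashResponse">
        <part name="cleanBasket" type="xsd:string"/>
      </message>

      <!-- The portType lists the operations supported by the service -->
      <portType name="LaundryPortType">
        <operation name="cleanLaundry">
          <input  message="tns:WashRequest"/>
          <output message="tns:WashResponse"/>
        </operation>
      </portType>
    </definitions>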

Out of the box, the SOA Suite includes licenses for the following adapters:

  • File adapter
  • FTP adapter
  • Database adapter
  • JMS adapter
  • MQ adapter
  • AQ adapter
  • Socket adapter
  • BAM adapter

The database adapter and the file adapter are explored in more detail in Chapter 3, Service-enabling Existing Systems, while the BAM adapter is discussed in Chapter 9, Building Real-time Dashboards. There is also support for other non-SOAP transports and styles such as plain HTTP, REST, and Java.

Services are the most important part of service-oriented architecture, and in this book, we focus on how to define their interfaces and how to best assemble services together to create composite services with a value beyond the functionality of a single atomic service.

ESB – service abstraction layer

To avoid service location and format dependencies, it is desirable to access services through an Enterprise Service Bus (ESB). This provides a layer of abstraction over the service and allows transformation of data between formats. The ESB is aware of the physical endpoint locations of services and acts to virtualize services.

Services may be viewed as being plugged into the Service Bus.

An Enterprise Service Bus is optimized for routing and transforming service requests between components. By abstracting the physical location of a service, an ESB allows services to be moved to different locations without impacting the clients of those services. The ability of an ESB to transform data from one format to another also allows for changes in service contracts to be accommodated without recoding client services. The Service Bus may also be used to validate that messages conform to interface contracts and to enrich messages by adding additional information to them as part of the message transformation process.

Oracle Service Bus and Oracle Mediator

Note that the SOA Suite contains both the Oracle Service Bus (formerly AquaLogic Service Bus, now known as OSB) and the Oracle Mediator. OSB provides more powerful service abstraction capabilities that will be explored in Chapter 4, Loosely-coupling Services. Beyond simple transformation, it can also perform other functions such as throttling of target services. It is also easier to modify service endpoints in the runtime environment with OSB.

The stated direction by Oracle is for the Oracle Service Bus to be the preferred ESB for interactions outside the SOA Suite. Interactions within the SOA Suite may sometimes be better dealt with by the Oracle Mediator component in the SOA Suite, but we believe that for most cases, the Oracle Service Bus will provide a better solution and so that is what we have focused on within this book. However, in the current release, the Oracle Service Bus only executes on the Oracle WebLogic platform. Therefore, when running SOA Suite on non-Oracle platforms, there are two choices:

  • Use only the Oracle Mediator
  • Run Oracle Service Bus on a WebLogic Server while running the rest of SOA Suite on the non-Oracle platform

Later releases of the SOA Suite will support Oracle Service Bus on non-Oracle platforms such as WebSphere.

Service orchestration – the BPEL process manager

In order to build composite services, that is, services constructed from other services, we need a layer that can orchestrate, or tie together, multiple services into a single larger service. Simple service orchestrations can be done within the Oracle Service Bus, but more complex orchestrations require additional functionality. These service orchestrations may be thought of as processes, some of which are low-level processes and others are high-level business processes.

Business Process Execution Language (BPEL) is the standard way to describe processes in the SOA world, a task often referred to as service orchestration. The BPEL process manager in SOA Suite includes support for the BPEL 1.1 standard, with most constructs from BPEL 2.0 also being supported. BPEL allows multiple services to be linked to each other as part of a single managed process. The processes may be short running (taking seconds and minutes) or long running (taking hours and days).
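
To give a flavour of what such an orchestration looks like, the BPEL fragment below receives a request, invokes another service, and replies to the caller. The process, partner link, operation, and variable names are invented for illustration, and the partner link and variable declarations have been omitted, so this is a sketch rather than a complete process.

    <process name="OrderProcess"
             xmlns="http://docs.oasis-open.org/wsbpel/2.0/process/executable">
      <!-- partnerLinks and variables declarations omitted for brevity -->
      <sequence>
        <!-- Wait for a request from the client and start a new instance -->
        <receive partnerLink="client" operation="submitOrder"
                 variable="orderRequest" createInstance="yes"/>
        <!-- Call another service as part of the same managed process -->
        <invoke partnerLink="CreditCheckService" operation="checkCredit"
                inputVariable="orderRequest" outputVariable="creditResult"/>
        <!-- Return the result to the caller -->
        <reply partnerLink="client" operation="submitOrder"
               variable="creditResult"/>
      </sequence>
    </process>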

The BPEL standard says nothing about how people interact with processes, but the BPEL process manager includes a Human Workflow component that provides support for human interaction with processes.

The BPEL process manager may also be purchased as a standalone component, in which case it ships with the same Human Workflow support and adapters as are included in the SOA Suite.

We explore the BPEL process manager in more detail in Chapter 5, Using BPEL to Build Composite Services and Business Processes and Chapter 14, Error Handling. Human workflow is examined in Chapter 6, Adding in Human Workflow and Chapter 17, Workflow Patterns.

Oracle also packages the BPEL process manager with the Oracle Business Process Management (BPM) Suite. This package includes the former AquaLogic BPM product (acquired when BEA bought Fuego), now known as Oracle BPM. Oracle positions BPEL as a system-centric process engine with support for human workflow, while BPM is positioned as a human-centric process engine with support for system interaction.

Rules

Business decision-making may be viewed as a service within SOA. A rules engine is the physical implementation of this service.

SOA Suite includes a powerful rules engine that allows key business decision logic to be abstracted out of individual services and managed in a single repository.

In Chapter 7, Using Business Rules to Define Decision Points and in Chapter 18, Using Business Rules to Implement Services, we investigate how to use the rules engine.

Security and monitoring

One of the interesting features of SOA is the way in which aspects of a service are themselves a service. Nowhere is this better exemplified than with security. Security is a characteristic of services, yet to implement it effectively requires a centralized policy store coupled with distributed policy enforcement at the service boundaries. The central policy store can be viewed as a service that the infrastructure uses to enforce service security policy.

Enterprise Manager serves as a policy manager for security, providing a centralized service for policy enforcement points to obtain their policies. Policy enforcement points, termed interceptors in SOA Suite 11g, are responsible for applying security policy, ensuring that only requests that comply with the policy are accepted.
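
The policies that enforcement points apply are typically expressed using the WS-Policy family of standards. Purely as an illustration of the idea, the fragment below sketches a policy requiring a WS-Security UsernameToken on incoming requests; it is a hand-written example based on the WS-SecurityPolicy standard rather than one of the named policies shipped with the SOA Suite, which are managed and attached through Enterprise Manager rather than written by hand.

    <wsp:Policy xmlns:wsp="http://www.w3.org/ns/ws-policy"
                xmlns:sp="http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702">
      <!-- Require callers to supply a username token for authentication -->
      <sp:SupportingTokens>
        <wsp:Policy>
          <sp:UsernameToken/>
        </wsp:Policy>
      </sp:SupportingTokens>
    </wsp:Policy>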

Security policy may also be applied through the Service Bus. Although policy management for the Service Bus is currently done in the Service Bus itself rather than in the Enterprise Manager, the stated direction is for Oracle to provide common policy management in a future release.

Applying security policies is covered in Chapter 21, Defining Security and Management Policies.

Active monitoring – BAM

It is important in SOA to track what is happening in real time. Some business processes require such real-time monitoring. Users such as financial traders, risk assessors, and security services may need instant notification of business events that have occurred.

Business Activity Monitoring is part of the SOA Suite and provides a real-time view of processes and services data to end users. BAM is covered in Chapter 9, Building Real-time Dashboards.

Business to Business – B2B

Although we can use adapters to talk to remote systems, we often need additional features to support external services, either as clients or providers. For example, we may need to verify that there is a contract in place before accepting or sending messages to a partner. Management of agreements or contracts is a key additional piece of functionality that is provided by Oracle B2B. B2B can be thought of as a special kind of adapter that, in addition to supporting B2B protocols such as EDIFACT/ANSI X12 or RosettaNet, also supports agreement management. Agreement management allows control over the partners and interfaces used at any given point in time. We will not cover B2B in this book as the B2B space is somewhat at the edge of most SOA deployments.

Complex Event Processing – CEP

As our services execute, they will often generate events. These events can be monitored and processed using the complex event processor. The difference between event and message processing is that messages generally require some action on their own, with little or no additional context. Events, on the other hand, often require us to monitor several of them to spot and respond to trends. For example, we may treat a stock sale as a message when we need to record it and reconcile it with the accounting system. We may also want to treat the stock sale as an event when we wish to monitor the overall market movements in a single stock or in related stocks to decide whether we should buy or sell. The complex event processor allows us to do time-based and series-based analysis of data. We will not cover CEP in this book as it is a complex part of the SOA Suite that requires a complementary but different approach to the other SOA components.

Event delivery network

Even the loose coupling provided by a Service Bus is not always enough. We often wish simply to publish events and let any interested parties be notified of them. A new feature of SOA Suite 11g is the event delivery network, which allows events to be published without the publisher being aware of the target or targets. Subscribers can request to be notified of particular events, filtering them based on event domain, event type, and event content. We cover the event delivery network in Chapter 8, Using Business Events.
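
Events on the event delivery network are described in an event definition (EDL) file. As a rough, illustrative sketch of the idea, the fragment below declares a single business event whose payload is an XML element; the namespaces, event name, and schema file are assumptions for the example, so consult the product documentation for the exact EDL schema.

    <definitions targetNamespace="http://example.com/events"
                 xmlns="http://schemas.oracle.com/events/edl">
      <!-- Import the XML schema that defines the event payload -->
      <schema-import namespace="http://example.com/order" location="Order.xsd"/>
      <!-- A business event that publishers raise and subscribers filter on -->
      <event-definition name="NewOrderReceived">
        <content xmlns:ord="http://example.com/order" element="ord:order"/>
      </event-definition>
    </definitions>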

Services and adapters

The most basic unit of service-oriented architecture is the service. This may be provided directly by a web service-enabled piece of code or it may be exposed by encapsulating an existing resource.

Services and adapters

The only way to access a service is through its defined interface. This interface may actually be part of the service or it may be a wrapper that provides a standard-based service interface on top of a more implementation-specific interface. Accessing the service in a consistent fashion isolates the client of the service from any details of its physical implementation.

Services are defined by a specific interface, usually specified in a Web Service Description Language (WSDL) file. A WSDL file specifies the operations supported by the service. Each operation describes the expected format of the input message and if a message is returned it also describes the format of that message. Services are often surfaced through adapters that take an existing piece of functionality and "adapt" it to the SOA world, so that it can interact with other SOA Suite components. An example of an adapter is the file adapter that allows a file to be read or written to. The act of reading or writing the file is encapsulated into a service interface. This service interface can then be used to receive service requests by reading a file or to create service requests by writing a file.

Out of the box, the SOA Suite includes licenses for the following adapters:

  • File adapter
  • FTP adapter
  • Database adapter
  • JMS adapter
  • MQ adapter
  • AQ adapter
  • Socket adapter
  • BAM adapter

The database adapter and the file adapter are explored in more detail in Chapter 3, Service-enabling Existing Systems, while the BAM adapter is discussed in Chapter 9, Building Real-time Dashboards. There is also support for other non-SOAP transports and styles such as plain HTTP, REST, and Java.

Services are the most important part of service-oriented architecture, and in this book, we focus on how to define their interfaces and how to best assemble services together to create composite services with a value beyond the functionality of a single atomic service.

ESB – service abstraction layer

To avoid service location and format dependencies, it is desirable to access services through an Enterprise Service Bus (ESB). This provides a layer of abstraction over the service and allows transformation of data between formats. The ESB is aware of the physical endpoint locations of services and acts to virtualize services.

ESB – service abstraction layer

Services may be viewed as being plugged into the Service Bus.

An Enterprise Service Bus is optimized for routing and transforming service requests between components. By abstracting the physical location of a service, an ESB allows services to be moved to different locations without impacting the clients of those services. The ability of an ESB to transform data from one format to another also allows for changes in service contracts to be accommodated without recoding client services. The Service Bus may also be used to validate that messages conform to interface contracts and to enrich messages by adding additional information to them as part of the message transformation process.

Oracle Service Bus and Oracle Mediator

Note that the SOA Suite contains both the Oracle Service Bus (formerly AquaLogic Service Bus, now known as OSB) and the Oracle Mediator. OSB provides more powerful service abstraction capabilities that will be explored in Chapter 4, Loosely-coupling Services. Beyond simple transformation, it can also perform other functions such as throttling of target services. It is also easier to modify service endpoints in the runtime environment with OSB.

The stated direction by Oracle is for the Oracle Service Bus to be the preferred ESB for interactions outside the SOA Suite. Interactions within the SOA Suite may sometimes be better dealt with by the Oracle Mediator component in the SOA Suite, but we believe that for most cases, the Oracle Service Bus will provide a better solution and so that is what we have focused on within this book. However, in the current release, the Oracle Service Bus only executes on the Oracle WebLogic platform. Therefore, when running SOA Suite on non-Oracle platforms, there are two choices:

  • Use only the Oracle Mediator
  • Run Oracle Service Bus on a WebLogic Server while running the rest of SOA Suite on the non-Oracle platform

Later releases of the SOA Suite will support Oracle Service Bus on non-Oracle platforms such as WebSphere.

Service orchestration – the BPEL process manager

In order to build composite services, that is, services constructed from other services, we need a layer that can orchestrate, or tie together, multiple services into a single larger service. Simple service orchestrations can be done within the Oracle Service Bus, but more complex orchestrations require additional functionality. These service orchestrations may be thought of as processes, some of which are low-level processes and others are high-level business processes.

Service orchestration – the BPEL process manager

Business Process Execution Language (BPEL) is the standard way to describe processes in the SOA world, a task often referred to as service orchestration. The BPEL process manager in SOA Suite includes support for the BPEL 1.1 standard, with most constructs from BPEL 2.0 also being supported. BPEL allows multiple services to be linked to each other as part of a single managed process. The processes may be short running (taking seconds and minutes) or long running (taking hours and days).

The BPEL standard says nothing about how people interact with it, but BPEL process manager includes a Human Workflow component that provides support for human interaction with processes.

The BPEL process manager may also be purchased as a standalone component, in which case, it ships with the Human Workflow support and the same adapters, as included in the SOA Suite.

We explore the BPEL process manager in more detail in Chapter 5, Using BPEL to Build Composite Services and Business Processes and Chapter 14, Error Handling. Human workflow is examined in Chapter 6, Adding in Human Workflow and Chapter 17, Workflow Patterns.

Oracle also packages the BPEL process manager with the Oracle Business Process Management (BPM) Suite. This package includes the former AquaLogic BPM product (acquired when BEA bought Fuego), now known as Oracle BPM. Oracle positions BPEL as a system-centric process engine with support for human workflow, while BPM is positioned as human-centric process engine with support for system interaction.

Rules

Business decision-making may be viewed as a service within SOA. A rules engine is the physical implementation of this service.

SOA Suite includes a powerful rules engine that allows key business decision logic to be abstracted out of individual services and managed in a single repository.

In Chapter 7, Using Business Rules to Define Decision Points and in Chapter 18, Using Business Rules to Implement Services, we investigate how to use the rules engine.

Security and monitoring

One of the interesting features of SOA is the way in which aspects of a service are themselves a service. Nowhere is this better exemplified than with security. Security is a characteristic of services, yet to implement it effectively requires a centralized policy store coupled with distributed policy enforcement at the service boundaries. The central policy store can be viewed as a service that the infrastructure uses to enforce service security policy.

Enterprise Manager serves as a policy manager for security, providing a centralized service for policy enforcement points to obtain their policies. Policy enforcement points, termed interceptors in SOA Suite 11g, are responsible for applying security policy, ensuring that only requests that comply with the policy are accepted.

Security policy may also be applied through the Service Bus. Although policy management is done in the Service Bus rather than in the Enterprise Manager, the direction is for Oracle to have a common policy management in a future release.

Applying security policies is covered in Chapter 21, Defining Security and Management Policies.

Security and monitoring

Active monitoring – BAM

It is important in SOA to track what is happening in real time. Some business processes require such real-time monitoring. Users such as financial traders, risk assessors, and security services may need instant notification of business events that have occurred.

Business Activity Monitoring is part of the SOA Suite and provides a real-time view of processes and services data to end users. BAM is covered in Chapter 9, Building Real-time Dashboards.

Business to Business – B2B

Although we can use adapters to talk to remote systems, we often need additional features to support external services, either as clients or providers. For example, we may need to verify that there is contract in place before accepting or sending messages to a partner. Management of agreements or contracts is a key additional piece of functionality that is provided by Oracle B2B. B2B can be thought of as a special kind of adapter that, in addition to support for B2B protocols such as EDIFACT/ANSI X12 or RosettaNet, also supports agreement management. Agreement management allows control over the partners and interfaces used at any given point in time. We will not cover B2B in this book as the B2B space is a little at the edge of most SOA deployments.

Complex Event Processing – CEP

As our services execute, we will often generate events. These events can be monitored and processed using the complex event processor. The difference between event and message processing is that messages generally require some action on their own with little or minimal additional context. Events, on the other hand, often require us to monitor several of them to spot and respond to trends. For example, we may treat a stock sale as a message when we need to record it and reconcile it with the accounting system. We may also want to treat the stock sale as an event in which we wish to monitor the overall market movements in a single stock or in related stocks to decide whether we should buy or sell. The complex event processor allows us to do time-based and series-based analysis of data. We will not talk about CEP in this book as it is a complex part of the SOA Suite that requires a complementary but different approach to the other SOA components.

Event delivery network

Even the loose-coupling provided by a Service Bus is not always enough. We often wish to just publish events and let any interested parties be notified of the event. A new feature of SOA Suite 11g is the event delivery network, which allows events to be published without the publisher being aware of the target or targets. Subscribers can request to be notified of particular events, filtering them based on event domain, event type, and event content. We cover the event delivery network in Chapter 8, Using Business Events.

ESB – service abstraction layer

To avoid service location and format dependencies, it is desirable to access services through an Enterprise Service Bus (ESB). This provides a layer of abstraction over the service and allows transformation of data between formats. The ESB is aware of the physical endpoint locations of services and acts to virtualize services.

ESB – service abstraction layer

Services may be viewed as being plugged into the Service Bus.

An Enterprise Service Bus is optimized for routing and transforming service requests between components. By abstracting the physical location of a service, an ESB allows services to be moved to different locations without impacting the clients of those services. The ability of an ESB to transform data from one format to another also allows for changes in service contracts to be accommodated without recoding client services. The Service Bus may also be used to validate that messages conform to interface contracts and to enrich messages by adding additional information to them as part of the message transformation process.

Oracle Service Bus and Oracle Mediator

Note that the SOA Suite contains both the Oracle Service Bus (formerly AquaLogic Service Bus, now known as OSB) and the Oracle Mediator. OSB provides more powerful service abstraction capabilities that will be explored in Chapter 4, Loosely-coupling Services. Beyond simple transformation, it can also perform other functions such as throttling of target services. It is also easier to modify service endpoints in the runtime environment with OSB.

The stated direction by Oracle is for the Oracle Service Bus to be the preferred ESB for interactions outside the SOA Suite. Interactions within the SOA Suite may sometimes be better dealt with by the Oracle Mediator component in the SOA Suite, but we believe that for most cases, the Oracle Service Bus will provide a better solution and so that is what we have focused on within this book. However, in the current release, the Oracle Service Bus only executes on the Oracle WebLogic platform. Therefore, when running SOA Suite on non-Oracle platforms, there are two choices:

  • Use only the Oracle Mediator
  • Run Oracle Service Bus on a WebLogic Server while running the rest of SOA Suite on the non-Oracle platform

Later releases of the SOA Suite will support Oracle Service Bus on non-Oracle platforms such as WebSphere.

Service orchestration – the BPEL process manager

In order to build composite services, that is, services constructed from other services, we need a layer that can orchestrate, or tie together, multiple services into a single larger service. Simple service orchestrations can be done within the Oracle Service Bus, but more complex orchestrations require additional functionality. These service orchestrations may be thought of as processes, some of which are low-level processes and others are high-level business processes.

Service orchestration – the BPEL process manager

Business Process Execution Language (BPEL) is the standard way to describe processes in the SOA world, a task often referred to as service orchestration. The BPEL process manager in SOA Suite includes support for the BPEL 1.1 standard, with most constructs from BPEL 2.0 also being supported. BPEL allows multiple services to be linked to each other as part of a single managed process. The processes may be short running (taking seconds and minutes) or long running (taking hours and days).

The BPEL standard says nothing about how people interact with it, but BPEL process manager includes a Human Workflow component that provides support for human interaction with processes.

The BPEL process manager may also be purchased as a standalone component, in which case, it ships with the Human Workflow support and the same adapters, as included in the SOA Suite.

We explore the BPEL process manager in more detail in Chapter 5, Using BPEL to Build Composite Services and Business Processes and Chapter 14, Error Handling. Human workflow is examined in Chapter 6, Adding in Human Workflow and Chapter 17, Workflow Patterns.

Oracle also packages the BPEL process manager with the Oracle Business Process Management (BPM) Suite. This package includes the former AquaLogic BPM product (acquired when BEA bought Fuego), now known as Oracle BPM. Oracle positions BPEL as a system-centric process engine with support for human workflow, while BPM is positioned as human-centric process engine with support for system interaction.

Rules

Business decision-making may be viewed as a service within SOA. A rules engine is the physical implementation of this service.

SOA Suite includes a powerful rules engine that allows key business decision logic to be abstracted out of individual services and managed in a single repository.

In Chapter 7, Using Business Rules to Define Decision Points and in Chapter 18, Using Business Rules to Implement Services, we investigate how to use the rules engine.

Security and monitoring

One of the interesting features of SOA is the way in which aspects of a service are themselves a service. Nowhere is this better exemplified than with security. Security is a characteristic of services, yet to implement it effectively requires a centralized policy store coupled with distributed policy enforcement at the service boundaries. The central policy store can be viewed as a service that the infrastructure uses to enforce service security policy.

Enterprise Manager serves as a policy manager for security, providing a centralized service for policy enforcement points to obtain their policies. Policy enforcement points, termed interceptors in SOA Suite 11g, are responsible for applying security policy, ensuring that only requests that comply with the policy are accepted.

Security policy may also be applied through the Service Bus. Although policy management is done in the Service Bus rather than in the Enterprise Manager, the direction is for Oracle to have a common policy management in a future release.

Applying security policies is covered in Chapter 21, Defining Security and Management Policies.

Active monitoring – BAM

It is important in SOA to be able to track what is happening in real time, and some business processes demand such monitoring. Users such as financial traders, risk assessors, and security services may need instant notification of business events as they occur.

Business Activity Monitoring is part of the SOA Suite and provides a real-time view of processes and services data to end users. BAM is covered in Chapter 9, Building Real-time Dashboards.

Business to Business – B2B

Although we can use adapters to talk to remote systems, we often need additional features to support external services, either as clients or providers. For example, we may need to verify that a contract is in place before accepting messages from, or sending messages to, a partner. Management of agreements or contracts is a key additional piece of functionality provided by Oracle B2B. B2B can be thought of as a special kind of adapter that, in addition to supporting B2B protocols such as EDIFACT/ANSI X12 or RosettaNet, also supports agreement management. Agreement management allows control over the partners and interfaces used at any given point in time. We will not cover B2B in this book, as the B2B space sits somewhat at the edge of most SOA deployments.

Complex Event Processing – CEP

As our services execute, they will often generate events. These events can be monitored and processed using the complex event processor. The difference between event and message processing is that a message generally requires some action in its own right, with little or no additional context. Events, on the other hand, often require us to monitor several of them to spot and respond to trends. For example, we may treat a stock sale as a message when we need to record it and reconcile it with the accounting system. We may also treat the stock sale as an event when we wish to monitor the overall market movements in a single stock, or in related stocks, to decide whether we should buy or sell. The complex event processor allows us to do time-based and series-based analysis of data. We will not cover CEP in this book, as it is a complex part of the SOA Suite that requires a complementary but different approach to the other SOA components.

Event delivery network

Even the loose-coupling provided by a Service Bus is not always enough. We often wish to just publish events and let any interested parties be notified of the event. A new feature of SOA Suite 11g is the event delivery network, which allows events to be published without the publisher being aware of the target or targets. Subscribers can request to be notified of particular events, filtering them based on event domain, event type, and event content. We cover the event delivery network in Chapter 8, Using Business Events.
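
Events on the event delivery network are declared in an event definition (EDL) file that names each event and ties it to an XML schema element. The following is a rough sketch of such a file; the namespaces, schema location, and event name are invented for illustration.

    <!-- Illustrative event definition (EDL): declares a NewOrder business event -->
    <definitions xmlns="http://schemas.oracle.com/events/edl"
                 targetNamespace="http://xmlns.example.com/events/OrderEvents">
      <schema-import namespace="http://xmlns.example.com/order" location="xsd/Order.xsd"/>
      <event-definition name="NewOrder">
        <content xmlns:ord="http://xmlns.example.com/order" element="ord:order"/>
      </event-definition>
    </definitions>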

Oracle Service Bus and Oracle Mediator

Note that the SOA Suite contains both the Oracle Service Bus (formerly AquaLogic Service Bus, now known as OSB) and the Oracle Mediator. OSB provides more powerful service abstraction capabilities than the Mediator, and these will be explored in Chapter 4, Loosely-coupling Services. Beyond simple transformation, it can also perform other functions such as throttling of target services. It is also easier to modify service endpoints in the runtime environment with OSB.

The stated direction by Oracle is for the Oracle Service Bus to be the preferred ESB for interactions outside the SOA Suite. Interactions within the SOA Suite may sometimes be better dealt with by the Oracle Mediator component in the SOA Suite, but we believe that for most cases, the Oracle Service Bus will provide a better solution and so that is what we have focused on within this book. However, in the current release, the Oracle Service Bus only executes on the Oracle WebLogic platform. Therefore, when running SOA Suite on non-Oracle platforms, there are two choices:

  • Use only the Oracle Mediator
  • Run Oracle Service Bus on a WebLogic Server while running the rest of SOA Suite on the non-Oracle platform

Later releases of the SOA Suite will support Oracle Service Bus on non-Oracle platforms such as WebSphere.

SOA Suite architecture

We will now examine how Oracle SOA Suite provides the services identified previously.

Top level

The SOA Suite is built on top of a Java Enterprise Edition (Java EE) infrastructure. Although SOA Suite is certified with several different Java EE servers, including IBM WebSphere, it will most commonly be used with the Oracle WebLogic server. The Oracle WebLogic Server (WLS) will probably always be the first available Java EE platform for SOA Suite and is the only platform that will be provided bundled with the SOA Suite to simplify installation. For the rest of this book, we will assume that you are running SOA Suite on the Oracle WebLogic server. If there are any significant differences when running on non-Oracle application servers, we will highlight them in the text.

In addition to a Java EE application server, the SOA Suite also requires a database. The SOA Suite is designed to run against any SQL database, but certification for non-Oracle databases has been slow in coming. The database is used to maintain configuration information and also records of runtime interactions. Oracle Database XE can be used with the SOA Suite, but it is not recommended for production deployments as it is not a supported configuration.

Component view

In a previous section, we examined the individual components of the SOA Suite and here we show them in context with the Java EE container and the database. Note that CEP does not run in an application server and OSB runs in a separate container to the other SOA Suite components.

All the services are executed within the context of the Java EE container, even though they may use that container in different ways. BPEL listens for events and updates processes based upon those events. Adapters typically make use of the Java EE Connector Architecture (JCA) to provide connectivity and notifications. Policy interceptors act as filters. Note that the Oracle Service Bus (OSB) is only available when the application server is a WebLogic server.
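
As an example of how an adapter surfaces inside a composite, a reference in composite.xml is typically bound to the adapter through a JCA configuration file, along the lines of the sketch below; the reference, interface, and file names are invented for illustration.

    <!-- Illustrative composite.xml reference bound to a file adapter through JCA -->
    <reference name="WriteOrderFile">
      <interface.wsdl interface="http://xmlns.example.com/writeOrder#wsdl.interface(Write_ptt)"/>
      <binding.jca config="WriteOrderFile_file.jca"/>
    </reference>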

Implementation view

Oracle has put a lot of effort into making SOA Suite consistent in its use of underlying services. A number of lower-level services are reused consistently across components.

A Portability Layer provides an interface between the SOA Suite and the specifics of the Java EE platform that hosts it.

At the lowest level, connectivity services, such as SCA, JCA adapters, JMS, and Web Service Framework, are shared by higher-level components.

A Service Layer exposes higher-level functions. The BPEL process manager is implemented by a combination of a BPEL engine and access to the Human Workflow engine. Rules is another shared service that is available to BPEL or other components.

A recursive example

The SOA Suite architecture is a good example of service-oriented design principles being applied. Common services have been identified and extracted to be shared across many components. The high-level services such as BPEL and ESB share some common services such as transformation and adapter services running on a standard Java EE container.

JDeveloper

Everything we have spoken of so far has been related to the executable or runtime environment. Specialist tools are required to take advantage of this environment. It is possible to manually craft the assemblies and descriptors required to build a SOA Suite application, but it is not a practical proposition. Fortunately, Oracle provides JDeveloper free of charge to allow developers to build SOA Suite applications.

JDeveloper is actually a separate tool, but it has been developed in conjunction with SOA Suite so that virtually all facilities of SOA Suite are accessible through JDeveloper. One exception to this is the Oracle Service Bus, which in the current release does not have support in JDeveloper but instead has a different tool named WebLogic Workspace Studio. Although JDeveloper started life as a Java development tool, many users now never touch the Java side of JDeveloper, doing all their work in the SOA Suite components.

JDeveloper may be characterized as a model-based, wizard-driven development environment. Re-entrant wizards are used to guide the construction of many artifacts of the SOA Suite, including adapters and transformation.

JDeveloper has a consistent view that the code is also the model, so that graphical views are always in synchronization with the underlying code. It is possible to exercise some functionality of SOA Suite using the Eclipse platform, but to get full value out of the SOA Suite it is really necessary to use JDeveloper. The Eclipse platform does, however, provide the basis for the Service Bus designer, the Workspace Studio. There are some aspects of development that may be supported in both tools, but are easier in one than the other.

Other components

We have now touched on all the major components of the SOA Suite. There are, however, a few items that are either of a more limited interest or are outside the SOA Suite, but closely related to it.

Service repository and registry

Oracle has a service repository and registry product that is integrated with the SOA Suite but separate from it. The repository acts as a central store for all SOA artifacts and can be used to help both developers and deployers track dependencies between components, whether deployed or still in development. The repository can publish SOA artifacts such as service definitions and locations to the service registry. The Oracle Service Registry may be used to categorize and index the services created. Users may then browse the registry to locate services. The service registry may also be used as a runtime location service for service endpoints.

BPA Suite

The Oracle BPA Suite is targeted at business process analysts who want a powerful repository-based tool to model their business processes. The BPA Suite is not an easy product to learn, and like all modeling tools, there is a price to pay for the descriptive power available. The fact of interest to SOA Suite developers is the ability for the BPA Suite and SOA Suite to exchange process models. Processes created in the BPA Suite may be exported to the SOA Suite for concrete implementation. Simulation of processes in the BPA Suite may be used as a useful guide for process improvement.

Links between the BPA Suite and the SOA Suite are growing stronger over time, and this provides a valuable bridge between business analysts and IT architects.

The BPM Suite

The Business Process Management Suite is focused on modeling and execution of business processes. As mentioned, it includes BPEL process manager to provide strong system-centric support for business processes, but the primary focus of the Suite is on modeling and executing processes in the BPM designer and BPM server. BPM server and BPEL process manager are converging on a single shared service implementation.

Portals and WebCenter

The SOA Suite has no real end-user interface outside the human workflow service. Frontends may be built using JDeveloper directly or they may be crafted as part of Oracle Portal, Oracle WebCenter, or another Portal or frontend builder. A number of portlets are provided to expose views of SOA Suite to end users through the portal. These are principally related to human workflow, but also include some views onto the BPEL process status. Portals can also take advantage of WSDL interfaces to provide a user interface onto services exposed by the SOA Suite.

Enterprise manager SOA management pack

Oracle's preferred management framework is Oracle Enterprise Manager. This is provided as a base set of functionality with a large number of management packs, which provide additional functionality. The SOA management pack extends Enterprise Manager to provide monitoring and management of artifacts within the SOA Suite.

Summary

As we have seen, there are a lot of components to the SOA Suite, and even though Oracle has done a lot to provide consistent usage patterns, there is still a lot to learn about each component. The rest of this book takes a solution-oriented approach to the SOA Suite rather than a component approach. We will examine the individual components in the context of the role they serve and how they are used to enable service-oriented architecture.

Chapter 2. Writing your First Composite

In this chapter, we are going to provide a hands-on introduction to the core components of the Oracle SOA Suite, namely, the Oracle BPEL Process Manager (or BPEL PM), Mediator, and the Oracle Service Bus (or OSB). We will do this by implementing an Echo service, which is a trivial service that takes a single string as input and then returns the same string as its output.

We will first use JDeveloper to implement and deploy this as a BPEL process in an SCA Assembly. While doing this, we will take the opportunity to give you a high-level tour of JDeveloper in order to familiarize you with its overall layout.

Once we have successfully deployed our first BPEL process, we will use the Enterprise Manager (EM) console to execute a test instance of our process and examine its audit trail.

Next, we will introduce the Mediator component and use JDeveloper to create a Mediator component that fronts our BPEL process. We will deploy this as a new version of our SCA Assembly.

Finally we will introduce the Service Bus, and look at how we can use its web-based console to build and deploy a proxy service on top of our SCA Assembly. Once deployed, we will use the tooling provided by the Service Bus console to test our end-to-end service.
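
Although we will build everything graphically, it can help to have a rough idea of what JDeveloper produces behind the scenes. The heart of an Echo process, expressed directly in BPEL, looks something like the sketch below; the partner link, variable, and element names are illustrative and will differ from what the wizard actually generates.

    <!-- Illustrative Echo process body: receive the input, copy it to the output, reply -->
    <sequence name="main">
      <receive name="receiveInput" partnerLink="echo_client"
               operation="process" variable="inputVariable" createInstance="yes"/>
      <assign name="copyInputToResult">
        <copy>
          <from variable="inputVariable" part="payload" query="/client:process/client:input"/>
          <to variable="outputVariable" part="payload" query="/client:processResponse/client:result"/>
        </copy>
      </assign>
      <reply name="replyOutput" partnerLink="echo_client"
             operation="process" variable="outputVariable"/>
    </sequence>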

Installing SOA Suite

Before creating and running your first service, you will need to download and install the SOA Suite. Oracle SOA Suite 11g deploys on Oracle WebLogic Server 10g R3.

To download the installation guide, go to the support page of Packt Publishing (www.packtpub.com/support). From here, follow the instructions to download a zip file containing the code for the book. Included in the zip will be a PDF document named SoaSuiteInstallationForWeblogic11g.pdf.

This document details the quickest and easiest way to get the SOA Suite up and running and covers the following:

  • Where to download the SOA Suite and any other required components
  • How to install and configure the SOA Suite
  • How to install and run the oBay application, as well as the other code samples that come with this book

Writing your first BPEL process

Ensure that the Oracle SOA Suite has started (as described in the previously mentioned installation guide) and start JDeveloper. When you start JDeveloper for the first time, it will prompt you for a developer role, as shown in the following screenshot:


JDeveloper has a number of different developer roles that limit the technology choices available to the developer. Choose the Default Role to get access to all JDeveloper functionality. This is needed to access the SOA Suite functionality.

After selecting the role, we are offered a Tip of the Day to tell us about a feature of JDeveloper. After dismissing the Tip of the Day, we are presented with a blank JDeveloper workspace.


The top-left-hand window is the Application Navigator, which lists all the applications that we are working on (it is currently empty as we have not yet defined any). Within JDeveloper, an application is a grouping of one or more related projects. A Project is a collection of related components that make up a deployable resource (for example, an SCA Assembly, Java application, web service, and so on).

Within the context of the SOA Suite, each SCA Assembly is defined within its own project, with an application being a collection of related SCA Assemblies.

On the opposite side of the screen from the Application Navigator tab is the Resource Palette, which contains the My Catalogs tab (holding resources for use in composites) and the IDE Connections tab. Clicking on the IDE Connections tab lists the types of connections we can define within JDeveloper. A connection allows us to define and manage links to external resources such as databases, application servers, and rules engines.

Once defined, we can expand a connection to inspect the content of an external resource, which can then be used to create or edit components that utilize the resource. For example, you can use a database connection to create and configure a database adapter to expose a database table as a web service.

Connections also allow us to deploy projects from JDeveloper to an external resource. If you haven't done so already, then you will need to define a connection to the application server (as described in the installation guide) because we will need this to deploy our SCA Assemblies from within JDeveloper.

The connection to the application server is used to connect to the management interfaces in the target container. We can use it to browse deployed applications, change the status of deployed composites, or as we will do here, deploy new composites to our container.

The main window within JDeveloper is used to edit the artifact that we are currently working on (for example, BPEL Process, XSLT Transformation, Java code, and so on). The top of this window contains a tab for each resource we have open, allowing you to quickly switch between them.

At the moment, the only artifact that we have opened is the Start Page, which provides links to various documents on JDeveloper.

The bottom-left-hand corner contains the Structure window. The content of this depends on the resource we are currently working on.

Creating an application

Within JDeveloper, an application is the main container for our work. It consists of a directory where all our application projects will be created.

So, before we can create our Echo SCA Assembly, we must create the application to which it will belong. Within the Applications Navigator tab in JDeveloper, click on the New Application… item.

This will launch the Create SOA Application dialog, as shown in the preceding screenshot.

Give the application an appropriate name like SoaSuiteBook11gChapter2.

We can specify the top-level directory in which we want to create our applications. By default, JDeveloper will set it to the following:

<JDEVELOPER_HOME>\mywork\<Application Name>

Normally, we would specify a directory that's not under JDEVELOPER_HOME, as this makes it simpler to upgrade to future releases of JDeveloper.

In addition, you can specify an Application Template. For SOA projects, select SOA Application template, and click on the Next button.


Next, JDeveloper will prompt us for the details of a new SOA project.

Creating an SOA project

We provide a name for our project such as EchoComposite and select the technologies we desire to be available in the project. In this case, we leave the default SOA technology selected. The project will be created in a directory that, by default, has the same name as the project and is located under the application directory. These settings can be changed.


Clicking on Next will give us the opportunity to configure our new composite by selecting some initial components. Select Composite With BPEL to create a new Assembly with a BPEL process, as shown in the next screenshot:


SOA project composite templates

We have a number of different templates available to us. Apart from the Empty Composite template, they all populate the composite with an initial component. This may be a BPEL component, a Business Rule component, a Human Task, or a Mediator component. The Composite From Oracle BPA Blueprint is used to import a process from the Oracle BPA Suite and generate it as a BPEL component within the composite.

It is possible to create an Empty Composite and then add the components directly to the composite, so if you choose the wrong template and start working with it, you can always enhance it by adding more components. Even the Empty Composite is not really empty, as it includes all the initial files you need to start building your own composite.

Creating a BPEL process

Clicking Finish will launch the Create BPEL Process wizard, as shown in the following screenshot:


Replace the default name with something sensible such as EchoProcess, select a template of the type Synchronous BPEL Process, and click OK. JDeveloper will create a skeleton BPEL process and a corresponding WSDL that describes the web service implemented by our process. This process will be wrapped in an SCA Assembly.

Note

BPEL process templates cover the different ways in which a client may interact with the process:

  • Define Service Later: Just the process definition; used when we want complete control over the types of interfaces the process exposes. We can think of this as an empty BPEL process template.
  • Asynchronous BPEL Process: Used when we send a one-way message to a process, and later on the process sends a one-way message back to the caller. This type of interaction is good for processes that run for a long time.
  • Synchronous BPEL Process: A request/reply interaction style. The client sends a request message and then blocks, waiting for the process to provide a reply. This type of interaction is good for processes that need to return an immediate result.
  • One Way BPEL Process: Simply receives a one-way input message; no reply is expected. This is useful when we initiate some interaction that will in turn initiate a number of other activities.
  • Base on a WSDL: Creates a BPEL process that implements a specific interface already defined in WSDL.
  • Subscribe to Events: Creates a BPEL process that is activated when a specific event is generated by the Event Delivery Network (see Chapter 8, Using Business Events).

If we look at the process that JDeveloper has created (as shown in the following screenshot), we can see that in the center is the process itself, which contains the activities to be carried out. At the moment, it just contains an initial activity for receiving a request and a corresponding activity for sending a response.


On either side of the process we have swim lanes containing Partner Links, which represent either the caller of our process (as is the case with the echoprocess_client partner link) or services that our BPEL process calls out to. At the moment the right-hand swim lane is empty, as we haven't defined any external references for use within our BPEL process. Notice also that we don't currently have any content between receiving the call and replying; our process is empty and does nothing.

The Component Palette window (to the right of our process window in the preceding screenshot) lists all the BPEL Activities and Components that we can use within our process. To use any of these, we have to simply drag-and-drop them onto the appropriate place within our process.

If you click on the BPEL Services drop-down, you also have the option of selecting services which we use whenever we need to call out to an external system.

Getting back to our skeleton process, we can see that it consists of two activities: receiveInput and replyOutput. In addition, it has two variables, inputVariable and outputVariable, which were created as part of our skeleton process.

The first activity is used to receive the initial request from the client invoking our BPEL process; when this request is received it will populate the variable inputVariable with the content of the request.

The last activity is used to send a response back to the client, and the content of this response will contain the content of outputVariable.

For the purpose of our simple EchoProcess we just need to copy the content of the input variable to the output variable.
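If you switch to the source view of the process, the skeleton that JDeveloper has generated looks broadly like the following sketch. This is an illustrative outline only; the exact namespaces, message type names, and attribute values are assumptions and will reflect the names you chose in the wizard:

<process name="EchoProcess"
         targetNamespace="http://xmlns.oracle.com/EchoComposite/EchoProcess"
         xmlns="http://schemas.xmlsoap.org/ws/2003/03/business-process/"
         xmlns:client="http://xmlns.oracle.com/EchoComposite/EchoProcess">

  <!-- The partner link representing the caller of our process -->
  <partnerLinks>
    <partnerLink name="echoprocess_client" partnerLinkType="client:EchoProcess"
                 myRole="EchoProcessProvider"/>
  </partnerLinks>

  <!-- The request and response variables created by the wizard -->
  <variables>
    <variable name="inputVariable" messageType="client:EchoProcessRequestMessage"/>
    <variable name="outputVariable" messageType="client:EchoProcessResponseMessage"/>
  </variables>

  <sequence name="main">
    <!-- receiveInput: receive the request and populate inputVariable -->
    <receive name="receiveInput" partnerLink="echoprocess_client"
             portType="client:EchoProcess" operation="process"
             variable="inputVariable" createInstance="yes"/>

    <!-- replyOutput: return the content of outputVariable to the caller -->
    <reply name="replyOutput" partnerLink="echoprocess_client"
           portType="client:EchoProcess" operation="process"
           variable="outputVariable"/>
  </sequence>
</process>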

Assigning values to variables

In BPEL, the <assign> activity is used to update the values of variables with new data. The <assign> activity typically consists of one or more copy operations. Each copy consists of a target (the variable that you wish to assign a value to) and a source (which can be either another variable or an XPath expression).

To insert an Assign activity, drag one from the Component Palette on to our BPEL process at the point just after the receiveInput activity, as shown in the following screenshot:


To configure the Assign activity, double-click on it to open up its configuration window. Click on the green cross to access a menu and select Copy Operation…, as shown in the next screenshot:


This will present us with the Create Copy Operation window, as shown in the following screenshot:


On the left-hand side, we specify the From variable, that is, where we want to copy from. For our process, we want to copy the content of our input variable to our output variable. So expand inputVariable and select /client:process/client:input, as shown in the preceding screenshot.

On the right-hand side, we specify the To variable, that is, where we want to copy to. So expand outputVariable and select /client:processResponse/client:result.

Once you've done this, click OK and then OK again to close the Assign window.
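Behind the scenes, this adds an <assign> activity to the BPEL source between the receive and the reply. Assuming the default client namespace prefix and the default message part name of payload (both assumptions on our part), the generated activity will look something like the following:

<assign name="Assign_1">
  <copy>
    <!-- Source: the input element of the request message -->
    <from variable="inputVariable" part="payload"
          query="/client:process/client:input"/>
    <!-- Target: the result element of the response message -->
    <to variable="outputVariable" part="payload"
        query="/client:processResponse/client:result"/>
  </copy>
</assign>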

Deploying the process

This completes our process, so click on the Save All icon (the fourth icon along, in the top-left-hand corner of JDeveloper) to save our work.

Note

As a BPEL project is made up of multiple files, we typically use Save All to ensure that all modifications are updated at the same time.

Our process is now ready to be deployed. Before doing this, make sure the SOA Suite is running and that within JDeveloper we have defined an Application Server connection (as described in the installation guide).

To deploy the process, right-click on our EchoComposite project and then select Deploy | EchoComposite | to MyApplicationServerConnection.


This will bring up the SOA Deployment Configuration Dialog. This dialog allows us to specify the target servers onto which we wish to deploy the composite. We may also specify a Revision ID for the composite to differentiate it from other deployed versions of the composite. If a revision with the same ID already exists, then it may be replaced by specifying the Overwrite any existing composites with the same revision ID option.


Clicking OK will begin the build and deployment of the composite. JDeveloper will open up a window below our process containing five tabs: Messages, Feedback, BPEL, Deployment, and SOA, to which it outputs the status of the deployment process.

During the build, the SOA tab will indicate whether the build was successful. Assuming it was, an Authorization Request window will pop up requesting credentials for the application server.


On completion of the build process, the Deployment tab should state Successfully deployed archive…., as shown in the following screenshot:


If you don't get this message, then check the log windows for details of the error and fix it accordingly.

Testing the BPEL process

Now that our process has been deployed, the next step is to run it. A simple way to do this is to initiate a test instance using the Enterprise Manager (EM) console, which is the web-based management console for SOA Suite.

To access the EM console, open up a browser and enter the following URL:

http://<hostname>:<port>/em
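For example, on a default single-server developer installation, where the Admin Server listens on its standard port of 7001, the URL would typically be:

http://localhost:7001/em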

This will bring up the login screen for the EM console. Log in as weblogic. This will take us to the EM console dashboard, as shown in the following screenshot:


The Dashboard provides us with a summary report on the Fusion Middleware domain. On the left-hand side we have a list of management areas and on the right we have summaries of application deployments, including our EchoComposite under the SOA tab.

From here, click on the composite name, that is, EchoComposite. This will take us to the Dashboard screen for our composite. From here we can see the number of completed and currently executing composite instances.


At the top of the Dashboard there is a Test button that allows us to execute a composite test. Pressing this button brings up the Test Web Service page, as shown in the following screenshot:


When we created our process, JDeveloper automatically created a WSDL file containing a single operation (that is, process). However, it's quite common to define processes that have multiple operations, as we will see later on in the book.

The Operation drop-down list allows us to specify which operation we want to invoke. In our case, it's automatically defaulted to process.

When you select the operation to invoke, the console will generate an HTML form with a field for each element in the message payload of the operation (as defined by the WSDL for the process). Here we can enter into each field the value that we want to submit.

For operations with large message payloads, it can be simpler to just enter the XML source. If you select XML View from the drop-down list, the console will replace the form with a free-text area containing a skeleton XML fragment into which we can insert the required values.

To execute a test instance of our composite, enter some text in the input field and click Test Web Service. This will cause the console to generate a SOAP message and use it to invoke our Echo process.

Upon successful execution of the process, our test page will be updated to show the result which displays the response returned by our process. Here we can see that the result element contains our original input string, as shown in the following screenshot:
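Under the covers, the test page simply builds a SOAP request and submits it to our composite's endpoint. For our Echo process, the request and response will look something like the following sketch (the namespace shown is an assumption based on the names JDeveloper generates and will differ if you used different application or process names):

<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <!-- The request built from the values entered on the test page -->
    <ns1:process xmlns:ns1="http://xmlns.oracle.com/EchoComposite/EchoProcess">
      <ns1:input>Hello SOA Suite</ns1:input>
    </ns1:process>
  </soap:Body>
</soap:Envelope>

<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <!-- The response; the result element echoes the original input -->
    <ns1:processResponse xmlns:ns1="http://xmlns.oracle.com/EchoComposite/EchoProcess">
      <ns1:result>Hello SOA Suite</ns1:result>
    </ns1:processResponse>
  </soap:Body>
</soap:Envelope>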


If we expand the SOA and soa-infra items on the left-hand side of the page, we will arrive back at the dashboard for the EchoComposite. Clicking on a completed instance will give us a summary of the composite. From here we can see the components that make up our composite. In this case, the composite consists of a single BPEL process.


Clicking on the BPEL process takes us to an audit record of the instance. We can expand the tree view to see details of individual operations like the message sent by replyOutput.


Clicking on the Flow tab will display a graphical representation of the activities within the BPEL process.


Clicking on any of the activities in the audit trail will pop up a window displaying details of the actions performed by that activity. In the following screenshot, we can see details of the message sent by the replyOutput activity:


This completes development of our first BPEL process. The next step is to call it via the Mediator. This will give us the option of transforming the input into the format we desire and of routing to different components based on the input content.

Adding a Mediator

By selecting the composite.xml tab in JDeveloper, we can see the outline of the Assembly that we have created for the BPEL process. We can add a Mediator to this by dragging it from the Component Palette.


Dragging the Mediator Component will cause a dialog to be displayed requesting a Name and Template for the Mediator.


If we select the Define Interface Later template, then we can click OK to add a Mediator to our Assembly. Defining the interface later will allow us to define the interface by wiring it to a service. Note that the types of interface templates are the same as the ones we saw for our BPEL process.


We want to have the Mediator use the same interface as the BPEL process. To rewire the composite to use a Mediator, we first delete the line joining the EchoProcess in the Exposed Services swimlane to the BPEL process by right-clicking on the line and selecting Delete.


We can now wire the EchoProcess service to the input of the Mediator by clicking on the chevron in the top-right corner of the exposed service and dragging it onto the connection point on the left-hand side of the Mediator.


Now wire the Mediator to the BPEL process by dragging the yellow arrow on the Mediator onto the blue chevron on the BPEL process.


We have now configured the Mediator to accept the same interface as the BPEL process and wired the Mediator to forward all messages onto the BPEL process. The default behavior of the Mediator, if it has no explicit rules, is to route the input request to the outbound request and then route the response, if any, from the target to the client.
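If you switch to the source view of composite.xml at this point, the wiring is reflected as an exposed service, two components, and the wires between them. The following is a simplified, hand-written sketch rather than the exact file JDeveloper produces; the component names and the source/target URIs of the wires depend on the service and reference names generated for your composite:

<composite name="EchoComposite" xmlns="http://xmlns.oracle.com/sca/1.0">

  <!-- The service exposed to external callers -->
  <service name="echoprocess_client_ep">
    <interface.wsdl interface="http://xmlns.oracle.com/EchoComposite/EchoProcess#wsdl.interface(EchoProcess)"/>
  </service>

  <!-- The Mediator component -->
  <component name="EchoMediator">
    <implementation.mediator src="EchoMediator.mplan"/>
  </component>

  <!-- The BPEL component -->
  <component name="EchoProcess">
    <implementation.bpel src="EchoProcess.bpel"/>
  </component>

  <!-- Wires: exposed service -> Mediator -> BPEL process -->
  <wire>
    <source.uri>echoprocess_client_ep</source.uri>
    <target.uri>EchoMediator/EchoMediator</target.uri>
  </wire>
  <wire>
    <source.uri>EchoMediator/EchoProcess</source.uri>
    <target.uri>EchoProcess/echoprocess_client</target.uri>
  </wire>
</composite>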

We can now deploy and test the Assembly containing the Mediator in the same way that we deployed and tested the Assembly containing the BPEL process.

Using the Service Bus

In preparation for building the proxy service, we will need the URL for the WSDL of our composite. To obtain this, from the EM Dashboard, click on the EchoComposite Assembly, and then on the connector icon to the right of the Settings button. This will display a link for the WSDL location and Endpoint, as shown in the following screenshot:


If you click on this link, the EM console will open a window showing details of the WSDL. Make a note of the WSDL location as we will need this in a moment.
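The WSDL location follows a predictable pattern. A typical value, assuming the default partition, the default SOA managed server port of 8001, and the generated service name (all of which may differ on your installation), would look like:

http://localhost:8001/soa-infra/services/default/EchoComposite/echoprocess_client_ep?WSDL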


SOA project composite templates

We have a number of different templates available to us. Apart from the Empty Composite template, they all populate the composite with an initial component. This may be a BPEL component, a Business Rule component, a Human Task, or a Mediator component. The Composite From Oracle BPA Blueprint is used to import a process from the Oracle BPA Suite and generate it as a BPEL component within the composite.

It is possible to create an Empty Composite and then add the components directly to the composite, so if you choose the wrong template and start working with it, you can always enhance it by adding more components. Even the Empty Composite is not really empty, as it includes all the initial files you need to start building your own composite.

Creating a BPEL process

Clicking Finish will launch the Create BPEL Process wizard, as shown in the following screenshot:

Creating a BPEL process

Replace the process with a sensible Name like EchoProcess and select a template of the type Synchronous BPEL Process and click OK. JDeveloper will create a skeleton BPEL Process and a corresponding WSDL that describes the web service implemented by our process. This process will be wrapped in an SCA Assembly.

Note

BPEL process templates cover the different ways in which a client may interact with the process. A Define Service Later template is just the process definition and will be used when we want to have complete control over the types of interfaces the process exposes, we can think of this as an empty BPEL process template. An Asynchronous BPEL Process template is used when we send a one-way message to a process, and then later on we send a one-way message from the process to the caller. This type of interaction is good for processes that run for a long time. A Synchronous BPEL Process is one in which we have a request/reply interaction style. The client sends in a request message and then blocks waiting for the process to provide a reply. This type of interaction is good for processes that need to return an immediate result. A One Way BPEL Process simply receives a one-way input message but no reply is expected. This is useful when we initiate some interaction that will initiate a number of other activities. We may also create a BPEL process that implements a specific interface defined in WSDL by using the Base on a WSDL template. Finally, we may have a BPEL process that is activated when a specific event is generated by the Event Delivery Network (see Chapter 8, Using Business Events) using the Subscribe to Events template.

If we look at the process that JDeveloper has created (as shown in the following screenshot), we can see that in the center is the process itself, which contains the activities to be carried out. At the moment, it just contains an initial activity for receiving a request and a corresponding activity for sending a response.

Creating a BPEL process

Either side of the process we have a swim lane containing Partner Links that represent either the caller of our process, as is the case with the echoprocess_client partner links, or services that our BPEL process calls out to. At the moment this is empty as we haven't defined any external references that we use within our BPEL process. Notice also that we don't currently have any content between receiving the call and replying; our process is empty and does nothing.

The Component Palette window (to the right of our process window in the preceding screenshot) lists all the BPEL Activities and Components that we can use within our process. To use any of these, we have to simply drag-and-drop them onto the appropriate place within our process.

If you click on the BPEL Services drop-down, you also have the option of selecting services which we use whenever we need to call out to an external system.

Getting back to our skeleton process, we can see that it consists of two activities; receiveInput and replyOutput. In addition it has two variables, inputVariable and outputVariable, which were created as part of our skeleton process.

The first activity is used to receive the initial request from the client invoking our BPEL process; when this request is received it will populate the variable inputVariable with the content of the request.

The last activity is used to send a response back to the client, and the content of this response will contain the content of outputVariable.

For the purpose of our simple EchoProcess we just need to copy the content of the input variable to the output variable.

Assigning values to variables

In BPEL, the <assign> activity is used to update the values of variables with new data. The <assign> activity typically consists of one or more copy operations. Each copy consists of a target variable, that is, the variable that you wish to assign a value to and a source, which can either be another variable or an XPath expression.

To insert an Assign activity, drag one from the Component Palette on to our BPEL process at the point just after the receiveInput activity, as shown in the following screenshot:

Assigning values to variables

To configure the Assign activity, double-click on it to open up its configuration window. Click on the green cross to access a menu and select Copy Operation…, as shown in the next screenshot:

Assigning values to variables

This will present us with the Create Copy Operation window, as shown in the following screenshot:

Assigning values to variables

On the left-hand side, we specify the From variable, that is, where we want to copy from. For our process, we want to copy the content of our input variable to our output variable. So expand inputVariable and select /client:process/client:input, as shown in the preceding screenshot.

On the right-hand side, we specify the To variable, that is, where we want to copy to. So expand outputVariable and select /client:processResponse/client:result.

Once you've done this, click OK and then OK again to close the Assign window.

Deploying the process

This completes our process, so click on the Save All icon (the fourth icon along, in the top-left-hand corner of JDeveloper) to save our work.

Note

As a BPEL project is made up of multiple files, we typically use Save All to ensure that all modifications are updated at the same time.

Our process is now ready to be deployed. Before doing this, make sure the SOA Suite is running and that within JDeveloper we have defined an Application Server connection (as described in the installation guide).

To deploy the process, right-click on our EchoComposite project and then select Deploy | EchoComposite | to MyApplicationServerConnection.

Deploying the process

This will bring up the SOA Deployment Configuration Dialog. This dialog allows us to specify the target servers onto which we wish to deploy the composite. We may also specify a Revision ID for the composite to differentiate it from other deployed versions of the composite. If a revision with the same ID already exists, then it may be replaced by specifying the Overwrite any existing composites with the same revision ID option.

Deploying the process

Clicking OK will begin the build and deployment of the composite. JDeveloper will open up a window below our process containing five tabs: Messages, Feedback, BPEL, Deployment, and SOA, to which it outputs the status of the deployment process.

During the build, the SOA tab will indicate whether the build was successful. Assuming it was, an Authorization Request window will then pop up requesting credentials for the application server.

Deploying the process

On completion of the build process, the Deployment tab should state Successfully deployed archive…, as shown in the following screenshot:

Deploying the process

If you don't get this message, then check the log windows for details of the error and fix it accordingly.

Testing the BPEL process

Now that our process has been deployed, the next step is to run it. A simple way to do this is to initiate a test instance using the Enterprise Manager (EM) console, which is the web-based management console for SOA Suite.

To access the EM console, open up a browser and enter the following URL:

http://<hostname>:<port>/em

This will bring up the login screen for the EM console. Log in as weblogic. This will take us to the EM console dashboard, as shown in the following screenshot:

Testing the BPEL process

The Dashboard provides us with a summary report on the Fusion Middleware domain. On the left-hand side we have a list of management areas and on the right we have summaries of application deployments, including our EchoComposite under the SOA tab.

From here, click on the composite name, that is, EchoComposite. This will take us to the Dashboard screen for our composite. From here we can see the number of completed and currently executing composite instances.

Testing the BPEL process

At the top of the Dashboard there is a Test button that allows us to execute a composite test. Pressing this button brings up the Test Web Service page, as shown in the following screenshot:

Testing the BPEL process

When we created our process, JDeveloper automatically created a WSDL file containing a single operation (that is, process). However, it's quite common to define processes that have multiple operations, as we will see later on in the book.
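
The corresponding portType in the generated WSDL looks roughly like the following sketch; the portType and message names shown here are illustrative, as the generated names depend on the process name:

<wsdl:portType name="EchoProcess">
  <wsdl:operation name="process">
    <wsdl:input message="client:EchoProcessRequestMessage"/>
    <wsdl:output message="client:EchoProcessResponseMessage"/>
  </wsdl:operation>
</wsdl:portType>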

The Operation drop-down list allows us to specify which operation we want to invoke. In our case, it's automatically defaulted to process.

When you select the operation to invoke, the console will generate an HTML form with a field for each element in the message payload of the operation (as defined by the WSDL for the process). Here we can enter into each field the value that we want to submit.

For operations with large message payloads, it can be simpler to just enter the XML source. If you select XML View from the drop-down list, the console will replace the form with a free-text area containing a skeleton XML fragment into which we can insert the required values.

To execute a test instance of our composite, enter some text in the input field and click Test Web Service. This will cause the console to generate a SOAP message and use it to invoke our Echo process.
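
The SOAP request generated by the console will look something like the following sketch. The target namespace is the one JDeveloper generated for our composite (it also appears in the WSDL import shown later in this chapter); the input text is just an example value:

<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <ns1:process xmlns:ns1="http://xmlns.oracle.com/SOASuiteBook11gChapter2_jws/EchoComposite/EchoProcess">
      <ns1:input>Hello Echo</ns1:input>
    </ns1:process>
  </soap:Body>
</soap:Envelope>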

Upon successful execution of the process, our test page will be updated to show the response returned by our process. Here we can see that the result element contains our original input string, as shown in the following screenshot:

Testing the BPEL process

If we expand the SOA and soa-infra items on the left-hand side of the page, we can navigate back to the dashboard for the EchoComposite. Clicking on a completed instance will give us a summary of that composite instance. From here we can see the components that make up our composite; in this case, the composite consists of a single BPEL process.

Testing the BPEL process

Clicking on the BPEL process takes us to an audit record of the instance. We can expand the tree view to see details of individual operations like the message sent by replyOutput.

Testing the BPEL process

Clicking on the Flow tab will display a graphical representation of the activities within the BPEL process.

Testing the BPEL process

Clicking on any of the activities in the audit trail will pop up a window displaying details of the actions performed by that activity. In the following screenshot, we can see details of the message sent by the replyOutput activity:

Testing the BPEL process

This completes development of our first BPEL process. The next step is to call it via the Mediator. This will give us the option of transforming the input into the format we desire and of routing to different components based on the input content.

Adding a Mediator

By selecting the composite.xml tab in JDeveloper, we can see the outline of the Assembly that we have created for the BPEL process. We can add a Mediator to this by dragging it from the Component Palette.

Adding a Mediator

Dragging the Mediator Component will cause a dialog to be displayed requesting a Name and Template for the Mediator.

Adding a Mediator

If we select the Define Interface Later template, then we can click OK to add a Mediator to our Assembly. Defining the interface later will allow us to define the interface by wiring it to a service. Note that the types of interface templates are the same as the ones we saw for our BPEL process.

Adding a Mediator

We want to have the Mediator use the same interface as the BPEL process. To rewire the composite to use a Mediator, we first delete the line joining the EchoProcess in the Exposed Services swimlane to the BPEL process by right-clicking on the line and selecting Delete.

Adding a Mediator

We can now wire the EchoProcess service to the input of the Mediator by clicking on the chevron in the top-right corner of the exposed service and dragging it onto the connection point on the left-hand side of the Mediator.

Adding a Mediator

Now wire the Mediator to the BPEL process by dragging the yellow arrow on the Mediator onto the blue chevron on the BPEL process.

Adding a Mediator

We have now configured the Mediator to accept the same interface as the BPEL process and wired the Mediator to forward all messages onto the BPEL process. The default behavior of the Mediator, if no explicit routing rules are defined, is to route the inbound request to the wired target and then route the response, if any, from the target back to the client.
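
In the composite.xml source, this wiring is captured with SCA wire elements. A rough sketch is shown below; the Mediator component name (EchoMediator) and the reference names in the URIs are illustrative, as the actual values depend on the names JDeveloper generated for your composite:

<!-- The exposed service wired to the Mediator -->
<wire>
  <source.uri>echoprocess_client_ep</source.uri>
  <target.uri>EchoMediator/EchoProcess</target.uri>
</wire>

<!-- The Mediator wired on to the BPEL component -->
<wire>
  <source.uri>EchoMediator/EchoProcess</source.uri>
  <target.uri>EchoProcess/echoprocess_client</target.uri>
</wire>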

We can now deploy and test the Assembly containing the Mediator in the same way that we deployed and tested the Assembly containing the BPEL process.

Using the Service Bus

In preparation for this, we will need the URL for the WSDL of our process. To obtain it, from the EM Dashboard, click on the EchoComposite Assembly and then on the connector icon to the right of the Settings button. This will display a link for the WSDL location and Endpoint, as shown in the following screenshot:

Using the Service Bus

If you click on this link, the EM console will open a window showing details of the WSDL. Make a note of the WSDL location as we will need this in a moment.

Writing our first proxy service

Rather than allowing clients to invoke our Echo process directly, best practice dictates that we provide access to this service via an intermediary, or proxy, whose role is to route the request to the actual endpoint. This results in a far more loosely-coupled solution, which is key if we are to realise many of the benefits of SOA.

In this section, we are going to use the Oracle Service Bus (OSB) to implement a proxy Echo service, which sits between the client and our echo BPEL process, as illustrated in the following diagram:

Writing our first proxy service

It is useful to examine the preceding scenario to understand how messages are processed by OSB. The Service Bus defines two types of services: a proxy service and a business service.

The proxy service is an intermediary service that sits between the client and the actual end service being invoked (our BPEL process in the preceding example).

On receipt of a request, the proxy service may perform a number of actions, such as validating, transforming, or enriching it, before routing it to the appropriate business service.

Within the OSB, a business service is a definition of an external service for which OSB is a client. This defines how OSB invokes the external service and includes details such as the service interface, transport, security, and so on.

In the preceding example, we have defined an Echo Proxy Service that routes messages to the Echo Business Service, which then invokes our Echo BPEL Process. The response from the Echo BPEL Process follows the reverse path with the proxy service returning the final response to the original client.

Writing the Echo proxy service

Ensure that the Oracle Service Bus has started, and then open the Service Bus Console. You can do this from the Programs menu in Windows by selecting Oracle Weblogic | User Projects | OSB | Oracle Service Bus Admin Console.

Alternatively, open up a browser and enter the following URL:

http://<hostname>:<port>/sbconsole

Where hostname represents the name of the machine on which OSB is running and port represents the port number. So if OSB is running on your local machine using the default port, enter the following URL in your browser:

http://localhost:7001/sbconsole

This will bring up the login screen for the Service Bus Console; log in as weblogic. By default, the OSB Console will display the Dashboard view, which provides a summary of the overall health of the system.

Writing the Echo proxy service

Looking at the console, we can see that it is divided into three distinct areas: the Change Center in the top-left-hand corner, which we will cover in a moment; the navigation bar on the left, below the Change Center, which we use to navigate our way around the console; and the main window, in which the selected page is displayed.

The navigation bar is divided into the following sections: Operations, Resource Browser, Project Explorer, Security Configuration, and System Administration. Clicking on the appropriate section will expand that part of the navigation bar and allow you to access any of its sub-sections and their corresponding menu items.

Clicking on any of the menu items will display the appropriate page within the main window of the console. In the previous diagram we looked at the Dashboard view, under Monitoring, which is part of the Operations section.

Creating a Change Session

Before we can create a new project, or make any configuration changes through the console, we must create a new change session. A change session allows us to specify a series of changes as a single unit of work; these changes won't come into effect until we activate the session. At any point we can discard our changes, which will cause OSB to roll back those changes and exit our session.

While making changes through a session, other users can also be making changes under separate sessions. If users create changes that conflict with changes in other sessions, then the Service Bus will flag that as a conflict in the Change Center and neither user will be able to commit their changes until those conflicts have been resolved.

To create a new change session, click on Create in the Change Center. This will update the Change Center to indicate that we are in a session and the user who owns that session. As we are logged in as weblogic, it will be updated to show weblogic session, as shown in the following screenshot:

Creating a Change Session

In addition, you will see that the options available to us in the Change Center have changed to Activate, Discard, and Exit.

Creating a project

Before we can create our Echo proxy service, we must create an OSB project in which to place our resources. Typical resources include WSDLs, XSD schemas, XSLT, and XQuery, as well as proxy and business services.

Resources can be created directly within our top-level project folder, or we can define a folder structure within our project into which we can place our resources.

Note

From within the same OSB domain, you can reference any resource regardless of which project it is included in.

The Project Explorer is where we create and manage all of this. Click on the Project Explorer section within the navigation bar. This will bring up the Projects view, as shown in the following screenshot:

Creating a project

Here we can see a list of all projects defined in OSB, which at this stage just includes the default project. From here we can also create a new project. Enter a project name, for example Chapter02, as shown in the preceding screenshot, and then click Add Project. This will create a new project and update our list of projects to reflect this.

Creating the project folders

Clicking on the project name will take us to the Project View, as shown in the screenshot on the next page.

We can see that this splits into three sections. The first section provides some basic details about the project including any references to or from artifacts in other projects as well as an optional description.

The second section lists any folders within the current project folder and provides the option to create additional folders within the project.

The final section lists any resources contained within this folder and provides the option to create additional resources.

Creating the project folders

We are going to create the project folders BusinessService, ProxyService, and WSDL, into which we will place our various resources. To create the first of these, enter BusinessService as the folder name in the Folders section (circled in the preceding screenshot) and click on Add Folder. This will create a new folder and update the list of folders to reflect this.

Creating the project folders

Once created, follow the same process to create the remaining folders; your list of folders will now look as shown in the preceding screenshot.

Creating service WSDL

Before we can create either our proxy or business service, we need to define the WSDL on which the service will be based. For this, we are going to use the WSDL of our Echo BPEL process that we created earlier in this chapter.

Before importing the WSDL, we need to ensure that we are in the right folder within our project. To do this, click on the WSDL folder in our Folders list. On doing this the project view will be updated to show us the content of this folder, which is currently empty. In addition, the project summary section of our project view will be updated to show that we are now within the WSDL folder, as circled in the following screenshot:

Creating service WSDL
Creating service WSDL

If we look at the Project Explorer in the navigation bar, we can see that it has been updated to show our location within the projects structure. By clicking on any project or folder in here, the console will take us to the project view for that location.

Importing a WSDL

To import the Echo WSDL into our project, click on the drop-down list next to Create Resource in the Resources section, and select Resources from URL, as shown in the following screenshot:

Importing a WSDL

This will bring up the page for loading resources from a URL, which is shown in the following screenshot:

Importing a WSDL

Note

A WSDL can also be imported from the filesystem by selecting the WSDL option from the Create Resource drop-down list.

In the URL/Path field, enter the URL for our Echo WSDL. This is the WSDL location that we made a note of earlier from the EM console, and it should look similar to the following:

http://<hostname>:<port>/soa-infra/services/default/EchoComposite/echoprocess_client_ep?WSDL

Enter an appropriate value for the Resource Name (for example, Echo), select a Resource Type of WSDL, and click on Next.

This will bring up the Load Resources window, which will list the resources that OSB is ready to import.

Importing a WSDL

You will notice that in addition to the actual WSDL file, it will also list the Echo.xsd. This is because the Echo.wsdl contains the following import statement:

<wsdl:types>
  <schema>
    <import namespace="http://xmlns.oracle.com/SOASuiteBook11gChapter2_jws/EchoComposite/EchoProcess"
            schemaLocation="http://axreynol-us.us.oracle.com:8001/soa-infra/services/default/EchoComposite/echoprocess_client_ep?XSD=xsd/EchoProcess.xsd"/>
  </schema>
</wsdl:types>

This imports the Echo XML schema, which defines the input and output messages of our Echo service. This schema was automatically generated by JDeveloper when we created our Echo process. In order to use our WSDL, we will need to import this schema as well. Because of the unusual URL for the XML Schema, the Service Bus generates its own unique name for the schema.
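For reference, a minimal sketch of what this generated schema typically contains is shown below. The element names and target namespace here are inferred from the import statement above and from the input and result elements used later in this chapter, so treat this as illustrative rather than an exact copy of your EchoProcess.xsd:

<schema xmlns="http://www.w3.org/2001/XMLSchema"
        targetNamespace="http://xmlns.oracle.com/SOASuiteBook11gChapter2_jws/EchoComposite/EchoProcess"
        elementFormDefault="qualified">
  <!-- Request element: carries the string to be echoed -->
  <element name="process">
    <complexType>
      <sequence>
        <element name="input" type="string"/>
      </sequence>
    </complexType>
  </element>
  <!-- Response element: carries the echoed string back to the caller -->
  <element name="processResponse">
    <complexType>
      <sequence>
        <element name="result" type="string"/>
      </sequence>
    </complexType>
  </element>
</schema>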

Click Import; the OSB console will confirm that the resources have been successfully imported and provide the option to Load Another resource, as shown in the following screenshot:

Importing a WSDL

Click on the WSDL folder within the Project Explorer to return to its project view. This will be updated to include our imported resources, as shown in the following screenshot:

Importing a WSDL

Creating our business service

We are now ready to create our Echo business service. Click on the BusinessService folder within the Project Explorer to go to the project view for this folder.

In the Resources section, click on the drop-down list next to Create Resource and select Business Service. This will bring up the General Configuration page for creating a business service, as shown in the following screenshot:

Creating our business service

Here we specify the name of our business service (that is, EchoBS) and an optional description. Next, we need to specify the Service Type; as we are creating our service based on a WSDL, select WSDL Web Service.

Next, click the Browse button. This will launch a window from where we can select the WSDL for the Business Service, as shown on the next page:

Creating our business service

By default, this window will list all WSDL resources that are defined to the Service Bus, though you can restrict the list by defining the search criteria.

In our case, we just have the Echo WSDL, so we click on this. We will now be prompted to select a WSDL definition, as shown in the following screenshot:

Creating our business service

Here we need to select which binding or port definition we wish to use for our Business Service; select EchoProcess_pt and click Submit. Bindings provide an abstract interface and do not specify the physical endpoint, so they require additional configuration later. Ports have a physical endpoint and so require no additional configuration.
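To make the distinction concrete, the relevant parts of a WSDL look roughly like the following sketch. The binding and service names are illustrative and will not match your generated Echo WSDL exactly:

<!-- A binding is abstract: it fixes the protocol and message style, but carries no address -->
<wsdl:binding name="EchoBinding" type="tns:EchoProcess">
  <soap:binding style="document"
                transport="http://schemas.xmlsoap.org/soap/http"/>
  <!-- operation bindings omitted for brevity -->
</wsdl:binding>

<!-- A port is concrete: it attaches a physical endpoint to a binding -->
<wsdl:service name="echoprocess_client_ep">
  <wsdl:port name="EchoProcess_pt" binding="tns:EchoBinding">
    <soap:address location="http://<hostname>:<port>/soa-infra/services/default/EchoComposite/echoprocess_client_ep"/>
  </wsdl:port>
</wsdl:service>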

This will return us to the General Configuration screen with the Service Type updated to show the details of the selected WSDL and port, as shown in the following screenshot:

Creating our business service

Then, click on Next. This will take us to the Transport Configuration page, as shown in the following screenshot. Here we need to specify how the business service is to invoke an external service.

Creating our business service

As we based our business service on the EchoProcess_pt port definition, the transport settings are already preconfigured, based on the content of our WSDL file.

Note

If we had based our business service on the EchoBinding definition, then the transport configuration would still have been prepopulated except for the Endpoint URI, which we would need to add manually.
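For our Echo composite, based on the schemaLocation we saw in the imported WSDL, the Endpoint URI would look something like the following (adjust the hostname and port for your own installation):

http://<hostname>:<port>/soa-infra/services/default/EchoComposite/echoprocess_client_ep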

From here, click on Last. This will take us to a summary page of our business service. Click on Save to create our business service.

This will return us to the project view on the BusinessService folder and display the message The Service EchoBS was created successfully. If we examine the Resources section, we should see that it now contains our newly created business service.

Creating our business service

Creating our proxy service

We are now ready to create our Echo proxy service. Click on the ProxyService folder within the Project Explorer to go to the project view for this folder.

In the Resources section, click on the drop-down list next to Create Resource and select Proxy Service. This will bring up the General Configuration page for creating a proxy service, as shown in the following screenshot:

Creating our proxy service

You will notice that this looks very similar to the general configuration screen for a business service. So as before, enter the name of our service (that is, Echo) and an optional description.

Next, we need to specify the Service Type. We could do this in exactly the same way as we did for our business service and base it on the Echo WSDL. However, this time we are going to base it on our EchoBS business service. We will see why in a moment.

For the Service Type, select Business Service, as shown in the screenshot, and click Browse…. This will launch the Select Business Service window from where we can search for and select the business service that we want to base our proxy service on.

Creating our proxy service

By default, this window will list all the business services defined to the Service Bus, though you can restrict the list by defining the search criteria.

In our case, we just have the EchoBS, so select this, and click on Submit. This will return us to the General Configuration screen with Service Type updated, as shown in the following screenshot:

Creating our proxy service

From here, click Last. This will take us to a summary page of our proxy service. Click Save to create our proxy service.

This will return us to the project view on the ProxyService folder and display the message The Service Echo was created successfully.

If we examine the Resources section of our project view, we should see that it now contains our newly created proxy service, as shown in the following screenshot:

Creating our proxy service

Creating message flow

Once we have created our proxy service, the next step is to specify how it should handle requests. This is defined in the message flow of the proxy service.

The message flow defines the actions that the proxy service should perform when a request is received such as validating the payload, transforming, or enriching it before routing it to the appropriate business service.

Within the Resources section of our project view, click on the Edit Message Flow icon, as circled in the preceding image. This will take us to the Edit Message Flow window, where we can view and edit the message flow of our proxy service, as shown in the following screenshot:

Creating message flow

Looking at this, we can see that Echo already invokes the route node RouteTo_EchoBS.

Click on this and select Edit Route (as shown in the preceding screenshot). This will take us to the Edit Stage Configuration window, as shown in the following screenshot:

Creating message flow

Here we can see that it's already configured to route requests to the EchoBS business service.

Normally, when we create a proxy service we have to specify the message flow from scratch. However, when we created our Echo proxy service we based it on the EchoBS business service (as opposed to a WSDL). Because of this, the Service Bus has automatically configured the message flow to route requests to EchoBS.

As a result, our message flow is already predefined for us, so click Cancel, and then Cancel again to return to our project view.

Activating the Echo proxy service

We now have a completed proxy service; all that remains is to commit our work. Within the Change Center, click Activate.

Activating the Echo proxy service

This will bring up the Activate Session screen, as shown in the following screenshot:

Activating the Echo proxy service

Before activating a session, it's good practice to give a description of the changes that we've made, just in case we need to roll them back later. So enter an appropriate description and then click on Submit, as shown in the preceding screenshot.

Assuming everything is okay, this will activate our changes, and the console will be updated to list our configuration changes, as shown in the following screenshot:

Activating the Echo proxy service

If you make a mistake and want to undo the changes you have activated, then you can click on the undo icon (circled in the preceding screenshot), and if you change your mind, you can revert the undo.

OSB allows you to undo any of your previous sessions as long as it doesn't result in an error in the runtime configuration of the Service Bus.

Testing our proxy service

All that's left is to test our proxy service. A simple way to do this is to initiate a test instance using the Service Bus test console.

To do this, we need to navigate back to the definition of our proxy service. Rather than doing this via the Project Explorer, we will use the Resource Browser, which provides a way to view all resources based on their type.

Click on the Resource Browser section within the navigation bar. By default, it will list all proxy services defined to the Service Bus, as shown in the following screenshot:

Testing our proxy service

We can then filter this list further by specifying the appropriate search criteria.

Click on the Launch Test Console icon for the Echo proxy service (circled in the preceding screenshot). This will launch the test console shown on the next page.

The Available Operations drop-down list allows us to specify which operation we want to invoke. In our case, it's automatically defaulted to process.

By default, the options Direct Call and Include Tracing are selected within the Test Configuration section; keep these selected as they enable us to trace the state of a message as it passes through the proxy service.

The Request Document section allows us to specify the SOAP Header and the Payload for our service. By default, these will contain a skeleton XML fragment based on the WSDL definition of the selected operation with default values for each field.

To execute a test instance of our service, modify the text in the <echo:input> element, as we have in the following screenshot, and click Execute. This will cause the console to generate a request message and use it to invoke our Echo proxy service.
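The request payload will be along the lines of the following sketch; the echo namespace prefix, the wrapping element name, and the test string shown here are illustrative:

<echo:process xmlns:echo="http://xmlns.oracle.com/SOASuiteBook11gChapter2_jws/EchoComposite/EchoProcess">
  <!-- The value of the input element is what our Echo process will send back -->
  <echo:input>Hello Service Bus</echo:input>
</echo:process>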

Testing our proxy service

Upon successful execution of the proxy, the test console will be updated to show the response returned. Here we can see that the result element contains our original input string, as shown in the following screenshot:
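The response payload should simply echo the same string back, roughly as follows (again, the exact element names depend on the generated WSDL):

<echo:processResponse xmlns:echo="http://xmlns.oracle.com/SOASuiteBook11gChapter2_jws/EchoComposite/EchoProcess">
  <echo:result>Hello Service Bus</echo:result>
</echo:processResponse>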

Testing our proxy service

We can examine the state of our message as it passed through the proxy service by expanding the Invocation Trace, as we have in the following screenshot:

Testing our proxy service

In addition, if you log back into the EM console, you should be able to see the composite instance that was invoked by the Service Bus.

Here we can see a list of all projects defined in OSB, which at this stage just includes the default project. From here we can also create a new project. Enter a project name, for example Chapter02, as shown in the preceding screenshot, and then click Add Project. This will create a new project and update our list of projects to reflect this.

Creating the project folders

Click on the project name will take us to the Project View, as shown in the screenshot on the next page.

We can see that this splits into three sections. The first section provides some basic details about the project including any references to or from artifacts in other projects as well as an optional description.

The second section lists any folders within the current project folder and provides the option to create additional folders within the project.

The final section lists any resource contained within this folder and provides the option to create additional resource.

Creating the project folders

We are going to create the project folders BusinessService, ProxyService, and WSDL, into which we will place our various resources. To create the first of these, in the Folders section, enter BusinessService as the folder name (circled in the preceding screenshot) and click on Add Folder. This will create a new folder and updates the list of folders to reflect this.

Creating the project folders

Once created, follow the same process to create the remaining folders; your list of folders will now look as shown in the preceding screenshot.

Creating service WSDL

Before we can create either our proxy or business service, we need to define the WSDL on which the service will be based. For this, we are going to use the WSDL of our Echo BPEL process that we created earlier in this chapter.

Before importing the WSDL, we need to ensure that we are in the right folder within our project. To do this, click on the WSDL folder in our Folders list. On doing this the project view will be updated to show us the content of this folder, which is currently empty. In addition, the project summary section of our project view will be updated to show that we are now within the WSDL folder, as circled in the following screenshot:

Creating service WSDL
Creating service WSDL

If we look at the Project Explorer in the navigation bar, we can see that it has been updated to show our location within the projects structure. By clicking on any project or folder in here, the console will take us to the project view for that location.

Importing a WSDL

To import the Echo WSDL into our project, click on the drop-down list next to Create Resource in the Resources section, and select Resources from URL, as shown in the following screenshot:

Importing a WSDL

This will bring up the page for loading resources from a URL, which is shown in the following screenshot:

Importing a WSDL

Note

A WSDL can also be imported from the filesystem by selecting the WSDL option from the Create Resource drop-down list.

In the URL/Path, enter the URL for our Echo WSDL. This is the WSDL location we made a note of earlier (in the WSDL tab for the Echo process in the BPEL console) and should look like the following:

http://<hostname>:<port>/orabpel/default/Echo/1.0/Echo?wsdl

Enter an appropriate value for the Resource Name(for example Echo), select a Resource Type as WSDL, and click on Next.

This will bring up the Load Resources window, which will list the resources that OSB is ready to import.

Importing a WSDL

You will notice that in addition to the actual WSDL file, it will also list the Echo.xsd. This is because the Echo.wsdl contains the following import statement:

<wsdl:types>
  <schema>
    <import namespace=
            "http://xmlns.oracle.com/SOASuiteBook11gChapter2_jws
/EchoComposite/EchoProcess"
            schemaLocation=
            "http://axreynol-us.us.oracle.com:8001/soa-infra/services/default/EchoComposite/echoprocess_client_ep?XSD=xsd/EchoProcess.xsd"/>
  </schema>
</wsdl:types>

This imports the Echo XML schema, which defines the input and output message of our Echo service. This schema was automatically generated by JDeveloper when we created our Echo process. In order to use our WSDL, we will need to import this schema as well. Because of the unusual URL for the XML Schema, the Service Bus generates its own unique name for the schema.

Click Import, the OSB console will confirm that the resources have been successfully imported and provide the option to Load Another resource, as shown in the following screenshot:

Importing a WSDL

Click on the WSDL folder within the project explorer to return to its project view. This will be updated to include our imported resources, as shown in the following screenshot:

Importing a WSDL

Creating our business service

We are now ready to create our Echo business service. Click on the Business Service folder within the Project Explorer to go to the project view for this folder.

In the Resources section, click on the drop-down list next to Create Resource and select Business Service. This will bring up the General Configuration page for creating a business service, as shown in the following screenshot:

Creating our business service

Here we specify the name of our business service (that is, EchoBS) and an optional description. Next we need to specify the Service Type, as we are creating our service based on a WSDL select WSDL Web Service.

Next, click the Browse button. This will launch a window from where we can select the WSDL for the Business Service, as shown on the next page:

Creating our business service

By default, this window will list all WSDL resources that are defined to the Service Bus, though you can restrict the list by defining the search criteria.

In our case, we just have the Echo WSDL, so we click on this. We will now be prompted to select a WSDL definition, as shown in the following screenshot:

Creating our business service

Here we need to select which binding or port definition we wish to use for our Business Service, select EchoProcess_pt and click Submit. Bindings provide an abstract interface and do not specify the physical endpoint, requiring additional configuration later. Ports have a physical endpoint and so require no additional configuration.

This will return us to the General Configuration screen with the Service Type updated to show the details of the selected WSDL and port, as shown in the following screenshot:

Creating our business service

Then, click on Next. This will take us to the Transport Configuration page, as shown in the following screenshot. Here we need to specify how the business service is to invoke an external service.

Creating our business service

As we based our business service on the EchoPort definition, the transport settings are already preconfigured, based on the content of our WSDL file.

Note

If we had based our business service on the EchoBinding definition, then the transport configuration would still have been prepopulated except for the Endpoint URI, which we would need to add manually.

From here, click on Last. This will take us to a summary page of our business service. Click on Save to create our business service.

This will return us to the project view on the Business Service folder and display the message The Service EchoBS was created successfully. If we examine the Resources section, we should see that it now contains our newly created business service.

Creating our business service

Creating our proxy service

We are now ready to create our Echo proxy service. Click on the Proxy Service folder within the Project Explorer to go to the project view for this folder.

In the Resources section, click on the drop-down list next to Create Resource and select Proxy Service. This will bring up the General Configuration page for creating a proxy service, as shown in the following screenshot:

Creating our proxy service

You will notice that this looks very similar to the general configuration screen for a business service. So as before, enter the name of our service (that is, Echo) and an optional description.

Next, we need to specify the Service Type. We could do this in exactly the same way as we did for our business service and base it on the Echo WSDL. However, this time we are going to base it on our EchoBS business service. We will see why in a moment.

For the Service Type, select Business Service, as shown in the screenshot, and click Browse…. This will launch the Select Business Service window from where we can search for and select the business service that we want to base our proxy service on.

Creating our proxy service

By default, this window will list all the business services defined to the Service Bus, though you can restrict the list by defining the search criteria.

In our case, we just have the EchoBS, so select this, and click on Submit. This will return us to the General Configuration screen with Service Type updated, as shown in the following screenshot:

Creating our proxy service

From here, click Last. This will take us to a summary page of our proxy service. Click Save to create our proxy service.

This will return us to the project view on the Proxy Service folder and display the message The Service Echo was created successfully.

If we examine the Resources section of our project view, we should see that it now contains our newly created proxy service as shown in the following screenshot:

Creating our proxy service

Creating message flow

Once we have created our proxy service, the next step is to specify how it should handle requests. This is defined in the message flow of the proxy service.

The message flow defines the actions that the proxy service should perform when a request is received such as validating the payload, transforming, or enriching it before routing it to the appropriate business service.

Within the resource section of our project view, click on the Edit Message Flow icon, as circled in the preceding image. This will take us to the Edit Message Flow window, where we can view and edit the message flow of our proxy service, as shown in the following screenshot:

Creating message flow

Looking at this, we can see that Echo already invokes the route node RouteTo_EchoBS.

Click on this and select Edit Route(as shown in the preceding screenshot). This will take us to the Edit Stage Configuration window, as shown in the following screenshot:

Creating message flow

Here we can see that it's already configured to route requests to the EchoBS business service.

Normally, when we create a proxy service we have to specify the message flow from scratch. However, when we created our Echo proxy service we based it on the EchoBS business service (as opposed to a WSDL). Because of this, the Service Bus has automatically configured the message flow to route requests to EchoBS.

As a result, our message flow is already predefined for us, so click Cancel, and then Cancel again to return to our project view.

Activating the Echo proxy service

We now have a completed proxy service; all that remains is to commit our work. Within the Change Center click Activate.

Activating the Echo proxy service

This will bring up the Activate Session, as shown in the following screenshot:

Activating the Echo proxy service

Before activating a session, it's good practice to give a description of the changes that we've made, just in case we need to roll them back later. So enter an appropriate description and then click on Submit, as shown in the preceding screenshot:

Assuming everything is okay, this will activate our changes, and the console will be updated to list our configuration changes, as shown in the following screenshot:

Activating the Echo proxy service

If you make a mistake and want to undo the changes you have activated, then you can click on the undo icon (circled in the preceding screenshot), and if you change your mind, you can revert the undo.

OSB allows you to undo any of your previous sessions as long as it doesn't result in an error in the runtime configuration of the Service Bus.

Testing our proxy service

All that's left is to test our proxy service. A simple way to do this is to initiate a test instance using the Service Bus test console.

To do this, we need to navigate back to the definition of our proxy service, rather than do this via the Project Explorer. We will use the Resource Browser. This provides a way to view all resources based on their type.

Click on the Resource Browser section within the navigation bar. By default, it will list all proxy services defined to the Service Bus, as shown in the following screenshot:

Testing our proxy service

We can then filter this list further by specifying the appropriate search criteria.

Click on the Launch Test Console icon for the Echo proxy service (circled in the preceding screenshot). This will launch the test console shown on the next page.

The Available Operations drop-down list allows us to specify which operation we want to invoke. In our case, it's automatically defaulted to process.

By default, the options Direct Call and Include Tracing are selected within the Test Configuration section; keep these selected as they enable us to trace the state of a message as it passes through the proxy service.

The Request Document section allows us to specify the SOAP Header and the Payload for our service. By default, these will contain a skeleton XML fragment based on the WSDL definition of the selected operation with default values for each field.

To execute a test instance of our service, modify the text in the <echo:input> element, as we have in the following screen shot, and click Execute. This will cause the console to generate a request message and use it to invoke our Echo proxy service.

Testing our proxy service

Upon successful execution of the proxy, the test console will be updated to show the response returned. Here we can see that the result element contains our original initial input string, as shown in the following screenshot:

Testing our proxy service

We can examine the state of our message as it passed through the proxy service by expanding the Invocation Trace, as we have in the following screenshot:

Testing our proxy service

In addition, if you log back into the EM console, you should be able to see the Assembly instance that was invoked by the Service Bus.

Creating a Change Session

Before we can create a new project, or make any configuration changes through the console, we must create a new change session. A Change Session allows us to specify a series of changes as a single unit of work. These changes won't come into effect until we activate a session. At any point we can discard our changes, which will cause OSB to roll back those changes and exit our session.

While making changes through a session, other users can also be making changes under separate sessions. If users create changes that conflict with changes in other sessions, then the Service Bus will flag that as a conflict in the Change Center and neither user will be able to commit their changes until those conflicts have been resolved.

To create a new change session, click on Create in the Change Center. This will update the Change Center to indicate that we are in a session and the user who owns that session. As we are logged in as weblogic, it will be updated to show weblogic session, as shown in the following screenshot:

Creating a Change Session

In addition, you will see that the options available to us in the Change Center have changed to Activate, Discard, and Exit.

Creating a project

Before we can create our Echo proxy service, we must create an OSB project in which to place our resources. Typical resources include WSDL, XSD schemas, XSLT, and XQuery as well as Proxy and Business Services.

Resources can be created directly within our top-level project folder, or we can define a folder structure within our project into which we can place our resources.

Note

From within the same OSB domain, you can reference any resource regardless of which project it is included in.

The Project Explorer is where we create and manage all of this. Click on the Project Explorer section within the navigation bar. This will bring up the Projects view, as shown in the following screenshot:

Creating a project

Here we can see a list of all projects defined in OSB, which at this stage just includes the default project. From here we can also create a new project. Enter a project name, for example Chapter02, as shown in the preceding screenshot, and then click Add Project. This will create a new project and update our list of projects to reflect this.

Creating the project folders

Click on the project name will take us to the Project View, as shown in the screenshot on the next page.

We can see that this splits into three sections. The first section provides some basic details about the project including any references to or from artifacts in other projects as well as an optional description.

The second section lists any folders within the current project folder and provides the option to create additional folders within the project.

The final section lists any resource contained within this folder and provides the option to create additional resource.

Creating the project folders

We are going to create the project folders BusinessService, ProxyService, and WSDL, into which we will place our various resources. To create the first of these, in the Folders section, enter BusinessService as the folder name (circled in the preceding screenshot) and click on Add Folder. This will create a new folder and updates the list of folders to reflect this.

Creating the project folders

Once created, follow the same process to create the remaining folders; your list of folders will now look as shown in the preceding screenshot.

Creating service WSDL

Before we can create either our proxy or business service, we need to define the WSDL on which the service will be based. For this, we are going to use the WSDL of our Echo BPEL process that we created earlier in this chapter.

Before importing the WSDL, we need to ensure that we are in the right folder within our project. To do this, click on the WSDL folder in our Folders list. On doing this the project view will be updated to show us the content of this folder, which is currently empty. In addition, the project summary section of our project view will be updated to show that we are now within the WSDL folder, as circled in the following screenshot:

Creating service WSDL
Creating service WSDL

If we look at the Project Explorer in the navigation bar, we can see that it has been updated to show our location within the projects structure. By clicking on any project or folder in here, the console will take us to the project view for that location.

Importing a WSDL

To import the Echo WSDL into our project, click on the drop-down list next to Create Resource in the Resources section, and select Resources from URL, as shown in the following screenshot:

Importing a WSDL

This will bring up the page for loading resources from a URL, which is shown in the following screenshot:

Importing a WSDL

Note

A WSDL can also be imported from the filesystem by selecting the WSDL option from the Create Resource drop-down list.

In the URL/Path, enter the URL for our Echo WSDL. This is the WSDL location we made a note of earlier (in the WSDL tab for the Echo process in the BPEL console) and should look like the following:

http://<hostname>:<port>/orabpel/default/Echo/1.0/Echo?wsdl

Enter an appropriate value for the Resource Name(for example Echo), select a Resource Type as WSDL, and click on Next.

This will bring up the Load Resources window, which will list the resources that OSB is ready to import.

Importing a WSDL

You will notice that in addition to the actual WSDL file, it will also list the Echo.xsd. This is because the Echo.wsdl contains the following import statement:

<wsdl:types>
  <schema>
    <import namespace=
            "http://xmlns.oracle.com/SOASuiteBook11gChapter2_jws
/EchoComposite/EchoProcess"
            schemaLocation=
            "http://axreynol-us.us.oracle.com:8001/soa-infra/services/default/EchoComposite/echoprocess_client_ep?XSD=xsd/EchoProcess.xsd"/>
  </schema>
</wsdl:types>

This imports the Echo XML schema, which defines the input and output message of our Echo service. This schema was automatically generated by JDeveloper when we created our Echo process. In order to use our WSDL, we will need to import this schema as well. Because of the unusual URL for the XML Schema, the Service Bus generates its own unique name for the schema.

Click Import, the OSB console will confirm that the resources have been successfully imported and provide the option to Load Another resource, as shown in the following screenshot:

Importing a WSDL

Click on the WSDL folder within the project explorer to return to its project view. This will be updated to include our imported resources, as shown in the following screenshot:

Importing a WSDL

Creating our business service

We are now ready to create our Echo business service. Click on the Business Service folder within the Project Explorer to go to the project view for this folder.

In the Resources section, click on the drop-down list next to Create Resource and select Business Service. This will bring up the General Configuration page for creating a business service, as shown in the following screenshot:

Creating our business service

Here we specify the name of our business service (that is, EchoBS) and an optional description. Next we need to specify the Service Type, as we are creating our service based on a WSDL select WSDL Web Service.

Next, click the Browse button. This will launch a window from where we can select the WSDL for the Business Service, as shown on the next page:

Creating our business service

By default, this window will list all WSDL resources that are defined to the Service Bus, though you can restrict the list by defining the search criteria.

In our case, we just have the Echo WSDL, so we click on this. We will now be prompted to select a WSDL definition, as shown in the following screenshot:

Creating our business service

Here we need to select which binding or port definition we wish to use for our Business Service, select EchoProcess_pt and click Submit. Bindings provide an abstract interface and do not specify the physical endpoint, requiring additional configuration later. Ports have a physical endpoint and so require no additional configuration.

This will return us to the General Configuration screen with the Service Type updated to show the details of the selected WSDL and port, as shown in the following screenshot:

Creating our business service

Then, click on Next. This will take us to the Transport Configuration page, as shown in the following screenshot. Here we need to specify how the business service is to invoke an external service.

Creating our business service

As we based our business service on the EchoPort definition, the transport settings are already preconfigured, based on the content of our WSDL file.

Note

If we had based our business service on the EchoBinding definition, then the transport configuration would still have been prepopulated except for the Endpoint URI, which we would need to add manually.

From here, click on Last. This will take us to a summary page of our business service. Click on Save to create our business service.

This will return us to the project view on the Business Service folder and display the message The Service EchoBS was created successfully. If we examine the Resources section, we should see that it now contains our newly created business service.

Creating our business service

Creating our proxy service

We are now ready to create our Echo proxy service. Click on the Proxy Service folder within the Project Explorer to go to the project view for this folder.

In the Resources section, click on the drop-down list next to Create Resource and select Proxy Service. This will bring up the General Configuration page for creating a proxy service, as shown in the following screenshot:

Creating our proxy service

You will notice that this looks very similar to the general configuration screen for a business service. So as before, enter the name of our service (that is, Echo) and an optional description.

Next, we need to specify the Service Type. We could do this in exactly the same way as we did for our business service and base it on the Echo WSDL. However, this time we are going to base it on our EchoBS business service. We will see why in a moment.

For the Service Type, select Business Service, as shown in the screenshot, and click Browse…. This will launch the Select Business Service window from where we can search for and select the business service that we want to base our proxy service on.

Creating our proxy service

By default, this window will list all the business services defined to the Service Bus, though you can restrict the list by defining the search criteria.

In our case, we just have the EchoBS, so select this, and click on Submit. This will return us to the General Configuration screen with Service Type updated, as shown in the following screenshot:

Creating our proxy service

From here, click Last. This will take us to a summary page of our proxy service. Click Save to create our proxy service.

This will return us to the project view on the Proxy Service folder and display the message The Service Echo was created successfully.

If we examine the Resources section of our project view, we should see that it now contains our newly created proxy service as shown in the following screenshot:

Creating our proxy service

Creating message flow

Once we have created our proxy service, the next step is to specify how it should handle requests. This is defined in the message flow of the proxy service.

The message flow defines the actions that the proxy service should perform when a request is received such as validating the payload, transforming, or enriching it before routing it to the appropriate business service.

Within the resource section of our project view, click on the Edit Message Flow icon, as circled in the preceding image. This will take us to the Edit Message Flow window, where we can view and edit the message flow of our proxy service, as shown in the following screenshot:

Creating message flow

Looking at this, we can see that Echo already invokes the route node RouteTo_EchoBS.

Click on this and select Edit Route(as shown in the preceding screenshot). This will take us to the Edit Stage Configuration window, as shown in the following screenshot:

Creating message flow

Here we can see that it's already configured to route requests to the EchoBS business service.

Normally, when we create a proxy service we have to specify the message flow from scratch. However, when we created our Echo proxy service we based it on the EchoBS business service (as opposed to a WSDL). Because of this, the Service Bus has automatically configured the message flow to route requests to EchoBS.

As a result, our message flow is already predefined for us, so click Cancel, and then Cancel again to return to our project view.

Activating the Echo proxy service

We now have a completed proxy service; all that remains is to commit our work. Within the Change Center click Activate.

Activating the Echo proxy service

This will bring up the Activate Session, as shown in the following screenshot:

Activating the Echo proxy service

Before activating a session, it's good practice to give a description of the changes that we've made, just in case we need to roll them back later. So enter an appropriate description and then click on Submit, as shown in the preceding screenshot:

Assuming everything is okay, this will activate our changes, and the console will be updated to list our configuration changes, as shown in the following screenshot:

Activating the Echo proxy service

If you make a mistake and want to undo the changes you have activated, then you can click on the undo icon (circled in the preceding screenshot), and if you change your mind, you can revert the undo.

OSB allows you to undo any of your previous sessions as long as it doesn't result in an error in the runtime configuration of the Service Bus.

Testing our proxy service

All that's left is to test our proxy service. A simple way to do this is to initiate a test instance using the Service Bus test console.

To do this, we need to navigate back to the definition of our proxy service, rather than do this via the Project Explorer. We will use the Resource Browser. This provides a way to view all resources based on their type.

Click on the Resource Browser section within the navigation bar. By default, it will list all proxy services defined to the Service Bus, as shown in the following screenshot:

Testing our proxy service

We can then filter this list further by specifying the appropriate search criteria.

Click on the Launch Test Console icon for the Echo proxy service (circled in the preceding screenshot). This will launch the test console shown on the next page.

The Available Operations drop-down list allows us to specify which operation we want to invoke. In our case, it's automatically defaulted to process.

By default, the options Direct Call and Include Tracing are selected within the Test Configuration section; keep these selected as they enable us to trace the state of a message as it passes through the proxy service.

The Request Document section allows us to specify the SOAP Header and the Payload for our service. By default, these will contain a skeleton XML fragment based on the WSDL definition of the selected operation with default values for each field.

To execute a test instance of our service, modify the text in the <echo:input> element, as we have in the following screen shot, and click Execute. This will cause the console to generate a request message and use it to invoke our Echo proxy service.

Testing our proxy service

Upon successful execution of the proxy, the test console will be updated to show the response returned. Here we can see that the result element contains our original initial input string, as shown in the following screenshot:

Testing our proxy service

We can examine the state of our message as it passed through the proxy service by expanding the Invocation Trace, as we have in the following screenshot:

Testing our proxy service

In addition, if you log back into the EM console, you should be able to see the Assembly instance that was invoked by the Service Bus.

Creating a project

Before we can create our Echo proxy service, we must create an OSB project in which to place our resources. Typical resources include WSDL, XSD schemas, XSLT, and XQuery as well as Proxy and Business Services.

Resources can be created directly within our top-level project folder, or we can define a folder structure within our project into which we can place our resources.

Note

From within the same OSB domain, you can reference any resource regardless of which project it is included in.

The Project Explorer is where we create and manage all of this. Click on the Project Explorer section within the navigation bar. This will bring up the Projects view, as shown in the following screenshot:

Creating a project

Here we can see a list of all projects defined in OSB, which at this stage just includes the default project. From here we can also create a new project. Enter a project name, for example Chapter02, as shown in the preceding screenshot, and then click Add Project. This will create a new project and update our list of projects to reflect this.

Creating the project folders

Click on the project name will take us to the Project View, as shown in the screenshot on the next page.

We can see that this splits into three sections. The first section provides some basic details about the project including any references to or from artifacts in other projects as well as an optional description.

The second section lists any folders within the current project folder and provides the option to create additional folders within the project.

The final section lists any resource contained within this folder and provides the option to create additional resource.

Creating the project folders

We are going to create the project folders BusinessService, ProxyService, and WSDL, into which we will place our various resources. To create the first of these, in the Folders section, enter BusinessService as the folder name (circled in the preceding screenshot) and click on Add Folder. This will create a new folder and updates the list of folders to reflect this.

Creating the project folders

Once created, follow the same process to create the remaining folders; your list of folders will now look as shown in the preceding screenshot.

Creating service WSDL

Before we can create either our proxy or business service, we need to define the WSDL on which the service will be based. For this, we are going to use the WSDL of our Echo BPEL process that we created earlier in this chapter.

Before importing the WSDL, we need to ensure that we are in the right folder within our project. To do this, click on the WSDL folder in our Folders list. On doing this the project view will be updated to show us the content of this folder, which is currently empty. In addition, the project summary section of our project view will be updated to show that we are now within the WSDL folder, as circled in the following screenshot:

Creating service WSDL
Creating service WSDL

If we look at the Project Explorer in the navigation bar, we can see that it has been updated to show our location within the projects structure. By clicking on any project or folder in here, the console will take us to the project view for that location.

Importing a WSDL

To import the Echo WSDL into our project, click on the drop-down list next to Create Resource in the Resources section, and select Resources from URL, as shown in the following screenshot:

Importing a WSDL

This will bring up the page for loading resources from a URL, which is shown in the following screenshot:

Importing a WSDL

Note

A WSDL can also be imported from the filesystem by selecting the WSDL option from the Create Resource drop-down list.

In the URL/Path, enter the URL for our Echo WSDL. This is the WSDL location we made a note of earlier (in the WSDL tab for the Echo process in the BPEL console) and should look like the following:

http://<hostname>:<port>/orabpel/default/Echo/1.0/Echo?wsdl

Enter an appropriate value for the Resource Name(for example Echo), select a Resource Type as WSDL, and click on Next.

This will bring up the Load Resources window, which will list the resources that OSB is ready to import.

Importing a WSDL

You will notice that in addition to the actual WSDL file, it will also list the Echo.xsd. This is because the Echo.wsdl contains the following import statement:

<wsdl:types>
  <schema>
    <import namespace=
            "http://xmlns.oracle.com/SOASuiteBook11gChapter2_jws
/EchoComposite/EchoProcess"
            schemaLocation=
            "http://axreynol-us.us.oracle.com:8001/soa-infra/services/default/EchoComposite/echoprocess_client_ep?XSD=xsd/EchoProcess.xsd"/>
  </schema>
</wsdl:types>

This imports the Echo XML schema, which defines the input and output message of our Echo service. This schema was automatically generated by JDeveloper when we created our Echo process. In order to use our WSDL, we will need to import this schema as well. Because of the unusual URL for the XML Schema, the Service Bus generates its own unique name for the schema.

Click Import, the OSB console will confirm that the resources have been successfully imported and provide the option to Load Another resource, as shown in the following screenshot:

Importing a WSDL

Click on the WSDL folder within the project explorer to return to its project view. This will be updated to include our imported resources, as shown in the following screenshot:

Importing a WSDL

Creating our business service

We are now ready to create our Echo business service. Click on the Business Service folder within the Project Explorer to go to the project view for this folder.

In the Resources section, click on the drop-down list next to Create Resource and select Business Service. This will bring up the General Configuration page for creating a business service, as shown in the following screenshot:

Creating our business service

Here we specify the name of our business service (that is, EchoBS) and an optional description. Next we need to specify the Service Type, as we are creating our service based on a WSDL select WSDL Web Service.

Next, click the Browse button. This will launch a window from where we can select the WSDL for the Business Service, as shown on the next page:

Creating our business service

By default, this window will list all WSDL resources that are defined to the Service Bus, though you can restrict the list by defining the search criteria.

In our case, we just have the Echo WSDL, so we click on this. We will now be prompted to select a WSDL definition, as shown in the following screenshot:

Creating our business service

Here we need to select which binding or port definition we wish to use for our Business Service, select EchoProcess_pt and click Submit. Bindings provide an abstract interface and do not specify the physical endpoint, requiring additional configuration later. Ports have a physical endpoint and so require no additional configuration.

This will return us to the General Configuration screen with the Service Type updated to show the details of the selected WSDL and port, as shown in the following screenshot:

Creating our business service

Then, click on Next. This will take us to the Transport Configuration page, as shown in the following screenshot. Here we need to specify how the business service is to invoke an external service.

Creating our business service

As we based our business service on the EchoPort definition, the transport settings are already preconfigured, based on the content of our WSDL file.

Note

If we had based our business service on the EchoBinding definition, then the transport configuration would still have been prepopulated except for the Endpoint URI, which we would need to add manually.

From here, click on Last. This will take us to a summary page of our business service. Click on Save to create our business service.

This will return us to the project view on the Business Service folder and display the message The Service EchoBS was created successfully. If we examine the Resources section, we should see that it now contains our newly created business service.

Creating our business service

Creating our proxy service

We are now ready to create our Echo proxy service. Click on the Proxy Service folder within the Project Explorer to go to the project view for this folder.

In the Resources section, click on the drop-down list next to Create Resource and select Proxy Service. This will bring up the General Configuration page for creating a proxy service, as shown in the following screenshot:

Creating our proxy service

You will notice that this looks very similar to the general configuration screen for a business service. So as before, enter the name of our service (that is, Echo) and an optional description.

Next, we need to specify the Service Type. We could do this in exactly the same way as we did for our business service and base it on the Echo WSDL. However, this time we are going to base it on our EchoBS business service. We will see why in a moment.

For the Service Type, select Business Service, as shown in the screenshot, and click Browse…. This will launch the Select Business Service window from where we can search for and select the business service that we want to base our proxy service on.

Creating our proxy service

By default, this window will list all the business services defined to the Service Bus, though you can restrict the list by defining the search criteria.

In our case, we just have the EchoBS, so select this, and click on Submit. This will return us to the General Configuration screen with Service Type updated, as shown in the following screenshot:

Creating our proxy service

From here, click Last. This will take us to a summary page of our proxy service. Click Save to create our proxy service.

This will return us to the project view on the Proxy Service folder and display the message The Service Echo was created successfully.

If we examine the Resources section of our project view, we should see that it now contains our newly created proxy service as shown in the following screenshot:

Creating our proxy service

Creating message flow

Once we have created our proxy service, the next step is to specify how it should handle requests. This is defined in the message flow of the proxy service.

The message flow defines the actions that the proxy service should perform when a request is received such as validating the payload, transforming, or enriching it before routing it to the appropriate business service.

Within the resource section of our project view, click on the Edit Message Flow icon, as circled in the preceding image. This will take us to the Edit Message Flow window, where we can view and edit the message flow of our proxy service, as shown in the following screenshot:

Creating message flow

Looking at this, we can see that Echo already invokes the route node RouteTo_EchoBS.

Click on this and select Edit Route(as shown in the preceding screenshot). This will take us to the Edit Stage Configuration window, as shown in the following screenshot:

Creating message flow

Here we can see that it's already configured to route requests to the EchoBS business service.

Normally, when we create a proxy service we have to specify the message flow from scratch. However, when we created our Echo proxy service we based it on the EchoBS business service (as opposed to a WSDL). Because of this, the Service Bus has automatically configured the message flow to route requests to EchoBS.

As a result, our message flow is already predefined for us, so click Cancel, and then Cancel again to return to our project view.

Activating the Echo proxy service

We now have a completed proxy service; all that remains is to commit our work. Within the Change Center click Activate.

Activating the Echo proxy service

This will bring up the Activate Session page, as shown in the following screenshot:

Activating the Echo proxy service

Before activating a session, it's good practice to give a description of the changes that we've made, just in case we need to roll them back later. So enter an appropriate description, and then click on Submit, as shown in the preceding screenshot.

Assuming everything is okay, this will activate our changes, and the console will be updated to list our configuration changes, as shown in the following screenshot:

Activating the Echo proxy service

If you make a mistake and want to undo the changes you have activated, then you can click on the undo icon (circled in the preceding screenshot), and if you change your mind, you can revert the undo.

OSB allows you to undo any of your previous sessions as long as it doesn't result in an error in the runtime configuration of the Service Bus.

Testing our proxy service

All that's left is to test our proxy service. A simple way to do this is to initiate a test instance using the Service Bus test console.

To do this, we need to navigate back to the definition of our proxy service. Rather than doing this via the Project Explorer, we will use the Resource Browser, which provides a way to view all resources based on their type.

Click on the Resource Browser section within the navigation bar. By default, it will list all proxy services defined to the Service Bus, as shown in the following screenshot:

Testing our proxy service

We can then filter this list further by specifying the appropriate search criteria.

Click on the Launch Test Console icon for the Echo proxy service (circled in the preceding screenshot). This will launch the test console shown on the next page.

The Available Operations drop-down list allows us to specify which operation we want to invoke. In our case, it's automatically defaulted to process.

By default, the options Direct Call and Include Tracing are selected within the Test Configuration section; keep these selected as they enable us to trace the state of a message as it passes through the proxy service.

The Request Document section allows us to specify the SOAP Header and the Payload for our service. By default, these will contain a skeleton XML fragment based on the WSDL definition of the selected operation with default values for each field.
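To give a concrete picture, a minimal sketch of this skeleton payload for the process operation is shown below; the echo prefix and the default value are assumptions based on the Echo WSDL that JDeveloper generated, so the exact fragment in your console may differ slightly:

<echo:process
    xmlns:echo="http://xmlns.oracle.com/SOASuiteBook11gChapter2_jws/EchoComposite/EchoProcess">
  <!-- Default placeholder generated from the WSDL; this is the text to modify -->
  <echo:input>string</echo:input>
</echo:process>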

To execute a test instance of our service, modify the text in the <echo:input> element, as we have in the following screenshot, and click Execute. This will cause the console to generate a request message and use it to invoke our Echo proxy service.

Testing our proxy service

Upon successful execution of the proxy, the test console will be updated to show the response returned. Here we can see that the result element contains our initial input string, as shown in the following screenshot:

Testing our proxy service
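As a textual illustration of the same result, assuming we had replaced the default input text with, say, Hello Service Bus, the response payload would look roughly like the following. The processResponse wrapper element name is an assumption based on the convention used for synchronous BPEL processes; the result element is the one referred to above, and it simply carries the input text back unchanged:

<echo:processResponse
    xmlns:echo="http://xmlns.oracle.com/SOASuiteBook11gChapter2_jws/EchoComposite/EchoProcess">
  <!-- The Echo process returns the supplied input text unchanged -->
  <echo:result>Hello Service Bus</echo:result>
</echo:processResponse>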

We can examine the state of our message as it passed through the proxy service by expanding the Invocation Trace, as we have in the following screenshot:

Testing our proxy service

In addition, if you log back into the EM console, you should be able to see the Assembly instance that was invoked by the Service Bus.

Creating our proxy service

We are now ready to create our Echo proxy service. Click on the Proxy Service folder within the Project Explorer to go to the project view for this folder.

In the Resources section, click on the drop-down list next to Create Resource and select Proxy Service. This will bring up the General Configuration page for creating a proxy service, as shown in the following screenshot:

Creating our proxy service

You will notice that this looks very similar to the general configuration screen for a business service. So as before, enter the name of our service (that is, Echo) and an optional description.

Next, we need to specify the Service Type. We could do this in exactly the same way as we did for our business service and base it on the Echo WSDL. However, this time we are going to base it on our EchoBS business service. We will see why in a moment.

For the Service Type, select Business Service, as shown in the screenshot, and click Browse…. This will launch the Select Business Service window from where we can search for and select the business service that we want to base our proxy service on.

Creating our proxy service

By default, this window will list all the business services defined to the Service Bus, though you can restrict the list by defining the search criteria.

In our case, we just have the EchoBS, so select this, and click on Submit. This will return us to the General Configuration screen with Service Type updated, as shown in the following screenshot:

Creating our proxy service

From here, click Last. This will take us to a summary page of our proxy service. Click Save to create our proxy service.

This will return us to the project view on the Proxy Service folder and display the message The Service Echo was created successfully.

If we examine the Resources section of our project view, we should see that it now contains our newly created proxy service as shown in the following screenshot:

Creating our proxy service

Creating message flow

Once we have created our proxy service, the next step is to specify how it should handle requests. This is defined in the message flow of the proxy service.

The message flow defines the actions that the proxy service should perform when a request is received such as validating the payload, transforming, or enriching it before routing it to the appropriate business service.

Within the resource section of our project view, click on the Edit Message Flow icon, as circled in the preceding image. This will take us to the Edit Message Flow window, where we can view and edit the message flow of our proxy service, as shown in the following screenshot:

Creating message flow

Looking at this, we can see that Echo already invokes the route node RouteTo_EchoBS.

Click on this and select Edit Route(as shown in the preceding screenshot). This will take us to the Edit Stage Configuration window, as shown in the following screenshot:

Creating message flow

Here we can see that it's already configured to route requests to the EchoBS business service.

Normally, when we create a proxy service we have to specify the message flow from scratch. However, when we created our Echo proxy service we based it on the EchoBS business service (as opposed to a WSDL). Because of this, the Service Bus has automatically configured the message flow to route requests to EchoBS.

As a result, our message flow is already predefined for us, so click Cancel, and then Cancel again to return to our project view.

Activating the Echo proxy service

We now have a completed proxy service; all that remains is to commit our work. Within the Change Center click Activate.

Activating the Echo proxy service

This will bring up the Activate Session, as shown in the following screenshot:

Activating the Echo proxy service

Before activating a session, it's good practice to give a description of the changes that we've made, just in case we need to roll them back later. So enter an appropriate description and then click on Submit, as shown in the preceding screenshot:

Assuming everything is okay, this will activate our changes, and the console will be updated to list our configuration changes, as shown in the following screenshot:

Activating the Echo proxy service

If you make a mistake and want to undo the changes you have activated, then you can click on the undo icon (circled in the preceding screenshot), and if you change your mind, you can revert the undo.

OSB allows you to undo any of your previous sessions as long as it doesn't result in an error in the runtime configuration of the Service Bus.

Testing our proxy service

All that's left is to test our proxy service. A simple way to do this is to initiate a test instance using the Service Bus test console.

To do this, we need to navigate back to the definition of our proxy service, rather than do this via the Project Explorer. We will use the Resource Browser. This provides a way to view all resources based on their type.

Click on the Resource Browser section within the navigation bar. By default, it will list all proxy services defined to the Service Bus, as shown in the following screenshot:

Testing our proxy service

We can then filter this list further by specifying the appropriate search criteria.

Click on the Launch Test Console icon for the Echo proxy service (circled in the preceding screenshot). This will launch the test console shown on the next page.

The Available Operations drop-down list allows us to specify which operation we want to invoke. In our case, it's automatically defaulted to process.

By default, the options Direct Call and Include Tracing are selected within the Test Configuration section; keep these selected as they enable us to trace the state of a message as it passes through the proxy service.

The Request Document section allows us to specify the SOAP Header and the Payload for our service. By default, these will contain a skeleton XML fragment based on the WSDL definition of the selected operation with default values for each field.

To execute a test instance of our service, modify the text in the <echo:input> element, as we have in the following screen shot, and click Execute. This will cause the console to generate a request message and use it to invoke our Echo proxy service.

Testing our proxy service

Upon successful execution of the proxy, the test console will be updated to show the response returned. Here we can see that the result element contains our original initial input string, as shown in the following screenshot:

Testing our proxy service

We can examine the state of our message as it passed through the proxy service by expanding the Invocation Trace, as we have in the following screenshot:

Testing our proxy service

In addition, if you log back into the EM console, you should be able to see the Assembly instance that was invoked by the Service Bus.

Creating service WSDL

Before we can create either our proxy or business service, we need to define the WSDL on which the service will be based. For this, we are going to use the WSDL of our Echo BPEL process that we created earlier in this chapter.

Before importing the WSDL, we need to ensure that we are in the right folder within our project. To do this, click on the WSDL folder in our Folders list. On doing this the project view will be updated to show us the content of this folder, which is currently empty. In addition, the project summary section of our project view will be updated to show that we are now within the WSDL folder, as circled in the following screenshot:

Creating service WSDL
Creating service WSDL

If we look at the Project Explorer in the navigation bar, we can see that it has been updated to show our location within the projects structure. By clicking on any project or folder in here, the console will take us to the project view for that location.

Importing a WSDL

To import the Echo WSDL into our project, click on the drop-down list next to Create Resource in the Resources section, and select Resources from URL, as shown in the following screenshot:

Importing a WSDL

This will bring up the page for loading resources from a URL, which is shown in the following screenshot:

Importing a WSDL

Note

A WSDL can also be imported from the filesystem by selecting the WSDL option from the Create Resource drop-down list.

In the URL/Path, enter the URL for our Echo WSDL. This is the WSDL location we made a note of earlier (in the WSDL tab for the Echo process in the BPEL console) and should look like the following:

http://<hostname>:<port>/orabpel/default/Echo/1.0/Echo?wsdl

Enter an appropriate value for the Resource Name(for example Echo), select a Resource Type as WSDL, and click on Next.

This will bring up the Load Resources window, which will list the resources that OSB is ready to import.

Importing a WSDL

You will notice that in addition to the actual WSDL file, it will also list the Echo.xsd. This is because the Echo.wsdl contains the following import statement:

<wsdl:types>
  <schema>
    <import namespace=
            "http://xmlns.oracle.com/SOASuiteBook11gChapter2_jws
/EchoComposite/EchoProcess"
            schemaLocation=
            "http://axreynol-us.us.oracle.com:8001/soa-infra/services/default/EchoComposite/echoprocess_client_ep?XSD=xsd/EchoProcess.xsd"/>
  </schema>
</wsdl:types>

This imports the Echo XML schema, which defines the input and output message of our Echo service. This schema was automatically generated by JDeveloper when we created our Echo process. In order to use our WSDL, we will need to import this schema as well. Because of the unusual URL for the XML Schema, the Service Bus generates its own unique name for the schema.

Click Import, the OSB console will confirm that the resources have been successfully imported and provide the option to Load Another resource, as shown in the following screenshot:

Importing a WSDL

Click on the WSDL folder within the project explorer to return to its project view. This will be updated to include our imported resources, as shown in the following screenshot:

Importing a WSDL

Creating our business service

We are now ready to create our Echo business service. Click on the Business Service folder within the Project Explorer to go to the project view for this folder.

In the Resources section, click on the drop-down list next to Create Resource and select Business Service. This will bring up the General Configuration page for creating a business service, as shown in the following screenshot:

Creating our business service

Here we specify the name of our business service (that is, EchoBS) and an optional description. Next we need to specify the Service Type, as we are creating our service based on a WSDL select WSDL Web Service.

Next, click the Browse button. This will launch a window from where we can select the WSDL for the Business Service, as shown on the next page:

Creating our business service

By default, this window will list all WSDL resources that are defined to the Service Bus, though you can restrict the list by defining the search criteria.

In our case, we just have the Echo WSDL, so we click on this. We will now be prompted to select a WSDL definition, as shown in the following screenshot:

Creating our business service

Here we need to select which binding or port definition we wish to use for our Business Service, select EchoProcess_pt and click Submit. Bindings provide an abstract interface and do not specify the physical endpoint, requiring additional configuration later. Ports have a physical endpoint and so require no additional configuration.

This will return us to the General Configuration screen with the Service Type updated to show the details of the selected WSDL and port, as shown in the following screenshot:

Creating our business service

Then, click on Next. This will take us to the Transport Configuration page, as shown in the following screenshot. Here we need to specify how the business service is to invoke an external service.

Creating our business service

As we based our business service on the EchoPort definition, the transport settings are already preconfigured, based on the content of our WSDL file.

Note

If we had based our business service on the EchoBinding definition, then the transport configuration would still have been prepopulated except for the Endpoint URI, which we would need to add manually.

From here, click on Last. This will take us to a summary page of our business service. Click on Save to create our business service.

This will return us to the project view on the Business Service folder and display the message The Service EchoBS was created successfully. If we examine the Resources section, we should see that it now contains our newly created business service.

Creating our business service

Creating our proxy service

We are now ready to create our Echo proxy service. Click on the Proxy Service folder within the Project Explorer to go to the project view for this folder.

In the Resources section, click on the drop-down list next to Create Resource and select Proxy Service. This will bring up the General Configuration page for creating a proxy service, as shown in the following screenshot:

Creating our proxy service

You will notice that this looks very similar to the general configuration screen for a business service. So as before, enter the name of our service (that is, Echo) and an optional description.

Next, we need to specify the Service Type. We could do this in exactly the same way as we did for our business service and base it on the Echo WSDL. However, this time we are going to base it on our EchoBS business service. We will see why in a moment.

For the Service Type, select Business Service, as shown in the screenshot, and click Browse…. This will launch the Select Business Service window from where we can search for and select the business service that we want to base our proxy service on.

Creating our proxy service

By default, this window will list all the business services defined to the Service Bus, though you can restrict the list by defining the search criteria.

In our case, we just have the EchoBS, so select this, and click on Submit. This will return us to the General Configuration screen with Service Type updated, as shown in the following screenshot:

Creating our proxy service

From here, click Last. This will take us to a summary page of our proxy service. Click Save to create our proxy service.

This will return us to the project view on the Proxy Service folder and display the message The Service Echo was created successfully.

If we examine the Resources section of our project view, we should see that it now contains our newly created proxy service as shown in the following screenshot:

Creating our proxy service

Creating message flow

Once we have created our proxy service, the next step is to specify how it should handle requests. This is defined in the message flow of the proxy service.

The message flow defines the actions that the proxy service should perform when a request is received such as validating the payload, transforming, or enriching it before routing it to the appropriate business service.

Within the resource section of our project view, click on the Edit Message Flow icon, as circled in the preceding image. This will take us to the Edit Message Flow window, where we can view and edit the message flow of our proxy service, as shown in the following screenshot:

Creating message flow

Looking at this, we can see that Echo already invokes the route node RouteTo_EchoBS.

Click on this and select Edit Route(as shown in the preceding screenshot). This will take us to the Edit Stage Configuration window, as shown in the following screenshot:

Creating message flow

Here we can see that it's already configured to route requests to the EchoBS business service.

Normally, when we create a proxy service we have to specify the message flow from scratch. However, when we created our Echo proxy service we based it on the EchoBS business service (as opposed to a WSDL). Because of this, the Service Bus has automatically configured the message flow to route requests to EchoBS.

As a result, our message flow is already predefined for us, so click Cancel, and then Cancel again to return to our project view.

Activating the Echo proxy service

We now have a completed proxy service; all that remains is to commit our work. Within the Change Center click Activate.

Activating the Echo proxy service

This will bring up the Activate Session, as shown in the following screenshot:

Activating the Echo proxy service

Before activating a session, it's good practice to give a description of the changes that we've made, just in case we need to roll them back later. So enter an appropriate description and then click on Submit, as shown in the preceding screenshot:

Assuming everything is okay, this will activate our changes, and the console will be updated to list our configuration changes, as shown in the following screenshot:

Activating the Echo proxy service

If you make a mistake and want to undo the changes you have activated, then you can click on the undo icon (circled in the preceding screenshot), and if you change your mind, you can revert the undo.

OSB allows you to undo any of your previous sessions as long as it doesn't result in an error in the runtime configuration of the Service Bus.

Testing our proxy service

All that's left is to test our proxy service. A simple way to do this is to initiate a test instance using the Service Bus test console.

To do this, we need to navigate back to the definition of our proxy service, rather than do this via the Project Explorer. We will use the Resource Browser. This provides a way to view all resources based on their type.

Click on the Resource Browser section within the navigation bar. By default, it will list all proxy services defined to the Service Bus, as shown in the following screenshot:

Testing our proxy service

We can then filter this list further by specifying the appropriate search criteria.

Click on the Launch Test Console icon for the Echo proxy service (circled in the preceding screenshot). This will launch the test console shown on the next page.

The Available Operations drop-down list allows us to specify which operation we want to invoke. In our case, it's automatically defaulted to process.

By default, the options Direct Call and Include Tracing are selected within the Test Configuration section; keep these selected as they enable us to trace the state of a message as it passes through the proxy service.

The Request Document section allows us to specify the SOAP Header and the Payload for our service. By default, these will contain a skeleton XML fragment based on the WSDL definition of the selected operation with default values for each field.

To execute a test instance of our service, modify the text in the <echo:input> element, as we have in the following screen shot, and click Execute. This will cause the console to generate a request message and use it to invoke our Echo proxy service.

Testing our proxy service

Upon successful execution of the proxy, the test console will be updated to show the response returned. Here we can see that the result element contains our original initial input string, as shown in the following screenshot:

Testing our proxy service

We can examine the state of our message as it passed through the proxy service by expanding the Invocation Trace, as we have in the following screenshot:

Testing our proxy service

In addition, if you log back into the EM console, you should be able to see the Assembly instance that was invoked by the Service Bus.

Importing a WSDL

To import the Echo WSDL into our project, click on the drop-down list next to Create Resource in the Resources section, and select Resources from URL, as shown in the following screenshot:

Importing a WSDL

This will bring up the page for loading resources from a URL, which is shown in the following screenshot:

Importing a WSDL

Note

A WSDL can also be imported from the filesystem by selecting the WSDL option from the Create Resource drop-down list.

In the URL/Path, enter the URL for our Echo WSDL. This is the WSDL location we made a note of earlier (in the WSDL tab for the Echo process in the BPEL console) and should look like the following:

http://<hostname>:<port>/orabpel/default/Echo/1.0/Echo?wsdl

Enter an appropriate value for the Resource Name(for example Echo), select a Resource Type as WSDL, and click on Next.

This will bring up the Load Resources window, which will list the resources that OSB is ready to import.

Importing a WSDL

You will notice that in addition to the actual WSDL file, it will also list the Echo.xsd. This is because the Echo.wsdl contains the following import statement:

<wsdl:types>
  <schema>
    <import namespace=
            "http://xmlns.oracle.com/SOASuiteBook11gChapter2_jws
/EchoComposite/EchoProcess"
            schemaLocation=
            "http://axreynol-us.us.oracle.com:8001/soa-infra/services/default/EchoComposite/echoprocess_client_ep?XSD=xsd/EchoProcess.xsd"/>
  </schema>
</wsdl:types>

This imports the Echo XML schema, which defines the input and output message of our Echo service. This schema was automatically generated by JDeveloper when we created our Echo process. In order to use our WSDL, we will need to import this schema as well. Because of the unusual URL for the XML Schema, the Service Bus generates its own unique name for the schema.

Click Import, the OSB console will confirm that the resources have been successfully imported and provide the option to Load Another resource, as shown in the following screenshot:

Importing a WSDL

Click on the WSDL folder within the project explorer to return to its project view. This will be updated to include our imported resources, as shown in the following screenshot:

Importing a WSDL
Creating our business service

We are now ready to create our Echo business service. Click on the Business Service folder within the Project Explorer to go to the project view for this folder.

In the Resources section, click on the drop-down list next to Create Resource and select Business Service. This will bring up the General Configuration page for creating a business service, as shown in the following screenshot:

Creating our business service

Here we specify the name of our business service (that is, EchoBS) and an optional description. Next we need to specify the Service Type, as we are creating our service based on a WSDL select WSDL Web Service.

Next, click the Browse button. This will launch a window from where we can select the WSDL for the Business Service, as shown on the next page:

Creating our business service

By default, this window will list all WSDL resources that are defined to the Service Bus, though you can restrict the list by defining the search criteria.

In our case, we just have the Echo WSDL, so we click on this. We will now be prompted to select a WSDL definition, as shown in the following screenshot:

Creating our business service

Here we need to select which binding or port definition we wish to use for our Business Service, select EchoProcess_pt and click Submit. Bindings provide an abstract interface and do not specify the physical endpoint, requiring additional configuration later. Ports have a physical endpoint and so require no additional configuration.

This will return us to the General Configuration screen with the Service Type updated to show the details of the selected WSDL and port, as shown in the following screenshot:

Creating our business service

Then, click on Next. This will take us to the Transport Configuration page, as shown in the following screenshot. Here we need to specify how the business service is to invoke an external service.

Creating our business service

As we based our business service on the EchoPort definition, the transport settings are already preconfigured, based on the content of our WSDL file.

Note

If we had based our business service on the EchoBinding definition, then the transport configuration would still have been prepopulated except for the Endpoint URI, which we would need to add manually.

From here, click on Last. This will take us to a summary page of our business service. Click on Save to create our business service.

This will return us to the project view on the Business Service folder and display the message The Service EchoBS was created successfully. If we examine the Resources section, we should see that it now contains our newly created business service.

Creating our business service
Creating our proxy service

We are now ready to create our Echo proxy service. Click on the Proxy Service folder within the Project Explorer to go to the project view for this folder.

In the Resources section, click on the drop-down list next to Create Resource and select Proxy Service. This will bring up the General Configuration page for creating a proxy service, as shown in the following screenshot:

Creating our proxy service

You will notice that this looks very similar to the general configuration screen for a business service. So as before, enter the name of our service (that is, Echo) and an optional description.

Next, we need to specify the Service Type. We could do this in exactly the same way as we did for our business service and base it on the Echo WSDL. However, this time we are going to base it on our EchoBS business service. We will see why in a moment.

For the Service Type, select Business Service, as shown in the screenshot, and click Browse…. This will launch the Select Business Service window from where we can search for and select the business service that we want to base our proxy service on.

Creating our proxy service

By default, this window will list all the business services defined to the Service Bus, though you can restrict the list by defining the search criteria.

In our case, we just have the EchoBS, so select this, and click on Submit. This will return us to the General Configuration screen with Service Type updated, as shown in the following screenshot:

Creating our proxy service

From here, click Last. This will take us to a summary page of our proxy service. Click Save to create our proxy service.

This will return us to the project view on the Proxy Service folder and display the message The Service Echo was created successfully.

If we examine the Resources section of our project view, we should see that it now contains our newly created proxy service as shown in the following screenshot:

Creating our proxy service

Creating message flow

Once we have created our proxy service, the next step is to specify how it should handle requests. This is defined in the message flow of the proxy service.

The message flow defines the actions that the proxy service should perform when a request is received such as validating the payload, transforming, or enriching it before routing it to the appropriate business service.

Within the resource section of our project view, click on the Edit Message Flow icon, as circled in the preceding image. This will take us to the Edit Message Flow window, where we can view and edit the message flow of our proxy service, as shown in the following screenshot:

Creating message flow

Looking at this, we can see that Echo already invokes the route node RouteTo_EchoBS.

Click on this and select Edit Route(as shown in the preceding screenshot). This will take us to the Edit Stage Configuration window, as shown in the following screenshot:

Creating message flow

Here we can see that it's already configured to route requests to the EchoBS business service.

Normally, when we create a proxy service we have to specify the message flow from scratch. However, when we created our Echo proxy service we based it on the EchoBS business service (as opposed to a WSDL). Because of this, the Service Bus has automatically configured the message flow to route requests to EchoBS.

As a result, our message flow is already predefined for us, so click Cancel, and then Cancel again to return to our project view.

Activating the Echo proxy service

We now have a completed proxy service; all that remains is to commit our work. Within the Change Center click Activate.

Activating the Echo proxy service

This will bring up the Activate Session, as shown in the following screenshot:

Activating the Echo proxy service

Before activating a session, it's good practice to give a description of the changes that we've made, just in case we need to roll them back later. So enter an appropriate description and then click on Submit, as shown in the preceding screenshot:

Assuming everything is okay, this will activate our changes, and the console will be updated to list our configuration changes, as shown in the following screenshot:

Activating the Echo proxy service

If you make a mistake and want to undo the changes you have activated, then you can click on the undo icon (circled in the preceding screenshot), and if you change your mind, you can revert the undo.

OSB allows you to undo any of your previous sessions as long as it doesn't result in an error in the runtime configuration of the Service Bus.

Testing our proxy service

All that's left is to test our proxy service. A simple way to do this is to initiate a test instance using the Service Bus test console.

To do this, we need to navigate back to the definition of our proxy service, rather than do this via the Project Explorer. We will use the Resource Browser. This provides a way to view all resources based on their type.

Click on the Resource Browser section within the navigation bar. By default, it will list all proxy services defined to the Service Bus, as shown in the following screenshot:

Testing our proxy service

We can then filter this list further by specifying the appropriate search criteria.

Click on the Launch Test Console icon for the Echo proxy service (circled in the preceding screenshot). This will launch the test console shown on the next page.

The Available Operations drop-down list allows us to specify which operation we want to invoke. In our case, it's automatically defaulted to process.

By default, the options Direct Call and Include Tracing are selected within the Test Configuration section; keep these selected as they enable us to trace the state of a message as it passes through the proxy service.

The Request Document section allows us to specify the SOAP Header and the Payload for our service. By default, these will contain a skeleton XML fragment based on the WSDL definition of the selected operation with default values for each field.

To execute a test instance of our service, modify the text in the <echo:input> element, as we have in the following screen shot, and click Execute. This will cause the console to generate a request message and use it to invoke our Echo proxy service.

Testing our proxy service

Upon successful execution of the proxy, the test console will be updated to show the response returned. Here we can see that the result element contains our original initial input string, as shown in the following screenshot:

Testing our proxy service

We can examine the state of our message as it passed through the proxy service by expanding the Invocation Trace, as we have in the following screenshot:

Testing our proxy service

In addition, if you log back into the EM console, you should be able to see the Assembly instance that was invoked by the Service Bus.

Creating our business service

We are now ready to create our Echo business service. Click on the Business Service folder within the Project Explorer to go to the project view for this folder.

In the Resources section, click on the drop-down list next to Create Resource and select Business Service. This will bring up the General Configuration page for creating a business service, as shown in the following screenshot:

Creating our business service

Here we specify the name of our business service (that is, EchoBS) and an optional description. Next we need to specify the Service Type, as we are creating our service based on a WSDL select WSDL Web Service.

Next, click the Browse button. This will launch a window from where we can select the WSDL for the Business Service, as shown on the next page:

Creating our business service

By default, this window will list all WSDL resources that are defined to the Service Bus, though you can restrict the list by defining the search criteria.

In our case, we just have the Echo WSDL, so we click on this. We will now be prompted to select a WSDL definition, as shown in the following screenshot:

Creating our business service

Here we need to select which binding or port definition we wish to use for our Business Service, select EchoProcess_pt and click Submit. Bindings provide an abstract interface and do not specify the physical endpoint, requiring additional configuration later. Ports have a physical endpoint and so require no additional configuration.

This will return us to the General Configuration screen with the Service Type updated to show the details of the selected WSDL and port, as shown in the following screenshot:

Creating our business service

Then, click on Next. This will take us to the Transport Configuration page, as shown in the following screenshot. Here we need to specify how the business service is to invoke an external service.

Creating our business service

As we based our business service on the EchoPort definition, the transport settings are already preconfigured, based on the content of our WSDL file.

Note

If we had based our business service on the EchoBinding definition, then the transport configuration would still have been prepopulated except for the Endpoint URI, which we would need to add manually.

From here, click on Last. This will take us to a summary page of our business service. Click on Save to create our business service.

This will return us to the project view on the Business Service folder and display the message The Service EchoBS was created successfully. If we examine the Resources section, we should see that it now contains our newly created business service.

Creating our business service

Creating our proxy service

We are now ready to create our Echo proxy service. Click on the Proxy Service folder within the Project Explorer to go to the project view for this folder.

In the Resources section, click on the drop-down list next to Create Resource and select Proxy Service. This will bring up the General Configuration page for creating a proxy service, as shown in the following screenshot:

Creating our proxy service

You will notice that this looks very similar to the general configuration screen for a business service. So as before, enter the name of our service (that is, Echo) and an optional description.

Next, we need to specify the Service Type. We could do this in exactly the same way as we did for our business service and base it on the Echo WSDL. However, this time we are going to base it on our EchoBS business service. We will see why in a moment.

For the Service Type, select Business Service, as shown in the screenshot, and click Browse…. This will launch the Select Business Service window from where we can search for and select the business service that we want to base our proxy service on.

Creating our proxy service

By default, this window will list all the business services defined to the Service Bus, though you can restrict the list by defining the search criteria.

In our case, we just have the EchoBS, so select this, and click on Submit. This will return us to the General Configuration screen with Service Type updated, as shown in the following screenshot:

Creating our proxy service

From here, click Last. This will take us to a summary page of our proxy service. Click Save to create our proxy service.

This will return us to the project view on the Proxy Service folder and display the message The Service Echo was created successfully.

If we examine the Resources section of our project view, we should see that it now contains our newly created proxy service as shown in the following screenshot:

Creating our proxy service

Creating message flow

Once we have created our proxy service, the next step is to specify how it should handle requests. This is defined in the message flow of the proxy service.

The message flow defines the actions that the proxy service should perform when a request is received such as validating the payload, transforming, or enriching it before routing it to the appropriate business service.

Within the resource section of our project view, click on the Edit Message Flow icon, as circled in the preceding image. This will take us to the Edit Message Flow window, where we can view and edit the message flow of our proxy service, as shown in the following screenshot:

Creating message flow

Looking at this, we can see that Echo already invokes the route node RouteTo_EchoBS.

Click on this and select Edit Route(as shown in the preceding screenshot). This will take us to the Edit Stage Configuration window, as shown in the following screenshot:

Creating message flow

Here we can see that it's already configured to route requests to the EchoBS business service.

Normally, when we create a proxy service we have to specify the message flow from scratch. However, when we created our Echo proxy service we based it on the EchoBS business service (as opposed to a WSDL). Because of this, the Service Bus has automatically configured the message flow to route requests to EchoBS.

As a result, our message flow is already predefined for us, so click Cancel, and then Cancel again to return to our project view.

Activating the Echo proxy service

We now have a completed proxy service; all that remains is to commit our work. Within the Change Center click Activate.

Activating the Echo proxy service

This will bring up the Activate Session, as shown in the following screenshot:

Activating the Echo proxy service

Before activating a session, it's good practice to give a description of the changes that we've made, just in case we need to roll them back later. So enter an appropriate description and then click on Submit, as shown in the preceding screenshot:

Assuming everything is okay, this will activate our changes, and the console will be updated to list our configuration changes, as shown in the following screenshot:

Activating the Echo proxy service

If you make a mistake and want to undo the changes you have activated, then you can click on the undo icon (circled in the preceding screenshot), and if you change your mind, you can revert the undo.

OSB allows you to undo any of your previous sessions as long as it doesn't result in an error in the runtime configuration of the Service Bus.

Testing our proxy service

All that's left is to test our proxy service. A simple way to do this is to initiate a test instance using the Service Bus test console.

To do this, we need to navigate back to the definition of our proxy service, rather than do this via the Project Explorer. We will use the Resource Browser. This provides a way to view all resources based on their type.

Click on the Resource Browser section within the navigation bar. By default, it will list all proxy services defined to the Service Bus, as shown in the following screenshot:

Testing our proxy service

We can then filter this list further by specifying the appropriate search criteria.

Click on the Launch Test Console icon for the Echo proxy service (circled in the preceding screenshot). This will launch the test console shown on the next page.

The Available Operations drop-down list allows us to specify which operation we want to invoke. In our case, it's automatically defaulted to process.

By default, the options Direct Call and Include Tracing are selected within the Test Configuration section; keep these selected as they enable us to trace the state of a message as it passes through the proxy service.

The Request Document section allows us to specify the SOAP Header and the Payload for our service. By default, these will contain a skeleton XML fragment based on the WSDL definition of the selected operation with default values for each field.

To execute a test instance of our service, modify the text in the <echo:input> element, as we have in the following screen shot, and click Execute. This will cause the console to generate a request message and use it to invoke our Echo proxy service.

Testing our proxy service

Upon successful execution of the proxy, the test console will be updated to show the response returned. Here we can see that the result element contains our original initial input string, as shown in the following screenshot:

Testing our proxy service

We can examine the state of our message as it passed through the proxy service by expanding the Invocation Trace, as we have in the following screenshot:

Testing our proxy service

In addition, if you log back into the EM console, you should be able to see the Assembly instance that was invoked by the Service Bus.

Creating our proxy service

We are now ready to create our Echo proxy service. Click on the Proxy Service folder within the Project Explorer to go to the project view for this folder.

In the Resources section, click on the drop-down list next to Create Resource and select Proxy Service. This will bring up the General Configuration page for creating a proxy service, as shown in the following screenshot:

Creating our proxy service

You will notice that this looks very similar to the general configuration screen for a business service. So as before, enter the name of our service (that is, Echo) and an optional description.

Next, we need to specify the Service Type. We could do this in exactly the same way as we did for our business service and base it on the Echo WSDL. However, this time we are going to base it on our EchoBS business service. We will see why in a moment.

For the Service Type, select Business Service, as shown in the screenshot, and click Browse…. This will launch the Select Business Service window from where we can search for and select the business service that we want to base our proxy service on.

Creating our proxy service

By default, this window will list all the business services defined to the Service Bus, though you can restrict the list by defining the search criteria.

In our case, we just have the EchoBS, so select this, and click on Submit. This will return us to the General Configuration screen with Service Type updated, as shown in the following screenshot:

Creating our proxy service

From here, click Last. This will take us to a summary page of our proxy service. Click Save to create our proxy service.

This will return us to the project view on the Proxy Service folder and display the message The Service Echo was created successfully.

If we examine the Resources section of our project view, we should see that it now contains our newly created proxy service as shown in the following screenshot:

Creating our proxy service

Creating message flow

Once we have created our proxy service, the next step is to specify how it should handle requests. This is defined in the message flow of the proxy service.

The message flow defines the actions that the proxy service should perform when a request is received such as validating the payload, transforming, or enriching it before routing it to the appropriate business service.

Within the resource section of our project view, click on the Edit Message Flow icon, as circled in the preceding image. This will take us to the Edit Message Flow window, where we can view and edit the message flow of our proxy service, as shown in the following screenshot:

Creating message flow

Looking at this, we can see that Echo already invokes the route node RouteTo_EchoBS.

Click on this and select Edit Route(as shown in the preceding screenshot). This will take us to the Edit Stage Configuration window, as shown in the following screenshot:

Creating message flow

Here we can see that it's already configured to route requests to the EchoBS business service.

Normally, when we create a proxy service we have to specify the message flow from scratch. However, when we created our Echo proxy service we based it on the EchoBS business service (as opposed to a WSDL). Because of this, the Service Bus has automatically configured the message flow to route requests to EchoBS.

As a result, our message flow is already predefined for us, so click Cancel, and then Cancel again to return to our project view.

Activating the Echo proxy service

We now have a completed proxy service; all that remains is to commit our work. Within the Change Center click Activate.

Activating the Echo proxy service

This will bring up the Activate Session, as shown in the following screenshot:

Activating the Echo proxy service

Before activating a session, it's good practice to give a description of the changes that we've made, just in case we need to roll them back later. So enter an appropriate description and then click on Submit, as shown in the preceding screenshot:

Assuming everything is okay, this will activate our changes, and the console will be updated to list our configuration changes, as shown in the following screenshot:

Activating the Echo proxy service

If you make a mistake and want to undo the changes you have activated, then you can click on the undo icon (circled in the preceding screenshot), and if you change your mind, you can revert the undo.

OSB allows you to undo any of your previous sessions as long as it doesn't result in an error in the runtime configuration of the Service Bus.

Testing our proxy service

All that's left is to test our proxy service. A simple way to do this is to initiate a test instance using the Service Bus test console.

To do this, we need to navigate back to the definition of our proxy service, rather than do this via the Project Explorer. We will use the Resource Browser. This provides a way to view all resources based on their type.

Click on the Resource Browser section within the navigation bar. By default, it will list all proxy services defined to the Service Bus, as shown in the following screenshot:

Testing our proxy service

We can then filter this list further by specifying the appropriate search criteria.

Click on the Launch Test Console icon for the Echo proxy service (circled in the preceding screenshot). This will launch the test console, described next.

The Available Operations drop-down list allows us to specify which operation we want to invoke. In our case, it's automatically defaulted to process.

By default, the options Direct Call and Include Tracing are selected within the Test Configuration section; keep these selected as they enable us to trace the state of a message as it passes through the proxy service.

The Request Document section allows us to specify the SOAP Header and the Payload for our service. By default, these will contain a skeleton XML fragment based on the WSDL definition of the selected operation with default values for each field.

To execute a test instance of our service, modify the text in the <echo:input> element, as we have in the following screenshot, and click Execute. This will cause the console to generate a request message and use it to invoke our Echo proxy service.
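
For reference, the payload portion of the request will look broadly like the following fragment; the element names and namespace URI shown here are illustrative and will match whatever was generated for the Echo composite in the previous chapter:

<echo:process xmlns:echo="http://xmlns.oracle.com/Echo">
    <echo:input>Hello Service Bus</echo:input>
</echo:process>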

Testing our proxy service

Upon successful execution of the proxy, the test console will be updated to show the response returned. Here we can see that the result element contains our original input string, as shown in the following screenshot:

Testing our proxy service

We can examine the state of our message as it passed through the proxy service by expanding the Invocation Trace, as we have in the following screenshot:

Testing our proxy service

In addition, if you log back into the EM console, you should be able to see the Assembly instance that was invoked by the Service Bus.

Summary

In this section, we have implemented our first SCA Assembly and then built our first proxy service on top of it. While this example is about as trivial as it can get, it has provided us with an initial introduction to both the design time and runtime components of Oracle BPEL PM and Oracle Service Bus.

In the next few chapters we will go into more detail on each of these components as well as look at how we can use adapters to service enable existing systems.

Chapter 3. Service-enabling Existing Systems

The heart of service-oriented architecture (SOA) is the creation of processes and applications from existing services. The question arises, where do these services come from? Within an SOA solution, some services will need to be written from scratch, but most of the functions required should already exist in some form within the IT assets of the organization. Existing applications within the enterprise already provide many services that just require exposing to an SOA infrastructure. In this chapter, we will examine some ways to create services from existing applications. We refer to this process as service-enabling existing systems. After discussing some of the different types of systems, we will look at the specific functionality provided in the Oracle SOA Suite that makes it easy to convert file and database interfaces into services.

Types of systems

IT systems come in all sorts of shapes and forms; some have existing web service interfaces which can be consumed directly by an SOA infrastructure, others have completely proprietary interfaces, and others expose functionality through some well understood but non web service-based interfaces. In terms of service-enabling a system, it is useful to classify it by the type of interface it exposes.

Within the SOA Suite, components called adapters provide a mapping between non-web service interfaces and the rest of the SOA Suite. These adapters allow the SOA Suite to treat non-web service interfaces as though they have a web service interface.

Web service interfaces

If an application exposes a web service interface, meaning a SOAP service described by a Web Service Description Language (WSDL) document, then it may be consumed directly. Such web services can directly be included as part of a composite application or business process.

The latest versions of many applications expose web services; for example, SAP, Siebel, PeopleSoft, and E-Business Suite applications provide access to at least some of their functionality through web services.

Technology interfaces

Many applications, such as SAP and Oracle E-Business Suite, currently expose only part of their functionality or no functionality through web service interfaces, but they can still participate in service-oriented architecture. Many applications have adopted an interface that is to some extent based on a standard technology.

Examples of standard technology interfaces include the following:

  • Files
  • Database tables and stored procedures
  • Message queues

While these interfaces may be based on a standard technology, they do not provide a standard data model, and generally, there must be a mapping between the raw technology interface and the more structured web service style interface that we would like.

The following list shows how these interfaces are supported through the technology adapters provided with the SOA Suite:

  • Files (File adapter): Reads and writes files mounted directly on the machine. This can be physically attached disks or network mounted devices (for example, Windows shared drives or NFS drives).
  • Files (FTP adapter): Reads and writes files mounted on an FTP server.
  • Database (Database adapter): Reads and writes database tables and invokes stored procedures.
  • Message queues (JMS adapter): Reads and posts messages to Java Messaging Service (JMS) queues and topics.
  • Message queues (AQ adapter): Reads and posts messages to Oracle AQ (Advanced Queuing) queues.
  • Message queues (MQ adapter): Reads and posts messages to IBM MQ (Message Queue) Series queues.
  • Java (EJB adapter): Reads and writes to EJBs.
  • TCP/IP (Socket adapter): Reads and writes to raw socket interfaces.

In addition to the eight technology adapters listed previously, there are other technology adapters available, such as a CICS adapter to connect to IBM mainframes and an adapter to connect to systems running Oracle's Tuxedo transaction processing system. There are many other technology adapters that may be purchased to work with the SOA Suite.

The installed adapters are shown in the Component Palette of JDeveloper in the Service Adapters section when SOA is selected, as shown in the following screenshot:

Technology interfaces

Application interfaces

The technology adapters leave the task of mapping interfaces and their associated data structures into XML in the hands of the service-enabler. When using an application adapter, such as those for the Oracle E-Business Suite or SAP, the grouping of interfaces and mapping them into XML is already done for you by the adapter developer. These application adapters make life easier for the service-enabler by hiding underlying data formats and transport protocols.

Unfortunately, the topic of application adapters is too large an area to delve into in this book, but you should always check if an application-specific adapter already exists for the system that you want to service-enable. This is because application adapters will be easier to use than the technology adapters.

There are hundreds of third-party adapters that may be purchased to provide SOA Suite with access to functionality within packaged applications.

Java Connector Architecture

Within the SOA Suite, adapters are implemented and accessed using a Java technology known as Java Connector Architecture (JCA). JCA provides a standard packaging and discovery method for adapter functionality. Most of the time, SOA Suite developers will be unaware of JCA because JDeveloper generates a JCA binding as part of a WSDL interface and automatically deploys them with the SCA Assembly. In the current release, JCA adapters must be deployed separately to a WebLogic server for use by the Oracle Service Bus.

Creating services from files

A common mechanism for communicating with an existing application is through a file. Many applications will write their output to a file, expecting it to be picked up and processed by other applications. By using the file adapter, we can create a service representation that makes the file-producing application appear as an SOA-enabled service that invokes other services. Similarly, other applications can be configured to take input by reading files. A file adapter allows us to make the production of the file appear as an SOA invocation, but under the covers, the invocation actually creates a file.

File communication is either inbound (this means that a file has been created by an application and must be read) or outbound (this means that a file must be written to provide input to an application). The files that are written and read by existing applications may be in a variety of formats including XML, separator delimited files, or fixed format files.

A payroll use case

Consider a company that has a payroll application that produces a file detailing payments. This file must be transformed into a file format that is accepted by the company's bank and then delivered to the bank via FTP. The company wants to use SOA technologies to perform this transfer because it allows them to perform additional validations or enrichment of the data before sending it to the bank. In addition, they want to store the details of what was sent in a database for audit purposes. In this scenario, a file adapter could be used to take the data from the file, an FTP adapter to deliver it to the bank, and a database adapter could post it into the tables required for audit purposes.

Reading a payroll file

Let's look at how we would read from a payroll file. Normally, we will poll to check for the arrival of a file, although it is also possible to read a file without polling. The key points to be considered beforehand are:

  • How often should we poll for the file?
  • Do we need to read the contents of the file?
  • Do we need to move it to a different location?
  • What do we do with the file when we have read or moved it?
    • Should we delete it?
    • Should we move it to an archive directory?
  • How large is the file and its records?
  • Does the file have one record or many?

We will consider all these factors as we interact with the File Adapter Wizard.

Starting the wizard

We begin by dragging the file adapter from the component palette in JDeveloper onto either a BPEL process or an SCA Assembly (refer to Chapter 2, Writing your First Composite for more information on building a composite).

This causes the File Adapter Configuration Wizard to start.

Starting the wizard

Naming the service

Clicking on Next allows us to choose a name for the service that we are creating and optionally a description. We will use the service name PayrollInputFileService. Any name can be used, as long as it has some meaning to the developers. It is a good idea to have a consistent naming convention, for example, identifying the business role (PayrollInput), the technology (File), and the fact that this is a service (PayrollInputFileService).

Naming the service

Identifying the operation

Clicking on Next allows us to either import an existing WSDL definition for our service or create a new service definition. We would import an existing WSDL to reuse an existing adapter configuration that had been created previously. Choosing Define from operation and schema (specified later) allows us to create a new definition.

Identifying the operation

If we choose to create a new definition, then we start by specifying how we map the files onto a service. It is here that we decide whether we are reading or writing the file. When reading a file, we decide if we wish to generate an event when it is available (a normal Read File operation that requires an inbound operation to receive the message) or if we want to read it only when requested (a Synchronous Read File operation that requires an outbound operation).

Identifying the operation

Tip

Who calls who?

We usually think of a service as something that we call and then get a result from. However, in reality, services in a service-oriented architecture will often initiate events. These events may be delivered to a BPEL process which is waiting for an event, or routed to another service through the Service Bus, Mediator, or even initiate a whole new SCA Assembly. Under the covers, an adapter might need to poll to detect an event, but the service will always be able to generate an event. With a service, we either call it to get a result or it generates an event that calls some other service or process.

The file adapter wizard exposes four types of operation, as outlined in the following list. We will explore the read operation to generate events as a file is created.

  • Read File (inbound call from service): Reads the file and generates one or more calls into BPEL, Mediator, or Service Bus when a file appears.
  • Write File (outbound call to service with no response): Writes a file, with one or more calls from BPEL, Mediator, or the Service Bus causing records to be written to a file.
  • Synchronous Read File (outbound call to service returning file contents): BPEL, Mediator, or Service Bus requests a file to be read, returning nothing if the file doesn't exist.
  • List Files (outbound call to service returning a list of files in a directory): Provides a means for listing the files in a directory.

Tip

Why ignore the contents of the file?

The file adapter has an option named Do not read file content. This is used when the file is just a signal for some event. Do not use this feature for the scenario where a file is written and then marked as available by another file being written. This is explicitly handled elsewhere in the file adapter. Instead, the feature can be used as a signal of some event that has no relevant data other than the fact that something has happened. Although the file itself is not readable, certain metadata is made available as part of the message sent.

Identifying the operation

Defining the file location

Clicking on Next allows us to configure the location of the file. Locations can be specified as either physical (mapped directly onto the filesystem) or logical (an indirection to the real location). The Directory for Incoming Files specifies where the adapter should look to find new files. If the file should appear in a subdirectory of the one specified, then the Process files recursively box should be checked.

Defining the file location

The key question now is what to do with the file when it appears. One option is to keep a copy of the file in an archive directory. This is achieved by checking the Archive processed files attribute and providing a location for the file archive. In addition to archiving the file, we need to decide if we want to delete the original file. This is indicated by the Delete files after successful retrieval checkbox.

Tip

Logical versus Physical locations

The file adapter allows us to have logical (Logical Name) or physical locations (Physical Path) for files. Physical locations are easier for developers as we embed the exact file location into the assembly with no more work required. However, this only works if the file locations are the same in the development, test, and production environments, which is particularly unlikely if development is done on Windows but production is on Linux. Hence, for production systems, it is best to use logical locations that must be mapped onto physical locations when deployed. Chapter 19, Packaging and Deploying shows how this mapping may be different for each environment.

Selecting specific files

Having defined the location where files are found, we can now move on to the next step in the wizard. Here we describe what the filenames look like. We can describe filenames using either wildcards (using '*' to represent a sequence of 0 or more characters) or using Java regular expressions, as described in the documentation for the java.util.regex.Pattern class. Usually wildcards will be good enough. For example, if we want to select all files that start with "PR" and end with ".txt", then we would use the wildcard string "PR*.txt" or the regular expression "PR.*\.txt". As you can see, it is generally easier to use wildcards rather than regular expressions. We can also specify a pattern to identify which files should not be processed.

The final part of this screen in the adapter wizard asks if the file contains a single message or many messages. This is confusing because when the screen refers to messages, it really means records.

Selecting specific files

Tip

XML files

It is worth remembering that a well formed XML document can only have a single root element, and hence an XML input file will normally have only a single input record. In the case of very large XML files, it is possible to have the file adapter batch the file up into multiple messages, in which case the root element is replicated in each message, and the second level elements are treated as records. This behavior is requested by setting the streaming option.

By default, a message will contain a single record from the file. Records will be defined in the next step of the wizard. If the file causes a BPEL process to be started, then a 1000 record file would result in 1000 BPEL processes being initiated. To improve efficiency, records can be batched, and the Publish Messages in Batches of attribute controls the maximum number of records in a message.

Tip

Message batching

It is common for an incoming file to contain many records. These records, when processed, can impact system performance and memory requirements. Hence it is important to choose a batch size that balances the number of messages generated against the memory and processing cost of each message.

Detecting that the file is available

The next step in the wizard allows us to configure the frequency of polling for the inbound file. There are two parameters that can be configured here: the Polling Frequency and the Minimum File Age.

Detecting that the file is available

The Polling Frequency just means the time delay between checking to see if a file is available for processing. The adapter will check once per interval to see if the file exists. Setting this too low can consume needless CPU resources, while setting it too high can make the system appear unresponsive. 'Too high' and 'too low' are very subjective and will depend on your individual requirements. For example, the polling interval for a file that is expected to be written twice a day may be set to three hours, while the interval for a file that is expected to be written every hour may be set to 15 minutes.

Minimum File Age specifies how old a file must be before it is processed by the adapter. This setting allows a file to be completely written before it is read. For example, a large file may take five minutes to write out from the original application. If the file is read three minutes after it has been created, then it is possible for the adapter to run out of records to read and assume the file has been processed, when in reality, the application is still writing to the file. Setting a minimum age to ten minutes would avoid this problem by giving the application at least ten minutes to write the file.

As an alternative to polling for a file directly, we may use a trigger file to indicate that a file is available. Some systems write large files to disk and then indicate that they are available by writing a trigger file. This avoids the problems with reading an incomplete file we identified in the previous paragraph, without the delay in processing the file that a minimum age field may cause.

Message format

The penultimate step in the file adapter wizard is to set up the format of records or messages in the file. This is one of the most critical steps, as it defines the format of the messages generated from the file.

Messages may be opaque, meaning that they are passed around as black boxes. This may be appropriate with a Microsoft Word file, for example, that must merely be transported from point A to point B without being examined. This is indicated by the Native format translation is not required (Schema is Opaque) checkbox.

Message format

If the document is already in an XML format, then we can just specify a schema and an expected root element and the job is done. Normally the file is in some non-XML format that must be mapped onto an XML Schema generated through the native format builder wizard, which is invoked through the Define Schema for Native Format button.

Defining a native format schema

Invoking the Native Format Builder wizard brings up an initial start screen that leads on to the first step in the wizard, choosing the type of format, as shown in the following screenshot:

Defining a native format schema

This allows us to identify the overall record structure. If we have an existing schema document that describes the record structure, then we can point to that. Usually, we will need to determine the type of structure of the file ourselves. The choices available are:

  • Delimited: These are files, such as CSV (comma separated values) files, that use a separator such as a comma, space, or '+' sign between fields.
  • Fixed Length: These are files whose records consist of fixed length fields. Be careful not to confuse these with space-separated files, as if a value does not fill the entire field, it will usually be padded with spaces.
  • Complex Type: These files may include nested records, like a master-detail type structure.
  • DTD to be converted to XSD: These are XML files described by a Document Type Definition (DTD) that will be mapped onto an XML Schema description of the file content.
  • Cobol Copybook to be converted to native format: These are files that have usually been produced by a COBOL system, often originating from a mainframe.
We will look at a delimited file, as it is one of the most common formats.

Although we are using the separator file type, the steps involved are basically the same for most file types including the fixed length field format, which is also extremely common.

Using a sample file

To make it easier to describe the format of the incoming file, the wizard asks us to specify a file to use as a sample. If necessary, we can skip rows in the file and determine the number of records to read. Obviously, reading a very large number of records may take a while, and if all the variability in the file is in the first ten records, then there is no point in wasting time reading any more sample records. We may also choose to restrict the number of rows processed at runtime.

Setting the character set needs to be done carefully, particularly in international environments where non-ASCII character sets may be common.

After selecting a sample file, the wizard will display an initial view of the file with a guess at the field separators.
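
As an illustration, a sample delimited payroll file of the kind used in this example might look like the following, with a header record identified by H in the first field and payment records identified by R; the exact fields and values are hypothetical:

H,20110131,ACME Corp
R,1001,John Smith,2750.00,GBP
R,1002,Jane Jones,3100.50,GBP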

Using a sample file

Record structure

The next step of the wizard allows us to describe how the records appear in the file.

The first option File contains only one record allows us to process the file as a single message. This can be useful when the file has multiple records, all of the same format, which we want to read as a single message. Use of this option disables batching.

The next option of File contains multiple record instances allows batching to take place. Records are either of the same type or of different types. They can only be marked as being of different types if they can be distinguished based on the first field in the record. In other words, in order to choose the Multiple records are of different types option, the first field in all the records must be a record type identifier. In the example shown in the preceding screenshot, the first field is either an H for Header records or an R for Records.

Record structure

Choosing a root element

The next step allows us to define the target namespace and root element of the schema that we are generating.

Choosing a root element

Note

Don't forget that when using the Native Format Builder wizard, we are just creating an XML Schema document that describes the native (non-XML) format data. Most of the time this schema is transparent to us. However, at times the XML constructs have to emerge, for example, when identifying a name for a root element. The file is described using an XML Schema extension known as NXSD.

As we can see, the root element is mandatory. This root element acts as a wrapper for the records in a message. If message batching is set to 1, then each wrapper will have a single sub-element, namely, the record. If the batch size is set to greater than 1, then each wrapper will have at least one and possibly more sub-elements, each sub-element being a record. There can never be more sub-elements than the batch size.
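
To make this concrete, a batched message produced by the adapter might look broadly like the following; the element names and namespace are hypothetical and will match the root element and record names chosen in the wizard:

<PayrollList xmlns="http://TargetNamespace.com/PayrollInputFileService">
    <PayrollRecord>
        <EmployeeId>1001</EmployeeId>
        <EmployeeName>John Smith</EmployeeName>
        <Amount>2750.00</Amount>
        <Currency>GBP</Currency>
    </PayrollRecord>
    <PayrollRecord>
        <EmployeeId>1002</EmployeeId>
        <EmployeeName>Jane Jones</EmployeeName>
        <Amount>3100.50</Amount>
        <Currency>GBP</Currency>
    </PayrollRecord>
</PayrollList>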

Message delimiters

Having described the overall structure of the file, we can now drill down into the individual fields. To do this, we first specify the message delimiters.

In addition to field delimiters, we can also specify a record delimiter. Usually record delimiters are new lines. If fields are also wrapped in quotation marks, then these can be stripped off by specifying the Optionally enclosed by character.

Message delimiters

Record type names

The wizard will identify the types of records based on the first field in each record, as shown in the preceding screenshot. It is possible to ignore record types by selecting them and clicking Delete. If this is done by mistake, then it is possible to add them back by using the Add button. Only record types that exist in the sample data can be added in the wizard.

Record type names

Note that if we want to reset the record types screen, then the Scan button will rescan the sample file and look for all the different record types it contains.

The Record Name field can be set by double-clicking it and providing a suitable record name. This record name is the XML element name that encapsulates the record content.

Field properties

Now that we have identified record and field boundaries, we can drill down into the records and define the data types and names of individual fields. This is done for each record type in turn. We can select which records to define by selecting them from the Record Name drop-down box or by pressing the Next Record Type button.

It is important to be as liberal as possible when defining field data types because any mismatches will cause errors that will need to be handled. Being liberal in our record definitions will allow us to validate the messages, as described in Chapter 13, Building Validation into Services, without raising system errors.

Field properties

The Name column represents the element name of this field. The wizard will attempt to guess the type of the field, but it is important to always check this because the sample data you are using may not include all possibilities. A common error is to tag numeric fields as integers when they should really be strings; accept integer types only when the values are likely to have arithmetic operations performed on them.

Verifying the result

We have now completed our mapping and can verify what has been done by looking at the generated XML Schema file. Note that the generated schema uses some Oracle extensions to enable a non-XML formatted file to be represented as XML. In particular, the nxsd namespace prefix is used to identify field separators and record terminators.
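
As a rough sketch, and using the hypothetical payroll layout from earlier (showing just the payment records for brevity), the generated schema might contain declarations along the following lines; the exact attributes depend on the options chosen in the wizard:

<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            xmlns:nxsd="http://xmlns.oracle.com/pcbpel/nxsd"
            targetNamespace="http://TargetNamespace.com/PayrollInputFileService"
            elementFormDefault="qualified"
            nxsd:version="NXSD" nxsd:stream="chars" nxsd:encoding="US-ASCII">

  <xsd:element name="PayrollList">
    <xsd:complexType>
      <xsd:sequence>
        <xsd:element name="PayrollRecord" maxOccurs="unbounded">
          <xsd:complexType>
            <xsd:sequence>
              <xsd:element name="EmployeeId" type="xsd:string"
                           nxsd:style="terminated" nxsd:terminatedBy=","/>
              <xsd:element name="EmployeeName" type="xsd:string"
                           nxsd:style="terminated" nxsd:terminatedBy=","/>
              <xsd:element name="Amount" type="xsd:string"
                           nxsd:style="terminated" nxsd:terminatedBy="${eol}"/>
            </xsd:sequence>
          </xsd:complexType>
        </xsd:element>
      </xsd:sequence>
    </xsd:complexType>
  </xsd:element>

</xsd:schema>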

Note

The XML Schema generated can be edited manually. This is useful to support nested records (records inside other records) like those that may be found in a file containing order records with nested detail records (an order record contains multiple line item detail records). In this case, it is useful to use the wizard to generate a schema with order records and detail records at the same level. The schema can then be modified by hand to make the detail records children of the order records.

Verifying the result

Clicking the Test button brings up a simple test screen that allows us to apply our newly generated schema to the input document and examine the resulting XML.

Verifying the result

Clicking Next and then Finish will cause the generated schema file to be saved.

Finishing the wizards

Until now, no work has been saved, except for the XML Schema mapping the file content onto an XML structure. The rest of the adapter settings are not saved, and the endpoint is not set up until the Finish button is clicked on the completion screen, as shown in the following screenshot. Note that the file generated is a Web Service Description Language (WSDL) file with a JCA binding.

Finishing the wizards

Throttling the file and FTP adapter

The file and FTP adapters can consume a lot of resources when processing large files (thousands of records) because they keep sending messages containing batches of records until the file is fully read, without waiting for those batches to be processed. This behavior can be altered by forcing them to wait until a message is processed before sending another message. This is done by making the following changes to the WSDL generated by the wizard. This changes the one-way read operation into a two-way read operation that will not complete until a reply is generated by our code in BPEL or the Service Bus.

Creating a dummy message type

Add a new message definition to the WSDL like the one in the following code snippet:

<message name="Dummy_msg">
    <part xmlns:xsd="http://www.w3.org/2001/XMLSchema"
          name="Dummy" type="xsd:string"/>
</message>

Adding an output message to the read operation

In the <portType>, add an <output> element to the read <operation> element.

    <portType name="Read_ptt">
      <operation name="Read">
        <input message="tns:PayrollList_msg"/>
        <output message="tns:Dummy_msg"/>
      </operation>
    </portType>

In the <jca:operation> element, add an empty <output/> element.

Using the modified interface

The adapter will now have a two-way interface and will need to receive a reply to a message before it sends the next batch of records, thus throttling the throughput. Note that no data needs to be sent in the reply message. This will limit the number of active operations to the number of threads assigned to the file adapter.

Writing a payroll file

We can now use the FTP adapter to write the payroll file to a remote filesystem. This requires us to create another adapter within our BPEL process, Mediator, or Service Bus. Setting up the FTP adapter to write to a remote filesystem is very similar to reading files with the file adapter.

Selecting the FTP connection

The first difference is that when using the FTP adapter instead of the File adapter, we have to specify an FTP connection to use in the underlying application server. This connection is set up in the application server running the adapter; for example, when using the WebLogic Application Server, the WebLogic Console can be used. The JNDI location of the connection factory must be provided to the wizard and must be configured in the application server using its administrative tools. Refer to your application server documentation for details on how to do this, as it varies between application servers.

Selecting the FTP connection

Choosing the operation

When we choose the type of operation, we again notice that the screen is different from the file adapter, having an additional File Type category. This relates to the ASCII and Binary transfer modes of an FTP session. ASCII causes the FTP transfer to adapt to changes in character encoding between the two systems, for example, converting between EBCDIC and ASCII or altering line feeds between systems. When using text files, it is generally a good idea to select the ASCII format. When sending binary files, it is vital that the binary file type is used to avoid any unfortunate and unwanted transformations.

Choosing the operation

Selecting the file destination

Choosing where the file is created is the same for both the FTP and the file adapter. Again, there is a choice of physical or logical paths. The file naming convention allows us some control over the name of the output file. In addition to the %SEQ% symbol that inserts a unique sequence number, it is also possible to insert a date or date time string into the filename. Note that in the current release, you cannot have both a date time string and a sequence string in the file naming convention.
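
For example, a naming convention such as the following hypothetical pattern would produce files named payroll_1.txt, payroll_2.txt, and so on, as the sequence number increments:

payroll_%SEQ%.txt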

Note

When using a date time string as part of the filename, files with the same date time string will overwrite each other. If this is the case, then consider using a sequence number instead.

When producing an output file, we can either keep appending to a single file, by selecting the Append to existing file checkbox, in which case the file will keep growing without limit, or we can create new files, with the decision to start a new file being dependent on attributes of the data being written. The latter is the normal way of working for non-XML files, and a new output file will be generated after one or more records have been written to the adapter.

Selecting the file destination

The criteria for deciding to write to a new file are as follows:

  • Number of Messages Equals: This criterion forces the file to be written when the given number of messages is reached. This can be thought of as batching the output so that we reduce the number of files created.
  • Elapsed Time Exceeds: This criterion puts a time limit on how long the adapter will keep the file open. This places an upper time limit on creating an output file.
  • File Size Exceeds: This criterion allows us to limit the file sizes. As soon as a message causes the file to exceed the given size, no further message will be appended to this file.

These criteria can all be applied together, and as soon as one of them is satisfied, a new file will be created.

Tip

Writing XML files

When writing XML files, care should be taken to have only a single message per file, or else the file will contain multiple XML root elements, making it an invalid XML document.

Completing the FTP file writer service

The next step in the wizard is to define the actual record formats. This is exactly the same as when creating an input file. If we don't have an existing XML Schema for the output file, then we can use the wizard to create one if we have a sample file to use.

Finally, remember to run through the wizard to the end and click Finish rather than Cancel, or our entire configuration will be lost.

Moving, copying, and deleting files

Sometimes we will just want an adapter to move, copy, or delete a file without reading it. We will use the ability of the file adapter to move a file (refer to Chapter 16, Message Interaction Patterns, to set up a scheduler service within the SOA Suite).

The following steps will configure an outbound file or FTP adapter to move, copy, or delete a file without reading it.

Generating an adapter

Use the file or FTP adapter wizard to generate an outbound adapter with a file synchronous read or FTP synchronous get operation. You may also use a write or put operation. The data location should use physical directories, and the content should be marked as opaque, so that there is no need to understand the content of the file. Once this has been done, we will modify the WSDL generated to add additional operations.

Modifying the port type

First, we edit the WSDL file itself, which we call <AdapterServiceName>.wsdl. Modify the port type of the adapter to include the additional operations required, as shown in the following code snippet. Use the same message type as the operation generated by the wizard.

    <portType name="Write_ptt">
      <operation name="Write">
        <input message="tns:Write_msg"/>
        </operation>
      <operation name="Move">
          <input message="tns:Write_msg"/>
        </operation>
    </portType>

Note that the following operation names are supported:

  • Move
  • Copy
  • Delete

Modifying the binding

We now edit the bindings, which are contained in a separate file from the WSDL, called <AdapterServiceName>_<file/ftp>.jca. Bindings describe how the service description maps onto the physical service implementation. For now, we will just modify the binding to add the additional operations needed and map them to the appropriate implementation, as shown in the following code snippet:

<endpoint-interaction portType="Write_ptt" operation="Write">
  <interaction-spec className="oracle.tip.adapter.file.outbound.FileInteractionSpec">
    <property name="PhysicalDirectory" value="/user/oracle"/>
    <property name="FileNamingConvention" value="fred.txt"/>
    <property name="Append" value="false"/> 
  </interaction-spec>
</endpoint-interaction>
<endpoint-interaction portType="Write_ptt" operation="Move">
  <interaction-spec className="oracle.tip.adapter.file.outbound.FileIoInteractionSpec">
    <property name="Type" value="MOVE"/>
    <property name="SourcePhysicalDirectory" value="/usr/oracle"/>
    <property name="SourceFileName" value="fred.txt"/>
    <property name="TargetPhysicalDirectory" value="/usr/payroll"/>
    <property name="TargetFileName" value="Payroll.txt"/>
  </interaction-spec>
</endpoint-interaction>

Note that the following types are supported for use with the equivalent operation names. Observe that operation names are mixed case and types are uppercase:

  • MOVE
  • COPY
  • DELETE

The interaction-spec is used to define the types of operations supported by this particular binding. It references a Java class that provides the functionality and may have properties associated with it to configure its behavior. When using the FTP adapter for move, copy, and delete operations, the InteractionSpec property in the <AdapterServiceName>_ftp.jca file must be changed to oracle.tip.adapter.ftp.outbound.FTPIoInteractionSpec.
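
For example, the Move interaction for an FTP adapter might look broadly like the file version shown earlier, with the FTP interaction class substituted; the directory and filename values here are purely illustrative:

<endpoint-interaction portType="Write_ptt" operation="Move">
  <interaction-spec className="oracle.tip.adapter.ftp.outbound.FTPIoInteractionSpec">
    <property name="Type" value="MOVE"/>
    <property name="SourcePhysicalDirectory" value="/usr/oracle"/>
    <property name="SourceFileName" value="fred.txt"/>
    <property name="TargetPhysicalDirectory" value="/usr/payroll"/>
    <property name="TargetFileName" value="Payroll.txt"/>
  </interaction-spec>
</endpoint-interaction>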

Configuring file locations through additional header properties

In order to allow runtime configuration of the source and destination locations, it is necessary to pass the source and destination information as properties of the call to the adapter.

For example, when using a BPEL invoke activity, we would pass the properties, as shown in the following code snippet:

<invoke name="Invoke_Move" inputVariable="Invoke_Move_InputVariable"
        partnerLink="MoveFileService" portType="ns1:Write_ptt"
        operation="Move">
  <bpelx:inputProperty name="jca.file.SourceDirectory"
                       variable="srcDir"/>
  <bpelx:inputProperty name="jca.file.SourceFileName"
                       variable="srcFile"/>
  <bpelx:inputProperty name="jca.file.TargetDirectory"
                       variable="dstDir"/>
  <bpelx:inputProperty name="jca.file.TargetFileName"
                       variable="dstFile"/>
</invoke>

These properties will override the default values in the .jca file and can be used to dynamically select at runtime the locations to be used for the move, copy, or delete operation. The properties may be edited in the source tab of the BPEL document, the .bpel file, or they may be created and modified through the Properties tab of the Invoke dialog in the BPEL visual editor.

Configuring file locations through additional header properties

With these modifications, the move, copy, or delete operations will appear as additional operations on the service that can be invoked from the service bus or within BPEL.

Adapter headers

In addition to the data associated with the service being provided by the adapter, sometimes referred to as the payload of the service, it is also possible to configure or obtain information about the operation of an adapter through header messages. Header messages are passed as properties to the adapter. We demonstrated this in the previous section when setting the header properties for the Move operation of the file adapter.

The properties passed are of the following two formats:

<property name="AdapterSpecificPropertyName" value="data"/>
<property name="AdapterSpecificPropertyName" variable="varName"/>

The former style allows the passing of literal values through the header. The latter allows the property to be set from a string variable.

Testing the file adapters

We can test the adapters by using them within a BPEL process or a Mediator, like the one shown in the following screenshot. Building a BPEL process is covered in Chapter 5, Building Composite Services and Business Processes. This uses the two services that we have just described and links them with a copy operation that transforms data from one format to the other.

Testing the file adapters

A payroll use case

Consider a company that has a payroll application that produces a file detailing payments. This file must be transformed into a file format that is accepted by the company's bank and then delivered to the bank via FTP. The company wants to use SOA technologies to perform this transfer because it allows them to perform additional validations or enrichment of the data before sending it to the bank. In addition, they want to store the details of what was sent in a database for audit purposes. In this scenario, a file adapter could be used to take the data from the file, an FTP adapter to deliver it to the bank, and a database adapter could post it into the tables required for audit purposes.

Reading a payroll file

Let's look at how we would read from a payroll file. Normally, we will poll to check for the arrival of a file, although it is also possible to read a file without polling. The key points to be considered beforehand are:

  • How often should we poll for the file?
  • Do we need to read the contents of the file?
  • Do we need to move it to a different location?
  • What do we do with the file when we have read or moved it?
    • Should we delete it?
    • Should we move it to an archive directory?
  • How large is the file and its records?
  • Does the file have one record or many?

We will consider all these factors as we interact with the File Adapter Wizard.

Starting the wizard

We begin by dragging the file adapter from the component palette in JDeveloper onto either a BPEL process or an SCA Assembly (refer to Chapter 2, Writing your First Composite for more information on building a composite).

This causes the File Adapter Configuration Wizard to start.

Starting the wizard

Naming the service

Clicking on Next allows us to choose a name for the service that we are creating and optionally a description. We will use the service name PayrollinputFileService. Any name can be used, as long as it has some meaning to the developers. It is a good idea to have a consistent naming convention, for example, identifying the business role (PayrollInput), the technology (File), and the fact that this is a service (PayrollinputFileService).

Naming the service

Identifying the operation

Clicking on Next allows us to either import an existing WSDL definition for our service or create a new service definition. We would import an existing WSDL to reuse an existing adapter configuration that had been created previously. Choosing Define from operation and schema (specified later) allows us to create a new definition.

Identifying the operation

If we choose to create a new definition, then we start by specifying how we map the files onto a service. It is here that we decide whether we are reading or writing the file. When reading a file, we decide if we wish to generate an event when it is available (a normal Read File operation that requires an inbound operation to receive the message) or if we want to read it only when requested (a Synchronous Read File operation that requires an outbound operation).

Identifying the operation

Tip

Who calls who?

We usually think of a service as something that we call and then get a result from. However, in reality, services in a service-oriented architecture will often initiate events. These events may be delivered to a BPEL process which is waiting for an event, or routed to another service through the Service Bus, Mediator, or even initiate a whole new SCA Assembly. Under the covers, an adapter might need to poll to detect an event, but the service will always be able to generate an event. With a service, we either call it to get a result or it generates an event that calls some other service or process.

The file adapter wizard exposes four types of operation, as outlined in the following table. We will explore the read operation to generate events as a file is created.

Operation Type

Direction

Description

Read File

Inbound call from service

Reads the file and generates one or more calls into BPEL, Mediator, or Service Bus when a file appears.

Write File

Outbound call to service with no response

Writes a file, with one or more calls from BPEL, Mediator, or the Service Bus, causing records to be written to a file.

Synchronous Read File

Outbound call to service returning file contents

BPEL, Mediator, or Service Bus requests a file to be read, returning nothing if the file doesn't exist.

List Files

Outbound call to service returning a list of files in a directory

Provides a means for listing the files in a directory.

Tip

Why ignore the contents of the file?

The file adapter has an option named Do not read file content. This is used when the file is just a signal for some event. Do not use this feature for the scenario where a file is written and then marked as available by another file being written. This is explicitly handled elsewhere in the file adapter. Instead, the feature can be used as a signal of some event that has no relevant data other than the fact that something has happened. Although the file itself is not readable, certain metadata is made available as part of the message sent.

Identifying the operation

Defining the file location

Clicking on Next allows us to configure the location of the file. Locations can be specified as either physical (mapped directly onto the filesystem) or logical (an indirection to the real location). The Directory for Incoming Files specifies where the adapter should look to find new files. If the file should appear in a subdirectory of the one specified, then the Process files recursively box should be checked.

Defining the file location

The key question now is what to do with the file when it appears. One option is to keep a copy of the file in an archive directory. This is achieved by checking the Archive processed files attribute and providing a location for the file archive. In addition to archiving the file, we need to decide if we want to delete the original file. This is indicated by the Delete files after successful retrieval checkbox.

Tip

Logical versus Physical locations

The file adapter allows us to use logical (Logical Name) or physical (Physical Path) locations for files. Physical locations are easier for developers because we embed the exact file location into the assembly with no more work required. However, this only works if the file locations are the same in the development, test, and production environments, which is unlikely, particularly if development is done on Windows but production runs on Linux. Hence, for production systems it is best to use logical locations that are mapped onto physical locations when deployed. Chapter 19, Packaging and Deploying, shows how this mapping may differ for each environment.

Selecting specific files

Having defined the location where files are found, we can now move on to the next step in the wizard. Here we describe what the filenames look like. We can describe filenames using either wildcards (using '*' to represent a sequence of 0 or more characters) or using Java regular expressions, as described in the documentation for the java.util.regex.Pattern class. Usually wildcards will be good enough. For example, if we want to select all files that start with "PR" and end with ".txt", then we would use the wildcard string "PR*.txt" or the regular expression "PR.*\.txt". As you can see, it is generally easier to use wildcards rather than regular expressions. We can also specify a pattern to identify which files should not be processed.

The final part of this screen in the adapter wizard asks if the file contains a single message or many messages. This is confusing because when the screen refers to messages, it really means records.

Selecting specific files

Tip

XML files

It is worth remembering that a well formed XML document can only have a single root element, and hence an XML input file will normally have only a single input record. In the case of very large XML files, it is possible to have the file adapter batch the file up into multiple messages, in which case the root element is replicated in each message, and the second level elements are treated as records. This behavior is requested by setting the streaming option.

By default, a message will contain a single record from the file. Records will be defined in the next step of the wizard. If the file causes a BPEL process to be started, then a 1000 record file would result in 1000 BPEL processes being initiated. To improve efficiency, records can be batched, and the Publish Messages in Batches of attribute controls the maximum number of records in a message.

Tip

Message batching

It is common for an incoming file to contain many records. How those records are batched into messages affects both system performance and memory requirements: larger batches mean fewer messages (and fewer process instances), but more memory consumed per message. Hence it is important to choose a batch size that matches the likely impact on system resources.

Detecting that the file is available

The next step in the wizard allows us to configure the frequency of polling for the inbound file. There are two parameters that can be configured here: the Polling Frequency and the Minimum File Age.

Detecting that the file is available

The Polling Frequency is simply the time delay between checks to see whether a file is available for processing. The adapter will check once per interval to see if the file exists. Setting this too low can consume needless CPU resources; setting it too high can make the system appear unresponsive. 'Too high' and 'too low' are very subjective and will depend on your individual requirements. For example, the polling interval for a file that is expected to be written twice a day may be set to three hours, while the interval for a file that is expected to be written every hour may be set to 15 minutes.

Minimum File Age specifies how old a file must be before it is processed by the adapter. This setting allows a file to be completely written before it is read. For example, a large file may take five minutes to write out from the original application. If the file is read three minutes after it has been created, then it is possible for the adapter to run out of records to read and assume the file has been processed when, in reality, the application is still writing to it. Setting the minimum age to ten minutes would avoid this problem by giving the application at least ten minutes to write the file.

As an alternative to polling for a file directly, we may use a trigger file to indicate that a file is available. Some systems write large files to disk and then indicate that they are available by writing a trigger file. This avoids the problems with reading an incomplete file we identified in the previous paragraph, without the delay in processing the file that a minimum age field may cause.

Message format

The penultimate step in the file adapter wizard is to set up the format of records or messages in the file. This is one of the most critical steps, as it defines the format of the messages generated from the file.

Messages may be opaque, meaning that they are passed around as black boxes. This may be appropriate, for example, for a Microsoft Word file that must merely be transported from point A to point B without being examined. This is indicated by the Native format translation is not required (Schema is Opaque) checkbox.

Message format

If the document is already in an XML format, then we can just specify a schema and an expected root element, and the job is done. Normally, however, the file is in some non-XML format that must be mapped onto an XML Schema generated through the Native Format Builder wizard, which is invoked through the Define Schema for Native Format button.

Defining a native format schema

Invoking the Native Format Builder wizard brings up an initial start screen that leads on to the first step in the wizard, choosing the type of format, as shown in the following screenshot:

Defining a native format schema

This allows us to identify the overall record structure. If we have an existing schema document that describes the record structure, then we can point to that. Usually, we will need to determine the type of structure of the file ourselves. The choices available are:

  • Delimited: These are files such as CSV (comma-separated values) files, or files whose records use spaces or '+' signs as separators.
  • Fixed Length: These are files whose records consist of fixed-length fields. Be careful not to confuse these with space-separated files: a value that does not fill its entire field will usually be padded with spaces.
  • Complex Type: These files may include nested records, like a master-detail type structure.
  • DTD to be converted to XSD: These are XML Document Type Definition (DTD) files that will be mapped onto an XML Schema description of the file content.
  • Cobol Copybook to be converted to native format: These are files that have usually been produced by a COBOL system, often originating from a mainframe.

We will look at a delimited file, as it is one of the most common formats.

Although we are using the delimited file type, the steps involved are basically the same for most file types, including the fixed-length field format, which is also extremely common.

Using a sample file

To make it easier to describe the format of the incoming file, the wizard asks us to specify a file to use as a sample. If necessary, we can skip rows in the file and specify the number of records to read. Obviously, reading a very large number of records may take a while, and if all the variability in the file is in the first ten records, then there is no point in wasting time reading any more sample records. We may also choose to restrict the number of rows processed at runtime.

Setting the character set needs to be done carefully, particularly in international environments where non-ASCII character sets may be common.

After selecting a sample file, the wizard will display an initial view of the file with a guess at the field separators.

Using a sample file

Record structure

The next step of the wizard allows us to describe how the records appear in the file.

The first option, File contains only one record, allows us to process the file as a single message. This can be useful when the file has multiple records, all of the same format, which we want to read as a single message. Use of this option disables batching.

The next option, File contains multiple record instances, allows batching to take place. Records are either of the same type or of different types. They can only be marked as being of different types if they can be distinguished based on the first field in the record. In other words, in order to choose the Multiple records are of different types option, the first field in every record must be a record type identifier. In the example shown in the preceding screenshot, the first field is either an H, identifying a header record, or an R, identifying a regular record.
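To make this concrete, a delimited payroll file of the kind being described might look like the following purely illustrative sample (the field layout and values are invented for this sketch, not taken from the book's screenshots); the first field flags each line as a header (H) or a payroll record (R):

H,20110331,WEEKLY
R,1001,John Smith,2500.00
R,1002,Jane Jones,2750.00
R,1003,Bill Evans,1980.50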

Record structure

Choosing a root element

The next step allows us to define the target namespace and root element of the schema that we are generating.

Choosing a root element

Note

Don't forget that when using the Native Format Builder wizard, we are just creating an XML Schema document that describes the native (non-XML) format data. Most of the time this schema is transparent to us. However, at times the XML constructs have to emerge, for example, when identifying a name for a root element. The file is described using an XML Schema extension known as NXSD.

As we can see, the root element is mandatory. This root element acts as a wrapper for the records in a message. If message batching is set to 1, then each wrapper will have a single sub-element, namely the record. If the batch size is set to greater than 1, then each wrapper will have at least one, and possibly more, sub-elements, each sub-element being a record. There can never be more sub-elements than the batch size.

Message delimiters

Having described the overall structure of the file, we can now drill down into the individual fields. To do this, we first specify the message delimiters.

In addition to field delimiters, we can also specify a record delimiter. Usually record delimiters are new lines. If fields are also wrapped in quotation marks, then these can be stripped off by specifying the Optionally enclosed by character.

Message delimiters

Record type names

The wizard will identify the types of records based on the first field in each record, as shown in the preceding screenshot. It is possible to ignore record types by selecting them and clicking Delete. If this is done by mistake, then they can be added back by using the Add button; note that only record types that exist in the sample data can be added in the wizard.

Record type names

Note that if we want to reset the record types screen, then the Scan button will rescan the sample file and look for all the different record types it contains.

The Record Name field can be set by double-clicking it and providing a suitable record name. This record name is the XML element name that encapsulates the record content.

Field properties

Now that we have identified record and field boundaries, we can drill down into the records and define the data types and names of individual fields. This is done for each record type in turn. We can select which records to define by selecting them from the Record Name drop-down box or by pressing the Next Record Type button.

It is important to be as liberal as possible when defining field data types because any mismatches will cause errors that will need to be handled. Being liberal in our record definitions will allow us to validate the messages, as described in Chapter 13, Building Validation into Services, without raising system errors.

Field properties

The Name column represents the element name of this field. The wizard will attempt to guess the type of each field, but it is important to always check this because the sample data you are using may not include all possibilities. A common error is to tag numeric fields as integers when they should really be strings; accept integer types only when the values are likely to have arithmetic operations performed on them.

Verifying the result

We have now completed our mapping and can verify what has been done by looking at the generated XML Schema file. Note that the generated schema uses some Oracle extensions to enable a non-XML formatted file to be represented as XML. In particular the nxsd namespace prefix is used to identify field separators and record terminators.
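For illustration, a fragment of such a generated schema for a simple comma-delimited file might look like the following sketch. The element names and target namespace are hypothetical, and for simplicity it assumes a single record type rather than the header/detail mix discussed earlier, but the nxsd:style and nxsd:terminatedBy annotations are the kind of extensions used to mark field separators and record terminators:

<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            xmlns:nxsd="http://xmlns.oracle.com/pcbpel/nxsd"
            targetNamespace="http://TargetNamespace.com/PayrollInput"
            xmlns:tns="http://TargetNamespace.com/PayrollInput"
            elementFormDefault="qualified"
            nxsd:version="NXSD" nxsd:stream="chars" nxsd:encoding="US-ASCII">
  <!-- Root element wrapping the batch of records in each message -->
  <xsd:element name="PayrollList">
    <xsd:complexType>
      <xsd:sequence>
        <!-- One Record element per line of the input file -->
        <xsd:element name="Record" minOccurs="1" maxOccurs="unbounded">
          <xsd:complexType>
            <xsd:sequence>
              <!-- Fields are comma-terminated; the last field runs to the end of the line -->
              <xsd:element name="EmployeeId" type="xsd:string"
                           nxsd:style="terminated" nxsd:terminatedBy=","/>
              <xsd:element name="Name" type="xsd:string"
                           nxsd:style="terminated" nxsd:terminatedBy=","/>
              <xsd:element name="Amount" type="xsd:string"
                           nxsd:style="terminated" nxsd:terminatedBy="${eol}"/>
            </xsd:sequence>
          </xsd:complexType>
        </xsd:element>
      </xsd:sequence>
    </xsd:complexType>
  </xsd:element>
</xsd:schema>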

Note

The XML Schema generated can be edited manually. This is useful to support nested records (records inside other records) like those that may be found in a file containing order records with nested detail records (an order record contains multiple line item detail records). In this case, it is useful to use the wizard to generate a schema with order records and detail records at the same level. The schema can then be modified by hand to make the detail records children of the order records.

Verifying the result

Clicking the Test button brings up a simple test screen that allows us to apply our newly generated schema to the input document and examine the resulting XML.

Verifying the result

Clicking Next and then Finish will cause the generated schema file to be saved.

Finishing the wizards

Until now, no work has been saved, except for the XML Schema mapping the file content onto an XML structure. The rest of the adapter settings are not saved, and the endpoint is not set up until the Finish button is clicked on the completion screen, as shown in the following screenshot. Note that the file generated is a Web Service Description Language (WSDL) file with a JCA binding.

Finishing the wizards

Throttling the file and FTP adapter

The file and FTP adapters can consume a lot of resources when processing large files (thousands of records) because they keep sending messages containing batches of records until the file is processed, without waiting for earlier batches to be processed. This behavior can be altered by forcing them to wait until a message has been processed before sending the next one. This is done by making the following changes to the WSDL generated by the wizard, which turn the one-way read operation into a two-way read operation that will not complete until a reply is generated by our code in BPEL or the Service Bus.

Creating a dummy message type

Add a new message definition to the WSDL like the one in the following code snippet:

<message name="Dummy_msg">
    <part xmlns:xsd="http://www.w3.org/2001/XMLSchema"
          name="Dummy" type="xsd:string"/>
</message>

Adding an output message to the read operation

In the <portType>, add an <output> element to the read <operation> element.

    <portType name="Read_ptt">
      <operation name="Read">
        <input message="tns:PayrollList_msg"/>
        <output message="tns:Dummy_msg"/>
      </operation>
    </portType>

In the <jca:operation> element, add an empty <output/> element.

Using the modified interface

The adapter will now have a two-way interface and will need to receive a reply to each message before it sends the next batch of records, thus throttling the throughput. Note that no data needs to be sent in the reply message. This will limit the number of active operations to the number of threads assigned to the file adapter.
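In a BPEL process, releasing the next batch then amounts to replying on the inbound partner link once the current batch has been handled. The following is a rough sketch under assumed names (the partner link, namespace prefix, and variable names are ours, not anything generated automatically); the content of the reply is ignored by the adapter:

<!-- Sketch: reply to the inbound Read operation to release the next batch of records -->
<reply name="Reply_ReleaseNextBatch"
       partnerLink="PayrollInputFileService"
       portType="ns1:Read_ptt"
       operation="Read"
       variable="DummyReplyVariable"/>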

Writing a payroll file

We can now use the FTP adapter to write the payroll file to a remote filesystem. This requires us to create another adapter within our BPEL process, Mediator, or Service Bus. Setting up the FTP adapter to write to a remote filesystem is very similar to reading files with the file adapter.

Selecting the FTP connection

The first difference is that, when using the FTP adapter instead of the file adapter, we have to specify an FTP connection defined in the underlying application server; it is the JNDI location of this connection factory that must be provided to the wizard. The connection is configured using the administrative tools provided by the application server, for example, the WebLogic Console when running on WebLogic Server. Refer to your application server documentation for details, as the procedure varies between application servers.
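As a hedged illustration, the connection ends up being referenced by its JNDI name in the generated .jca binding file; eis/Ftp/FtpAdapter is the name commonly used as the out-of-the-box default, but whatever name was configured in your application server is what must be supplied to the wizard:

<!-- The location attribute must match a connection factory configured in the application server -->
<connection-factory location="eis/Ftp/FtpAdapter"/>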

Selecting the FTP connection

Choosing the operation

When we choose the type of operation, we again notice that the screen is different from the file adapter, having an additional File Type category. This relates to the ASCII and binary transfer modes of an FTP session. ASCII causes the FTP transfer to adapt to changes in character encoding between the two systems, for example, converting between EBCDIC and ASCII or altering line endings between systems. When transferring text files, it is generally a good idea to select the ASCII format. When sending binary files, it is vital that the binary file type is used to avoid any unfortunate and unwanted transformations.

Choosing the operation

Selecting the file destination

Choosing where the file is created is the same for both the FTP and the file adapter. Again, there is a choice of physical or logical paths. The file naming convention allows us some control over the name of the output file. In addition to the %SEQ% symbol that inserts a unique sequence number, it is also possible to insert a date or date time string into the filename. Note that in the current release, you cannot have both a date time string and a sequence string in the file naming convention.

Note

When using a date time string as part of the filename, files with the same date time string will overwrite each other. If this is the case, then consider using a sequence number instead.
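For instance, a naming convention of payroll_%SEQ%.txt would produce payroll_1.txt, payroll_2.txt, and so on. A date-based convention follows the same pattern with a date-time placeholder instead; the exact placeholder syntax varies by adapter release, so the second line below is illustrative only:

payroll_%SEQ%.txt             producing payroll_1.txt, payroll_2.txt, ...
payroll_%yyMMddHHmmss%.txt    producing payroll_110331143000.txt, ...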

When producing an output file, we can either keep appending to a single file by selecting the Append to existing file checkbox, in which case the file will keep growing without limit, or we can create new files, with the point at which a new file is started depending on attributes of the data being written. The latter is the normal way of working for non-XML files; a new output file will be generated once one or more records have been written to the adapter and one of the criteria below is met.

Selecting the file destination

The criteria for deciding to write to a new file are as follows:

  • Number of Messages Equals: This criterion forces the file to be written once the given number of messages has been received. It can be thought of as batching the output to reduce the number of files created.
  • Elapsed Time Exceeds: This criterion puts an upper time limit on how long the adapter will keep a file open before starting a new one.
  • File Size Exceeds: This criterion allows us to limit file sizes. As soon as a message causes the file to exceed the given size, no further messages will be appended to that file.

These criteria can all be applied together, and as soon as one of them is satisfied, a new file will be created.
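These settings surface as properties on the outbound interaction spec in the generated .jca file. The fragment below is a sketch with illustrative values; the property names follow the adapter's usual conventions, but the units assumed here (seconds for ElapsedTime, bytes for FileSize) should be verified against the file actually generated for your project:

<interaction-spec className="oracle.tip.adapter.file.outbound.FileInteractionSpec">
  <property name="PhysicalDirectory" value="/usr/payroll/out"/>
  <property name="FileNamingConvention" value="payroll_%SEQ%.txt"/>
  <property name="Append" value="false"/>
  <!-- Roll over to a new file after 100 messages, one hour, or roughly 1 MB, whichever happens first -->
  <property name="NumberMessages" value="100"/>
  <property name="ElapsedTime" value="3600"/>
  <property name="FileSize" value="1048576"/>
</interaction-spec>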

Tip

Writing XML files

When writing XML files, care should be taken to have only a single message per file; otherwise there will be multiple XML root elements in the document, which would make it an invalid XML document.

Completing the FTP file writer service

The next step in the wizard is to define the actual record formats. This is exactly the same as when creating an input file. If we don't have an existing XML Schema for the output file, then we can use the wizard to create one if we have a sample file to use.

Finally, remember to run through the wizard to the end and click Finish rather than Cancel, or our entire configuration will be lost.

Moving, copying, and deleting files

Sometimes we will just want an adapter to move, copy, or delete a file without reading it. We will use the ability of the file adapter to move a file (refer to Chapter 16, Message Interaction Patterns, to set up a scheduler service within the SOA Suite).

The following steps will configure an outbound file or FTP adapter to move, copy, or delete a file without reading it.

Generating an adapter

Use the file or FTP adapter wizard to generate an outbound adapter with a file synchronous read or FTP synchronous get operation. You may also use a write or put operation. The data location should use physical directories, and the content should be marked as opaque, so that there is no need to understand the content of the file. Once this has been done, we will modify the WSDL generated to add additional operations.

Modifying the port type

First, we edit the WSDL file itself, which we call <AdapterServiceName>.wsdl. Modify the port type of the adapter to include the additional operations required, as shown in the following code snippet. Use the same message type as the operation generated by the wizard.

    <portType name="Write_ptt">
      <operation name="Write">
        <input message="tns:Write_msg"/>
      </operation>
      <operation name="Move">
        <input message="tns:Write_msg"/>
      </operation>
    </portType>

Note that the following operation names are supported:

  • Move
  • Copy
  • Delete

Modifying the binding

We now edit the bindings, which are contained in a file separate from the WSDL, called <AdapterServiceName>_<file/ftp>.jca. Bindings describe how the service description maps onto the physical service implementation. For now, we will just modify the binding to add the additional operations needed and map them to the appropriate implementation, as shown in the following code snippet:

<!-- Write operation generated by the wizard -->
<endpoint-interaction portType="Write_ptt" operation="Write">
  <interaction-spec className="oracle.tip.adapter.file.outbound.FileInteractionSpec">
    <property name="PhysicalDirectory" value="/user/oracle"/>
    <property name="FileNamingConvention" value="fred.txt"/>
    <property name="Append" value="false"/>
  </interaction-spec>
</endpoint-interaction>
<!-- Additional Move operation mapped onto the file I/O interaction spec -->
<endpoint-interaction portType="Write_ptt" operation="Move">
  <interaction-spec className="oracle.tip.adapter.file.outbound.FileIoInteractionSpec">
    <property name="Type" value="MOVE"/>
    <property name="SourcePhysicalDirectory" value="/usr/oracle"/>
    <property name="SourceFileName" value="fred.txt"/>
    <property name="TargetPhysicalDirectory" value="/usr/payroll"/>
    <property name="TargetFileName" value="Payroll.txt"/>
  </interaction-spec>
</endpoint-interaction>

Note that the following types are supported for use with the equivalent operation names. Observe that operation names are mixed case and types are uppercase:

  • MOVE
  • COPY
  • DELETE

The interaction-spec is used to define the types of operations supported by this particular binding. It references a Java class that provides the functionality and may have properties associated with it to configure its behavior. When using the FTP adapter for move, copy, and delete operations, the InteractionSpec property in the <AdapterServiceName>_ftp.jca file must be changed to oracle.tip.adapter.ftp.outbound.FTPIoInteractionSpec.
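Putting that together, the equivalent Move binding for the FTP adapter looks much like the file adapter example above, with only the interaction spec class changed; the directory and filename values below are placeholders:

<endpoint-interaction portType="Write_ptt" operation="Move">
  <interaction-spec className="oracle.tip.adapter.ftp.outbound.FTPIoInteractionSpec">
    <property name="Type" value="MOVE"/>
    <property name="SourcePhysicalDirectory" value="/usr/oracle"/>
    <property name="SourceFileName" value="fred.txt"/>
    <property name="TargetPhysicalDirectory" value="/usr/payroll"/>
    <property name="TargetFileName" value="Payroll.txt"/>
  </interaction-spec>
</endpoint-interaction>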

Configuring file locations through additional header properties

In order to allow runtime configuration of the source and destination locations, it is necessary to pass the source and destination information as properties of the call to the adapter.

For example, when using a BPEL invoke activity, we would pass the properties, as shown in the following code snippet:

<invoke name="Invoke_Move" inputVariable="Invoke_Move_InputVariable"
        partnerLink="MoveFileService" portType="ns1:Write_ptt"
        operation="Move">
  <bpelx:inputProperty name="jca.file.SourceDirectory"
                       variable="srcDir"/>
  <bpelx:inputProperty name="jca.file.SourceFileName"
                       variable="srcFile"/>
  <bpelx:inputProperty name="jca.file.TargetDirectory"
                       variable="dstDir"/>
  <bpelx:inputProperty name="jca.file.TargetFileName"
                       variable="dstFile"/>
</invoke>

These properties will override the default values in the .jca file and can be used to select dynamically, at runtime, the locations to be used for the move, copy, or delete operation. The properties may be edited in the source view of the BPEL document (the .bpel file), or they may be created and modified through the Properties tab of the Invoke dialog in the BPEL visual editor.

Configuring file locations through additional header properties

With these modifications, the move, copy, or delete operations will appear as additional operations on the service that can be invoked from the service bus or within BPEL.

Adapter headers

In addition to the data associated with the service being provided by the adapter, sometimes referred to as the payload of the service, it is also possible to configure or obtain information about the operation of an adapter through header messages. Header messages are passed as properties to the adapter. We demonstrated this in the previous section when setting the header properties for the Move operation of the file adapter.

The properties passed are of the following two formats:

<property name="AdapterSpecificPropertyName" value="data"/>
<property name="AdapterSpecificPropertyName" variable="varName"/>

The former style allows the passing of literal values through the header. The latter allows the property to be set from a string variable.
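For inbound adapters the same mechanism works in the other direction: properties such as the name of the file that triggered the message can be captured on the receive activity. The sketch below assumes the property name jca.file.FileName and our own partner link and variable names; the exact extension element differs between BPEL versions, so confirm it through the Properties tab of the Receive dialog rather than copying this verbatim:

<!-- Sketch only: capture the inbound filename into a string variable (verify the syntax for your BPEL version) -->
<receive name="Receive_PayrollFile" createInstance="yes"
         partnerLink="PayrollInputFileService" portType="ns1:Read_ptt"
         operation="Read" variable="Receive_PayrollFile_InputVariable">
  <bpelx:property name="jca.file.FileName" variable="fileName"/>
</receive>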

Testing the file adapters

We can test the adapters by using them within a BPEL process or a Mediator like the one shown in the following screenshot. Building a BPEL process is covered in Chapter 5, Building Composite Services and Business Processes. This uses the two services that we have just described and links them with a copy operation that transforms data from one format to the other.

Testing the file adapters

Reading a payroll file

Let's look at how we would read from a payroll file. Normally, we will poll to check for the arrival of a file, although it is also possible to read a file without polling. The key points to be considered beforehand are:

  • How often should we poll for the file?
  • Do we need to read the contents of the file?
  • Do we need to move it to a different location?
  • What do we do with the file when we have read or moved it?
    • Should we delete it?
    • Should we move it to an archive directory?
  • How large is the file and its records?
  • Does the file have one record or many?

We will consider all these factors as we interact with the File Adapter Wizard.

Starting the wizard

We begin by dragging the file adapter from the component palette in JDeveloper onto either a BPEL process or an SCA Assembly (refer to Chapter 2, Writing your First Composite for more information on building a composite).

This causes the File Adapter Configuration Wizard to start.

Starting the wizard

Naming the service

Clicking on Next allows us to choose a name for the service that we are creating and optionally a description. We will use the service name PayrollinputFileService. Any name can be used, as long as it has some meaning to the developers. It is a good idea to have a consistent naming convention, for example, identifying the business role (PayrollInput), the technology (File), and the fact that this is a service (PayrollinputFileService).

Naming the service

Identifying the operation

Clicking on Next allows us to either import an existing WSDL definition for our service or create a new service definition. We would import an existing WSDL to reuse an existing adapter configuration that had been created previously. Choosing Define from operation and schema (specified later) allows us to create a new definition.

Identifying the operation

If we choose to create a new definition, then we start by specifying how we map the files onto a service. It is here that we decide whether we are reading or writing the file. When reading a file, we decide if we wish to generate an event when it is available (a normal Read File operation that requires an inbound operation to receive the message) or if we want to read it only when requested (a Synchronous Read File operation that requires an outbound operation).

Identifying the operation

Tip

Who calls who?

We usually think of a service as something that we call and then get a result from. However, in reality, services in a service-oriented architecture will often initiate events. These events may be delivered to a BPEL process which is waiting for an event, or routed to another service through the Service Bus, Mediator, or even initiate a whole new SCA Assembly. Under the covers, an adapter might need to poll to detect an event, but the service will always be able to generate an event. With a service, we either call it to get a result or it generates an event that calls some other service or process.

The file adapter wizard exposes four types of operation, as outlined in the following table. We will explore the read operation to generate events as a file is created.

Operation Type

Direction

Description

Read File

Inbound call from service

Reads the file and generates one or more calls into BPEL, Mediator, or Service Bus when a file appears.

Write File

Outbound call to service with no response

Writes a file, with one or more calls from BPEL, Mediator, or the Service Bus, causing records to be written to a file.

Synchronous Read File

Outbound call to service returning file contents

BPEL, Mediator, or Service Bus requests a file to be read, returning nothing if the file doesn't exist.

List Files

Outbound call to service returning a list of files in a directory

Provides a means for listing the files in a directory.

Tip

Why ignore the contents of the file?

The file adapter has an option named Do not read file content. This is used when the file is just a signal for some event. Do not use this feature for the scenario where a file is written and then marked as available by another file being written. This is explicitly handled elsewhere in the file adapter. Instead, the feature can be used as a signal of some event that has no relevant data other than the fact that something has happened. Although the file itself is not readable, certain metadata is made available as part of the message sent.

Identifying the operation

Defining the file location

Clicking on Next allows us to configure the location of the file. Locations can be specified as either physical (mapped directly onto the filesystem) or logical (an indirection to the real location). The Directory for Incoming Files specifies where the adapter should look to find new files. If the file should appear in a subdirectory of the one specified, then the Process files recursively box should be checked.

Defining the file location

The key question now is what to do with the file when it appears. One option is to keep a copy of the file in an archive directory. This is achieved by checking the Archive processed files attribute and providing a location for the file archive. In addition to archiving the file, we need to decide if we want to delete the original file. This is indicated by the Delete files after successful retrieval checkbox.

Tip

Logical versus Physical locations

The file adapter allows us to have logical (Logical Name) or physical locations (Physical Path) for files. Physical locations are easier for developers as we embed the exact file location into the assembly with no more work required. However, this only works if the file locations are the same in the development, test and production environments, particularly unlikely if development is done on Windows but production is on Linux. Hence for production systems, it is best to use logical locations that must be mapped onto physical locations when deployed. Chapter 19, Packaging and Deploying shows how this mapping may be different for each environment.

Selecting specific files

Having defined the location where files are found, we can now move on to the next step in the wizard. Here we describe what the filenames look like. We can describe filenames using either wildcards (using '*' to represent a sequence of 0 or more characters) or using Java regular expressions, as described in the documentation for the java.util.regex.Pattern class. Usually wildcards will be good enough. For example, if we want to select all files that start with "PR" and end with ".txt", then we would use the wildcard string "PR*.txt" or the regular expression "PR.*\.txt". As you can see, it is generally easier to use wildcards rather than regular expressions. We can also specify a pattern to identify which files should not be processed.

The final part of this screen in the adapter wizard asks if the file contains a single message or many messages. This is confusing because when the screen refers to messages, it really means records.

Selecting specific files

Tip

XML files

It is worth remembering that a well formed XML document can only have a single root element, and hence an XML input file will normally have only a single input record. In the case of very large XML files, it is possible to have the file adapter batch the file up into multiple messages, in which case the root element is replicated in each message, and the second level elements are treated as records. This behavior is requested by setting the streaming option.

By default, a message will contain a single record from the file. Records will be defined in the next step of the wizard. If the file causes a BPEL process to be started, then a 1000 record file would result in 1000 BPEL processes being initiated. To improve efficiency, records can be batched, and the Publish Messages in Batches of attribute controls the maximum number of records in a message.

Tip

Message batching

It is common for an incoming file to contain many records. These records, when processed, can impact system performance and memory requirements. Hence it is important to align the use of the records with their likely impact on system resources.

Detecting that the file is available

The next step in the wizard allows us to configure the frequency of polling for the inbound file. There are two parameters that can be configured here–the Polling Frequency and the Minimum File Age.

Detecting that the file is available

The Polling Frequency just means the time delay between checking to see if a file is available for processing. The adapter will check once per interval to see if the file exists. Setting this too low can consume needless CPU resources, setting it too high can make the system appear unresponsive. 'Too high' and 'too low' are very subjective and will depend on your individual requirements. For example, the polling interval for a file that is expected to be written twice a day may be set to three hours, while the interval for a file that is expected to be written every hour may be set to 15 minutes.

Minimum File Age specifies how old a file must be before it is processed by the adapter. This setting allows a file to be completely written before it is read. For example, a large file may take five minutes to write out from the original application. If the file is read three minutes after it has been created, then it is possible for the adapter to run out of records to read and assume the file has been processed, when in reality, the application is still writing to the file. Setting a minimum age to ten minutes would avoid this problem by giving the application at least ten minutes to write the file.

As an alternative to polling for a file directly, we may use a trigger file to indicate that a file is available. Some systems write large files to disk and then indicate that they are available by writing a trigger file. This avoids the problems with reading an incomplete file we identified in the previous paragraph, without the delay in processing the file that a minimum age field may cause.

Message format

The penultimate step in the file adapter is to set up the format of records or messages in the file. This is one of the most critical steps, as this defines the format of messages generated by a file.

Messages may be opaque, meaning that they are passed around as black boxes. This may be appropriate with a Microsoft Word file, for example, that must merely be transported from point A to point B without being examined. This is indicated by the Native format translation is not required (Schema is Opaque) checkbox.

Message format

If the document is already in an XML format, then we can just specify a schema and an expected root element and the job is done. Normally the file is in some non-XML format that must be mapped onto an XML Schema generated through the native format builder wizard, which is invoked through the Define Schema for Native Format button.

Defining a native format schema

Invoking the Native Format Builder wizard brings up an initial start screen that leads on to the first step in the wizard, choosing the type of format, as shown in the following screenshot:

Defining a native format schema

This allows us to identify the overall record structure. If we have an existing schema document that describes the record structure, then we can point to that. Usually, we will need to determine the type of structure of the file ourselves. The choices available are:

  • Delimited: These are files such as CSV files (Comma Separated Values), records with spaces, or '+' signs for separators.
  • Fixed Length: These are files whose records consist of fixed length fields. Be careful not to confuse these with space-separated files, as if a value does not fill the entire field, it will usually be padded with spaces.
  • Complex Type: These files may include nested records like a master detail type structure.
  • DTD to be converted to XSD: These are XML Data Type Definition XML files that will be mapped onto an XML Schema description of the file content.
  • Cobol Copybook to be converted to native format: These are files that have usually been produced by a COBOL system, often originating from a mainframe.

We will look at a delimited file, as it is one of the most common formats.

Although we are using the separator file type, the steps involved are basically the same for most file types including the fixed length field format, which is also extremely common.

Using a sample file

To make it easier to describe the format of the incoming file, the wizard asks us to specify a file to use as a sample. If necessary, we can skip rows in the file and determine the number of records to read. Obviously, reading a very large number of records may take a while, and if all the variability on the file is in the first ten records, then there is no point in wasting time reading any more sample records. We may also choose to restrict the number of rows processed at runtime.

Setting the character set needs to be done carefully, particularly in international environments where non-ASCII character sets may be common.

After selecting a sample file, the wizard will display an initial view of the file with a guess at the field separators.

Using a sample file

Record structure

The next step of the wizard allows us to describe how the records appear in the file.

The first option File contains only one record allows us to process the file as a single message. This can be useful when the file has multiple records, all of the same format, which we want to read as a single message. Use of this option disables batching.

The next option of File contains multiple record instances allows batching to take place. Records are either of the same type or of different types. They can only be marked of different types if they can be distinguished, based on the first field in the record. In other words, in order to choose the Multiple records are of different types, the first field in all the records must be a record type identifier. In the example shown in the preceding screenshot, the first field is either an H for Header records or an R for Records.

Record structure

Choosing a root element

The next step allows us to define the target namespace and root element of the schema that we are generating.

Choosing a root element

Note

Don't forget that when using the Native Format Builder wizard, we are just creating an XML Schema document that describes the native (non-XML) format data. Most of the time this schema is transparent to us. However, at times the XML constructs have to emerge, for example, when identifying a name for a root element. The file is described using an XML Schema extension known as NXSD.

As we can see the root element is mandatory. This root element acts as a wrapper for the records in a message. If message batching is set to 1, then each wrapper will have a single sub-element, namely, the record. If the message is set to greater than 1, then each wrapper will have at least one and possibly more sub-elements, each sub-element being a record. There can never be more sub-elements than the batch size.

Message delimiters

Having described the overall structure of the file, we can now drill down into the individual fields. To do this, we first specify the message delimiters.

In addition to field delimiters, we can also specify a record delimiter. Usually record delimiters are new lines. If fields are also wrapped in quotation marks, then these can be stripped off by specifying the Optionally enclosed by character.

Message delimiters

Record type names

The wizard will identify the types of records based on the first field in each record, as shown in the preceding screenshot. It is possible to ignore record types by selecting them and clicking Delete. If this is done by mistake, then it is possible to add them back by using the Add button. Only fields that exist in the sample data can be added in the wizard.

Record type names

Note that if we want to reset the record types screen, then the Scan button will rescan the sample file and look for all the different record types it contains.

The Record Name field can be set by double-clicking it and providing a suitable record name. This record name is the XML element name that encapsulates the record content.

Field properties

Now that we have identified record and field boundaries, we can drill down into the records and define the data types and names of individual fields. This is done for each record type in turn. We can select which records to define by selecting them from the Record Name drop-down box or by pressing the Next Record Type button.

It is important to be as liberal as possible when defining field data types because any mismatches will cause errors that will need to be handled. Being liberal in our record definitions will allow us to validate the messages, as described in Chapter 13, Building Validation into Services, without raising system errors.

Field properties

The Name column represents the element name of this field. The wizard will attempt to guess the type of the field, but it is important to always check this because the sample data you are using may not include all possibilities. A common error is identifying numbers to be tagged as integers, when they should really be strings—accept integer types only when they are likely to have arithmetic operations performed on them.

Verifying the result

We have now completed our mapping and can verify what has been done by looking at the generated XML Schema file. Note that the generated schema uses some Oracle extensions to enable a non-XML formatted file to be represented as XML. In particular the nxsd namespace prefix is used to identify field separators and record terminators.

Note

The XML Schema generated can be edited manually. This is useful to support nested records (records inside other records) like those that may be found in a file containing order records with nested detail records (an order record contains multiple line item detail records). In this case, it is useful to use the wizard to generate a schema with order records and detail records at the same level. The schema can then be modified by hand to make the detail records children of the order records.

Verifying the result

Clicking the Test button brings up a simple test screen that allows us to apply our newly generated schema to the input document and examine the resulting XML.

Verifying the result

Clicking Next and then Finish will cause the generated schema file to be saved.

Finishing the wizards

Until now, no work has been saved, except for the XML Schema mapping the file content onto an XML structure. The rest of the adapter settings are not saved, and the endpoint is not set up until the Finish button is clicked on the completion screen, as shown in the following screenshot. Note that the file generated is a Web Service Description Language (WSDL) file with a JCA binding.

Finishing the wizards

Throttling the file and FTP adapter

The file and FTP adapters can consume a lot of resources when processing large files (thousands of records) because they keep sending messages with batches of records until the file is processed, while not waiting for the records to be processed. This behavior can be altered by forcing them to wait until a message is processed before sending another message. This is done by making the following changes to the WSDL generated by the wizard. This changes the one-way read operation into a two-way read operation that will not complete until a reply is generated by our code in BPEL or the Service Bus.

Creating a dummy message type

Add a new message definition to the WSDL like the one in the following code snippet:

<message name="Dummy_msg">
    <part xmlns:xsd="http://www.w3.org/2001/XMLSchema"
          name="Dummy" type="xsd:string"/>
</message>

Adding an output message to the read operation

In the <portType> , add an <output> element to the read <operation> element.

    <portType name="Read_ptt">
      <operation name="Read">
        <input message="tns:PayrollList_msg"/>
        <output message="tns:Dummy_msg"/>
      </operation>
    </portType>

In the <jca:operation> element, add an empty <output/> element.

Using the modified interface

The adapter will now have a two way interface and will need to receive a reply to a message before it sends the next batch of records, thus throttling the throughput. Note that no data needs to be sent in the reply message. This will limit the number of active operations to the number of threads assigned to the file adapter.

Writing a payroll file

We can now use the FTP adapter to write the payroll file to a remote filesystem. This requires us to create another adapter within our BPEL process, Mediator, or Service Bus. Setting up the FTP adapter to write to a remote filesystem is very similar to reading files with the file adapter.

Selecting the FTP connection

The first difference is that when using the FTP adapter instead of the File adapter, we have to specify an FTP connection to use in the underlying application server. This connection is set up in the application server running the adapter. For example, when using the WebLogic Application Server, the WebLogic Console can be used. The JNDI location of the connection factory is the location that must be provided to the wizard. The JNDI location must be configured in the application server using the administrative tools provided by the application server. Refer to your application server documentation for details on how to do this, as it varies between applications servers.

Selecting the FTP connection

Choosing the operation

When we choose the type of operation, we again notice that the screen is different from the file adapter, having an additional File Type category. This relates to the Ascii and Binary settings of an FTP session. ASCII causes the FTP transfer to adapt to changes in character encoding between the two systems, for example, converting between EBCDIC and ASCII or altering line feeds between systems. When using text files, it is generally a good idea to select the ASCII format. When sending binary files, it is vital that the binary file type is used to avoid any unfortunate and unwanted transformations.

Choosing the operation

Selecting the file destination

Choosing where the file is created is the same for both the FTP and the file adapter. Again, there is a choice of physical or logical paths. The file naming convention allows us some control over the name of the output file. In addition to the %SEQ% symbol that inserts a unique sequence number, it is also possible to insert a date or date time string into the filename. Note that in the current release, you cannot have both a date time string and a sequence string in the file naming convention.

Note

When using a date time string as part of the filename, files with the same date time string will overwrite each other. If this is the case, then consider using a sequence number instead.

When producing an output file, we can either keep appending to a single file by selecting the Append to existing file checkbox, which will keep growing without limit, or we can create new files, which will be dependent on attributes of the data being written. This is the normal way of working for non-XML files, and a new output file will be generated when one or more records are written to the adapter.

Selecting the file destination

The criteria for deciding to write to a new file are as follows:

  • Number of Messages Equals: This criterion forces the file to be written when the given number of messages is reached. This can be thought of as batching the output so that we reduce the number of files created.
  • Elapsed Time Exceeds: This criterion puts a time limit on how long the adapter will keep the file open. This places an upper time limit on creating an output file.
  • File Size Exceeds: This criterion allows us to limit the file sizes. As soon as a message causes the file to exceed the given size, no further message will be appended to this file.

These criteria can all be applied together, and as soon as one of them is satisfied, a new file will be created.

Tip

Writing XML files

When writing XML files, care should be taken to have only a single message per file, or else there will be multiple XML root elements in the document that will make it an invalid XML document.

Completing the FTP file writer service

The next step in the wizard is to define the actual record formats. This is exactly the same as when creating an input file. If we don't have an existing XML Schema for the output file, then we can use the wizard to create one if we have a sample file to use.

Finally, remember to run through the wizard to the end, and click Finish rather than Cancel or our entire configuration will be lost.

Moving, copying, and deleting files

Sometimes we will just want an adapter to move, copy, or delete a file without reading it. We will use the ability of the file adapter to move a file (refer to Chapter 16, Message Interaction Patterns, to set up a scheduler service within the SOA Suite).

The following steps will configure an outbound file or FTP adapter to move, copy, or delete a file without reading it.

Generating an adapter

Use the file or FTP adapter wizard to generate an outbound adapter with a file synchronous read or FTP synchronous get operation. You may also use a write or put operation. The data location should use physical directories, and the content should be marked as opaque, so that there is no need to understand the content of the file. Once this has been done, we will modify the WSDL generated to add additional operations.

Modifying the port type

First, we edit the WSDL file itself, which we call <AdapterServiceName>.wsdl. Modify the port type of the adapter to include the additional operations required, as shown in the following code snippet. Use the same message type as the operation generated by the wizard.

    <portType name="Write_ptt">
      <operation name="Write">
        <input message="tns:Write_msg"/>
        </operation>
      <operation name="Move">
          <input message="tns:Write_msg"/>
        </operation>
    </portType>

Note that the following operation names are supported:

  • Move
  • Copy
  • Delete

Modifying the binding

We now edit the bindings which are contained in a separate file to the WSDL called <AdapterServiceName>_<file/ftp>.jca. Bindings describe how the service description maps onto the physical service implementation. For now, we will just modify the binding to add the additional operations needed and map them to the appropriate implementation, as shown in the following code snippet:

<endpoint-interaction portType="Write_ptt" operation="Write">
  <interaction-spec className="oracle.tip.adapter.file.outbound.FileInteractionSpec">
    <property name="PhysicalDirectory" value="/user/oracle"/>
    <property name="FileNamingConvention" value="fred.txt"/>
    <property name="Append" value="false"/> 
  </interaction-spec>
</endpoint-interaction>
<endpoint-interaction portType="Write_ptt" operation="Move">
  <interaction-spec className="oracle.tip.adapter.file.outbound.FileIoInteractionSpec">
    <property name="Type" value="MOVE"/>
    <property name="SourcePhysicalDirectory" value="/usr/oracle"/>
    <property name="SourceFileName" value="fred.txt"/>
    <property name="TargetPhysicalDirectory" value="/usr/payroll"/>
    <property name="TargetFileName" value="Payroll.txt"/>
  </interaction-spec>
</endpoint-interaction>

Note that the following types are supported for use with the equivalent operation names. Observe that operation names are mixed case and types are uppercase:

  • MOVE
  • COPY
  • DELETE

The interaction-spec is used to define the types of operations supported by this particular binding. It references a Java class that provides the functionality and may have properties associated with it to configure its behavior. When using the FTP adapter for move, copy, and delete operations, the InteractionSpec property in the <AdapterServiceName>_ftp.jca file must be changed to oracle.tip.adapter.ftp.outbound.FTPIoInteractionSpec.

Configuring file locations through additional header properties

In order to allow runtime configuration of the source and destination locations, it is necessary to pass the source and destination information as properties of the call to the adapter.

For example, when using a BPEL invoke activity, we would pass the properties, as shown in the following code snippet:

<invoke name="Invoke_Move" inputVariable="Invoke_Move_InputVariable"
        partnerLink="MoveFileService" portType="ns1:Write_ptt"
        operation="Move">
  <bpelx:inputProperty name="jca.file.SourceDirectory"
                       variable="srcDir"/>
  <bpelx:inputProperty name="jca.file.SourceFileName"
                       variable="srcFile"/>
  <bpelx:inputProperty name="jca.file.TargetDirectory"
                       variable="dstDir"/>
  <bpelx:inputProperty name="jca.file.TargetFileName"
                       variable="dstFile"/>
</invoke>

These properties will override the default values in the .jca file and can be used to dynamically select at runtime the locations to be used for the move, copy, or delete operation. The properties may be edited in the source tab of the BPEL document, the .bpel file, or they may be created and modified through the Properties tab of the Invoke dialog in the BPEL visual editor.

Configuring file locations through additional header properties

With these modifications, the move, copy, or delete operations will appear as additional operations on the service that can be invoked from the service bus or within BPEL.

Adapter headers

In addition to the data associated with the service being provided by the adapter, sometimes referred to as the payload of the service, it is also possible to configure or obtain information about the operation of an adapter through header messages. Header messages are passed as properties to the adapter. We demonstrated this in the previous section when setting the header properties for the Move operation of the file adapter.

The properties passed are of the following two formats:

<property name="AdapterSpecificPropertyName" value="data"/>
<property name="AdapterSpecificPropertyName" variable="varName"/>

The former style allows the passing of literal values through the header. The latter allows the property to be set from a string variable.

Testing the file adapters

We can test the adapters by using them within a BPEL process or a Mediator like the one shown in the following screenshot. Building a BPEL process is covered in Chapter 5, Building Composite Services and Business Processes. This uses the two services we that have just described and links them with a copy operation that transforms data from one format to the other.

Testing the file adapters


Naming the service

Clicking on Next allows us to choose a name for the service that we are creating and optionally a description. We will use the service name PayrollinputFileService. Any name can be used, as long as it has some meaning to the developers. It is a good idea to have a consistent naming convention, for example, identifying the business role (PayrollInput), the technology (File), and the fact that this is a service (PayrollinputFileService).

Naming the service

Identifying the operation

Clicking on Next allows us to either import an existing WSDL definition for our service or create a new service definition. We would import an existing WSDL to reuse an existing adapter configuration that had been created previously. Choosing Define from operation and schema (specified later) allows us to create a new definition.

Identifying the operation

If we choose to create a new definition, then we start by specifying how we map the files onto a service. It is here that we decide whether we are reading or writing the file. When reading a file, we decide if we wish to generate an event when it is available (a normal Read File operation that requires an inbound operation to receive the message) or if we want to read it only when requested (a Synchronous Read File operation that requires an outbound operation).

Identifying the operation

Tip

Who calls who?

We usually think of a service as something that we call and then get a result from. However, in reality, services in a service-oriented architecture will often initiate events. These events may be delivered to a BPEL process which is waiting for an event, or routed to another service through the Service Bus, Mediator, or even initiate a whole new SCA Assembly. Under the covers, an adapter might need to poll to detect an event, but the service will always be able to generate an event. With a service, we either call it to get a result or it generates an event that calls some other service or process.

The file adapter wizard exposes four types of operation, as outlined in the following table. We will explore the read operation to generate events as a file is created.

Operation Type

Direction

Description

Read File

Inbound call from service

Reads the file and generates one or more calls into BPEL, Mediator, or Service Bus when a file appears.

Write File

Outbound call to service with no response

Writes a file, with one or more calls from BPEL, Mediator, or the Service Bus, causing records to be written to a file.

Synchronous Read File

Outbound call to service returning file contents

BPEL, Mediator, or Service Bus requests a file to be read, returning nothing if the file doesn't exist.

List Files

Outbound call to service returning a list of files in a directory

Provides a means for listing the files in a directory.

Tip

Why ignore the contents of the file?

The file adapter has an option named Do not read file content. This is used when the file is just a signal for some event. Do not use this feature for the scenario where a file is written and then marked as available by another file being written. This is explicitly handled elsewhere in the file adapter. Instead, the feature can be used as a signal of some event that has no relevant data other than the fact that something has happened. Although the file itself is not readable, certain metadata is made available as part of the message sent.

Identifying the operation

Defining the file location

Clicking on Next allows us to configure the location of the file. Locations can be specified as either physical (mapped directly onto the filesystem) or logical (an indirection to the real location). The Directory for Incoming Files specifies where the adapter should look to find new files. If the file should appear in a subdirectory of the one specified, then the Process files recursively box should be checked.

Defining the file location

The key question now is what to do with the file when it appears. One option is to keep a copy of the file in an archive directory. This is achieved by checking the Archive processed files attribute and providing a location for the file archive. In addition to archiving the file, we need to decide if we want to delete the original file. This is indicated by the Delete files after successful retrieval checkbox.

Tip

Logical versus Physical locations

The file adapter allows us to have logical (Logical Name) or physical locations (Physical Path) for files. Physical locations are easier for developers as we embed the exact file location into the assembly with no more work required. However, this only works if the file locations are the same in the development, test and production environments, particularly unlikely if development is done on Windows but production is on Linux. Hence for production systems, it is best to use logical locations that must be mapped onto physical locations when deployed. Chapter 19, Packaging and Deploying shows how this mapping may be different for each environment.

Selecting specific files

Having defined the location where files are found, we can now move on to the next step in the wizard. Here we describe what the filenames look like. We can describe filenames using either wildcards (using '*' to represent a sequence of 0 or more characters) or using Java regular expressions, as described in the documentation for the java.util.regex.Pattern class. Usually wildcards will be good enough. For example, if we want to select all files that start with "PR" and end with ".txt", then we would use the wildcard string "PR*.txt" or the regular expression "PR.*\.txt". As you can see, it is generally easier to use wildcards rather than regular expressions. We can also specify a pattern to identify which files should not be processed.

The final part of this screen in the adapter wizard asks if the file contains a single message or many messages. This is confusing because when the screen refers to messages, it really means records.

Selecting specific files

Tip

XML files

It is worth remembering that a well formed XML document can only have a single root element, and hence an XML input file will normally have only a single input record. In the case of very large XML files, it is possible to have the file adapter batch the file up into multiple messages, in which case the root element is replicated in each message, and the second level elements are treated as records. This behavior is requested by setting the streaming option.

By default, a message will contain a single record from the file. Records will be defined in the next step of the wizard. If the file causes a BPEL process to be started, then a 1000 record file would result in 1000 BPEL processes being initiated. To improve efficiency, records can be batched, and the Publish Messages in Batches of attribute controls the maximum number of records in a message.

Tip

Message batching

It is common for an incoming file to contain many records. These records, when processed, can impact system performance and memory requirements. Hence it is important to align the use of the records with their likely impact on system resources.

Detecting that the file is available

The next step in the wizard allows us to configure the frequency of polling for the inbound file. There are two parameters that can be configured here–the Polling Frequency and the Minimum File Age.

Detecting that the file is available

The Polling Frequency just means the time delay between checking to see if a file is available for processing. The adapter will check once per interval to see if the file exists. Setting this too low can consume needless CPU resources, setting it too high can make the system appear unresponsive. 'Too high' and 'too low' are very subjective and will depend on your individual requirements. For example, the polling interval for a file that is expected to be written twice a day may be set to three hours, while the interval for a file that is expected to be written every hour may be set to 15 minutes.

Minimum File Age specifies how old a file must be before it is processed by the adapter. This setting allows a file to be completely written before it is read. For example, a large file may take five minutes to write out from the original application. If the file is read three minutes after it has been created, then it is possible for the adapter to run out of records to read and assume the file has been processed, when in reality, the application is still writing to the file. Setting a minimum age to ten minutes would avoid this problem by giving the application at least ten minutes to write the file.

As an alternative to polling for a file directly, we may use a trigger file to indicate that a file is available. Some systems write large files to disk and then indicate that they are available by writing a trigger file. This avoids the problems with reading an incomplete file we identified in the previous paragraph, without the delay in processing the file that a minimum age field may cause.

Message format

The penultimate step in the file adapter is to set up the format of records or messages in the file. This is one of the most critical steps, as this defines the format of messages generated by a file.

Messages may be opaque, meaning that they are passed around as black boxes. This may be appropriate with a Microsoft Word file, for example, that must merely be transported from point A to point B without being examined. This is indicated by the Native format translation is not required (Schema is Opaque) checkbox.

Message format

If the document is already in an XML format, then we can just specify a schema and an expected root element and the job is done. Normally the file is in some non-XML format that must be mapped onto an XML Schema generated through the native format builder wizard, which is invoked through the Define Schema for Native Format button.

Defining a native format schema

Invoking the Native Format Builder wizard brings up an initial start screen that leads on to the first step in the wizard, choosing the type of format, as shown in the following screenshot:

Defining a native format schema

This allows us to identify the overall record structure. If we have an existing schema document that describes the record structure, then we can point to that. Usually, we will need to determine the type of structure of the file ourselves. The choices available are:

  • Delimited: These are files such as CSV files (Comma Separated Values), records with spaces, or '+' signs for separators.
  • Fixed Length: These are files whose records consist of fixed length fields. Be careful not to confuse these with space-separated files, as if a value does not fill the entire field, it will usually be padded with spaces.
  • Complex Type: These files may include nested records like a master detail type structure.
  • DTD to be converted to XSD: These are XML Data Type Definition XML files that will be mapped onto an XML Schema description of the file content.
  • Cobol Copybook to be converted to native format: These are files that have usually been produced by a COBOL system, often originating from a mainframe.

We will look at a delimited file, as it is one of the most common formats.

Although we are using the separator file type, the steps involved are basically the same for most file types including the fixed length field format, which is also extremely common.

Using a sample file

To make it easier to describe the format of the incoming file, the wizard asks us to specify a file to use as a sample. If necessary, we can skip rows in the file and determine the number of records to read. Obviously, reading a very large number of records may take a while, and if all the variability on the file is in the first ten records, then there is no point in wasting time reading any more sample records. We may also choose to restrict the number of rows processed at runtime.

Setting the character set needs to be done carefully, particularly in international environments where non-ASCII character sets may be common.

After selecting a sample file, the wizard will display an initial view of the file with a guess at the field separators.

Using a sample file

Record structure

The next step of the wizard allows us to describe how the records appear in the file.

The first option File contains only one record allows us to process the file as a single message. This can be useful when the file has multiple records, all of the same format, which we want to read as a single message. Use of this option disables batching.

The next option of File contains multiple record instances allows batching to take place. Records are either of the same type or of different types. They can only be marked of different types if they can be distinguished, based on the first field in the record. In other words, in order to choose the Multiple records are of different types, the first field in all the records must be a record type identifier. In the example shown in the preceding screenshot, the first field is either an H for Header records or an R for Records.

Record structure

Choosing a root element

The next step allows us to define the target namespace and root element of the schema that we are generating.

Choosing a root element

Note

Don't forget that when using the Native Format Builder wizard, we are just creating an XML Schema document that describes the native (non-XML) format data. Most of the time this schema is transparent to us. However, at times the XML constructs have to emerge, for example, when identifying a name for a root element. The file is described using an XML Schema extension known as NXSD.

As we can see the root element is mandatory. This root element acts as a wrapper for the records in a message. If message batching is set to 1, then each wrapper will have a single sub-element, namely, the record. If the message is set to greater than 1, then each wrapper will have at least one and possibly more sub-elements, each sub-element being a record. There can never be more sub-elements than the batch size.

Message delimiters

Having described the overall structure of the file, we can now drill down into the individual fields. To do this, we first specify the message delimiters.

In addition to field delimiters, we can also specify a record delimiter. Usually record delimiters are new lines. If fields are also wrapped in quotation marks, then these can be stripped off by specifying the Optionally enclosed by character.

Message delimiters

Record type names

The wizard will identify the types of records based on the first field in each record, as shown in the preceding screenshot. It is possible to ignore record types by selecting them and clicking Delete. If this is done by mistake, then it is possible to add them back by using the Add button. Only fields that exist in the sample data can be added in the wizard.

Record type names

Note that if we want to reset the record types screen, then the Scan button will rescan the sample file and look for all the different record types it contains.

The Record Name field can be set by double-clicking it and providing a suitable record name. This record name is the XML element name that encapsulates the record content.

Field properties

Now that we have identified record and field boundaries, we can drill down into the records and define the data types and names of individual fields. This is done for each record type in turn. We can select which records to define by selecting them from the Record Name drop-down box or by pressing the Next Record Type button.

It is important to be as liberal as possible when defining field data types because any mismatches will cause errors that will need to be handled. Being liberal in our record definitions will allow us to validate the messages, as described in Chapter 13, Building Validation into Services, without raising system errors.

Field properties

The Name column represents the element name of this field. The wizard will attempt to guess the type of the field, but it is important to always check this because the sample data you are using may not include all possibilities. A common error is identifying numbers to be tagged as integers, when they should really be strings—accept integer types only when they are likely to have arithmetic operations performed on them.

Verifying the result

We have now completed our mapping and can verify what has been done by looking at the generated XML Schema file. Note that the generated schema uses some Oracle extensions to enable a non-XML formatted file to be represented as XML. In particular the nxsd namespace prefix is used to identify field separators and record terminators.

Note

The XML Schema generated can be edited manually. This is useful to support nested records (records inside other records) like those that may be found in a file containing order records with nested detail records (an order record contains multiple line item detail records). In this case, it is useful to use the wizard to generate a schema with order records and detail records at the same level. The schema can then be modified by hand to make the detail records children of the order records.

Verifying the result

Clicking the Test button brings up a simple test screen that allows us to apply our newly generated schema to the input document and examine the resulting XML.

Verifying the result

Clicking Next and then Finish will cause the generated schema file to be saved.

Finishing the wizards

Until now, no work has been saved, except for the XML Schema mapping the file content onto an XML structure. The rest of the adapter settings are not saved, and the endpoint is not set up until the Finish button is clicked on the completion screen, as shown in the following screenshot. Note that the file generated is a Web Service Description Language (WSDL) file with a JCA binding.

Finishing the wizards
Throttling the file and FTP adapter

The file and FTP adapters can consume a lot of resources when processing large files (thousands of records) because they keep sending messages with batches of records until the file is processed, while not waiting for the records to be processed. This behavior can be altered by forcing them to wait until a message is processed before sending another message. This is done by making the following changes to the WSDL generated by the wizard. This changes the one-way read operation into a two-way read operation that will not complete until a reply is generated by our code in BPEL or the Service Bus.

Creating a dummy message type

Add a new message definition to the WSDL like the one in the following code snippet:

<message name="Dummy_msg">
    <part xmlns:xsd="http://www.w3.org/2001/XMLSchema"
          name="Dummy" type="xsd:string"/>
</message>

Adding an output message to the read operation

In the <portType> , add an <output> element to the read <operation> element.

    <portType name="Read_ptt">
      <operation name="Read">
        <input message="tns:PayrollList_msg"/>
        <output message="tns:Dummy_msg"/>
      </operation>
    </portType>

In the <jca:operation> element, add an empty <output/> element.

Using the modified interface

The adapter will now have a two way interface and will need to receive a reply to a message before it sends the next batch of records, thus throttling the throughput. Note that no data needs to be sent in the reply message. This will limit the number of active operations to the number of threads assigned to the file adapter.

Writing a payroll file

We can now use the FTP adapter to write the payroll file to a remote filesystem. This requires us to create another adapter within our BPEL process, Mediator, or Service Bus. Setting up the FTP adapter to write to a remote filesystem is very similar to reading files with the file adapter.

Selecting the FTP connection

The first difference is that when using the FTP adapter instead of the File adapter, we have to specify an FTP connection to use in the underlying application server. This connection is set up in the application server running the adapter. For example, when using the WebLogic Application Server, the WebLogic Console can be used. The JNDI location of the connection factory is the location that must be provided to the wizard. The JNDI location must be configured in the application server using the administrative tools provided by the application server. Refer to your application server documentation for details on how to do this, as it varies between applications servers.

Selecting the FTP connection

Choosing the operation

When we choose the type of operation, we again notice that the screen is different from the file adapter, having an additional File Type category. This relates to the Ascii and Binary settings of an FTP session. ASCII causes the FTP transfer to adapt to changes in character encoding between the two systems, for example, converting between EBCDIC and ASCII or altering line feeds between systems. When using text files, it is generally a good idea to select the ASCII format. When sending binary files, it is vital that the binary file type is used to avoid any unfortunate and unwanted transformations.

Choosing the operation

Selecting the file destination

Choosing where the file is created is the same for both the FTP and the file adapter. Again, there is a choice of physical or logical paths. The file naming convention allows us some control over the name of the output file. In addition to the %SEQ% symbol that inserts a unique sequence number, it is also possible to insert a date or date time string into the filename. Note that in the current release, you cannot have both a date time string and a sequence string in the file naming convention.

Note

When using a date time string as part of the filename, files with the same date time string will overwrite each other. If this is the case, then consider using a sequence number instead.

When producing an output file, we can either keep appending to a single file by selecting the Append to existing file checkbox, which will keep growing without limit, or we can create new files, which will be dependent on attributes of the data being written. This is the normal way of working for non-XML files, and a new output file will be generated when one or more records are written to the adapter.

Selecting the file destination

The criteria for deciding to write to a new file are as follows:

  • Number of Messages Equals: This criterion forces the file to be written when the given number of messages is reached. This can be thought of as batching the output so that we reduce the number of files created.
  • Elapsed Time Exceeds: This criterion puts a time limit on how long the adapter will keep the file open. This places an upper time limit on creating an output file.
  • File Size Exceeds: This criterion allows us to limit the file sizes. As soon as a message causes the file to exceed the given size, no further message will be appended to this file.

These criteria can all be applied together, and as soon as one of them is satisfied, a new file will be created.

Tip

Writing XML files

When writing XML files, care should be taken to have only a single message per file, or else there will be multiple XML root elements in the document that will make it an invalid XML document.

Completing the FTP file writer service

The next step in the wizard is to define the actual record formats. This is exactly the same as when creating an input file. If we don't have an existing XML Schema for the output file, then we can use the wizard to create one if we have a sample file to use.

Finally, remember to run through the wizard to the end, and click Finish rather than Cancel or our entire configuration will be lost.

Moving, copying, and deleting files

Sometimes we will just want an adapter to move, copy, or delete a file without reading it. We will use the ability of the file adapter to move a file (refer to Chapter 16, Message Interaction Patterns, to set up a scheduler service within the SOA Suite).

The following steps will configure an outbound file or FTP adapter to move, copy, or delete a file without reading it.

Generating an adapter

Use the file or FTP adapter wizard to generate an outbound adapter with a file synchronous read or FTP synchronous get operation. You may also use a write or put operation. The data location should use physical directories, and the content should be marked as opaque, so that there is no need to understand the content of the file. Once this has been done, we will modify the WSDL generated to add additional operations.

Modifying the port type

First, we edit the WSDL file itself, which we call <AdapterServiceName>.wsdl. Modify the port type of the adapter to include the additional operations required, as shown in the following code snippet. Use the same message type as the operation generated by the wizard.

    <portType name="Write_ptt">
      <operation name="Write">
        <input message="tns:Write_msg"/>
        </operation>
      <operation name="Move">
          <input message="tns:Write_msg"/>
        </operation>
    </portType>

Note that the following operation names are supported:

  • Move
  • Copy
  • Delete

Modifying the binding

We now edit the bindings which are contained in a separate file to the WSDL called <AdapterServiceName>_<file/ftp>.jca. Bindings describe how the service description maps onto the physical service implementation. For now, we will just modify the binding to add the additional operations needed and map them to the appropriate implementation, as shown in the following code snippet:

<endpoint-interaction portType="Write_ptt" operation="Write">
  <interaction-spec className="oracle.tip.adapter.file.outbound.FileInteractionSpec">
    <property name="PhysicalDirectory" value="/user/oracle"/>
    <property name="FileNamingConvention" value="fred.txt"/>
    <property name="Append" value="false"/> 
  </interaction-spec>
</endpoint-interaction>
<endpoint-interaction portType="Write_ptt" operation="Move">
  <interaction-spec className="oracle.tip.adapter.file.outbound.FileIoInteractionSpec">
    <property name="Type" value="MOVE"/>
    <property name="SourcePhysicalDirectory" value="/usr/oracle"/>
    <property name="SourceFileName" value="fred.txt"/>
    <property name="TargetPhysicalDirectory" value="/usr/payroll"/>
    <property name="TargetFileName" value="Payroll.txt"/>
  </interaction-spec>
</endpoint-interaction>

Note that the following types are supported for use with the equivalent operation names. Observe that operation names are mixed case and types are uppercase:

  • MOVE
  • COPY
  • DELETE

The interaction-spec is used to define the types of operations supported by this particular binding. It references a Java class that provides the functionality and may have properties associated with it to configure its behavior. When using the FTP adapter for move, copy, and delete operations, the InteractionSpec property in the <AdapterServiceName>_ftp.jca file must be changed to oracle.tip.adapter.ftp.outbound.FTPIoInteractionSpec.
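As a sketch of what the FTP variant looks like, the Move operation's endpoint interaction would reference the FTP interaction class instead. We are assuming here that the property names mirror those of the file adapter example above, so check them against the .jca file generated for your FTP adapter:

<endpoint-interaction portType="Write_ptt" operation="Move">
  <interaction-spec className="oracle.tip.adapter.ftp.outbound.FTPIoInteractionSpec">
    <!-- Property names assumed to mirror the file adapter; verify in your generated .jca -->
    <property name="Type" value="MOVE"/>
    <property name="SourcePhysicalDirectory" value="/usr/oracle"/>
    <property name="SourceFileName" value="fred.txt"/>
    <property name="TargetPhysicalDirectory" value="/usr/payroll"/>
    <property name="TargetFileName" value="Payroll.txt"/>
  </interaction-spec>
</endpoint-interaction>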

Configuring file locations through additional header properties

In order to allow runtime configuration of the source and destination locations, it is necessary to pass the source and destination information as properties of the call to the adapter.

For example, when using a BPEL invoke activity, we would pass the properties, as shown in the following code snippet:

<invoke name="Invoke_Move" inputVariable="Invoke_Move_InputVariable"
        partnerLink="MoveFileService" portType="ns1:Write_ptt"
        operation="Move">
  <bpelx:inputProperty name="jca.file.SourceDirectory"
                       variable="srcDir"/>
  <bpelx:inputProperty name="jca.file.SourceFileName"
                       variable="srcFile"/>
  <bpelx:inputProperty name="jca.file.TargetDirectory"
                       variable="dstDir"/>
  <bpelx:inputProperty name="jca.file.TargetFileName"
                       variable="dstFile"/>
</invoke>

These properties will override the default values in the .jca file and can be used to select at runtime the locations used for the move, copy, or delete operation. The properties may be edited in the source view of the BPEL document (the .bpel file), or they may be created and modified through the Properties tab of the Invoke dialog in the BPEL visual editor.
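The variables referenced by these properties are ordinary string variables in the BPEL process, so they must be declared and populated before the invoke runs. The following is a minimal sketch using BPEL 1.1-style assign syntax; the variable names simply match the invoke shown above:

<variables>
  <variable name="srcDir" type="xsd:string"/>
  <variable name="srcFile" type="xsd:string"/>
  <variable name="dstDir" type="xsd:string"/>
  <variable name="dstFile" type="xsd:string"/>
</variables>

<assign name="Assign_MoveLocations">
  <!-- Populate the source and target locations before calling the Move operation -->
  <copy>
    <from expression="'/usr/oracle'"/>
    <to variable="srcDir"/>
  </copy>
  <copy>
    <from expression="'fred.txt'"/>
    <to variable="srcFile"/>
  </copy>
  <copy>
    <from expression="'/usr/payroll'"/>
    <to variable="dstDir"/>
  </copy>
  <copy>
    <from expression="'Payroll.txt'"/>
    <to variable="dstFile"/>
  </copy>
</assign>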

Configuring file locations through additional header properties

With these modifications, the move, copy, or delete operations will appear as additional operations on the service that can be invoked from the service bus or within BPEL.

Adapter headers

In addition to the data associated with the service being provided by the adapter, sometimes referred to as the payload of the service, it is also possible to configure or obtain information about the operation of an adapter through header messages. Header messages are passed as properties to the adapter. We demonstrated this in the previous section when setting the header properties for the Move operation of the file adapter.

The properties passed are of the following two formats:

<property name="AdapterSpecificPropertyName" value="data"/>
<property name="AdapterSpecificPropertyName" variable="varName"/>

The former style allows the passing of literal values through the header. The latter allows the property to be set from a string variable.
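For example, an outbound write could have its target directory and file name supplied at runtime through header properties rather than the values fixed in the .jca file. The sketch below shows both the literal and the variable forms on a write invoke; treat the property names (jca.file.Directory, jca.file.FileName) and the partner link name as assumptions to be confirmed against the file adapter documentation for your release:

<invoke name="Invoke_Write" partnerLink="WriteFileService"
        portType="ns1:Write_ptt" operation="Write"
        inputVariable="Invoke_Write_InputVariable">
  <!-- Literal form: fixed file name -->
  <bpelx:inputProperty name="jca.file.FileName" value="Payroll.txt"/>
  <!-- Variable form: directory taken from a string variable at runtime -->
  <bpelx:inputProperty name="jca.file.Directory" variable="outDir"/>
</invoke>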

Testing the file adapters

We can test the adapters by using them within a BPEL process or a Mediator like the one shown in the following screenshot. Building a BPEL process is covered in Chapter 5, Building Composite Services and Business Processes. This uses the two services that we have just described and links them with a copy operation that transforms data from one format to the other.

Testing the file adapters

  • Elapsed Time Exceeds: This criterion puts a time limit on how long the adapter will keep the file open. This places an upper time limit on creating an output file.
  • File Size Exceeds: This criterion allows us to limit the file sizes. As soon as a message causes the file to exceed the given size, no further message will be appended to this file.

These criteria can all be applied together, and as soon as one of them is satisfied, a new file will be created.

Tip

Writing XML files

When writing XML files, care should be taken to have only a single message per file, or else there will be multiple XML root elements in the document that will make it an invalid XML document.

Completing the FTP file writer service

The next step in the wizard is to define the actual record formats. This is exactly the same as when creating an input file. If we don't have an existing XML Schema for the output file, then we can use the wizard to create one if we have a sample file to use.

Finally, remember to run through the wizard to the end, and click Finish rather than Cancel or our entire configuration will be lost.

Moving, copying, and deleting files

Sometimes we will just want an adapter to move, copy, or delete a file without reading it. We will use the ability of the file adapter to move a file (refer to Chapter 16, Message Interaction Patterns, to set up a scheduler service within the SOA Suite).

The following steps will configure an outbound file or FTP adapter to move, copy, or delete a file without reading it.

Generating an adapter

Use the file or FTP adapter wizard to generate an outbound adapter with a file synchronous read or FTP synchronous get operation. You may also use a write or put operation. The data location should use physical directories, and the content should be marked as opaque, so that there is no need to understand the content of the file. Once this has been done, we will modify the WSDL generated to add additional operations.

Modifying the port type

First, we edit the WSDL file itself, which we call <AdapterServiceName>.wsdl. Modify the port type of the adapter to include the additional operations required, as shown in the following code snippet. Use the same message type as the operation generated by the wizard.

    <portType name="Write_ptt">
      <operation name="Write">
        <input message="tns:Write_msg"/>
        </operation>
      <operation name="Move">
          <input message="tns:Write_msg"/>
        </operation>
    </portType>

Note that the following operation names are supported:

  • Move
  • Copy
  • Delete

Modifying the binding

We now edit the bindings which are contained in a separate file to the WSDL called <AdapterServiceName>_<file/ftp>.jca. Bindings describe how the service description maps onto the physical service implementation. For now, we will just modify the binding to add the additional operations needed and map them to the appropriate implementation, as shown in the following code snippet:

<endpoint-interaction portType="Write_ptt" operation="Write">
  <interaction-spec className="oracle.tip.adapter.file.outbound.FileInteractionSpec">
    <property name="PhysicalDirectory" value="/user/oracle"/>
    <property name="FileNamingConvention" value="fred.txt"/>
    <property name="Append" value="false"/> 
  </interaction-spec>
</endpoint-interaction>
<endpoint-interaction portType="Write_ptt" operation="Move">
  <interaction-spec className="oracle.tip.adapter.file.outbound.FileIoInteractionSpec">
    <property name="Type" value="MOVE"/>
    <property name="SourcePhysicalDirectory" value="/usr/oracle"/>
    <property name="SourceFileName" value="fred.txt"/>
    <property name="TargetPhysicalDirectory" value="/usr/payroll"/>
    <property name="TargetFileName" value="Payroll.txt"/>
  </interaction-spec>
</endpoint-interaction>

Note that the following types are supported for use with the equivalent operation names. Observe that operation names are mixed case and types are uppercase:

  • MOVE
  • COPY
  • DELETE

The interaction-spec is used to define the types of operations supported by this particular binding. It references a Java class that provides the functionality and may have properties associated with it to configure its behavior. When using the FTP adapter for move, copy, and delete operations, the InteractionSpec property in the <AdapterServiceName>_ftp.jca file must be changed to oracle.tip.adapter.ftp.outbound.FTPIoInteractionSpec.

Configuring file locations through additional header properties

In order to allow runtime configuration of the source and destination locations, it is necessary to pass the source and destination information as properties of the call to the adapter.

For example, when using a BPEL invoke activity, we would pass the properties, as shown in the following code snippet:

<invoke name="Invoke_Move" inputVariable="Invoke_Move_InputVariable"
        partnerLink="MoveFileService" portType="ns1:Write_ptt"
        operation="Move">
  <bpelx:inputProperty name="jca.file.SourceDirectory"
                       variable="srcDir"/>
  <bpelx:inputProperty name="jca.file.SourceFileName"
                       variable="srcFile"/>
  <bpelx:inputProperty name="jca.file.TargetDirectory"
                       variable="dstDir"/>
  <bpelx:inputProperty name="jca.file.TargetFileName"
                       variable="dstFile"/>
</invoke>

These properties will override the default values in the .jca file and can be used to dynamically select at runtime the locations to be used for the move, copy, or delete operation. The properties may be edited in the source tab of the BPEL document, the .bpel file, or they may be created and modified through the Properties tab of the Invoke dialog in the BPEL visual editor.

Configuring file locations through additional header properties

With these modifications, the move, copy, or delete operations will appear as additional operations on the service that can be invoked from the service bus or within BPEL.

Adapter headers

In addition to the data associated with the service being provided by the adapter, sometimes referred to as the payload of the service, it is also possible to configure or obtain information about the operation of an adapter through header messages. Header messages are passed as properties to the adapter. We demonstrated this in the previous section when setting the header properties for the Move operation of the file adapter.

The properties passed are of the following two formats:

<property name="AdapterSpecificPropertyName" value="data"/>
<property name="AdapterSpecificPropertyName" variable="varName"/>

The former style allows the passing of literal values through the header. The latter allows the property to be set from a string variable.

Testing the file adapters

We can test the adapters by using them within a BPEL process or a Mediator like the one shown in the following screenshot. Building a BPEL process is covered in Chapter 5, Building Composite Services and Business Processes. This uses the two services we that have just described and links them with a copy operation that transforms data from one format to the other.

Testing the file adapters

Defining the file location

Clicking on Next allows us to configure the location of the file. Locations can be specified as either physical (mapped directly onto the filesystem) or logical (an indirection to the real location). The Directory for Incoming Files specifies where the adapter should look to find new files. If the file should appear in a subdirectory of the one specified, then the Process files recursively box should be checked.

Defining the file location

The key question now is what to do with the file when it appears. One option is to keep a copy of the file in an archive directory. This is achieved by checking the Archive processed files attribute and providing a location for the file archive. In addition to archiving the file, we need to decide if we want to delete the original file. This is indicated by the Delete files after successful retrieval checkbox.

Tip

Logical versus Physical locations

The file adapter allows us to have logical (Logical Name) or physical locations (Physical Path) for files. Physical locations are easier for developers as we embed the exact file location into the assembly with no more work required. However, this only works if the file locations are the same in the development, test and production environments, particularly unlikely if development is done on Windows but production is on Linux. Hence for production systems, it is best to use logical locations that must be mapped onto physical locations when deployed. Chapter 19, Packaging and Deploying shows how this mapping may be different for each environment.

Selecting specific files

Having defined the location where files are found, we can now move on to the next step in the wizard. Here we describe what the filenames look like. We can describe filenames using either wildcards (using '*' to represent a sequence of 0 or more characters) or using Java regular expressions, as described in the documentation for the java.util.regex.Pattern class. Usually wildcards will be good enough. For example, if we want to select all files that start with "PR" and end with ".txt", then we would use the wildcard string "PR*.txt" or the regular expression "PR.*\.txt". As you can see, it is generally easier to use wildcards rather than regular expressions. We can also specify a pattern to identify which files should not be processed.

The final part of this screen in the adapter wizard asks if the file contains a single message or many messages. This is confusing because when the screen refers to messages, it really means records.

Selecting specific files

Tip

XML files

It is worth remembering that a well formed XML document can only have a single root element, and hence an XML input file will normally have only a single input record. In the case of very large XML files, it is possible to have the file adapter batch the file up into multiple messages, in which case the root element is replicated in each message, and the second level elements are treated as records. This behavior is requested by setting the streaming option.

By default, a message will contain a single record from the file. Records will be defined in the next step of the wizard. If the file causes a BPEL process to be started, then a 1000 record file would result in 1000 BPEL processes being initiated. To improve efficiency, records can be batched, and the Publish Messages in Batches of attribute controls the maximum number of records in a message.

Tip

Message batching

It is common for an incoming file to contain many records. These records, when processed, can impact system performance and memory requirements. Hence it is important to align the use of the records with their likely impact on system resources.

Detecting that the file is available

The next step in the wizard allows us to configure the frequency of polling for the inbound file. There are two parameters that can be configured here–the Polling Frequency and the Minimum File Age.

Detecting that the file is available

The Polling Frequency just means the time delay between checking to see if a file is available for processing. The adapter will check once per interval to see if the file exists. Setting this too low can consume needless CPU resources, setting it too high can make the system appear unresponsive. 'Too high' and 'too low' are very subjective and will depend on your individual requirements. For example, the polling interval for a file that is expected to be written twice a day may be set to three hours, while the interval for a file that is expected to be written every hour may be set to 15 minutes.

Minimum File Age specifies how old a file must be before it is processed by the adapter. This setting allows a file to be completely written before it is read. For example, a large file may take five minutes to write out from the original application. If the file is read three minutes after it has been created, then it is possible for the adapter to run out of records to read and assume the file has been processed, when in reality, the application is still writing to the file. Setting a minimum age to ten minutes would avoid this problem by giving the application at least ten minutes to write the file.

As an alternative to polling for a file directly, we may use a trigger file to indicate that a file is available. Some systems write large files to disk and then indicate that they are available by writing a trigger file. This avoids the problems with reading an incomplete file we identified in the previous paragraph, without the delay in processing the file that a minimum age field may cause.

Message format

The penultimate step in the file adapter is to set up the format of records or messages in the file. This is one of the most critical steps, as this defines the format of messages generated by a file.

Messages may be opaque, meaning that they are passed around as black boxes. This may be appropriate with a Microsoft Word file, for example, that must merely be transported from point A to point B without being examined. This is indicated by the Native format translation is not required (Schema is Opaque) checkbox.

Message format

If the document is already in an XML format, then we can just specify a schema and an expected root element and the job is done. Normally the file is in some non-XML format that must be mapped onto an XML Schema generated through the native format builder wizard, which is invoked through the Define Schema for Native Format button.

Defining a native format schema

Invoking the Native Format Builder wizard brings up an initial start screen that leads on to the first step in the wizard, choosing the type of format, as shown in the following screenshot:

Defining a native format schema

This allows us to identify the overall record structure. If we have an existing schema document that describes the record structure, then we can point to that. Usually, we will need to determine the type of structure of the file ourselves. The choices available are:

  • Delimited: These are files such as CSV files (Comma Separated Values), records with spaces, or '+' signs for separators.
  • Fixed Length: These are files whose records consist of fixed length fields. Be careful not to confuse these with space-separated files, as if a value does not fill the entire field, it will usually be padded with spaces.
  • Complex Type: These files may include nested records like a master detail type structure.
  • DTD to be converted to XSD: These are XML Data Type Definition XML files that will be mapped onto an XML Schema description of the file content.
  • Cobol Copybook to be converted to native format: These are files that have usually been produced by a COBOL system, often originating from a mainframe.

We will look at a delimited file, as it is one of the most common formats.

Although we are using the separator file type, the steps involved are basically the same for most file types including the fixed length field format, which is also extremely common.

Using a sample file

To make it easier to describe the format of the incoming file, the wizard asks us to specify a file to use as a sample. If necessary, we can skip rows in the file and determine the number of records to read. Obviously, reading a very large number of records may take a while, and if all the variability on the file is in the first ten records, then there is no point in wasting time reading any more sample records. We may also choose to restrict the number of rows processed at runtime.

Setting the character set needs to be done carefully, particularly in international environments where non-ASCII character sets may be common.

After selecting a sample file, the wizard will display an initial view of the file with a guess at the field separators.

Using a sample file

Record structure

The next step of the wizard allows us to describe how the records appear in the file.

The first option File contains only one record allows us to process the file as a single message. This can be useful when the file has multiple records, all of the same format, which we want to read as a single message. Use of this option disables batching.

The next option of File contains multiple record instances allows batching to take place. Records are either of the same type or of different types. They can only be marked of different types if they can be distinguished, based on the first field in the record. In other words, in order to choose the Multiple records are of different types, the first field in all the records must be a record type identifier. In the example shown in the preceding screenshot, the first field is either an H for Header records or an R for Records.

Record structure

Choosing a root element

The next step allows us to define the target namespace and root element of the schema that we are generating.

Choosing a root element

Note

Don't forget that when using the Native Format Builder wizard, we are just creating an XML Schema document that describes the native (non-XML) format data. Most of the time this schema is transparent to us. However, at times the XML constructs have to emerge, for example, when identifying a name for a root element. The file is described using an XML Schema extension known as NXSD.

As we can see the root element is mandatory. This root element acts as a wrapper for the records in a message. If message batching is set to 1, then each wrapper will have a single sub-element, namely, the record. If the message is set to greater than 1, then each wrapper will have at least one and possibly more sub-elements, each sub-element being a record. There can never be more sub-elements than the batch size.

Message delimiters

Having described the overall structure of the file, we can now drill down into the individual fields. To do this, we first specify the message delimiters.

In addition to field delimiters, we can also specify a record delimiter. Usually record delimiters are new lines. If fields are also wrapped in quotation marks, then these can be stripped off by specifying the Optionally enclosed by character.

Message delimiters

Record type names

The wizard will identify the types of records based on the first field in each record, as shown in the preceding screenshot. It is possible to ignore record types by selecting them and clicking Delete. If this is done by mistake, then it is possible to add them back by using the Add button. Only fields that exist in the sample data can be added in the wizard.

Record type names

Note that if we want to reset the record types screen, then the Scan button will rescan the sample file and look for all the different record types it contains.

The Record Name field can be set by double-clicking it and providing a suitable record name. This record name is the XML element name that encapsulates the record content.

Field properties

Now that we have identified record and field boundaries, we can drill down into the records and define the data types and names of individual fields. This is done for each record type in turn. We can select which records to define by selecting them from the Record Name drop-down box or by pressing the Next Record Type button.

It is important to be as liberal as possible when defining field data types because any mismatches will cause errors that will need to be handled. Being liberal in our record definitions will allow us to validate the messages, as described in Chapter 13, Building Validation into Services, without raising system errors.

Field properties

The Name column represents the element name of this field. The wizard will attempt to guess the type of the field, but it is important to always check this because the sample data you are using may not include all possibilities. A common error is identifying numbers to be tagged as integers, when they should really be strings—accept integer types only when they are likely to have arithmetic operations performed on them.

Verifying the result

We have now completed our mapping and can verify what has been done by looking at the generated XML Schema file. Note that the generated schema uses some Oracle extensions to enable a non-XML formatted file to be represented as XML. In particular the nxsd namespace prefix is used to identify field separators and record terminators.

Note

The XML Schema generated can be edited manually. This is useful to support nested records (records inside other records) like those that may be found in a file containing order records with nested detail records (an order record contains multiple line item detail records). In this case, it is useful to use the wizard to generate a schema with order records and detail records at the same level. The schema can then be modified by hand to make the detail records children of the order records.

Verifying the result

Clicking the Test button brings up a simple test screen that allows us to apply our newly generated schema to the input document and examine the resulting XML.

Verifying the result

Clicking Next and then Finish will cause the generated schema file to be saved.

Finishing the wizards

Until now, no work has been saved, except for the XML Schema mapping the file content onto an XML structure. The rest of the adapter settings are not saved, and the endpoint is not set up until the Finish button is clicked on the completion screen, as shown in the following screenshot. Note that the file generated is a Web Service Description Language (WSDL) file with a JCA binding.

Finishing the wizards
Throttling the file and FTP adapter

The file and FTP adapters can consume a lot of resources when processing large files (thousands of records) because they keep sending messages with batches of records until the file is processed, while not waiting for the records to be processed. This behavior can be altered by forcing them to wait until a message is processed before sending another message. This is done by making the following changes to the WSDL generated by the wizard. This changes the one-way read operation into a two-way read operation that will not complete until a reply is generated by our code in BPEL or the Service Bus.

Creating a dummy message type

Add a new message definition to the WSDL like the one in the following code snippet:

<message name="Dummy_msg">
    <part xmlns:xsd="http://www.w3.org/2001/XMLSchema"
          name="Dummy" type="xsd:string"/>
</message>

Adding an output message to the read operation

In the <portType> , add an <output> element to the read <operation> element.

    <portType name="Read_ptt">
      <operation name="Read">
        <input message="tns:PayrollList_msg"/>
        <output message="tns:Dummy_msg"/>
      </operation>
    </portType>

In the <jca:operation> element, add an empty <output/> element.

Using the modified interface

The adapter will now have a two way interface and will need to receive a reply to a message before it sends the next batch of records, thus throttling the throughput. Note that no data needs to be sent in the reply message. This will limit the number of active operations to the number of threads assigned to the file adapter.

Writing a payroll file

We can now use the FTP adapter to write the payroll file to a remote filesystem. This requires us to create another adapter within our BPEL process, Mediator, or Service Bus. Setting up the FTP adapter to write to a remote filesystem is very similar to reading files with the file adapter.

Selecting the FTP connection

The first difference is that when using the FTP adapter instead of the File adapter, we have to specify an FTP connection to use in the underlying application server. This connection is set up in the application server running the adapter. For example, when using the WebLogic Application Server, the WebLogic Console can be used. The JNDI location of the connection factory is the location that must be provided to the wizard. The JNDI location must be configured in the application server using the administrative tools provided by the application server. Refer to your application server documentation for details on how to do this, as it varies between applications servers.

Selecting the FTP connection

Choosing the operation

When we choose the type of operation, we again notice that the screen is different from the file adapter, having an additional File Type category. This relates to the Ascii and Binary settings of an FTP session. ASCII causes the FTP transfer to adapt to changes in character encoding between the two systems, for example, converting between EBCDIC and ASCII or altering line feeds between systems. When using text files, it is generally a good idea to select the ASCII format. When sending binary files, it is vital that the binary file type is used to avoid any unfortunate and unwanted transformations.

Choosing the operation

Selecting the file destination

Choosing where the file is created is the same for both the FTP and the file adapter. Again, there is a choice of physical or logical paths. The file naming convention allows us some control over the name of the output file. In addition to the %SEQ% symbol that inserts a unique sequence number, it is also possible to insert a date or date time string into the filename. Note that in the current release, you cannot have both a date time string and a sequence string in the file naming convention.

Note

When using a date time string as part of the filename, files with the same date time string will overwrite each other. If this is the case, then consider using a sequence number instead.

When producing an output file, we can either keep appending to a single file by selecting the Append to existing file checkbox, which will keep growing without limit, or we can create new files, which will be dependent on attributes of the data being written. This is the normal way of working for non-XML files, and a new output file will be generated when one or more records are written to the adapter.

Selecting the file destination

The criteria for deciding to write to a new file are as follows:

  • Number of Messages Equals: This criterion forces the file to be written when the given number of messages is reached. This can be thought of as batching the output so that we reduce the number of files created.
  • Elapsed Time Exceeds: This criterion puts a time limit on how long the adapter will keep the file open. This places an upper time limit on creating an output file.
  • File Size Exceeds: This criterion allows us to limit the file sizes. As soon as a message causes the file to exceed the given size, no further message will be appended to this file.

These criteria can all be applied together, and as soon as one of them is satisfied, a new file will be created.

Tip

Writing XML files

When writing XML files, care should be taken to have only a single message per file, or else there will be multiple XML root elements in the document that will make it an invalid XML document.

Completing the FTP file writer service

The next step in the wizard is to define the actual record formats. This is exactly the same as when creating an input file. If we don't have an existing XML Schema for the output file, then we can use the wizard to create one if we have a sample file to use.

Finally, remember to run through the wizard to the end, and click Finish rather than Cancel or our entire configuration will be lost.

Moving, copying, and deleting files

Sometimes we will just want an adapter to move, copy, or delete a file without reading it. We will use the ability of the file adapter to move a file (refer to Chapter 16, Message Interaction Patterns, to set up a scheduler service within the SOA Suite).

The following steps will configure an outbound file or FTP adapter to move, copy, or delete a file without reading it.

Generating an adapter

Use the file or FTP adapter wizard to generate an outbound adapter with a file synchronous read or FTP synchronous get operation. You may also use a write or put operation. The data location should use physical directories, and the content should be marked as opaque, so that there is no need to understand the content of the file. Once this has been done, we will modify the WSDL generated to add additional operations.

Modifying the port type

First, we edit the WSDL file itself, which we call <AdapterServiceName>.wsdl. Modify the port type of the adapter to include the additional operations required, as shown in the following code snippet. Use the same message type as the operation generated by the wizard.

    <portType name="Write_ptt">
      <operation name="Write">
        <input message="tns:Write_msg"/>
        </operation>
      <operation name="Move">
          <input message="tns:Write_msg"/>
        </operation>
    </portType>

Note that the following operation names are supported:

  • Move
  • Copy
  • Delete

Modifying the binding

We now edit the bindings which are contained in a separate file to the WSDL called <AdapterServiceName>_<file/ftp>.jca. Bindings describe how the service description maps onto the physical service implementation. For now, we will just modify the binding to add the additional operations needed and map them to the appropriate implementation, as shown in the following code snippet:

<endpoint-interaction portType="Write_ptt" operation="Write">
  <interaction-spec className="oracle.tip.adapter.file.outbound.FileInteractionSpec">
    <property name="PhysicalDirectory" value="/user/oracle"/>
    <property name="FileNamingConvention" value="fred.txt"/>
    <property name="Append" value="false"/> 
  </interaction-spec>
</endpoint-interaction>
<endpoint-interaction portType="Write_ptt" operation="Move">
  <interaction-spec className="oracle.tip.adapter.file.outbound.FileIoInteractionSpec">
    <property name="Type" value="MOVE"/>
    <property name="SourcePhysicalDirectory" value="/usr/oracle"/>
    <property name="SourceFileName" value="fred.txt"/>
    <property name="TargetPhysicalDirectory" value="/usr/payroll"/>
    <property name="TargetFileName" value="Payroll.txt"/>
  </interaction-spec>
</endpoint-interaction>

Note that the following types are supported for use with the equivalent operation names. Observe that operation names are mixed case and types are uppercase:

  • MOVE
  • COPY
  • DELETE

The interaction-spec is used to define the types of operations supported by this particular binding. It references a Java class that provides the functionality and may have properties associated with it to configure its behavior. When using the FTP adapter for move, copy, and delete operations, the InteractionSpec property in the <AdapterServiceName>_ftp.jca file must be changed to oracle.tip.adapter.ftp.outbound.FTPIoInteractionSpec.

Configuring file locations through additional header properties

To allow the source and destination locations to be configured at runtime, the source and destination information must be passed as properties of the call to the adapter.

For example, when using a BPEL invoke activity, we would pass the properties, as shown in the following code snippet:

<invoke name="Invoke_Move" inputVariable="Invoke_Move_InputVariable"
        partnerLink="MoveFileService" portType="ns1:Write_ptt"
        operation="Move">
  <bpelx:inputProperty name="jca.file.SourceDirectory"
                       variable="srcDir"/>
  <bpelx:inputProperty name="jca.file.SourceFileName"
                       variable="srcFile"/>
  <bpelx:inputProperty name="jca.file.TargetDirectory"
                       variable="dstDir"/>
  <bpelx:inputProperty name="jca.file.TargetFileName"
                       variable="dstFile"/>
</invoke>

These properties override the default values in the .jca file and can be used to select the locations for the move, copy, or delete operation dynamically at runtime. The properties may be edited in the Source tab of the BPEL document (the .bpel file), or they may be created and modified through the Properties tab of the Invoke dialog in the BPEL visual editor.

Configuring file locations through additional header properties

With these modifications, the move, copy, or delete operations will appear as additional operations on the service that can be invoked from the service bus or within BPEL.

Adapter headers

In addition to the data associated with the service being provided by the adapter, sometimes referred to as the payload of the service, it is also possible to configure or obtain information about the operation of an adapter through header messages. Header messages are passed as properties to the adapter. We demonstrated this in the previous section when setting the header properties for the Move operation of the file adapter.

The properties passed are of the following two formats:

<property name="AdapterSpecificPropertyName" value="data"/>
<property name="AdapterSpecificPropertyName" variable="varName"/>

The former style allows the passing of literal values through the header. The latter allows the property to be set from a string variable.
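
For example, the target of a Move could be fixed as a literal filename while the target directory is supplied from a variable; the following sketch reuses the jca.file property names from the earlier invoke example (the dstDir variable name is illustrative):

<property name="jca.file.TargetFileName" value="Payroll.txt"/>
<property name="jca.file.TargetDirectory" variable="dstDir"/>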

Testing the file adapters

We can test the adapters by using them within a BPEL process or a Mediator like the one shown in the following screenshot. Building a BPEL process is covered in Chapter 5, Building Composite Services and Business Processes. This uses the two services that we have just described and links them with a copy operation that transforms data from one format to the other.

Testing the file adapters

Completing the FTP file writer service

The next step in the wizard is to define the actual record formats. This is exactly the same as when creating an input file. If we don't have an existing XML Schema for the output file, then we can use the wizard to create one if we have a sample file to use.

Finally, remember to run through the wizard to the end, and click Finish rather than Cancel or our entire configuration will be lost.

Moving, copying, and deleting files

Sometimes we will just want an adapter to move, copy, or delete a file without reading it. We will use the ability of the file adapter to move a file (refer to Chapter 16, Message Interaction Patterns, to set up a scheduler service within the SOA Suite).

The following steps will configure an outbound file or FTP adapter to move, copy, or delete a file without reading it.

Generating an adapter

Use the file or FTP adapter wizard to generate an outbound adapter with a file synchronous read or FTP synchronous get operation. You may also use a write or put operation. The data location should use physical directories, and the content should be marked as opaque, so that there is no need to understand the content of the file. Once this has been done, we will modify the WSDL generated to add additional operations.

Modifying the port type

First, we edit the WSDL file itself, which we call <AdapterServiceName>.wsdl. Modify the port type of the adapter to include the additional operations required, as shown in the following code snippet. Use the same message type as the operation generated by the wizard.

    <portType name="Write_ptt">
      <operation name="Write">
        <input message="tns:Write_msg"/>
        </operation>
      <operation name="Move">
          <input message="tns:Write_msg"/>
        </operation>
    </portType>

Note that the following operation names are supported:

  • Move
  • Copy
  • Delete

Modifying the binding

We now edit the bindings which are contained in a separate file to the WSDL called <AdapterServiceName>_<file/ftp>.jca. Bindings describe how the service description maps onto the physical service implementation. For now, we will just modify the binding to add the additional operations needed and map them to the appropriate implementation, as shown in the following code snippet:

<endpoint-interaction portType="Write_ptt" operation="Write">
  <interaction-spec className="oracle.tip.adapter.file.outbound.FileInteractionSpec">
    <property name="PhysicalDirectory" value="/user/oracle"/>
    <property name="FileNamingConvention" value="fred.txt"/>
    <property name="Append" value="false"/> 
  </interaction-spec>
</endpoint-interaction>
<endpoint-interaction portType="Write_ptt" operation="Move">
  <interaction-spec className="oracle.tip.adapter.file.outbound.FileIoInteractionSpec">
    <property name="Type" value="MOVE"/>
    <property name="SourcePhysicalDirectory" value="/usr/oracle"/>
    <property name="SourceFileName" value="fred.txt"/>
    <property name="TargetPhysicalDirectory" value="/usr/payroll"/>
    <property name="TargetFileName" value="Payroll.txt"/>
  </interaction-spec>
</endpoint-interaction>

Note that the following types are supported for use with the equivalent operation names. Observe that operation names are mixed case and types are uppercase:

  • MOVE
  • COPY
  • DELETE

The interaction-spec is used to define the types of operations supported by this particular binding. It references a Java class that provides the functionality and may have properties associated with it to configure its behavior. When using the FTP adapter for move, copy, and delete operations, the InteractionSpec property in the <AdapterServiceName>_ftp.jca file must be changed to oracle.tip.adapter.ftp.outbound.FTPIoInteractionSpec.

Configuring file locations through additional header properties

In order to allow runtime configuration of the source and destination locations, it is necessary to pass the source and destination information as properties of the call to the adapter.

For example, when using a BPEL invoke activity, we would pass the properties, as shown in the following code snippet:

<invoke name="Invoke_Move" inputVariable="Invoke_Move_InputVariable"
        partnerLink="MoveFileService" portType="ns1:Write_ptt"
        operation="Move">
  <bpelx:inputProperty name="jca.file.SourceDirectory"
                       variable="srcDir"/>
  <bpelx:inputProperty name="jca.file.SourceFileName"
                       variable="srcFile"/>
  <bpelx:inputProperty name="jca.file.TargetDirectory"
                       variable="dstDir"/>
  <bpelx:inputProperty name="jca.file.TargetFileName"
                       variable="dstFile"/>
</invoke>

These properties will override the default values in the .jca file and can be used to dynamically select at runtime the locations to be used for the move, copy, or delete operation. The properties may be edited in the source tab of the BPEL document, the .bpel file, or they may be created and modified through the Properties tab of the Invoke dialog in the BPEL visual editor.

Configuring file locations through additional header properties

With these modifications, the move, copy, or delete operations will appear as additional operations on the service that can be invoked from the service bus or within BPEL.

Adapter headers

In addition to the data associated with the service being provided by the adapter, sometimes referred to as the payload of the service, it is also possible to configure or obtain information about the operation of an adapter through header messages. Header messages are passed as properties to the adapter. We demonstrated this in the previous section when setting the header properties for the Move operation of the file adapter.

The properties passed are of the following two formats:

<property name="AdapterSpecificPropertyName" value="data"/>
<property name="AdapterSpecificPropertyName" variable="varName"/>

The former style allows the passing of literal values through the header. The latter allows the property to be set from a string variable.

Testing the file adapters

We can test the adapters by using them within a BPEL process or a Mediator like the one shown in the following screenshot. Building a BPEL process is covered in Chapter 5, Building Composite Services and Business Processes. This uses the two services we that have just described and links them with a copy operation that transforms data from one format to the other.

Testing the file adapters

Detecting that the file is available

The next step in the wizard allows us to configure the frequency of polling for the inbound file. There are two parameters that can be configured here–the Polling Frequency and the Minimum File Age.

Detecting that the file is available

The Polling Frequency just means the time delay between checking to see if a file is available for processing. The adapter will check once per interval to see if the file exists. Setting this too low can consume needless CPU resources, setting it too high can make the system appear unresponsive. 'Too high' and 'too low' are very subjective and will depend on your individual requirements. For example, the polling interval for a file that is expected to be written twice a day may be set to three hours, while the interval for a file that is expected to be written every hour may be set to 15 minutes.

Minimum File Age specifies how old a file must be before it is processed by the adapter. This setting allows a file to be completely written before it is read. For example, a large file may take five minutes to write out from the original application. If the file is read three minutes after it has been created, then it is possible for the adapter to run out of records to read and assume the file has been processed, when in reality, the application is still writing to the file. Setting a minimum age to ten minutes would avoid this problem by giving the application at least ten minutes to write the file.

As an alternative to polling for a file directly, we may use a trigger file to indicate that a file is available. Some systems write large files to disk and then indicate that they are available by writing a trigger file. This avoids the problems with reading an incomplete file we identified in the previous paragraph, without the delay in processing the file that a minimum age field may cause.

Message format

The penultimate step in the file adapter is to set up the format of records or messages in the file. This is one of the most critical steps, as this defines the format of messages generated by a file.

Messages may be opaque, meaning that they are passed around as black boxes. This may be appropriate with a Microsoft Word file, for example, that must merely be transported from point A to point B without being examined. This is indicated by the Native format translation is not required (Schema is Opaque) checkbox.

Message format

If the document is already in an XML format, then we can just specify a schema and an expected root element and the job is done. Normally the file is in some non-XML format that must be mapped onto an XML Schema generated through the native format builder wizard, which is invoked through the Define Schema for Native Format button.

Defining a native format schema

Invoking the Native Format Builder wizard brings up an initial start screen that leads on to the first step in the wizard, choosing the type of format, as shown in the following screenshot:

Defining a native format schema

This allows us to identify the overall record structure. If we have an existing schema document that describes the record structure, then we can point to that. Usually, we will need to determine the type of structure of the file ourselves. The choices available are:

  • Delimited: These are files such as CSV files (Comma Separated Values), records with spaces, or '+' signs for separators.
  • Fixed Length: These are files whose records consist of fixed length fields. Be careful not to confuse these with space-separated files, as if a value does not fill the entire field, it will usually be padded with spaces.
  • Complex Type: These files may include nested records like a master detail type structure.
  • DTD to be converted to XSD: These are XML Data Type Definition XML files that will be mapped onto an XML Schema description of the file content.
  • Cobol Copybook to be converted to native format: These are files that have usually been produced by a COBOL system, often originating from a mainframe.

We will look at a delimited file, as it is one of the most common formats.

Although we are using the separator file type, the steps involved are basically the same for most file types including the fixed length field format, which is also extremely common.

Using a sample file

To make it easier to describe the format of the incoming file, the wizard asks us to specify a file to use as a sample. If necessary, we can skip rows in the file and determine the number of records to read. Obviously, reading a very large number of records may take a while, and if all the variability on the file is in the first ten records, then there is no point in wasting time reading any more sample records. We may also choose to restrict the number of rows processed at runtime.

Setting the character set needs to be done carefully, particularly in international environments where non-ASCII character sets may be common.

After selecting a sample file, the wizard will display an initial view of the file with a guess at the field separators.

Using a sample file

Record structure

The next step of the wizard allows us to describe how the records appear in the file.

The first option File contains only one record allows us to process the file as a single message. This can be useful when the file has multiple records, all of the same format, which we want to read as a single message. Use of this option disables batching.

The next option of File contains multiple record instances allows batching to take place. Records are either of the same type or of different types. They can only be marked of different types if they can be distinguished, based on the first field in the record. In other words, in order to choose the Multiple records are of different types, the first field in all the records must be a record type identifier. In the example shown in the preceding screenshot, the first field is either an H for Header records or an R for Records.

Record structure

Choosing a root element

The next step allows us to define the target namespace and root element of the schema that we are generating.

Choosing a root element

Note

Don't forget that when using the Native Format Builder wizard, we are just creating an XML Schema document that describes the native (non-XML) format data. Most of the time this schema is transparent to us. However, at times the XML constructs have to emerge, for example, when identifying a name for a root element. The file is described using an XML Schema extension known as NXSD.

As we can see the root element is mandatory. This root element acts as a wrapper for the records in a message. If message batching is set to 1, then each wrapper will have a single sub-element, namely, the record. If the message is set to greater than 1, then each wrapper will have at least one and possibly more sub-elements, each sub-element being a record. There can never be more sub-elements than the batch size.

Message delimiters

Having described the overall structure of the file, we can now drill down into the individual fields. To do this, we first specify the message delimiters.

In addition to field delimiters, we can also specify a record delimiter. Usually record delimiters are new lines. If fields are also wrapped in quotation marks, then these can be stripped off by specifying the Optionally enclosed by character.

Message delimiters

Record type names

The wizard will identify the types of records based on the first field in each record, as shown in the preceding screenshot. It is possible to ignore record types by selecting them and clicking Delete. If this is done by mistake, then it is possible to add them back by using the Add button. Only fields that exist in the sample data can be added in the wizard.

Record type names

Note that if we want to reset the record types screen, then the Scan button will rescan the sample file and look for all the different record types it contains.

The Record Name field can be set by double-clicking it and providing a suitable record name. This record name is the XML element name that encapsulates the record content.

Field properties

Now that we have identified record and field boundaries, we can drill down into the records and define the data types and names of individual fields. This is done for each record type in turn. We can select which records to define by selecting them from the Record Name drop-down box or by pressing the Next Record Type button.

It is important to be as liberal as possible when defining field data types because any mismatches will cause errors that will need to be handled. Being liberal in our record definitions will allow us to validate the messages, as described in Chapter 13, Building Validation into Services, without raising system errors.

Field properties

The Name column represents the element name of this field. The wizard will attempt to guess the type of the field, but it is important to always check this because the sample data you are using may not include all possibilities. A common error is identifying numbers to be tagged as integers, when they should really be strings—accept integer types only when they are likely to have arithmetic operations performed on them.

Verifying the result

We have now completed our mapping and can verify what has been done by looking at the generated XML Schema file. Note that the generated schema uses some Oracle extensions to enable a non-XML formatted file to be represented as XML. In particular the nxsd namespace prefix is used to identify field separators and record terminators.

Note

The XML Schema generated can be edited manually. This is useful to support nested records (records inside other records) like those that may be found in a file containing order records with nested detail records (an order record contains multiple line item detail records). In this case, it is useful to use the wizard to generate a schema with order records and detail records at the same level. The schema can then be modified by hand to make the detail records children of the order records.

Verifying the result

Clicking the Test button brings up a simple test screen that allows us to apply our newly generated schema to the input document and examine the resulting XML.

Verifying the result

Clicking Next and then Finish will cause the generated schema file to be saved.

Finishing the wizards

Until now, no work has been saved, except for the XML Schema mapping the file content onto an XML structure. The rest of the adapter settings are not saved, and the endpoint is not set up until the Finish button is clicked on the completion screen, as shown in the following screenshot. Note that the file generated is a Web Service Description Language (WSDL) file with a JCA binding.

Finishing the wizards
Throttling the file and FTP adapter

The file and FTP adapters can consume a lot of resources when processing large files (thousands of records) because they keep sending messages with batches of records until the file is processed, while not waiting for the records to be processed. This behavior can be altered by forcing them to wait until a message is processed before sending another message. This is done by making the following changes to the WSDL generated by the wizard. This changes the one-way read operation into a two-way read operation that will not complete until a reply is generated by our code in BPEL or the Service Bus.

Creating a dummy message type

Add a new message definition to the WSDL like the one in the following code snippet:

<message name="Dummy_msg">
    <part xmlns:xsd="http://www.w3.org/2001/XMLSchema"
          name="Dummy" type="xsd:string"/>
</message>

Adding an output message to the read operation

In the <portType> , add an <output> element to the read <operation> element.

    <portType name="Read_ptt">
      <operation name="Read">
        <input message="tns:PayrollList_msg"/>
        <output message="tns:Dummy_msg"/>
      </operation>
    </portType>

In the <jca:operation> element, add an empty <output/> element.

Using the modified interface

The adapter will now have a two way interface and will need to receive a reply to a message before it sends the next batch of records, thus throttling the throughput. Note that no data needs to be sent in the reply message. This will limit the number of active operations to the number of threads assigned to the file adapter.

Writing a payroll file

We can now use the FTP adapter to write the payroll file to a remote filesystem. This requires us to create another adapter within our BPEL process, Mediator, or Service Bus. Setting up the FTP adapter to write to a remote filesystem is very similar to reading files with the file adapter.

Selecting the FTP connection

The first difference is that when using the FTP adapter instead of the File adapter, we have to specify an FTP connection to use in the underlying application server. This connection is set up in the application server running the adapter. For example, when using the WebLogic Application Server, the WebLogic Console can be used. The JNDI location of the connection factory is the location that must be provided to the wizard. The JNDI location must be configured in the application server using the administrative tools provided by the application server. Refer to your application server documentation for details on how to do this, as it varies between applications servers.

Selecting the FTP connection

Choosing the operation

When we choose the type of operation, we again notice that the screen is different from the file adapter, having an additional File Type category. This relates to the Ascii and Binary settings of an FTP session. ASCII causes the FTP transfer to adapt to changes in character encoding between the two systems, for example, converting between EBCDIC and ASCII or altering line feeds between systems. When using text files, it is generally a good idea to select the ASCII format. When sending binary files, it is vital that the binary file type is used to avoid any unfortunate and unwanted transformations.

Choosing the operation

Selecting the file destination

Choosing where the file is created is the same for both the FTP and the file adapter. Again, there is a choice of physical or logical paths. The file naming convention allows us some control over the name of the output file. In addition to the %SEQ% symbol that inserts a unique sequence number, it is also possible to insert a date or date time string into the filename. Note that in the current release, you cannot have both a date time string and a sequence string in the file naming convention.

Note

When using a date time string as part of the filename, files with the same date time string will overwrite each other. If this is the case, then consider using a sequence number instead.

When producing an output file, we can either keep appending to a single file by selecting the Append to existing file checkbox, which will keep growing without limit, or we can create new files, which will be dependent on attributes of the data being written. This is the normal way of working for non-XML files, and a new output file will be generated when one or more records are written to the adapter.

Selecting the file destination

The criteria for deciding to write to a new file are as follows:

  • Number of Messages Equals: This criterion forces the file to be written when the given number of messages is reached. This can be thought of as batching the output so that we reduce the number of files created.
  • Elapsed Time Exceeds: This criterion puts a time limit on how long the adapter will keep the file open. This places an upper time limit on creating an output file.
  • File Size Exceeds: This criterion allows us to limit the file sizes. As soon as a message causes the file to exceed the given size, no further message will be appended to this file.

These criteria can all be applied together, and as soon as one of them is satisfied, a new file will be created.

Tip

Writing XML files

When writing XML files, care should be taken to have only a single message per file, or else there will be multiple XML root elements in the document that will make it an invalid XML document.

Completing the FTP file writer service

The next step in the wizard is to define the actual record formats. This is exactly the same as when creating an input file. If we don't have an existing XML Schema for the output file, then we can use the wizard to create one if we have a sample file to use.

Finally, remember to run through the wizard to the end, and click Finish rather than Cancel or our entire configuration will be lost.

Moving, copying, and deleting files

Sometimes we will just want an adapter to move, copy, or delete a file without reading it. We will use the ability of the file adapter to move a file (refer to Chapter 16, Message Interaction Patterns, to set up a scheduler service within the SOA Suite).

The following steps will configure an outbound file or FTP adapter to move, copy, or delete a file without reading it.

Generating an adapter

Use the file or FTP adapter wizard to generate an outbound adapter with a file synchronous read or FTP synchronous get operation. You may also use a write or put operation. The data location should use physical directories, and the content should be marked as opaque, so that there is no need to understand the content of the file. Once this has been done, we will modify the WSDL generated to add additional operations.

Modifying the port type

First, we edit the WSDL file itself, which we call <AdapterServiceName>.wsdl. Modify the port type of the adapter to include the additional operations required, as shown in the following code snippet. Use the same message type as the operation generated by the wizard.

    <portType name="Write_ptt">
      <operation name="Write">
        <input message="tns:Write_msg"/>
        </operation>
      <operation name="Move">
          <input message="tns:Write_msg"/>
        </operation>
    </portType>

Note that the following operation names are supported:

  • Move
  • Copy
  • Delete

Modifying the binding

We now edit the bindings which are contained in a separate file to the WSDL called <AdapterServiceName>_<file/ftp>.jca. Bindings describe how the service description maps onto the physical service implementation. For now, we will just modify the binding to add the additional operations needed and map them to the appropriate implementation, as shown in the following code snippet:

<endpoint-interaction portType="Write_ptt" operation="Write">
  <interaction-spec className="oracle.tip.adapter.file.outbound.FileInteractionSpec">
    <property name="PhysicalDirectory" value="/user/oracle"/>
    <property name="FileNamingConvention" value="fred.txt"/>
    <property name="Append" value="false"/> 
  </interaction-spec>
</endpoint-interaction>
<endpoint-interaction portType="Write_ptt" operation="Move">
  <interaction-spec className="oracle.tip.adapter.file.outbound.FileIoInteractionSpec">
    <property name="Type" value="MOVE"/>
    <property name="SourcePhysicalDirectory" value="/usr/oracle"/>
    <property name="SourceFileName" value="fred.txt"/>
    <property name="TargetPhysicalDirectory" value="/usr/payroll"/>
    <property name="TargetFileName" value="Payroll.txt"/>
  </interaction-spec>
</endpoint-interaction>

Note that the following types are supported for use with the equivalent operation names. Observe that operation names are mixed case and types are uppercase:

  • MOVE
  • COPY
  • DELETE

The interaction-spec is used to define the types of operations supported by this particular binding. It references a Java class that provides the functionality and may have properties associated with it to configure its behavior. When using the FTP adapter for move, copy, and delete operations, the InteractionSpec property in the <AdapterServiceName>_ftp.jca file must be changed to oracle.tip.adapter.ftp.outbound.FTPIoInteractionSpec.

Configuring file locations through additional header properties

In order to allow runtime configuration of the source and destination locations, it is necessary to pass the source and destination information as properties of the call to the adapter.

For example, when using a BPEL invoke activity, we would pass the properties, as shown in the following code snippet:

<invoke name="Invoke_Move" inputVariable="Invoke_Move_InputVariable"
        partnerLink="MoveFileService" portType="ns1:Write_ptt"
        operation="Move">
  <bpelx:inputProperty name="jca.file.SourceDirectory"
                       variable="srcDir"/>
  <bpelx:inputProperty name="jca.file.SourceFileName"
                       variable="srcFile"/>
  <bpelx:inputProperty name="jca.file.TargetDirectory"
                       variable="dstDir"/>
  <bpelx:inputProperty name="jca.file.TargetFileName"
                       variable="dstFile"/>
</invoke>

These properties will override the default values in the .jca file and can be used to dynamically select at runtime the locations to be used for the move, copy, or delete operation. The properties may be edited in the source tab of the BPEL document, the .bpel file, or they may be created and modified through the Properties tab of the Invoke dialog in the BPEL visual editor.

Configuring file locations through additional header properties

With these modifications, the move, copy, or delete operations will appear as additional operations on the service that can be invoked from the service bus or within BPEL.

Adapter headers

In addition to the data associated with the service being provided by the adapter, sometimes referred to as the payload of the service, it is also possible to configure or obtain information about the operation of an adapter through header messages. Header messages are passed as properties to the adapter. We demonstrated this in the previous section when setting the header properties for the Move operation of the file adapter.

The properties passed are of the following two formats:

<property name="AdapterSpecificPropertyName" value="data"/>
<property name="AdapterSpecificPropertyName" variable="varName"/>

The former style allows the passing of literal values through the header. The latter allows the property to be set from a string variable.

Testing the file adapters

We can test the adapters by using them within a BPEL process or a Mediator like the one shown in the following screenshot. Building a BPEL process is covered in Chapter 5, Building Composite Services and Business Processes. This uses the two services we that have just described and links them with a copy operation that transforms data from one format to the other.

Testing the file adapters

Message format

The penultimate step in the file adapter is to set up the format of records or messages in the file. This is one of the most critical steps, as this defines the format of messages generated by a file.

Messages may be opaque, meaning that they are passed around as black boxes. This may be appropriate with a Microsoft Word file, for example, that must merely be transported from point A to point B without being examined. This is indicated by the Native format translation is not required (Schema is Opaque) checkbox.

Message format

If the document is already in an XML format, then we can just specify a schema and an expected root element and the job is done. Normally the file is in some non-XML format that must be mapped onto an XML Schema generated through the native format builder wizard, which is invoked through the Define Schema for Native Format button.

Defining a native format schema

Invoking the Native Format Builder wizard brings up an initial start screen that leads on to the first step in the wizard, choosing the type of format, as shown in the following screenshot:

Defining a native format schema

This allows us to identify the overall record structure. If we have an existing schema document that describes the record structure, then we can point to that. Usually, we will need to determine the type of structure of the file ourselves. The choices available are:

  • Delimited: These are files such as CSV files (Comma Separated Values), records with spaces, or '+' signs for separators.
  • Fixed Length: These are files whose records consist of fixed length fields. Be careful not to confuse these with space-separated files, as if a value does not fill the entire field, it will usually be padded with spaces.
  • Complex Type: These files may include nested records like a master detail type structure.
  • DTD to be converted to XSD: These are XML Data Type Definition XML files that will be mapped onto an XML Schema description of the file content.
  • Cobol Copybook to be converted to native format: These are files that have usually been produced by a COBOL system, often originating from a mainframe.

We will look at a delimited file, as it is one of the most common formats.

Although we are using the separator file type, the steps involved are basically the same for most file types including the fixed length field format, which is also extremely common.

Using a sample file

To make it easier to describe the format of the incoming file, the wizard asks us to specify a file to use as a sample. If necessary, we can skip rows in the file and determine the number of records to read. Obviously, reading a very large number of records may take a while, and if all the variability on the file is in the first ten records, then there is no point in wasting time reading any more sample records. We may also choose to restrict the number of rows processed at runtime.

Setting the character set needs to be done carefully, particularly in international environments where non-ASCII character sets may be common.

After selecting a sample file, the wizard will display an initial view of the file with a guess at the field separators.

Using a sample file

Record structure

The next step of the wizard allows us to describe how the records appear in the file.

The first option File contains only one record allows us to process the file as a single message. This can be useful when the file has multiple records, all of the same format, which we want to read as a single message. Use of this option disables batching.

The next option of File contains multiple record instances allows batching to take place. Records are either of the same type or of different types. They can only be marked of different types if they can be distinguished, based on the first field in the record. In other words, in order to choose the Multiple records are of different types, the first field in all the records must be a record type identifier. In the example shown in the preceding screenshot, the first field is either an H for Header records or an R for Records.

Record structure

Choosing a root element

The next step allows us to define the target namespace and root element of the schema that we are generating.

Choosing a root element

Note

Don't forget that when using the Native Format Builder wizard, we are just creating an XML Schema document that describes the native (non-XML) format data. Most of the time this schema is transparent to us. However, at times the XML constructs have to emerge, for example, when identifying a name for a root element. The file is described using an XML Schema extension known as NXSD.

As we can see the root element is mandatory. This root element acts as a wrapper for the records in a message. If message batching is set to 1, then each wrapper will have a single sub-element, namely, the record. If the message is set to greater than 1, then each wrapper will have at least one and possibly more sub-elements, each sub-element being a record. There can never be more sub-elements than the batch size.

Message delimiters

Having described the overall structure of the file, we can now drill down into the individual fields. To do this, we first specify the message delimiters.

In addition to field delimiters, we can also specify a record delimiter. Usually record delimiters are new lines. If fields are also wrapped in quotation marks, then these can be stripped off by specifying the Optionally enclosed by character.

Message delimiters

Record type names

The wizard will identify the types of records based on the first field in each record, as shown in the preceding screenshot. It is possible to ignore record types by selecting them and clicking Delete. If this is done by mistake, then it is possible to add them back by using the Add button. Only fields that exist in the sample data can be added in the wizard.

Record type names

Note that if we want to reset the record types screen, then the Scan button will rescan the sample file and look for all the different record types it contains.

The Record Name field can be set by double-clicking it and providing a suitable record name. This record name is the XML element name that encapsulates the record content.

Field properties

Now that we have identified record and field boundaries, we can drill down into the records and define the data types and names of individual fields. This is done for each record type in turn. We can select which records to define by selecting them from the Record Name drop-down box or by pressing the Next Record Type button.

It is important to be as liberal as possible when defining field data types because any mismatches will cause errors that will need to be handled. Being liberal in our record definitions will allow us to validate the messages, as described in Chapter 13, Building Validation into Services, without raising system errors.

Field properties

The Name column represents the element name of this field. The wizard will attempt to guess the type of each field, but it is important to always check this because the sample data you are using may not include all the possibilities. A common error is tagging numbers as integers when they should really be strings; accept an integer type only when the field is likely to have arithmetic operations performed on it.

Verifying the result

We have now completed our mapping and can verify what has been done by looking at the generated XML Schema file. Note that the generated schema uses some Oracle extensions to enable a non-XML formatted file to be represented as XML. In particular the nxsd namespace prefix is used to identify field separators and record terminators.
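
As a rough sketch of the kind of markup to expect (the element names, separators, and namespaces below are assumptions for illustration, not what the wizard will generate for your particular file), a comma-delimited record might be described along these lines:

<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            xmlns:nxsd="http://xmlns.oracle.com/pcbpel/nxsd"
            targetNamespace="http://TargetNamespace.com/payroll"
            elementFormDefault="qualified"
            nxsd:version="NXSD" nxsd:stream="chars" nxsd:encoding="US-ASCII">
  <!-- hypothetical record with two comma-delimited fields -->
  <xsd:element name="Record">
    <xsd:complexType>
      <xsd:sequence>
        <xsd:element name="EmployeeId" type="xsd:string"
                     nxsd:style="terminated" nxsd:terminatedBy=","/>
        <xsd:element name="Amount" type="xsd:string"
                     nxsd:style="terminated" nxsd:terminatedBy="${eol}"/>
      </xsd:sequence>
    </xsd:complexType>
  </xsd:element>
</xsd:schema>

The nxsd:terminatedBy attributes are where the field separator and record terminator choices made in the wizard end up.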

Note

The XML Schema generated can be edited manually. This is useful to support nested records (records inside other records) like those that may be found in a file containing order records with nested detail records (an order record contains multiple line item detail records). In this case, it is useful to use the wizard to generate a schema with order records and detail records at the same level. The schema can then be modified by hand to make the detail records children of the order records.

Verifying the result

Clicking the Test button brings up a simple test screen that allows us to apply our newly generated schema to the input document and examine the resulting XML.

Verifying the result

Clicking Next and then Finish will cause the generated schema file to be saved.

Finishing the wizards

Until now, no work has been saved, except for the XML Schema mapping the file content onto an XML structure. The rest of the adapter settings are not saved, and the endpoint is not set up until the Finish button is clicked on the completion screen, as shown in the following screenshot. Note that the file generated is a Web Service Description Language (WSDL) file with a JCA binding.

Finishing the wizards
Throttling the file and FTP adapter

The file and FTP adapters can consume a lot of resources when processing large files (thousands of records) because they keep sending messages containing batches of records until the file is fully processed, without waiting for those records to be processed. This behavior can be altered by forcing the adapter to wait until a message has been processed before sending the next one. This is done by making the following changes to the WSDL generated by the wizard, which turn the one-way read operation into a two-way read operation that will not complete until a reply is generated by our code in BPEL or the Service Bus.

Creating a dummy message type

Add a new message definition to the WSDL like the one in the following code snippet:

<message name="Dummy_msg">
    <part xmlns:xsd="http://www.w3.org/2001/XMLSchema"
          name="Dummy" type="xsd:string"/>
</message>

Adding an output message to the read operation

In the <portType>, add an <output> element to the read <operation> element.

    <portType name="Read_ptt">
      <operation name="Read">
        <input message="tns:PayrollList_msg"/>
        <output message="tns:Dummy_msg"/>
      </operation>
    </portType>

In the <jca:operation> element, add an empty <output/> element.

Using the modified interface

The adapter will now have a two-way interface and will need to receive a reply to each message before it sends the next batch of records, thus throttling the throughput. Note that no data needs to be sent in the reply message. This will limit the number of active operations to the number of threads assigned to the file adapter.
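
For example, in BPEL the reply can be as simple as the following sketch; the partner link and variable names are assumptions based on the adapter service we generated, and the variable is of the Dummy_msg message type:

<reply name="Reply_Read" partnerLink="ReadPayrollFile"
       portType="ns1:Read_ptt" operation="Read"
       variable="DummyReply"/>

No meaningful content needs to be copied into the reply variable; its arrival is what releases the adapter to send the next batch.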

Writing a payroll file

We can now use the FTP adapter to write the payroll file to a remote filesystem. This requires us to create another adapter within our BPEL process, Mediator, or Service Bus. Setting up the FTP adapter to write to a remote filesystem is very similar to reading files with the file adapter.

Selecting the FTP connection

The first difference is that, when using the FTP adapter instead of the file adapter, we have to specify an FTP connection in the underlying application server. This connection is set up in the application server running the adapter; for example, when using WebLogic Server, the WebLogic Console can be used. The wizard must be given the JNDI location of the connection factory, and that JNDI location must be configured in the application server using its administrative tools. Refer to your application server documentation for details, as the procedure varies between application servers.

Selecting the FTP connection
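
As a rough illustration only, on WebLogic the connection is typically registered against the deployed FtpAdapter resource adapter; an abridged deployment descriptor entry might look something like the following sketch. The JNDI name, host, and property names here are assumptions, so always check the connection properties actually exposed by your adapter version:

<connection-instance>
  <jndi-name>eis/Ftp/PayrollFtp</jndi-name>
  <connection-properties>
    <properties>
      <!-- hypothetical values for the remote FTP server -->
      <property><name>host</name><value>ftp.example.com</value></property>
      <property><name>username</name><value>payroll</value></property>
      <property><name>password</name><value>welcome1</value></property>
    </properties>
  </connection-properties>
</connection-instance>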

Choosing the operation

When we choose the type of operation, we again notice that the screen differs from the file adapter's, having an additional File Type category. This relates to the ASCII and Binary transfer modes of an FTP session. ASCII mode causes the FTP transfer to adapt to differences in character encoding between the two systems, for example, converting between EBCDIC and ASCII or altering line endings. When transferring text files, it is generally a good idea to select the ASCII format. When sending binary files, it is vital that the Binary file type is used to avoid any unfortunate and unwanted transformations.

Choosing the operation

Selecting the file destination

Choosing where the file is created is the same for both the FTP and the file adapter. Again, there is a choice of physical or logical paths. The file naming convention allows us some control over the name of the output file. In addition to the %SEQ% symbol that inserts a unique sequence number, it is also possible to insert a date or date time string into the filename. Note that in the current release, you cannot have both a date time string and a sequence string in the file naming convention.

Note

When using a date time string as part of the filename, files written with the same date time string will overwrite each other. If this is a concern, then consider using a sequence number instead.
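
For example, naming conventions along the following lines are typical; the exact date pattern syntax is an assumption here and should be checked against the wizard's help for your release:

PayrollOut_%SEQ%.txt
PayrollOut_%yyMMddHHmmss%.txt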

When producing an output file, we can either keep appending to a single file, by selecting the Append to existing file checkbox, in which case the file will keep growing without limit, or we can create new files, with the decision to start a new file depending on attributes of the data being written. The latter is the normal way of working for non-XML files, and a new output file will be generated when one or more records are written to the adapter.

Selecting the file destination

The criteria for deciding to write to a new file are as follows:

  • Number of Messages Equals: This criterion forces the file to be written when the given number of messages is reached. This can be thought of as batching the output so that we reduce the number of files created.
  • Elapsed Time Exceeds: This criterion puts a time limit on how long the adapter will keep the file open. This places an upper time limit on creating an output file.
  • File Size Exceeds: This criterion allows us to limit the file sizes. As soon as a message causes the file to exceed the given size, no further message will be appended to this file.

These criteria can all be applied together, and as soon as one of them is satisfied, a new file will be created.

Tip

Writing XML files

When writing XML files, care should be taken to have only a single message per file, or else there will be multiple XML root elements in the document, which would make it an invalid XML document.

Completing the FTP file writer service

The next step in the wizard is to define the actual record formats. This is exactly the same as when creating an input file. If we don't have an existing XML Schema for the output file, then we can use the Native Format Builder wizard to create one from a sample file.

Finally, remember to run through the wizard to the end and click Finish rather than Cancel, or our entire configuration will be lost.

Moving, copying, and deleting files

Sometimes we will just want an adapter to move, copy, or delete a file without reading it. We will use the ability of the file adapter to move a file as our example (refer to Chapter 16, Message Interaction Patterns, for how to set up a scheduler service within the SOA Suite).

The following steps will configure an outbound file or FTP adapter to move, copy, or delete a file without reading it.

Generating an adapter

Use the file or FTP adapter wizard to generate an outbound adapter with a file synchronous read or FTP synchronous get operation. You may also use a write or put operation. The data location should use physical directories, and the content should be marked as opaque, so that there is no need to understand the content of the file. Once this has been done, we will modify the WSDL generated to add additional operations.
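
When the content is marked as opaque, the adapter carries the payload as base64-encoded binary rather than parsing it; the element the generated WSDL references typically resembles the following sketch, shown here for orientation only:

<schema targetNamespace="http://xmlns.oracle.com/pcbpel/adapter/opaque/"
        xmlns="http://www.w3.org/2001/XMLSchema">
  <!-- the whole file content is carried as base64 binary -->
  <element name="opaqueElement" type="base64Binary"/>
</schema>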

Modifying the port type

First, we edit the WSDL file itself, which is called <AdapterServiceName>.wsdl. Modify the port type of the adapter to include the additional operations required, as shown in the following code snippet. Use the same message type as the operation generated by the wizard.

    <portType name="Write_ptt">
      <operation name="Write">
        <input message="tns:Write_msg"/>
        </operation>
      <operation name="Move">
          <input message="tns:Write_msg"/>
        </operation>
    </portType>

Note that the following operation names are supported:

  • Move
  • Copy
  • Delete

Modifying the binding

We now edit the bindings, which are contained in a separate file from the WSDL, called <AdapterServiceName>_<file/ftp>.jca. Bindings describe how the service description maps onto the physical service implementation. For now, we will just modify the binding to add the additional operations needed and map them to the appropriate implementation, as shown in the following code snippet:

<endpoint-interaction portType="Write_ptt" operation="Write">
  <interaction-spec className="oracle.tip.adapter.file.outbound.FileInteractionSpec">
    <property name="PhysicalDirectory" value="/user/oracle"/>
    <property name="FileNamingConvention" value="fred.txt"/>
    <property name="Append" value="false"/> 
  </interaction-spec>
</endpoint-interaction>
<endpoint-interaction portType="Write_ptt" operation="Move">
  <interaction-spec className="oracle.tip.adapter.file.outbound.FileIoInteractionSpec">
    <property name="Type" value="MOVE"/>
    <property name="SourcePhysicalDirectory" value="/usr/oracle"/>
    <property name="SourceFileName" value="fred.txt"/>
    <property name="TargetPhysicalDirectory" value="/usr/payroll"/>
    <property name="TargetFileName" value="Payroll.txt"/>
  </interaction-spec>
</endpoint-interaction>

Note that the following types are supported for use with the equivalent operation names. Observe that operation names are mixed case and types are uppercase:

  • MOVE
  • COPY
  • DELETE

The interaction-spec is used to define the types of operations supported by this particular binding. It references a Java class that provides the functionality and may have properties associated with it to configure its behavior. When using the FTP adapter for move, copy, and delete operations, the InteractionSpec property in the <AdapterServiceName>_ftp.jca file must be changed to oracle.tip.adapter.ftp.outbound.FTPIoInteractionSpec.
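
For the FTP flavour, the Move endpoint interaction would therefore look broadly like the file adapter version shown above, with the interaction spec class swapped; the property names are assumed here to carry over unchanged, so verify them against your generated .jca file:

<endpoint-interaction portType="Write_ptt" operation="Move">
  <interaction-spec className="oracle.tip.adapter.ftp.outbound.FTPIoInteractionSpec">
    <property name="Type" value="MOVE"/>
    <property name="SourcePhysicalDirectory" value="/usr/oracle"/>
    <property name="SourceFileName" value="fred.txt"/>
    <property name="TargetPhysicalDirectory" value="/usr/payroll"/>
    <property name="TargetFileName" value="Payroll.txt"/>
  </interaction-spec>
</endpoint-interaction>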

Configuring file locations through additional header properties

In order to allow runtime configuration of the source and destination locations, it is necessary to pass the source and destination information as properties of the call to the adapter.

For example, when using a BPEL invoke activity, we would pass the properties, as shown in the following code snippet:

<invoke name="Invoke_Move" inputVariable="Invoke_Move_InputVariable"
        partnerLink="MoveFileService" portType="ns1:Write_ptt"
        operation="Move">
  <bpelx:inputProperty name="jca.file.SourceDirectory"
                       variable="srcDir"/>
  <bpelx:inputProperty name="jca.file.SourceFileName"
                       variable="srcFile"/>
  <bpelx:inputProperty name="jca.file.TargetDirectory"
                       variable="dstDir"/>
  <bpelx:inputProperty name="jca.file.TargetFileName"
                       variable="dstFile"/>
</invoke>

These properties override the default values in the .jca file and can be used to select at runtime the locations to be used for the move, copy, or delete operation. The properties may be edited in the source view of the BPEL document (the .bpel file), or they may be created and modified through the Properties tab of the Invoke dialog in the BPEL visual editor.

Configuring file locations through additional header properties

With these modifications, the move, copy, or delete operations will appear as additional operations on the service that can be invoked from the service bus or within BPEL.

Adapter headers

In addition to the data associated with the service being provided by the adapter, sometimes referred to as the payload of the service, it is also possible to configure or obtain information about the operation of an adapter through header messages. Header messages are passed as properties to the adapter. We demonstrated this in the previous section when setting the header properties for the Move operation of the file adapter.

The properties passed are of the following two formats:

<property name="AdapterSpecificPropertyName" value="data"/>
<property name="AdapterSpecificPropertyName" variable="varName"/>

The former style allows the passing of literal values through the header. The latter allows the property to be set from a string variable.

Testing the file adapters

We can test the adapters by using them within a BPEL process or a Mediator like the one shown in the following screenshot. Building a BPEL process is covered in Chapter 5, Building Composite Services and Business Processes. This composite uses the two services that we have just described and links them with a copy operation that transforms data from one format to the other.

Testing the file adapters

Using a sample file

To make it easier to describe the format of the incoming file, the wizard asks us to specify a file to use as a sample. If necessary, we can skip rows in the file and determine the number of records to read. Obviously, reading a very large number of records may take a while, and if all the variability on the file is in the first ten records, then there is no point in wasting time reading any more sample records. We may also choose to restrict the number of rows processed at runtime.

Setting the character set needs to be done carefully, particularly in international environments where non-ASCII character sets may be common.

After selecting a sample file, the wizard will display an initial view of the file with a guess at the field separators.

Using a sample file

Record structure

The next step of the wizard allows us to describe how the records appear in the file.

The first option File contains only one record allows us to process the file as a single message. This can be useful when the file has multiple records, all of the same format, which we want to read as a single message. Use of this option disables batching.

The next option of File contains multiple record instances allows batching to take place. Records are either of the same type or of different types. They can only be marked of different types if they can be distinguished, based on the first field in the record. In other words, in order to choose the Multiple records are of different types, the first field in all the records must be a record type identifier. In the example shown in the preceding screenshot, the first field is either an H for Header records or an R for Records.

Record structure

Choosing a root element

The next step allows us to define the target namespace and root element of the schema that we are generating.

Choosing a root element
Note

Don't forget that when using the Native Format Builder wizard, we are just creating an XML Schema document that describes the native (non-XML) format data. Most of the time this schema is transparent to us. However, at times the XML constructs have to emerge, for example, when identifying a name for a root element. The file is described using an XML Schema extension known as NXSD.

As we can see the root element is mandatory. This root element acts as a wrapper for the records in a message. If message batching is set to 1, then each wrapper will have a single sub-element, namely, the record. If the message is set to greater than 1, then each wrapper will have at least one and possibly more sub-elements, each sub-element being a record. There can never be more sub-elements than the batch size.

Message delimiters

Having described the overall structure of the file, we can now drill down into the individual fields. To do this, we first specify the message delimiters.

In addition to field delimiters, we can also specify a record delimiter. Usually record delimiters are new lines. If fields are also wrapped in quotation marks, then these can be stripped off by specifying the Optionally enclosed by character.

Message delimiters

Record type names

The wizard will identify the types of records based on the first field in each record, as shown in the preceding screenshot. It is possible to ignore record types by selecting them and clicking Delete. If this is done by mistake, then it is possible to add them back by using the Add button. Only fields that exist in the sample data can be added in the wizard.

Record type names

Note that if we want to reset the record types screen, then the Scan button will rescan the sample file and look for all the different record types it contains.

The Record Name field can be set by double-clicking it and providing a suitable record name. This record name is the XML element name that encapsulates the record content.

Field properties

Now that we have identified record and field boundaries, we can drill down into the records and define the data types and names of individual fields. This is done for each record type in turn. We can select which records to define by selecting them from the Record Name drop-down box or by pressing the Next Record Type button.

It is important to be as liberal as possible when defining field data types because any mismatches will cause errors that will need to be handled. Being liberal in our record definitions will allow us to validate the messages, as described in Chapter 13, Building Validation into Services, without raising system errors.

Field properties

The Name column represents the element name of this field. The wizard will attempt to guess the type of the field, but it is important to always check this because the sample data you are using may not include all possibilities. A common error is identifying numbers to be tagged as integers, when they should really be strings—accept integer types only when they are likely to have arithmetic operations performed on them.

Verifying the result

We have now completed our mapping and can verify what has been done by looking at the generated XML Schema file. Note that the generated schema uses some Oracle extensions to enable a non-XML formatted file to be represented as XML. In particular the nxsd namespace prefix is used to identify field separators and record terminators.

Note

The XML Schema generated can be edited manually. This is useful to support nested records (records inside other records) like those that may be found in a file containing order records with nested detail records (an order record contains multiple line item detail records). In this case, it is useful to use the wizard to generate a schema with order records and detail records at the same level. The schema can then be modified by hand to make the detail records children of the order records.

Verifying the result

Clicking the Test button brings up a simple test screen that allows us to apply our newly generated schema to the input document and examine the resulting XML.

Verifying the result

Clicking Next and then Finish will cause the generated schema file to be saved.

Finishing the wizards

Until now, no work has been saved, except for the XML Schema mapping the file content onto an XML structure. The rest of the adapter settings are not saved, and the endpoint is not set up until the Finish button is clicked on the completion screen, as shown in the following screenshot. Note that the file generated is a Web Service Description Language (WSDL) file with a JCA binding.

Finishing the wizards
Throttling the file and FTP adapter

The file and FTP adapters can consume a lot of resources when processing large files (thousands of records) because they keep sending messages with batches of records until the file is processed, while not waiting for the records to be processed. This behavior can be altered by forcing them to wait until a message is processed before sending another message. This is done by making the following changes to the WSDL generated by the wizard. This changes the one-way read operation into a two-way read operation that will not complete until a reply is generated by our code in BPEL or the Service Bus.

Creating a dummy message type

Add a new message definition to the WSDL like the one in the following code snippet:

<message name="Dummy_msg">
    <part xmlns:xsd="http://www.w3.org/2001/XMLSchema"
          name="Dummy" type="xsd:string"/>
</message>
Adding an output message to the read operation

In the <portType> , add an <output> element to the read <operation> element.

    <portType name="Read_ptt">
      <operation name="Read">
        <input message="tns:PayrollList_msg"/>
        <output message="tns:Dummy_msg"/>
      </operation>
    </portType>

In the <jca:operation> element, add an empty <output/> element.

Using the modified interface

The adapter will now have a two way interface and will need to receive a reply to a message before it sends the next batch of records, thus throttling the throughput. Note that no data needs to be sent in the reply message. This will limit the number of active operations to the number of threads assigned to the file adapter.

Writing a payroll file

We can now use the FTP adapter to write the payroll file to a remote filesystem. This requires us to create another adapter within our BPEL process, Mediator, or Service Bus. Setting up the FTP adapter to write to a remote filesystem is very similar to reading files with the file adapter.

Selecting the FTP connection

The first difference is that when using the FTP adapter instead of the File adapter, we have to specify an FTP connection to use in the underlying application server. This connection is set up in the application server running the adapter. For example, when using the WebLogic Application Server, the WebLogic Console can be used. The JNDI location of the connection factory is the location that must be provided to the wizard. The JNDI location must be configured in the application server using the administrative tools provided by the application server. Refer to your application server documentation for details on how to do this, as it varies between applications servers.

Selecting the FTP connection
Choosing the operation

When we choose the type of operation, we again notice that the screen is different from the file adapter, having an additional File Type category. This relates to the Ascii and Binary settings of an FTP session. ASCII causes the FTP transfer to adapt to changes in character encoding between the two systems, for example, converting between EBCDIC and ASCII or altering line feeds between systems. When using text files, it is generally a good idea to select the ASCII format. When sending binary files, it is vital that the binary file type is used to avoid any unfortunate and unwanted transformations.

Choosing the operation
Selecting the file destination

Choosing where the file is created is the same for both the FTP and the file adapter. Again, there is a choice of physical or logical paths. The file naming convention allows us some control over the name of the output file. In addition to the %SEQ% symbol that inserts a unique sequence number, it is also possible to insert a date or date time string into the filename. Note that in the current release, you cannot have both a date time string and a sequence string in the file naming convention.

Note

When using a date time string as part of the filename, files with the same date time string will overwrite each other. If this is the case, then consider using a sequence number instead.

When producing an output file, we can either keep appending to a single file by selecting the Append to existing file checkbox, which will keep growing without limit, or we can create new files, which will be dependent on attributes of the data being written. This is the normal way of working for non-XML files, and a new output file will be generated when one or more records are written to the adapter.

Selecting the file destination

The criteria for deciding to write to a new file are as follows:

  • Number of Messages Equals: This criterion forces the file to be written when the given number of messages is reached. This can be thought of as batching the output so that we reduce the number of files created.
  • Elapsed Time Exceeds: This criterion puts a time limit on how long the adapter will keep the file open. This places an upper time limit on creating an output file.
  • File Size Exceeds: This criterion allows us to limit the file sizes. As soon as a message causes the file to exceed the given size, no further message will be appended to this file.

These criteria can all be applied together, and as soon as one of them is satisfied, a new file will be created.

Tip

Writing XML files

When writing XML files, care should be taken to have only a single message per file, or else there will be multiple XML root elements in the document that will make it an invalid XML document.

Completing the FTP file writer service

The next step in the wizard is to define the actual record formats. This is exactly the same as when creating an input file. If we don't have an existing XML Schema for the output file, then we can use the wizard to create one if we have a sample file to use.

Finally, remember to run through the wizard to the end, and click Finish rather than Cancel or our entire configuration will be lost.

Moving, copying, and deleting files

Sometimes we will just want an adapter to move, copy, or delete a file without reading it. We will use the ability of the file adapter to move a file (refer to Chapter 16, Message Interaction Patterns, to set up a scheduler service within the SOA Suite).

The following steps will configure an outbound file or FTP adapter to move, copy, or delete a file without reading it.

Generating an adapter

Use the file or FTP adapter wizard to generate an outbound adapter with a file synchronous read or FTP synchronous get operation. You may also use a write or put operation. The data location should use physical directories, and the content should be marked as opaque, so that there is no need to understand the content of the file. Once this has been done, we will modify the WSDL generated to add additional operations.

Modifying the port type

First, we edit the WSDL file itself, which we call <AdapterServiceName>.wsdl. Modify the port type of the adapter to include the additional operations required, as shown in the following code snippet. Use the same message type as the operation generated by the wizard.

    <portType name="Write_ptt">
      <operation name="Write">
        <input message="tns:Write_msg"/>
        </operation>
      <operation name="Move">
          <input message="tns:Write_msg"/>
        </operation>
    </portType>

Note that the following operation names are supported:

  • Move
  • Copy
  • Delete
Modifying the binding

We now edit the bindings which are contained in a separate file to the WSDL called <AdapterServiceName>_<file/ftp>.jca. Bindings describe how the service description maps onto the physical service implementation. For now, we will just modify the binding to add the additional operations needed and map them to the appropriate implementation, as shown in the following code snippet:

<endpoint-interaction portType="Write_ptt" operation="Write">
  <interaction-spec className="oracle.tip.adapter.file.outbound.FileInteractionSpec">
    <property name="PhysicalDirectory" value="/user/oracle"/>
    <property name="FileNamingConvention" value="fred.txt"/>
    <property name="Append" value="false"/> 
  </interaction-spec>
</endpoint-interaction>
<endpoint-interaction portType="Write_ptt" operation="Move">
  <interaction-spec className="oracle.tip.adapter.file.outbound.FileIoInteractionSpec">
    <property name="Type" value="MOVE"/>
    <property name="SourcePhysicalDirectory" value="/usr/oracle"/>
    <property name="SourceFileName" value="fred.txt"/>
    <property name="TargetPhysicalDirectory" value="/usr/payroll"/>
    <property name="TargetFileName" value="Payroll.txt"/>
  </interaction-spec>
</endpoint-interaction>

Note that the following types are supported for use with the equivalent operation names. Observe that operation names are mixed case and types are uppercase:

  • MOVE
  • COPY
  • DELETE

The interaction-spec is used to define the types of operations supported by this particular binding. It references a Java class that provides the functionality and may have properties associated with it to configure its behavior. When using the FTP adapter for move, copy, and delete operations, the InteractionSpec property in the <AdapterServiceName>_ftp.jca file must be changed to oracle.tip.adapter.ftp.outbound.FTPIoInteractionSpec.

Configuring file locations through additional header properties

In order to allow runtime configuration of the source and destination locations, it is necessary to pass the source and destination information as properties of the call to the adapter.

For example, when using a BPEL invoke activity, we would pass the properties, as shown in the following code snippet:

<invoke name="Invoke_Move" inputVariable="Invoke_Move_InputVariable"
        partnerLink="MoveFileService" portType="ns1:Write_ptt"
        operation="Move">
  <bpelx:inputProperty name="jca.file.SourceDirectory"
                       variable="srcDir"/>
  <bpelx:inputProperty name="jca.file.SourceFileName"
                       variable="srcFile"/>
  <bpelx:inputProperty name="jca.file.TargetDirectory"
                       variable="dstDir"/>
  <bpelx:inputProperty name="jca.file.TargetFileName"
                       variable="dstFile"/>
</invoke>

These properties will override the default values in the .jca file and can be used to dynamically select at runtime the locations to be used for the move, copy, or delete operation. The properties may be edited in the source tab of the BPEL document, the .bpel file, or they may be created and modified through the Properties tab of the Invoke dialog in the BPEL visual editor.

Configuring file locations through additional header properties

With these modifications, the move, copy, or delete operations will appear as additional operations on the service that can be invoked from the service bus or within BPEL.

Adapter headers

In addition to the data associated with the service being provided by the adapter, sometimes referred to as the payload of the service, it is also possible to configure or obtain information about the operation of an adapter through header messages. Header messages are passed as properties to the adapter. We demonstrated this in the previous section when setting the header properties for the Move operation of the file adapter.

The properties passed are of the following two formats:

<property name="AdapterSpecificPropertyName" value="data"/>
<property name="AdapterSpecificPropertyName" variable="varName"/>

The former style allows the passing of literal values through the header. The latter allows the property to be set from a string variable.

Testing the file adapters

We can test the adapters by using them within a BPEL process or a Mediator like the one shown in the following screenshot. Building a BPEL process is covered in Chapter 5, Building Composite Services and Business Processes. This uses the two services we that have just described and links them with a copy operation that transforms data from one format to the other.

Testing the file adapters

Using a sample file

To make it easier to describe the format of the incoming file, the wizard asks us to specify a file to use as a sample. If necessary, we can skip rows in the file and determine the number of records to read. Obviously, reading a very large number of records may take a while, and if all the variability on the file is in the first ten records, then there is no point in wasting time reading any more sample records. We may also choose to restrict the number of rows processed at runtime.

Setting the character set needs to be done carefully, particularly in international environments where non-ASCII character sets may be common.

After selecting a sample file, the wizard will display an initial view of the file with a guess at the field separators.

Using a sample file

Record structure

The next step of the wizard allows us to describe how the records appear in the file.

The first option File contains only one record allows us to process the file as a single message. This can be useful when the file has multiple records, all of the same format, which we want to read as a single message. Use of this option disables batching.

The next option of File contains multiple record instances allows batching to take place. Records are either of the same type or of different types. They can only be marked of different types if they can be distinguished, based on the first field in the record. In other words, in order to choose the Multiple records are of different types, the first field in all the records must be a record type identifier. In the example shown in the preceding screenshot, the first field is either an H for Header records or an R for Records.

Record structure

Choosing a root element

The next step allows us to define the target namespace and root element of the schema that we are generating.

Choosing a root element
Note

Don't forget that when using the Native Format Builder wizard, we are just creating an XML Schema document that describes the native (non-XML) format data. Most of the time this schema is transparent to us. However, at times the XML constructs have to emerge, for example, when identifying a name for a root element. The file is described using an XML Schema extension known as NXSD.

As we can see the root element is mandatory. This root element acts as a wrapper for the records in a message. If message batching is set to 1, then each wrapper will have a single sub-element, namely, the record. If the message is set to greater than 1, then each wrapper will have at least one and possibly more sub-elements, each sub-element being a record. There can never be more sub-elements than the batch size.

Message delimiters

Having described the overall structure of the file, we can now drill down into the individual fields. To do this, we first specify the message delimiters.

In addition to field delimiters, we can also specify a record delimiter. Usually record delimiters are new lines. If fields are also wrapped in quotation marks, then these can be stripped off by specifying the Optionally enclosed by character.

Message delimiters

Record type names

The wizard will identify the types of records based on the first field in each record, as shown in the preceding screenshot. It is possible to ignore record types by selecting them and clicking Delete. If this is done by mistake, then it is possible to add them back by using the Add button. Only fields that exist in the sample data can be added in the wizard.

Record type names

Note that if we want to reset the record types screen, then the Scan button will rescan the sample file and look for all the different record types it contains.

The Record Name field can be set by double-clicking it and providing a suitable record name. This record name is the XML element name that encapsulates the record content.

Field properties

Now that we have identified record and field boundaries, we can drill down into the records and define the data types and names of individual fields. This is done for each record type in turn. We can select which records to define by selecting them from the Record Name drop-down box or by pressing the Next Record Type button.

It is important to be as liberal as possible when defining field data types because any mismatches will cause errors that will need to be handled. Being liberal in our record definitions will allow us to validate the messages, as described in Chapter 13, Building Validation into Services, without raising system errors.

Field properties

The Name column represents the element name of this field. The wizard will attempt to guess the type of the field, but it is important to always check this because the sample data you are using may not include all possibilities. A common error is identifying numbers to be tagged as integers, when they should really be strings—accept integer types only when they are likely to have arithmetic operations performed on them.

Verifying the result

We have now completed our mapping and can verify what has been done by looking at the generated XML Schema file. Note that the generated schema uses some Oracle extensions to enable a non-XML formatted file to be represented as XML. In particular the nxsd namespace prefix is used to identify field separators and record terminators.

Note

The XML Schema generated can be edited manually. This is useful to support nested records (records inside other records) like those that may be found in a file containing order records with nested detail records (an order record contains multiple line item detail records). In this case, it is useful to use the wizard to generate a schema with order records and detail records at the same level. The schema can then be modified by hand to make the detail records children of the order records.

Verifying the result

Clicking the Test button brings up a simple test screen that allows us to apply our newly generated schema to the input document and examine the resulting XML.

Verifying the result

Clicking Next and then Finish will cause the generated schema file to be saved.

Finishing the wizards

Until now, no work has been saved, except for the XML Schema mapping the file content onto an XML structure. The rest of the adapter settings are not saved, and the endpoint is not set up until the Finish button is clicked on the completion screen, as shown in the following screenshot. Note that the file generated is a Web Service Description Language (WSDL) file with a JCA binding.

Finishing the wizards
Throttling the file and FTP adapter

The file and FTP adapters can consume a lot of resources when processing large files (thousands of records) because they keep sending messages with batches of records until the file is processed, while not waiting for the records to be processed. This behavior can be altered by forcing them to wait until a message is processed before sending another message. This is done by making the following changes to the WSDL generated by the wizard. This changes the one-way read operation into a two-way read operation that will not complete until a reply is generated by our code in BPEL or the Service Bus.

Creating a dummy message type

Add a new message definition to the WSDL like the one in the following code snippet:

<message name="Dummy_msg">
    <part xmlns:xsd="http://www.w3.org/2001/XMLSchema"
          name="Dummy" type="xsd:string"/>
</message>
Adding an output message to the read operation

In the <portType> , add an <output> element to the read <operation> element.

    <portType name="Read_ptt">
      <operation name="Read">
        <input message="tns:PayrollList_msg"/>
        <output message="tns:Dummy_msg"/>
      </operation>
    </portType>

In the <jca:operation> element, add an empty <output/> element.

Using the modified interface

The adapter will now have a two way interface and will need to receive a reply to a message before it sends the next batch of records, thus throttling the throughput. Note that no data needs to be sent in the reply message. This will limit the number of active operations to the number of threads assigned to the file adapter.

Writing a payroll file

We can now use the FTP adapter to write the payroll file to a remote filesystem. This requires us to create another adapter within our BPEL process, Mediator, or Service Bus. Setting up the FTP adapter to write to a remote filesystem is very similar to reading files with the file adapter.

Selecting the FTP connection

The first difference is that when using the FTP adapter instead of the File adapter, we have to specify an FTP connection to use in the underlying application server. This connection is set up in the application server running the adapter. For example, when using the WebLogic Application Server, the WebLogic Console can be used. The JNDI location of the connection factory is the location that must be provided to the wizard. The JNDI location must be configured in the application server using the administrative tools provided by the application server. Refer to your application server documentation for details on how to do this, as it varies between applications servers.

Selecting the FTP connection
Choosing the operation

When we choose the type of operation, we again notice that the screen is different from the file adapter, having an additional File Type category. This relates to the Ascii and Binary settings of an FTP session. ASCII causes the FTP transfer to adapt to changes in character encoding between the two systems, for example, converting between EBCDIC and ASCII or altering line feeds between systems. When using text files, it is generally a good idea to select the ASCII format. When sending binary files, it is vital that the binary file type is used to avoid any unfortunate and unwanted transformations.

Choosing the operation
Selecting the file destination

Choosing where the file is created is the same for both the FTP and the file adapter. Again, there is a choice of physical or logical paths. The file naming convention allows us some control over the name of the output file. In addition to the %SEQ% symbol that inserts a unique sequence number, it is also possible to insert a date or date time string into the filename. Note that in the current release, you cannot have both a date time string and a sequence string in the file naming convention.

Note

When using a date time string as part of the filename, files with the same date time string will overwrite each other. If this is the case, then consider using a sequence number instead.

When producing an output file, we can either keep appending to a single file by selecting the Append to existing file checkbox, which will keep growing without limit, or we can create new files, which will be dependent on attributes of the data being written. This is the normal way of working for non-XML files, and a new output file will be generated when one or more records are written to the adapter.

Selecting the file destination

The criteria for deciding to write to a new file are as follows:

  • Number of Messages Equals: This criterion forces the file to be written when the given number of messages is reached. This can be thought of as batching the output so that we reduce the number of files created.
  • Elapsed Time Exceeds: This criterion puts a time limit on how long the adapter will keep the file open. This places an upper time limit on creating an output file.
  • File Size Exceeds: This criterion allows us to limit the file sizes. As soon as a message causes the file to exceed the given size, no further message will be appended to this file.

These criteria can all be applied together, and as soon as one of them is satisfied, a new file will be created.

Tip

Writing XML files

When writing XML files, care should be taken to have only a single message per file, or else there will be multiple XML root elements in the document that will make it an invalid XML document.

Completing the FTP file writer service

The next step in the wizard is to define the actual record formats. This is exactly the same as when creating an input file. If we don't have an existing XML Schema for the output file, then we can use the wizard to create one if we have a sample file to use.

Finally, remember to run through the wizard to the end, and click Finish rather than Cancel or our entire configuration will be lost.

Moving, copying, and deleting files

Sometimes we will just want an adapter to move, copy, or delete a file without reading it. We will use the ability of the file adapter to move a file (refer to Chapter 16, Message Interaction Patterns, to set up a scheduler service within the SOA Suite).

The following steps will configure an outbound file or FTP adapter to move, copy, or delete a file without reading it.

Generating an adapter

Use the file or FTP adapter wizard to generate an outbound adapter with a file synchronous read or FTP synchronous get operation. You may also use a write or put operation. The data location should use physical directories, and the content should be marked as opaque, so that there is no need to understand the content of the file. Once this has been done, we will modify the WSDL generated to add additional operations.

Modifying the port type

First, we edit the WSDL file itself, which we call <AdapterServiceName>.wsdl. Modify the port type of the adapter to include the additional operations required, as shown in the following code snippet. Use the same message type as the operation generated by the wizard.

    <portType name="Write_ptt">
      <operation name="Write">
        <input message="tns:Write_msg"/>
        </operation>
      <operation name="Move">
          <input message="tns:Write_msg"/>
        </operation>
    </portType>

Note that the following operation names are supported:

  • Move
  • Copy
  • Delete
Modifying the binding

We now edit the bindings which are contained in a separate file to the WSDL called <AdapterServiceName>_<file/ftp>.jca. Bindings describe how the service description maps onto the physical service implementation. For now, we will just modify the binding to add the additional operations needed and map them to the appropriate implementation, as shown in the following code snippet:

<endpoint-interaction portType="Write_ptt" operation="Write">
  <interaction-spec className="oracle.tip.adapter.file.outbound.FileInteractionSpec">
    <property name="PhysicalDirectory" value="/user/oracle"/>
    <property name="FileNamingConvention" value="fred.txt"/>
    <property name="Append" value="false"/> 
  </interaction-spec>
</endpoint-interaction>
<endpoint-interaction portType="Write_ptt" operation="Move">
  <interaction-spec className="oracle.tip.adapter.file.outbound.FileIoInteractionSpec">
    <property name="Type" value="MOVE"/>
    <property name="SourcePhysicalDirectory" value="/usr/oracle"/>
    <property name="SourceFileName" value="fred.txt"/>
    <property name="TargetPhysicalDirectory" value="/usr/payroll"/>
    <property name="TargetFileName" value="Payroll.txt"/>
  </interaction-spec>
</endpoint-interaction>

Note that the following types are supported for use with the equivalent operation names. Observe that operation names are mixed case and types are uppercase:

  • MOVE
  • COPY
  • DELETE

The interaction-spec is used to define the types of operations supported by this particular binding. It references a Java class that provides the functionality and may have properties associated with it to configure its behavior. When using the FTP adapter for move, copy, and delete operations, the InteractionSpec property in the <AdapterServiceName>_ftp.jca file must be changed to oracle.tip.adapter.ftp.outbound.FTPIoInteractionSpec.

Configuring file locations through additional header properties

In order to allow runtime configuration of the source and destination locations, it is necessary to pass the source and destination information as properties of the call to the adapter.

For example, when using a BPEL invoke activity, we would pass the properties, as shown in the following code snippet:

<invoke name="Invoke_Move" inputVariable="Invoke_Move_InputVariable"
        partnerLink="MoveFileService" portType="ns1:Write_ptt"
        operation="Move">
  <bpelx:inputProperty name="jca.file.SourceDirectory"
                       variable="srcDir"/>
  <bpelx:inputProperty name="jca.file.SourceFileName"
                       variable="srcFile"/>
  <bpelx:inputProperty name="jca.file.TargetDirectory"
                       variable="dstDir"/>
  <bpelx:inputProperty name="jca.file.TargetFileName"
                       variable="dstFile"/>
</invoke>

These properties will override the default values in the .jca file and can be used to dynamically select at runtime the locations to be used for the move, copy, or delete operation. The properties may be edited in the source tab of the BPEL document, the .bpel file, or they may be created and modified through the Properties tab of the Invoke dialog in the BPEL visual editor.

Configuring file locations through additional header properties

With these modifications, the move, copy, or delete operations will appear as additional operations on the service that can be invoked from the service bus or within BPEL.

Adapter headers

In addition to the data associated with the service being provided by the adapter, sometimes referred to as the payload of the service, it is also possible to configure or obtain information about the operation of an adapter through header messages. Header messages are passed as properties to the adapter. We demonstrated this in the previous section when setting the header properties for the Move operation of the file adapter.

The properties passed are of the following two formats:

<property name="AdapterSpecificPropertyName" value="data"/>
<property name="AdapterSpecificPropertyName" variable="varName"/>

The former style allows the passing of literal values through the header. The latter allows the property to be set from a string variable.

Testing the file adapters

We can test the adapters by using them within a BPEL process or a Mediator like the one shown in the following screenshot. Building a BPEL process is covered in Chapter 5, Building Composite Services and Business Processes. This uses the two services we that have just described and links them with a copy operation that transforms data from one format to the other.

Testing the file adapters

Record structure

The next step of the wizard allows us to describe how the records appear in the file.

The first option File contains only one record allows us to process the file as a single message. This can be useful when the file has multiple records, all of the same format, which we want to read as a single message. Use of this option disables batching.

The next option of File contains multiple record instances allows batching to take place. Records are either of the same type or of different types. They can only be marked of different types if they can be distinguished, based on the first field in the record. In other words, in order to choose the Multiple records are of different types, the first field in all the records must be a record type identifier. In the example shown in the preceding screenshot, the first field is either an H for Header records or an R for Records.

Record structure

Choosing a root element

The next step allows us to define the target namespace and root element of the schema that we are generating.

Choosing a root element
Note

Don't forget that when using the Native Format Builder wizard, we are just creating an XML Schema document that describes the native (non-XML) format data. Most of the time this schema is transparent to us. However, at times the XML constructs have to emerge, for example, when identifying a name for a root element. The file is described using an XML Schema extension known as NXSD.

As we can see the root element is mandatory. This root element acts as a wrapper for the records in a message. If message batching is set to 1, then each wrapper will have a single sub-element, namely, the record. If the message is set to greater than 1, then each wrapper will have at least one and possibly more sub-elements, each sub-element being a record. There can never be more sub-elements than the batch size.

Message delimiters

Having described the overall structure of the file, we can now drill down into the individual fields. To do this, we first specify the message delimiters.

In addition to field delimiters, we can also specify a record delimiter. Usually record delimiters are new lines. If fields are also wrapped in quotation marks, then these can be stripped off by specifying the Optionally enclosed by character.

Message delimiters

Record type names

The wizard will identify the types of records based on the first field in each record, as shown in the preceding screenshot. It is possible to ignore record types by selecting them and clicking Delete. If this is done by mistake, then it is possible to add them back by using the Add button. Only fields that exist in the sample data can be added in the wizard.

Record type names

Note that if we want to reset the record types screen, then the Scan button will rescan the sample file and look for all the different record types it contains.

The Record Name field can be set by double-clicking it and providing a suitable record name. This record name is the XML element name that encapsulates the record content.

Field properties

Now that we have identified record and field boundaries, we can drill down into the records and define the data types and names of individual fields. This is done for each record type in turn. We can select which records to define by selecting them from the Record Name drop-down box or by pressing the Next Record Type button.

It is important to be as liberal as possible when defining field data types because any mismatches will cause errors that will need to be handled. Being liberal in our record definitions will allow us to validate the messages, as described in Chapter 13, Building Validation into Services, without raising system errors.

Field properties

The Name column represents the element name of this field. The wizard will attempt to guess the type of each field, but it is important to always check this because the sample data you are using may not include all possibilities. A common error is tagging numeric fields as integers when they should really be strings; accept integer types only when the values are likely to have arithmetic operations performed on them.

Verifying the result

We have now completed our mapping and can verify what has been done by looking at the generated XML Schema file. Note that the generated schema uses some Oracle extensions to enable a non-XML formatted file to be represented as XML. In particular the nxsd namespace prefix is used to identify field separators and record terminators.
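The exact schema depends on the choices made in the wizard, but a fragment of the kind it generates typically resembles the following sketch. The element names and namespace are invented, and the nxsd attributes shown reflect our understanding of the extension, so check them against your own generated file:

<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            xmlns:nxsd="http://xmlns.oracle.com/pcbpel/nxsd"
            targetNamespace="http://TargetNamespace.com/payroll"
            nxsd:version="NXSD"
            nxsd:stream="chars"
            nxsd:encoding="US-ASCII">
  <xsd:element name="PayrollList">
    <xsd:complexType>
      <xsd:sequence>
        <xsd:element name="Record" maxOccurs="unbounded">
          <xsd:complexType>
            <xsd:sequence>
              <!-- each field is terminated by a comma; the last field in a record
                   is terminated by the end-of-line marker -->
              <xsd:element name="EmployeeId" type="xsd:string"
                           nxsd:style="terminated" nxsd:terminatedBy=","/>
              <xsd:element name="Name" type="xsd:string"
                           nxsd:style="terminated" nxsd:terminatedBy=","/>
              <xsd:element name="Salary" type="xsd:string"
                           nxsd:style="terminated" nxsd:terminatedBy="${eol}"/>
            </xsd:sequence>
          </xsd:complexType>
        </xsd:element>
      </xsd:sequence>
    </xsd:complexType>
  </xsd:element>
</xsd:schema>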

Note

The XML Schema generated can be edited manually. This is useful to support nested records (records inside other records) like those that may be found in a file containing order records with nested detail records (an order record contains multiple line item detail records). In this case, it is useful to use the wizard to generate a schema with order records and detail records at the same level. The schema can then be modified by hand to make the detail records children of the order records.

Verifying the result

Clicking the Test button brings up a simple test screen that allows us to apply our newly generated schema to the input document and examine the resulting XML.

Verifying the result

Clicking Next and then Finish will cause the generated schema file to be saved.

Finishing the wizards

Until now, no work has been saved, except for the XML Schema mapping the file content onto an XML structure. The rest of the adapter settings are not saved, and the endpoint is not set up until the Finish button is clicked on the completion screen, as shown in the following screenshot. Note that the file generated is a Web Service Description Language (WSDL) file with a JCA binding.

Finishing the wizards
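Behind the scenes, an 11g composite then wires the generated WSDL and JCA binding together in composite.xml. The following is a simplified sketch of the resulting service entry; the names and the interface URI are invented for illustration:

<service name="ReadPayrollFile">
  <interface.wsdl interface="http://xmlns.oracle.com/pcbpel/adapter/file/ReadPayrollFile#wsdl.interface(Read_ptt)"/>
  <binding.jca config="ReadPayrollFile_file.jca"/>
</service>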
Throttling the file and FTP adapter

The file and FTP adapters can consume a lot of resources when processing large files (thousands of records) because they keep sending batches of records, without waiting for each batch to be processed, until the whole file has been read. This behavior can be altered by forcing them to wait until each message has been processed before sending the next one. This is done by making the following changes to the WSDL generated by the wizard, which turn the one-way read operation into a two-way read operation that will not complete until a reply is generated by our code in BPEL or the Service Bus.

Creating a dummy message type

Add a new message definition to the WSDL like the one in the following code snippet:

<message name="Dummy_msg">
    <part xmlns:xsd="http://www.w3.org/2001/XMLSchema"
          name="Dummy" type="xsd:string"/>
</message>
Adding an output message to the read operation

In the <portType>, add an <output> element to the Read <operation> element.

    <portType name="Read_ptt">
      <operation name="Read">
        <input message="tns:PayrollList_msg"/>
        <output message="tns:Dummy_msg"/>
      </operation>
    </portType>

In the <jca:operation> element, add an empty <output/> element.

Using the modified interface

The adapter will now have a two-way interface and will need to receive a reply to each message before it sends the next batch of records, thus throttling the throughput. Note that no data needs to be sent in the reply message. This will limit the number of active operations to the number of threads assigned to the file adapter.

Writing a payroll file

We can now use the FTP adapter to write the payroll file to a remote filesystem. This requires us to create another adapter within our BPEL process, Mediator, or Service Bus. Setting up the FTP adapter to write to a remote filesystem is very similar to reading files with the file adapter.

Selecting the FTP connection

The first difference is that when using the FTP adapter instead of the file adapter, we have to specify an FTP connection defined in the underlying application server. This connection is set up in the application server running the adapter; for example, when using the WebLogic Application Server, the WebLogic Console can be used. The wizard must be given the JNDI location of the connection factory, which must first be configured in the application server using its administrative tools. Refer to your application server documentation for details, as the procedure varies between application servers.

Selecting the FTP connection
Choosing the operation

When we choose the type of operation, we again notice that the screen differs from the file adapter's, having an additional File Type category. This relates to the ASCII and binary transfer modes of an FTP session. ASCII mode causes the FTP transfer to adapt to changes in character encoding between the two systems, for example, converting between EBCDIC and ASCII or altering line endings. When transferring text files, it is generally a good idea to select the ASCII format. When sending binary files, it is vital that the binary file type is used to avoid any unwanted transformations.

Choosing the operation
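The selection made on this screen ends up in the generated FTP binding. As a rough sketch only (the class and property names below are our assumptions and should be verified against the .jca file the wizard produces), an ASCII transfer would be recorded along these lines:

<interaction-spec className="oracle.tip.adapter.ftp.outbound.FTPInteractionSpec">
  <!-- ascii adapts encodings and line endings; binary transfers the bytes untouched -->
  <property name="FileType" value="ascii"/>
</interaction-spec>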
Selecting the file destination

Choosing where the file is created is the same for both the FTP and the file adapter. Again, there is a choice of physical or logical paths. The file naming convention allows us some control over the name of the output file. In addition to the %SEQ% symbol that inserts a unique sequence number, it is also possible to insert a date or date time string into the filename. Note that in the current release, you cannot have both a date time string and a sequence string in the file naming convention.

Note

When using a date time string as part of the filename, files generated with the same date time string will overwrite each other. If this is a risk, then consider using a sequence number instead.
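For example, a naming convention of payroll_%SEQ%.txt would produce payroll_1.txt, payroll_2.txt, and so on. In the generated binding this surfaces as the FileNamingConvention property that we will meet again later in this chapter; the value shown here is illustrative:

<property name="FileNamingConvention" value="payroll_%SEQ%.txt"/>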

When producing an output file, we can either keep appending to a single file, by selecting the Append to existing file checkbox, in which case the file keeps growing without limit, or we can create new files, with the rollover to a new file depending on attributes of the data being written. The latter is the normal way of working for non-XML files, and a new output file will be generated once one or more records have been written to the adapter.

Selecting the file destination

The criteria for deciding to write to a new file are as follows:

  • Number of Messages Equals: This criterion forces the file to be written when the given number of messages is reached. This can be thought of as batching the output so that we reduce the number of files created.
  • Elapsed Time Exceeds: This criterion puts a time limit on how long the adapter will keep the file open. This places an upper time limit on creating an output file.
  • File Size Exceeds: This criterion allows us to limit the file sizes. As soon as a message causes the file to exceed the given size, no further message will be appended to this file.

These criteria can all be applied together, and as soon as one of them is satisfied, a new file will be created.
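In the generated .jca binding, these thresholds appear as properties of the outbound interaction spec. The following fragment is a hypothetical example; the property names reflect our understanding of the file adapter and should be checked against the file the wizard actually generates:

<!-- illustrative rollover thresholds: roll over after 100 messages,
     after an elapsed-time limit, or once a size limit is exceeded -->
<property name="NumberMessages" value="100"/>
<property name="ElapsedTime" value="3600"/>
<property name="FileSize" value="1024000"/>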

Tip

Writing XML files

When writing XML files, care should be taken to have only a single message per file, or else there will be multiple XML root elements in the document that will make it an invalid XML document.

Completing the FTP file writer service

The next step in the wizard is to define the actual record formats. This is exactly the same as when creating an input file. If we don't have an existing XML Schema for the output file, then we can use the wizard to create one if we have a sample file to use.

Finally, remember to run through the wizard to the end and click Finish rather than Cancel, or else our entire configuration will be lost.

Moving, copying, and deleting files

Sometimes we will just want an adapter to move, copy, or delete a file without reading it. We will use the ability of the file adapter to move a file (refer to Chapter 16, Message Interaction Patterns, for how to set up a scheduler service within the SOA Suite).

The following steps will configure an outbound file or FTP adapter to move, copy, or delete a file without reading it.

Generating an adapter

Use the file or FTP adapter wizard to generate an outbound adapter with a file synchronous read or FTP synchronous get operation. You may also use a write or put operation. The data location should use physical directories, and the content should be marked as opaque, so that there is no need to understand the content of the file. Once this has been done, we will modify the WSDL generated to add additional operations.

Modifying the port type

First, we edit the WSDL file itself, which we call <AdapterServiceName>.wsdl. Modify the port type of the adapter to include the additional operations required, as shown in the following code snippet. Use the same message type as the operation generated by the wizard.

    <portType name="Write_ptt">
      <operation name="Write">
        <input message="tns:Write_msg"/>
        </operation>
      <operation name="Move">
          <input message="tns:Write_msg"/>
        </operation>
    </portType>

Note that the following operation names are supported:

  • Move
  • Copy
  • Delete
Modifying the binding

We now edit the bindings, which are contained in a file separate from the WSDL, named <AdapterServiceName>_<file/ftp>.jca. Bindings describe how the service description maps onto the physical service implementation. For now, we will just modify the binding to add the additional operations needed and map them to the appropriate implementation, as shown in the following code snippet:

<endpoint-interaction portType="Write_ptt" operation="Write">
  <interaction-spec className="oracle.tip.adapter.file.outbound.FileInteractionSpec">
    <property name="PhysicalDirectory" value="/user/oracle"/>
    <property name="FileNamingConvention" value="fred.txt"/>
    <property name="Append" value="false"/> 
  </interaction-spec>
</endpoint-interaction>
<endpoint-interaction portType="Write_ptt" operation="Move">
  <interaction-spec className="oracle.tip.adapter.file.outbound.FileIoInteractionSpec">
    <property name="Type" value="MOVE"/>
    <property name="SourcePhysicalDirectory" value="/usr/oracle"/>
    <property name="SourceFileName" value="fred.txt"/>
    <property name="TargetPhysicalDirectory" value="/usr/payroll"/>
    <property name="TargetFileName" value="Payroll.txt"/>
  </interaction-spec>
</endpoint-interaction>

Note that the following types are supported for use with the equivalent operation names. Observe that operation names are mixed case and types are uppercase:

  • MOVE
  • COPY
  • DELETE

The interaction-spec is used to define the types of operations supported by this particular binding. It references a Java class that provides the functionality and may have properties associated with it to configure its behavior. When using the FTP adapter for move, copy, and delete operations, the InteractionSpec property in the <AdapterServiceName>_ftp.jca file must be changed to oracle.tip.adapter.ftp.outbound.FTPIoInteractionSpec.
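Putting that together, a Move binding for the FTP adapter looks just like the file adapter example above with the interaction spec class swapped in; the directories and filenames are, as before, only examples:

<endpoint-interaction portType="Write_ptt" operation="Move">
  <interaction-spec className="oracle.tip.adapter.ftp.outbound.FTPIoInteractionSpec">
    <property name="Type" value="MOVE"/>
    <property name="SourcePhysicalDirectory" value="/usr/oracle"/>
    <property name="SourceFileName" value="fred.txt"/>
    <property name="TargetPhysicalDirectory" value="/usr/payroll"/>
    <property name="TargetFileName" value="Payroll.txt"/>
  </interaction-spec>
</endpoint-interaction>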

Configuring file locations through additional header properties

In order to allow runtime configuration of the source and destination locations, it is necessary to pass the source and destination information as properties of the call to the adapter.

For example, when using a BPEL invoke activity, we would pass the properties, as shown in the following code snippet:

<invoke name="Invoke_Move" inputVariable="Invoke_Move_InputVariable"
        partnerLink="MoveFileService" portType="ns1:Write_ptt"
        operation="Move">
  <bpelx:inputProperty name="jca.file.SourceDirectory"
                       variable="srcDir"/>
  <bpelx:inputProperty name="jca.file.SourceFileName"
                       variable="srcFile"/>
  <bpelx:inputProperty name="jca.file.TargetDirectory"
                       variable="dstDir"/>
  <bpelx:inputProperty name="jca.file.TargetFileName"
                       variable="dstFile"/>
</invoke>

These properties will override the default values in the .jca file and can be used to select at runtime the locations to be used for the move, copy, or delete operation. The properties may be edited in the source view of the BPEL document (the .bpel file), or they may be created and modified through the Properties tab of the Invoke dialog in the BPEL visual editor.

Configuring file locations through additional header properties

With these modifications, the move, copy, or delete operations will appear as additional operations on the service that can be invoked from the service bus or within BPEL.

Adapter headers

In addition to the data associated with the service being provided by the adapter, sometimes referred to as the payload of the service, it is also possible to configure or obtain information about the operation of an adapter through header messages. Header messages are passed as properties to the adapter. We demonstrated this in the previous section when setting the header properties for the Move operation of the file adapter.

The properties passed are of the following two formats:

<property name="AdapterSpecificPropertyName" value="data"/>
<property name="AdapterSpecificPropertyName" variable="varName"/>

The former style allows the passing of literal values through the header. The latter allows the property to be set from a string variable.
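As a further illustration, an outbound Write can have its target filename chosen at runtime by passing the file adapter's filename header on the invoke. The property name jca.file.FileName follows the same jca.file.* pattern as the Move example above, but as we have not shown it earlier, treat it as an assumption to confirm against the adapter documentation:

<invoke name="Invoke_Write" inputVariable="Invoke_Write_InputVariable"
        partnerLink="WriteFileService" portType="ns1:Write_ptt"
        operation="Write">
  <!-- take the output filename from a BPEL string variable at runtime -->
  <bpelx:inputProperty name="jca.file.FileName" variable="outFileName"/>
</invoke>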

Testing the file adapters

We can test the adapters by using them within a BPEL process or a Mediator like the one shown in the following screenshot. Building a BPEL process is covered in Chapter 5, Building Composite Services and Business Processes. This uses the two services that we have just described and links them with a copy operation that transforms data from one format to the other.

Testing the file adapters

Choosing a root element

The next step allows us to define the target namespace and root element of the schema that we are generating.

Choosing a root element
Note

Don't forget that when using the Native Format Builder wizard, we are just creating an XML Schema document that describes the native (non-XML) format data. Most of the time this schema is transparent to us. However, at times the XML constructs have to emerge, for example, when identifying a name for a root element. The file is described using an XML Schema extension known as NXSD.

As we can see the root element is mandatory. This root element acts as a wrapper for the records in a message. If message batching is set to 1, then each wrapper will have a single sub-element, namely, the record. If the message is set to greater than 1, then each wrapper will have at least one and possibly more sub-elements, each sub-element being a record. There can never be more sub-elements than the batch size.

Message delimiters

Having described the overall structure of the file, we can now drill down into the individual fields. To do this, we first specify the message delimiters.

In addition to field delimiters, we can also specify a record delimiter. Usually record delimiters are new lines. If fields are also wrapped in quotation marks, then these can be stripped off by specifying the Optionally enclosed by character.

Message delimiters

Record type names

The wizard will identify the types of records based on the first field in each record, as shown in the preceding screenshot. It is possible to ignore record types by selecting them and clicking Delete. If this is done by mistake, then it is possible to add them back by using the Add button. Only fields that exist in the sample data can be added in the wizard.

Record type names

Note that if we want to reset the record types screen, then the Scan button will rescan the sample file and look for all the different record types it contains.

The Record Name field can be set by double-clicking it and providing a suitable record name. This record name is the XML element name that encapsulates the record content.

Field properties

Now that we have identified record and field boundaries, we can drill down into the records and define the data types and names of individual fields. This is done for each record type in turn. We can select which records to define by selecting them from the Record Name drop-down box or by pressing the Next Record Type button.

It is important to be as liberal as possible when defining field data types because any mismatches will cause errors that will need to be handled. Being liberal in our record definitions will allow us to validate the messages, as described in Chapter 13, Building Validation into Services, without raising system errors.

Field properties

The Name column represents the element name of this field. The wizard will attempt to guess the type of the field, but it is important to always check this because the sample data you are using may not include all possibilities. A common error is identifying numbers to be tagged as integers, when they should really be strings—accept integer types only when they are likely to have arithmetic operations performed on them.

Verifying the result

We have now completed our mapping and can verify what has been done by looking at the generated XML Schema file. Note that the generated schema uses some Oracle extensions to enable a non-XML formatted file to be represented as XML. In particular the nxsd namespace prefix is used to identify field separators and record terminators.

Note

The XML Schema generated can be edited manually. This is useful to support nested records (records inside other records) like those that may be found in a file containing order records with nested detail records (an order record contains multiple line item detail records). In this case, it is useful to use the wizard to generate a schema with order records and detail records at the same level. The schema can then be modified by hand to make the detail records children of the order records.

Verifying the result

Clicking the Test button brings up a simple test screen that allows us to apply our newly generated schema to the input document and examine the resulting XML.

Verifying the result

Clicking Next and then Finish will cause the generated schema file to be saved.

Finishing the wizards

Until now, no work has been saved, except for the XML Schema mapping the file content onto an XML structure. The rest of the adapter settings are not saved, and the endpoint is not set up until the Finish button is clicked on the completion screen, as shown in the following screenshot. Note that the file generated is a Web Service Description Language (WSDL) file with a JCA binding.

Finishing the wizards
Throttling the file and FTP adapter

The file and FTP adapters can consume a lot of resources when processing large files (thousands of records) because they keep sending messages with batches of records until the file is processed, while not waiting for the records to be processed. This behavior can be altered by forcing them to wait until a message is processed before sending another message. This is done by making the following changes to the WSDL generated by the wizard. This changes the one-way read operation into a two-way read operation that will not complete until a reply is generated by our code in BPEL or the Service Bus.

Creating a dummy message type

Add a new message definition to the WSDL like the one in the following code snippet:

<message name="Dummy_msg">
    <part xmlns:xsd="http://www.w3.org/2001/XMLSchema"
          name="Dummy" type="xsd:string"/>
</message>
Adding an output message to the read operation

In the <portType> , add an <output> element to the read <operation> element.

    <portType name="Read_ptt">
      <operation name="Read">
        <input message="tns:PayrollList_msg"/>
        <output message="tns:Dummy_msg"/>
      </operation>
    </portType>

In the <jca:operation> element, add an empty <output/> element.

Using the modified interface

The adapter will now have a two way interface and will need to receive a reply to a message before it sends the next batch of records, thus throttling the throughput. Note that no data needs to be sent in the reply message. This will limit the number of active operations to the number of threads assigned to the file adapter.

Writing a payroll file

We can now use the FTP adapter to write the payroll file to a remote filesystem. This requires us to create another adapter within our BPEL process, Mediator, or Service Bus. Setting up the FTP adapter to write to a remote filesystem is very similar to reading files with the file adapter.

Selecting the FTP connection

The first difference is that when using the FTP adapter instead of the File adapter, we have to specify an FTP connection to use in the underlying application server. This connection is set up in the application server running the adapter. For example, when using the WebLogic Application Server, the WebLogic Console can be used. The JNDI location of the connection factory is the location that must be provided to the wizard. The JNDI location must be configured in the application server using the administrative tools provided by the application server. Refer to your application server documentation for details on how to do this, as it varies between applications servers.

Selecting the FTP connection
Choosing the operation

When we choose the type of operation, we again notice that the screen is different from the file adapter, having an additional File Type category. This relates to the Ascii and Binary settings of an FTP session. ASCII causes the FTP transfer to adapt to changes in character encoding between the two systems, for example, converting between EBCDIC and ASCII or altering line feeds between systems. When using text files, it is generally a good idea to select the ASCII format. When sending binary files, it is vital that the binary file type is used to avoid any unfortunate and unwanted transformations.

Choosing the operation
Selecting the file destination

Choosing where the file is created is the same for both the FTP and the file adapter. Again, there is a choice of physical or logical paths. The file naming convention allows us some control over the name of the output file. In addition to the %SEQ% symbol that inserts a unique sequence number, it is also possible to insert a date or date time string into the filename. Note that in the current release, you cannot have both a date time string and a sequence string in the file naming convention.

Note

When using a date time string as part of the filename, files with the same date time string will overwrite each other. If this is the case, then consider using a sequence number instead.

When producing an output file, we can either keep appending to a single file by selecting the Append to existing file checkbox, which will keep growing without limit, or we can create new files, which will be dependent on attributes of the data being written. This is the normal way of working for non-XML files, and a new output file will be generated when one or more records are written to the adapter.

Selecting the file destination

The criteria for deciding to write to a new file are as follows:

  • Number of Messages Equals: This criterion forces the file to be written when the given number of messages is reached. This can be thought of as batching the output so that we reduce the number of files created.
  • Elapsed Time Exceeds: This criterion puts a time limit on how long the adapter will keep the file open. This places an upper time limit on creating an output file.
  • File Size Exceeds: This criterion allows us to limit the file sizes. As soon as a message causes the file to exceed the given size, no further message will be appended to this file.

These criteria can all be applied together, and as soon as one of them is satisfied, a new file will be created.

Tip

Writing XML files

When writing XML files, care should be taken to have only a single message per file, or else there will be multiple XML root elements in the document that will make it an invalid XML document.

Completing the FTP file writer service

The next step in the wizard is to define the actual record formats. This is exactly the same as when creating an input file. If we don't have an existing XML Schema for the output file, then we can use the wizard to create one if we have a sample file to use.

Finally, remember to run through the wizard to the end, and click Finish rather than Cancel or our entire configuration will be lost.

Moving, copying, and deleting files

Sometimes we will just want an adapter to move, copy, or delete a file without reading it. We will use the ability of the file adapter to move a file (refer to Chapter 16, Message Interaction Patterns, to set up a scheduler service within the SOA Suite).

The following steps will configure an outbound file or FTP adapter to move, copy, or delete a file without reading it.

Generating an adapter

Use the file or FTP adapter wizard to generate an outbound adapter with a file synchronous read or FTP synchronous get operation. You may also use a write or put operation. The data location should use physical directories, and the content should be marked as opaque, so that there is no need to understand the content of the file. Once this has been done, we will modify the WSDL generated to add additional operations.

Modifying the port type

First, we edit the WSDL file itself, which we call <AdapterServiceName>.wsdl. Modify the port type of the adapter to include the additional operations required, as shown in the following code snippet. Use the same message type as the operation generated by the wizard.

    <portType name="Write_ptt">
      <operation name="Write">
        <input message="tns:Write_msg"/>
        </operation>
      <operation name="Move">
          <input message="tns:Write_msg"/>
        </operation>
    </portType>

Note that the following operation names are supported:

  • Move
  • Copy
  • Delete
Modifying the binding

We now edit the bindings which are contained in a separate file to the WSDL called <AdapterServiceName>_<file/ftp>.jca. Bindings describe how the service description maps onto the physical service implementation. For now, we will just modify the binding to add the additional operations needed and map them to the appropriate implementation, as shown in the following code snippet:

<endpoint-interaction portType="Write_ptt" operation="Write">
  <interaction-spec className="oracle.tip.adapter.file.outbound.FileInteractionSpec">
    <property name="PhysicalDirectory" value="/user/oracle"/>
    <property name="FileNamingConvention" value="fred.txt"/>
    <property name="Append" value="false"/> 
  </interaction-spec>
</endpoint-interaction>
<endpoint-interaction portType="Write_ptt" operation="Move">
  <interaction-spec className="oracle.tip.adapter.file.outbound.FileIoInteractionSpec">
    <property name="Type" value="MOVE"/>
    <property name="SourcePhysicalDirectory" value="/usr/oracle"/>
    <property name="SourceFileName" value="fred.txt"/>
    <property name="TargetPhysicalDirectory" value="/usr/payroll"/>
    <property name="TargetFileName" value="Payroll.txt"/>
  </interaction-spec>
</endpoint-interaction>

Note that the following types are supported for use with the equivalent operation names. Observe that operation names are mixed case and types are uppercase:

  • MOVE
  • COPY
  • DELETE

The interaction-spec is used to define the types of operations supported by this particular binding. It references a Java class that provides the functionality and may have properties associated with it to configure its behavior. When using the FTP adapter for move, copy, and delete operations, the InteractionSpec property in the <AdapterServiceName>_ftp.jca file must be changed to oracle.tip.adapter.ftp.outbound.FTPIoInteractionSpec.

Configuring file locations through additional header properties

In order to allow runtime configuration of the source and destination locations, it is necessary to pass the source and destination information as properties of the call to the adapter.

For example, when using a BPEL invoke activity, we would pass the properties, as shown in the following code snippet:

<invoke name="Invoke_Move" inputVariable="Invoke_Move_InputVariable"
        partnerLink="MoveFileService" portType="ns1:Write_ptt"
        operation="Move">
  <bpelx:inputProperty name="jca.file.SourceDirectory"
                       variable="srcDir"/>
  <bpelx:inputProperty name="jca.file.SourceFileName"
                       variable="srcFile"/>
  <bpelx:inputProperty name="jca.file.TargetDirectory"
                       variable="dstDir"/>
  <bpelx:inputProperty name="jca.file.TargetFileName"
                       variable="dstFile"/>
</invoke>

These properties will override the default values in the .jca file and can be used to dynamically select at runtime the locations to be used for the move, copy, or delete operation. The properties may be edited in the source tab of the BPEL document, the .bpel file, or they may be created and modified through the Properties tab of the Invoke dialog in the BPEL visual editor.

Configuring file locations through additional header properties

With these modifications, the move, copy, or delete operations will appear as additional operations on the service that can be invoked from the service bus or within BPEL.

Adapter headers

In addition to the data associated with the service being provided by the adapter, sometimes referred to as the payload of the service, it is also possible to configure or obtain information about the operation of an adapter through header messages. Header messages are passed as properties to the adapter. We demonstrated this in the previous section when setting the header properties for the Move operation of the file adapter.

The properties passed are of the following two formats:

<property name="AdapterSpecificPropertyName" value="data"/>
<property name="AdapterSpecificPropertyName" variable="varName"/>

The former style allows the passing of literal values through the header. The latter allows the property to be set from a string variable.

Testing the file adapters

We can test the adapters by using them within a BPEL process or a Mediator like the one shown in the following screenshot. Building a BPEL process is covered in Chapter 5, Building Composite Services and Business Processes. This uses the two services we that have just described and links them with a copy operation that transforms data from one format to the other.

Testing the file adapters

Message delimiters

Having described the overall structure of the file, we can now drill down into the individual fields. To do this, we first specify the message delimiters.

In addition to field delimiters, we can also specify a record delimiter. Usually record delimiters are new lines. If fields are also wrapped in quotation marks, then these can be stripped off by specifying the Optionally enclosed by character.

Message delimiters

Record type names

The wizard will identify the types of records based on the first field in each record, as shown in the preceding screenshot. It is possible to ignore record types by selecting them and clicking Delete. If this is done by mistake, then it is possible to add them back by using the Add button. Only fields that exist in the sample data can be added in the wizard.

Record type names

Note that if we want to reset the record types screen, then the Scan button will rescan the sample file and look for all the different record types it contains.

The Record Name field can be set by double-clicking it and providing a suitable record name. This record name is the XML element name that encapsulates the record content.

Field properties

Now that we have identified record and field boundaries, we can drill down into the records and define the data types and names of individual fields. This is done for each record type in turn. We can select which records to define by selecting them from the Record Name drop-down box or by pressing the Next Record Type button.

It is important to be as liberal as possible when defining field data types because any mismatches will cause errors that will need to be handled. Being liberal in our record definitions will allow us to validate the messages, as described in Chapter 13, Building Validation into Services, without raising system errors.

Field properties

The Name column represents the element name of this field. The wizard will attempt to guess the type of the field, but it is important to always check this because the sample data you are using may not include all possibilities. A common error is identifying numbers to be tagged as integers, when they should really be strings—accept integer types only when they are likely to have arithmetic operations performed on them.

Verifying the result

We have now completed our mapping and can verify what has been done by looking at the generated XML Schema file. Note that the generated schema uses some Oracle extensions to enable a non-XML formatted file to be represented as XML. In particular the nxsd namespace prefix is used to identify field separators and record terminators.

Note

The XML Schema generated can be edited manually. This is useful to support nested records (records inside other records) like those that may be found in a file containing order records with nested detail records (an order record contains multiple line item detail records). In this case, it is useful to use the wizard to generate a schema with order records and detail records at the same level. The schema can then be modified by hand to make the detail records children of the order records.

Verifying the result

Clicking the Test button brings up a simple test screen that allows us to apply our newly generated schema to the input document and examine the resulting XML.

Verifying the result

Clicking Next and then Finish will cause the generated schema file to be saved.

Finishing the wizards

Until now, no work has been saved, except for the XML Schema mapping the file content onto an XML structure. The rest of the adapter settings are not saved, and the endpoint is not set up until the Finish button is clicked on the completion screen, as shown in the following screenshot. Note that the file generated is a Web Service Description Language (WSDL) file with a JCA binding.

Finishing the wizards
Throttling the file and FTP adapter

The file and FTP adapters can consume a lot of resources when processing large files (thousands of records) because they keep sending messages with batches of records until the file is processed, while not waiting for the records to be processed. This behavior can be altered by forcing them to wait until a message is processed before sending another message. This is done by making the following changes to the WSDL generated by the wizard. This changes the one-way read operation into a two-way read operation that will not complete until a reply is generated by our code in BPEL or the Service Bus.

Creating a dummy message type

Add a new message definition to the WSDL like the one in the following code snippet:

<message name="Dummy_msg">
    <part xmlns:xsd="http://www.w3.org/2001/XMLSchema"
          name="Dummy" type="xsd:string"/>
</message>
Adding an output message to the read operation

In the <portType> , add an <output> element to the read <operation> element.

    <portType name="Read_ptt">
      <operation name="Read">
        <input message="tns:PayrollList_msg"/>
        <output message="tns:Dummy_msg"/>
      </operation>
    </portType>

In the <jca:operation> element, add an empty <output/> element.

Using the modified interface

The adapter will now have a two way interface and will need to receive a reply to a message before it sends the next batch of records, thus throttling the throughput. Note that no data needs to be sent in the reply message. This will limit the number of active operations to the number of threads assigned to the file adapter.

Writing a payroll file

We can now use the FTP adapter to write the payroll file to a remote filesystem. This requires us to create another adapter within our BPEL process, Mediator, or Service Bus. Setting up the FTP adapter to write to a remote filesystem is very similar to reading files with the file adapter.

Selecting the FTP connection

The first difference is that when using the FTP adapter instead of the File adapter, we have to specify an FTP connection to use in the underlying application server. This connection is set up in the application server running the adapter. For example, when using the WebLogic Application Server, the WebLogic Console can be used. The JNDI location of the connection factory is the location that must be provided to the wizard. The JNDI location must be configured in the application server using the administrative tools provided by the application server. Refer to your application server documentation for details on how to do this, as it varies between applications servers.

Selecting the FTP connection
Choosing the operation

When we choose the type of operation, we again notice that the screen is different from the file adapter, having an additional File Type category. This relates to the Ascii and Binary settings of an FTP session. ASCII causes the FTP transfer to adapt to changes in character encoding between the two systems, for example, converting between EBCDIC and ASCII or altering line feeds between systems. When using text files, it is generally a good idea to select the ASCII format. When sending binary files, it is vital that the binary file type is used to avoid any unfortunate and unwanted transformations.

Choosing the operation
Selecting the file destination

Choosing where the file is created is the same for both the FTP and the file adapter. Again, there is a choice of physical or logical paths. The file naming convention allows us some control over the name of the output file. In addition to the %SEQ% symbol that inserts a unique sequence number, it is also possible to insert a date or date time string into the filename. Note that in the current release, you cannot have both a date time string and a sequence string in the file naming convention.

Note

When using a date time string as part of the filename, files with the same date time string will overwrite each other. If this is the case, then consider using a sequence number instead.

When producing an output file, we can either keep appending to a single file by selecting the Append to existing file checkbox, which will keep growing without limit, or we can create new files, which will be dependent on attributes of the data being written. This is the normal way of working for non-XML files, and a new output file will be generated when one or more records are written to the adapter.

Selecting the file destination

The criteria for deciding to write to a new file are as follows:

  • Number of Messages Equals: This criterion forces the file to be written when the given number of messages is reached. This can be thought of as batching the output so that we reduce the number of files created.
  • Elapsed Time Exceeds: This criterion puts a time limit on how long the adapter will keep the file open. This places an upper time limit on creating an output file.
  • File Size Exceeds: This criterion allows us to limit the file sizes. As soon as a message causes the file to exceed the given size, no further message will be appended to this file.

These criteria can all be applied together, and as soon as one of them is satisfied, a new file will be created.

Tip

Writing XML files

When writing XML files, care should be taken to have only a single message per file, or else there will be multiple XML root elements in the document that will make it an invalid XML document.

Completing the FTP file writer service

The next step in the wizard is to define the actual record formats. This is exactly the same as when creating an input file. If we don't have an existing XML Schema for the output file, then we can use the wizard to create one if we have a sample file to use.

Finally, remember to run through the wizard to the end, and click Finish rather than Cancel or our entire configuration will be lost.

Moving, copying, and deleting files

Sometimes we will just want an adapter to move, copy, or delete a file without reading it. We will use the ability of the file adapter to move a file (refer to Chapter 16, Message Interaction Patterns, to set up a scheduler service within the SOA Suite).

The following steps will configure an outbound file or FTP adapter to move, copy, or delete a file without reading it.

Generating an adapter

Use the file or FTP adapter wizard to generate an outbound adapter with a file synchronous read or FTP synchronous get operation. You may also use a write or put operation. The data location should use physical directories, and the content should be marked as opaque, so that there is no need to understand the content of the file. Once this has been done, we will modify the WSDL generated to add additional operations.

Modifying the port type

First, we edit the WSDL file itself, which we call <AdapterServiceName>.wsdl. Modify the port type of the adapter to include the additional operations required, as shown in the following code snippet. Use the same message type as the operation generated by the wizard.

    <portType name="Write_ptt">
      <operation name="Write">
        <input message="tns:Write_msg"/>
        </operation>
      <operation name="Move">
          <input message="tns:Write_msg"/>
        </operation>
    </portType>

Note that the following operation names are supported:

  • Move
  • Copy
  • Delete
Modifying the binding

We now edit the bindings which are contained in a separate file to the WSDL called <AdapterServiceName>_<file/ftp>.jca. Bindings describe how the service description maps onto the physical service implementation. For now, we will just modify the binding to add the additional operations needed and map them to the appropriate implementation, as shown in the following code snippet:

<endpoint-interaction portType="Write_ptt" operation="Write">
  <interaction-spec className="oracle.tip.adapter.file.outbound.FileInteractionSpec">
    <property name="PhysicalDirectory" value="/user/oracle"/>
    <property name="FileNamingConvention" value="fred.txt"/>
    <property name="Append" value="false"/> 
  </interaction-spec>
</endpoint-interaction>
<endpoint-interaction portType="Write_ptt" operation="Move">
  <interaction-spec className="oracle.tip.adapter.file.outbound.FileIoInteractionSpec">
    <property name="Type" value="MOVE"/>
    <property name="SourcePhysicalDirectory" value="/usr/oracle"/>
    <property name="SourceFileName" value="fred.txt"/>
    <property name="TargetPhysicalDirectory" value="/usr/payroll"/>
    <property name="TargetFileName" value="Payroll.txt"/>
  </interaction-spec>
</endpoint-interaction>

Note that the following types are supported for use with the equivalent operation names. Observe that operation names are mixed case and types are uppercase:

  • MOVE
  • COPY
  • DELETE

The interaction-spec is used to define the types of operations supported by this particular binding. It references a Java class that provides the functionality and may have properties associated with it to configure its behavior. When using the FTP adapter for move, copy, and delete operations, the InteractionSpec property in the <AdapterServiceName>_ftp.jca file must be changed to oracle.tip.adapter.ftp.outbound.FTPIoInteractionSpec.

Configuring file locations through additional header properties

In order to allow runtime configuration of the source and destination locations, it is necessary to pass the source and destination information as properties of the call to the adapter.

For example, when using a BPEL invoke activity, we would pass the properties, as shown in the following code snippet:

<invoke name="Invoke_Move" inputVariable="Invoke_Move_InputVariable"
        partnerLink="MoveFileService" portType="ns1:Write_ptt"
        operation="Move">
  <bpelx:inputProperty name="jca.file.SourceDirectory"
                       variable="srcDir"/>
  <bpelx:inputProperty name="jca.file.SourceFileName"
                       variable="srcFile"/>
  <bpelx:inputProperty name="jca.file.TargetDirectory"
                       variable="dstDir"/>
  <bpelx:inputProperty name="jca.file.TargetFileName"
                       variable="dstFile"/>
</invoke>

These properties will override the default values in the .jca file and can be used to dynamically select at runtime the locations to be used for the move, copy, or delete operation. The properties may be edited in the source tab of the BPEL document, the .bpel file, or they may be created and modified through the Properties tab of the Invoke dialog in the BPEL visual editor.

Configuring file locations through additional header properties

With these modifications, the move, copy, or delete operations will appear as additional operations on the service that can be invoked from the service bus or within BPEL.

Adapter headers

In addition to the data associated with the service being provided by the adapter, sometimes referred to as the payload of the service, it is also possible to configure or obtain information about the operation of an adapter through header messages. Header messages are passed as properties to the adapter. We demonstrated this in the previous section when setting the header properties for the Move operation of the file adapter.

The properties passed are of the following two formats:

<property name="AdapterSpecificPropertyName" value="data"/>
<property name="AdapterSpecificPropertyName" variable="varName"/>

The former style allows the passing of literal values through the header. The latter allows the property to be set from a string variable.

Testing the file adapters

We can test the adapters by using them within a BPEL process or a Mediator like the one shown in the following screenshot. Building a BPEL process is covered in Chapter 5, Building Composite Services and Business Processes. This uses the two services we that have just described and links them with a copy operation that transforms data from one format to the other.

Testing the file adapters

Record type names

The wizard will identify the types of records based on the first field in each record, as shown in the preceding screenshot. It is possible to ignore record types by selecting them and clicking Delete. If this is done by mistake, then it is possible to add them back by using the Add button. Only fields that exist in the sample data can be added in the wizard.

Record type names

Note that if we want to reset the record types screen, then the Scan button will rescan the sample file and look for all the different record types it contains.

The Record Name field can be set by double-clicking it and providing a suitable record name. This record name is the XML element name that encapsulates the record content.

Field properties

Now that we have identified record and field boundaries, we can drill down into the records and define the data types and names of individual fields. This is done for each record type in turn. We can select which records to define by selecting them from the Record Name drop-down box or by pressing the Next Record Type button.

It is important to be as liberal as possible when defining field data types because any mismatches will cause errors that will need to be handled. Being liberal in our record definitions will allow us to validate the messages, as described in Chapter 13, Building Validation into Services, without raising system errors.

Field properties

The Name column represents the element name of this field. The wizard will attempt to guess the type of the field, but it is important to always check this because the sample data you are using may not include all possibilities. A common error is identifying numbers to be tagged as integers, when they should really be strings—accept integer types only when they are likely to have arithmetic operations performed on them.

Verifying the result

We have now completed our mapping and can verify what has been done by looking at the generated XML Schema file. Note that the generated schema uses some Oracle extensions to enable a non-XML formatted file to be represented as XML. In particular the nxsd namespace prefix is used to identify field separators and record terminators.

Note

The XML Schema generated can be edited manually. This is useful to support nested records (records inside other records) like those that may be found in a file containing order records with nested detail records (an order record contains multiple line item detail records). In this case, it is useful to use the wizard to generate a schema with order records and detail records at the same level. The schema can then be modified by hand to make the detail records children of the order records.

Verifying the result

Clicking the Test button brings up a simple test screen that allows us to apply our newly generated schema to the input document and examine the resulting XML.

Verifying the result

Clicking Next and then Finish will cause the generated schema file to be saved.

Finishing the wizards

Until now, no work has been saved, except for the XML Schema mapping the file content onto an XML structure. The rest of the adapter settings are not saved, and the endpoint is not set up until the Finish button is clicked on the completion screen, as shown in the following screenshot. Note that the file generated is a Web Service Description Language (WSDL) file with a JCA binding.

Finishing the wizards
Throttling the file and FTP adapter

The file and FTP adapters can consume a lot of resources when processing large files (thousands of records) because they keep sending messages with batches of records until the file is processed, while not waiting for the records to be processed. This behavior can be altered by forcing them to wait until a message is processed before sending another message. This is done by making the following changes to the WSDL generated by the wizard. This changes the one-way read operation into a two-way read operation that will not complete until a reply is generated by our code in BPEL or the Service Bus.

Creating a dummy message type

Add a new message definition to the WSDL like the one in the following code snippet:

<message name="Dummy_msg">
    <part xmlns:xsd="http://www.w3.org/2001/XMLSchema"
          name="Dummy" type="xsd:string"/>
</message>
Adding an output message to the read operation

In the <portType> , add an <output> element to the read <operation> element.

    <portType name="Read_ptt">
      <operation name="Read">
        <input message="tns:PayrollList_msg"/>
        <output message="tns:Dummy_msg"/>
      </operation>
    </portType>

In the <jca:operation> element, add an empty <output/> element.

Using the modified interface

The adapter will now have a two way interface and will need to receive a reply to a message before it sends the next batch of records, thus throttling the throughput. Note that no data needs to be sent in the reply message. This will limit the number of active operations to the number of threads assigned to the file adapter.

Writing a payroll file

We can now use the FTP adapter to write the payroll file to a remote filesystem. This requires us to create another adapter within our BPEL process, Mediator, or Service Bus. Setting up the FTP adapter to write to a remote filesystem is very similar to reading files with the file adapter.

Selecting the FTP connection

The first difference is that when using the FTP adapter instead of the File adapter, we have to specify an FTP connection to use in the underlying application server. This connection is set up in the application server running the adapter. For example, when using the WebLogic Application Server, the WebLogic Console can be used. The JNDI location of the connection factory is the location that must be provided to the wizard. The JNDI location must be configured in the application server using the administrative tools provided by the application server. Refer to your application server documentation for details on how to do this, as it varies between applications servers.

Selecting the FTP connection
Choosing the operation

When we choose the type of operation, we again notice that the screen is different from the file adapter, having an additional File Type category. This relates to the Ascii and Binary settings of an FTP session. ASCII causes the FTP transfer to adapt to changes in character encoding between the two systems, for example, converting between EBCDIC and ASCII or altering line feeds between systems. When using text files, it is generally a good idea to select the ASCII format. When sending binary files, it is vital that the binary file type is used to avoid any unfortunate and unwanted transformations.

Choosing the operation
Selecting the file destination

Choosing where the file is created is the same for both the FTP and the file adapter. Again, there is a choice of physical or logical paths. The file naming convention allows us some control over the name of the output file. In addition to the %SEQ% symbol that inserts a unique sequence number, it is also possible to insert a date or date time string into the filename. Note that in the current release, you cannot have both a date time string and a sequence string in the file naming convention.

Note

When using a date time string as part of the filename, files with the same date time string will overwrite each other. If this is the case, then consider using a sequence number instead.

When producing an output file, we can either keep appending to a single file by selecting the Append to existing file checkbox, which will keep growing without limit, or we can create new files, which will be dependent on attributes of the data being written. This is the normal way of working for non-XML files, and a new output file will be generated when one or more records are written to the adapter.

Selecting the file destination

The criteria for deciding to write to a new file are as follows:

  • Number of Messages Equals: This criterion forces the file to be written when the given number of messages is reached. This can be thought of as batching the output so that we reduce the number of files created.
  • Elapsed Time Exceeds: This criterion puts a time limit on how long the adapter will keep the file open. This places an upper time limit on creating an output file.
  • File Size Exceeds: This criterion allows us to limit the file sizes. As soon as a message causes the file to exceed the given size, no further message will be appended to this file.

These criteria can all be applied together, and as soon as one of them is satisfied, a new file will be created.

Tip

Writing XML files

When writing XML files, care should be taken to have only a single message per file, or else there will be multiple XML root elements in the document that will make it an invalid XML document.

Completing the FTP file writer service

The next step in the wizard is to define the actual record formats. This is exactly the same as when creating an input file. If we don't have an existing XML Schema for the output file, then we can use the wizard to create one if we have a sample file to use.

Finally, remember to run through the wizard to the end, and click Finish rather than Cancel or our entire configuration will be lost.

Moving, copying, and deleting files

Sometimes we will just want an adapter to move, copy, or delete a file without reading it. We will use the ability of the file adapter to move a file (refer to Chapter 16, Message Interaction Patterns, to set up a scheduler service within the SOA Suite).

The following steps will configure an outbound file or FTP adapter to move, copy, or delete a file without reading it.

Generating an adapter

Use the file or FTP adapter wizard to generate an outbound adapter with a file synchronous read or FTP synchronous get operation. You may also use a write or put operation. The data location should use physical directories, and the content should be marked as opaque, so that there is no need to understand the content of the file. Once this has been done, we will modify the WSDL generated to add additional operations.

Modifying the port type

First, we edit the WSDL file itself, which we call <AdapterServiceName>.wsdl. Modify the port type of the adapter to include the additional operations required, as shown in the following code snippet. Use the same message type as the operation generated by the wizard.

    <portType name="Write_ptt">
      <operation name="Write">
        <input message="tns:Write_msg"/>
      </operation>
      <operation name="Move">
        <input message="tns:Write_msg"/>
      </operation>
    </portType>

Note that the following operation names are supported:

  • Move
  • Copy
  • Delete

Modifying the binding

We now edit the bindings, which are contained in a separate file from the WSDL called <AdapterServiceName>_<file/ftp>.jca. Bindings describe how the service description maps onto the physical service implementation. For now, we will just modify the binding to add the additional operations needed and map them to the appropriate implementation, as shown in the following code snippet:

<endpoint-interaction portType="Write_ptt" operation="Write">
  <interaction-spec className="oracle.tip.adapter.file.outbound.FileInteractionSpec">
    <property name="PhysicalDirectory" value="/user/oracle"/>
    <property name="FileNamingConvention" value="fred.txt"/>
    <property name="Append" value="false"/> 
  </interaction-spec>
</endpoint-interaction>
<endpoint-interaction portType="Write_ptt" operation="Move">
  <interaction-spec className="oracle.tip.adapter.file.outbound.FileIoInteractionSpec">
    <property name="Type" value="MOVE"/>
    <property name="SourcePhysicalDirectory" value="/usr/oracle"/>
    <property name="SourceFileName" value="fred.txt"/>
    <property name="TargetPhysicalDirectory" value="/usr/payroll"/>
    <property name="TargetFileName" value="Payroll.txt"/>
  </interaction-spec>
</endpoint-interaction>

Note that the following types are supported for use with the equivalent operation names. Observe that operation names are mixed case and types are uppercase:

  • MOVE
  • COPY
  • DELETE

The interaction-spec is used to define the types of operations supported by this particular binding. It references a Java class that provides the functionality and may have properties associated with it to configure its behavior. When using the FTP adapter for move, copy, and delete operations, the InteractionSpec property in the <AdapterServiceName>_ftp.jca file must be changed to oracle.tip.adapter.ftp.outbound.FTPIoInteractionSpec.
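For example, when the FTP adapter is used, the Move endpoint-interaction shown earlier might look like the following sketch; apart from the className, it assumes the same properties as the file adapter version:

<endpoint-interaction portType="Write_ptt" operation="Move">
  <!-- Same properties as the file adapter binding; only the class changes -->
  <interaction-spec className="oracle.tip.adapter.ftp.outbound.FTPIoInteractionSpec">
    <property name="Type" value="MOVE"/>
    <property name="SourcePhysicalDirectory" value="/usr/oracle"/>
    <property name="SourceFileName" value="fred.txt"/>
    <property name="TargetPhysicalDirectory" value="/usr/payroll"/>
    <property name="TargetFileName" value="Payroll.txt"/>
  </interaction-spec>
</endpoint-interaction>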

Configuring file locations through additional header properties

In order to allow runtime configuration of the source and destination locations, it is necessary to pass the source and destination information as properties of the call to the adapter.

For example, when using a BPEL invoke activity, we would pass the properties, as shown in the following code snippet:

<invoke name="Invoke_Move" inputVariable="Invoke_Move_InputVariable"
        partnerLink="MoveFileService" portType="ns1:Write_ptt"
        operation="Move">
  <bpelx:inputProperty name="jca.file.SourceDirectory"
                       variable="srcDir"/>
  <bpelx:inputProperty name="jca.file.SourceFileName"
                       variable="srcFile"/>
  <bpelx:inputProperty name="jca.file.TargetDirectory"
                       variable="dstDir"/>
  <bpelx:inputProperty name="jca.file.TargetFileName"
                       variable="dstFile"/>
</invoke>

These properties will override the default values in the .jca file and can be used to dynamically select at runtime the locations to be used for the move, copy, or delete operation. The properties may be edited in the source tab of the BPEL document, the .bpel file, or they may be created and modified through the Properties tab of the Invoke dialog in the BPEL visual editor.

Configuring file locations through additional header properties

With these modifications, the move, copy, or delete operations will appear as additional operations on the service that can be invoked from the service bus or within BPEL.

Adapter headers

In addition to the data associated with the service being provided by the adapter, sometimes referred to as the payload of the service, it is also possible to configure or obtain information about the operation of an adapter through header messages. Header messages are passed as properties to the adapter. We demonstrated this in the previous section when setting the header properties for the Move operation of the file adapter.

The properties passed are of the following two formats:

<property name="AdapterSpecificPropertyName" value="data"/>
<property name="AdapterSpecificPropertyName" variable="varName"/>

The former style allows the passing of literal values through the header. The latter allows the property to be set from a string variable.
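For instance, assuming that the bpelx:inputProperty element accepts a literal value form as well as a variable form, as the generic property formats above suggest, a single invoke can mix both styles; the names below simply reuse the earlier Move example:

<invoke name="Invoke_Move" inputVariable="Invoke_Move_InputVariable"
        partnerLink="MoveFileService" portType="ns1:Write_ptt"
        operation="Move">
  <!-- Literal value passed through the header -->
  <bpelx:inputProperty name="jca.file.TargetFileName" value="Payroll.txt"/>
  <!-- Value taken from a BPEL string variable at runtime -->
  <bpelx:inputProperty name="jca.file.SourceDirectory" variable="srcDir"/>
</invoke>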

Testing the file adapters

We can test the adapters by using them within a BPEL process or a Mediator like the one shown in the following screenshot. Building a BPEL process is covered in Chapter 5, Building Composite Services and Business Processes. This uses the two services that we have just described and links them with a copy operation that transforms data from one format to the other.

Testing the file adapters

Field properties

Now that we have identified record and field boundaries, we can drill down into the records and define the data types and names of individual fields. This is done for each record type in turn. We can select which records to define by selecting them from the Record Name drop-down box or by pressing the Next Record Type button.

It is important to be as liberal as possible when defining field data types because any mismatches will cause errors that will need to be handled. Being liberal in our record definitions will allow us to validate the messages, as described in Chapter 13, Building Validation into Services, without raising system errors.

Field properties

The Name column represents the element name of this field. The wizard will attempt to guess the type of each field, but it is important to always check this because the sample data being used may not include all possibilities. A common error is for numbers to be tagged as integers when they should really be strings; accept integer types only for fields that are likely to have arithmetic operations performed on them.

Verifying the result

We have now completed our mapping and can verify what has been done by looking at the generated XML Schema file. Note that the generated schema uses some Oracle extensions to enable a non-XML formatted file to be represented as XML. In particular, the nxsd namespace prefix is used to identify field separators and record terminators.

Note

The XML Schema generated can be edited manually. This is useful to support nested records (records inside other records) like those that may be found in a file containing order records with nested detail records (an order record contains multiple line item detail records). In this case, it is useful to use the wizard to generate a schema with order records and detail records at the same level. The schema can then be modified by hand to make the detail records children of the order records.
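As a minimal sketch of what such a schema can look like (the element names, namespace, and field layout here are illustrative assumptions, not what the wizard will generate for your particular file), a comma-separated payroll record might be described as follows:

<?xml version="1.0" encoding="UTF-8"?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            xmlns:nxsd="http://xmlns.oracle.com/pcbpel/nxsd"
            targetNamespace="http://example.com/payroll"
            elementFormDefault="qualified"
            nxsd:version="NXSD" nxsd:stream="chars" nxsd:encoding="US-ASCII">
  <xsd:element name="PayrollList">
    <xsd:complexType>
      <xsd:sequence>
        <xsd:element name="PayrollRecord" maxOccurs="unbounded">
          <xsd:complexType>
            <xsd:sequence>
              <!-- Kept as a string: no arithmetic is performed on employee numbers -->
              <xsd:element name="EmployeeNumber" type="xsd:string"
                           nxsd:style="terminated" nxsd:terminatedBy=","/>
              <!-- Last field of the record, terminated by the end of the line -->
              <xsd:element name="Salary" type="xsd:decimal"
                           nxsd:style="terminated" nxsd:terminatedBy="${eol}"/>
            </xsd:sequence>
          </xsd:complexType>
        </xsd:element>
      </xsd:sequence>
    </xsd:complexType>
  </xsd:element>
</xsd:schema>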

Verifying the result

Clicking the Test button brings up a simple test screen that allows us to apply our newly generated schema to the input document and examine the resulting XML.

Verifying the result

Clicking Next and then Finish will cause the generated schema file to be saved.

Finishing the wizards

Until now, no work has been saved, except for the XML Schema mapping the file content onto an XML structure. The rest of the adapter settings are not saved, and the endpoint is not set up until the Finish button is clicked on the completion screen, as shown in the following screenshot. Note that the file generated is a Web Service Description Language (WSDL) file with a JCA binding.

Finishing the wizards
Throttling the file and FTP adapter

The file and FTP adapters can consume a lot of resources when processing large files (thousands of records) because they keep sending messages containing batches of records until the whole file has been read, without waiting for those records to be processed. This behavior can be altered by forcing them to wait until a message has been processed before sending the next one. This is done by making the following changes to the WSDL generated by the wizard, which turn the one-way read operation into a two-way read operation that will not complete until a reply is generated by our code in BPEL or the Service Bus.

Creating a dummy message type

Add a new message definition to the WSDL like the one in the following code snippet:

<message name="Dummy_msg">
    <part xmlns:xsd="http://www.w3.org/2001/XMLSchema"
          name="Dummy" type="xsd:string"/>
</message>

Adding an output message to the read operation

In the <portType>, add an <output> element to the read <operation> element.

    <portType name="Read_ptt">
      <operation name="Read">
        <input message="tns:PayrollList_msg"/>
        <output message="tns:Dummy_msg"/>
      </operation>
    </portType>

In the <jca:operation> element, add an empty <output/> element.

Using the modified interface

The adapter will now have a two-way interface and will need to receive a reply to a message before it sends the next batch of records, thus throttling the throughput. Note that no data needs to be sent in the reply message. This will limit the number of active operations to the number of threads assigned to the file adapter.
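A minimal BPEL sketch of this pattern is shown below; the partner link, variable, and activity names are illustrative assumptions, not names generated by the wizard:

<!-- Receive a batch of records from the (now two-way) Read operation -->
<receive name="ReceivePayrollBatch" partnerLink="ReadPayrollFile"
         portType="ns1:Read_ptt" operation="Read"
         variable="payrollBatch" createInstance="yes"/>

<!-- ... process the batch of records here ... -->

<!-- The reply releases the adapter to send the next batch of records -->
<reply name="AcknowledgeBatch" partnerLink="ReadPayrollFile"
       portType="ns1:Read_ptt" operation="Read"
       variable="dummyReply"/>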

Writing a payroll file

We can now use the FTP adapter to write the payroll file to a remote filesystem. This requires us to create another adapter within our BPEL process, Mediator, or Service Bus. Setting up the FTP adapter to write to a remote filesystem is very similar to reading files with the file adapter.

Selecting the FTP connection

The first difference is that when using the FTP adapter instead of the file adapter, we have to specify an FTP connection to use in the underlying application server. This connection is set up in the application server running the adapter. For example, when using the WebLogic Application Server, the WebLogic Console can be used. The JNDI location of the connection factory is the location that must be provided to the wizard. The JNDI location must be configured in the application server using the administrative tools provided by the application server. Refer to your application server documentation for details on how to do this, as it varies between application servers.

Selecting the FTP connection
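To make the connection factory idea concrete, the following is a rough sketch of the kind of connection-instance entry that defines an FTP connection on WebLogic, whether created through the console or added to the FtpAdapter deployment's weblogic-ra.xml. The JNDI name, host, and credentials are illustrative, and the exact descriptor structure and property names should be checked against the adapter and server documentation:

<connection-instance>
  <!-- This JNDI name is what the adapter wizard asks for -->
  <jndi-name>eis/Ftp/PayrollFtpConnection</jndi-name>
  <connection-properties>
    <properties>
      <property>
        <name>host</name>
        <value>ftp.example.com</value>
      </property>
      <property>
        <name>username</name>
        <value>payroll</value>
      </property>
      <property>
        <name>password</name>
        <value>welcome1</value>
      </property>
    </properties>
  </connection-properties>
</connection-instance>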
Choosing the operation

When we choose the type of operation, we again notice that the screen is different from the file adapter, having an additional File Type category. This relates to the Ascii and Binary settings of an FTP session. ASCII causes the FTP transfer to adapt to changes in character encoding between the two systems, for example, converting between EBCDIC and ASCII or altering line feeds between systems. When using text files, it is generally a good idea to select the ASCII format. When sending binary files, it is vital that the binary file type is used to avoid any unfortunate and unwanted transformations.

Choosing the operation
Selecting the file destination

Choosing where the file is created is the same for both the FTP and the file adapter. Again, there is a choice of physical or logical paths. The file naming convention allows us some control over the name of the output file. In addition to the %SEQ% symbol that inserts a unique sequence number, it is also possible to insert a date or date time string into the filename. Note that in the current release, you cannot have both a date time string and a sequence string in the file naming convention.

Note

When using a date time string as part of the filename, files with the same date time string will overwrite each other. If this is the case, then consider using a sequence number instead.

When producing an output file, we can either keep appending to a single file by selecting the Append to existing file checkbox, which will keep growing without limit, or we can create new files, which will be dependent on attributes of the data being written. This is the normal way of working for non-XML files, and a new output file will be generated when one or more records are written to the adapter.

Selecting the file destination

The criteria for deciding to write to a new file are as follows:

  • Number of Messages Equals: This criterion forces the file to be written when the given number of messages is reached. This can be thought of as batching the output so that we reduce the number of files created.
  • Elapsed Time Exceeds: This criterion puts a time limit on how long the adapter will keep the file open. This places an upper time limit on creating an output file.
  • File Size Exceeds: This criterion allows us to limit the file sizes. As soon as a message causes the file to exceed the given size, no further message will be appended to this file.

These criteria can all be applied together, and as soon as one of them is satisfied, a new file will be created.

Tip

Writing XML files

When writing XML files, care should be taken to have only a single message per file, or else there will be multiple XML root elements in the document that will make it an invalid XML document.

Completing the FTP file writer service

The next step in the wizard is to define the actual record formats. This is exactly the same as when creating an input file. If we don't have an existing XML Schema for the output file, then we can use the wizard to create one if we have a sample file to use.

Finally, remember to run through the wizard to the end, and click Finish rather than Cancel or our entire configuration will be lost.

Moving, copying, and deleting files

Sometimes we will just want an adapter to move, copy, or delete a file without reading it. We will use the ability of the file adapter to move a file (refer to Chapter 16, Message Interaction Patterns, to set up a scheduler service within the SOA Suite).

The following steps will configure an outbound file or FTP adapter to move, copy, or delete a file without reading it.

Generating an adapter

Use the file or FTP adapter wizard to generate an outbound adapter with a file synchronous read or FTP synchronous get operation. You may also use a write or put operation. The data location should use physical directories, and the content should be marked as opaque, so that there is no need to understand the content of the file. Once this has been done, we will modify the WSDL generated to add additional operations.

Modifying the port type

First, we edit the WSDL file itself, which we call <AdapterServiceName>.wsdl. Modify the port type of the adapter to include the additional operations required, as shown in the following code snippet. Use the same message type as the operation generated by the wizard.

    <portType name="Write_ptt">
      <operation name="Write">
        <input message="tns:Write_msg"/>
        </operation>
      <operation name="Move">
          <input message="tns:Write_msg"/>
        </operation>
    </portType>

Note that the following operation names are supported:

  • Move
  • Copy
  • Delete
Modifying the binding

We now edit the bindings which are contained in a separate file to the WSDL called <AdapterServiceName>_<file/ftp>.jca. Bindings describe how the service description maps onto the physical service implementation. For now, we will just modify the binding to add the additional operations needed and map them to the appropriate implementation, as shown in the following code snippet:

<endpoint-interaction portType="Write_ptt" operation="Write">
  <interaction-spec className="oracle.tip.adapter.file.outbound.FileInteractionSpec">
    <property name="PhysicalDirectory" value="/user/oracle"/>
    <property name="FileNamingConvention" value="fred.txt"/>
    <property name="Append" value="false"/> 
  </interaction-spec>
</endpoint-interaction>
<endpoint-interaction portType="Write_ptt" operation="Move">
  <interaction-spec className="oracle.tip.adapter.file.outbound.FileIoInteractionSpec">
    <property name="Type" value="MOVE"/>
    <property name="SourcePhysicalDirectory" value="/usr/oracle"/>
    <property name="SourceFileName" value="fred.txt"/>
    <property name="TargetPhysicalDirectory" value="/usr/payroll"/>
    <property name="TargetFileName" value="Payroll.txt"/>
  </interaction-spec>
</endpoint-interaction>

Note that the following types are supported for use with the equivalent operation names. Observe that operation names are mixed case and types are uppercase:

  • MOVE
  • COPY
  • DELETE

The interaction-spec is used to define the types of operations supported by this particular binding. It references a Java class that provides the functionality and may have properties associated with it to configure its behavior. When using the FTP adapter for move, copy, and delete operations, the InteractionSpec property in the <AdapterServiceName>_ftp.jca file must be changed to oracle.tip.adapter.ftp.outbound.FTPIoInteractionSpec.

Configuring file locations through additional header properties

In order to allow runtime configuration of the source and destination locations, it is necessary to pass the source and destination information as properties of the call to the adapter.

For example, when using a BPEL invoke activity, we would pass the properties, as shown in the following code snippet:

<invoke name="Invoke_Move" inputVariable="Invoke_Move_InputVariable"
        partnerLink="MoveFileService" portType="ns1:Write_ptt"
        operation="Move">
  <bpelx:inputProperty name="jca.file.SourceDirectory"
                       variable="srcDir"/>
  <bpelx:inputProperty name="jca.file.SourceFileName"
                       variable="srcFile"/>
  <bpelx:inputProperty name="jca.file.TargetDirectory"
                       variable="dstDir"/>
  <bpelx:inputProperty name="jca.file.TargetFileName"
                       variable="dstFile"/>
</invoke>

These properties will override the default values in the .jca file and can be used to dynamically select at runtime the locations to be used for the move, copy, or delete operation. The properties may be edited in the source tab of the BPEL document, the .bpel file, or they may be created and modified through the Properties tab of the Invoke dialog in the BPEL visual editor.

Configuring file locations through additional header properties

With these modifications, the move, copy, or delete operations will appear as additional operations on the service that can be invoked from the service bus or within BPEL.

Adapter headers

In addition to the data associated with the service being provided by the adapter, sometimes referred to as the payload of the service, it is also possible to configure or obtain information about the operation of an adapter through header messages. Header messages are passed as properties to the adapter. We demonstrated this in the previous section when setting the header properties for the Move operation of the file adapter.

The properties passed are of the following two formats:

<property name="AdapterSpecificPropertyName" value="data"/>
<property name="AdapterSpecificPropertyName" variable="varName"/>

The former style allows the passing of literal values through the header. The latter allows the property to be set from a string variable.

Testing the file adapters

We can test the adapters by using them within a BPEL process or a Mediator like the one shown in the following screenshot. Building a BPEL process is covered in Chapter 5, Building Composite Services and Business Processes. This uses the two services we that have just described and links them with a copy operation that transforms data from one format to the other.

Testing the file adapters

Verifying the result

We have now completed our mapping and can verify what has been done by looking at the generated XML Schema file. Note that the generated schema uses some Oracle extensions to enable a non-XML formatted file to be represented as XML. In particular the nxsd namespace prefix is used to identify field separators and record terminators.

Note

The XML Schema generated can be edited manually. This is useful to support nested records (records inside other records) like those that may be found in a file containing order records with nested detail records (an order record contains multiple line item detail records). In this case, it is useful to use the wizard to generate a schema with order records and detail records at the same level. The schema can then be modified by hand to make the detail records children of the order records.

Verifying the result

Clicking the Test button brings up a simple test screen that allows us to apply our newly generated schema to the input document and examine the resulting XML.

Verifying the result

Clicking Next and then Finish will cause the generated schema file to be saved.

Finishing the wizards

Until now, no work has been saved, except for the XML Schema mapping the file content onto an XML structure. The rest of the adapter settings are not saved, and the endpoint is not set up until the Finish button is clicked on the completion screen, as shown in the following screenshot. Note that the file generated is a Web Service Description Language (WSDL) file with a JCA binding.

Finishing the wizards
Throttling the file and FTP adapter

The file and FTP adapters can consume a lot of resources when processing large files (thousands of records) because they keep sending messages with batches of records until the file is processed, while not waiting for the records to be processed. This behavior can be altered by forcing them to wait until a message is processed before sending another message. This is done by making the following changes to the WSDL generated by the wizard. This changes the one-way read operation into a two-way read operation that will not complete until a reply is generated by our code in BPEL or the Service Bus.

Creating a dummy message type

Add a new message definition to the WSDL like the one in the following code snippet:

<message name="Dummy_msg">
    <part xmlns:xsd="http://www.w3.org/2001/XMLSchema"
          name="Dummy" type="xsd:string"/>
</message>
Adding an output message to the read operation

In the <portType> , add an <output> element to the read <operation> element.

    <portType name="Read_ptt">
      <operation name="Read">
        <input message="tns:PayrollList_msg"/>
        <output message="tns:Dummy_msg"/>
      </operation>
    </portType>

In the <jca:operation> element, add an empty <output/> element.

Using the modified interface

The adapter will now have a two way interface and will need to receive a reply to a message before it sends the next batch of records, thus throttling the throughput. Note that no data needs to be sent in the reply message. This will limit the number of active operations to the number of threads assigned to the file adapter.

Writing a payroll file

We can now use the FTP adapter to write the payroll file to a remote filesystem. This requires us to create another adapter within our BPEL process, Mediator, or Service Bus. Setting up the FTP adapter to write to a remote filesystem is very similar to reading files with the file adapter.

Selecting the FTP connection

The first difference is that when using the FTP adapter instead of the File adapter, we have to specify an FTP connection to use in the underlying application server. This connection is set up in the application server running the adapter. For example, when using the WebLogic Application Server, the WebLogic Console can be used. The JNDI location of the connection factory is the location that must be provided to the wizard. The JNDI location must be configured in the application server using the administrative tools provided by the application server. Refer to your application server documentation for details on how to do this, as it varies between applications servers.

Selecting the FTP connection
Choosing the operation

When we choose the type of operation, we again notice that the screen is different from the file adapter, having an additional File Type category. This relates to the Ascii and Binary settings of an FTP session. ASCII causes the FTP transfer to adapt to changes in character encoding between the two systems, for example, converting between EBCDIC and ASCII or altering line feeds between systems. When using text files, it is generally a good idea to select the ASCII format. When sending binary files, it is vital that the binary file type is used to avoid any unfortunate and unwanted transformations.

Choosing the operation
Selecting the file destination

Choosing where the file is created is the same for both the FTP and the file adapter. Again, there is a choice of physical or logical paths. The file naming convention allows us some control over the name of the output file. In addition to the %SEQ% symbol that inserts a unique sequence number, it is also possible to insert a date or date time string into the filename. Note that in the current release, you cannot have both a date time string and a sequence string in the file naming convention.

Note

When using a date time string as part of the filename, files with the same date time string will overwrite each other. If this is the case, then consider using a sequence number instead.

When producing an output file, we can either keep appending to a single file by selecting the Append to existing file checkbox, which will keep growing without limit, or we can create new files, which will be dependent on attributes of the data being written. This is the normal way of working for non-XML files, and a new output file will be generated when one or more records are written to the adapter.

Selecting the file destination

The criteria for deciding to write to a new file are as follows:

  • Number of Messages Equals: This criterion forces the file to be written when the given number of messages is reached. This can be thought of as batching the output so that we reduce the number of files created.
  • Elapsed Time Exceeds: This criterion puts a time limit on how long the adapter will keep the file open. This places an upper time limit on creating an output file.
  • File Size Exceeds: This criterion allows us to limit the file sizes. As soon as a message causes the file to exceed the given size, no further message will be appended to this file.

These criteria can all be applied together, and as soon as one of them is satisfied, a new file will be created.

Tip

Writing XML files

When writing XML files, care should be taken to have only a single message per file, or else there will be multiple XML root elements in the document that will make it an invalid XML document.

Completing the FTP file writer service

The next step in the wizard is to define the actual record formats. This is exactly the same as when creating an input file. If we don't have an existing XML Schema for the output file, then we can use the wizard to create one if we have a sample file to use.

Finally, remember to run through the wizard to the end, and click Finish rather than Cancel or our entire configuration will be lost.

Moving, copying, and deleting files

Sometimes we will just want an adapter to move, copy, or delete a file without reading it. We will use the ability of the file adapter to move a file (refer to Chapter 16, Message Interaction Patterns, to set up a scheduler service within the SOA Suite).

The following steps will configure an outbound file or FTP adapter to move, copy, or delete a file without reading it.

Generating an adapter

Use the file or FTP adapter wizard to generate an outbound adapter with a file synchronous read or FTP synchronous get operation. You may also use a write or put operation. The data location should use physical directories, and the content should be marked as opaque, so that there is no need to understand the content of the file. Once this has been done, we will modify the WSDL generated to add additional operations.

Modifying the port type

First, we edit the WSDL file itself, which we call <AdapterServiceName>.wsdl. Modify the port type of the adapter to include the additional operations required, as shown in the following code snippet. Use the same message type as the operation generated by the wizard.

    <portType name="Write_ptt">
      <operation name="Write">
        <input message="tns:Write_msg"/>
        </operation>
      <operation name="Move">
          <input message="tns:Write_msg"/>
        </operation>
    </portType>

Note that the following operation names are supported:

  • Move
  • Copy
  • Delete
Modifying the binding

We now edit the bindings which are contained in a separate file to the WSDL called <AdapterServiceName>_<file/ftp>.jca. Bindings describe how the service description maps onto the physical service implementation. For now, we will just modify the binding to add the additional operations needed and map them to the appropriate implementation, as shown in the following code snippet:

<endpoint-interaction portType="Write_ptt" operation="Write">
  <interaction-spec className="oracle.tip.adapter.file.outbound.FileInteractionSpec">
    <property name="PhysicalDirectory" value="/user/oracle"/>
    <property name="FileNamingConvention" value="fred.txt"/>
    <property name="Append" value="false"/> 
  </interaction-spec>
</endpoint-interaction>
<endpoint-interaction portType="Write_ptt" operation="Move">
  <interaction-spec className="oracle.tip.adapter.file.outbound.FileIoInteractionSpec">
    <property name="Type" value="MOVE"/>
    <property name="SourcePhysicalDirectory" value="/usr/oracle"/>
    <property name="SourceFileName" value="fred.txt"/>
    <property name="TargetPhysicalDirectory" value="/usr/payroll"/>
    <property name="TargetFileName" value="Payroll.txt"/>
  </interaction-spec>
</endpoint-interaction>

Note that the following types are supported for use with the equivalent operation names. Observe that operation names are mixed case and types are uppercase:

  • MOVE
  • COPY
  • DELETE

The interaction-spec is used to define the types of operations supported by this particular binding. It references a Java class that provides the functionality and may have properties associated with it to configure its behavior. When using the FTP adapter for move, copy, and delete operations, the InteractionSpec property in the <AdapterServiceName>_ftp.jca file must be changed to oracle.tip.adapter.ftp.outbound.FTPIoInteractionSpec.

Configuring file locations through additional header properties

In order to allow runtime configuration of the source and destination locations, it is necessary to pass the source and destination information as properties of the call to the adapter.

For example, when using a BPEL invoke activity, we would pass the properties, as shown in the following code snippet:

<invoke name="Invoke_Move" inputVariable="Invoke_Move_InputVariable"
        partnerLink="MoveFileService" portType="ns1:Write_ptt"
        operation="Move">
  <bpelx:inputProperty name="jca.file.SourceDirectory"
                       variable="srcDir"/>
  <bpelx:inputProperty name="jca.file.SourceFileName"
                       variable="srcFile"/>
  <bpelx:inputProperty name="jca.file.TargetDirectory"
                       variable="dstDir"/>
  <bpelx:inputProperty name="jca.file.TargetFileName"
                       variable="dstFile"/>
</invoke>

These properties will override the default values in the .jca file and can be used to dynamically select at runtime the locations to be used for the move, copy, or delete operation. The properties may be edited in the source tab of the BPEL document, the .bpel file, or they may be created and modified through the Properties tab of the Invoke dialog in the BPEL visual editor.

Configuring file locations through additional header properties

With these modifications, the move, copy, or delete operations will appear as additional operations on the service that can be invoked from the service bus or within BPEL.

Adapter headers

In addition to the data associated with the service being provided by the adapter, sometimes referred to as the payload of the service, it is also possible to configure or obtain information about the operation of an adapter through header messages. Header messages are passed as properties to the adapter. We demonstrated this in the previous section when setting the header properties for the Move operation of the file adapter.

The properties passed are of the following two formats:

<property name="AdapterSpecificPropertyName" value="data"/>
<property name="AdapterSpecificPropertyName" variable="varName"/>

The former style allows the passing of literal values through the header. The latter allows the property to be set from a string variable.

Testing the file adapters

We can test the adapters by using them within a BPEL process or a Mediator like the one shown in the following screenshot. Building a BPEL process is covered in Chapter 5, Building Composite Services and Business Processes. This uses the two services we that have just described and links them with a copy operation that transforms data from one format to the other.

Testing the file adapters

Finishing the wizards

Until now, no work has been saved, except for the XML Schema mapping the file content onto an XML structure. The rest of the adapter settings are not saved, and the endpoint is not set up until the Finish button is clicked on the completion screen, as shown in the following screenshot. Note that the file generated is a Web Service Description Language (WSDL) file with a JCA binding.

Finishing the wizards
Throttling the file and FTP adapter

The file and FTP adapters can consume a lot of resources when processing large files (thousands of records) because they keep sending messages with batches of records until the file is processed, while not waiting for the records to be processed. This behavior can be altered by forcing them to wait until a message is processed before sending another message. This is done by making the following changes to the WSDL generated by the wizard. This changes the one-way read operation into a two-way read operation that will not complete until a reply is generated by our code in BPEL or the Service Bus.

Creating a dummy message type

Add a new message definition to the WSDL like the one in the following code snippet:

<message name="Dummy_msg">
    <part xmlns:xsd="http://www.w3.org/2001/XMLSchema"
          name="Dummy" type="xsd:string"/>
</message>

Adding an output message to the read operation

In the <portType> , add an <output> element to the read <operation> element.

    <portType name="Read_ptt">
      <operation name="Read">
        <input message="tns:PayrollList_msg"/>
        <output message="tns:Dummy_msg"/>
      </operation>
    </portType>

In the <jca:operation> element, add an empty <output/> element.

Using the modified interface

The adapter will now have a two way interface and will need to receive a reply to a message before it sends the next batch of records, thus throttling the throughput. Note that no data needs to be sent in the reply message. This will limit the number of active operations to the number of threads assigned to the file adapter.

Writing a payroll file

We can now use the FTP adapter to write the payroll file to a remote filesystem. This requires us to create another adapter within our BPEL process, Mediator, or Service Bus. Setting up the FTP adapter to write to a remote filesystem is very similar to reading files with the file adapter.

Selecting the FTP connection

The first difference is that when using the FTP adapter instead of the File adapter, we have to specify an FTP connection to use in the underlying application server. This connection is set up in the application server running the adapter. For example, when using the WebLogic Application Server, the WebLogic Console can be used. The JNDI location of the connection factory is the location that must be provided to the wizard. The JNDI location must be configured in the application server using the administrative tools provided by the application server. Refer to your application server documentation for details on how to do this, as it varies between applications servers.

Selecting the FTP connection

Choosing the operation

When we choose the type of operation, we again notice that the screen is different from the file adapter, having an additional File Type category. This relates to the Ascii and Binary settings of an FTP session. ASCII causes the FTP transfer to adapt to changes in character encoding between the two systems, for example, converting between EBCDIC and ASCII or altering line feeds between systems. When using text files, it is generally a good idea to select the ASCII format. When sending binary files, it is vital that the binary file type is used to avoid any unfortunate and unwanted transformations.

Choosing the operation

Selecting the file destination

Choosing where the file is created is the same for both the FTP and the file adapter. Again, there is a choice of physical or logical paths. The file naming convention allows us some control over the name of the output file. In addition to the %SEQ% symbol that inserts a unique sequence number, it is also possible to insert a date or date time string into the filename. Note that in the current release, you cannot have both a date time string and a sequence string in the file naming convention.

Note

When using a date time string as part of the filename, files with the same date time string will overwrite each other. If this is the case, then consider using a sequence number instead.

When producing an output file, we can either keep appending to a single file by selecting the Append to existing file checkbox, which will keep growing without limit, or we can create new files, which will be dependent on attributes of the data being written. This is the normal way of working for non-XML files, and a new output file will be generated when one or more records are written to the adapter.

Selecting the file destination

The criteria for deciding to write to a new file are as follows:

  • Number of Messages Equals: This criterion forces the file to be written when the given number of messages is reached. This can be thought of as batching the output so that we reduce the number of files created.
  • Elapsed Time Exceeds: This criterion puts a time limit on how long the adapter will keep the file open. This places an upper time limit on creating an output file.
  • File Size Exceeds: This criterion allows us to limit the file sizes. As soon as a message causes the file to exceed the given size, no further message will be appended to this file.

These criteria can all be applied together, and as soon as one of them is satisfied, a new file will be created.

Tip

Writing XML files

When writing XML files, care should be taken to have only a single message per file, or else there will be multiple XML root elements in the document that will make it an invalid XML document.

Completing the FTP file writer service

The next step in the wizard is to define the actual record formats. This is exactly the same as when creating an input file. If we don't have an existing XML Schema for the output file, then we can use the wizard to create one if we have a sample file to use.

Finally, remember to run through the wizard to the end, and click Finish rather than Cancel or our entire configuration will be lost.

Moving, copying, and deleting files

Sometimes we will just want an adapter to move, copy, or delete a file without reading it. We will use the ability of the file adapter to move a file (refer to Chapter 16, Message Interaction Patterns, to set up a scheduler service within the SOA Suite).

The following steps will configure an outbound file or FTP adapter to move, copy, or delete a file without reading it.

Generating an adapter

Use the file or FTP adapter wizard to generate an outbound adapter with a file synchronous read or FTP synchronous get operation. You may also use a write or put operation. The data location should use physical directories, and the content should be marked as opaque, so that there is no need to understand the content of the file. Once this has been done, we will modify the WSDL generated to add additional operations.

Modifying the port type

First, we edit the WSDL file itself, which we call <AdapterServiceName>.wsdl. Modify the port type of the adapter to include the additional operations required, as shown in the following code snippet. Use the same message type as the operation generated by the wizard.

    <portType name="Write_ptt">
      <operation name="Write">
        <input message="tns:Write_msg"/>
        </operation>
      <operation name="Move">
          <input message="tns:Write_msg"/>
        </operation>
    </portType>

Note that the following operation names are supported:

  • Move
  • Copy
  • Delete

Modifying the binding

We now edit the bindings which are contained in a separate file to the WSDL called <AdapterServiceName>_<file/ftp>.jca. Bindings describe how the service description maps onto the physical service implementation. For now, we will just modify the binding to add the additional operations needed and map them to the appropriate implementation, as shown in the following code snippet:

<endpoint-interaction portType="Write_ptt" operation="Write">
  <interaction-spec className="oracle.tip.adapter.file.outbound.FileInteractionSpec">
    <property name="PhysicalDirectory" value="/user/oracle"/>
    <property name="FileNamingConvention" value="fred.txt"/>
    <property name="Append" value="false"/> 
  </interaction-spec>
</endpoint-interaction>
<endpoint-interaction portType="Write_ptt" operation="Move">
  <interaction-spec className="oracle.tip.adapter.file.outbound.FileIoInteractionSpec">
    <property name="Type" value="MOVE"/>
    <property name="SourcePhysicalDirectory" value="/usr/oracle"/>
    <property name="SourceFileName" value="fred.txt"/>
    <property name="TargetPhysicalDirectory" value="/usr/payroll"/>
    <property name="TargetFileName" value="Payroll.txt"/>
  </interaction-spec>
</endpoint-interaction>

Note that the following types are supported for use with the equivalent operation names. Observe that operation names are mixed case and types are uppercase:

  • MOVE
  • COPY
  • DELETE

The interaction-spec is used to define the types of operations supported by this particular binding. It references a Java class that provides the functionality and may have properties associated with it to configure its behavior. When using the FTP adapter for move, copy, and delete operations, the InteractionSpec property in the <AdapterServiceName>_ftp.jca file must be changed to oracle.tip.adapter.ftp.outbound.FTPIoInteractionSpec.

Configuring file locations through additional header properties

In order to allow runtime configuration of the source and destination locations, it is necessary to pass the source and destination information as properties of the call to the adapter.

For example, when using a BPEL invoke activity, we would pass the properties, as shown in the following code snippet:

<invoke name="Invoke_Move" inputVariable="Invoke_Move_InputVariable"
        partnerLink="MoveFileService" portType="ns1:Write_ptt"
        operation="Move">
  <bpelx:inputProperty name="jca.file.SourceDirectory"
                       variable="srcDir"/>
  <bpelx:inputProperty name="jca.file.SourceFileName"
                       variable="srcFile"/>
  <bpelx:inputProperty name="jca.file.TargetDirectory"
                       variable="dstDir"/>
  <bpelx:inputProperty name="jca.file.TargetFileName"
                       variable="dstFile"/>
</invoke>

These properties will override the default values in the .jca file and can be used to dynamically select at runtime the locations to be used for the move, copy, or delete operation. The properties may be edited in the source tab of the BPEL document, the .bpel file, or they may be created and modified through the Properties tab of the Invoke dialog in the BPEL visual editor.

Configuring file locations through additional header properties

With these modifications, the move, copy, or delete operations will appear as additional operations on the service that can be invoked from the service bus or within BPEL.

Adapter headers

In addition to the data associated with the service being provided by the adapter, sometimes referred to as the payload of the service, it is also possible to configure or obtain information about the operation of an adapter through header messages. Header messages are passed as properties to the adapter. We demonstrated this in the previous section when setting the header properties for the Move operation of the file adapter.

The properties passed are of the following two formats:

<property name="AdapterSpecificPropertyName" value="data"/>
<property name="AdapterSpecificPropertyName" variable="varName"/>

The former style allows the passing of literal values through the header. The latter allows the property to be set from a string variable.

Testing the file adapters

We can test the adapters by using them within a BPEL process or a Mediator like the one shown in the following screenshot. Building a BPEL process is covered in Chapter 5, Building Composite Services and Business Processes. This uses the two services we that have just described and links them with a copy operation that transforms data from one format to the other.

Testing the file adapters

Throttling the file and FTP adapter

The file and FTP adapters can consume a lot of resources when processing large files (thousands of records) because they keep sending messages with batches of records until the file is processed, while not waiting for the records to be processed. This behavior can be altered by forcing them to wait until a message is processed before sending another message. This is done by making the following changes to the WSDL generated by the wizard. This changes the one-way read operation into a two-way read operation that will not complete until a reply is generated by our code in BPEL or the Service Bus.

Creating a dummy message type

Add a new message definition to the WSDL like the one in the following code snippet:

<message name="Dummy_msg">
    <part xmlns:xsd="http://www.w3.org/2001/XMLSchema"
          name="Dummy" type="xsd:string"/>
</message>

Adding an output message to the read operation

In the <portType> , add an <output> element to the read <operation> element.

    <portType name="Read_ptt">
      <operation name="Read">
        <input message="tns:PayrollList_msg"/>
        <output message="tns:Dummy_msg"/>
      </operation>
    </portType>

In the <jca:operation> element, add an empty <output/> element.

Using the modified interface

The adapter will now have a two way interface and will need to receive a reply to a message before it sends the next batch of records, thus throttling the throughput. Note that no data needs to be sent in the reply message. This will limit the number of active operations to the number of threads assigned to the file adapter.

Writing a payroll file

We can now use the FTP adapter to write the payroll file to a remote filesystem. This requires us to create another adapter within our BPEL process, Mediator, or Service Bus. Setting up the FTP adapter to write to a remote filesystem is very similar to reading files with the file adapter.

Selecting the FTP connection

The first difference is that when using the FTP adapter instead of the File adapter, we have to specify an FTP connection to use in the underlying application server. This connection is set up in the application server running the adapter. For example, when using the WebLogic Application Server, the WebLogic Console can be used. The JNDI location of the connection factory is the location that must be provided to the wizard. The JNDI location must be configured in the application server using the administrative tools provided by the application server. Refer to your application server documentation for details on how to do this, as it varies between applications servers.

Selecting the FTP connection

Choosing the operation

When we choose the type of operation, we again notice that the screen is different from the file adapter, having an additional File Type category. This relates to the Ascii and Binary settings of an FTP session. ASCII causes the FTP transfer to adapt to changes in character encoding between the two systems, for example, converting between EBCDIC and ASCII or altering line feeds between systems. When using text files, it is generally a good idea to select the ASCII format. When sending binary files, it is vital that the binary file type is used to avoid any unfortunate and unwanted transformations.

Choosing the operation

Selecting the file destination

Choosing where the file is created is the same for both the FTP and the file adapter. Again, there is a choice of physical or logical paths. The file naming convention allows us some control over the name of the output file. In addition to the %SEQ% symbol that inserts a unique sequence number, it is also possible to insert a date or date time string into the filename. Note that in the current release, you cannot have both a date time string and a sequence string in the file naming convention.

Note

When using a date time string as part of the filename, files with the same date time string will overwrite each other. If this is the case, then consider using a sequence number instead.

When producing an output file, we can either keep appending to a single file by selecting the Append to existing file checkbox, which will keep growing without limit, or we can create new files, which will be dependent on attributes of the data being written. This is the normal way of working for non-XML files, and a new output file will be generated when one or more records are written to the adapter.

Selecting the file destination

The criteria for deciding to write to a new file are as follows:

  • Number of Messages Equals: This criterion forces the file to be written when the given number of messages is reached. This can be thought of as batching the output so that we reduce the number of files created.
  • Elapsed Time Exceeds: This criterion puts a time limit on how long the adapter will keep the file open. This places an upper time limit on creating an output file.
  • File Size Exceeds: This criterion allows us to limit the file sizes. As soon as a message causes the file to exceed the given size, no further message will be appended to this file.

These criteria can all be applied together, and as soon as one of them is satisfied, a new file will be created.

Tip

Writing XML files

When writing XML files, care should be taken to have only a single message per file, or else there will be multiple XML root elements in the document that will make it an invalid XML document.

Completing the FTP file writer service

The next step in the wizard is to define the actual record formats. This is exactly the same as when creating an input file. If we don't have an existing XML Schema for the output file, then we can use the wizard to create one if we have a sample file to use.

Finally, remember to run through the wizard to the end, and click Finish rather than Cancel or our entire configuration will be lost.

Moving, copying, and deleting files

Sometimes we will just want an adapter to move, copy, or delete a file without reading it. We will use the ability of the file adapter to move a file (refer to Chapter 16, Message Interaction Patterns, to set up a scheduler service within the SOA Suite).

The following steps will configure an outbound file or FTP adapter to move, copy, or delete a file without reading it.

Generating an adapter

Use the file or FTP adapter wizard to generate an outbound adapter with a file synchronous read or FTP synchronous get operation. You may also use a write or put operation. The data location should use physical directories, and the content should be marked as opaque, so that there is no need to understand the content of the file. Once this has been done, we will modify the WSDL generated to add additional operations.

Modifying the port type

First, we edit the WSDL file itself, which we call <AdapterServiceName>.wsdl. Modify the port type of the adapter to include the additional operations required, as shown in the following code snippet. Use the same message type as the operation generated by the wizard.

    <portType name="Write_ptt">
      <operation name="Write">
        <input message="tns:Write_msg"/>
        </operation>
      <operation name="Move">
          <input message="tns:Write_msg"/>
        </operation>
    </portType>

Note that the following operation names are supported:

  • Move
  • Copy
  • Delete

Modifying the binding

We now edit the bindings which are contained in a separate file to the WSDL called <AdapterServiceName>_<file/ftp>.jca. Bindings describe how the service description maps onto the physical service implementation. For now, we will just modify the binding to add the additional operations needed and map them to the appropriate implementation, as shown in the following code snippet:

<endpoint-interaction portType="Write_ptt" operation="Write">
  <interaction-spec className="oracle.tip.adapter.file.outbound.FileInteractionSpec">
    <property name="PhysicalDirectory" value="/user/oracle"/>
    <property name="FileNamingConvention" value="fred.txt"/>
    <property name="Append" value="false"/> 
  </interaction-spec>
</endpoint-interaction>
<endpoint-interaction portType="Write_ptt" operation="Move">
  <interaction-spec className="oracle.tip.adapter.file.outbound.FileIoInteractionSpec">
    <property name="Type" value="MOVE"/>
    <property name="SourcePhysicalDirectory" value="/usr/oracle"/>
    <property name="SourceFileName" value="fred.txt"/>
    <property name="TargetPhysicalDirectory" value="/usr/payroll"/>
    <property name="TargetFileName" value="Payroll.txt"/>
  </interaction-spec>
</endpoint-interaction>

Note that the following types are supported for use with the equivalent operation names. Observe that operation names are mixed case and types are uppercase:

  • MOVE
  • COPY
  • DELETE

The interaction-spec is used to define the types of operations supported by this particular binding. It references a Java class that provides the functionality and may have properties associated with it to configure its behavior. When using the FTP adapter for move, copy, and delete operations, the InteractionSpec property in the <AdapterServiceName>_ftp.jca file must be changed to oracle.tip.adapter.ftp.outbound.FTPIoInteractionSpec.

Configuring file locations through additional header properties

In order to allow runtime configuration of the source and destination locations, it is necessary to pass the source and destination information as properties of the call to the adapter.

For example, when using a BPEL invoke activity, we would pass the properties, as shown in the following code snippet:

<invoke name="Invoke_Move" inputVariable="Invoke_Move_InputVariable"
        partnerLink="MoveFileService" portType="ns1:Write_ptt"
        operation="Move">
  <bpelx:inputProperty name="jca.file.SourceDirectory"
                       variable="srcDir"/>
  <bpelx:inputProperty name="jca.file.SourceFileName"
                       variable="srcFile"/>
  <bpelx:inputProperty name="jca.file.TargetDirectory"
                       variable="dstDir"/>
  <bpelx:inputProperty name="jca.file.TargetFileName"
                       variable="dstFile"/>
</invoke>

These properties will override the default values in the .jca file and can be used to dynamically select at runtime the locations to be used for the move, copy, or delete operation. The properties may be edited in the source tab of the BPEL document, the .bpel file, or they may be created and modified through the Properties tab of the Invoke dialog in the BPEL visual editor.

Configuring file locations through additional header properties

With these modifications, the move, copy, or delete operations will appear as additional operations on the service that can be invoked from the service bus or within BPEL.

Adapter headers

In addition to the data associated with the service being provided by the adapter, sometimes referred to as the payload of the service, it is also possible to configure or obtain information about the operation of an adapter through header messages. Header messages are passed as properties to the adapter. We demonstrated this in the previous section when setting the header properties for the Move operation of the file adapter.

The properties passed are of the following two formats:

<property name="AdapterSpecificPropertyName" value="data"/>
<property name="AdapterSpecificPropertyName" variable="varName"/>

The former style allows the passing of literal values through the header. The latter allows the property to be set from a string variable.
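
As a further illustration, the same mechanism can be used to override the name of the file produced by a write operation. The following sketch assumes the jca.file.FileName property supported by the file adapter, together with a hypothetical partner link named WriteFileService and a string variable named outFile:

<invoke name="Invoke_Write" inputVariable="Invoke_Write_InputVariable"
        partnerLink="WriteFileService" portType="ns1:Write_ptt"
        operation="Write">
  <bpelx:inputProperty name="jca.file.FileName" variable="outFile"/>
</invoke>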

Testing the file adapters

We can test the adapters by using them within a BPEL process or a Mediator like the one shown in the following screenshot, which uses the two services that we have just described and links them with a copy operation that transforms data from one format to the other. Building a BPEL process is covered in Chapter 5, Building Composite Services and Business Processes.

Creating a dummy message type

Add a new message definition to the WSDL like the one in the following code snippet:

<message name="Dummy_msg">
    <part xmlns:xsd="http://www.w3.org/2001/XMLSchema"
          name="Dummy" type="xsd:string"/>
</message>

Adding an output message to the read operation

In the <portType>, add an <output> element to the Read <operation> element, as shown in the following code snippet.

    <portType name="Read_ptt">
      <operation name="Read">
        <input message="tns:PayrollList_msg"/>
        <output message="tns:Dummy_msg"/>
      </operation>
    </portType>

In the <jca:operation> element, add an empty <output/> element.

Using the modified interface

The adapter will now have a two-way interface and will need to receive a reply to a message before it sends the next batch of records, thus throttling the throughput. Note that no data needs to be sent in the reply message. This will limit the number of active operations to the number of threads assigned to the file adapter.
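
A minimal sketch of the resulting BPEL interaction is shown in the following snippet; the partner link and variable names are hypothetical, and the reply simply returns a variable based on the Dummy_msg type:

<receive name="Receive_PayrollBatch" partnerLink="ReadPayrollFile"
         portType="ns1:Read_ptt" operation="Read"
         variable="payrollBatch" createInstance="yes"/>
<!-- process the batch of records here before acknowledging it -->
<reply name="Reply_PayrollBatch" partnerLink="ReadPayrollFile"
       portType="ns1:Read_ptt" operation="Read"
       variable="dummyReply"/>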

Writing a payroll file

We can now use the FTP adapter to write the payroll file to a remote filesystem. This requires us to create another adapter within our BPEL process, Mediator, or Service Bus. Setting up the FTP adapter to write to a remote filesystem is very similar to reading files with the file adapter.

Selecting the FTP connection

The first difference is that, when using the FTP adapter instead of the file adapter, we have to specify an FTP connection in the underlying application server. This connection is set up in the application server that runs the adapter; for example, when using the WebLogic Application Server, the WebLogic Console can be used. The JNDI location of the connection factory is what must be provided to the wizard, and it must be configured in the application server using its administrative tools. Refer to your application server documentation for details on how to do this, as it varies between application servers.
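
For reference, the JNDI location chosen in the wizard is recorded in the connection-factory element of the generated <AdapterServiceName>_ftp.jca file. The location shown in the following sketch, eis/Ftp/FtpAdapter, is only the commonly used default, not a requirement:

<!-- in <AdapterServiceName>_ftp.jca -->
<connection-factory location="eis/Ftp/FtpAdapter"/>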

Choosing the operation

When we choose the type of operation, we again notice that the screen differs from the file adapter's, having an additional File Type category. This relates to the ASCII and binary transfer modes of an FTP session. ASCII causes the FTP transfer to adapt to changes in character encoding between the two systems, for example, converting between EBCDIC and ASCII or altering line feeds between systems. When transferring text files, it is generally a good idea to select the ASCII format. When sending binary files, it is vital that the binary file type is used to avoid unfortunate and unwanted transformations.

Selecting the file destination

Choosing where the file is created is the same for both the FTP and the file adapter. Again, there is a choice of physical or logical paths. The file naming convention allows us some control over the name of the output file. In addition to the %SEQ% symbol that inserts a unique sequence number, it is also possible to insert a date or date time string into the filename. Note that in the current release, you cannot have both a date time string and a sequence string in the file naming convention.

Note

When using a date time string as part of the filename, files that end up with the same date time string will overwrite each other. If this is a risk, then consider using a sequence number instead.
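
As an illustration, the chosen convention is stored as the FileNamingConvention property of the outbound binding; the value in the following snippet, which combines a fixed prefix with the sequence placeholder, is purely an example:

<property name="FileNamingConvention" value="payroll_%SEQ%.txt"/>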

When producing an output file, we can either keep appending to a single file by selecting the Append to existing file checkbox, which will keep growing without limit, or we can create new files, which will be dependent on attributes of the data being written. This is the normal way of working for non-XML files, and a new output file will be generated when one or more records are written to the adapter.

The criteria for deciding to write to a new file are as follows:

  • Number of Messages Equals: This criterion forces the file to be written when the given number of messages is reached. This can be thought of as batching the output so that we reduce the number of files created.
  • Elapsed Time Exceeds: This criterion puts a time limit on how long the adapter will keep the file open. This places an upper time limit on creating an output file.
  • File Size Exceeds: This criterion allows us to limit the file sizes. As soon as a message causes the file to exceed the given size, no further message will be appended to this file.

These criteria can all be applied together, and as soon as one of them is satisfied, a new file will be created.
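
A sketch of how these criteria might appear as properties of the outbound interaction-spec is shown in the following snippet. It assumes the NumberMessages, ElapsedTime (in seconds), and FileSize (in bytes) property names used by the file adapter; the values themselves are placeholders:

<interaction-spec className="oracle.tip.adapter.file.outbound.FileInteractionSpec">
  <property name="PhysicalDirectory" value="/usr/payroll"/>
  <property name="FileNamingConvention" value="payroll_%SEQ%.txt"/>
  <property name="Append" value="false"/>
  <property name="NumberMessages" value="10"/>
  <property name="ElapsedTime" value="60"/>
  <property name="FileSize" value="1024000"/>
</interaction-spec>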

Tip

Writing XML files

When writing XML files, care should be taken to have only a single message per file, or else there will be multiple XML root elements in the document that will make it an invalid XML document.

Completing the FTP file writer service

The next step in the wizard is to define the actual record formats. This is exactly the same as when creating an input file. If we don't have an existing XML Schema for the output file, we can use the wizard to create one from a sample file.

Finally, remember to run through the wizard to the end and click Finish rather than Cancel, or our entire configuration will be lost.

Moving, copying, and deleting files

Sometimes we will just want an adapter to move, copy, or delete a file without reading it. We will use the ability of the file adapter to move a file in Chapter 16, Message Interaction Patterns, where it is used to set up a scheduler service within the SOA Suite.

The following steps will configure an outbound file or FTP adapter to move, copy, or delete a file without reading it.

Generating an adapter

Use the file or FTP adapter wizard to generate an outbound adapter with a file synchronous read or FTP synchronous get operation. You may also use a write or put operation. The data location should use physical directories, and the content should be marked as opaque, so that there is no need to understand the content of the file. Once this has been done, we will modify the WSDL generated to add additional operations.

Modifying the port type

First, we edit the WSDL file itself, which we call <AdapterServiceName>.wsdl. Modify the port type of the adapter to include the additional operations required, as shown in the following code snippet. Use the same message type as the operation generated by the wizard.

    <portType name="Write_ptt">
      <operation name="Write">
        <input message="tns:Write_msg"/>
      </operation>
      <operation name="Move">
        <input message="tns:Write_msg"/>
      </operation>
    </portType>

Note that the following operation names are supported:

  • Move
  • Copy
  • Delete

Modifying the binding

We now edit the bindings which are contained in a separate file to the WSDL called <AdapterServiceName>_<file/ftp>.jca. Bindings describe how the service description maps onto the physical service implementation. For now, we will just modify the binding to add the additional operations needed and map them to the appropriate implementation, as shown in the following code snippet:

<endpoint-interaction portType="Write_ptt" operation="Write">
  <interaction-spec className="oracle.tip.adapter.file.outbound.FileInteractionSpec">
    <property name="PhysicalDirectory" value="/user/oracle"/>
    <property name="FileNamingConvention" value="fred.txt"/>
    <property name="Append" value="false"/> 
  </interaction-spec>
</endpoint-interaction>
<endpoint-interaction portType="Write_ptt" operation="Move">
  <interaction-spec className="oracle.tip.adapter.file.outbound.FileIoInteractionSpec">
    <property name="Type" value="MOVE"/>
    <property name="SourcePhysicalDirectory" value="/usr/oracle"/>
    <property name="SourceFileName" value="fred.txt"/>
    <property name="TargetPhysicalDirectory" value="/usr/payroll"/>
    <property name="TargetFileName" value="Payroll.txt"/>
  </interaction-spec>
</endpoint-interaction>

Note that the following types are supported for use with the equivalent operation names. Observe that operation names are mixed case and types are uppercase:

  • MOVE
  • COPY
  • DELETE

The interaction-spec is used to define the types of operations supported by this particular binding. It references a Java class that provides the functionality and may have properties associated with it to configure its behavior. When using the FTP adapter for move, copy, and delete operations, the InteractionSpec property in the <AdapterServiceName>_ftp.jca file must be changed to oracle.tip.adapter.ftp.outbound.FTPIoInteractionSpec.

Configuring file locations through additional header properties

In order to allow runtime configuration of the source and destination locations, it is necessary to pass the source and destination information as properties of the call to the adapter.

For example, when using a BPEL invoke activity, we would pass the properties, as shown in the following code snippet:

<invoke name="Invoke_Move" inputVariable="Invoke_Move_InputVariable"
        partnerLink="MoveFileService" portType="ns1:Write_ptt"
        operation="Move">
  <bpelx:inputProperty name="jca.file.SourceDirectory"
                       variable="srcDir"/>
  <bpelx:inputProperty name="jca.file.SourceFileName"
                       variable="srcFile"/>
  <bpelx:inputProperty name="jca.file.TargetDirectory"
                       variable="dstDir"/>
  <bpelx:inputProperty name="jca.file.TargetFileName"
                       variable="dstFile"/>
</invoke>

These properties will override the default values in the .jca file and can be used to dynamically select at runtime the locations to be used for the move, copy, or delete operation. The properties may be edited in the source tab of the BPEL document, the .bpel file, or they may be created and modified through the Properties tab of the Invoke dialog in the BPEL visual editor.

Configuring file locations through additional header properties

With these modifications, the move, copy, or delete operations will appear as additional operations on the service that can be invoked from the service bus or within BPEL.

Adapter headers

In addition to the data associated with the service being provided by the adapter, sometimes referred to as the payload of the service, it is also possible to configure or obtain information about the operation of an adapter through header messages. Header messages are passed as properties to the adapter. We demonstrated this in the previous section when setting the header properties for the Move operation of the file adapter.

The properties passed are of the following two formats:

<property name="AdapterSpecificPropertyName" value="data"/>
<property name="AdapterSpecificPropertyName" variable="varName"/>

The former style allows the passing of literal values through the header. The latter allows the property to be set from a string variable.

Testing the file adapters

We can test the adapters by using them within a BPEL process or a Mediator like the one shown in the following screenshot. Building a BPEL process is covered in Chapter 5, Building Composite Services and Business Processes. This uses the two services we that have just described and links them with a copy operation that transforms data from one format to the other.

Testing the file adapters

Adding an output message to the read operation

In the <portType> , add an <output> element to the read <operation> element.

    <portType name="Read_ptt">
      <operation name="Read">
        <input message="tns:PayrollList_msg"/>
        <output message="tns:Dummy_msg"/>
      </operation>
    </portType>

In the <jca:operation> element, add an empty <output/> element.

Using the modified interface

The adapter will now have a two way interface and will need to receive a reply to a message before it sends the next batch of records, thus throttling the throughput. Note that no data needs to be sent in the reply message. This will limit the number of active operations to the number of threads assigned to the file adapter.

Writing a payroll file

We can now use the FTP adapter to write the payroll file to a remote filesystem. This requires us to create another adapter within our BPEL process, Mediator, or Service Bus. Setting up the FTP adapter to write to a remote filesystem is very similar to reading files with the file adapter.

Selecting the FTP connection

The first difference is that when using the FTP adapter instead of the File adapter, we have to specify an FTP connection to use in the underlying application server. This connection is set up in the application server running the adapter. For example, when using the WebLogic Application Server, the WebLogic Console can be used. The JNDI location of the connection factory is the location that must be provided to the wizard. The JNDI location must be configured in the application server using the administrative tools provided by the application server. Refer to your application server documentation for details on how to do this, as it varies between applications servers.

Selecting the FTP connection

Choosing the operation

When we choose the type of operation, we again notice that the screen is different from the file adapter, having an additional File Type category. This relates to the Ascii and Binary settings of an FTP session. ASCII causes the FTP transfer to adapt to changes in character encoding between the two systems, for example, converting between EBCDIC and ASCII or altering line feeds between systems. When using text files, it is generally a good idea to select the ASCII format. When sending binary files, it is vital that the binary file type is used to avoid any unfortunate and unwanted transformations.

Choosing the operation

Selecting the file destination

Choosing where the file is created is the same for both the FTP and the file adapter. Again, there is a choice of physical or logical paths. The file naming convention allows us some control over the name of the output file. In addition to the %SEQ% symbol that inserts a unique sequence number, it is also possible to insert a date or date time string into the filename. Note that in the current release, you cannot have both a date time string and a sequence string in the file naming convention.

Note

When using a date time string as part of the filename, files with the same date time string will overwrite each other. If this is the case, then consider using a sequence number instead.

When producing an output file, we can either keep appending to a single file by selecting the Append to existing file checkbox, which will keep growing without limit, or we can create new files, which will be dependent on attributes of the data being written. This is the normal way of working for non-XML files, and a new output file will be generated when one or more records are written to the adapter.

Selecting the file destination

The criteria for deciding to write to a new file are as follows:

  • Number of Messages Equals: This criterion forces the file to be written when the given number of messages is reached. This can be thought of as batching the output so that we reduce the number of files created.
  • Elapsed Time Exceeds: This criterion puts a time limit on how long the adapter will keep the file open. This places an upper time limit on creating an output file.
  • File Size Exceeds: This criterion allows us to limit the file sizes. As soon as a message causes the file to exceed the given size, no further message will be appended to this file.

These criteria can all be applied together, and as soon as one of them is satisfied, a new file will be created.

Tip

Writing XML files

When writing XML files, care should be taken to have only a single message per file, or else there will be multiple XML root elements in the document that will make it an invalid XML document.

Completing the FTP file writer service

The next step in the wizard is to define the actual record formats. This is exactly the same as when creating an input file. If we don't have an existing XML Schema for the output file, then we can use the wizard to create one if we have a sample file to use.

Finally, remember to run through the wizard to the end, and click Finish rather than Cancel or our entire configuration will be lost.

Moving, copying, and deleting files

Sometimes we will just want an adapter to move, copy, or delete a file without reading it. We will use the ability of the file adapter to move a file (refer to Chapter 16, Message Interaction Patterns, to set up a scheduler service within the SOA Suite).

The following steps will configure an outbound file or FTP adapter to move, copy, or delete a file without reading it.

Generating an adapter

Use the file or FTP adapter wizard to generate an outbound adapter with a file synchronous read or FTP synchronous get operation. You may also use a write or put operation. The data location should use physical directories, and the content should be marked as opaque, so that there is no need to understand the content of the file. Once this has been done, we will modify the WSDL generated to add additional operations.

Modifying the port type

First, we edit the WSDL file itself, which we call <AdapterServiceName>.wsdl. Modify the port type of the adapter to include the additional operations required, as shown in the following code snippet. Use the same message type as the operation generated by the wizard.

    <portType name="Write_ptt">
      <operation name="Write">
        <input message="tns:Write_msg"/>
        </operation>
      <operation name="Move">
          <input message="tns:Write_msg"/>
        </operation>
    </portType>

Note that the following operation names are supported:

  • Move
  • Copy
  • Delete

Modifying the binding

We now edit the bindings which are contained in a separate file to the WSDL called <AdapterServiceName>_<file/ftp>.jca. Bindings describe how the service description maps onto the physical service implementation. For now, we will just modify the binding to add the additional operations needed and map them to the appropriate implementation, as shown in the following code snippet:

<endpoint-interaction portType="Write_ptt" operation="Write">
  <interaction-spec className="oracle.tip.adapter.file.outbound.FileInteractionSpec">
    <property name="PhysicalDirectory" value="/user/oracle"/>
    <property name="FileNamingConvention" value="fred.txt"/>
    <property name="Append" value="false"/> 
  </interaction-spec>
</endpoint-interaction>
<endpoint-interaction portType="Write_ptt" operation="Move">
  <interaction-spec className="oracle.tip.adapter.file.outbound.FileIoInteractionSpec">
    <property name="Type" value="MOVE"/>
    <property name="SourcePhysicalDirectory" value="/usr/oracle"/>
    <property name="SourceFileName" value="fred.txt"/>
    <property name="TargetPhysicalDirectory" value="/usr/payroll"/>
    <property name="TargetFileName" value="Payroll.txt"/>
  </interaction-spec>
</endpoint-interaction>

Note that the following types are supported for use with the equivalent operation names. Observe that operation names are mixed case and types are uppercase:

  • MOVE
  • COPY
  • DELETE

The interaction-spec is used to define the types of operations supported by this particular binding. It references a Java class that provides the functionality and may have properties associated with it to configure its behavior. When using the FTP adapter for move, copy, and delete operations, the InteractionSpec property in the <AdapterServiceName>_ftp.jca file must be changed to oracle.tip.adapter.ftp.outbound.FTPIoInteractionSpec.

Configuring file locations through additional header properties

In order to allow runtime configuration of the source and destination locations, it is necessary to pass the source and destination information as properties of the call to the adapter.

For example, when using a BPEL invoke activity, we would pass the properties, as shown in the following code snippet:

<invoke name="Invoke_Move" inputVariable="Invoke_Move_InputVariable"
        partnerLink="MoveFileService" portType="ns1:Write_ptt"
        operation="Move">
  <bpelx:inputProperty name="jca.file.SourceDirectory"
                       variable="srcDir"/>
  <bpelx:inputProperty name="jca.file.SourceFileName"
                       variable="srcFile"/>
  <bpelx:inputProperty name="jca.file.TargetDirectory"
                       variable="dstDir"/>
  <bpelx:inputProperty name="jca.file.TargetFileName"
                       variable="dstFile"/>
</invoke>

These properties will override the default values in the .jca file and can be used to dynamically select at runtime the locations to be used for the move, copy, or delete operation. The properties may be edited in the source tab of the BPEL document, the .bpel file, or they may be created and modified through the Properties tab of the Invoke dialog in the BPEL visual editor.

Configuring file locations through additional header properties

With these modifications, the move, copy, or delete operations will appear as additional operations on the service that can be invoked from the service bus or within BPEL.

Adapter headers

In addition to the data associated with the service being provided by the adapter, sometimes referred to as the payload of the service, it is also possible to configure or obtain information about the operation of an adapter through header messages. Header messages are passed as properties to the adapter. We demonstrated this in the previous section when setting the header properties for the Move operation of the file adapter.

The properties passed are of the following two formats:

<property name="AdapterSpecificPropertyName" value="data"/>
<property name="AdapterSpecificPropertyName" variable="varName"/>

The former style allows the passing of literal values through the header. The latter allows the property to be set from a string variable.

Testing the file adapters

We can test the adapters by using them within a BPEL process or a Mediator like the one shown in the following screenshot. Building a BPEL process is covered in Chapter 5, Building Composite Services and Business Processes. This uses the two services we that have just described and links them with a copy operation that transforms data from one format to the other.

Testing the file adapters

Using the modified interface

The adapter will now have a two way interface and will need to receive a reply to a message before it sends the next batch of records, thus throttling the throughput. Note that no data needs to be sent in the reply message. This will limit the number of active operations to the number of threads assigned to the file adapter.

Writing a payroll file

We can now use the FTP adapter to write the payroll file to a remote filesystem. This requires us to create another adapter within our BPEL process, Mediator, or Service Bus. Setting up the FTP adapter to write to a remote filesystem is very similar to reading files with the file adapter.

Selecting the FTP connection

The first difference is that when using the FTP adapter instead of the File adapter, we have to specify an FTP connection to use in the underlying application server. This connection is set up in the application server running the adapter. For example, when using the WebLogic Application Server, the WebLogic Console can be used. The JNDI location of the connection factory is the location that must be provided to the wizard. The JNDI location must be configured in the application server using the administrative tools provided by the application server. Refer to your application server documentation for details on how to do this, as it varies between applications servers.

Selecting the FTP connection

Choosing the operation

When we choose the type of operation, we again notice that the screen is different from the file adapter, having an additional File Type category. This relates to the Ascii and Binary settings of an FTP session. ASCII causes the FTP transfer to adapt to changes in character encoding between the two systems, for example, converting between EBCDIC and ASCII or altering line feeds between systems. When using text files, it is generally a good idea to select the ASCII format. When sending binary files, it is vital that the binary file type is used to avoid any unfortunate and unwanted transformations.

Choosing the operation

Selecting the file destination

Choosing where the file is created is the same for both the FTP and the file adapter. Again, there is a choice of physical or logical paths. The file naming convention allows us some control over the name of the output file. In addition to the %SEQ% symbol that inserts a unique sequence number, it is also possible to insert a date or date time string into the filename. Note that in the current release, you cannot have both a date time string and a sequence string in the file naming convention.

Note

When using a date time string as part of the filename, files with the same date time string will overwrite each other. If this is the case, then consider using a sequence number instead.

When producing an output file, we can either keep appending to a single file by selecting the Append to existing file checkbox, which will keep growing without limit, or we can create new files, which will be dependent on attributes of the data being written. This is the normal way of working for non-XML files, and a new output file will be generated when one or more records are written to the adapter.

Selecting the file destination

The criteria for deciding to write to a new file are as follows:

  • Number of Messages Equals: This criterion forces the file to be written when the given number of messages is reached. This can be thought of as batching the output so that we reduce the number of files created.
  • Elapsed Time Exceeds: This criterion puts a time limit on how long the adapter will keep the file open. This places an upper time limit on creating an output file.
  • File Size Exceeds: This criterion allows us to limit the file sizes. As soon as a message causes the file to exceed the given size, no further message will be appended to this file.

These criteria can all be applied together, and as soon as one of them is satisfied, a new file will be created.

Tip

Writing XML files

When writing XML files, care should be taken to have only a single message per file, or else there will be multiple XML root elements in the document that will make it an invalid XML document.

Completing the FTP file writer service

The next step in the wizard is to define the actual record formats. This is exactly the same as when creating an input file. If we don't have an existing XML Schema for the output file, then we can use the wizard to create one if we have a sample file to use.

Finally, remember to run through the wizard to the end, and click Finish rather than Cancel or our entire configuration will be lost.

Moving, copying, and deleting files

Sometimes we will just want an adapter to move, copy, or delete a file without reading it. We will use the ability of the file adapter to move a file (refer to Chapter 16, Message Interaction Patterns, to set up a scheduler service within the SOA Suite).

The following steps will configure an outbound file or FTP adapter to move, copy, or delete a file without reading it.

Generating an adapter

Use the file or FTP adapter wizard to generate an outbound adapter with a file synchronous read or FTP synchronous get operation. You may also use a write or put operation. The data location should use physical directories, and the content should be marked as opaque, so that there is no need to understand the content of the file. Once this has been done, we will modify the WSDL generated to add additional operations.

Modifying the port type

First, we edit the WSDL file itself, which we call <AdapterServiceName>.wsdl. Modify the port type of the adapter to include the additional operations required, as shown in the following code snippet. Use the same message type as the operation generated by the wizard.

    <portType name="Write_ptt">
      <operation name="Write">
        <input message="tns:Write_msg"/>
        </operation>
      <operation name="Move">
          <input message="tns:Write_msg"/>
        </operation>
    </portType>

Note that the following operation names are supported:

  • Move
  • Copy
  • Delete

Modifying the binding

We now edit the bindings which are contained in a separate file to the WSDL called <AdapterServiceName>_<file/ftp>.jca. Bindings describe how the service description maps onto the physical service implementation. For now, we will just modify the binding to add the additional operations needed and map them to the appropriate implementation, as shown in the following code snippet:

<endpoint-interaction portType="Write_ptt" operation="Write">
  <interaction-spec className="oracle.tip.adapter.file.outbound.FileInteractionSpec">
    <property name="PhysicalDirectory" value="/user/oracle"/>
    <property name="FileNamingConvention" value="fred.txt"/>
    <property name="Append" value="false"/> 
  </interaction-spec>
</endpoint-interaction>
<endpoint-interaction portType="Write_ptt" operation="Move">
  <interaction-spec className="oracle.tip.adapter.file.outbound.FileIoInteractionSpec">
    <property name="Type" value="MOVE"/>
    <property name="SourcePhysicalDirectory" value="/usr/oracle"/>
    <property name="SourceFileName" value="fred.txt"/>
    <property name="TargetPhysicalDirectory" value="/usr/payroll"/>
    <property name="TargetFileName" value="Payroll.txt"/>
  </interaction-spec>
</endpoint-interaction>

Note that the following types are supported for use with the equivalent operation names. Observe that operation names are mixed case and types are uppercase:

  • MOVE
  • COPY
  • DELETE

The interaction-spec is used to define the types of operations supported by this particular binding. It references a Java class that provides the functionality and may have properties associated with it to configure its behavior. When using the FTP adapter for move, copy, and delete operations, the InteractionSpec property in the <AdapterServiceName>_ftp.jca file must be changed to oracle.tip.adapter.ftp.outbound.FTPIoInteractionSpec.

Configuring file locations through additional header properties

In order to allow runtime configuration of the source and destination locations, it is necessary to pass the source and destination information as properties of the call to the adapter.

For example, when using a BPEL invoke activity, we would pass the properties, as shown in the following code snippet:

<invoke name="Invoke_Move" inputVariable="Invoke_Move_InputVariable"
        partnerLink="MoveFileService" portType="ns1:Write_ptt"
        operation="Move">
  <bpelx:inputProperty name="jca.file.SourceDirectory"
                       variable="srcDir"/>
  <bpelx:inputProperty name="jca.file.SourceFileName"
                       variable="srcFile"/>
  <bpelx:inputProperty name="jca.file.TargetDirectory"
                       variable="dstDir"/>
  <bpelx:inputProperty name="jca.file.TargetFileName"
                       variable="dstFile"/>
</invoke>

These properties will override the default values in the .jca file and can be used to dynamically select at runtime the locations to be used for the move, copy, or delete operation. The properties may be edited in the source tab of the BPEL document, the .bpel file, or they may be created and modified through the Properties tab of the Invoke dialog in the BPEL visual editor.

Configuring file locations through additional header properties

With these modifications, the move, copy, or delete operations will appear as additional operations on the service that can be invoked from the service bus or within BPEL.

Adapter headers

In addition to the data associated with the service being provided by the adapter, sometimes referred to as the payload of the service, it is also possible to configure or obtain information about the operation of an adapter through header messages. Header messages are passed as properties to the adapter. We demonstrated this in the previous section when setting the header properties for the Move operation of the file adapter.

The properties passed are of the following two formats:

<property name="AdapterSpecificPropertyName" value="data"/>
<property name="AdapterSpecificPropertyName" variable="varName"/>

The former style allows the passing of literal values through the header. The latter allows the property to be set from a string variable.

Testing the file adapters

We can test the adapters by using them within a BPEL process or a Mediator like the one shown in the following screenshot. Building a BPEL process is covered in Chapter 5, Building Composite Services and Business Processes. This uses the two services we that have just described and links them with a copy operation that transforms data from one format to the other.

Testing the file adapters

Writing a payroll file

We can now use the FTP adapter to write the payroll file to a remote filesystem. This requires us to create another adapter within our BPEL process, Mediator, or Service Bus. Setting up the FTP adapter to write to a remote filesystem is very similar to reading files with the file adapter.

Selecting the FTP connection

The first difference is that when using the FTP adapter instead of the File adapter, we have to specify an FTP connection to use in the underlying application server. This connection is set up in the application server running the adapter. For example, when using the WebLogic Application Server, the WebLogic Console can be used. The JNDI location of the connection factory is the location that must be provided to the wizard. The JNDI location must be configured in the application server using the administrative tools provided by the application server. Refer to your application server documentation for details on how to do this, as it varies between applications servers.

Selecting the FTP connection

Choosing the operation

When we choose the type of operation, we again notice that the screen is different from the file adapter, having an additional File Type category. This relates to the Ascii and Binary settings of an FTP session. ASCII causes the FTP transfer to adapt to changes in character encoding between the two systems, for example, converting between EBCDIC and ASCII or altering line feeds between systems. When using text files, it is generally a good idea to select the ASCII format. When sending binary files, it is vital that the binary file type is used to avoid any unfortunate and unwanted transformations.

Choosing the operation

Selecting the file destination

Choosing where the file is created is the same for both the FTP and the file adapter. Again, there is a choice of physical or logical paths. The file naming convention allows us some control over the name of the output file. In addition to the %SEQ% symbol that inserts a unique sequence number, it is also possible to insert a date or date time string into the filename. Note that in the current release, you cannot have both a date time string and a sequence string in the file naming convention.

Note

When using a date time string as part of the filename, files with the same date time string will overwrite each other. If this is the case, then consider using a sequence number instead.

When producing an output file, we can either keep appending to a single file by selecting the Append to existing file checkbox, which will keep growing without limit, or we can create new files, which will be dependent on attributes of the data being written. This is the normal way of working for non-XML files, and a new output file will be generated when one or more records are written to the adapter.

Selecting the file destination

The criteria for deciding to write to a new file are as follows:

  • Number of Messages Equals: This criterion forces the file to be written when the given number of messages is reached. This can be thought of as batching the output so that we reduce the number of files created.
  • Elapsed Time Exceeds: This criterion puts a time limit on how long the adapter will keep the file open. This places an upper time limit on creating an output file.
  • File Size Exceeds: This criterion allows us to limit the file sizes. As soon as a message causes the file to exceed the given size, no further message will be appended to this file.

These criteria can all be applied together, and as soon as one of them is satisfied, a new file will be created.

Tip

Writing XML files

When writing XML files, care should be taken to have only a single message per file, or else there will be multiple XML root elements in the document that will make it an invalid XML document.

Completing the FTP file writer service

The next step in the wizard is to define the actual record formats. This is exactly the same as when creating an input file. If we don't have an existing XML Schema for the output file, then we can use the wizard to create one if we have a sample file to use.

Finally, remember to run through the wizard to the end, and click Finish rather than Cancel or our entire configuration will be lost.

Moving, copying, and deleting files

Sometimes we will just want an adapter to move, copy, or delete a file without reading it. We will use the ability of the file adapter to move a file (refer to Chapter 16, Message Interaction Patterns, to set up a scheduler service within the SOA Suite).

The following steps will configure an outbound file or FTP adapter to move, copy, or delete a file without reading it.

Generating an adapter

Use the file or FTP adapter wizard to generate an outbound adapter with a file synchronous read or FTP synchronous get operation. You may also use a write or put operation. The data location should use physical directories, and the content should be marked as opaque, so that there is no need to understand the content of the file. Once this has been done, we will modify the WSDL generated to add additional operations.

Modifying the port type

First, we edit the WSDL file itself, which we call <AdapterServiceName>.wsdl. Modify the port type of the adapter to include the additional operations required, as shown in the following code snippet. Use the same message type as the operation generated by the wizard.

    <portType name="Write_ptt">
      <operation name="Write">
        <input message="tns:Write_msg"/>
        </operation>
      <operation name="Move">
          <input message="tns:Write_msg"/>
        </operation>
    </portType>

Note that the following operation names are supported:

  • Move
  • Copy
  • Delete

Modifying the binding

We now edit the bindings which are contained in a separate file to the WSDL called <AdapterServiceName>_<file/ftp>.jca. Bindings describe how the service description maps onto the physical service implementation. For now, we will just modify the binding to add the additional operations needed and map them to the appropriate implementation, as shown in the following code snippet:

<endpoint-interaction portType="Write_ptt" operation="Write">
  <interaction-spec className="oracle.tip.adapter.file.outbound.FileInteractionSpec">
    <property name="PhysicalDirectory" value="/user/oracle"/>
    <property name="FileNamingConvention" value="fred.txt"/>
    <property name="Append" value="false"/> 
  </interaction-spec>
</endpoint-interaction>
<endpoint-interaction portType="Write_ptt" operation="Move">
  <interaction-spec className="oracle.tip.adapter.file.outbound.FileIoInteractionSpec">
    <property name="Type" value="MOVE"/>
    <property name="SourcePhysicalDirectory" value="/usr/oracle"/>
    <property name="SourceFileName" value="fred.txt"/>
    <property name="TargetPhysicalDirectory" value="/usr/payroll"/>
    <property name="TargetFileName" value="Payroll.txt"/>
  </interaction-spec>
</endpoint-interaction>

Note that the following types are supported for use with the equivalent operation names. Observe that operation names are mixed case and types are uppercase:

  • MOVE
  • COPY
  • DELETE

The interaction-spec is used to define the types of operations supported by this particular binding. It references a Java class that provides the functionality and may have properties associated with it to configure its behavior. When using the FTP adapter for move, copy, and delete operations, the InteractionSpec property in the <AdapterServiceName>_ftp.jca file must be changed to oracle.tip.adapter.ftp.outbound.FTPIoInteractionSpec.

Configuring file locations through additional header properties

In order to allow runtime configuration of the source and destination locations, it is necessary to pass the source and destination information as properties of the call to the adapter.

For example, when using a BPEL invoke activity, we would pass the properties, as shown in the following code snippet:

<invoke name="Invoke_Move" inputVariable="Invoke_Move_InputVariable"
        partnerLink="MoveFileService" portType="ns1:Write_ptt"
        operation="Move">
  <bpelx:inputProperty name="jca.file.SourceDirectory"
                       variable="srcDir"/>
  <bpelx:inputProperty name="jca.file.SourceFileName"
                       variable="srcFile"/>
  <bpelx:inputProperty name="jca.file.TargetDirectory"
                       variable="dstDir"/>
  <bpelx:inputProperty name="jca.file.TargetFileName"
                       variable="dstFile"/>
</invoke>

These properties will override the default values in the .jca file and can be used to dynamically select at runtime the locations to be used for the move, copy, or delete operation. The properties may be edited in the source tab of the BPEL document, the .bpel file, or they may be created and modified through the Properties tab of the Invoke dialog in the BPEL visual editor.

Configuring file locations through additional header properties

With these modifications, the move, copy, or delete operations will appear as additional operations on the service that can be invoked from the service bus or within BPEL.

Adapter headers

In addition to the data associated with the service being provided by the adapter, sometimes referred to as the payload of the service, it is also possible to configure or obtain information about the operation of an adapter through header messages. Header messages are passed as properties to the adapter. We demonstrated this in the previous section when setting the header properties for the Move operation of the file adapter.

The properties passed are of the following two formats:

<property name="AdapterSpecificPropertyName" value="data"/>
<property name="AdapterSpecificPropertyName" variable="varName"/>

The former style allows the passing of literal values through the header. The latter allows the property to be set from a string variable.

Testing the file adapters

We can test the adapters by using them within a BPEL process or a Mediator like the one shown in the following screenshot. Building a BPEL process is covered in Chapter 5, Building Composite Services and Business Processes. This uses the two services we that have just described and links them with a copy operation that transforms data from one format to the other.

Testing the file adapters

Selecting the FTP connection

The first difference is that when using the FTP adapter instead of the File adapter, we have to specify an FTP connection to use in the underlying application server. This connection is set up in the application server running the adapter. For example, when using the WebLogic Application Server, the WebLogic Console can be used. The JNDI location of the connection factory is the location that must be provided to the wizard. The JNDI location must be configured in the application server using the administrative tools provided by the application server. Refer to your application server documentation for details on how to do this, as it varies between applications servers.

Selecting the FTP connection

Choosing the operation

When we choose the type of operation, we again notice that the screen is different from the file adapter, having an additional File Type category. This relates to the Ascii and Binary settings of an FTP session. ASCII causes the FTP transfer to adapt to changes in character encoding between the two systems, for example, converting between EBCDIC and ASCII or altering line feeds between systems. When using text files, it is generally a good idea to select the ASCII format. When sending binary files, it is vital that the binary file type is used to avoid any unfortunate and unwanted transformations.

Choosing the operation

Selecting the file destination

Choosing where the file is created is the same for both the FTP and the file adapter. Again, there is a choice of physical or logical paths. The file naming convention allows us some control over the name of the output file. In addition to the %SEQ% symbol that inserts a unique sequence number, it is also possible to insert a date or date time string into the filename. Note that in the current release, you cannot have both a date time string and a sequence string in the file naming convention.

Note

When using a date time string as part of the filename, files with the same date time string will overwrite each other. If this is the case, then consider using a sequence number instead.

When producing an output file, we can either keep appending to a single file by selecting the Append to existing file checkbox, which will keep growing without limit, or we can create new files, which will be dependent on attributes of the data being written. This is the normal way of working for non-XML files, and a new output file will be generated when one or more records are written to the adapter.

Selecting the file destination

The criteria for deciding to write to a new file are as follows:

  • Number of Messages Equals: This criterion forces the file to be written when the given number of messages is reached. This can be thought of as batching the output so that we reduce the number of files created.
  • Elapsed Time Exceeds: This criterion puts a time limit on how long the adapter will keep the file open. This places an upper time limit on creating an output file.
  • File Size Exceeds: This criterion allows us to limit the file sizes. As soon as a message causes the file to exceed the given size, no further message will be appended to this file.

These criteria can all be applied together, and as soon as one of them is satisfied, a new file will be created.

Tip

Writing XML files

When writing XML files, care should be taken to have only a single message per file, or else there will be multiple XML root elements in the document that will make it an invalid XML document.

Completing the FTP file writer service

The next step in the wizard is to define the actual record formats. This is exactly the same as when creating an input file. If we don't have an existing XML Schema for the output file, then we can use the wizard to create one if we have a sample file to use.

Finally, remember to run through the wizard to the end, and click Finish rather than Cancel or our entire configuration will be lost.

Moving, copying, and deleting files

Sometimes we will just want an adapter to move, copy, or delete a file without reading it. We will use the ability of the file adapter to move a file (refer to Chapter 16, Message Interaction Patterns, to set up a scheduler service within the SOA Suite).

The following steps will configure an outbound file or FTP adapter to move, copy, or delete a file without reading it.

Generating an adapter

Use the file or FTP adapter wizard to generate an outbound adapter with a file synchronous read or FTP synchronous get operation. You may also use a write or put operation. The data location should use physical directories, and the content should be marked as opaque, so that there is no need to understand the content of the file. Once this has been done, we will modify the WSDL generated to add additional operations.

Modifying the port type

First, we edit the WSDL file itself, which we call <AdapterServiceName>.wsdl. Modify the port type of the adapter to include the additional operations required, as shown in the following code snippet. Use the same message type as the operation generated by the wizard.

    <portType name="Write_ptt">
      <operation name="Write">
        <input message="tns:Write_msg"/>
        </operation>
      <operation name="Move">
          <input message="tns:Write_msg"/>
        </operation>
    </portType>

Note that the following operation names are supported:

  • Move
  • Copy
  • Delete

Modifying the binding

We now edit the bindings which are contained in a separate file to the WSDL called <AdapterServiceName>_<file/ftp>.jca. Bindings describe how the service description maps onto the physical service implementation. For now, we will just modify the binding to add the additional operations needed and map them to the appropriate implementation, as shown in the following code snippet:

<endpoint-interaction portType="Write_ptt" operation="Write">
  <interaction-spec className="oracle.tip.adapter.file.outbound.FileInteractionSpec">
    <property name="PhysicalDirectory" value="/user/oracle"/>
    <property name="FileNamingConvention" value="fred.txt"/>
    <property name="Append" value="false"/> 
  </interaction-spec>
</endpoint-interaction>
<endpoint-interaction portType="Write_ptt" operation="Move">
  <interaction-spec className="oracle.tip.adapter.file.outbound.FileIoInteractionSpec">
    <property name="Type" value="MOVE"/>
    <property name="SourcePhysicalDirectory" value="/usr/oracle"/>
    <property name="SourceFileName" value="fred.txt"/>
    <property name="TargetPhysicalDirectory" value="/usr/payroll"/>
    <property name="TargetFileName" value="Payroll.txt"/>
  </interaction-spec>
</endpoint-interaction>

Note that the following types are supported for use with the equivalent operation names. Observe that operation names are mixed case and types are uppercase:

  • MOVE
  • COPY
  • DELETE

The interaction-spec is used to define the types of operations supported by this particular binding. It references a Java class that provides the functionality and may have properties associated with it to configure its behavior. When using the FTP adapter for move, copy, and delete operations, the InteractionSpec property in the <AdapterServiceName>_ftp.jca file must be changed to oracle.tip.adapter.ftp.outbound.FTPIoInteractionSpec.

Configuring file locations through additional header properties

In order to allow runtime configuration of the source and destination locations, it is necessary to pass the source and destination information as properties of the call to the adapter.

For example, when using a BPEL invoke activity, we would pass the properties, as shown in the following code snippet:

<invoke name="Invoke_Move" inputVariable="Invoke_Move_InputVariable"
        partnerLink="MoveFileService" portType="ns1:Write_ptt"
        operation="Move">
  <bpelx:inputProperty name="jca.file.SourceDirectory"
                       variable="srcDir"/>
  <bpelx:inputProperty name="jca.file.SourceFileName"
                       variable="srcFile"/>
  <bpelx:inputProperty name="jca.file.TargetDirectory"
                       variable="dstDir"/>
  <bpelx:inputProperty name="jca.file.TargetFileName"
                       variable="dstFile"/>
</invoke>

These properties will override the default values in the .jca file and can be used to dynamically select at runtime the locations to be used for the move, copy, or delete operation. The properties may be edited in the source tab of the BPEL document, the .bpel file, or they may be created and modified through the Properties tab of the Invoke dialog in the BPEL visual editor.

Configuring file locations through additional header properties

With these modifications, the move, copy, or delete operations will appear as additional operations on the service that can be invoked from the service bus or within BPEL.

Adapter headers

In addition to the data associated with the service being provided by the adapter, sometimes referred to as the payload of the service, it is also possible to configure or obtain information about the operation of an adapter through header messages. Header messages are passed as properties to the adapter. We demonstrated this in the previous section when setting the header properties for the Move operation of the file adapter.

The properties passed are of the following two formats:

<property name="AdapterSpecificPropertyName" value="data"/>
<property name="AdapterSpecificPropertyName" variable="varName"/>

The former style allows the passing of literal values through the header. The latter allows the property to be set from a string variable.

Testing the file adapters

We can test the adapters by using them within a BPEL process or a Mediator like the one shown in the following screenshot. Building a BPEL process is covered in Chapter 5, Building Composite Services and Business Processes. This uses the two services we that have just described and links them with a copy operation that transforms data from one format to the other.

Testing the file adapters

Creating services from databases

Along with files, databases are one of the most common ways of interacting with existing applications and providing them with a service interface. In this section, we will investigate how to write records into the database using the database adapter.

Writing to a database

Before we configure a database adapter, we first need to create a new database connection within JDeveloper. This is done by creating a Database Connection from the New Gallery.

Choosing a database connection brings up the database connection wizard, which allows us to enter the connection details of our database.

Selecting the database schema

With an established database connection, we can now create a service based on a database table. We will create a service that updates the database with the payroll details. The model for the database tables is shown in the following screenshot:

Selecting the database schema

Now that we have our database connection, we can start the Database Adapter Wizard by dragging the database adapter icon from the tool palette onto a BPEL process or an SCA Assembly. After giving the service a name, we come to the Service Connection screen, shown as follows:

Selecting the database schema

This allows us to choose a local connection in JDeveloper to use and also to select the JNDI location in the runtime environment of the database connection. Note that this JNDI connection must be configured as part of the database adapter in the default application in a similar way to the configuration of the FTP adapter.

Tip

How connections are resolved by the database adapter

When the adapter tries to connect to the database, it first tries to use the JNDI name provided, which should map to a JCA connection factory in the application server. If this name does not exist, then the adapter will use the database connection details from the JDeveloper database connection that was used in the wizard. This behavior is very convenient for development environments because it means that you can deploy and test the adapters in development without having to configure the JCA connection factories. However, best practice is to always configure the JCA adapter in the target environment to avoid unexpected connection failures.
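
As a minimal sketch, the JCA connection factory that the JNDI name points to is typically defined in the DbAdapter deployment (its weblogic-ra.xml or a deployment plan) and mapped onto an underlying data source. The JNDI name eis/DB/PayrollConnection and the data source name jdbc/PayrollDS below are hypothetical; such an entry would normally be created through the WebLogic console rather than by hand:

<connection-instance>
  <jndi-name>eis/DB/PayrollConnection</jndi-name>
  <connection-properties>
    <properties>
      <property>
        <!-- name of the WebLogic data source that supplies the connections -->
        <name>xADataSourceName</name>
        <value>jdbc/PayrollDS</value>
      </property>
    </properties>
  </connection-properties>
</connection-instance>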

Identifying the operation type

The database adapter has many ways in which it can interact with the database to provide a service interface.

Identifying the operation type

The Operation Type splits into two groups, calls into the database and events generated from the database. Calls into the database cover the following operations:

  • Call a stored procedure or function to execute a specific piece of code in the database. This could either update the database or retrieve information, but in either case, it is a synchronous call into the database.
  • Perform an insert, update, delete, or select operation on the database. Again, this is done synchronously as a call into the database.
  • Poll for new or changed records in a database table. This is done as a call into the SCA Assembly or BPEL.
  • Execute custom SQL. This again runs the SQL synchronously against the database.

Polling for new or changed records is the only way for the database adapter to generate messages to be consumed in a BPEL process or an SCA Assembly. If we wish the adapter to wait for BPEL or the SCA Assembly to process the message, then we can use the Do Synchronous Post to BPEL checkbox. For this exercise, we will select insert / update for the operation.

Identifying tables to be operated on

The next step in the wizard asks which table is the root table or the beginning of the query. To select this, we first click the Import Tables… button to bring up the Import Tables dialog.

Identifying tables to be operated on

Once we have imported the tables we need, we then select the PAYROLLITEM table as the root table. We do this because each record will create a new PAYROLLITEM entry. All operations through the database adapter must be done with a root table. Any other table must be referenceable from this root table.

Identifying tables to be operated on

Identifying the relationship between tables

As we have more than one table involved in this operation, we need to decide which table relationship we want to use. In this case, we want to tie a payroll item back to a single employee, so we select the one-to-one relationship.

Identifying the relationship between tables

We can now finish the creation of the database adapter and hook it up with the file adapter we created earlier to allow us to read records from a file and place them in a database.

Note

In our example, the adapter was able to work out relationships between tables by analysis of the foreign keys. If foreign key relationships do not exist, then the Create button may be used to inform the adapter of relationships that exist between tables.

Under the covers

Under the covers, a lot has happened. An offline copy of the relevant database schema has been created so that the design time is not reliant on being permanently connected to a database. The actual mapping of the database tables onto an XML document has also occurred. This mapping is created using Oracle TopLink, and many of the wizard's functions are implemented with TopLink. The mapping can be further refined using the features of TopLink.

Tip

Using keys

Always identify the Primary Key for any table used by the database adapter. This can be done by applying a Primary Key constraint in the database, or if no such key has been created, then TopLink will prompt you to create one. If you have to create one in TopLink, then make sure it is really a Primary Key. TopLink optimizes its use of the database by maintaining an identity for each row in a table with a Primary Key. It only reads the Primary Key on select statements and then checks to see which records it needs to read in from the database. This avoids re-mapping fields for records that appear multiple times in the selection. If you don't identify a Primary Key correctly, then TopLink may incorrectly identify different records as being the same record and only load the data for the first such record encountered. So if you seem to be getting a lot of identical records in response to a query that should have separate records, then check your Primary Key definitions.
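
For example, a Primary Key constraint can be applied in the database with a statement along the following lines (the table and column names are hypothetical):

ALTER TABLE payrollitem
  ADD CONSTRAINT payrollitem_pk PRIMARY KEY (payrollitem_id);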

Writing to a database

Before we configure a database adapter, we first need to create a new database connection within JDeveloper. This is done by creating a Database Connection from the New Gallery.

Choosing a database connection brings up the database connection wizard, which allows us to enter the connection details of our database.

Selecting the database schema

With an established database connection, we can now create a service based on a database table. We will create a service that updates the database with the payroll details. The model for the database tables is shown in the following screenshot:

Selecting the database schema

Now that we have our database connection, we can run the Database Adapter Wizard by dragging the database adapter icon from the tool palette onto a BPEL process or an SCA Assembly. This starts the Database Adapter Wizard, and after giving the service a name, we come to the Service Connection screen. This is shown as follows:

Selecting the database schema

This allows us to choose a local connection in JDeveloper to use and also to select the JNDI location in the runtime environment of the database connection. Note that this JNDI connection must be configured as part of the database adapter in the default application in a similar way to the configuration of the FTP adapter.

Tip

How connections are resolved by the database adapter

When the adapter tries to connect to the database, it first tries to use the JNDI name provided, which should map to a JCA connection factory in the application server. If this name does not exist, then the adapter will use the database connection details from the JDeveloper database connection that was used in the wizard. This behavior is very convenient for development environments because it means that you can deploy and test the adapters in development without having to configure the JCA connection factories. However, best practice is to always configure the JCA adapter in the target environment to avoid unexpected connection failures.

Identifying the operation type

The database adapter has many ways in which it can interact with the database to provide a service interface.

Identifying the operation type

The Operation Type splits into two groups, calls into the database and events generated from the database. Calls into the database cover the following operations:

  • The stored procedure or function call to execute a specific piece of code in the database. This could either update the database or retrieve information, but in either case, it is a synchronous call into the database.
  • Perform an insert, update, delete, or select operation on the database. Again, this is done synchronously as a call into the database.
  • Poll for new or changed records in a database table. This is done as a call into the SCA Assembly or BPEL.
  • Execute custom SQL. This again runs the SQL synchronously against the database.

Polling for new or changed records is the only way for the database adapter to generate messages to be consumed in a BPEL process or an SCA Assembly. If we wish the adapter to wait for BPEL or the SCA Assembly to process the message, then we can use the Do Synchronous Post to BPEL checkbox. For this exercise, we will select insert / update for the operation.

Identifying tables to be operated on

The next step in the wizard asks which table is the root table or the beginning of the query. To select this, we first click the Import Tables… button to bring up the Import Tables dialog.

Identifying tables to be operated on

Once we have imported the tables we need, we then select the PAYROLLITEM table as the root table. We do this because each record will create a new PAYROLLITEM entry. All operations through the database adapter must be done with a root table. Any other table must be referenceable from this root table.

Identifying tables to be operated on

Identifying the relationship between tables

As we have more than one table involved in this operation, we need to decide which table relationship we want to use. In this case, we want to tie a payroll item back to a single employee, so we select the one-to-one relationship.

Identifying the relationship between tables

We can now finish the creation of the database adapter and hook it up with the file adapter we created earlier to allow us to read records from a file and place them in a database.

Note

In our example, the adapter was able to work out relationships between tables by analysis of the foreign keys. If foreign key relationships do not exist, then the Create button may be used to inform the adapter of relationships that exist between tables.

Under the covers

Under the covers, a lot has happened. An offline copy of the relevant database schema has been created so that the design time is not reliant on being permanently connected to a database. The actual mapping of the database tables onto an XML document has also occurred. This mapping is created using Oracle TopLink, and many of the wizard's functions are implemented using TopLink. The mapping can be further refined using the features of TopLink.
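To give a feel for the kind of mapping produced, the generated schema typically represents the root table as a top-level element, with the related table appearing as a nested element. The following is a simplified, illustrative sketch only; the element and field names will depend on your own tables:

    <xs:element name="PayrollitemCollection">
      <xs:complexType>
        <xs:sequence>
          <xs:element name="Payrollitem" minOccurs="0" maxOccurs="unbounded">
            <xs:complexType>
              <xs:sequence>
                <xs:element name="paymentDate" type="xs:dateTime" minOccurs="0" nillable="true"/>
                <xs:element name="amount" type="xs:decimal" minOccurs="0" nillable="true"/>
                <!-- the one-to-one relationship to the EMPLOYEE table appears as a nested element -->
                <xs:element name="employee" minOccurs="0">
                  <xs:complexType>
                    <xs:sequence>
                      <xs:element name="employeeId" type="xs:string" minOccurs="0"/>
                    </xs:sequence>
                  </xs:complexType>
                </xs:element>
              </xs:sequence>
            </xs:complexType>
          </xs:element>
        </xs:sequence>
      </xs:complexType>
    </xs:element>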

Tip

Using keys

Always identify the Primary Key for any table used by the database adapter. This can be done by applying a Primary Key constraint in the database, or if no such key has been created, then TopLink will prompt you to create one. If you have to create one in TopLink, then make sure it is really a Primary Key. TopLink optimizes its use of the database by maintaining an identity for each row in a table that has a Primary Key. On select statements, it reads just the Primary Keys first and then works out which records it actually needs to read in from the database. This avoids re-mapping fields for records that appear multiple times in the selection, because they have already been mapped once. If you don't identify a Primary Key correctly, then TopLink may incorrectly identify different records as being the same record and only load the data for the first such record encountered. So if you seem to be getting a lot of identical records in response to a query that should have separate records, then check your Primary Key definitions.

Summary

In this chapter, we have looked at how to use the file and database adapters to turn file and database interfaces into services that can be consumed by the rest of the SOA Suite. Note that when using the adapters, the schemas are automatically generated and changing the way the adapter works may mean a change in the schema. Therefore, in the next chapter, we will look at how to isolate our applications from the actual adapter service details.

Chapter 4. Loosely-coupling Services

In the previous chapter, we explored how we can take functionality in our existing applications and expose them as services. When we do this, we often find that the service interface we create is tightly coupled to the underlying implementation. We can make our architecture more robust by reducing this coupling. By defining our interface around our architecture, rather than around our existing application interfaces, we can reduce coupling. We can also reduce coupling by using a routing service to avoid physical location dependencies. In this chapter, we will explore how service virtualization through the Mediator and the Service Bus of the Oracle SOA Suite can be used to deliver more loosely-coupled services. Loose coupling reduces the impact of change on our systems, allowing us to deploy new functions more rapidly into the market. Loose coupling also reduces the maintenance costs associated with our deployments.

Coupling

Coupling is a measure of how dependent one service is upon another. The more closely one service depends on another service, the more tightly coupled they are. There have been a number of efforts to formalize metrics for coupling, and they all revolve around the same basic items:

  • Number of input data items: Basically, the number of input parameters of the service.
  • Number of output data items: Basically, the number of output parameters or data items returned by the service.
  • Dependencies on other services: The number of services called by this service.
  • Dependencies of other services on this service: The number of services that invoke this service.
  • Use of shared global data: The number of shared data items used by this service. This may include database tables or shared files.
  • Temporal dependencies: Dependencies on other services being available at specific times.

Let us examine how each of these measures may be applied to our service interface. The principles below are relevant to all services, but they are especially important for widely re-used services.

Number of input data items

A service should only accept as input the data items required to perform the service being requested. Additional information should not be passed into the service because this creates unnecessary dependencies on the input formats. This economy of input allows the service to focus just on the function it is intended to provide and does not require it to understand unnecessary data formats. The best way to isolate the service from changes in data formats that it does not use is to make the service unaware of those data formats.

For example, a credit rating service should only require sufficient information to identify the individual being rated. Additional information, such as the amount of credit being requested or the type of goods or services for which a loan is required, is not necessary for the credit rating service to perform its job.

Tip

Services should accept only the data required to perform their function and nothing more.

When talking about reducing the number of data items input or output from a service, we are talking about the service implementation, not a logical service interface that may be implemented using a canonical data model. The canonical data model may have additional attributes not required by a particular service, but these should not be part of the physical service interface. Adding attributes to "make a service more universal" only serves to make it harder to maintain.
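To make the credit rating example concrete, the physical request schema might carry only the fields needed to identify the individual, even if the canonical customer model holds far more. This is purely an illustrative sketch; the element names are invented for the example:

    <xs:element name="creditRatingRequest">
      <xs:complexType>
        <xs:sequence>
          <!-- just enough to identify the individual; no loan amount, no details of the goods -->
          <xs:element name="customerId" type="xs:string"/>
        </xs:sequence>
      </xs:complexType>
    </xs:element>
    <xs:element name="creditRatingResponse">
      <xs:complexType>
        <xs:sequence>
          <xs:element name="rating" type="xs:int"/>
        </xs:sequence>
      </xs:complexType>
    </xs:element>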

Number of output data items

In the same way that a service should not accept inputs that are unnecessary for the function it performs, a service should not return data that is related only to its internal operation. Exposing such data as part of the response data will create dependencies on the internal implementation of the service that are not necessary.

Sometimes, a service needs to maintain its state between requests. State implies that state information must be maintained at least in the client of the service, so that it can identify the state required by the service when making further requests. However, the state information in the client is often just an index into the state information held in the service. We will return to this subject later in the chapter.

Tip

Services should not return public data that relates to their own internal processing.

Dependencies on other services

Generally, re-use of other services to create a new composite service is a good thing. However, having dependencies on other services does increase the degree of coupling because there is a risk that changes in those services may impact the composite service, and consequently, any services with dependencies on the composite service. We can reduce the risk that this poses by limiting our use of functionality in other services to just that required by the composite.

Tip

Services should limit what they require of other services to the minimum needed to deliver their own functionality.

For example, a dispatching service may decide to validate the address it receives. If this functionality is not specified as being required, because, let's say, all addresses are validated elsewhere, then the dispatching service has an unnecessary dependency that may cause problems in the future.

Dependencies of other services on this service

Having a widely used service is great for re-use, but the greater the number of services that make use of this service, the greater impact a change in this service will have on other services. Extra care must be taken with widely re-used services to ensure that their interfaces are as stable as possible. This stability can be provided by following the guidelines in this section.

Tip

Widely re-used services should focus their interface on just the functionality needed by clients and avoid exposing any unnecessary functions or data.

Use of shared global data

Shared global data in the service context usually takes the form of a dependency on a shared resource, such as data in a database. Such use of shared data structures undermines good design because it does not appear in the service definitions, so the owners of the shared data may be unaware of the dependency. Effectively, this is an extra interface into the service. If it is not documented, then the service is very vulnerable to the shared data structure being changed without its knowledge. Even if the shared data structure is well documented, any changes required must still be synchronized across all users of the shared data.

Note

Avoid the use of shared global data in services unless it is absolutely necessary. If it is absolutely necessary, then the dependency needs to be clearly documented in all users of the shared data. A service's data should only be manipulated through its defined interface. Consider creating a wrapper service around the shared data.

Temporal dependencies

Not all service requests require an immediate response. Often a service can be requested, and the response may be returned later. This is a common model in message-based systems and allows for individual services to be unavailable without impacting other services. Use of queuing systems allows temporal or time decoupling of services, so that two communicating services do not have to be available at the same instant in time. The queue allows messages to be delivered when the service is available rather than when the service is requested.
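For example, a one-way WSDL operation has no response message, so the request can simply be queued and processed whenever the provider becomes available. The operation and message names below are illustrative:

    <wsdl:operation name="submitPayrollRun">
      <!-- no output element: fire-and-forget, suitable for delivery via a queue -->
      <wsdl:input message="tns:submitPayrollRunRequest"/>
    </wsdl:operation>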

Tip

Use asynchronous message interfaces to reduce dependencies of one service on the availability of another.

Reducing coupling in stateful services

A stateful service maintains context for a given client between invocations. When using stateful services, we always need to return some kind of state information to the client. To avoid unnecessary coupling, this state information should always be opaque. By opaque, we mean that it should have no meaning to the client other than as a reference that must be returned to the service when requesting follow on operations. For example, a numeric session ID has no meaning to the client and may be used by the service as an index into a table stored either in memory or in a database table. We will examine how this may be accomplished later in this section.

A common use of state information in a service is to preserve the position in a search that returns more results than can reasonably be returned in a single response. Another use of state information might be to perform correlation between services that have multiple interactions, such as between a bidding service and a bidding client.

Whatever the reason may be, the first task, when confronted with the need for state in a service, is to investigate the ways to remove the state requirement. If there is definitely a need for state to be maintained, then there are two approaches that the service can follow.

  • Externalize all state and return it to the client.
  • Maintain state within the service and return a reference to the client.

In the first case, it is necessary to package up the required state information and return it to the client. Because the client should be unaware of the format of this data, it must be returned as an opaque type. This is best done as an <any> element in the schema for returning the response to the client. An <any> element may be used to hold any type of data, from simple strings through to complex structured types.

For example, if a listing service returns only 20 items at a time, then it must pass back sufficient information to enable it to retrieve the next 20 items in the query.

In the following XML Schema example, we have the XML data definitions to support two operations on a listing service:

  • searchItems
  • nextItems

The searchItems operation will take a searchItemsRequest element for input and return a searchItemsResponse element. The searchItemsResponse has within it a searchState element. This element is a sequence that has an unlimited number of arbitrary elements. This can be used by the service to store sufficient state to allow it to deliver the next 20 items in the response. It is important to realize that this state does not have to be understood by the client of the service. The client of the service just has to copy the searchState element to the continueSearchItemsRequest element to retrieve the next set of 20 results.
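A rough sketch of what such a schema might look like follows. The searchItemsRequest, searchItemsResponse, searchState, and continueSearchItemsRequest elements correspond to those described above; the search criteria and item structure are illustrative only:

    <xs:element name="searchItemsRequest">
      <xs:complexType>
        <xs:sequence>
          <xs:element name="keywords" type="xs:string"/>
        </xs:sequence>
      </xs:complexType>
    </xs:element>

    <xs:element name="searchState">
      <xs:complexType>
        <xs:sequence>
          <!-- opaque to the client: the service stores whatever it needs to resume the search -->
          <xs:any minOccurs="0" maxOccurs="unbounded" processContents="skip"/>
        </xs:sequence>
      </xs:complexType>
    </xs:element>

    <xs:element name="searchItemsResponse">
      <xs:complexType>
        <xs:sequence>
          <xs:element name="item" type="xs:anyType" minOccurs="0" maxOccurs="20"/>
          <xs:element ref="searchState"/>
        </xs:sequence>
      </xs:complexType>
    </xs:element>

    <xs:element name="continueSearchItemsRequest">
      <xs:complexType>
        <xs:sequence>
          <!-- the client simply copies searchState from the previous response -->
          <xs:element ref="searchState"/>
        </xs:sequence>
      </xs:complexType>
    </xs:element>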

Reducing coupling in stateful services

The preceding approach has the advantage that the service may still be stateless, although it gives the appearance of being stateful. The sample schema below could be used to allow the service to resume the search where it left off without the need for any internal state information in the service. By storing the state information (the original request and the index of the next item to be returned) within the response, the service can retrieve the next set of items without having to maintain any state within itself. Obviously, the service for purposes of efficiency could maintain some internal state, such as a database cursor, for a period of time, but this is not necessary.

Reducing coupling in stateful services

An alternative approach to state management is to keep the state information within the service itself. This still requires some state information to be returned to the client, but only a reference to the internal state information is required. In this case, there are a couple of options for dealing with this reference.

One is to take state management outside of the request/response messages and make it part of the wider service contract, for example through the use of WS-Correlation or an HTTP cookie. This approach has the advantage that the service can generally take advantage of the state management functions of the platform, such as the HTTP session state that is available to Java services.

Note

Use of WS-Correlation

It is possible to use a standard correlation mechanism such as WS-Correlation. This is used within SOA Suite by BPEL to correlate process instances with requests. If this approach is used, however, it precludes the use of the externalized state approach discussed earlier. This makes it harder to swap out your service implementation with one that externalizes all its state information. In addition to requiring your service to always internalize state management, no matter how it is implemented, your clients must now support WS-Correlation.

The alternative is to continue to keep the state management in the request/response messages and deal with it within the service. This keeps the client unaware of how the state is managed because the interface is exactly the same for a service that maintains internal state and a service that externalizes all states. A sample schema for this is shown below. Note that unlike the previous schema, there is only a service-specific reference to its own internal state. The service is responsible for maintaining all the required information internally and using the externalized reference to locate this state information.
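A sketch of this alternative form follows, assuming the internal state is referenced by a simple opaque string; as before, the element names are illustrative:

    <xs:element name="continueSearchItemsRequest">
      <xs:complexType>
        <xs:sequence>
          <!-- an opaque key that the service uses to look up its own stored state -->
          <xs:element name="searchState" type="xs:string"/>
        </xs:sequence>
      </xs:complexType>
    </xs:element>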

Reducing coupling in stateful services

The Oracle Service Bus (OSB) in SOA Suite enables us to hide a service's native state management and expose it as an abstract state management mechanism that is less tightly coupled to the way state is physically handled by the service.

Some web service implementations allow for stateful web services, with state managed in a variety of proprietary fashions.

We want to use native state management when we internalize session state because it is easier to manage. The container will do the work for us using mechanisms native to the container. However, this means the client has to be aware that we are using native state management because the client must make use of these mechanisms. We want the client to be unaware of whether the service uses native state management, its own custom state lookup mechanism, or externalizes all session state into the messages flowing between the client and the service. The latter two can look the same to the client and hence make it possible to switch services with different approaches. However, the native state management explicitly requires the client to be aware of how state is managed.

To avoid this coupling, we can use the OSB or Mediator to wrap the native state management services, as shown in the following diagram. The client passes a session state element of unknown contents back to the service façade, which is provided by the OSB or Mediator. The OSB or Mediator then removes the session state element and maps it onto the native state management used by the service, such as placing the value into a session cookie. Thus we have the benefits of using native state management without the need for coupling the client to a particular implementation of the service. For example, a service may use cookies to manage session state, and by having the OSB or Mediator move the cookie value to a field in the message, we avoid clients of the service having to deal with the specifics of the service's state management.

Reducing coupling in stateful services

Service abstraction tools in SOA Suite

Earlier versions of the SOA Suite had the Oracle Enterprise Service Bus. This component has become the Mediator in 11g. In Chapter 2, Writing your First Composite, we introduced the Mediator component of an SCA Assembly. This provides basic routing and transformation abilities. The SOA Suite also includes the Oracle Service Bus. The Oracle Service Bus can also be used for routing and transformation but provides a much richer environment than the Mediator for service abstraction. At first glance, it is not clear whether to use the Oracle Service Bus or the Mediator to perform service abstraction. In this section, we will examine the pros and cons of using each and give some guidance on when to use one and when to use the other.

Do you have a choice?

The Oracle Service Bus currently only runs on the WebLogic platform. The rest of the SOA Suite has been designed to run on multiple platforms such as WebSphere and JBoss. If you need to run on these other platforms then, until OSB becomes multi-platform, you have no choice but to use the Mediator.

When to use the Mediator

Because the Mediator runs within an SCA Assembly, it has the most efficient bindings to other SCA Assembly components, specifically the BPEL engine. This lets us focus on using the Mediator to provide service virtualization services within SCA assemblies. The Mediator enables the virtualization of inputs and outputs within an SCA Assembly. This leads us to four key uses of the Mediator within SCA.

  • Routing between components in an SCA Assembly
  • Validation of incoming messages into an SCA Assembly
  • Transformation of data from one format to another within an SCA Assembly
  • Filtering to allow selection of components to invoke based on message content

The Mediator is an integral part of SCA Assemblies and should be used to adapt SCA Assembly formats to the canonical message formats, which we will talk about later in this chapter.

When to use Oracle Service Bus

The Oracle Service Bus runs in a separate JVM to the other SOA Suite components, so there is a cost associated with invoking SOA Suite components, in terms of additional inter-process communication and hence time. However, the OSB has some very powerful capabilities that make it well suited to being the enterprise-strength Service Bus for a more general enterprise-wide virtualization role. As it is separate from the other components, it is easy to deploy separately and use as an independent Service Bus.

The Service Bus can be used to virtualize external services, where external may mean outside the company but also includes non-SCA services. OSB makes it very easy for operators to modify service endpoint details at runtime, making it very flexible in managing change.

The Service Bus goes beyond routing and transformation by providing the ability to throttle services, restricting the number of invocations they receive. This can be valuable in enforcing client contracts and ensuring that services are not swamped by more requests than they can handle.

Tip

What should I use to virtualize my services?

Service virtualization within an SCA Assembly is the job of the Mediator. The Mediator should be used to ensure that the SCA Assembly always presents a canonical interface to clients and services. Service virtualization of non-SCA components should be done with the Oracle Service Bus. Oracle Service Bus may also be used to transparently enforce throughput restriction on services.

Oracle Service Bus design tools

The Oracle Service Bus can be configured either using the Oracle Workshop for WebLogic or the Oracle Service Bus Console.

Oracle Workshop for WebLogic

Oracle Workshop for WebLogic provides tools for creating all the artifacts needed by the Oracle Service Bus. Based on Eclipse, it provides a rich design environment for building service routings and transformations for deployment to the Service Bus. In future releases, it is expected that all the Service Bus functionality in the Workshop for WebLogic will be provided in JDeveloper. The current versions of JDeveloper do not have support for Oracle Service Bus. Note that there is some duplication of functionality between JDeveloper and Workshop for WebLogic. In some cases, such as WSDL generation, the functionality provided in the Workshop for WebLogic is superior to that provided by JDeveloper. In other cases, such as XSLT generation, the functionality provided by JDeveloper is superior.

Oracle Service Bus Console

In Chapter 2, Writing your First Composite, we introduced the Oracle Service Bus console and used it to build a proxy service that invoked an SCA Assembly.

Service Bus overview

In this section, we will introduce the key features of the Oracle Service Bus and show how they can be used to support service virtualization.

Service Bus message flow

It is useful to examine how messages are processed by the Service Bus. Messages normally target an endpoint in the Service Bus known as a proxy service. Once received by the proxy service the message is processed through a series of input pipeline stages. These pipeline stages may enrich the data by calling out to other web services, or they may transform the message as well as providing logging and message validation. Finally, the message reaches a routing step where it is routed to a service known as a business service. The response, if any, from the service is then sent through the output pipeline stages, which may also enrich the response or transform it before returning a response to the invoker.

Note that there may be no pipeline stages and the router may make a choice between multiple endpoints. Finally, note that the business service is a reference to the target service, which may be hosted within the Service Bus or as a standalone service. The proxy service may be thought of as the external service interface and associated transforms required to make use of the actual business service.

Service Bus message flow

Note

The proxy service should be the canonical interface to our service (see later in the chapter for an explanation of canonical interfaces). The Business Service is the physical implementation interface. The pipelines and routing step transform the request to and from canonical form.

Virtualizing service endpoints

To begin our exploration of the Oracle Service Bus, let us start by looking at how we can use it to virtualize service endpoints. By virtualizing a service endpoint, we mean that we can move the location of the service without affecting any of the services' dependants.

Moving service location

To virtualize the address of our service, we use the business service in the Service Bus. We covered creating a business service in Chapter 2, Writing your First Composite. Note that we are not limited to services described by WSDL. In addition to already defined business and proxy services, we can base our service on XML or messaging systems. The easiest to use is the WSDL web service.

Tip

Endpoint address considerations

When specifying endpoints in the Service Bus, it is generally not a good idea to use localhost or 127.0.0.1. Because the Service Bus definitions may be deployed across multiple nodes, there is no guarantee that business service will be co-located with the Service Bus on every node the Service Bus is deployed upon. Therefore, it is best to ensure that all endpoint addresses use virtual hostnames. Machines that are referenced by a virtual hostname should have that hostname in the local hosts file pointing to the loopback address (127.0.0.1) to benefit from machine affinity.

When we selected the WSDL we wanted to use in Chapter 2, Writing your First Composite, we were taken to another dialog that introspects the WSDL, identifies any ports or bindings, and asks us for which one we wish to use. Bindings are mappings of the WSDL service onto a physical transport mechanism such as SOAP over HTTP. Ports are the mapping of the binding onto a physical endpoint such as a specific server.

Note that if we choose a port, we do not have to provide physical endpoint details later in the definition of the business service, although we may choose to do so. If we choose a binding, then because it doesn't include a physical endpoint address, we have to provide the physical endpoint details explicitly.

If we have chosen a port, we can skip the physical endpoint details. If, however, we chose a binding, or we wish to change the physical service endpoint or add additional physical service endpoints, then we hit the Next>> button to allow us to configure the physical endpoints of the service.

Moving service location

This dialog allows us to do several important things:

  • Modify the Protocol to support a variety of transports.
  • Choose a Load Balancing Algorithm. If there is more than one endpoint URI, then the Service Bus will load balance across them according to this algorithm.
  • Change, add, or remove Endpoint URIs or physical targets.
  • Specify retry logic, specifically the Retry Count, the Retry Iteration Interval, and whether or not to Retry Application Errors (errors generated by the service called, not the transport).

    Note

    Note that the Service Bus gives us the ability to change, add, and remove physical endpoint URIs as well as change the protocol used at runtime. This allows us to change the target services without impacting any clients of the service, providing us with virtualization of our service location.

Using Adapters in Service Bus

The Service Bus can also use adapter definitions created in JDeveloper. To use an adapter from JDeveloper, we cannot directly import the WSDL; instead, we need to import the artifacts in the following order:

  1. The XSD generated by the adapter using Select Resource Type | Interface | XML Schema
  2. The WSDL generated by the adapter using Select Resource Type | Interface | WSDL
  3. The JCA file generated by the adapter using Select Resource Type | Interface | JCA Binding

The WSDL can then be used as a business service. Make sure that the connection factories referenced in the JCA file are configured in the WebLogic Server.

The proxy service provides the interface and adaptation to our business service, typically joining them with a routing step, as we did in Chapter 2, Writing your First Composite. There are other types of actions besides routing flows. Clicking on Add an Action allows us to choose the type of Communication we want to add. Flow Control allows us to add If … Then … logic to our routing decision. However, in most cases, the Communication items will provide all the flexibility we need in our routing decisions. This gives us three types of routing to apply:

  1. Dynamic Routing allows us to route to the result of an XQuery. This is useful if the endpoint address is part of the input message.
  2. Routing allows us to select a single static endpoint.
  3. Routing Table allows us to use an XQuery to route between several endpoints. This is useful when we want to route to different services, based on a particular attribute of the input message.

For simple service endpoint virtualization, we only require the Routing option.

Using Adapters in Service Bus

Having selected a target endpoint, usually a business service, we can then configure how we use that endpoint. In the case of simple location virtualization, the proxy service and the business service endpoints are the same, and so we can just pass on the input message directly to the business service. Later on, we will look at how to transform data to allow virtualization of the service interface.

Selecting a service to call

We can further virtualize our endpoint by routing different requests to different services, based upon the values of the input message. For example, we may use one address lookup service for addresses in our own country and another service for all other addresses. In this case, we would use the routing table option on the add action to provide a list of possible service destinations.

The routing table enables us to have a number of different destinations, and the message will be routed based on the value of an expression. When using a routing table, all the services must be selected based on the same expression; the comparison operators may vary, but the actual value being tested against will always be the same. If this is not the case, then it may be better to use "if … then … else" routing. The routing table may be thought of as a "switch statement", and as with all switch statements, it is a good practice to add a default case.

In the routing table, we can create additional cases, each of which will have a test associated with it. Note that we can also add the default case.

Selecting a service to call

We need to specify the expression to be used for testing against. Clicking on the <Expression> link takes us to the XQuery /XSLT Expression Editor. By selecting the Variable Structures tab and selecting a new structure, we can find the input body of the message, which lets us select the field we wish to use as the comparison expression in our routing table.

When selecting in the tab on the left of the screen, the appropriate XPath expression should appear in the Property Inspector window. We can the click on the XQuery Text area of the screen prior to clicking on the Copy Property to transfer the property XPath expression from the property inspector to the XQuery Text area. We then complete our selection of the expression by clicking on the Save button.

In the example, we are going to route our service based on the country of the address. In addition to the data in the body of the message, we could also route based on other information from the request. Alternatively, by using a message pipeline, we could base our lookup on data external to the request.

Selecting a service to call

Once we have created an expression to use as the basis of comparison for routing, then we select an operator and a value to use for the actual routing comparison. In the following example, if the country value from the expression matches the string uk (include the quotes), then the LocalAddressLookup service will be invoked. Any other value will cause the default service to be invoked, as yet undefined in the following example:

Selecting a service to call

Once the routing has been defined, then it can be saved, as shown in Chapter 2, Writing your First Composite.

Note that we have shown a very simple routing example. The Service Bus is capable of doing much more sophisticated routing decisions. A common pattern is to use a pipeline to enrich the inbound data and then route based on the inbound data. For example, a pricing proxy service may use the inbound pipeline to look up the status of a customer, adding that status to the data available as part of the request. The routing service could then route high value customers to one service and low value customers to another service, based on the looked up status. In this case, the routing is based on a derived value rather than on a value already available in the message.

In summary, a request can be routed to different references, based on the content of the request message. This allows messages to be routed based on geography or pecuniary value for example. This routing, because it takes place in the composite, is transparent to clients of the composite and so aids us in reducing coupling in the system.

Moving service location

To virtualize the address of our service, we use the business service in the Service Bus. We covered creating a business service in Chapter 2, Writing your First Composite. Note that we are not limited to services described by WSDL. In addition to already defined business and proxy services, we can base our service on XML or messaging systems. The easiest to use is the WSDL web service.

Tip

Endpoint address considerations

When specifying endpoints in the Service Bus, it is generally not a good idea to use localhost or 127.0.0.1. Because the Service Bus definitions may be deployed across multiple nodes, there is no guarantee that the business service will be co-located with the Service Bus on every node on which the Service Bus is deployed. It is therefore best to ensure that all endpoint addresses use virtual hostnames. On machines where the target service is co-located, the local hosts file can map that virtual hostname to the loopback address (127.0.0.1) so that requests benefit from machine affinity.

When we selected the WSDL we wanted to use in Chapter 2, Writing your First Composite, we were taken to another dialog that introspects the WSDL, identifies any ports or bindings, and asks us which one we wish to use. Bindings are mappings of the WSDL service onto a physical transport mechanism such as SOAP over HTTP. Ports are mappings of a binding onto a physical endpoint such as a specific server.
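
To make the distinction concrete, here is a minimal, hand-written WSDL fragment (the names, namespaces, and URL are illustrative, not taken from the example used in this chapter). The binding fixes the transport and message style but says nothing about location; the port adds the physical address:

    <wsdl:binding name="AddressLookupSOAPBinding" type="tns:AddressLookupPortType"
        xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/"
        xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
        xmlns:tns="http://example.com/addresslookup">
      <!-- Maps the abstract port type onto SOAP over HTTP; operation details omitted -->
      <soap:binding style="document" transport="http://schemas.xmlsoap.org/soap/http"/>
    </wsdl:binding>

    <wsdl:service name="AddressLookupService"
        xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/"
        xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
        xmlns:tns="http://example.com/addresslookup">
      <!-- The port ties the binding to a physical endpoint address -->
      <wsdl:port name="AddressLookupPort" binding="tns:AddressLookupSOAPBinding">
        <soap:address location="http://addresshost.example.com:7001/AddressLookup"/>
      </wsdl:port>
    </wsdl:service>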

Note that if we choose a port, we do not have to provide physical endpoint details later in the definition of the business service, although we may choose to do so. If we choose a binding, we have to provide the physical endpoint details explicitly, because a binding does not include a physical endpoint address.

If we have chosen a port, we can skip the physical endpoint details. If, however, we chose a binding, or we wish to change the physical service endpoint or add additional physical service endpoints, then we hit the Next>> button to configure the physical endpoints of the service.

This dialog allows us to do several important things:

  • Modify the Protocol to support a variety of transports.
  • Choose a Load Balancing Algorithm. If there is more than one endpoint URI, then the Service Bus will load balance across them according to this algorithm.
  • Change, add, or remove Endpoint URIs or physical targets.
  • Specify retry logic, specifically the Retry Count, the Retry Iteration Interval, and whether or not to Retry Application Errors (errors generated by the service called, not the transport).

    Note

    Note that the Service Bus gives us the ability to change, add, and remove physical endpoint URIs as well as change the protocol used at runtime. This allows us to change the target services without impacting any clients of the service, providing us with virtualization of our service location.

Using Adapters in Service Bus

The Service Bus can also use adapter definitions created in JDeveloper. To use an adapter from JDeveloper, we cannot directly import the WSDL; instead, we need to import the artifacts in the following order:

  1. The XSD generated by the adapter using Select Resource Type | Interface | XML Schema
  2. The WSDL generated by the adapter using Select Resource Type | Interface | WSDL
  3. The JCA file generated by the adapter using Select Resource Type | Interface | JCA Binding

The WSDL can then be used to create a business service. Make sure that the resources referenced in the JCA file are configured on the WebLogic Server that hosts the Service Bus.
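
As an illustration, here is a trimmed, hypothetical sketch of the kind of JCA file the JDeveloper adapter wizard generates for a database adapter; the name, JNDI location, and operation are placeholders and will differ in a real project. The point to note is that the connection factory JNDI name it references must already exist on the target WebLogic Server:

    <adapter-config name="lookupCustomer" adapter="Database Adapter"
        xmlns="http://platform.integration.oracle/blocks/adapter/fw/metadata">
      <!-- This JNDI location must be configured on the WebLogic Server hosting the
           Service Bus, otherwise calls through the business service fail at runtime -->
      <connection-factory location="eis/DB/CustomerDataSource"/>
      <endpoint-interaction portType="lookupCustomer_ptt" operation="lookupCustomer">
        <!-- Interaction spec details generated by the wizard omitted for brevity -->
      </endpoint-interaction>
    </adapter-config>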

The proxy service provides the interface and adaptation to our business service, typically joining them with a routing step, as we did in Chapter 2, Writing your First Composite. There are other types of action besides routing. Clicking on Add an Action allows us to choose the type of Communication action we want to add. Flow Control allows us to add If … Then … logic to our routing decision. However, in most cases, the Communication items will provide all the flexibility we need in our routing decisions. This gives us three types of routing to apply:

  1. Dynamic Routing allows us to route to the result of an XQuery. This is useful if the endpoint address is part of the input message.
  2. Routing allows us to select a single static endpoint.
  3. Routing Table allows us to use an XQuery to route between several endpoints. This is useful when we want to route to different services, based on a particular attribute of the input message.

For simple service endpoint virtualization, we only require the Routing option.

Having selected a target endpoint, usually a business service, we can then configure how we use that endpoint. In the case of simple location virtualization, the proxy service and the business service share the same interface, and so we can just pass the input message directly on to the business service. Later on, we will look at how to transform data to allow virtualization of the service interface.

Selecting a service to call

We can further virtualize our endpoint by routing different requests to different services, based upon the values of the input message. For example, we may use one address lookup service for addresses in our own country and another service for all other addresses. In this case, we would use the routing table option on the add action to provide a list of possible service destinations.

The routing table enables us to have a number of different destinations, and the message will be routed based on the value of an expression. When using a routing table, all the cases must test the same expression; the comparison operators and the values compared against may vary from case to case, but the expression being evaluated is always the same. If this is not the case, then it may be better to use "if … then … else" routing. The routing table may be thought of as a "switch statement", and as with all switch statements, it is good practice to add a default case.

In the routing table, we can create additional cases, each of which will have a test associated with it. Note that we can also add the default case.

We need to specify the expression to be used for testing against. Clicking on the <Expression> link takes us to the XQuery/XSLT Expression Editor. By selecting the Variable Structures tab and selecting a new structure, we can find the input body of the message, which lets us select the field we wish to use as the comparison expression in our routing table.

When selecting in the tab on the left of the screen, the appropriate XPath expression should appear in the Property Inspector window. We can then click on the XQuery Text area of the screen before clicking on Copy Property to transfer the XPath expression from the Property Inspector to the XQuery Text area. We then complete our selection of the expression by clicking on the Save button.

In the example, we are going to route our service based on the country of the address. In addition to the data in the body of the message, we could also route based on other information from the request. Alternatively, by using a message pipeline, we could base our lookup on data external to the request.
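
As a sketch, and assuming the canonical request wraps the address in an address element with a country child (the element names and the can namespace prefix are illustrative, not part of the chapter's schema), the routing table expression might simply be:

    $body/can:addressLookupRequest/can:address/can:country/text()

Each case in the routing table then compares the result of this one expression against a different value, such as 'uk'.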

Once we have created an expression to use as the basis of comparison for routing, we then select an operator and a value to use for the actual routing comparison. In the following example, if the country value from the expression matches the string 'uk' (the quotes are required, as the value is an XQuery string literal), then the LocalAddressLookup service will be invoked. Any other value will cause the default service, as yet undefined in the following example, to be invoked:

[Screenshot: the routing table, with a case that routes requests where the country equals 'uk' to the LocalAddressLookup service]

Once the routing has been defined, then it can be saved, as shown in Chapter 2, Writing your First Composite.

Note that we have shown a very simple routing example. The Service Bus is capable of making much more sophisticated routing decisions. A common pattern is to use a pipeline to enrich the inbound data and then route based on the enriched data. For example, a pricing proxy service may use the inbound pipeline to look up the status of a customer, adding that status to the data available as part of the request. The routing service could then route high-value customers to one service and low-value customers to another service, based on the looked-up status. In this case, the routing is based on a derived value rather than on a value already available in the message.

In summary, a request can be routed to different references, based on the content of the request message. This allows messages to be routed based on geography or pecuniary value for example. This routing, because it takes place in the composite, is transparent to clients of the composite and so aids us in reducing coupling in the system.

Virtualizing service interfaces

We have looked at how to virtualize a service endpoint. Now let's look at how we can further virtualize the service by abstracting its interface into a common format, known as the canonical form. This provides further flexibility by allowing us to replace the implementation of the service with one that has a different interface but performs the same function. The native format is the data format the service actually uses; the canonical format is an idealized format that we wish to develop against.

Physical versus logical interfaces

Best practice for integration projects has long been to have a canonical form for all messages exchanged between systems. The canonical form is a common format for all messages. If a system wants to send a message, it first transforms it to the canonical form before forwarding it to the receiving system, which then transforms it from the canonical form into its own representation. This same good practice is still valid in a service-oriented world, and the Service Bus is the mechanism the SOA Suite provides for us to implement it.

Tip

Canonical data and canonical interface

The canonical data formats should represent the idealized data format for the data entities in the system. The canonical interfaces should be the idealized service interfaces. Generally, it is a bad idea to use existing service data formats or service interfaces as the canonical form. There is a lot of work being done in various industry-specific bodies to define standardized canonical forms for entities that are exchanged between corporations.

The benefits of a canonical form are as follows:

  • Transformations are only necessary to and from the canonical form, reducing the number of different transformations that need to be created
  • The data format is decoupled from the services, allowing a service to be replaced by one that provides the same function but uses a different data format

This is illustrated graphically by a system in which two different clients make requests to any one of four services, all providing the same function but with different implementations. Without the canonical form, we would need a transformation between the client format and the server format inbound, and again outbound. For four services, this yields eight transformations, and for two clients, this doubles to sixteen transformations.

Using the canonical format gives us two transformations for each client, inbound and outbound of the canonical form. With two clients, this gives us four transformations. To this, we add the server transformations to and from the canonical form, of which there are two per server, giving us eight transformations. This gives us a total of twelve transformations that must be coded, rather than the sixteen needed for native-to-native transformation. In general, native-to-native integration requires two transformations per client-service pair (2 × clients × services), whereas the canonical form requires only two per client plus two per service (2 × (clients + services)), a saving that grows rapidly as clients and services are added.

[Diagram: two clients calling four equivalent services, with and without a canonical form]

The benefits of the canonical form are most clearly seen when we deploy a new client. Without the canonical form, we would need to develop eight transformations to allow the client to work with the four different possible service implementations. With the canonical form, we only need two transformations, to and from the canonical form.

Let's look at how we implement the canonical form in Oracle Service Bus.

Mapping service interfaces

In order to take advantage of the canonical form in our service interfaces, we must have an abstract service interface that provides the functionality we need without being specific to any particular service implementation. Once we have this, we can then use it as the canonical service form.

We set up the initial project in the same way we did in the previous section on virtualizing service endpoints. The proxy should provide the canonical interface, while the business service provides the native service interface. Because the proxy and business services do not share the same interface, we need to do some more work in the route configuration.

We need to map the canonical form of the address lookup interface onto the native form of the service interface. In the example, we are mapping our canonical interface onto the interface provided by Harte-Hanks Global Address (http://www.qudox.com), a web-based address lookup service. To do this, we create a new Service Bus project and add the Harte-Hanks WSDL (http://webservices.globaladdress.net/globaladdress.asmx?WSDL). We use this to define the business service. We also add the canonical interface WSDL that we have defined and create a new proxy with this interface. We then need to map the proxy service onto the Harte-Hanks service by editing the message flow associated with the proxy, as we did in the previous section.
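
To make the mapping problem concrete, the following sketch contrasts a possible canonical request with the kind of native request the external service expects. The element names and namespaces are purely illustrative (the real canonical schema and the Global Address schema will differ), but they show why both the operation name and the payload need to be mapped:

    <!-- Canonical request: our idealized format -->
    <can:addressLookupRequest xmlns:can="http://example.com/canonical/address">
      <can:address>
        <can:country>uk</can:country>
        <can:postcode>AB1 2CD</can:postcode>
      </can:address>
    </can:addressLookupRequest>

    <!-- Native request: the format the external SearchAddress operation expects -->
    <ga:SearchAddress xmlns:ga="http://example.com/native/globaladdress">
      <ga:UserName>myAccount</ga:UserName>
      <ga:Password>myPassword</ga:Password>
      <ga:Country>uk</ga:Country>
      <ga:PostCode>AB1 2CD</ga:PostCode>
      <ga:MaxResults>10</ga:MaxResults>
    </ga:SearchAddress>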

Our mapping needs to do two things as follows:

  • Map the method name on the interface to the correct method in the business service
  • Map the parameters in the canonical request onto the parameters needed in the business service request

For each method on the canonical interface, we must map it onto a method in the physical interface. We do this by selecting the appropriate method from the business service operation drop-down box. We need to do this because the methods provided in the external service do not match the method names in our canonical service. In the following example, we have mapped onto the SearchAddress method.

[Screenshot: the route node with the business service operation set to SearchAddress]

Having selected an operation, we now need to transform the input data from the format provided by the canonical interface into the format required by the external service. We need to map the request and response messages for a two-way method, or just the request message for a one-way method. The actual mapping may be done using either XQuery or XSLT. In our example, we will use an XSLT transform.

To perform the transformation, we add a Message Processing action to our message flow, in this case a Replace action. The variable body always holds the message in the Service Bus flow. It receives the message through the proxy interface and is also used to deliver the message to the business service interface. This behavior differs from BPEL and most programming languages, where we typically have separate variables for the input and output messages. We need to transform this message from the canonical format received by the proxy into the native format expected by the business service.

Be aware that there are really two flows associated with the proxy service. The request flow is used to receive the inbound message and perform any processing before invoking the target business service. The response flow takes the response from the business service and performs any necessary processing before replying to the invoker of the proxy service.

On selecting replace, we can fill in the details in the Request Actions dialog. The message is held in the body variable, and so we can fill this (body) in as the target variable name. We then need to select which part of the body we want to replace.

Clicking on the XPath link brings up the XPath Expression Editor, where we can enter the portion of the target variable that we wish to replace. In this case, we wish to replace all the elements so we enter ./*, which selects the top level element and all elements beneath it. Clicking on the Save button causes the expression to be saved in the Replace Action dialog.

Having identified the portion of the message we wish to replace (all of it), we now need to specify what we will replace it with. In this case, we wish to transform the whole input message, so we click on the Expression link and select the XSLT Resources tab. Clicking on the Browse button enables us to choose a previously registered XSLT transformation file. After selecting the file, we need to identify the input to the transformation. In this case, the input message is in the body variable, and so we select all the elements in the body by using the expression $body/*. We then save our transformation expression.

Having provided the source data, the target, and the transformation, we can then save and repeat the whole process for the response message (in this case, converting from native to canonical form).

We can use JDeveloper to build an XSLT transform and then upload it into the Service Bus. A future release will add support for XQuery in JDeveloper, similar to that provided in Oracle Workshop for WebLogic. XSLT is an XML language that describes how to transform one XML document into another. Fortunately, most XSLT can be created using the graphical mapping tool in JDeveloper, and so SOA Suite developers don't have to be experts in XSLT, although it is very useful to know how it works. Note that in our transform, we may need to enhance the message with additional information; for example, all the Global Address methods require a username and password to be provided so that requests can be accounted for. This information has no place in the canonical request format, but must be added in the transform. A sample transform that does just this is shown in the following screenshot:

[Screenshot: the XSLT transform adding the username and password to the native request]

Note that we use XPath string functions to set the username and password fields. It would be better to set these from the properties or an external file, as we would usually want to use them in a number of calls to the physical service. XPath functions are capable of allowing access to composite properties. We actually only need to set five fields in the request, namely, a country, postcode, username, password, and the maximum number of results to return. All the other fields are not necessary for the service we are using and so are hidden from end users because they do not appear in the canonical form of the service.
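
A minimal, hand-written sketch of such a transform, reusing the illustrative canonical and native element names from earlier (the real schemas will differ), might look like this; it adds the account details using XPath string expressions and maps only the handful of fields the service actually needs:

    <?xml version="1.0" encoding="UTF-8"?>
    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
        xmlns:can="http://example.com/canonical/address"
        xmlns:ga="http://example.com/native/globaladdress">
      <!-- Canonical-to-native sketch: builds the native SearchAddress request from the
           canonical address lookup request, adding fields that have no place in the
           canonical form -->
      <xsl:template match="/can:addressLookupRequest">
        <ga:SearchAddress>
          <!-- Account details set with XPath string expressions; in practice these are
               better sourced from configuration than hardcoded in the transform -->
          <ga:UserName><xsl:value-of select="string('myAccount')"/></ga:UserName>
          <ga:Password><xsl:value-of select="string('myPassword')"/></ga:Password>
          <ga:Country><xsl:value-of select="can:address/can:country"/></ga:Country>
          <ga:PostCode><xsl:value-of select="can:address/can:postcode"/></ga:PostCode>
          <ga:MaxResults>10</ga:MaxResults>
        </ga:SearchAddress>
      </xsl:template>
    </xsl:stylesheet>

The response mapping, from the native form back to the canonical form, follows the same pattern in the opposite direction.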

Applying canonical form in the Service Bus

When we think about the canonical form and routing, there are several different operations that may need to be performed:

  • Conversion to/from the native business service form from/to the canonical proxy form
  • Conversion to/from the native client form from/to the canonical proxy form
  • Routing between multiple native services, each potentially with its own message format

The following diagram represents these different potential interactions as distinct proxy services in the Service Bus. To reduce coupling and make maintenance easier, each native service has a corresponding canonical proxy service. This isolates the rest of the system from the actual native formats, as shown below in the Local-Harte-Hanks-Proxy and Local-LocalAddress-Proxy services, which transform the native service format to/from the canonical form. This approach allows us to change the native address lookup implementations without impacting anything other than the corresponding Local-*-Proxy service.

The Canonical-Address-Proxy has the job of hiding the fact that the address lookup service is actually provided by a number of different service providers, each with their own message formats. By providing this service, we can easily add additional address providers without impacting the clients of the address lookup service.

[Diagram: the Canonical-Address-Proxy routing to the Local-Harte-Hanks-Proxy and Local-LocalAddress-Proxy services, each fronting its native address service]

In addition to the services shown in the diagram, we may have clients that are not written to use the canonical address lookup. In this case, we need to provide a proxy that transforms the native input request to/from the canonical form. This allows us to be isolated from the requirements of the clients of the service. If a client requires its own interface to the address lookup service, we can easily provide that through a proxy without the need to impact the rest of the system, again reducing coupling.

An important optimization

The previous approach provides a very robust way of isolating service consumers and service providers from the native formats and locations of their partners. However, there is a legitimate concern about the overhead of all these additional proxy services, and also about the possibility of a client bypassing them and accessing a native service directly. To address both problems, the Service Bus provides a local transport mechanism that can be specified as part of the binding of the proxy service. The local transport provides two things for us:

  • It makes services consumable only by other services in the Service Bus; they cannot be accessed externally
  • It provides a highly optimized messaging transport between proxy services, providing in-memory speed to avoid unnecessary overhead in service hand-offs between proxy services

These optimizations mean that it is very efficient to use the canonical form, and so the Service Bus not only allows us great flexibility in how we decouple our services from each other, but it also provides a very efficient mechanism for us to implement that decoupling. Note, though, that there is a cost involved in performing XSLT or XQuery transformations. This cost may be viewed as the price of loose coupling.

Physical versus logical interfaces

Best practice for integration projects was to have a canonical form for all messages exchanged between systems. The canonical form was a common format for all messages. If a system wanted to send a message, then it first needed to transform it to the canonical form before it could be forwarded to the receiving system, which would then transform it from the canonical form to its own representation. This same good practice is still valid in a service-oriented world and the Service Bus is the mechanism SOA Suite provides for us to do this.

Tip

Canonical data and canonical interface

The canonical data formats should represent the idealized data format for the data entities in the system. The canonical interfaces should be the idealized service interfaces. Generally, it is a bad idea to use existing service data formats or service interfaces as the canonical form. There is a lot of work being done in various industry-specific bodies to define standardized canonical forms for entities that are exchanged between corporations.

The benefits of a canonical form are as follows:

  • Transformations are only necessary to and from canonical form, reducing the number of different transformations required to be created
  • Decouples format of data from services, allowing a service to be replaced by one providing the same function but a different format of data

This is illustrated graphically by a system where two different clients make requests for one of the four services, all providing the same function but different implementations. Without the canonical form, we would need a transformation of data between the client format and the server format inbound and again outbound. For four services, this yields eight transformations, and for two clients, this doubles to sixteen transformations.

Using the canonical format gives us two transformations for each client, inbound and outbound to the canonical form. With two clients, this gives us four transformations. To this, we add the server transformations to and from the canonical form, of which there are two per server, giving us eight transformations. This gives us a total of twelve transformations that must be coded up rather than sixteen if we were using native-to-native transformation.

Physical versus logical interfaces

The benefits of the canonical form are most clearly seen when we deploy a new client. Without the canonical form, we would need to develop eight transformations to allow the client to work with the four different possible service implementations. With the canonical form, we only need two transformations, to and from the canonical form.

Let's look at how we implement the canonical form in Oracle Service Bus.

Mapping service interfaces

In order to take advantage of the canonical form in our service interfaces, we must have an abstract service interface that provides the functionality we need without being specific to any particular service implementation. Once we have this, we can then use it as the canonical service form.

We set up the initial project in the same way we did in the previous section on virtualizing service endpoints. The proxy should provide the canonical interface, while the business service provides the native service interface. Because the proxy and business services are not the same interface, we need to do some more work in the route configuration.

We need to map the canonical form of the address list interface onto the native service form of the interface. In the example, we are mapping our canonical interface to the interface provided by a web-based address solution from the Harte-Hanks Global Address (http://www.qudox.com). To do this, we create a new Service Bus project and add the Harte-Hanks WSDL (http://webservices.globaladdress.net/globaladdress.asmx?WSDL). We use this to define the business service. We also add the canonical interface WSDL that we have defined and create a new proxy with this interface. We then need to map the proxy service onto the Harte-Hanks service by editing the message flow associated with the proxy, as we did in the previous section.

Our mapping needs to do two things as follows:

  • Map the method name on the interface to the correct method in the business service
  • Map the parameters in the canonical request onto the parameters needed in the business service request

For each method on the canonical interface, we must map it onto a method in the physical interface. We do this by selecting the appropriate method from the business service operation drop-down box. We need to do this because the methods provided in the external service do not match the method names in our canonical service. In the following example, we have mapped onto the SearchAddress method.

canonical formimplementing, in OSBMapping service interfaces

Having selected an operation, we now need to transform the input data from the format provided by the canonical interface into the format required by the external service. We need to map the request and response messages if it is a two-way method or just the request message for one-way method. The actual mapping may be done either by XQuery or XSLT. In our example, we will use the XSLT transform.

To perform the transformation, we add a Messaging Processing action to our message flow, which in this case is a Replace operation. The variable body always holds the message in the Service Bus flow. This receives the message through the proxy interface and is also used to deliver the message to the business service interface. This behavior differs from BPEL and most programming languages, where we typically have separate variables for the input and output messages. We need to transform this message from the proxy input canonical format to the business service native output format.

Be aware that there are really two flows associated with the proxy service. The request flow is used to receive the inbound message and perform any processing before invoking the target business service. The response flow takes the response from the business service and performs any necessary processing before replying to the invoker of the proxy service.

canonical formimplementing, in OSBMapping service interfaces

On selecting replace, we can fill in the details in the Request Actions dialog. The message is held in the body variable, and so we can fill this (body) in as the target variable name. We then need to select which part of the body we want to replace.

canonical formimplementing, in OSBMapping service interfaces

Clicking on the XPath link brings up the XPath Expression Editor, where we can enter the portion of the target variable that we wish to replace. In this case, we wish to replace all the elements so we enter ./*, which selects the top level element and all elements beneath it. Clicking on the Save button causes the expression to be saved in the Replace Action dialog.

canonical formimplementing, in OSBMapping service interfaces

Having identified the portion of the message we wish to replace (all of it) , we now need to specify what we will replace it with. In this case, we wish to transform the whole input message, so we click on the Expression link and select the XSLT Resources tab. Clicking on the Browse button enables us to choose a previously registered XSLT transformation file. After selecting the file, we need to identify the input to the transformation. In this case, the input message is in the body variable, and so we select all the elements in the body by using the expression $body/*. We then save our transformation expression.

Having provided the source data, the target, and the transformation, we can then save and repeat the whole process for the response message (in this case, converting from native to canonical form).

canonical formimplementing, in OSBMapping service interfaces

We can use JDeveloper to build an XSLT transform and then upload it into the Service Bus. A future release will add support for XQuery in JDeveloper, similar to that provided in Oracle Workshop for WebLogic. XSLT is an XML language that describes how to transform one XML document into another. Fortunately, most XSLT can be created using the graphical mapping tool in JDeveloper, and so SOA Suite developers don't have to be experts in XSLT, although it is very useful to know how it works. Note that in our transform, we may need to enhance the message with additional information, for example, all the Global Address methods require a username and password to be provided to allow accounting of the requests to take place. This information has no place in the canonical request format, but must be added in the transform. A sample transform that does just this is shown in the following screenshot:

canonical formimplementing, in OSBMapping service interfaces

Note that we use XPath string functions to set the username and password fields. It would be better to set these from the properties or an external file, as we would usually want to use them in a number of calls to the physical service. XPath functions are capable of allowing access to composite properties. We actually only need to set five fields in the request, namely, a country, postcode, username, password, and the maximum number of results to return. All the other fields are not necessary for the service we are using and so are hidden from end users because they do not appear in the canonical form of the service.

Applying canonical form in the Service Bus

When we think about the canonical form and routing, we have several different operations that may need to be performed.

  • Conversion to/from the native business service form from/to the canonical proxy form
  • Conversion to/from the native client form from/to the canonical proxy form
  • Routing between multiple native services, each potentially with its own message format

The following diagram represents these different potential interactions as distinct proxy implementations in the service. To reduce coupling and make maintenance easier, each native service has a corresponding canonical proxy service. This isolates the rest of the system from the actual native formats. This is shown below in the Local-Harte-Hanks-Proxy and Local-LocalAddress-Proxy services that transform the native service to/from the canonical form. This approach allows us to change the native address lookup implementations without impacting anything other than the Local-*-Proxy service.

The Canonical-Address-Proxy has the job of hiding the fact that the address lookup service is actually provided by a number of different service providers, each with their own message formats. By providing this service, we can easily add additional address providers without impacting the clients of the address lookup service.

Applying canonical form in the Service Bus

In addition to the services shown in the diagram, we may have clients that are not written to use the canonical address lookup. In this case, we need to provide a proxy that transforms the native input request to/from the canonical form. This allows us to be isolated from the requirements of the clients of the service. If a client requires its own interface to the address lookup service, we can easily provide that through a proxy without the need to impact the rest of the system, again reducing coupling.

An important optimization

The previous approach provides a very robust way of isolating service consumers and service requestors from the native formats and locations of their partners. However, there must be a concern about the overhead of all these additional proxy services and also about the possibility of a client accessing a native service directly. To avoid these problems, the Service Bus provides a local transport mechanism that can be specified as part of the binding of the proxy service. The local transport provides two things for us:

  • It makes services only consumable by other services in the Service Bus, they cannot be accessed externally
  • It provides a highly optimized messaging transport between proxy services, providing in-memory speed to avoid unnecessary overhead in service hand-offs between proxy services

These optimizations mean that it is very efficient to use the canonical form, and so the Service Bus not only allows us great flexibility in how we decouple our services from each other, but it also provides a very efficient mechanism for us to implement that decoupling. Note, though, that there is a cost involved in performing XSLT or XQuery transformations. This cost may be viewed as the price of loose coupling.

Mapping service interfaces

In order to take advantage of the canonical form in our service interfaces, we must have an abstract service interface that provides the functionality we need without being specific to any particular service implementation. Once we have this, we can then use it as the canonical service form.

We set up the initial project in the same way we did in the previous section on virtualizing service endpoints. The proxy should provide the canonical interface, while the business service provides the native service interface. Because the proxy and business services are not the same interface, we need to do some more work in the route configuration.

We need to map the canonical form of the address list interface onto the native service form of the interface. In the example, we are mapping our canonical interface to the interface provided by a web-based address solution from the Harte-Hanks Global Address (http://www.qudox.com). To do this, we create a new Service Bus project and add the Harte-Hanks WSDL (http://webservices.globaladdress.net/globaladdress.asmx?WSDL). We use this to define the business service. We also add the canonical interface WSDL that we have defined and create a new proxy with this interface. We then need to map the proxy service onto the Harte-Hanks service by editing the message flow associated with the proxy, as we did in the previous section.

Our mapping needs to do two things as follows:

  • Map the method name on the interface to the correct method in the business service
  • Map the parameters in the canonical request onto the parameters needed in the business service request

For each method on the canonical interface, we must map it onto a method in the physical interface. We do this by selecting the appropriate method from the business service operation drop-down box. We need to do this because the methods provided in the external service do not match the method names in our canonical service. In the following example, we have mapped onto the SearchAddress method.

canonical formimplementing, in OSBMapping service interfaces

Having selected an operation, we now need to transform the input data from the format provided by the canonical interface into the format required by the external service. We need to map the request and response messages if it is a two-way method or just the request message for one-way method. The actual mapping may be done either by XQuery or XSLT. In our example, we will use the XSLT transform.

To perform the transformation, we add a Messaging Processing action to our message flow, which in this case is a Replace operation. The variable body always holds the message in the Service Bus flow. This receives the message through the proxy interface and is also used to deliver the message to the business service interface. This behavior differs from BPEL and most programming languages, where we typically have separate variables for the input and output messages. We need to transform this message from the proxy input canonical format to the business service native output format.

Be aware that there are really two flows associated with the proxy service. The request flow is used to receive the inbound message and perform any processing before invoking the target business service. The response flow takes the response from the business service and performs any necessary processing before replying to the invoker of the proxy service.

canonical formimplementing, in OSBMapping service interfaces

On selecting replace, we can fill in the details in the Request Actions dialog. The message is held in the body variable, and so we can fill this (body) in as the target variable name. We then need to select which part of the body we want to replace.

canonical formimplementing, in OSBMapping service interfaces

Clicking on the XPath link brings up the XPath Expression Editor, where we can enter the portion of the target variable that we wish to replace. In this case, we wish to replace all the elements so we enter ./*, which selects the top level element and all elements beneath it. Clicking on the Save button causes the expression to be saved in the Replace Action dialog.

canonical formimplementing, in OSBMapping service interfaces

Having identified the portion of the message we wish to replace (all of it) , we now need to specify what we will replace it with. In this case, we wish to transform the whole input message, so we click on the Expression link and select the XSLT Resources tab. Clicking on the Browse button enables us to choose a previously registered XSLT transformation file. After selecting the file, we need to identify the input to the transformation. In this case, the input message is in the body variable, and so we select all the elements in the body by using the expression $body/*. We then save our transformation expression.

Having provided the source data, the target, and the transformation, we can then save and repeat the whole process for the response message (in this case, converting from native to canonical form).

canonical formimplementing, in OSBMapping service interfaces

We can use JDeveloper to build an XSLT transform and then upload it into the Service Bus. A future release will add support for XQuery in JDeveloper, similar to that provided in Oracle Workshop for WebLogic. XSLT is an XML language that describes how to transform one XML document into another. Fortunately, most XSLT can be created using the graphical mapping tool in JDeveloper, and so SOA Suite developers don't have to be experts in XSLT, although it is very useful to know how it works. Note that in our transform, we may need to enhance the message with additional information, for example, all the Global Address methods require a username and password to be provided to allow accounting of the requests to take place. This information has no place in the canonical request format, but must be added in the transform. A sample transform that does just this is shown in the following screenshot:

canonical formimplementing, in OSBMapping service interfaces

Note that we use XPath string functions to set the username and password fields. It would be better to set these from the properties or an external file, as we would usually want to use them in a number of calls to the physical service. XPath functions are capable of allowing access to composite properties. We actually only need to set five fields in the request, namely, a country, postcode, username, password, and the maximum number of results to return. All the other fields are not necessary for the service we are using and so are hidden from end users because they do not appear in the canonical form of the service.

Applying canonical form in the Service Bus

When we think about the canonical form and routing, we have several different operations that may need to be performed.

  • Conversion to/from the native business service form from/to the canonical proxy form
  • Conversion to/from the native client form from/to the canonical proxy form
  • Routing between multiple native services, each potentially with its own message format

The following diagram represents these different potential interactions as distinct proxy implementations in the service. To reduce coupling and make maintenance easier, each native service has a corresponding canonical proxy service. This isolates the rest of the system from the actual native formats. This is shown below in the Local-Harte-Hanks-Proxy and Local-LocalAddress-Proxy services that transform the native service to/from the canonical form. This approach allows us to change the native address lookup implementations without impacting anything other than the Local-*-Proxy service.

The Canonical-Address-Proxy has the job of hiding the fact that the address lookup service is actually provided by a number of different service providers, each with their own message formats. By providing this service, we can easily add additional address providers without impacting the clients of the address lookup service.

Applying canonical form in the Service Bus

In addition to the services shown in the diagram, we may have clients that are not written to use the canonical address lookup. In this case, we need to provide a proxy that transforms the native input request to/from the canonical form. This allows us to be isolated from the requirements of the clients of the service. If a client requires its own interface to the address lookup service, we can easily provide that through a proxy without the need to impact the rest of the system, again reducing coupling.

An important optimization

The previous approach provides a very robust way of isolating service consumers and service requestors from the native formats and locations of their partners. However, there must be a concern about the overhead of all these additional proxy services and also about the possibility of a client accessing a native service directly. To avoid these problems, the Service Bus provides a local transport mechanism that can be specified as part of the binding of the proxy service. The local transport provides two things for us:

  • It makes services consumable only by other services in the Service Bus; they cannot be accessed externally
  • It provides a highly optimized, in-memory messaging transport between proxy services, avoiding unnecessary overhead in service hand-offs

These optimizations make the canonical form cheap to use: the Service Bus not only gives us great flexibility in how we decouple our services from each other, but also provides an efficient mechanism for implementing that decoupling. Note, though, that there is still a cost involved in performing XSLT or XQuery transformations; this may be viewed as the price of loose coupling.

Using the Mediator for virtualization

As discussed earlier, we can also use the Mediator for virtualization within an SCA Assembly. The Mediator should be used to ensure that the interfaces into and out of SCA Assemblies use the canonical form. We can also use XSL transforms in the Mediator, in a similar fashion to the Service Bus, to provide mappings between one data format and another.

To do this, we would select the canonical format WSDL as the input to our composite and wire this to the Mediator in the same way as we did in Chapter 2, Writing your First Composite. We can then double-click on the Mediator to open it and add a transformation to convert the messages to and from the canonical form.

Using the Mediator for virtualization

We may need to expand the routing rule to show its details. For the input message, we have the option of filtering the message, meaning that we can choose what to call based on the contents of the input message. If no filter expression is provided, then all messages will be delivered to a single target.

The Validate Semantic field allows us to check that the input message is of the correct format. This requires a schematron file and is covered in Chapter 13, Building Validation into Services.

The Assign Values field allows us to set values using either the input message or message properties. This is particularly useful when using adapters, as some of the data required may be provided in adapter headers such as the input filename. This may also be used to set adapter header properties, if invoking an adapter.

The Transform Using field allows us to select an XSL stylesheet to transform the input (in this case, the canonical format) to the internal format. Clicking the transformation icon brings up the Request Transformation Map dialog:

Using the Mediator for virtualization

Here we can either select an existing XSL or create a new one based on the input and output formats.

Using the Mediator for virtualization

The XSL editor provides a graphical drag-and-drop mechanism for creating XSL stylesheets. Alternatively, it is possible to select the Source tab and input XSL commands directly. Note that many XSL commands are not supported by the graphical editor, and so it is best to do as much as possible in the graphical editor before switching to the source mode.

Summary

In this chapter, we have explored how we can use the Oracle Service Bus and the Mediator in the SOA Suite to reduce the degree of coupling. By reducing coupling, or the dependencies between services, our architectures become more resilient to change. In particular, we looked at how to use the Service Bus to reduce coupling by abstracting endpoint interface locations and formats. Crucial to this is the concept of canonical or common data formats that reduce the amount of data transformation that is required, particularly in bringing new services into our architecture. Finally, we considered how this abstraction can go as far as hiding the fact that we are using multiple services concurrently by allowing us to make routing decisions at runtime.

All these features are there to help us build service-oriented architectures that are resilient to change and can easily absorb new functionality and services.

Chapter 5. Using BPEL to Build Composite Services and Business Processes

In the previous two chapters, we saw how we can service-enable functionality embedded within existing systems. The next challenge is how to assemble these services to build "composite" applications or business processes. This is the role of the Web Services Business Process Execution Language (WS-BPEL) or Business Process Execution Language (BPEL), as it's commonly referred to.

BPEL is a rich XML-based language for describing the assembly of a set of existing web services into either a composite service or a business process. Once deployed, a BPEL process itself is actually invoked as a web service.

Thus, anything that can call a web service can also call a BPEL process, including, of course, other BPEL processes. This allows you to take a nested approach to writing BPEL processes, giving you a lot of flexibility.

In this chapter, we first introduce the basic structure of a BPEL process, its key constructs, and the difference between a synchronous and asynchronous service.

We then demonstrate, through the building and refinement of two example BPEL processes (one synchronous, the other asynchronous), how to use BPEL to invoke external web services (including other BPEL processes) and to build composite services. Along the way, we also take the opportunity to introduce many of the key BPEL activities in more detail.

Basic structure of a BPEL process

The following image shows the core structure of a BPEL process, and how it interacts with components external to it: either web services that the BPEL process invokes (Service A and Service B in this case) or external clients that invoke the BPEL process as a web service.

From this, we can see that the BPEL process divides into two distinct parts: the partner links (with associated WSDL files, which describe the interactions between the BPEL process and the outside world) and the core BPEL Process itself, which describes the process to be executed at runtime.

Basic structure of a BPEL process

Core BPEL process

The core BPEL process consists of a number of steps, or activities, as they are called in BPEL.

These consist of simple activities, including:

  • Assign: Used to manipulate variables.
  • Transform: A specialized assign activity that uses XSLT to map data from a source format to a target format.
  • Wait: Used to pause the process for a period of time.
  • Empty: Does nothing. It is used in branches of your process where syntactically an activity is required, but you don't want to perform an activity.

There are also structured activities that control the flow through the process; these include:

  • While: For implementing loops
  • Switch: Construct for implementing conditional branches
  • Flow: For implementing branches that execute in parallel
  • FlowN: For implementing a dynamic number of parallel branches

Finally, there are messaging activities (for example, Receive, Invoke, Reply, and Pick).

The activities within a BPEL process can be subdivided into logical groups of activities, using the Scope activity. Along with providing a useful way to structure and organize your process, it also lets you define attributes such as variables, fault handlers, and compensation handlers that just apply to the scope.

Variables

Each BPEL process also defines variables, which are used to hold the state of the process as well as messages that are sent and received by the process. They can be defined at the process level, in which case, they are considered global and visible to all parts of the process, or can be declared within a scope, in which case they are only visible to activities contained within that scope (and scopes nested within the scope to which the variable belongs).

Variables can be one of the following types:

  • Simple type: Can hold any simple data type defined by XML Schema (for example, string, integer, Boolean, and float)
  • WSDL message type: Used to hold the content of a WSDL message sent to or received from partners
  • Element: Can hold either a complex or simple XML Schema element defined in either a WSDL file or a separate XML Schema

Variables are manipulated using the <assign> activity, which can be used to copy data from one variable to another, as well as create new data using XPath expressions or XSLT.

For variables that are WSDL messages or complex elements, we can work with them at the subcomponent level by specifying the part of the variable we would like to work with using an XPath expression.
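
As a rough sketch, the three kinds of variable declaration look like this in the BPEL source; the names, message types, and elements here are made up for illustration:

<variables>
  <!-- Simple type: holds a single XML Schema value -->
  <variable name="tradeFulfilled" type="xsd:boolean"/>
  <!-- WSDL message type: holds a message sent to or received from a partner -->
  <variable name="inputVariable" messageType="client:StockQuoteRequestMessage"/>
  <!-- Element: holds an XML Schema element defined in a WSDL or separate schema -->
  <variable name="quote" element="ns1:getQuoteResponse"/>
</variables>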

Partner links

All interaction between a process and other parties (or partners) is via web services, as defined by their corresponding WSDL files. Even though each service is fully described by its WSDL, the WSDL does not define the relationship between the process and the partner, that is, which party is the consumer of the service and which is the provider. At first glance, the relationship may seem implicit. However, this is not always the case, so BPEL uses partner links to explicitly define this relationship.

Partner links are defined using the <partnerLinkType>, which is an extension to WSDL (defined by the BPEL standard). Whenever you refer to a web service whose WSDL doesn't contain a <partnerLinkType>, JDeveloper will automatically ask you whether you want it to create one for you. Assuming your answer is yes, it will create this as a separate WSDL document, which then imports the original WSDL.
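
For reference, a generated partner link type looks something like the following BPEL 1.1 style fragment; the names used here are illustrative:

<plnk:partnerLinkType name="StockQuote_PL"
    xmlns:plnk="http://schemas.xmlsoap.org/ws/2003/05/partner-link/">
  <!-- One role per port type; for a synchronous service there is a single
       role representing the provider of the service -->
  <plnk:role name="StockQuoteProvider">
    <plnk:portType name="tns:StockQuote"/>
  </plnk:role>
</plnk:partnerLinkType>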

Messaging activities

BPEL defines three messaging activities: <receive>, <reply>, and <invoke>. How you use these depends on whether the message interaction is synchronous or asynchronous, and whether the BPEL process is the consumer or the provider of the service.

Synchronous messaging

With synchronous messaging, the caller blocks until it has received a reply (or times out); that is, the BPEL process waits for a reply before moving on to the next activity.

As we can see in the following diagram, Process A uses the <invoke> activity to call a synchronous web service (Process B in this case); once it has sent the initial request, it blocks and waits for a corresponding reply from Process B.

Synchronous messaging

Process B uses the <receive> activity to receive the request. Once it has processed the request, it uses the <reply> activity to send a response back to Process A.

Theoretically, Process B could take as long as it wants before sending a reply, but typically Process A will only wait for a short time (for example, 30 seconds) before it times out the <invoke> operation under the assumption that something has gone wrong. Thus, if Process B is going to take a substantial period of time before replying, then you should model the exchange as an Asynchronous Send-Receive (refer to the following section).
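
In BPEL source, the synchronous exchange just described corresponds roughly to the following sketch; partner link, operation, and variable names are illustrative, and portType attributes are omitted for brevity:

<!-- Process A: invoke the service and block until the reply (or a timeout) -->
<invoke name="GetQuote" partnerLink="StockQuote" operation="process"
        inputVariable="QuoteInput" outputVariable="QuoteOutput"/>

<!-- Process B: receive the request, do some work, then reply on the same operation -->
<receive name="receiveInput" partnerLink="client" operation="process"
         variable="inputVariable" createInstance="yes"/>
<!-- ... processing ... -->
<reply name="replyOutput" partnerLink="client" operation="process"
       variable="outputVariable"/>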

Asynchronous messaging

With asynchronous messaging, the key difference is that once the caller has sent the request, the send operation will return immediately, and the BPEL process may then continue with additional activities until it is ready to receive the reply. At this point, the process will block until it receives the reply (which may already be there).

If we look at the following screenshot, you will notice that, just like the synchronous request, Process A uses the <invoke> activity to call an asynchronous web service. However, the difference is that it doesn't block waiting for a response; rather, it continues processing until it is ready to process the response. It then receives this using the <receive> activity.

Asynchronous messaging

Conversely, Process B uses a <receive> activity to receive the initial request and an <invoke> activity to send back the corresponding response.

While at a logical level, there is little difference between synchronous and asynchronous messaging (especially if there are no activities between the <invoke> and <receive> activity in Process A), at a technical level there is a key difference.

This is because with asynchronous messaging, we have two <invoke>/<receive> pairs, each corresponding to a separate web service operation: one for the request and the other for the reply.

From a decision perspective, a key driver as to which to choose is the length of time it takes for Process B to service the request, as asynchronous messaging supports far longer processing times. In general, once the time it takes for Process B to return a response goes above 30 seconds, you should consider switching to asynchronous messaging.
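
Sketched in BPEL source from Process A's side (again with illustrative names, and using the default process/processResponse operation names that JDeveloper generates), the asynchronous exchange looks like this:

<!-- Send the request and carry on processing; no output variable on the invoke -->
<invoke  name="callStockOrder" partnerLink="StockOrder"
         operation="process" inputVariable="OrderRequest"/>
<!-- ... other activities ... -->
<!-- Block here until the callback arrives (it may already be waiting) -->
<receive name="receiveResult" partnerLink="StockOrder"
         operation="processResponse" variable="OrderResponse"/>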

Note

With potentially many instances of Process A and Process B running at the same time, BPEL needs to ensure that each reply is matched (or correlated) to the appropriate request. By default, BPEL uses WS-Addressing to achieve this. We look at this in more detail in Chapter 16, Message Interaction Patterns.

One way messaging

A variation of asynchronous messaging is one way messaging (also known as fire and forget). This involves a single message being sent from the calling process, with no response being returned.

If we look at the following screenshot, you will notice that just like the asynchronous request, Process A uses the <invoke> activity to send a message to Process B.

Once Process A has sent the message, it continues processing until it completes, that is, it never stops to wait for a response from Process B. Similarly, Process B, upon receipt of the message, continues processing until it has completed and never sends any response back to Process A.

One way messaging

A simple composite service

Despite the fact that BPEL is intended primarily for writing long running processes, it also provides an excellent way to build a composite service, that is, a service that is assembled from other services.

Let's take a simple example: say I have a service that gives me the stock quote for a specified company, and that I also have a service that gives me the exchange rate between two currencies. I can use BPEL to combine these two services and provide a service that gives the stock quote for a company in the currency of my choice.

So let's create our stock quote service. We will create a simple synchronous BPEL process that takes two parameters: the stock ticker and the required currency. This will then call two external services.

Creating our StockQuote service

Before we begin, we will create an application (named Chapter05), which we will use for all our samples in this chapter. To do this, follow the same process we used to create our first application in Chapter 2, Writing your First Composite. When prompted to create a project, create an Empty Composite named StockService.

Next, drag a BPEL process from the SOA Component Palette onto our StockService composite. This will launch the Create BPEL Process wizard; specify a name of StockQuote and select a Synchronous BPEL Process. However, at this stage, do not click OK.

Creating our StockQuote service

You may remember that when we created our Echo service back in Chapter 2, Writing your First Composite, JDeveloper automatically created a simple WSDL file for our service, with a single input and output field. For our StockQuote service, we need to pass in multiple fields (that is, Stock Ticker and Currency). So, to define the input and output messages for our BPEL process, we are going to make use of a predefined schema, StockService.xsd, as shown in the following code snippet (for brevity, only the parts which are relevant to this example are shown; the complete schema is provided in the downloadable samples file for the book).

<?xml version="1.0" encoding="windows-1252"?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            xmlns="http://xmlns.packtpub.com/StockService"
            targetNamespace="http://xmlns.packtpub.com/StockService" 
            elementFormDefault="qualified">

	<xsd:element name="getQuote"         type=" tGetQuote"/>
	<xsd:element name="getQuoteResponse" type=" tGetQuoteResponse"/>

	<xsd:complexType name="tGetQuote">
		<xsd:sequence>
			<xsd:element name="stockSymbol" type="xsd:string"/>
			<xsd:element name="currency" type="xsd:string"/>
		</xsd:sequence>
	</xsd:complexType>

	<xsd:complexType name="tGetQuoteResponse">
		<xsd:sequence>
			<xsd:element name="stockSymbol" type="xsd:string"/>
			<xsd:element name="currency" type="xsd:string"/>
			<xsd:element name="amount" type="xsd:decimal"/>
		</xsd:sequence>
	</xsd:complexType>

   … 
   
</xsd:schema>

Importing StockService schema

To override the default input schema element generated by JDeveloper, click on Browse Input Elements … (the magnifying glass circled in the previous screenshot). This will bring up the Type Chooser, as shown in the following screenshot, which allows you to browse all schemas imported by the composite and select an element from them.

Importing StockService schema

In our case, we have yet to import any schemas, so click on Import Schema File … (circled in the previous screenshot). This will launch the Import Schema File window. Click on the magnifying glass to launch the SOA Resource Browser (in File System mode), which will allow us to search our filesystem for an appropriate schema.

Find the StockService.xsd located in the samples folder for Chapter 5 and select this. Ensure that the option to Copy to Project is selected and click OK; JDeveloper will then bring up the Localize Files window. Keep the default options and click OK. This will cause JDeveloper to create a local copy of our XML Schema and any dependent files (of which there are none in this example) within our project.

JDeveloper will now open the schema browser dialog, containing the imported StockService schema. Browse this and select the getQuote element, as shown in the following screenshot:

Importing StockService schema

Repeat this step for the output schema element, but select the getQuoteResponse element. Click OK and this will create our StockQuote process within our composite, as shown in the following screenshot:

Importing StockService schema

Within the composite, double-click the StockQuote process to open it in the BPEL editor. You will see that, by default, JDeveloper has created a skeleton BPEL process, which contains an initial <receive> activity to receive the stock quote request, followed by a <reply> activity to send back the result (as we discussed in the earlier section, Synchronous messaging). In addition, it will have created two variables: inputVariable, which contains the initial stock quote request, and outputVariable, in which we will place the result to return to the requestor.

Note

If you look in the Projects section of the Application Navigator, you will see that it contains the file StockQuote.wsdl. This contains the WSDL description (including partner link extensions) for our process. If you examine this, you will see that we have a single operation, process, which is used to call the BPEL process.

Calling the external web services

The next step is to call our external web services. For our stock quote service, we are going to use Xignite's quotes web service, which delivers delayed equity price quotes from all U.S. stock exchanges (NYSE, NASDAQ, AMEX, NASDAQ OTC Bulletin Board, and Pink Sheets).

Note

Before you can use this service, you will need to register with Xignite. To do this, or for more information on this and other services provided by Xignite, go to www.xignite.com.

To call a web service in BPEL, we first need to create a partner link (as discussed at the start of this chapter). So from the Component Palette, expand the BPEL Services section and drag a Partner Link (Web Service / Adapter) component into the Partner Link swim lane in your BPEL process. This will pop up the following screen:

Calling the external web services

First, enter a name for the partner link, for example, XigniteQuotes. Next, we need to specify the WSDL file for the partner link. JDeveloper provides the following ways to do this:

  • SOA Resource Lookup: Allows us to browse the filesystem for WSDL files or any connected application server for deployed services
  • SOA Service Explorer: Allows us to browse other services that are defined within the composite (for example, other BPEL processes, Mediator, or external services)
  • Define Service: This enables us to define adapter services (refer to Chapter 3, Service-enabling Existing Systems) directly within the context of a BPEL process
  • WSDL URL: Directly enter the URL for the WSDL file into the corresponding field

For our reference, we have a local copy of the WSDL for Xignite's quotes service, called XigniteQuotes.wsdl, which is included with the samples for Chapter 5. Click on the SOA Resource Lookup … icon (circled in the preceding screenshot), then browse to and select this file (select Yes if prompted to create a local copy of the file).

JDeveloper will parse the WSDL, and assuming it is successful, it will pop up a window saying that there are no partner link types defined in the current WSDL and ask if you want to create partner links for the file. Click Yes. JDeveloper will then create one Partner Link Type for each port type defined in the WSDL. In cases where we have multiple partner link types, we will need to specify which one to use within our process. To do this, click on the drop-down list next to Partner Link Type and select the appropriate one. In our case, we have selected XigniteQuotesSoap_PL, as shown in the following screenshot:

Calling the external web services

Finally, we need to specify the Partner Role and My Role. When invoking a synchronous service, there will only be a single role defined in the WSDL, which represents the provider of the service. So specify this for the Partner Role and leave My Role as ----- Not Specified -----.

Note

Best practice would dictate that rather than calling the stock quote service directly from within BPEL, we would invoke it via the Oracle Service Bus. This is an area we look at more closely in Chapter 10, oBay Introduction when we define our blueprint for SOA.

If you look at the composite view, you will see that XigniteQuotes is defined as an External Reference and is wired to our BPEL process.

Calling the web service

Once we have defined a partner link for the web service, the next step is to call it. As this is a synchronous service, we will need to use an <invoke> activity to call it, as we described earlier in this chapter.

On the Component Palette, ensure that the BPEL Activities and Components section is expanded. Then from it, drag an Invoke activity on to your BPEL process.

Next, place your mouse over the arrow next to the Invoke activity. Click and hold your mouse button, drag the arrow over your partner link, and then release, as shown in the following screenshot:

Calling the web service

This will then pop up the Edit Invoke activity window, as shown in the following screenshot:

Calling the web service

We need to specify a number of values to configure the Invoke activity, namely:

  • Name: This is the name we want to assign to the Invoke activity, and can be any value. So just assign a meaningful value such as GetQuote.
  • Partner Link: This is the Partner Link whose service we want to invoke; it should already be set to use XigniteQuotes, as we have already linked this activity to that Partner Link. An alternate approach would be to click on the corresponding spotlight icon, which would allow us to select from any Partner Link already defined to the process.
  • Operation: Once we've specified a Partner Link, we need to specify which of its operations we wish to invoke. This presents us with a drop-down list, listing all the operations that are available, for our purpose, select GetSingleQuote.
  • Input: Here we must specify the variable that contains the data to be passed to the web service that's being invoked. It is important that the variable is of type Message, and that it is of the same message type expected by the Operation (that is, as defined in the WSDL file for the web service).

    The simplest way to ensure this is by getting JDeveloper to create the variable for you. To do this, click on the green plus sign to the right of the input variable field. This will bring up the Create Variable window, as shown in the following screenshot. You will notice that JDeveloper creates a default name for the variable (based on the name you gave the invoke operation and the operation that you are calling). You can override this with something more meaningful (for example, QuoteInput).

  • Output: Finally, we must specify the variable into which the value returned by the web service will be placed. As with the input variable, this should be of the type Message and corresponds to the output message defined in the WSDL file for the selected operation. Again, the simplest way to ensure this is to get JDeveloper to create the variable for you.
    Calling the web service

Once you've specified values for all these fields, as illustrated in the preceding screenshot, click OK.

Assigning values to variables

In our previous step, we created the variable QuoteInput, which we pass to our invocation of GetSingleQuote. However, we have yet to initialize the variable or assign any value to it.

To do this, BPEL provides the <assign> activity, which is used to update the values of variables with new data. The <assign> activity typically consists of one or more copy operations. Each copy consists of a target (the variable that you wish to assign a value to) and a source (either another variable or an XPath expression).

For our purposes, we want to assign the stock symbol passed into our BPEL process to our QuoteInput variable.

To do this, drag an Assign activity from the Component Palette on to your BPEL process at the point just before our Invoke activity. Then double-click on it to open up the Assign configuration window. Click on the green plus sign and select Copy Operation….

This will present us with the Create Copy Operation window, as shown in the following screenshot:

Assigning values to variables

On the left-hand side, we specify the From variable (that is, the source). Here we want to specify the stock symbol passed in as part of the input variable to the BPEL process. So expand the inputVariable tree, and select /ns2:getQuote/ns2:stockSymbol.

For the target, expand QuoteInput and select /ns1:GetSingleQuote/ns1:Symbol.

You will notice that for both the source and target, JDeveloper has created the equivalent XPath expression (circled in the preceding screenshot).
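
Behind the scenes, the copy operation generated by JDeveloper corresponds to BPEL source along these lines; this is a sketch only, as the part names and exact query syntax depend on the WSDLs and the BPEL version in use:

<assign name="AssignQuoteInput">
  <copy>
    <!-- Stock symbol passed into the process -->
    <from variable="inputVariable" part="payload"
          query="/ns2:getQuote/ns2:stockSymbol"/>
    <!-- Symbol expected by the GetSingleQuote operation -->
    <to variable="QuoteInput" part="parameters"
        query="/ns1:GetSingleQuote/ns1:Symbol"/>
  </copy>
</assign>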

Note

The source and target can either be a simple type (for example, xsd:int, xsd:date, or xsd:string), as in the preceding example, or a complex type (for example, ns2:getQuote). Either way, make sure the source and target are of the same type, or at least compatible types.

Testing the process

At this stage, even though the process isn't complete, we can still save, deploy, and run our composite. Do this in the same way as previously covered in Chapter 2, Writing your First Composite. When you run the composite from the console, you will notice that it doesn't return anything (as we haven't specified the output yet). But if you look at the audit trail, you should see the GetSingleQuote operation being invoked successfully. Assuming this is the case, we know we have implemented that part of the process correctly.

Calling the exchange rate web service

The next step of the process is to determine the exchange rate between the requested currency and the US dollar (the currency used by the GetSingleQuote operation). For this, we are going to use the currency convertor service provided by webserviceX.NET.

For more information on this and other services provided by webserviceX.NET, go to www.webservicex.net.

This service provides a single operation ConversionRate, which gets the conversion rate from one currency to another. The WSDL file for this service can be found at the following URL:

http://www.webservicex.net/CurrencyConvertor.asmx?wsdl

For convenience, we have included a local copy of the WSDL for webserviceX.NET's currency convertor service, called CurrencyConvertor.wsdl. It's included with the samples of Chapter 5.

To invoke the ConversionRate operation, we will follow the same basic steps that we did in the previous section to invoke the GetSingleQuote operation. For brevity, we won't repeat them here, but will allow the reader to do this.

Note

To follow the examples, name the input variable for the exchange rate web service ExchangeRateInput and the output variable ExchangeRateOutput.

Assigning constant values to variables

The operation ConversionRate takes two input values as follows:

  • FromCurrency: This should be set to 'USD'
  • ToCurrency: This should be set to the currency field contained within the inputVariable for the BPEL process.

To set the FromCurrency, create another copy operation. However, for the From value, select Expression as the Type (circled in the following screenshot).

This will replace the variable browser with a free format textbox. Here you can specify any value, within quotes, that you wish to assign to your target variable. For our purposes, enter 'USD', as shown in the following screenshot:

Assigning constant values to variables

To set the value of ToCurrency, create another copy operation and copy in the value of the currency field contained within the inputVariable.
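
The resulting <assign> activity, sketched in BPEL source with illustrative part names, combines a literal expression copy and a variable-to-variable copy:

<assign name="AssignExchangeRateInput">
  <!-- FromCurrency is a fixed literal -->
  <copy>
    <from expression="'USD'"/>
    <to variable="ExchangeRateInput" part="parameters"
        query="/ns3:ConversionRate/ns3:FromCurrency"/>
  </copy>
  <!-- ToCurrency comes from the currency field of the process input -->
  <copy>
    <from variable="inputVariable" part="payload"
          query="/ns2:getQuote/ns2:currency"/>
    <to variable="ExchangeRateInput" part="parameters"
        query="/ns3:ConversionRate/ns3:ToCurrency"/>
  </copy>
</assign>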

At this stage again, save, deploy, and run the composite to validate that we are calling the exchange rate service correctly.

Using the expression builder

The final part of the process is to combine the exchange rate returned by one service with the stock price returned by the other, in order to determine the stock price in the requested currency and return that to the caller of the composite.

To do this, we will again use an <assign> activity. So drag another <assign> activity onto the process, just after our second invoke activity. Now in our previous use of the <assign> activity, we have just used it to copy a value from one variable to another.

Here, it is slightly different, in that we want to combine multiple values into a single value, and to do that, we will need to write the appropriate piece of XPath. Create a copy operation as before, but for the source type, select Expression from the drop-down list, as shown in the following screenshot:

Using the expression builder

Now, if you want, you can type the XPath expression manually into the Expression area, but it's far easier and less error prone to use the Expression Builder. To do this, click on the XPath Expression Builder icon (the calculator icon circled in the preceding screenshot); this will pop up the Expression Builder (shown below):

Using the expression builder

The Expression Builder provides a graphical tool for writing XPath expressions, which are executed as part of the copy operation. It consists of the following areas:

  • Expression: The top textbox contains the XPath expression that you are working on. You can either type data directly in here, or use the Expression Builder to insert XPath fragments to build up the XPath required.
  • BPEL variables: This part of the Expression Builder lets you browse the variables defined within your BPEL process. Once you've located the variable that you wish to use, click on the Insert Into Expression button, and this will insert the appropriate code fragment into the XPath expression.

    Note

    The code fragment is inserted at the point within the expression where the cursor is currently positioned.

  • Functions: This shows you all the different types of XPath functions that are available to build up your XPath expression. To make it easier to locate the required function, they are grouped into categories such as String Functions, Mathematical Functions, and so on.

    The drop-down list lets you select the category that you are interested in (for example, Mathematical Functions, as illustrated in the preceding screenshot), and then the window below that lists all the functions available to that group.

    To use a particular function, select the required function, and click Insert Into Expression. This will insert the appropriate XPath fragment into the XPath Expression (again at the point that the cursor is currently positioned).

  • Content Preview: This box displays a preview of the content that would be inserted into the XPath Expression if you clicked the Insert Into Expression button. For example, if you had currently selected a particular BPEL variable, it would show you the XPath to access that variable.
  • Description: If you've currently selected a function, this box provides a brief description of the function, as well as the expected usage and number of parameters.

So let's use this to build our XPath expression. The expression we want to build is a relatively simple one, namely, the stock price returned by the stock quote service multiplied by the exchange rate returned by the exchange rate service.

To build our XPath expression, carry out the following steps:

First, within the BPEL Variables area, in the variable QuoteOutput, locate the element ns1:GetSingleQuoteResult|ns1:Last, as shown in the following screenshot:

Using the expression builder

Then click Insert Into Expression to insert this into the XPath expression.

Next, within the Functions area, select the Mathematical Functions category, and select the multiply function (notice the description in the Description box, as shown in the following screenshot), and insert this into the XPath expression:

Using the expression builder

Finally, back in the BPEL Variables area, locate the element ConversionRateResult within the variable ExchangeRateOutput, and insert that into the XPath expression.

You should now have an XPath expression similar to the one illustrated below; once you are happy with it, click OK.

Using the expression builder
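
In plain text, the finished expression will be roughly equivalent to the following sketch; the part names and namespace prefixes will be whatever the Expression Builder inserted for your local copies of the two WSDLs:

bpws:getVariableData('QuoteOutput', 'parameters',
    '/ns1:GetSingleQuoteResponse/ns1:GetSingleQuoteResult/ns1:Last')
*
bpws:getVariableData('ExchangeRateOutput', 'parameters',
    '/ns3:ConversionRateResponse/ns3:ConversionRateResult')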

Finally make sure you specify the target part of the copy operation, which should be the amount element within the outputVariable.

In order to complete the <assign> activity, you will need to create two more copy operations to copy the Currency and StockSymbol specified in the inputVariable into the equivalent values in the outputVariable.

Once done, your BPEL process should be complete. So deploy and run the composite.

Asynchronous service

Following our StockQuote service, another service would be a stock order service, which would enable us to buy or sell a particular stock. For this service, a client would need to specify the stock, whether they wanted to buy or sell, the quantity, and the price.

It makes sense to make this an asynchronous service, as once the order has been placed, it may take seconds, minutes, hours, or even days for the order to be matched.

Now, I'm not aware of any trade services that are free to try (probably for a good reason!). However, there is no reason why we can't simulate one. To do this, we will write a simple asynchronous process.

Drag another BPEL process on to our StockService composite and give it the name StockOrder, but specify that it is an asynchronous BPEL process.

As with the StockQuote process, we also want to specify predefined elements for its input and output. The elements we are going to use are placeOrder for the input and placeOrderResponse for the output, the definitions for which are shown in the following code snippet:

<xsd:element name="placeOrder"         type="tPlaceOrder"/>
<xsd:element name="placeOrderResponse" type="tPlaceOrderResponse"/>

<xsd:complexType name="tPlaceOrder">
  <xsd:sequence>
    <xsd:element name="currency"    type="xsd:string"/>
    <xsd:element name="stockSymbol" type="xsd:string"/>
    <xsd:element name="buySell"     type="xsd:string"/>
    <xsd:element name="quantity"    type="xsd:integer"/>
    <xsd:element name="bidPrice"    type="xsd:decimal"/>
  </xsd:sequence>
</xsd:complexType>

<xsd:complexType name="tPlaceOrderResponse">
  <xsd:sequence>
    <xsd:element name="currency"    type="xsd:string"/>
    <xsd:element name="stockSymbol" type="xsd:string"/>
    <xsd:element name="buySell"     type="xsd:string"/>
    <xsd:element name="quantity"    type="xsd:integer"/>
    <xsd:element name="actualPrice" type="xsd:decimal"/>
  </xsd:sequence>
</xsd:complexType>

These are also defined in the StockService.xsd that we previously imported into the StockService composite. So, for each field, we click on the magnifying glass to bring up the type chooser and select the appropriate element definitions. Then click OK to create the process. This will create a second BPEL process within our composite, so double-click on this to open it.

You will see that, by default, JDeveloper has created a skeleton asynchronous BPEL process, which contains an initial <receive> activity to receive the stock order request. But this time it's followed by an <invoke> activity to send the result back (as opposed to a <reply> activity used by the synchronous process).

If you look at the WSDL for the process, you will see that it defines two operations: process to call the process, and processResponse, which will be called by the process to send back the result. Thus, the client that calls the process operation will need to provide the processResponse callback in order to receive the result (this is something we will look at in more detail in Chapter 15, Message Interaction Patterns).
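
As a sketch (the message names are illustrative), the two port types generated in the WSDL look something like this:

<!-- Service interface: the client calls this to start the process -->
<portType name="StockOrder">
  <operation name="process">
    <input message="client:StockOrderRequestMessage"/>
  </operation>
</portType>

<!-- Callback interface: the process calls this on the client to return the result -->
<portType name="StockOrderCallback">
  <operation name="processResponse">
    <input message="client:StockOrderResponseMessage"/>
  </operation>
</portType>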

Now, for the purpose of our simulation, we will assume that the StockOrder request is successful and the actualPrice achieved is always the bid price. So to do this, create an assign operation that copies all the original input values to their corresponding output values. Deploy the composite, and run it from the console.

Note

When you click the Test Web Service button for the StockService composite, you will now be presented with two options: stockorder_client_ep and stockquote_client_ep. These correspond to each of the exposed services we have defined in our composite. Ensure you select stockorder_client_ep, which is wired to our StockOrder process.

This time, you will notice that no result is returned (as it's being processed asynchronously); rather it displays a message to indicate that the service was invoked successfully, as shown in the following screenshot:

Asynchronous service

Click on Launch Message Flow Trace to bring up the trace for the composite, and then select StockOrder to bring up the audit trail for the process. Switch to the flow view, and expand the callbackClient activity at the end of the trace. This will pop up a window showing the details of the response sent by our process, as shown in the following screenshot:

Asynchronous service

Using the wait activity

Now you've probably spotted the most obvious flaw with this simulation, in that the process returns a response almost immediately, which negates the whole point of making it asynchronous.

To make it more realistic, we will use the <wait> activity to pause for a period of time. To do this, drag the <wait> activity from the Component Palette onto your BPEL process just before the <assign> activity, and then double-click on it to open the Wait activity window, as shown below.

The <wait> activity allows you to specify that the process waits for a specified duration of time or until a specified deadline. In either case, you can specify a fixed value or an XPath expression that is evaluated at runtime.

If you specify Expression, and then click the calculator icon to the right of it, this will launch the Expression Builder that we introduced earlier in the chapter. The result of the expression must be a valid value of xsd:duration for periods and xsd:dateTime for deadlines. The format of xsd:duration is PnYnMnDTnHnMnS; for example, P1M is a duration of 1 month and P10DT1H25M is 10 days, 1 hour, and 25 minutes.

For deadlines, the expression should evaluate to a valid value of xsd:dateTime.

The structure of xsd:dateTime is YYYY-MM-DDThh:mm:ss+hh:mm, where the +hh:mm is optional and is the time period offset from UTC (or GMT, if you prefer). Obviously, the offset can be negative or positive.

For example, 2010-01-19T17:37:47-05:00 is the time 17:37:47 on January 19th 2010, 5 hours behind UTC (that is, Eastern Standard Time in the US).

Using the wait activity

For our purposes, we just need to wait for a relatively short period of time, so set it to wait for one minute.
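
In BPEL 1.1 source, a fixed one-minute wait looks roughly like this; note the leading T in the duration, as PT1M is one minute, whereas P1M would be one month:

<!-- Pause this branch of the process for one minute -->
<wait name="WaitForQuote" for="'PT1M'"/>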

Now save, deploy, and run the composite. If you now look at the audit trail of the process, you will see that it has paused on the <wait> activity (which will be highlighted in orange).

Improving the stock trade service

We have a very trivial trade service, which always results in a successful trade after one minute. Let's see if we can make it a bit more "realistic".

We will modify the process to call the stockQuote service and compare the actual price against the requested price. If the quote we get back matches or is better than the price specified, then we will return a successful trade (at the quoted price). Otherwise we will wait a minute and loop back round and try again.

Creating the while loop

The bulk of this process will now be contained within a while loop, so from the Process Activities list of the Component Palette, drag a While activity into the process.

Click on the plus symbol to expand the While activity. It will now display an area where you can drop a sequence of one or more activities that will be executed every time the process iterates through the loop.

Creating the while loop

We want to iterate through the loop until the trade has been fulfilled, so let's create a variable of type xsd:boolean called tradeFulfilled and use an <assign> statement before the while loop to set its value to false.

The first step is to create the xsd:boolean variable. Until now, we've used JDeveloper to automatically create the variables we've required, typically as part of the process of defining an Invoke activity. However, that's not an option here.

If you look at the diagram of your BPEL process, you will see that it is surrounded by a light grey dashed box, and on the top left-hand side there are a number of icons. If you click on the top one of these (x), as shown in the following screenshot, this will open a window that lists all the variables defined in the process:

Creating the while loop

At this stage, it will list just the default inputVariable and outputVariable, which were automatically created with the process. Click on the green plus button. This will bring up the Create Variable window, as shown in the following screenshot:

Creating the while loop

Here we simply specify the Name of the variable (for example, tradeFulfilled) and its Type. In our case, we want an xsd:boolean, so select Simple Type and click the magnifying glass to the right of it.

This will bring up the Type Chooser, which will list all the simple built-in data types defined by XML Schema. Select Boolean and click OK.

We need to initialize the variable to false, so drag an <assign> statement on to your process just before the while loop. Use the function false(), under the category Logical Functions, to achieve this.
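
The resulting initialization, sketched in BPEL 1.1 source, is simply:

<assign name="InitialiseTradeFulfilled">
  <copy>
    <!-- XPath boolean function; unquoted, so it is evaluated rather than treated as text -->
    <from expression="false()"/>
    <to variable="tradeFulfilled"/>
  </copy>
</assign>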

Next, we need to set the condition on the while loop, so that it will execute only while tradeFulfilled equals false. Double-click on the while loop. This will open the While activity window, as shown in the following screenshot:

Creating the while loop

We must now specify an XPath expression, which will evaluate to either true or false. If you click on the expression builder icon, which is circled in the preceding screenshot, this will launch the Expression Builder. Use this to build the following expression:

bpws:getVariableData('tradeFulfilled') = false()

Once we are happy with this, click OK.

Checking the price

The first activity we need to perform within the while loop is to get a quote for the stock that we are trading. For this, we will need to invoke the stock quote process we created earlier. As both of these processes are in the same composite, the simplest way to do this is to wire them together.

Switch to the composite view in JDeveloper. Next, place your mouse over the yellow arrow on the StockOrder process (the one to add a new Reference). Click and hold your mouse button, drag the arrow onto the blue arrow on the StockQuote process (the one that represents the Service Interface), and then release, as shown in the following screenshot:

Checking the price

This will wire these two processes together and create a corresponding partner link in the StockOrder process. From here, implement the required steps to invoke the process operation of the StockQuote process, making sure that they are included within the while loop.

Using the switch activity

Remember, our requirement is that we return success if the price matches or is better than the one specified in the order. Obviously, whether the price is better depends on whether we are selling or buying. If we are selling, we need the quoted price to be equal to or greater than the price specified in the order; whereas if we are buying, we need it to be equal to or less than that price.

So for this, we will introduce the <switch> activity. Drag a <switch> activity from the Process Activities list of the Component Palette on to your process after the invoke activity for the StockQuote service. Next, click on the plus symbol to expand the <switch> activity. By default, it will have two branches illustrated as follows:

The first branch contains a <case> condition, with a corresponding area where you can drop a sequence of one or more activities that will be executed if the condition evaluates to true.

The second branch contains an <otherwise> subactivity, with a corresponding area for activities. The activities in this branch will only be executed if all case conditions evaluate to false.

Using the switch activity

We want to cater to two separate tests (one for buying the other for selling), so click on the Add Switch Case arrow (highlighted in the preceding screenshot) to add another <case> branch.

Next, we need to define the test condition for each <case>. To do this, click on the corresponding Expression Builder icon to launch the expression builder (circled in the preceding screenshot). For the first one, use the expression builder to create the following:

bpws:getVariableData('inputVariable', 'payload',
    '/ns1:placeOrder/ns1:buySell') = 'Buy'
and bpws:getVariableData('inputVariable', 'payload',
    '/ns1:placeOrder/ns1:bidPrice') >=
    bpws:getVariableData('stockQuoteOutput', 'payload',
    '/ns1:getQuoteResponse/ns1:amount')

For the second branch, use the expression builder to define the following:

bpws:getVariableData('inputVariable', 'payload',
    '/ns1:placeOrder/ns1:buySell') = 'Sell'
and bpws:getVariableData('inputVariable', 'payload',
    '/ns1:placeOrder/ns1:bidPrice') <=
    bpws:getVariableData('stockQuoteOutput', 'payload',
    '/ns1:getQuoteResponse/ns1:amount')

Once we've defined the condition for each case, we just need to create a single <assign> activity in each branch. This needs to set all the values in the outputVariable to the corresponding values in the inputVariable, except for the actualPrice element, which we should set to the value returned by the StockQuote process. Finally, we also need to set tradeFulfilled to true, so that we exit the while loop.

The simplest way to do this is by dragging the original <assign> we created in the first version of this process onto the first branch and then modify it as appropriate. Then create a similar <assign> activity in the second branch.

Note

You've probably noticed that you could actually combine the two tests into a single test. However, we took this approach to illustrate how you can add multiple branches to a switch.

If we don't have a match, then we have to wait a minute and then circle back round the while loop and try again. As we've already defined a <wait> activity, simply drag this from its current position within the process into the activity area for the <otherwise> activity.
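
Putting the pieces together, the body of the while loop ends up with a structure along the following lines; the conditions are abbreviated and the activity names are illustrative:

<switch name="CheckPrice">
  <case condition="...buy test shown above...">
    <!-- AssignBuyResult: copy input to output, set actualPrice and tradeFulfilled -->
  </case>
  <case condition="...sell test shown above...">
    <!-- AssignSellResult: as above, for the sell case -->
  </case>
  <otherwise>
    <!-- No match yet: pause, then go round the while loop again -->
    <wait name="WaitForQuote" for="'PT1M'"/>
  </otherwise>
</switch>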

That completes the process, so try deploying it and running it from the console.

Note

The other obvious thing is that this process could potentially run forever if we don't get a stock quote in our favor. One way to solve this would be to put the while activity in a scope and then set a timeout period on the scope so that it would only run for so long.

Creating our StockQuote service

Before we begin, we will create an application (named Chapter05), which we will use for all our samples in this chapter. To do this, follow the same process we used to create our first application in Chapter 2, Writing your First Composite. When prompted to create a project, create an Empty Composite named StockService.

Next, drag a BPEL process from the SOA Component Palette onto our StockService composite. This will launch the Create BPEL Process wizard, specify a name of StockQuote, and select a Synchronous BPEL Process. However, at this stage do not click OK.

Creating our StockQuote service

You may remember when we created our Echo service back in Chapter 2, Writing your First Composite, JDeveloper automatically created a simple WSDL file for our service, with a single input and output field. For our StockQuote service, we need to pass in multiple fields (that is, Stock Ticker and Currency). So, to define the input and output messages for our BPEL process, we are going to make use of a predefined schema StockService.xsd, as shown in the following code snippet (for brevity, only the parts which are relevant to this example are shown. However, the complete schema is provided in the downloadable samples file for the book).

<?xml version="1.0" encoding="windows-1252"?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            xmlns="http://xmlns.packtpub.com/StockService"
            targetNamespace="http://xmlns.packtpub.com/StockService" 
            elementFormDefault="qualified">

	<xsd:element name="getQuote"         type=" tGetQuote"/>
	<xsd:element name="getQuoteResponse" type=" tGetQuoteResponse"/>

	<xsd:complexType name="tGetQuote">
		<xsd:sequence>
			<xsd:element name="stockSymbol" type="xsd:string"/>
			<xsd:element name="currency" type="xsd:string"/>
		</xsd:sequence>
	</xsd:complexType>

	<xsd:complexType name="tGetQuoteResponse">
		<xsd:sequence>
			<xsd:element name="stockSymbol" type="xsd:string"/>
			<xsd:element name="currency" type="xsd:string"/>
			<xsd:element name="amount" type="xsd:decimal"/>
		</xsd:sequence>
	</xsd:complexType>

   … 
   
</xsd:schema>

Importing StockService schema

To override the default input schema element generated by JDeveloper, click on Browse Input Elements … (the magnifying glass circled in the previous screenshot). This will bring up the Type Chooser , as shown in the following screenshot, which allows you to browse all schemas imported by the composite and select an element from them.

Importing StockService schema

In our case, we have yet to import any schemas, so click on Import Schema File … (circled in the previous screenshot). This will launch the Import Schema File window. Click on the magnifying glass to launch the SOA Resource Browser (in File System mode), which will allow us to search our filesystem for an appropriate schema.

Find the StockService.xsd located in the samples folder for Chapter 5 and select this. Ensure that the option to Copy to Project is selected and click OK; JDeveloper will then bring up the Localize Files window. Keep the default options and click OK. This will cause JDeveloper to create a local copy of our XML Schema and any dependant files (of which there are none in this example) within our project.

JDeveloper will now open the schema browser dialog, containing the imported StockService schema. Browse this and select the getQuote element, as shown in the following screenshot:

Importing StockService schema

Repeat this step for the output schema element, but select the getQuoteResponse element. Click OK and this will create our StockQuote process within our composite, as shown in the following screenshot:

Importing StockService schema

Within the composite, double-click the StockQuote process to open it in the BPEL editor. You will see that, by default, JDeveloper has created a skeleton BPEL process, which contains an initial <receive> activity to receive the stock quote request, followed by a <reply> activity to send back the result (as we discussed in the earlier section – Synchronous Messaging). In addition, it will have created two variables; inputVariable, which contains the initial stockquote request, and outputVariable, in which we will place the result to return to the requestor.

Note

If you look in the Projects section of the Application Navigator, you will see that it contains the file StockQuote.wsdl. This contains the WSDL description (including partner link extensions) for our process. If you examine this, you will see that we have a single operation, process, which is used to call the BPEL process.

Calling the external web services

The next step is to call our external web services. For our stock quote service, we are going to use Xignite's quotes web service, which delivers delayed equity price quotes from all U.S. stock exchanges (NYSE, NASDAQ, AMEX, NASDAQ OTC Bulletin Board, and Pink Sheets).

Note

Before you can use this service, you will need to register with Xignite. To do this, or for more information on this and other services provided by Xignite, go to www.xignite.com.

To call a web service in BPEL, we first need to create a partner link (as discussed at the start of this chapter). So from the Component Palette, expand the BPEL Services section and drag a Partner Link (Web Service / Adapter) component into the Partner Link swim lane in your BPEL process. This will pop up the following screen:

Calling the external web services

First enter a name for the partner link, for example, XigniteQuotes. Next we need to specify the WSDL file for the partner link. JDeveloper provides the following ways to do this:

  • SOA Resource Lookup: Allows us to browse the filesystem for WSDL files or any connected application server for deployed services
  • SOA Service Explorer: Allows us to browse other services that are defined within the composite (for example, other BPEL processes, Mediator, or external services)
  • Define Service: This enables us to define adapter services (refer to Chapter 3, Service-enabling Existing Systems) directly within the context of a BPEL process
  • WSDL URL: Directly enter the URL for the WSDL file into the corresponding field

For our reference, we have a local copy of the WSDL for Xignite's quotes service, called XigniteQuotes.wsdl, which is included with the samples for Chapter 5. Click on the SOA Resource Lookup … icon (circled in the preceding screenshot), then browse to and select this file (select Yes if prompted to create a local copy of the file).

JDeveloper will parse the WSDL, and assuming it is successful, it will pop up a window saying that there are no partner link types defined in the current WSDL and ask if you want to create partner links for the file. Click Yes. JDeveloper will then create one Partner Link Type for each port type defined in the WSDL. In cases where we have multiple partner link types, we will need to specify which one to use within our process. To do this, click on the drop-down list next to Partner Link Type and select the appropriate one. In our case, we have selected XigniteQuotesSoap_PL, as shown in the following screenshot:

Calling the external web services

Finally, we need to specify the Partner Role and My Role. When invoking a synchronous service, there will only be a single role defined in the WSDL, which represents the provider of the service. So specify this for the Partner Role and leave My Role as ----- Not Specified -----.

Note

Best practice would dictate that rather than calling the stock quote service directly from within BPEL, we would invoke it via the Oracle Service Bus. This is an area we look at more closely in Chapter 10, oBay Introduction, when we define our blueprint for SOA.

If you look at the composite view, you will see that XigniteQuotes is defined as an External Reference and is wired to our BPEL process.
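
Behind the scenes, JDeveloper also adds an entry to the partner links section of the BPEL source, broadly like the following sketch. The role name is an assumption based on the partner link types JDeveloper generates; check the generated WSDL for the exact name used in your project.

<!-- Sketch only: the role name comes from the generated partner link types -->
<partnerLink name="XigniteQuotes"
             partnerLinkType="ns3:XigniteQuotesSoap_PL"
             partnerRole="XigniteQuotesSoap_Role"/>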

Calling the web service

Once we have defined a partner link for the web service, the next step is to call it. As this is a synchronous service, we will need to use an <invoke> activity to call it, as we described earlier in this chapter.

On the Component Palette, ensure that the BPEL Activities and Components section is expanded. Then from it, drag an Invoke activity on to your BPEL process.

Next, place your mouse over the arrow next to the Invoke activity. Click and hold your mouse button, drag the arrow over your partner link, and then release, as shown in the following screenshot:

Calling the web service

This will then pop up the Edit Invoke activity window, as shown in the following screenshot:

Calling the web service

We need to specify a number of values to configure the Invoke activity, namely:

  • Name: This is the name we want to assign to the Invoke activity, and can be any value. So just assign a meaningful value such as GetQuote.
  • Partner Link: This is the Partner Link whose service we want to invoke; it should already be set to use XigniteQuotes, as we have already linked this activity to that Partner Link. An alternate approach would be to click on the corresponding spotlight icon, which would allow us to select from any Partner Link already defined to the process.
  • Operation: Once we've specified a Partner Link, we need to specify which of its operations we wish to invoke. This presents us with a drop-down list of all the available operations; for our purposes, select GetSingleQuote.
  • Input: Here we must specify the variable that contains the data to be passed to the web service that's being invoked. It is important that the variable is of type Message, and that it is of the same message type expected by the Operation (that is, as defined in the WSDL file for the web service).

    The simplest way to ensure this is by getting JDeveloper to create the variable for you. To do this, click on the green plus sign to the right of the input variable field. This will bring up the Create Variable window, as shown in the following screenshot. You will notice that JDeveloper creates a default name for the variable (based on the name you gave the invoke operation and the operation that you are calling). You can override this with something more meaningful (for example, QuoteInput).

  • Output: Finally, we must specify the variable into which the value returned by the web service will be placed. As with the input variable, this should be of type Message and correspond to the output message defined in the WSDL file for the selected operation. Again, the simplest way to ensure this is to get JDeveloper to create the variable for you.
    Calling the web service

Once you've specified values for all these fields, as illustrated in the preceding screenshot, click OK.
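
In the BPEL source, the configured activity corresponds to an <invoke> element along the lines of the following sketch. The portType prefix and the QuoteOutput variable name are assumptions based on the defaults JDeveloper suggests.

<!-- Synchronous invocation of the Xignite GetSingleQuote operation (sketch) -->
<invoke name="GetQuote"
        partnerLink="XigniteQuotes"
        portType="ns3:XigniteQuotesSoap"
        operation="GetSingleQuote"
        inputVariable="QuoteInput"
        outputVariable="QuoteOutput"/>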

Assigning values to variables

In our previous step, we created the variable QuoteInput, which we pass to our invocation of GetSingleQuote. However, we have yet to initialize the variable or assign any value to it.

To do this, BPEL provides the <assign> activity, which is used to update the values of variables with new data. The <assign> activity typically consists of one or more copy operations. Each copy consists of a target (the variable that you wish to assign a value to) and a source (either another variable or an XPath expression).

For our purposes, we want to assign the stock symbol passed into our BPEL process to our QuoteInput variable.

To do this, drag an Assign activity from the Component Palette on to your BPEL process at the point just before our Invoke activity. Then double-click on it to open up the Assign configuration window. Click on the green plus sign and select Copy Operation….

This will present us with the Create Copy Operation window, as shown in the following screenshot:

Assigning values to variables

On the left-hand side, we specify the From variable (that is, the source). Here we want to specify the stock symbol passed in as part of the input variable to the BPEL process. So expand the inputVariable tree, and select /ns2:getQuote/ns2:stockSymbol.

For the target, expand QuoteInput and select /ns1:GetSingleQuote/ns1:Symbol.

You will notice that for both the source and target, JDeveloper has created the equivalent XPath expression (circled in the preceding screenshot).

Note

The source and target can be either a simple type (for example, xsd:int, xsd:date, or xsd:string), as in the preceding example, or a complex type (for example, ns2:getQuote). In either case, make sure the source and target are of the same type, or at least compatible types.
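
In BPEL source, this copy operation looks roughly like the sketch below. The part names (payload for the process message, parameters for the Xignite message) are the defaults JDeveloper typically generates and may differ in your project.

<!-- Copy the incoming stock symbol into the GetSingleQuote request (sketch) -->
<assign name="AssignQuoteInput">
  <copy>
    <from variable="inputVariable" part="payload"
          query="/ns2:getQuote/ns2:stockSymbol"/>
    <to   variable="QuoteInput" part="parameters"
          query="/ns1:GetSingleQuote/ns1:Symbol"/>
  </copy>
</assign>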

Testing the process

At this stage, even though the process isn't complete, we can still save, deploy, and run our composite. Do this in the same way as previously covered in Chapter 2, Writing your First Composite. When you run the composite from the console, you will notice that it doesn't return anything (as we haven't specified this yet). But if you look at the audit trail, you should see the GetSingleQuote operation being invoked successfully. Assuming this is the case, we know we have implemented that part of the process correctly.

Calling the exchange rate web service

The next step of the process is to determine the exchange rate between the requested currency and the US dollar (the currency used by the GetSingleQuote operation). For this, we are going to use the currency convertor service provided by webserviceX.NET.

For more information on this and other services provided by webserviceX.NET, go to www.webservicex.net.

This service provides a single operation ConversionRate, which gets the conversion rate from one currency to another. The WSDL file for this service can be found at the following URL:

http://www.webservicex.net/CurrencyConvertor.asmx?wsdl

For convenience, we have included a local copy of the WSDL for webserviceX.NET's currency convertor service, called CurrencyConvertor.wsdl. It's included with the samples of Chapter 5.

To invoke the ConversionRate operation, we will follow the same basic steps that we did in the previous section to invoke the GetSingleQuote operation. For brevity, we won't repeat them here, but leave this as an exercise for the reader.

Note

To follow the examples, name the input variable for the exchange rate web service ExchangeRateInput and the output variable ExchangeRateOutput.

Assigning constant values to variables

The operation ConversionRate takes two input values as follows:

  • FromCurrency: This should be set to 'USD'
  • ToCurrency: This should be set to the currency field contained within the inputVariable for the BPEL process.

To set the FromCurrency, create another copy operation. However, for the From value, select Expression as the Type (circled in the following screenshot).

This will replace the variable browser with a free format textbox. Here you can specify any value, within quotes, that you wish to assign to your target variable. For our purposes, enter 'USD', as shown in the following screenshot:

Assigning constant values to variables

To set the value of ToCurrency, create another copy operation and copy in the value of the currency field contained within the inputVariable.
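
Together, the two copy operations correspond to BPEL source along these lines. This is a sketch: the ns4 prefix and the parameters part name are assumptions based on how JDeveloper typically imports the CurrencyConvertor WSDL.

<!-- Set FromCurrency to the literal 'USD' and ToCurrency from the process input (sketch) -->
<assign name="AssignExchangeRateInput">
  <copy>
    <from expression="'USD'"/>
    <to variable="ExchangeRateInput" part="parameters"
        query="/ns4:ConversionRate/ns4:FromCurrency"/>
  </copy>
  <copy>
    <from variable="inputVariable" part="payload"
          query="/ns2:getQuote/ns2:currency"/>
    <to variable="ExchangeRateInput" part="parameters"
        query="/ns4:ConversionRate/ns4:ToCurrency"/>
  </copy>
</assign>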

At this stage again, save, deploy, and run the composite to validate that we are calling the exchange rate service correctly.

Using the expression builder

The final part of the process is to combine the exchange rate returned by one service with the stock price returned by the other, in order to determine the stock price in the requested currency and return that to the caller of the composite.

To do this, we will again use an <assign> activity. So drag another <assign> activity onto the process, just after our second invoke activity. In our previous use of the <assign> activity, we simply copied a value from one variable to another.

Here, it is slightly different, in that we want to combine multiple values into a single value, and to do that, we will need to write the appropriate piece of XPath. Create a copy operation as before, but for the source type, select Expression from the drop-down list, as shown in the following screenshot:

Using the expression builder

Now, if you want, you can type in the XPath expression manually (into the Expression area), but it's far easier and less error prone to use the Expression Builder. To do this, click on the XPath expression builder icon (the calculator icon circled in the preceding screenshot); this will pop up the Expression Builder (shown below):

Using the expression builder

The Expression Builder provides a graphical tool for writing XPath expressions, which are executed as part of the copy operation. It consists of the following areas:

  • Expression: The top textbox contains the XPath expression that you are working on. You can either type data directly in here, or use the Expression Builder to insert XPath fragments to build up the XPath required.
  • BPEL variables: This part of the Expression Builder lets you browse the variables defined within your BPEL process. Once you've located the variable that you wish to use, click on the Insert Into Expression button, and this will insert the appropriate code fragment into the XPath expression.

    Note

    The code fragment is inserted at the point within the expression where the cursor is currently positioned.

  • Functions: This shows you all the different types of XPath functions that are available to build up your XPath expression. To make it easier to locate the required function, they are grouped into categories such as String Functions, Mathematical Functions, and so on.

    The drop-down list lets you select the category that you are interested in (for example, Mathematical Functions, as illustrated in the preceding screenshot), and then the window below that lists all the functions available to that group.

    To use a particular function, select the required function, and click Insert Into Expression. This will insert the appropriate XPath fragment into the XPath Expression (again at the point that the cursor is currently positioned).

  • Content Preview: This box displays a preview of the content that would be inserted into the XPath Expression if you clicked the Insert Into Expression button. For example, if you had currently selected a particular BPEL variable, it would show you the XPath to access that variable.
  • Description: If you've currently selected a function, this box provides a brief description of the function, as well as the expected usage and number of parameters.

So let's use this to build our XPath expression. The expression we want to build is a relatively simple one, namely, the stock price returned by the stock quote service multiplied by the exchange rate returned by the exchange rate service.

To build our XPath expression, carry out the following steps:

First, within the BPEL Variables area, in the variable QuoteOutput, locate the element ns1:GetSingleQuoteResult/ns1:Last, as shown in the following screenshot:

Using the expression builder

Then click Insert Into Expression to insert this into the XPath expression.

Next, within the Functions area, select the Mathematical Functions category, and select the multiply function (notice the description in the Description box, as shown in the following screenshot), and insert this into the XPath expression:

Using the expression builder

Finally, back in the BPEL Variables area, locate the element ConversionRateResult within the variable ExchangeRateOutput, and insert that into the XPath expression.

You should now have an XPath expression similar to the one illustrated below. Once you are happy with it, click OK.

Using the expression builder

Finally, make sure you specify the target part of the copy operation, which should be the amount element within the outputVariable.

In order to complete the <assign> activity, you will need to create two more copy operations to copy the Currency and StockSymbol specified in the inputVariable into the equivalent values in the outputVariable.
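
Taken together, the completed <assign> ends up looking something like the following sketch. The part names and element paths for QuoteOutput and ExchangeRateOutput depend on the two external WSDLs, so treat them as placeholders for whatever the Expression Builder actually inserts in your project.

<!-- Combine stock price and exchange rate, and echo back symbol and currency (sketch) -->
<assign name="AssignOutput">
  <copy>
    <from expression="bpws:getVariableData('QuoteOutput', 'parameters',
              '/ns1:GetSingleQuoteResponse/ns1:GetSingleQuoteResult/ns1:Last')
          * bpws:getVariableData('ExchangeRateOutput', 'parameters',
              '/ns4:ConversionRateResponse/ns4:ConversionRateResult')"/>
    <to variable="outputVariable" part="payload"
        query="/ns2:getQuoteResponse/ns2:amount"/>
  </copy>
  <copy>
    <from variable="inputVariable"  part="payload" query="/ns2:getQuote/ns2:currency"/>
    <to   variable="outputVariable" part="payload" query="/ns2:getQuoteResponse/ns2:currency"/>
  </copy>
  <copy>
    <from variable="inputVariable"  part="payload" query="/ns2:getQuote/ns2:stockSymbol"/>
    <to   variable="outputVariable" part="payload" query="/ns2:getQuoteResponse/ns2:stockSymbol"/>
  </copy>
</assign>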

Once done, your BPEL process should be complete. So deploy and run the composite.

Asynchronous service

Following on from our StockQuote service, the next service we will look at is a stock order service, which would enable us to buy or sell a particular stock. For this service, a client would need to specify the stock, whether they wanted to buy or sell, the quantity, and the price.

It makes sense to make this an asynchronous service, as once the order has been placed, it may take seconds, minutes, hours, or even days for the order to be matched.

Now, I'm not aware of any trade services that are free to try (probably for a good reason!). However, there is no reason why we can't simulate one. To do this, we will write a simple asynchronous process.

Drag another BPEL process on to our StockService composite and give it the name StockOrder, but specify that it is an asynchronous BPEL process.

As with the StockQuote process, we also want to specify predefined elements for its input and output. The elements we are going to use are placeOrder for the input and placeOrderResponse for the output, the definitions for which are shown in the following code snippet:

<xsd:element name="placeOrder"         type="tPlaceOrder"/>
<xsd:element name="placeOrderResponse" type="tPlaceOrderResponse"/>

<xsd:complexType name="tPlaceOrder">
  <xsd:sequence>
    <xsd:element name="currency"    type="xsd:string"/>
    <xsd:element name="stockSymbol" type="xsd:string"/>
    <xsd:element name="buySell"     type="xsd:string"/>
    <xsd:element name="quantity"    type="xsd:integer"/>
    <xsd:element name="bidPrice"    type="xsd:decimal"/>
  </xsd:sequence>
</xsd:complexType>

<xsd:complexType name="tPlaceOrderResponse">
  <xsd:sequence>
    <xsd:element name="currency"    type="xsd:string"/>
    <xsd:element name="stockSymbol" type="xsd:string"/>
    <xsd:element name="buySell"     type="xsd:string"/>
    <xsd:element name="quantity"    type="xsd:integer"/>
    <xsd:element name="actualPrice" type="xsd:decimal"/>
  </xsd:sequence>
</xsd:complexType>

These are also defined in the StockService.xsd that we previously imported into the StockService composite. So, for each field, we click on the magnifying glass to bring up the type chooser and select the appropriate element definitions. Then click OK to create the process. This will create a second BPEL process within our composite, so double-click on this to open it.

You will see that, by default, JDeveloper has created a skeleton asynchronous BPEL process, which contains an initial <receive> activity to receive the stock order request. But this time it's followed by an <invoke> activity to send the result back (as opposed to a <reply> activity used by the synchronous process).

If you look at the WSDL for the process, you will see that it defines two operations: process, which is used to call the process, and processResponse, which will be called by the process to send back the result. Thus, the client that calls the process operation will need to provide the processResponse callback in order to receive the result (this is something we will look at in more detail in Chapter 15, Message Interaction Patterns).
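
In source form, the skeleton asynchronous process follows the request/callback pattern sketched below: the result goes back via an <invoke> on the client partner link rather than a <reply>. The partner link, port type, and variable names are illustrative defaults and may differ in your generated process.

<!-- Sketch of the asynchronous request/callback pattern -->
<sequence name="main">
  <!-- receive the placeOrder request and create a new instance -->
  <receive name="receiveInput" partnerLink="stockorder_client"
           portType="client:StockOrder" operation="process"
           variable="inputVariable" createInstance="yes"/>
  <!-- the order matching logic will go here -->
  <!-- call the client back on its processResponse operation -->
  <invoke name="callbackClient" partnerLink="stockorder_client"
          portType="client:StockOrderCallback" operation="processResponse"
          inputVariable="outputVariable"/>
</sequence>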

Now, for the purpose of our simulation, we will assume that the StockOrder request is successful and the actualPrice achieved is always the bid price. So to do this, create an assign operation that copies all the original input values to their corresponding output values. Deploy the composite, and run it from the console.

Note

When you click the Test Web Service button for the StockService composite, you will now be presented with two options: stockorder_client_ep and stockquote_client_ep. These correspond to each of the exposed services we have defined in our composite. Ensure you select stockorder_client_ep, which is wired to our StockOrder process.

This time, you will notice that no result is returned (as it's being processed asynchronously); rather it displays a message to indicate that the service was invoked successfully, as shown in the following screenshot:

Asynchronous service

Click on Launch Message Flow Trace to bring up the trace for the composite, and then select StockOrder to bring up the audit trail for the process. Switch to the flow view, and expand the callbackClient activity at the end of the trace. This will pop up a window showing the details of the response sent by our process, as shown in the following screenshot:

Asynchronous service

Using the wait activity

Now you've probably spotted the most obvious flaw with this simulation, in that the process returns a response almost immediately, which negates the whole point of making it asynchronous.

To make it more realistic, we will use the <wait> activity to wait for a period of time. To do this drag the <wait> activity from the Component Palette onto your BPEL process just before the <assign> activity, and then double-click on it to open the Wait activity window, as shown below.

The <wait> activity allows you to specify that the process waits for a specified duration of time or until a specified deadline. In either case, you can specify a fixed value or an XPath expression that is evaluated at runtime.

If you specify Expression, and then click the calculator icon to the right of it, this will launch the Expression Builder that we introduced earlier in the chapter. The result of the expression must evaluate to a valid value of xsd:duration for periods and xsd:dateTime for deadlines. The format of xsd:duration is PnYnMnDTnHnMnS. For example, P1M would be a duration of one month, and P10DT1H25M would be 10 days, 1 hour, and 25 minutes.

For deadlines, the expression should evaluate to a valid value of xsd:dateTime.

The structure of xsd:dateTime is YYYY-MM-DDThh:mm:ss+hh:mm, where the +hh:mm part is optional and represents the time zone offset from UTC (or GMT, if you prefer). The offset can be negative or positive.

For example, 2010-01-19T17:37:47-05:00 is the time 17:37:47 on January 19th 2010, 5 hours behind UTC (that is, Eastern Standard Time in the US).

Using the wait activity

For our purposes, we just need to wait for a relatively short period of time, so set it to wait for one minute.
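
In BPEL source, a fixed one-minute wait is simply the following (the literal duration is quoted because the for attribute holds an expression; the activity name is illustrative):

<!-- Pause the process for one minute -->
<wait name="WaitForMatch" for="'PT1M'"/>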

Now save, deploy, and run the composite. If you now look at the audit trail of the process, you will see that it has paused on the <wait> activity (which will be highlighted in orange).

Improving the stock trade service

We have a very trivial trade service, which always results in a successful trade after one minute. Let's see if we can make it a bit more "realistic".

We will modify the process to call the stockQuote service and compare the actual price against the requested price. If the quote we get back matches or is better than the price specified, then we will return a successful trade (at the quoted price). Otherwise we will wait a minute and loop back round and try again.

Creating the while loop

The bulk of this process will now be contained within a while loop, so from the Process Activities list of the Component Palette, drag a While activity into the process.

Click on the plus symbol to expand the While activity. It will now display an area where you can drop a sequence of one or more activities that will be executed every time the process iterates through the loop.

Creating the while loop

We want to iterate through the loop until the trade has been fulfilled, so let's create a variable of type xsd:boolean called tradeFulfilled and use an <assign> statement before the while loop to set its value to false.

The first step is to create a variable of type xsd:boolean. Until now, we've used JDeveloper to automatically create the variables we've required, typically as part of the process of defining an Invoke activity. However, that's not an option here.

If you look at the diagram of your BPEL process, you will see that it is surrounded by a light grey dashed box, and on the top left-hand side there are a number of icons. If you click on the top one of these (x), as shown in the following screenshot, this will open a window that lists all the variables defined in the process:

Creating the while loop

At this stage, it will list just the default inputVariable and outputVariable, which were automatically created with the process. Click on the green plus button. This will bring up the Create Variable window, as shown in the following screenshot:

Creating the while loop

Here we simply specify the Name of the variable (for example, tradeFulfilled) and its Type. In our case, we want an xsd:boolean, so select Simple Type and click the magnifying glass to the right of it.

This will bring up the Type Chooser, which will list all the simple built-in data types defined by XML Schema. Select Boolean and click OK.

We need to initialize the variable to false, so drag an <assign> statement on to your process just before the while loop. Use the function false(), under the category Logical Functions, to achieve this.
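
The resulting variable declaration and initialization look roughly like this sketch (assuming xsd is declared as the XML Schema namespace prefix in the process, which it normally is by default):

<!-- Declared alongside the other process variables -->
<variable name="tradeFulfilled" type="xsd:boolean"/>

<!-- Initialize it to false before entering the while loop -->
<assign name="InitTradeFulfilled">
  <copy>
    <from expression="false()"/>
    <to variable="tradeFulfilled"/>
  </copy>
</assign>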

Next, we need to set the condition on the while loop, so that it will execute only while tradeFulfilled equals false. Double-click on the while loop. This will open the While activity window, as shown in the following screenshot:

Creating the while loop

We must now specify an XPath expression, which will evaluate to either true or false. If you click on the expression builder icon, which is circled in the preceding screenshot, this will launch the Expression Builder. Use this to build the following expression:

bpws:getVariableData('tradeFulfilled') = false()

Once we are happy with this, click OK.
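
In source, the loop condition sits on the <while> activity itself, as in this sketch (the activity names are illustrative):

<!-- Keep looping until tradeFulfilled is set to true inside the loop -->
<while name="WhileNotFulfilled"
       condition="bpws:getVariableData('tradeFulfilled') = false()">
  <sequence>
    <!-- quote lookup, switch, and wait will go here -->
  </sequence>
</while>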

Checking the price

The first activity we need to perform within the while loop is to get a quote for the stock that we are trading. For this, we will need to invoke the stock quote process we created earlier. As both of these processes are in the same composite, the simplest way to do this is to wire them together.

Switch to the composite view in JDeveloper, next place your mouse over the yellow arrow on the StockOrder process (the one to add a new Reference). Click and hold your mouse button, then drag the arrow onto the blue arrow on the StockQuote process (the one that represents the Service Interface), then release, as shown in the following screenshot:

Checking the price

This will wire these two processes together and create a corresponding partner link in the StockOrder process. From here, implement the required steps to invoke the process operation of the StockQuote process, making sure that they are included within the while loop.

Using the switch activity

Remember, our requirement is that we return success if the price matches or is better than the one specified in the order. Obviously, whether the price is better depends on whether we are selling or buying. If we are selling, we need the price to be equal to or greater than the asking price; whereas if we are buying, we need the price to be equal to or less than the asking price.

So for this, we will introduce the <switch> activity. Drag a <switch> activity from the Process Activities list of the Component Palette on to your process after the invoke activity for the StockQuote service. Next, click on the plus symbol to expand the <switch> activity. By default, it will have two branches illustrated as follows:

The first branch contains a <case> condition, with a corresponding area where you can drop a sequence of one or more activities that will be executed if the condition evaluates to true.

The second branch contains an <otherwise> subactivity, with a corresponding area for activities. The activities in this branch will only be executed if all case conditions evaluate to false.

Using the switch activity

We want to cater to two separate tests (one for buying, the other for selling), so click on the Add Switch Case arrow (highlighted in the preceding screenshot) to add another <case> branch.

Next, we need to define the test condition for each <case>. To do this, click on the corresponding Expression Builder icon to launch the expression builder (circled in the preceding screenshot). For the first one, use the expression builder to create the following:

bpws:getVariableData('inputVariable', 'payload',
    '/ns1:placeOrder/ns1:buySell') = 'Buy' and
bpws:getVariableData('inputVariable', 'payload',
    '/ns1:placeOrder/ns1:bidPrice') >=
bpws:getVariableData('stockQuoteOutput', 'payload',
    '/ns1:getQuoteResponse/ns1:amount')

For the second branch, use the expression builder to define the following:

bpws:getVariableData('inputVariable', 'payload',
    '/ns1:placeOrder/ns1:buySell') = 'Sell' and
bpws:getVariableData('inputVariable', 'payload',
    '/ns1:placeOrder/ns1:bidPrice') <=
bpws:getVariableData('stockQuoteOutput', 'payload',
    '/ns1:getQuoteResponse/ns1:amount')

Once we've defined the condition for each case, we just need to create a single <assign> activity in each branch. This needs to set all the values in the outputVariable to the corresponding values in the inputVariable, except for the actualPrice element, which we should set to the value returned by the StockQuote process. Finally, we also need to set tradeFulfilled to true, so that we exit the while loop.

The simplest way to do this is by dragging the original <assign> we created in the first version of this process onto the first branch and then modify it as appropriate. Then create a similar <assign> activity in the second branch.

Note

You've probably noticed that you could actually combine the two tests into a single test. However, we took this approach to illustrate how you can add multiple branches to a switch.

If we don't have a match, then we have to wait a minute and then circle back round the while loop and try again. As we've already defined a <wait> activity, simply drag this from its current position within the process into the activity area for the <otherwise> activity.
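
Putting the pieces together, the body of the loop ends up with a structure along these lines. This is a sketch: the activity names and namespace prefixes are assumptions, the assigns are abbreviated to the copy that exits the loop, and note that the >= and <= operators must be written as &gt;= and &lt;= when they appear inside the condition attribute.

<!-- Sketch of the switch inside the while loop -->
<switch name="EvaluateQuote">
  <case condition="bpws:getVariableData('inputVariable', 'payload',
            '/ns1:placeOrder/ns1:buySell') = 'Buy'
        and bpws:getVariableData('inputVariable', 'payload',
            '/ns1:placeOrder/ns1:bidPrice')
        &gt;= bpws:getVariableData('stockQuoteOutput', 'payload',
            '/ns1:getQuoteResponse/ns1:amount')">
    <assign name="AssignBuyResult">
      <!-- also copy the input values and the quoted price to outputVariable
           (abbreviated here) -->
      <copy>
        <from expression="true()"/>
        <to variable="tradeFulfilled"/>
      </copy>
    </assign>
  </case>
  <case condition="bpws:getVariableData('inputVariable', 'payload',
            '/ns1:placeOrder/ns1:buySell') = 'Sell'
        and bpws:getVariableData('inputVariable', 'payload',
            '/ns1:placeOrder/ns1:bidPrice')
        &lt;= bpws:getVariableData('stockQuoteOutput', 'payload',
            '/ns1:getQuoteResponse/ns1:amount')">
    <assign name="AssignSellResult">
      <!-- as above, for the sell case -->
      <copy>
        <from expression="true()"/>
        <to variable="tradeFulfilled"/>
      </copy>
    </assign>
  </case>
  <otherwise>
    <!-- no match yet: wait a minute, then loop round and try again -->
    <wait name="WaitForMatch" for="'PT1M'"/>
  </otherwise>
</switch>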

That completes the process, so try deploying it and running it from the console.

Note

The other obvious thing is that this process could potentially run forever if we don't get a stock quote in our favor. One way to solve this would be to put the while activity in a scope and then set a timeout period on the scope so that it would only run for so long.

Importing StockService schema

To override the default input schema element generated by JDeveloper, click on Browse Input Elements … (the magnifying glass circled in the previous screenshot). This will bring up the Type Chooser , as shown in the following screenshot, which allows you to browse all schemas imported by the composite and select an element from them.

Importing StockService schema

In our case, we have yet to import any schemas, so click on Import Schema File … (circled in the previous screenshot). This will launch the Import Schema File window. Click on the magnifying glass to launch the SOA Resource Browser (in File System mode), which will allow us to search our filesystem for an appropriate schema.

Find the StockService.xsd located in the samples folder for Chapter 5 and select this. Ensure that the option to Copy to Project is selected and click OK; JDeveloper will then bring up the Localize Files window. Keep the default options and click OK. This will cause JDeveloper to create a local copy of our XML Schema and any dependant files (of which there are none in this example) within our project.

JDeveloper will now open the schema browser dialog, containing the imported StockService schema. Browse this and select the getQuote element, as shown in the following screenshot:

Importing StockService schema

Repeat this step for the output schema element, but select the getQuoteResponse element. Click OK and this will create our StockQuote process within our composite, as shown in the following screenshot:

Importing StockService schema

Within the composite, double-click the StockQuote process to open it in the BPEL editor. You will see that, by default, JDeveloper has created a skeleton BPEL process, which contains an initial <receive> activity to receive the stock quote request, followed by a <reply> activity to send back the result (as we discussed in the earlier section – Synchronous Messaging). In addition, it will have created two variables; inputVariable, which contains the initial stockquote request, and outputVariable, in which we will place the result to return to the requestor.

Note

If you look in the Projects section of the Application Navigator, you will see that it contains the file StockQuote.wsdl. This contains the WSDL description (including partner link extensions) for our process. If you examine this, you will see that we have a single operation; process, which is used to call the BPEL process.

Calling the external web services

The next step is to call our external web services. For our stock quote service, we are going to use Xignite's quotes web service, which delivers delayed equity price quotes from all U.S. stock exchanges (NYSE, NASDAQ, AMEX, NASDAQ OTC Bulletin Board, and Pink Sheets).

Note

Before you can use this service, you will need to register with Xignite. To do this, or for more information on this and other services provided by Xignite, go to www.xignite.com.

To call a web service in BPEL, we first need to create a partner link (as discussed at the start of this chapter). So from the Component Palette, expand the BPEL Services section and drag a Partner Link (Web Service / Adapter) component into the Partner Link swim lane in your BPEL process. This will pop up the following screen:

Calling the external web services

First enter a name for the partner link, for example, XigniteQuotes. Next we need to specify the WSDL file for the partner link. JDeveloper provides the following ways to do this:

  • SOA Resource Lookup: Allows us to browse the filesystem for WSDL files or any connected application server for deployed services
  • SOA Service Explorer: Allows us to browse other services that are defined within the composite (for example, other BPEL processes, Mediator, or external services)
  • Define Service: This enables us to define adapter services (refer to Chapter 3, Service-enabling Existing Systems) directly within the context of a BPEL process
  • WSDL URL: Directly enter the URL for the WSDL file into the corresponding field

For our reference, we have a local copy of the WSDL for Xignite's quotes service, called XigniteQuotes.wsdl, which is included with the samples for Chapter 5. Click on the SOA Resource Lookup … icon (circled in the preceding screenshot), then browse to and select this file (select Yes if prompted to create a local copy of the file).

JDeveloper will parse the WSDL, and assuming it is successful, it will pop up a window saying that there are no partner link types defined in the current WSDL and ask if you want to create partner links for the file. Click Yes. JDeveloper will then create one Partner Link Type for each port type defined in the WSDL. In cases where we have multiple partner link types, we will need to specify which one to use within our process. To do this, click on the drop-down list next to Partner Link Type and select the appropriate one. In our case, we have selected XigniteQuotesSoap_PL, as shown in the following screenshot:

Calling the external web services

Finally, we need to specify the Partner Role and My Role. When invoking a synchronous service, there will only be a single role defined in the WSDL, which represents the provider of the service. So specify this for the Partner Role and leave My Role as ----- Not Specified -----.

Note

Best practice would dictate that rather than calling the stock quote service directly from within BPEL, we would invoke it via the Oracle Service Bus. This is an area we look at more closely in Chapter 10, oBay Introduction when we define our blueprint for SOA.

If you look at the composite view, you will see that XigniteQuotes is defined as an External Reference and is wired to our BPEL process.

Calling the web service

Once we have defined a partner link for the web service, the next step is to call it. As this is a synchronous service, we will need to use an <invoke> activity to call it, as we described earlier in this chapter.

On the Component Palette, ensure that the BPEL Activities and Components section is expanded. Then from it, drag an Invoke activity on to your BPEL process.

Next, place your mouse over the arrow next to the Invoke activity. Click and hold your mouse button, drag the arrow over your partner link, and then release, as shown in the following screenshot:

Calling the web service

This will then pop up the Edit Invoke activity window, as shown in the following screenshot:

Calling the web service

We need to specify a number of values to configure the Invoke activity, namely:

  • Name: This is the name we want to assign to the Invoke activity, and can be any value. So just assign a meaningful value such as GetQuote.
  • Partner Link: This is the Partner Link whose service we want to invoke; it should already be set to use XigniteQuotes, as we have already linked this activity to that Partner Link. An alternate approach would be to click on the corresponding spotlight icon, which would allow us to select from any Partner Link already defined to the process.
  • Operation: Once we've specified a Partner Link, we need to specify which of its operations we wish to invoke. This presents us with a drop-down list, listing all the operations that are available, for our purpose, select GetSingleQuote.
  • Input: Here we must specify the variable that contains the data to be passed to the web service that's being invoked. It is important that the variable is of type Message, and that it is of the same message type expected by the Operation (that is, as defined in the WSDL file for the web service).

    The simplest way to ensure this is by getting JDeveloper to create the variable for you. To do this, click on the green plus sign to the right of the input variable field. This will bring up the Create Variable window, as shown in the following screenshot. You will notice that JDeveloper creates a default name for the variable (based on the name you gave the invoke operation and the operation that you are calling). You can override this with something more meaningful (for example, QuoteInput).

  • Output: Finally, we must specify the variable into which the value returned by the web service will be placed. As with the input variable, this should be of the type Message and corresponds to the output message defined in the WSDL file for the selected operation. Again, the simplest way to ensure this is to get JDeveloper to create the variable for you.
    Calling the web service

Once you've specified values for all these fields, as illustrated in the preceding screenshot, click OK.

Assigning values to variables

In our previous step, we created the variable QuoteInput, which we pass to our invocation of GetSingleQuote. However, we have yet to initialize the variable or assign any value to it.

To do this, BPEL provides the <assign> activity, which is used to update the values of variables with new data. The <assign> activity typically consists of one or more copy operations. Each copy consists of a target variable, that is, the variable that you wish to assign a value to and a source (this can either be another variable or an XPath expression).

For our purposes, we want to assign the stock symbol passed into our BPEL process to our QuoteInput variable.

To do this, drag an Assign activity from the Component Palette on to your BPEL process at the point just before our Invoke activity. Then double-click on it to open up the Assign configuration window. Click on the green plus sign and select Copy Operation….

This will present us with the Create Copy Operation window, as shown in the following screenshot:

Assigning values to variables

On the left-hand side, we specify the From variable (that is, the source). Here we want to specify the stock symbol passed in as part of the input variable to the BPEL process. So expand the inputVariable tree, and select /ns2:getQuote/ns2:stockSymbol.

For the target, expand QuoteInput and select /ns1:GetSingleQuote/ns1:Symbol.

You will notice that for both the source and target, JDeveloper has created the equivalent XPath expression (circled in the preceding screenshot).

Note

The source and target can either be a simple type (for example, xsd:int, xsd:date, or xsd:string), as in the preceding example. Or a complex type (for example, ns2:getQuote), but make sure the source and target are either of the same type, or at least compatible.

Testing the process

At this stage, even though the process isn't complete, we can still save, deploy, and run our composite. Do this in the same way as previously covered in Chapter 2, Writing your First Composite. When you run the composite from the console you will notice that it doesn't return anything (as we haven't specified this yet). But if you look at the audit trail, you should successfully see the GetSingleQuote operation being invoked. Assuming this is the case, we know we have implemented that part of the process correctly.

Calling the exchange rate web service

The next step of the process is to determine the exchange rate between the requested currency and the US dollar (the currency used by the GetSingleQuote operation). For this, we are going to use the currency convertor service provided by webserviceX.NET .

For more information on this and other services provided by webserviceX.NET, go to www.webservicex.net.

This service provides a single operation ConversionRate, which gets the conversion rate from one currency to another. The WSDL file for this service can be found at the following URL:

http://www.webservicex.net/CurrencyConvertor.asmx?wsdl

For convenience, we have included a local copy of the WSDL for webserviceX.NET's currency convertor service, called CurrencyConvertor.wsdl. It's included with the samples of Chapter 5.

To invoke the ConversionRate operation, we will follow the same basic steps that we did in the previous section to invoke the GetSingleQuote operation. For brevity, we won't repeat them here, but will allow the reader to do this.

Note

To follow the examples, name the input variable for the exchange rate web service ExchangeRateInput and the output variable ExchangeRateOutput.

Assigning constant values to variables

The operation ConversionRate takes two input values as follows:

  • FromCurrency: This should be set to 'USD'
  • ToCurrency: This should be set to the currency field contained within the inputVariable for the BPEL process.

To set the FromCurrency, create another copy operation. However, for the From value, select Expression as the Type (circled in the following screenshot).

This will replace the variable browser with a free format textbox. Here you can specify any value, within quotes, that you wish to assign to your target variable. For our purposes, enter 'USD', as shown in the following screenshot:

Assigning constant values to variables

To set the value of ToCurrency, create another copy operation and copy in the value of the currency field contained within the inputVariable.

At this stage again, save, deploy, and run the composite to validate that we are calling the exchange rate service correctly.

Using the expression builder

The final part of the process is to combine the exchange rate returned by one service with the stock price returned by the other, in order to determine the stock price in the requested currency and return that to the caller of the composite.

To do this, we will again use an <assign> activity. So drag another <assign> activity onto the process, just after our second invoke activity. Now in our previous use of the <assign> activity, we have just used it to copy a value from one variable to another.

Here, it is slightly different, in that we want to combine multiple values into a single value, and to do that, we will need to write the appropriate piece of XPath. Create a copy operation as before, but for the source type, select Expression from the drop-down list, as shown in the following screenshot:

Using the expression builder

Now, if you want, you can type in the XPath expression manually (into the Expression area), but it's far easier and less error prone to use the Expression Builder. To do this, click on the XPath expression builder icon; the calculator icon, which is circled in the preceding screenshot, will pop up the Expression Builder (shown below):

Using the expression builder

The Expression Builder provides a graphical tool for writing XPath expressions, which are executed as part of the copy operation. It consists of the following areas:

  • Expression: The top textbox contains the XPath expression that you are working on. You can either type data directly in here, or use the Expression Builder to insert XPath fragments to build up the XPath required.
  • BPEL variables: This part of the Expression Builder lets you browse the variables defined within your BPEL process. Once you've located the variable that you wish to use, click on the Insert Into Expression button, and this will insert the appropriate code fragment into the XPath expression.

    Note

    The code fragment is inserted at the point within the expression where the cursor is currently positioned.

  • Functions: This shows you all the different types of XPath functions that are available to build up your XPath expression. To make it easier to locate the required function, they are grouped into categories such as String Functions, Mathematical Functions, and so on.

    The drop-down list lets you select the category that you are interested in (for example, Mathematical Functions, as illustrated in the preceding screenshot), and then the window below that lists all the functions available to that group.

    To use a particular function, select the required function, and click Insert Into Expression. This will insert the appropriate XPath fragment into the XPath Expression (again at the point that the cursor is currently positioned).

  • Content Preview: This box displays a preview of the content that would be inserted into the XPath Expression if you clicked the Insert Into Expression button. For example, if you had currently selected a particular BPEL variable, it would show you the XPath to access that variable.
  • Description: If you've currently selected a function, this box provides a brief description of the function, as well as the expected usage and number of parameters.

So let's use this to build our XPath expression. The expression we want to build is a relatively simple one, namely, the stock price returned by the stock quote service multiplied by the exchange rate returned by the exchange rate service.

To build our XPath expression, carry out the following steps:

First, within the BPEL Variables area, in the variable QuoteOutput, locate the element ns1:GetSingleQuoteResult|ns1:Last, as shown in the following screenshot:

Using the expression builder

Then click Insert Into Expression to insert this into the XPath expression.

Next, within the Functions area, select the Mathematical Functions category, and select the multiply function (notice the description in the Description box, as shown in the following screenshot), and insert this into the XPath expression:

Using the expression builder

Finally, back in the BPEL Variables area, locate the element ConversionRateResult within the variable ExchangeRateOutput, and insert that into the XPath expression.

You should now have an XPath expression similar to the one illustrated below, once you are happy with it, click OK.

Using the expression builder

Finally make sure you specify the target part of the copy operation, which should be the amount element within the outputVariable.

In order to complete the <assign> activity, you will need to create two more copy operations to copy the Currency and StockSymbol specified in the inputVariable into the equivalent values in the outputVariable.

Once done, your BPEL process should be complete. So deploy and run the composite.

Asynchronous service

Following our StockQuote service, another service would be a stock order service, which would enable us to buy or sell a particular stock. For this service, a client would need to specify the stock, whether they wanted to buy or sell, the quantity, and the price.

It makes sense to make this an asynchronous service, as once the order has been placed, it may take seconds, minutes, hours, or even days for the order to be matched.

Now, I'm not aware of any trade services that are free to try (probably for a good reason!). However, there is no reason why we can't simulate one. To do this, we will write a simple asynchronous process.

Drag another BPEL process on to our StockService composite and give it the name StockOrder, but specify that it is an asynchronous BPEL process.

As with the StockQuote process, we also want to specify predefined elements for its input and output. The elements we are going to use are placeOrder for the input and placeOrderResponse for the output, the definitions for which are shown in the following code snippet:

<xsd:element name="placeOrder"         type="tPlaceOrder"/>
<xsd:element name="placeOrderResponse" type="tPlaceOrderResponse"/>

<xsd:complexType name="tPlaceOrder">
  <xsd:sequence>
    <xsd:element name="currency"    type="xsd:string"/>
    <xsd:element name="stockSymbol" type="xsd:string"/>
    <xsd:element name="buySell"     type="xsd:string"/>
    <xsd:element name="quantity"    type="xsd:integer"/>
    <xsd:element name="bidPrice"    type="xsd:decimal"/>
  </xsd:sequence>
</xsd:complexType>

<xsd:complexType name="tPlaceOrderResponse">
  <xsd:sequence>
    <xsd:element name="currency"    type="xsd:string"/>
    <xsd:element name="stockSymbol" type="xsd:string"/>
    <xsd:element name="buySell"     type="xsd:string"/>
    <xsd:element name="quantity"    type="xsd:integer"/>
    <xsd:element name="actualPrice" type="xsd:decimal"/>
  </xsd:sequence>
</xsd:complexType>

These are also defined in the StockService.xsd that we previously imported into the StockService composite. So, for each field, we click on the magnifying glass to bring up the type chooser and select the appropriate element definitions. Then click OK to create the process. This will create a second BPEL process within our composite, so double-click on this to open it.

You will see that, by default, JDeveloper has created a skeleton asynchronous BPEL process, which contains an initial <receive> activity to receive the stock order request. But this time it's followed by an <invoke> activity to send the result back (as opposed to a <reply> activity used by the synchronous process).

If you look at the WSDL for the process, you will see that it defines two operations: process to call the process, and processResponse, which will be called by the process to send back the result. Thus the client that calls the process operation will need to provide the processResponse callback in order to receive the result (this is something we will look at in more detail in Chapter 15, Message Interaction Patterns.

Now, for the purpose of our simulation, we will assume that the StockOrder request is successful and the actualPrice achieved is always the bid price. So to do this, create an assign operation that copies all the original input values to their corresponding output values. Deploy the composite, and run it from the console.

Note

When you click the Test Web Service button for the StockService composite, you will now be presented with two options: stockorder_client_ep and stockquote_client_ep. These correspond to each of the exposed services we have defined in our composite. Ensure you select stockorder_client_ep, which is wired to our StockOrder process.

This time, you will notice that no result is returned (as it's being processed asynchronously); rather it displays a message to indicate that the service was invoked successfully, as shown in the following screenshot:

Asynchronous service

Click on Launch Message Flow Trace to bring up the trace for the composite, and then select StockOrder to bring up the audit trail for the process. Switch to the flow view, and expand the callbackClient activity at the end of the trace. This will pop up a window showing the details of the response sent by our process, as shown in the following screenshot:

Asynchronous service

Using the wait activity

Now you've probably spotted the most obvious flaw with this simulation, in that the process returns a response almost immediately, which negates the whole point of making it asynchronous.

To make it more realistic, we will use the <wait> activity to wait for a period of time. To do this drag the <wait> activity from the Component Palette onto your BPEL process just before the <assign> activity, and then double-click on it to open the Wait activity window, as shown below.

The <wait> activity allows you to specify that the process wait for a specified duration of time or until a specified deadline. In either case, you specify a fixed value or choose to specify an XPath expression to evaluate the value at runtime.

If you specify Expression, and then click the calculator icon to the right of it, this will launch the Expression Builder that we introduced earlier in the chapter. The result of the expression must evaluate to a valid value of xsd:duration for periods and xsd:dateTime for deadlines. The format of xsd:duration is PnYnMnDTnHnMnS, for example. P1M would be a duration of 1 month and P10DT1H25M would be 10 days, 1 hour and 25 minutes.

For deadlines, the expression should evaluate to a valid value of xsd:date.

The structure of xsd:dateTime is YYYY-MM-DDThh:mm:ss+hh:mm, where the +hh:mm is optional and is the time period offset from UTC (or GMT, if you prefer). Obviously, the offset can be negative or positive.

For example, 2010-01-19T17:37:47-05:00 is the time 17:37:47 on January 19th 2010, 5 hours behind UTC (that is, Eastern Standard Time in the US).

Using the wait activity

For our purposes, we just need to wait for a relatively short period of time, so set it to wait for one minute.

Now save, deploy, and run the composite. If you now look at the audit trail of the process, you will see that it has paused on the <wait> activity (which will be highlighted in orange).

Improving the stock trade service

We have a very trivial trade service, which always results in a successful trade after one minute. Let's see if we can make it a bit more "realistic".

We will modify the process to call the stockQuote service and compare the actual price against the requested price. If the quote we get back matches or is better than the price specified, then we will return a successful trade (at the quoted price). Otherwise we will wait a minute and loop back round and try again.

Creating the while loop

The bulk of this process will now be contained within a while loop, so from the Process Activities list of the Component Palette, drag a While activity into the process.

Click on the plus symbol to expand the While activity. It will now display an area where you can drop a sequence of one or more activities that will be executed every time the process iterates through the loop.

Creating the while loop

We want to iterate through the loop until the trade has been fulfilled, so let's create a variable of type xsd:Boolean called tradeFulfilled and use an <assign> statement before the while loop to set its value to false.

The first step is to create a variable of type xsd:Boolean. Until now, we've used JDeveloper to automatically create the variables we've required, typically as part of the process of defining an Invoke activity. However, that's not an option here.

If you look at the diagram of your BPEL process, you will see that it is surrounded by a light grey dashed box, and on the top left-hand side there are a number of icons. If you click on the top one of these (x), as shown in the following screenshot, this will open a window that lists all the variables defined in the process:

Creating the while loop

At this stage, it will list just the default inputVariable and outputVariable, which were automatically created with the process. Click on the green plus button. This will bring up the Create Variable window, as shown in the following screenshot:

Creating the while loop

Here we simply specify the Name of the variable (for example, tradeFulfilled) and its Type. In our case, we want an xsd:Boolean, so select Simple Type and click the magnifying glass to the right of it.

This will bring up the Type Chooser, which will list all the simple built-in data types defined by XML Schema. Select Boolean and click OK.

We need to initialize the variable to false, so drag an <assign> statement on to your process just before the while loop. Use the function false(), under the category Logical Functions, to achieve this.

Next, we need to set the condition on the while loop, so that it will execute only while tradeFulfilled equals false. Double-click on the while loop. This will open the While activity window, as shown in the following screenshot:

Creating the while loop

We must now specify an XPath expression, which will evaluate to either true or false. If you click on the expression builder icon, which is circled in the preceding screenshot, this will launch the Expression Builder. Use this to build the following expression:

bpws:getVariableData('tradeFulfilled') = false()

Once we are happy with this, click OK.
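If you inspect the BPEL source at this point, the loop should have a shape similar to the following sketch, with the expression we just built carried on the condition attribute:

<while name="WhileTradeNotFulfilled"
       condition="bpws:getVariableData('tradeFulfilled') = false()">
  <sequence>
    <!-- the activities dropped into the loop run on each iteration -->
  </sequence>
</while>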

Checking the price

The first activity we need to perform within the while loop is to get a quote for the stock that we are trading. For this, we will need to invoke the stock quote process we created earlier. As both of these processes are in the same composite, the simplest way to do this is to wire them together.

Switch to the composite view in JDeveloper and place your mouse over the yellow arrow on the StockOrder process (the one used to add a new Reference). Click and hold your mouse button, drag the arrow onto the blue arrow on the StockQuote process (the one that represents its Service Interface), and then release, as shown in the following screenshot:

Checking the price

This will wire these two processes together and create a corresponding partner link in the StockOrder process. From here, implement the required steps to invoke the process operation of the StockQuote process, making sure that they are included within the while loop.
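Assuming the wiring created a partner link named StockQuote, and that you let JDeveloper generate the message variables (stockQuoteOutput is the name used in the conditions later in this section; stockQuoteInput is simply an assumed name), the resulting invoke inside the loop would look something like this sketch:

<!-- JDeveloper also fills in the portType attribute when you complete the Invoke dialog -->
<invoke name="InvokeStockQuote" partnerLink="StockQuote"
        operation="process"
        inputVariable="stockQuoteInput"
        outputVariable="stockQuoteOutput"/>

You will also need an <assign> before this invoke to copy the stock symbol and currency from the order into stockQuoteInput, just as we did when calling the external quote service.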

Using the switch activity

Remember, our requirement is to return success if the price matches or is better than the one specified in the order. Obviously, whether a price is better depends on whether we are selling or buying: if we are selling, we need the quoted price to be equal to or greater than the price specified in the order, whereas if we are buying, we need it to be equal to or less than that price.

So for this, we will introduce the <switch> activity. Drag a <switch> activity from the Process Activities list of the Component Palette on to your process after the invoke activity for the StockQuote service. Next, click on the plus symbol to expand the <switch> activity. By default, it will have two branches illustrated as follows:

The first branch contains a <case> condition, with a corresponding area where you can drop a sequence of one or more activities that will be executed if the condition evaluates to true.

The second branch contains an <otherwise> subactivity, with a corresponding area for activities. The activities in this branch will only be executed if all case conditions evaluate to false.

Using the switch activity

We want to cater to two separate tests (one for buying, the other for selling), so click on the Add Switch Case arrow (highlighted in the preceding screenshot) to add another <case> branch.

Next, we need to define the test condition for each <case>. To do this, click on the corresponding Expression Builder icon to launch the expression builder (circled in the preceding screenshot). For the first one, use the expression builder to create the following:

bpws:getVariableData('inputVariable', 'payload',
    '/ns1:placeOrder/ns1:buySell') = 'Buy'
and bpws:getVariableData('inputVariable', 'payload',
    '/ns1:placeOrder/ns1:bidPrice') >=
    bpws:getVariableData('stockQuoteOutput', 'payload',
    '/ns1:getQuoteResponse/ns1:amount')

For the second branch, use the expression builder to define the following:

bpws:getVariableData('inputVariable', 'payload',
    '/ns1:placeOrder/ns1:buySell') = 'Sell'
and bpws:getVariableData('inputVariable', 'payload',
    '/ns1:placeOrder/ns1:bidPrice') <=
    bpws:getVariableData('stockQuoteOutput', 'payload',
    '/ns1:getQuoteResponse/ns1:amount')

Once we've defined the condition for each case, we just need to create a single <assign> activity in each branch. This needs to set all the values in the outputVariable to the corresponding values in the inputVariable, except for the actualPrice element, which we should set to the value returned by the StockQuote process. Finally, we also need to set tradeFulfilled to true, so that we exit the while loop.

The simplest way to do this is to drag the original <assign> we created in the first version of this process onto the first branch and then modify it as appropriate. Then create a similar <assign> activity in the second branch.

Note

You've probably noticed that you could actually combine the two tests into a single test. However, we took this approach to illustrate how you can add multiple branches to a switch.

If we don't have a match, then we have to wait a minute and then circle back round the while loop and try again. As we've already defined a <wait> activity, simply drag this from its current position within the process into the activity area for the <otherwise> activity.
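Putting all of this together, the body of the loop should end up with a structure broadly along the following lines (the conditions are abbreviated, the names are illustrative, and most of the copy operations are elided):

<switch name="EvaluateQuote">
  <case condition="[buy test shown above]">
    <assign name="BuyOrderFilled">
      <copy>
        <from expression="true()"/>
        <to variable="tradeFulfilled"/>
      </copy>
      <!-- plus copies of the order details and the quoted price into outputVariable -->
    </assign>
  </case>
  <case condition="[sell test shown above]">
    <assign name="SellOrderFilled">
      <!-- as above, for the sell case -->
    </assign>
  </case>
  <otherwise>
    <wait name="WaitForOneMinute" for="'PT1M'"/>
  </otherwise>
</switch>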

That completes the process, so try deploying it and running it from the console.

Note

The other obvious issue is that this process could potentially run forever if we never get a stock quote in our favor. One way to solve this would be to put the while activity inside a scope and set a timeout on that scope, so that the order only remains open for a limited period.
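As a very rough sketch of that idea (the fault name here is made up, and you would still need a fault handler, or a callback to the client, to deal with the timeout):

<scope name="TradeWithTimeout">
  <eventHandlers>
    <!-- fires if the scope is still running after one day -->
    <onAlarm for="'P1D'">
      <throw faultName="client:orderTimedOut"/>
    </onAlarm>
  </eventHandlers>
  <!-- the while loop we built above becomes the scope's main activity -->
  <while condition="bpws:getVariableData('tradeFulfilled') = false()">
    <sequence>
      <!-- invoke StockQuote, switch, wait -->
    </sequence>
  </while>
</scope>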



That completes the process, so try deploying it and running it from the console.

Note

The other obvious thing is that this process could potentially run forever if we don't get a stock quote in our favor. One way to solve this would be to put the while activity in a scope and then set a timeout period on the scope so that it would only run for so long.

Assigning values to variables

In our previous step, we created the variable QuoteInput, which we pass to our invocation of GetSingleQuote. However, we have yet to initialize the variable or assign any value to it.

To do this, BPEL provides the <assign> activity, which is used to update the values of variables with new data. An <assign> activity typically consists of one or more copy operations. Each copy has a source (either another variable or an XPath expression) and a target, that is, the variable to which you wish to assign the value.

For our purposes, we want to assign the stock symbol passed into our BPEL process to our QuoteInput variable.

To do this, drag an Assign activity from the Component Palette on to your BPEL process at the point just before our Invoke activity. Then double-click on it to open up the Assign configuration window. Click on the green plus sign and select Copy Operation….

This will present us with the Create Copy Operation window, as shown in the following screenshot:

Assigning values to variables

On the left-hand side, we specify the From variable (that is, the source). Here we want to specify the stock symbol passed in as part of the input variable to the BPEL process. So expand the inputVariable tree, and select /ns2:getQuote/ns2:stockSymbol.

For the target, expand QuoteInput and select /ns1:GetSingleQuote/ns1:Symbol.

You will notice that for both the source and target, JDeveloper has created the equivalent XPath expression (circled in the preceding screenshot).

Note

The source and target can be either a simple type (for example, xsd:int, xsd:date, or xsd:string), as in the preceding example, or a complex type (for example, ns2:getQuote). Either way, make sure that the source and target are of the same type, or at least of compatible types.
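
To see what these copy rules actually produce, open the underlying .bpel source; the generated operation will look roughly like the following sketch. The activity name is illustrative, and the message part names (payload here) depend on the WSDLs in your project:

<assign name="AssignQuoteInput">
  <copy>
    <from variable="inputVariable" part="payload"
          query="/ns2:getQuote/ns2:stockSymbol"/>
    <to variable="QuoteInput" part="payload"
        query="/ns1:GetSingleQuote/ns1:Symbol"/>
  </copy>
</assign>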

Testing the process

At this stage, even though the process isn't complete, we can still save, deploy, and run our composite. Do this in the same way as previously covered in Chapter 2, Writing your First Composite. When you run the composite from the console, you will notice that it doesn't return anything (as we haven't specified the output yet). But if you look at the audit trail, you should see the GetSingleQuote operation being invoked successfully. Assuming this is the case, we know we have implemented that part of the process correctly.

Calling the exchange rate web service

The next step of the process is to determine the exchange rate between the requested currency and the US dollar (the currency used by the GetSingleQuote operation). For this, we are going to use the currency convertor service provided by webserviceX.NET.

For more information on this and other services provided by webserviceX.NET, go to www.webservicex.net.

This service provides a single operation ConversionRate, which gets the conversion rate from one currency to another. The WSDL file for this service can be found at the following URL:

http://www.webservicex.net/CurrencyConvertor.asmx?wsdl

For convenience, we have included a local copy of the WSDL for webserviceX.NET's currency convertor service, called CurrencyConvertor.wsdl. It's included with the samples of Chapter 5.

To invoke the ConversionRate operation, we will follow the same basic steps that we did in the previous section to invoke the GetSingleQuote operation. For brevity, we won't repeat them here; we leave this as an exercise for the reader.

Note

To follow the examples, name the input variable for the exchange rate web service ExchangeRateInput and the output variable ExchangeRateOutput.

Assigning constant values to variables

The operation ConversionRate takes two input values as follows:

  • FromCurrency: This should be set to 'USD'
  • ToCurrency: This should be set to the currency field contained within the inputVariable for the BPEL process.

To set the FromCurrency, create another copy operation. However, for the From value, select Expression as the Type (circled in the following screenshot).

This will replace the variable browser with a free format textbox. Here you can specify any value, within quotes, that you wish to assign to your target variable. For our purposes, enter 'USD', as shown in the following screenshot:

Assigning constant values to variables

To set the value of ToCurrency, create another copy operation and copy in the value of the currency field contained within the inputVariable.
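
In the .bpel source, the two copy operations come out looking something like the sketch below; the ns3 prefix, the parameters part name, and the exact path to the currency element in the inputVariable are assumptions and will vary with your WSDLs and schemas:

<assign name="AssignExchangeRateInput">
  <!-- constant value assigned via an expression -->
  <copy>
    <from expression="'USD'"/>
    <to variable="ExchangeRateInput" part="parameters"
        query="/ns3:ConversionRate/ns3:FromCurrency"/>
  </copy>
  <!-- currency requested by the caller of the composite -->
  <copy>
    <from variable="inputVariable" part="payload"
          query="/ns2:getQuote/ns2:currency"/>
    <to variable="ExchangeRateInput" part="parameters"
        query="/ns3:ConversionRate/ns3:ToCurrency"/>
  </copy>
</assign>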

At this stage again, save, deploy, and run the composite to validate that we are calling the exchange rate service correctly.

Using the expression builder

The final part of the process is to combine the exchange rate returned by one service with the stock price returned by the other, in order to determine the stock price in the requested currency and return that to the caller of the composite.

To do this, we will again use an <assign> activity. So drag another <assign> activity onto the process, just after our second invoke activity. Now in our previous use of the <assign> activity, we have just used it to copy a value from one variable to another.

Here, it is slightly different, in that we want to combine multiple values into a single value, and to do that, we will need to write the appropriate piece of XPath. Create a copy operation as before, but for the source type, select Expression from the drop-down list, as shown in the following screenshot:

Using the expression builder

Now, if you want, you can type the XPath expression manually (into the Expression area), but it's far easier and less error-prone to use the Expression Builder. To do this, click on the XPath expression builder icon (the calculator icon circled in the preceding screenshot); this will pop up the Expression Builder (shown below):

Using the expression builder

The Expression Builder provides a graphical tool for writing XPath expressions, which are executed as part of the copy operation. It consists of the following areas:

  • Expression: The top textbox contains the XPath expression that you are working on. You can either type data directly in here, or use the Expression Builder to insert XPath fragments to build up the XPath required.
  • BPEL variables: This part of the Expression Builder lets you browse the variables defined within your BPEL process. Once you've located the variable that you wish to use, click on the Insert Into Expression button, and this will insert the appropriate code fragment into the XPath expression.

    Note

    The code fragment is inserted at the point within the expression where the cursor is currently positioned.

  • Functions: This shows you all the different types of XPath functions that are available to build up your XPath expression. To make it easier to locate the required function, they are grouped into categories such as String Functions, Mathematical Functions, and so on.

    The drop-down list lets you select the category that you are interested in (for example, Mathematical Functions, as illustrated in the preceding screenshot), and then the window below that lists all the functions available to that group.

    To use a particular function, select the required function, and click Insert Into Expression. This will insert the appropriate XPath fragment into the XPath Expression (again at the point that the cursor is currently positioned).

  • Content Preview: This box displays a preview of the content that would be inserted into the XPath Expression if you clicked the Insert Into Expression button. For example, if you had currently selected a particular BPEL variable, it would show you the XPath to access that variable.
  • Description: If you've currently selected a function, this box provides a brief description of the function, as well as the expected usage and number of parameters.

So let's use this to build our XPath expression. The expression we want to build is a relatively simple one, namely, the stock price returned by the stock quote service multiplied by the exchange rate returned by the exchange rate service.

To build our XPath expression, carry out the following steps:

First, within the BPEL Variables area, in the variable QuoteOutput, locate the element ns1:Last within ns1:GetSingleQuoteResult, as shown in the following screenshot:

Using the expression builder

Then click Insert Into Expression to insert this into the XPath expression.

Next, within the Functions area, select the Mathematical Functions category, and select the multiply function (notice the description in the Description box, as shown in the following screenshot), and insert this into the XPath expression:

Using the expression builder

Finally, back in the BPEL Variables area, locate the element ConversionRateResult within the variable ExchangeRateOutput, and insert that into the XPath expression.

You should now have an XPath expression similar to the one illustrated below. Once you are happy with it, click OK.

Using the expression builder
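
The expression will be along the following lines; the part names and the exact element paths shown are indicative only, so use whatever the Expression Builder inserts for your variables:

bpws:getVariableData('QuoteOutput', 'parameters',
  '/ns1:GetSingleQuoteResponse/ns1:GetSingleQuoteResult/ns1:Last')
*
bpws:getVariableData('ExchangeRateOutput', 'parameters',
  '/ns3:ConversionRateResponse/ns3:ConversionRateResult')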

Finally, make sure you specify the target part of the copy operation, which should be the amount element within the outputVariable.

In order to complete the <assign> activity, you will need to create two more copy operations to copy the Currency and StockSymbol specified in the inputVariable into the equivalent values in the outputVariable.

Once done, your BPEL process should be complete. So deploy and run the composite.

Asynchronous service

Following our StockQuote service, another service would be a stock order service, which would enable us to buy or sell a particular stock. For this service, a client would need to specify the stock, whether they wanted to buy or sell, the quantity, and the price.

It makes sense to make this an asynchronous service, as once the order has been placed, it may take seconds, minutes, hours, or even days for the order to be matched.

Now, I'm not aware of any trade services that are free to try (probably for a good reason!). However, there is no reason why we can't simulate one. To do this, we will write a simple asynchronous process.

Drag another BPEL process on to our StockService composite and give it the name StockOrder, but specify that it is an asynchronous BPEL process.

As with the StockQuote process, we also want to specify predefined elements for its input and output. The elements we are going to use are placeOrder for the input and placeOrderResponse for the output, the definitions for which are shown in the following code snippet:

<xsd:element name="placeOrder"         type="tPlaceOrder"/>
<xsd:element name="placeOrderResponse" type="tPlaceOrderResponse"/>

<xsd:complexType name="tPlaceOrder">
  <xsd:sequence>
    <xsd:element name="currency"    type="xsd:string"/>
    <xsd:element name="stockSymbol" type="xsd:string"/>
    <xsd:element name="buySell"     type="xsd:string"/>
    <xsd:element name="quantity"    type="xsd:integer"/>
    <xsd:element name="bidPrice"    type="xsd:decimal"/>
  </xsd:sequence>
</xsd:complexType>

<xsd:complexType name="tPlaceOrderResponse">
  <xsd:sequence>
    <xsd:element name="currency"    type="xsd:string"/>
    <xsd:element name="stockSymbol" type="xsd:string"/>
    <xsd:element name="buySell"     type="xsd:string"/>
    <xsd:element name="quantity"    type="xsd:integer"/>
    <xsd:element name="actualPrice" type="xsd:decimal"/>
  </xsd:sequence>
</xsd:complexType>
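
For illustration, a request message conforming to the placeOrder element would look something like the following (namespace declarations omitted, and the values made up):

<placeOrder>
  <currency>USD</currency>
  <stockSymbol>ORCL</stockSymbol>
  <buySell>Buy</buySell>
  <quantity>100</quantity>
  <bidPrice>25.50</bidPrice>
</placeOrder>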

These elements are also defined in the StockService.xsd that we previously imported into the StockService composite. So, for each field, we click on the magnifying glass to bring up the type chooser and select the appropriate element definitions. Then click OK to create the process. This will create a second BPEL process within our composite, so double-click on this to open it.

You will see that, by default, JDeveloper has created a skeleton asynchronous BPEL process, which contains an initial <receive> activity to receive the stock order request. But this time it's followed by an <invoke> activity to send the result back (as opposed to a <reply> activity used by the synchronous process).

If you look at the WSDL for the process, you will see that it defines two operations: process, which is used to call the process, and processResponse, which will be called by the process to send back the result. Thus, the client that calls the process operation will need to provide the processResponse callback in order to receive the result (this is something we will look at in more detail in Chapter 15, Message Interaction Patterns).
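
The skeleton that JDeveloper generates will contain something along the lines of the following; the partner link and port type names shown are typical defaults and may differ in your project:

<!-- receive the stock order request and create a new process instance -->
<receive name="receiveInput" partnerLink="stockorder_client"
         portType="client:StockOrder" operation="process"
         variable="inputVariable" createInstance="yes"/>

<!-- ... the body of the process goes here ... -->

<!-- return the result via the callback operation, not a reply -->
<invoke name="callbackClient" partnerLink="stockorder_client"
        portType="client:StockOrderCallback" operation="processResponse"
        inputVariable="outputVariable"/>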

Now, for the purpose of our simulation, we will assume that the StockOrder request is successful and the actualPrice achieved is always the bid price. So to do this, create an assign operation that copies all the original input values to their corresponding output values. Deploy the composite, and run it from the console.
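
For reference, a sketch of this assign in the .bpel source is shown below; the prefixes and part names are assumptions, and the copies for stockSymbol, buySell, and quantity follow the same pattern as the one shown for currency:

<assign name="AssignOrderResult">
  <copy>
    <from variable="inputVariable" part="payload"
          query="/ns1:placeOrder/ns1:currency"/>
    <to variable="outputVariable" part="payload"
        query="/ns1:placeOrderResponse/ns1:currency"/>
  </copy>
  <!-- ...copies for stockSymbol, buySell, and quantity... -->
  <copy>
    <from variable="inputVariable" part="payload"
          query="/ns1:placeOrder/ns1:bidPrice"/>
    <to variable="outputVariable" part="payload"
        query="/ns1:placeOrderResponse/ns1:actualPrice"/>
  </copy>
</assign>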

Note

When you click the Test Web Service button for the StockService composite, you will now be presented with two options: stockorder_client_ep and stockquote_client_ep. These correspond to each of the exposed services we have defined in our composite. Ensure you select stockorder_client_ep, which is wired to our StockOrder process.

This time, you will notice that no result is returned (as it's being processed asynchronously); rather it displays a message to indicate that the service was invoked successfully, as shown in the following screenshot:

Asynchronous service

Click on Launch Message Flow Trace to bring up the trace for the composite, and then select StockOrder to bring up the audit trail for the process. Switch to the flow view, and expand the callbackClient activity at the end of the trace. This will pop up a window showing the details of the response sent by our process, as shown in the following screenshot:

Asynchronous service

Using the wait activity

Now you've probably spotted the most obvious flaw with this simulation, in that the process returns a response almost immediately, which negates the whole point of making it asynchronous.

To make it more realistic, we will use the <wait> activity to wait for a period of time. To do this, drag the <wait> activity from the Component Palette onto your BPEL process just before the <assign> activity, and then double-click on it to open the Wait activity window, as shown below.

The <wait> activity allows you to specify that the process waits either for a specified duration of time or until a specified deadline. In either case, you can specify a fixed value or an XPath expression that is evaluated at runtime.

If you specify Expression, and then click the calculator icon to the right of it, this will launch the Expression Builder that we introduced earlier in the chapter. The result of the expression must evaluate to a valid value of xsd:duration for periods and xsd:dateTime for deadlines. The format of xsd:duration is PnYnMnDTnHnMnS; for example, P1M would be a duration of 1 month, and P10DT1H25M would be 10 days, 1 hour, and 25 minutes.

For deadlines, the expression should evaluate to a valid value of xsd:dateTime.

The structure of xsd:dateTime is YYYY-MM-DDThh:mm:ss+hh:mm, where the +hh:mm is optional and is the time zone offset from UTC (or GMT, if you prefer). The offset can be either positive or negative.

For example, 2010-01-19T17:37:47-05:00 is the time 17:37:47 on January 19th 2010, 5 hours behind UTC (that is, Eastern Standard Time in the US).

Using the wait activity

For our purposes, we just need to wait for a relatively short period of time, so set it to wait for one minute.
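
In the source, this is simply a <wait> with a one-minute duration (PT1M in xsd:duration notation); the activity name is whatever you typed into the dialog:

<!-- pause the instance for one minute before carrying on -->
<wait name="Wait1" for="'PT1M'"/>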

Now save, deploy, and run the composite. If you now look at the audit trail of the process, you will see that it has paused on the <wait> activity (which will be highlighted in orange).

Improving the stock trade service

We have a very trivial trade service, which always results in a successful trade after one minute. Let's see if we can make it a bit more "realistic".

We will modify the process to call the stockQuote service and compare the actual price against the requested price. If the quote we get back matches or is better than the price specified, then we will return a successful trade (at the quoted price). Otherwise we will wait a minute and loop back round and try again.

Creating the while loop

The bulk of this process will now be contained within a while loop, so from the Process Activities list of the Component Palette, drag a While activity into the process.

Click on the plus symbol to expand the While activity. It will now display an area where you can drop a sequence of one or more activities that will be executed every time the process iterates through the loop.

Creating the while loop

We want to iterate through the loop until the trade has been fulfilled, so let's create a variable of type xsd:boolean called tradeFulfilled and use an <assign> statement before the while loop to set its value to false.

The first step is to create a variable of type xsd:boolean. Until now, we've used JDeveloper to automatically create the variables we've required, typically as part of the process of defining an Invoke activity. However, that's not an option here.

If you look at the diagram of your BPEL process, you will see that it is surrounded by a light grey dashed box, and on the top left-hand side there are a number of icons. If you click on the top one of these (x), as shown in the following screenshot, this will open a window that lists all the variables defined in the process:

Creating the while loop

At this stage, it will list just the default inputVariable and outputVariable, which were automatically created with the process. Click on the green plus button. This will bring up the Create Variable window, as shown in the following screenshot:

Creating the while loop

Here we simply specify the Name of the variable (for example, tradeFulfilled) and its Type. In our case, we want an xsd:boolean, so select Simple Type and click the magnifying glass to the right of it.

This will bring up the Type Chooser, which will list all the simple built-in data types defined by XML Schema. Select Boolean and click OK.

We need to initialize the variable to false, so drag an <assign> statement on to your process just before the while loop. Use the function false(), under the category Logical Functions, to achieve this.

Next, we need to set the condition on the while loop, so that it will execute only while tradeFulfilled equals false. Double-click on the while loop. This will open the While activity window, as shown in the following screenshot:

Creating the while loop

We must now specify an XPath expression, which will evaluate to either true or false. If you click on the expression builder icon, which is circled in the preceding screenshot, this will launch the Expression Builder. Use this to build the following expression:

bpws:getVariableData('tradeFulfilled') = false()

Once we are happy with this, click OK.
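
Putting these two steps together, the relevant fragments of the .bpel source will look roughly as follows; the activity names are illustrative:

<!-- initialize the flag before entering the loop -->
<assign name="InitTradeFulfilled">
  <copy>
    <from expression="false()"/>
    <to variable="tradeFulfilled"/>
  </copy>
</assign>

<!-- keep retrying until the trade has been fulfilled -->
<while name="WhileNotFulfilled"
       condition="bpws:getVariableData('tradeFulfilled') = false()">
  <sequence>
    <!-- get a quote, test it, and either complete the trade or wait -->
  </sequence>
</while>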

Checking the price

The first activity we need to perform within the while loop is to get a quote for the stock that we are trading. For this, we will need to invoke the stock quote process we created earlier. As both of these processes are in the same composite, the simplest way to do this is to wire them together.

Switch to the composite view in JDeveloper and place your mouse over the yellow arrow on the StockOrder process (the one used to add a new Reference). Click and hold the mouse button, drag the arrow onto the blue arrow on the StockQuote process (the one that represents its Service Interface), and then release, as shown in the following screenshot:

Checking the price

This will wire these two processes together and create a corresponding partner link in the StockOrder process. From here, implement the required steps to invoke the process operation of the StockQuote process, making sure that they are included within the while loop.

Using the switch activity

Remember, our requirement is that we return success if the price matches or is better than the one specified in the order. Obviously, whether the price is better depends on whether we are selling or buying. If we are selling, we need the price to be equal to or greater than the asking price; whereas if we are buying, we need the price to be equal to or less than the asking price.

So for this, we will introduce the <switch> activity. Drag a <switch> activity from the Process Activities list of the Component Palette on to your process after the invoke activity for the StockQuote service. Next, click on the plus symbol to expand the <switch> activity. By default, it will have two branches illustrated as follows:

The first branch contains a <case> condition, with a corresponding area where you can drop a sequence of one or more activities that will be executed if the condition evaluates to true.

The second branch contains an <otherwise> subactivity, with a corresponding area for activities. The activities in this branch will only be executed if all case conditions evaluate to false.

Using the switch activity

We want to cater to two separate tests (one for buying, the other for selling), so click on the Add Switch Case arrow (highlighted in the preceding screenshot) to add another <case> branch.

Next, we need to define the test condition for each <case>. To do this, click on the corresponding Expression Builder icon to launch the expression builder (circled in the preceding screenshot). For the first one, use the expression builder to create the following:

bpws:getVariableData('inputVariable', 'payload',
  '/ns1:placeOrder/ns1:buySell') = 'Buy'
and bpws:getVariableData('inputVariable', 'payload',
  '/ns1:placeOrder/ns1:bidPrice') >=
bpws:getVariableData('stockQuoteOutput', 'payload',
  '/ns1:getQuoteResponse/ns1:amount')

For the second branch, use the expression builder to define the following:

bpws:getVariableData('inputVariable', 'payload',
  '/ns1:placeOrder/ns1:buySell') = 'Sell'
and bpws:getVariableData('inputVariable', 'payload',
  '/ns1:placeOrder/ns1:bidPrice') <=
bpws:getVariableData('stockQuoteOutput', 'payload',
  '/ns1:getQuoteResponse/ns1:amount')

Once we've defined the condition for each case, we just need to create a single <assign> activity in each branch. This needs to set all the values in the outputVariable to the corresponding values in the inputVariable, except for the actualPrice element, which we should set to the value returned by the StockQuote process. Finally, we also need to set tradeFulfilled to true, so that we exit the while loop.

The simplest way to do this is by dragging the original <assign> we created in the first version of this process onto the first branch and then modifying it as appropriate. Then create a similar <assign> activity in the second branch.

Note

You've probably noticed that you could actually combine the two tests into a single test. However, we took this approach to illustrate how you can add multiple branches to a switch.

If we don't have a match, then we need to wait a minute, loop back round, and try again. As we've already defined a <wait> activity, simply drag it from its current position within the process into the activity area of the <otherwise> branch.
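
Putting the branches together, the structure of the switch in the .bpel source will be roughly as follows; the conditions are the two expressions built above, the assign activities are represented here by comments, and the names are illustrative:

<switch name="EvaluateQuote">
  <case condition="bpws:getVariableData('inputVariable','payload','/ns1:placeOrder/ns1:buySell') = 'Buy'
      and bpws:getVariableData('inputVariable','payload','/ns1:placeOrder/ns1:bidPrice') &gt;=
      bpws:getVariableData('stockQuoteOutput','payload','/ns1:getQuoteResponse/ns1:amount')">
    <!-- assign: copy the order details to outputVariable, set actualPrice
         to the quoted amount, and set tradeFulfilled to true -->
  </case>
  <case condition="bpws:getVariableData('inputVariable','payload','/ns1:placeOrder/ns1:buySell') = 'Sell'
      and bpws:getVariableData('inputVariable','payload','/ns1:placeOrder/ns1:bidPrice') &lt;=
      bpws:getVariableData('stockQuoteOutput','payload','/ns1:getQuoteResponse/ns1:amount')">
    <!-- assign: as above, but for the sell branch -->
  </case>
  <otherwise>
    <!-- no match yet: wait a minute, then the while loop tries again -->
    <wait name="Wait1" for="'PT1M'"/>
  </otherwise>
</switch>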

That completes the process, so try deploying it and running it from the console.

Note

The other obvious issue is that this process could potentially run forever if we never get a stock quote in our favor. One way to solve this would be to put the while activity in a scope and then set a timeout period on the scope, so that it would only run for so long.

Testing the process

At this stage, even though the process isn't complete, we can still save, deploy, and run our composite. Do this in the same way as previously covered in Chapter 2, Writing your First Composite. When you run the composite from the console you will notice that it doesn't return anything (as we haven't specified this yet). But if you look at the audit trail, you should successfully see the GetSingleQuote operation being invoked. Assuming this is the case, we know we have implemented that part of the process correctly.

Calling the exchange rate web service

The next step of the process is to determine the exchange rate between the requested currency and the US dollar (the currency used by the GetSingleQuote operation). For this, we are going to use the currency convertor service provided by webserviceX.NET .

For more information on this and other services provided by webserviceX.NET, go to www.webservicex.net.

This service provides a single operation ConversionRate, which gets the conversion rate from one currency to another. The WSDL file for this service can be found at the following URL:

http://www.webservicex.net/CurrencyConvertor.asmx?wsdl

For convenience, we have included a local copy of the WSDL for webserviceX.NET's currency convertor service, called CurrencyConvertor.wsdl. It's included with the samples of Chapter 5.

To invoke the ConversionRate operation, we will follow the same basic steps that we did in the previous section to invoke the GetSingleQuote operation. For brevity, we won't repeat them here, but will allow the reader to do this.

Note

To follow the examples, name the input variable for the exchange rate web service ExchangeRateInput and the output variable ExchangeRateOutput.

Assigning constant values to variables

The operation ConversionRate takes two input values as follows:

  • FromCurrency: This should be set to 'USD'
  • ToCurrency: This should be set to the currency field contained within the inputVariable for the BPEL process.

To set the FromCurrency, create another copy operation. However, for the From value, select Expression as the Type (circled in the following screenshot).

This will replace the variable browser with a free format textbox. Here you can specify any value, within quotes, that you wish to assign to your target variable. For our purposes, enter 'USD', as shown in the following screenshot:

Assigning constant values to variables

To set the value of ToCurrency, create another copy operation and copy in the value of the currency field contained within the inputVariable.

At this stage again, save, deploy, and run the composite to validate that we are calling the exchange rate service correctly.

Using the expression builder

The final part of the process is to combine the exchange rate returned by one service with the stock price returned by the other, in order to determine the stock price in the requested currency and return that to the caller of the composite.

To do this, we will again use an <assign> activity. So drag another <assign> activity onto the process, just after our second invoke activity. Now in our previous use of the <assign> activity, we have just used it to copy a value from one variable to another.

Here, it is slightly different, in that we want to combine multiple values into a single value, and to do that, we will need to write the appropriate piece of XPath. Create a copy operation as before, but for the source type, select Expression from the drop-down list, as shown in the following screenshot:

Using the expression builder

Now, if you want, you can type in the XPath expression manually (into the Expression area), but it's far easier and less error prone to use the Expression Builder. To do this, click on the XPath expression builder icon; the calculator icon, which is circled in the preceding screenshot, will pop up the Expression Builder (shown below):

Using the expression builder

The Expression Builder provides a graphical tool for writing XPath expressions, which are executed as part of the copy operation. It consists of the following areas:

  • Expression: The top textbox contains the XPath expression that you are working on. You can either type data directly in here, or use the Expression Builder to insert XPath fragments to build up the XPath required.
  • BPEL variables: This part of the Expression Builder lets you browse the variables defined within your BPEL process. Once you've located the variable that you wish to use, click on the Insert Into Expression button, and this will insert the appropriate code fragment into the XPath expression.

    Note

    The code fragment is inserted at the point within the expression where the cursor is currently positioned.

  • Functions: This shows you all the different types of XPath functions that are available to build up your XPath expression. To make it easier to locate the required function, they are grouped into categories such as String Functions, Mathematical Functions, and so on.

    The drop-down list lets you select the category that you are interested in (for example, Mathematical Functions, as illustrated in the preceding screenshot), and then the window below that lists all the functions available to that group.

    To use a particular function, select the required function, and click Insert Into Expression. This will insert the appropriate XPath fragment into the XPath Expression (again at the point that the cursor is currently positioned).

  • Content Preview: This box displays a preview of the content that would be inserted into the XPath Expression if you clicked the Insert Into Expression button. For example, if you had currently selected a particular BPEL variable, it would show you the XPath to access that variable.
  • Description: If you've currently selected a function, this box provides a brief description of the function, as well as the expected usage and number of parameters.

So let's use this to build our XPath expression. The expression we want to build is a relatively simple one, namely, the stock price returned by the stock quote service multiplied by the exchange rate returned by the exchange rate service.

To build our XPath expression, carry out the following steps:

First, within the BPEL Variables area, in the variable QuoteOutput, locate the element ns1:GetSingleQuoteResult|ns1:Last, as shown in the following screenshot:

Using the expression builder

Then click Insert Into Expression to insert this into the XPath expression.

Next, within the Functions area, select the Mathematical Functions category, and select the multiply function (notice the description in the Description box, as shown in the following screenshot), and insert this into the XPath expression:

Using the expression builder

Finally, back in the BPEL Variables area, locate the element ConversionRateResult within the variable ExchangeRateOutput, and insert that into the XPath expression.

You should now have an XPath expression similar to the one illustrated below, once you are happy with it, click OK.

Using the expression builder

Finally make sure you specify the target part of the copy operation, which should be the amount element within the outputVariable.

In order to complete the <assign> activity, you will need to create two more copy operations to copy the Currency and StockSymbol specified in the inputVariable into the equivalent values in the outputVariable.

Once done, your BPEL process should be complete. So deploy and run the composite.

Asynchronous service

Following our StockQuote service, another service would be a stock order service, which would enable us to buy or sell a particular stock. For this service, a client would need to specify the stock, whether they wanted to buy or sell, the quantity, and the price.

It makes sense to make this an asynchronous service, as once the order has been placed, it may take seconds, minutes, hours, or even days for the order to be matched.

Now, I'm not aware of any trade services that are free to try (probably for a good reason!). However, there is no reason why we can't simulate one. To do this, we will write a simple asynchronous process.

Drag another BPEL process on to our StockService composite and give it the name StockOrder, but specify that it is an asynchronous BPEL process.

As with the StockQuote process, we also want to specify predefined elements for its input and output. The elements we are going to use are placeOrder for the input and placeOrderResponse for the output, the definitions for which are shown in the following code snippet:

<xsd:element name="placeOrder"         type="tPlaceOrder"/>
<xsd:element name="placeOrderResponse" type="tPlaceOrderResponse"/>

<xsd:complexType name="tPlaceOrder">
  <xsd:sequence>
    <xsd:element name="currency"    type="xsd:string"/>
    <xsd:element name="stockSymbol" type="xsd:string"/>
    <xsd:element name="buySell"     type="xsd:string"/>
    <xsd:element name="quantity"    type="xsd:integer"/>
    <xsd:element name="bidPrice"    type="xsd:decimal"/>
  </xsd:sequence>
</xsd:complexType>

<xsd:complexType name="tPlaceOrderResponse">
  <xsd:sequence>
    <xsd:element name="currency"    type="xsd:string"/>
    <xsd:element name="stockSymbol" type="xsd:string"/>
    <xsd:element name="buySell"     type="xsd:string"/>
    <xsd:element name="quantity"    type="xsd:integer"/>
    <xsd:element name="actualPrice" type="xsd:decimal"/>
  </xsd:sequence>
</xsd:complexType>

These are also defined in the StockService.xsd that we previously imported into the StockService composite. So, for each field, we click on the magnifying glass to bring up the type chooser and select the appropriate element definitions. Then click OK to create the process. This will create a second BPEL process within our composite, so double-click on this to open it.

You will see that, by default, JDeveloper has created a skeleton asynchronous BPEL process, which contains an initial <receive> activity to receive the stock order request. But this time it's followed by an <invoke> activity to send the result back (as opposed to a <reply> activity used by the synchronous process).

If you look at the WSDL for the process, you will see that it defines two operations: process to call the process, and processResponse, which will be called by the process to send back the result. Thus the client that calls the process operation will need to provide the processResponse callback in order to receive the result (this is something we will look at in more detail in Chapter 15, Message Interaction Patterns.

Now, for the purpose of our simulation, we will assume that the StockOrder request is successful and the actualPrice achieved is always the bid price. So to do this, create an assign operation that copies all the original input values to their corresponding output values. Deploy the composite, and run it from the console.

Note

When you click the Test Web Service button for the StockService composite, you will now be presented with two options: stockorder_client_ep and stockquote_client_ep. These correspond to each of the exposed services we have defined in our composite. Ensure you select stockorder_client_ep, which is wired to our StockOrder process.

This time, you will notice that no result is returned (as it's being processed asynchronously); rather it displays a message to indicate that the service was invoked successfully, as shown in the following screenshot:

Asynchronous service

Click on Launch Message Flow Trace to bring up the trace for the composite, and then select StockOrder to bring up the audit trail for the process. Switch to the flow view, and expand the callbackClient activity at the end of the trace. This will pop up a window showing the details of the response sent by our process, as shown in the following screenshot:

Asynchronous service

Using the wait activity

Now you've probably spotted the most obvious flaw with this simulation, in that the process returns a response almost immediately, which negates the whole point of making it asynchronous.

To make it more realistic, we will use the <wait> activity to wait for a period of time. To do this drag the <wait> activity from the Component Palette onto your BPEL process just before the <assign> activity, and then double-click on it to open the Wait activity window, as shown below.

The <wait> activity allows you to specify that the process wait for a specified duration of time or until a specified deadline. In either case, you specify a fixed value or choose to specify an XPath expression to evaluate the value at runtime.

If you specify Expression, and then click the calculator icon to the right of it, this will launch the Expression Builder that we introduced earlier in the chapter. The result of the expression must evaluate to a valid value of xsd:duration for periods and xsd:dateTime for deadlines. The format of xsd:duration is PnYnMnDTnHnMnS, for example. P1M would be a duration of 1 month and P10DT1H25M would be 10 days, 1 hour and 25 minutes.

For deadlines, the expression should evaluate to a valid value of xsd:date.

The structure of xsd:dateTime is YYYY-MM-DDThh:mm:ss+hh:mm, where the +hh:mm is optional and is the time period offset from UTC (or GMT, if you prefer). Obviously, the offset can be negative or positive.

For example, 2010-01-19T17:37:47-05:00 is the time 17:37:47 on January 19th 2010, 5 hours behind UTC (that is, Eastern Standard Time in the US).

Using the wait activity

For our purposes, we just need to wait for a relatively short period of time, so set it to wait for one minute.

Now save, deploy, and run the composite. If you now look at the audit trail of the process, you will see that it has paused on the <wait> activity (which will be highlighted in orange).

Improving the stock trade service

We have a very trivial trade service, which always results in a successful trade after one minute. Let's see if we can make it a bit more "realistic".

We will modify the process to call the stockQuote service and compare the actual price against the requested price. If the quote we get back matches or is better than the price specified, then we will return a successful trade (at the quoted price). Otherwise we will wait a minute and loop back round and try again.

Creating the while loop

The bulk of this process will now be contained within a while loop, so from the Process Activities list of the Component Palette, drag a While activity into the process.

Click on the plus symbol to expand the While activity. It will now display an area where you can drop a sequence of one or more activities that will be executed every time the process iterates through the loop.

Creating the while loop

We want to iterate through the loop until the trade has been fulfilled, so let's create a variable of type xsd:Boolean called tradeFulfilled and use an <assign> statement before the while loop to set its value to false.

The first step is to create a variable of type xsd:Boolean. Until now, we've used JDeveloper to automatically create the variables we've required, typically as part of the process of defining an Invoke activity. However, that's not an option here.

If you look at the diagram of your BPEL process, you will see that it is surrounded by a light grey dashed box, and on the top left-hand side there are a number of icons. If you click on the top one of these (x), as shown in the following screenshot, this will open a window that lists all the variables defined in the process:

Creating the while loop

At this stage, it will list just the default inputVariable and outputVariable, which were automatically created with the process. Click on the green plus button. This will bring up the Create Variable window, as shown in the following screenshot:

Creating the while loop

Here we simply specify the Name of the variable (for example, tradeFulfilled) and its Type. In our case, we want an xsd:Boolean, so select Simple Type and click the magnifying glass to the right of it.

This will bring up the Type Chooser, which will list all the simple built-in data types defined by XML Schema. Select Boolean and click OK.

We need to initialize the variable to false, so drag an <assign> statement on to your process just before the while loop. Use the function false(), under the category Logical Functions, to achieve this.

Next, we need to set the condition on the while loop, so that it will execute only while tradeFulfilled equals false. Double-click on the while loop. This will open the While activity window, as shown in the following screenshot:

Creating the while loop

We must now specify an XPath expression, which will evaluate to either true or false. If you click on the expression builder icon, which is circled in the preceding screenshot, this will launch the Expression Builder. Use this to build the following expression:

bpws:getVariableData('tradeFullfilled') = false()

Once we are happy with this, click OK.

Checking the price

The first activity we need to perform within the while loop is to get a quote for the stock that we are trading. For this, we will need to invoke the stock quote process we created earlier. As both of these processes are in the same composite, the simplest way to do this is to wire them together.

Switch to the composite view in JDeveloper, next place your mouse over the yellow arrow on the StockOrder process (the one to add a new Reference). Click and hold your mouse button, then drag the arrow onto the blue arrow on the StockQuote process (the one that represents the Service Interface), then release, as shown in the following screenshot:

Checking the price

This will wire these two processes together and create a corresponding partner link in the StockOrder process. From here, implement the required steps to invoke the process operation of the StockQuote process, making sure that they are included within the while loop.

Using the switch activity

Remember our requirement is that we return success if the price matches or is better than the one specified in the order. Obviously, whether the price is better depends on whether we are selling or buying. If we are selling we need the price to be equal to or greater than the asking price; whereas if we are buying, we need the price to be equal to or less than the asking price.

So for this, we will introduce the <switch> activity. Drag a <switch> activity from the Process Activities list of the Component Palette on to your process after the invoke activity for the StockQuote service. Next, click on the plus symbol to expand the <switch> activity. By default, it will have two branches illustrated as follows:

The first branch contains a <case> condition, with a corresponding area where you can drop a sequence of one or more activities that will be executed if the condition evaluates to true.

The second branch contains an <otherwise> subactivity, with a corresponding area for activities. The activities in this branch will only be executed if all case conditions evaluate to false.

Using the switch activity

We want to cater to two separate tests (one for buying the other for selling), so click on the Add Switch Case arrow (highlighted in the preceding screenshot) to add another <case> branch.

Next, we need to define the test condition for each <case>. To do this, click on the corresponding Expression Builder icon to launch the expression builder (circled in the preceding screenshot). For the first one, use the expression builder to create the following:

bpws:getVariableData (	'inputVariable','payload',
	'/ns1:PlaceOrder/ns1:BuySell') = 'Buy' and 
bpws:getVariableData (	'inputVariable', 'payload', 
	'/ns1:PlaceOrder/ns1:BidPrice') >= 
bpws:getVariableData (	'stockQuoteOutput', 'payload', 
	'/ns1:getQuoteResponse/ns1:Amount')

For the second branch, use the expression builder to define the following:

bpws:getVariableData (	'inputVariable','payload',
	'/ns1:PlaceOrder/ns1:BuySell') = 'Sell' and 
bpws:getVariableData (	'inputVariable', 'payload', 
	'/ns1:PlaceOrder/ns1:BidPrice') <=
bpws:getVariableData (	'stockQuoteOutput', 'payload', 
'/ns1:getQuoteResponse/ns1:Amount')

Once we've defined the condition for each case, we just need to create a single <assign> activity in each branch. This needs to set all the values in the outputVariable to the corresponding values in the inputVariable, except for the ActualPrice element, which we should set to the value returned by the StockQuote process. Finally, we also need to set tradeFullfilled to true, so that we exit the while loop.

The simplest way to do this is by dragging the original <assign> we created in the first version of this process onto the first branch and then modify it as appropriate. Then create a similar <assign> activity in the second branch.

Note

You've probably noticed that you could actually combine the two tests into a single test. However, we took this approach to illustrate how you can add multiple branches to a switch.

If we don't have a match, then we have to wait a minute and then circle back round the while loop and try again. As we've already defined a <wait> activity, simply drag this from its current position within the process into the activity area for the <otherwise> activity.

That completes the process, so try deploying it and running it from the console.

Note

The other obvious thing is that this process could potentially run forever if we don't get a stock quote in our favor. One way to solve this would be to put the while activity in a scope and then set a timeout period on the scope so that it would only run for so long.

Calling the exchange rate web service

The next step of the process is to determine the exchange rate between the requested currency and the US dollar (the currency used by the GetSingleQuote operation). For this, we are going to use the currency convertor service provided by webserviceX.NET .

For more information on this and other services provided by webserviceX.NET, go to www.webservicex.net.

This service provides a single operation ConversionRate, which gets the conversion rate from one currency to another. The WSDL file for this service can be found at the following URL:

http://www.webservicex.net/CurrencyConvertor.asmx?wsdl

For convenience, we have included a local copy of the WSDL for webserviceX.NET's currency convertor service, called CurrencyConvertor.wsdl. It's included with the samples of Chapter 5.

To invoke the ConversionRate operation, we will follow the same basic steps that we did in the previous section to invoke the GetSingleQuote operation. For brevity, we won't repeat them here, but will allow the reader to do this.

Note

To follow the examples, name the input variable for the exchange rate web service ExchangeRateInput and the output variable ExchangeRateOutput.

Assigning constant values to variables

The operation ConversionRate takes two input values as follows:

  • FromCurrency: This should be set to 'USD'
  • ToCurrency: This should be set to the currency field contained within the inputVariable for the BPEL process.

To set the FromCurrency, create another copy operation. However, for the From value, select Expression as the Type (circled in the following screenshot).

This will replace the variable browser with a free format textbox. Here you can specify any value, within quotes, that you wish to assign to your target variable. For our purposes, enter 'USD', as shown in the following screenshot:

Assigning constant values to variables

To set the value of ToCurrency, create another copy operation and copy in the value of the currency field contained within the inputVariable.

At this stage again, save, deploy, and run the composite to validate that we are calling the exchange rate service correctly.

Using the expression builder

The final part of the process is to combine the exchange rate returned by one service with the stock price returned by the other, in order to determine the stock price in the requested currency and return that to the caller of the composite.

To do this, we will again use an <assign> activity. So drag another <assign> activity onto the process, just after our second invoke activity. Now in our previous use of the <assign> activity, we have just used it to copy a value from one variable to another.

Here, it is slightly different, in that we want to combine multiple values into a single value, and to do that, we will need to write the appropriate piece of XPath. Create a copy operation as before, but for the source type, select Expression from the drop-down list, as shown in the following screenshot:

Using the expression builder

Now, if you want, you can type in the XPath expression manually (into the Expression area), but it's far easier and less error prone to use the Expression Builder. To do this, click on the XPath expression builder icon; the calculator icon, which is circled in the preceding screenshot, will pop up the Expression Builder (shown below):

Using the expression builder

The Expression Builder provides a graphical tool for writing XPath expressions, which are executed as part of the copy operation. It consists of the following areas:

  • Expression: The top textbox contains the XPath expression that you are working on. You can either type data directly in here, or use the Expression Builder to insert XPath fragments to build up the XPath required.
  • BPEL variables: This part of the Expression Builder lets you browse the variables defined within your BPEL process. Once you've located the variable that you wish to use, click on the Insert Into Expression button, and this will insert the appropriate code fragment into the XPath expression.

    Note

    The code fragment is inserted at the point within the expression where the cursor is currently positioned.

  • Functions: This shows you all the different types of XPath functions that are available to build up your XPath expression. To make it easier to locate the required function, they are grouped into categories such as String Functions, Mathematical Functions, and so on.

    The drop-down list lets you select the category that you are interested in (for example, Mathematical Functions, as illustrated in the preceding screenshot), and then the window below that lists all the functions available to that group.

    To use a particular function, select the required function, and click Insert Into Expression. This will insert the appropriate XPath fragment into the XPath Expression (again at the point that the cursor is currently positioned).

  • Content Preview: This box displays a preview of the content that would be inserted into the XPath Expression if you clicked the Insert Into Expression button. For example, if you had currently selected a particular BPEL variable, it would show you the XPath to access that variable.
  • Description: If you've currently selected a function, this box provides a brief description of the function, as well as the expected usage and number of parameters.

So let's use this to build our XPath expression. The expression we want to build is a relatively simple one, namely, the stock price returned by the stock quote service multiplied by the exchange rate returned by the exchange rate service.

To build our XPath expression, carry out the following steps:

First, within the BPEL Variables area, in the variable QuoteOutput, locate the element ns1:GetSingleQuoteResult|ns1:Last, as shown in the following screenshot:

Using the expression builder

Then click Insert Into Expression to insert this into the XPath expression.

Next, within the Functions area, select the Mathematical Functions category, and select the multiply function (notice the description in the Description box, as shown in the following screenshot), and insert this into the XPath expression:

Using the expression builder

Finally, back in the BPEL Variables area, locate the element ConversionRateResult within the variable ExchangeRateOutput, and insert that into the XPath expression.

You should now have an XPath expression similar to the one illustrated below, once you are happy with it, click OK.

Using the expression builder

Finally make sure you specify the target part of the copy operation, which should be the amount element within the outputVariable.

In order to complete the <assign> activity, you will need to create two more copy operations to copy the Currency and StockSymbol specified in the inputVariable into the equivalent values in the outputVariable.

Once done, your BPEL process should be complete. So deploy and run the composite.

Asynchronous service

Following our StockQuote service, another service would be a stock order service, which would enable us to buy or sell a particular stock. For this service, a client would need to specify the stock, whether they wanted to buy or sell, the quantity, and the price.

It makes sense to make this an asynchronous service, as once the order has been placed, it may take seconds, minutes, hours, or even days for the order to be matched.

Now, I'm not aware of any trade services that are free to try (probably for a good reason!). However, there is no reason why we can't simulate one. To do this, we will write a simple asynchronous process.

Drag another BPEL process on to our StockService composite and give it the name StockOrder, but specify that it is an asynchronous BPEL process.

As with the StockQuote process, we also want to specify predefined elements for its input and output. The elements we are going to use are placeOrder for the input and placeOrderResponse for the output, the definitions for which are shown in the following code snippet:

<xsd:element name="placeOrder"         type="tPlaceOrder"/>
<xsd:element name="placeOrderResponse" type="tPlaceOrderResponse"/>

<xsd:complexType name="tPlaceOrder">
  <xsd:sequence>
    <xsd:element name="currency"    type="xsd:string"/>
    <xsd:element name="stockSymbol" type="xsd:string"/>
    <xsd:element name="buySell"     type="xsd:string"/>
    <xsd:element name="quantity"    type="xsd:integer"/>
    <xsd:element name="bidPrice"    type="xsd:decimal"/>
  </xsd:sequence>
</xsd:complexType>

<xsd:complexType name="tPlaceOrderResponse">
  <xsd:sequence>
    <xsd:element name="currency"    type="xsd:string"/>
    <xsd:element name="stockSymbol" type="xsd:string"/>
    <xsd:element name="buySell"     type="xsd:string"/>
    <xsd:element name="quantity"    type="xsd:integer"/>
    <xsd:element name="actualPrice" type="xsd:decimal"/>
  </xsd:sequence>
</xsd:complexType>

These are also defined in the StockService.xsd that we previously imported into the StockService composite. So, for each field, we click on the magnifying glass to bring up the type chooser and select the appropriate element definitions. Then click OK to create the process. This will create a second BPEL process within our composite, so double-click on this to open it.

You will see that, by default, JDeveloper has created a skeleton asynchronous BPEL process, which contains an initial <receive> activity to receive the stock order request. But this time it's followed by an <invoke> activity to send the result back (as opposed to a <reply> activity used by the synchronous process).

If you look at the WSDL for the process, you will see that it defines two operations: process, to call the process, and processResponse, which will be called by the process to send back the result. Thus, the client that calls the process operation will need to provide the processResponse callback in order to receive the result (this is something we will look at in more detail in Chapter 15, Message Interaction Patterns).
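As a rough guide, the skeleton JDeveloper generates for an asynchronous process looks something like the following sketch (BPEL 1.1 style; the partner link, port type, and variable names shown are only the typical defaults and may differ in your project):

<sequence name="main">
  <!-- Receive the placeOrder request from the client; this creates the process instance -->
  <receive name="receiveInput" partnerLink="stockorder_client"
           portType="client:StockOrder" operation="process"
           variable="inputVariable" createInstance="yes"/>

  <!-- Order handling (assigns, waits, and so on) goes here -->

  <!-- Return the result by invoking the client's callback operation -->
  <invoke name="callbackClient" partnerLink="stockorder_client"
          portType="client:StockOrderCallback" operation="processResponse"
          inputVariable="outputVariable"/>
</sequence>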

Now, for the purpose of our simulation, we will assume that the StockOrder request is successful and the actualPrice achieved is always the bid price. So to do this, create an assign operation that copies all the original input values to their corresponding output values. Deploy the composite, and run it from the console.

Note

When you click the Test Web Service button for the StockService composite, you will now be presented with two options: stockorder_client_ep and stockquote_client_ep. These correspond to each of the exposed services we have defined in our composite. Ensure you select stockorder_client_ep, which is wired to our StockOrder process.

This time, you will notice that no result is returned (as it's being processed asynchronously); rather it displays a message to indicate that the service was invoked successfully, as shown in the following screenshot:

Asynchronous service

Click on Launch Message Flow Trace to bring up the trace for the composite, and then select StockOrder to bring up the audit trail for the process. Switch to the flow view, and expand the callbackClient activity at the end of the trace. This will pop up a window showing the details of the response sent by our process, as shown in the following screenshot:

Asynchronous service

Using the wait activity

Now you've probably spotted the most obvious flaw with this simulation, in that the process returns a response almost immediately, which negates the whole point of making it asynchronous.

To make it more realistic, we will use the <wait> activity to wait for a period of time. To do this, drag the <wait> activity from the Component Palette onto your BPEL process just before the <assign> activity, and then double-click on it to open the Wait activity window, as shown below.

The <wait> activity allows you to specify that the process waits either for a specified duration of time or until a specified deadline. In either case, you can specify a fixed value or an XPath expression that is evaluated at runtime.

If you specify Expression, and then click the calculator icon to the right of it, this will launch the Expression Builder that we introduced earlier in the chapter. The result of the expression must evaluate to a valid value of xsd:duration for durations and xsd:dateTime for deadlines. The format of xsd:duration is PnYnMnDTnHnMnS. For example, P1M would be a duration of 1 month, and P10DT1H25M would be 10 days, 1 hour, and 25 minutes.

For deadlines, the expression should evaluate to a valid value of xsd:dateTime.

The structure of xsd:dateTime is YYYY-MM-DDThh:mm:ss+hh:mm, where the +hh:mm part is optional and is the time zone offset from UTC (or GMT, if you prefer). The offset can, of course, be negative or positive.

For example, 2010-01-19T17:37:47-05:00 is the time 17:37:47 on January 19th 2010, 5 hours behind UTC (that is, Eastern Standard Time in the US).

Using the wait activity

For our purposes, we just need to wait for a relatively short period of time, so set it to wait for one minute.
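If you look at the underlying BPEL source afterwards, the activity will look something like this sketch (BPEL 1.1 attribute style; PT1M is simply the xsd:duration value for one minute, and the activity name is whatever JDeveloper assigned):

<!-- Wait for one minute before carrying on -->
<wait name="Wait_1" for="'PT1M'"/>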

Now save, deploy, and run the composite. If you now look at the audit trail of the process, you will see that it has paused on the <wait> activity (which will be highlighted in orange).

Improving the stock trade service

We have a very trivial trade service, which always results in a successful trade after one minute. Let's see if we can make it a bit more "realistic".

We will modify the process to call the stockQuote service and compare the actual price against the requested price. If the quote we get back matches or is better than the price specified, then we will return a successful trade (at the quoted price). Otherwise we will wait a minute and loop back round and try again.

Creating the while loop

The bulk of this process will now be contained within a while loop, so from the Process Activities list of the Component Palette, drag a While activity into the process.

Click on the plus symbol to expand the While activity. It will now display an area where you can drop a sequence of one or more activities that will be executed every time the process iterates through the loop.

Creating the while loop

We want to iterate through the loop until the trade has been fulfilled, so let's create a variable of type xsd:boolean called tradeFulfilled and use an <assign> statement before the while loop to set its value to false.

The first step is to create the xsd:boolean variable. Until now, we've used JDeveloper to automatically create the variables we've required, typically as part of the process of defining an Invoke activity. However, that's not an option here.

If you look at the diagram of your BPEL process, you will see that it is surrounded by a light grey dashed box, and on the top left-hand side there are a number of icons. If you click on the top one of these (x), as shown in the following screenshot, this will open a window that lists all the variables defined in the process:

Creating the while loop

At this stage, it will list just the default inputVariable and outputVariable, which were automatically created with the process. Click on the green plus button. This will bring up the Create Variable window, as shown in the following screenshot:

Creating the while loop

Here we simply specify the Name of the variable (for example, tradeFulfilled) and its Type. In our case, we want an xsd:boolean, so select Simple Type and click the magnifying glass to the right of it.

This will bring up the Type Chooser, which will list all the simple built-in data types defined by XML Schema. Select Boolean and click OK.

We need to initialize the variable to false, so drag an <assign> statement on to your process just before the while loop. Use the function false(), under the category Logical Functions, to achieve this.
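In the BPEL source, the resulting initialization looks roughly like this sketch (Oracle BPEL 1.1 assign style; the activity name is purely illustrative):

<!-- Initialize tradeFulfilled to false before entering the while loop -->
<assign name="InitTradeFulfilled">
  <copy>
    <from expression="false()"/>
    <to variable="tradeFulfilled"/>
  </copy>
</assign>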

Next, we need to set the condition on the while loop, so that it will execute only while tradeFulfilled equals false. Double-click on the while loop. This will open the While activity window, as shown in the following screenshot:

Creating the while loop

We must now specify an XPath expression, which will evaluate to either true or false. If you click on the expression builder icon, which is circled in the preceding screenshot, this will launch the Expression Builder. Use this to build the following expression:

bpws:getVariableData('tradeFulfilled') = false()

Once we are happy with this, click OK.
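In the source, the while activity then has roughly the following shape (BPEL 1.1 attribute style; in BPEL 2.0 the condition is held in a nested <condition> element instead):

<!-- Keep looping until the trade has been fulfilled -->
<while name="WhileTradeNotFulfilled"
       condition="bpws:getVariableData('tradeFulfilled') = false()">
  <sequence>
    <!-- The quote, comparison, and wait activities will go in here -->
  </sequence>
</while>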

Checking the price

The first activity we need to perform within the while loop is to get a quote for the stock that we are trading. For this, we will need to invoke the stock quote process we created earlier. As both of these processes are in the same composite, the simplest way to do this is to wire them together.

Switch to the composite view in JDeveloper, then place your mouse over the yellow arrow on the StockOrder process (the one used to add a new Reference). Click and hold your mouse button, drag the arrow onto the blue arrow on the StockQuote process (the one that represents the Service Interface), and then release, as shown in the following screenshot:

Checking the price

This will wire these two processes together and create a corresponding partner link in the StockOrder process. From here, implement the required steps to invoke the process operation of the StockQuote process, making sure that they are included within the while loop.

Using the switch activity

Remember, our requirement is that we return success if the price matches or is better than the one specified in the order. Obviously, whether the price is better depends on whether we are selling or buying. If we are selling, we need the quoted price to be equal to or greater than the price specified in the order; if we are buying, we need it to be equal to or less than that price.

So for this, we will introduce the <switch> activity. Drag a <switch> activity from the Process Activities list of the Component Palette on to your process after the invoke activity for the StockQuote service. Next, click on the plus symbol to expand the <switch> activity. By default, it will have two branches illustrated as follows:

The first branch contains a <case> condition, with a corresponding area where you can drop a sequence of one or more activities that will be executed if the condition evaluates to true.

The second branch contains an <otherwise> subactivity, with a corresponding area for activities. The activities in this branch will only be executed if all case conditions evaluate to false.

Using the switch activity

We want to cater to two separate tests (one for buying, the other for selling), so click on the Add Switch Case arrow (highlighted in the preceding screenshot) to add another <case> branch.

Next, we need to define the test condition for each <case>. To do this, click on the corresponding Expression Builder icon to launch the expression builder (circled in the preceding screenshot). For the first one, use the expression builder to create the following:

bpws:getVariableData('inputVariable', 'payload',
    '/ns1:placeOrder/ns1:buySell') = 'Buy'
and
bpws:getVariableData('inputVariable', 'payload',
    '/ns1:placeOrder/ns1:bidPrice') >=
bpws:getVariableData('stockQuoteOutput', 'payload',
    '/ns1:getQuoteResponse/ns1:Amount')

For the second branch, use the expression builder to define the following:

bpws:getVariableData('inputVariable', 'payload',
    '/ns1:placeOrder/ns1:buySell') = 'Sell'
and
bpws:getVariableData('inputVariable', 'payload',
    '/ns1:placeOrder/ns1:bidPrice') <=
bpws:getVariableData('stockQuoteOutput', 'payload',
    '/ns1:getQuoteResponse/ns1:Amount')

Once we've defined the condition for each case, we just need to create a single <assign> activity in each branch. This needs to set all the values in the outputVariable to the corresponding values in the inputVariable, except for the actualPrice element, which we should set to the value returned by the StockQuote process. Finally, we also need to set tradeFulfilled to true so that we exit the while loop.

The simplest way to do this is by dragging the original <assign> we created in the first version of this process onto the first branch and then modifying it as appropriate. Then create a similar <assign> activity in the second branch.
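However you create them, each branch's <assign> will end up looking something like this sketch for the 'Buy' branch (Oracle BPEL 1.1 copy style; the element paths are indicative, and only one of the order-detail copies is shown):

<assign name="AssignBuyResult">
  <!-- Copy the order details across (one copy per element; only quantity shown here) -->
  <copy>
    <from variable="inputVariable" part="payload"
          query="/ns1:placeOrder/ns1:quantity"/>
    <to variable="outputVariable" part="payload"
        query="/ns1:placeOrderResponse/ns1:quantity"/>
  </copy>
  <!-- Use the quoted price as the actual price achieved -->
  <copy>
    <from variable="stockQuoteOutput" part="payload"
          query="/ns1:getQuoteResponse/ns1:Amount"/>
    <to variable="outputVariable" part="payload"
        query="/ns1:placeOrderResponse/ns1:actualPrice"/>
  </copy>
  <!-- Flag the trade as fulfilled so the while loop exits -->
  <copy>
    <from expression="true()"/>
    <to variable="tradeFulfilled"/>
  </copy>
</assign>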

Note

You've probably noticed that you could actually combine the two tests into a single test. However, we took this approach to illustrate how you can add multiple branches to a switch.

If we don't have a match, then we have to wait a minute and then circle back round the while loop and try again. As we've already defined a <wait> activity, simply drag this from its current position within the process into the activity area for the <otherwise> activity.
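Putting it all together, the completed switch has the following overall shape. This is a structural sketch only: the two case conditions are the Buy and Sell expressions built above and are omitted here for readability, and the activity names are illustrative:

<switch name="EvaluateQuote">
  <case> <!-- Buy test from above -->
    <assign name="AssignBuyResult"/>
  </case>
  <case> <!-- Sell test from above -->
    <assign name="AssignSellResult"/>
  </case>
  <otherwise>
    <!-- No match yet: wait a minute, then go round the while loop again -->
    <wait name="Wait_1" for="'PT1M'"/>
  </otherwise>
</switch>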

That completes the process, so try deploying it and running it from the console.

Note

The other obvious thing is that this process could potentially run forever if we don't get a stock quote in our favor. One way to solve this would be to put the while activity in a scope and then set a timeout period on the scope so that it would only run for so long.

    Note

    The code fragment is inserted at the point within the expression where the cursor is currently positioned.

  • Functions: This shows you all the different types of XPath functions that are available to build up your XPath expression. To make it easier to locate the required function, they are grouped into categories such as String Functions, Mathematical Functions, and so on.

    The drop-down list lets you select the category that you are interested in (for example, Mathematical Functions, as illustrated in the preceding screenshot), and then the window below that lists all the functions available to that group.

    To use a particular function, select the required function, and click Insert Into Expression. This will insert the appropriate XPath fragment into the XPath Expression (again at the point that the cursor is currently positioned).

  • Content Preview: This box displays a preview of the content that would be inserted into the XPath Expression if you clicked the Insert Into Expression button. For example, if you had currently selected a particular BPEL variable, it would show you the XPath to access that variable.
  • Description: If you've currently selected a function, this box provides a brief description of the function, as well as the expected usage and number of parameters.

So let's use this to build our XPath expression. The expression we want to build is a relatively simple one, namely, the stock price returned by the stock quote service multiplied by the exchange rate returned by the exchange rate service.

To build our XPath expression, carry out the following steps:

First, within the BPEL Variables area, in the variable QuoteOutput, locate the element ns1:GetSingleQuoteResult|ns1:Last, as shown in the following screenshot:

Using the expression builder

Then click Insert Into Expression to insert this into the XPath expression.

Next, within the Functions area, select the Mathematical Functions category, and select the multiply function (notice the description in the Description box, as shown in the following screenshot), and insert this into the XPath expression:

Using the expression builder

Finally, back in the BPEL Variables area, locate the element ConversionRateResult within the variable ExchangeRateOutput, and insert that into the XPath expression.

You should now have an XPath expression similar to the one illustrated below, once you are happy with it, click OK.

Using the expression builder

Finally make sure you specify the target part of the copy operation, which should be the amount element within the outputVariable.

In order to complete the <assign> activity, you will need to create two more copy operations to copy the Currency and StockSymbol specified in the inputVariable into the equivalent values in the outputVariable.

Once done, your BPEL process should be complete. So deploy and run the composite.

Asynchronous service

Following our StockQuote service, another service would be a stock order service, which would enable us to buy or sell a particular stock. For this service, a client would need to specify the stock, whether they wanted to buy or sell, the quantity, and the price.

It makes sense to make this an asynchronous service, as once the order has been placed, it may take seconds, minutes, hours, or even days for the order to be matched.

Now, I'm not aware of any trade services that are free to try (probably for a good reason!). However, there is no reason why we can't simulate one. To do this, we will write a simple asynchronous process.

Drag another BPEL process on to our StockService composite and give it the name StockOrder, but specify that it is an asynchronous BPEL process.

As with the StockQuote process, we also want to specify predefined elements for its input and output. The elements we are going to use are placeOrder for the input and placeOrderResponse for the output, the definitions for which are shown in the following code snippet:

<xsd:element name="placeOrder"         type="tPlaceOrder"/>
<xsd:element name="placeOrderResponse" type="tPlaceOrderResponse"/>

<xsd:complexType name="tPlaceOrder">
  <xsd:sequence>
    <xsd:element name="currency"    type="xsd:string"/>
    <xsd:element name="stockSymbol" type="xsd:string"/>
    <xsd:element name="buySell"     type="xsd:string"/>
    <xsd:element name="quantity"    type="xsd:integer"/>
    <xsd:element name="bidPrice"    type="xsd:decimal"/>
  </xsd:sequence>
</xsd:complexType>

<xsd:complexType name="tPlaceOrderResponse">
  <xsd:sequence>
    <xsd:element name="currency"    type="xsd:string"/>
    <xsd:element name="stockSymbol" type="xsd:string"/>
    <xsd:element name="buySell"     type="xsd:string"/>
    <xsd:element name="quantity"    type="xsd:integer"/>
    <xsd:element name="actualPrice" type="xsd:decimal"/>
  </xsd:sequence>
</xsd:complexType>

These are also defined in the StockService.xsd that we previously imported into the StockService composite. So, for each field, we click on the magnifying glass to bring up the type chooser and select the appropriate element definitions. Then click OK to create the process. This will create a second BPEL process within our composite, so double-click on this to open it.

You will see that, by default, JDeveloper has created a skeleton asynchronous BPEL process, which contains an initial <receive> activity to receive the stock order request. But this time it's followed by an <invoke> activity to send the result back (as opposed to a <reply> activity used by the synchronous process).

If you look at the WSDL for the process, you will see that it defines two operations: process to call the process, and processResponse, which will be called by the process to send back the result. Thus the client that calls the process operation will need to provide the processResponse callback in order to receive the result (this is something we will look at in more detail in Chapter 15, Message Interaction Patterns.

Now, for the purpose of our simulation, we will assume that the StockOrder request is successful and the actualPrice achieved is always the bid price. So to do this, create an assign operation that copies all the original input values to their corresponding output values. Deploy the composite, and run it from the console.

Note

When you click the Test Web Service button for the StockService composite, you will now be presented with two options: stockorder_client_ep and stockquote_client_ep. These correspond to each of the exposed services we have defined in our composite. Ensure you select stockorder_client_ep, which is wired to our StockOrder process.

This time, you will notice that no result is returned (as it's being processed asynchronously); rather it displays a message to indicate that the service was invoked successfully, as shown in the following screenshot:

Asynchronous service

Click on Launch Message Flow Trace to bring up the trace for the composite, and then select StockOrder to bring up the audit trail for the process. Switch to the flow view, and expand the callbackClient activity at the end of the trace. This will pop up a window showing the details of the response sent by our process, as shown in the following screenshot:

Asynchronous service

Using the wait activity

Now you've probably spotted the most obvious flaw with this simulation, in that the process returns a response almost immediately, which negates the whole point of making it asynchronous.

To make it more realistic, we will use the <wait> activity to wait for a period of time. To do this drag the <wait> activity from the Component Palette onto your BPEL process just before the <assign> activity, and then double-click on it to open the Wait activity window, as shown below.

The <wait> activity allows you to specify that the process wait for a specified duration of time or until a specified deadline. In either case, you specify a fixed value or choose to specify an XPath expression to evaluate the value at runtime.

If you specify Expression, and then click the calculator icon to the right of it, this will launch the Expression Builder that we introduced earlier in the chapter. The result of the expression must evaluate to a valid value of xsd:duration for periods and xsd:dateTime for deadlines. The format of xsd:duration is PnYnMnDTnHnMnS, for example. P1M would be a duration of 1 month and P10DT1H25M would be 10 days, 1 hour and 25 minutes.

For deadlines, the expression should evaluate to a valid value of xsd:date.

The structure of xsd:dateTime is YYYY-MM-DDThh:mm:ss+hh:mm, where the +hh:mm is optional and is the time period offset from UTC (or GMT, if you prefer). Obviously, the offset can be negative or positive.

For example, 2010-01-19T17:37:47-05:00 is the time 17:37:47 on January 19th 2010, 5 hours behind UTC (that is, Eastern Standard Time in the US).

Using the wait activity

For our purposes, we just need to wait for a relatively short period of time, so set it to wait for one minute.

Now save, deploy, and run the composite. If you now look at the audit trail of the process, you will see that it has paused on the <wait> activity (which will be highlighted in orange).

Improving the stock trade service

We have a very trivial trade service, which always results in a successful trade after one minute. Let's see if we can make it a bit more "realistic".

We will modify the process to call the stockQuote service and compare the actual price against the requested price. If the quote we get back matches or is better than the price specified, then we will return a successful trade (at the quoted price). Otherwise we will wait a minute and loop back round and try again.

Creating the while loop

The bulk of this process will now be contained within a while loop, so from the Process Activities list of the Component Palette, drag a While activity into the process.

Click on the plus symbol to expand the While activity. It will now display an area where you can drop a sequence of one or more activities that will be executed every time the process iterates through the loop.

Creating the while loop

We want to iterate through the loop until the trade has been fulfilled, so let's create a variable of type xsd:Boolean called tradeFulfilled and use an <assign> statement before the while loop to set its value to false.

The first step is to create a variable of type xsd:Boolean. Until now, we've used JDeveloper to automatically create the variables we've required, typically as part of the process of defining an Invoke activity. However, that's not an option here.

If you look at the diagram of your BPEL process, you will see that it is surrounded by a light grey dashed box, and on the top left-hand side there are a number of icons. If you click on the top one of these (x), as shown in the following screenshot, this will open a window that lists all the variables defined in the process:

Creating the while loop

At this stage, it will list just the default inputVariable and outputVariable, which were automatically created with the process. Click on the green plus button. This will bring up the Create Variable window, as shown in the following screenshot:

Creating the while loop

Here we simply specify the Name of the variable (for example, tradeFulfilled) and its Type. In our case, we want an xsd:Boolean, so select Simple Type and click the magnifying glass to the right of it.

This will bring up the Type Chooser, which will list all the simple built-in data types defined by XML Schema. Select Boolean and click OK.

We need to initialize the variable to false, so drag an <assign> statement on to your process just before the while loop. Use the function false(), under the category Logical Functions, to achieve this.

Next, we need to set the condition on the while loop, so that it will execute only while tradeFulfilled equals false. Double-click on the while loop. This will open the While activity window, as shown in the following screenshot:

Creating the while loop

We must now specify an XPath expression, which will evaluate to either true or false. If you click on the expression builder icon, which is circled in the preceding screenshot, this will launch the Expression Builder. Use this to build the following expression:

bpws:getVariableData('tradeFullfilled') = false()

Once we are happy with this, click OK.

Checking the price

The first activity we need to perform within the while loop is to get a quote for the stock that we are trading. For this, we will need to invoke the stock quote process we created earlier. As both of these processes are in the same composite, the simplest way to do this is to wire them together.

Switch to the composite view in JDeveloper, next place your mouse over the yellow arrow on the StockOrder process (the one to add a new Reference). Click and hold your mouse button, then drag the arrow onto the blue arrow on the StockQuote process (the one that represents the Service Interface), then release, as shown in the following screenshot:

Checking the price

This will wire these two processes together and create a corresponding partner link in the StockOrder process. From here, implement the required steps to invoke the process operation of the StockQuote process, making sure that they are included within the while loop.

Using the switch activity

Remember our requirement is that we return success if the price matches or is better than the one specified in the order. Obviously, whether the price is better depends on whether we are selling or buying. If we are selling we need the price to be equal to or greater than the asking price; whereas if we are buying, we need the price to be equal to or less than the asking price.

So for this, we will introduce the <switch> activity. Drag a <switch> activity from the Process Activities list of the Component Palette on to your process after the invoke activity for the StockQuote service. Next, click on the plus symbol to expand the <switch> activity. By default, it will have two branches illustrated as follows:

The first branch contains a <case> condition, with a corresponding area where you can drop a sequence of one or more activities that will be executed if the condition evaluates to true.

The second branch contains an <otherwise> subactivity, with a corresponding area for activities. The activities in this branch will only be executed if all case conditions evaluate to false.

Using the switch activity

We want to cater to two separate tests (one for buying the other for selling), so click on the Add Switch Case arrow (highlighted in the preceding screenshot) to add another <case> branch.

Next, we need to define the test condition for each <case>. To do this, click on the corresponding Expression Builder icon to launch the expression builder (circled in the preceding screenshot). For the first one, use the expression builder to create the following:

bpws:getVariableData (	'inputVariable','payload',
	'/ns1:PlaceOrder/ns1:BuySell') = 'Buy' and 
bpws:getVariableData (	'inputVariable', 'payload', 
	'/ns1:PlaceOrder/ns1:BidPrice') >= 
bpws:getVariableData (	'stockQuoteOutput', 'payload', 
	'/ns1:getQuoteResponse/ns1:Amount')

For the second branch, use the expression builder to define the following:

bpws:getVariableData (	'inputVariable','payload',
	'/ns1:PlaceOrder/ns1:BuySell') = 'Sell' and 
bpws:getVariableData (	'inputVariable', 'payload', 
	'/ns1:PlaceOrder/ns1:BidPrice') <=
bpws:getVariableData (	'stockQuoteOutput', 'payload', 
'/ns1:getQuoteResponse/ns1:Amount')

Once we've defined the condition for each case, we just need to create a single <assign> activity in each branch. This needs to set all the values in the outputVariable to the corresponding values in the inputVariable, except for the ActualPrice element, which we should set to the value returned by the StockQuote process. Finally, we also need to set tradeFullfilled to true, so that we exit the while loop.

The simplest way to do this is by dragging the original <assign> we created in the first version of this process onto the first branch and then modify it as appropriate. Then create a similar <assign> activity in the second branch.

Note

You've probably noticed that you could actually combine the two tests into a single test. However, we took this approach to illustrate how you can add multiple branches to a switch.

If we don't have a match, then we have to wait a minute and then circle back round the while loop and try again. As we've already defined a <wait> activity, simply drag this from its current position within the process into the activity area for the <otherwise> activity.

That completes the process, so try deploying it and running it from the console.

Note

The other obvious thing is that this process could potentially run forever if we don't get a stock quote in our favor. One way to solve this would be to put the while activity in a scope and then set a timeout period on the scope so that it would only run for so long.

Using the expression builder

The final part of the process is to combine the exchange rate returned by one service with the stock price returned by the other, in order to determine the stock price in the requested currency and return that to the caller of the composite.

To do this, we will again use an <assign> activity. So drag another <assign> activity onto the process, just after our second invoke activity. Now in our previous use of the <assign> activity, we have just used it to copy a value from one variable to another.

Here, it is slightly different, in that we want to combine multiple values into a single value, and to do that, we will need to write the appropriate piece of XPath. Create a copy operation as before, but for the source type, select Expression from the drop-down list, as shown in the following screenshot:

Using the expression builder

Now, if you want, you can type in the XPath expression manually (into the Expression area), but it's far easier and less error prone to use the Expression Builder. To do this, click on the XPath expression builder icon; the calculator icon, which is circled in the preceding screenshot, will pop up the Expression Builder (shown below):

Using the expression builder

The Expression Builder provides a graphical tool for writing XPath expressions, which are executed as part of the copy operation. It consists of the following areas:

  • Expression: The top textbox contains the XPath expression that you are working on. You can either type data directly in here, or use the Expression Builder to insert XPath fragments to build up the XPath required.
  • BPEL variables: This part of the Expression Builder lets you browse the variables defined within your BPEL process. Once you've located the variable that you wish to use, click on the Insert Into Expression button, and this will insert the appropriate code fragment into the XPath expression.

    Note

    The code fragment is inserted at the point within the expression where the cursor is currently positioned.

  • Functions: This shows you all the different types of XPath functions that are available to build up your XPath expression. To make it easier to locate the required function, they are grouped into categories such as String Functions, Mathematical Functions, and so on.

    The drop-down list lets you select the category that you are interested in (for example, Mathematical Functions, as illustrated in the preceding screenshot), and then the window below that lists all the functions available to that group.

    To use a particular function, select the required function, and click Insert Into Expression. This will insert the appropriate XPath fragment into the XPath Expression (again at the point that the cursor is currently positioned).

  • Content Preview: This box displays a preview of the content that would be inserted into the XPath Expression if you clicked the Insert Into Expression button. For example, if you had currently selected a particular BPEL variable, it would show you the XPath to access that variable.
  • Description: If you've currently selected a function, this box provides a brief description of the function, as well as the expected usage and number of parameters.

So let's use this to build our XPath expression. The expression we want to build is a relatively simple one, namely, the stock price returned by the stock quote service multiplied by the exchange rate returned by the exchange rate service.

To build our XPath expression, carry out the following steps:

First, within the BPEL Variables area, in the variable QuoteOutput, locate the element ns1:GetSingleQuoteResult|ns1:Last, as shown in the following screenshot:

Using the expression builder

Then click Insert Into Expression to insert this into the XPath expression.

Next, within the Functions area, select the Mathematical Functions category, and select the multiply function (notice the description in the Description box, as shown in the following screenshot), and insert this into the XPath expression:

Using the expression builder

Finally, back in the BPEL Variables area, locate the element ConversionRateResult within the variable ExchangeRateOutput, and insert that into the XPath expression.

You should now have an XPath expression similar to the one illustrated below, once you are happy with it, click OK.

Using the expression builder

Finally make sure you specify the target part of the copy operation, which should be the amount element within the outputVariable.

In order to complete the <assign> activity, you will need to create two more copy operations to copy the Currency and StockSymbol specified in the inputVariable into the equivalent values in the outputVariable.

Once done, your BPEL process should be complete. So deploy and run the composite.

Asynchronous service

Following our StockQuote service, another service would be a stock order service, which would enable us to buy or sell a particular stock. For this service, a client would need to specify the stock, whether they wanted to buy or sell, the quantity, and the price.

It makes sense to make this an asynchronous service, as once the order has been placed, it may take seconds, minutes, hours, or even days for the order to be matched.

Now, I'm not aware of any trade services that are free to try (probably for a good reason!). However, there is no reason why we can't simulate one. To do this, we will write a simple asynchronous process.

Drag another BPEL process on to our StockService composite and give it the name StockOrder, but specify that it is an asynchronous BPEL process.

As with the StockQuote process, we also want to specify predefined elements for its input and output. The elements we are going to use are placeOrder for the input and placeOrderResponse for the output, the definitions for which are shown in the following code snippet:

<xsd:element name="placeOrder"         type="tPlaceOrder"/>
<xsd:element name="placeOrderResponse" type="tPlaceOrderResponse"/>

<xsd:complexType name="tPlaceOrder">
  <xsd:sequence>
    <xsd:element name="currency"    type="xsd:string"/>
    <xsd:element name="stockSymbol" type="xsd:string"/>
    <xsd:element name="buySell"     type="xsd:string"/>
    <xsd:element name="quantity"    type="xsd:integer"/>
    <xsd:element name="bidPrice"    type="xsd:decimal"/>
  </xsd:sequence>
</xsd:complexType>

<xsd:complexType name="tPlaceOrderResponse">
  <xsd:sequence>
    <xsd:element name="currency"    type="xsd:string"/>
    <xsd:element name="stockSymbol" type="xsd:string"/>
    <xsd:element name="buySell"     type="xsd:string"/>
    <xsd:element name="quantity"    type="xsd:integer"/>
    <xsd:element name="actualPrice" type="xsd:decimal"/>
  </xsd:sequence>
</xsd:complexType>

These are also defined in the StockService.xsd that we previously imported into the StockService composite. So, for each field, we click on the magnifying glass to bring up the type chooser and select the appropriate element definitions. Then click OK to create the process. This will create a second BPEL process within our composite, so double-click on this to open it.

You will see that, by default, JDeveloper has created a skeleton asynchronous BPEL process, which contains an initial <receive> activity to receive the stock order request. But this time it's followed by an <invoke> activity to send the result back (as opposed to a <reply> activity used by the synchronous process).

If you look at the WSDL for the process, you will see that it defines two operations: process to call the process, and processResponse, which will be called by the process to send back the result. Thus the client that calls the process operation will need to provide the processResponse callback in order to receive the result (this is something we will look at in more detail in Chapter 15, Message Interaction Patterns.

Now, for the purpose of our simulation, we will assume that the StockOrder request is successful and the actualPrice achieved is always the bid price. So to do this, create an assign operation that copies all the original input values to their corresponding output values. Deploy the composite, and run it from the console.

Note

When you click the Test Web Service button for the StockService composite, you will now be presented with two options: stockorder_client_ep and stockquote_client_ep. These correspond to each of the exposed services we have defined in our composite. Ensure you select stockorder_client_ep, which is wired to our StockOrder process.

This time, you will notice that no result is returned (as it's being processed asynchronously); rather it displays a message to indicate that the service was invoked successfully, as shown in the following screenshot:

Asynchronous service

Click on Launch Message Flow Trace to bring up the trace for the composite, and then select StockOrder to bring up the audit trail for the process. Switch to the flow view, and expand the callbackClient activity at the end of the trace. This will pop up a window showing the details of the response sent by our process, as shown in the following screenshot:

Asynchronous service

Using the wait activity

Now you've probably spotted the most obvious flaw with this simulation, in that the process returns a response almost immediately, which negates the whole point of making it asynchronous.

To make it more realistic, we will use the <wait> activity to wait for a period of time. To do this drag the <wait> activity from the Component Palette onto your BPEL process just before the <assign> activity, and then double-click on it to open the Wait activity window, as shown below.

The <wait> activity allows you to specify that the process wait for a specified duration of time or until a specified deadline. In either case, you specify a fixed value or choose to specify an XPath expression to evaluate the value at runtime.

If you specify Expression, and then click the calculator icon to the right of it, this will launch the Expression Builder that we introduced earlier in the chapter. The result of the expression must evaluate to a valid value of xsd:duration for periods and xsd:dateTime for deadlines. The format of xsd:duration is PnYnMnDTnHnMnS, for example. P1M would be a duration of 1 month and P10DT1H25M would be 10 days, 1 hour and 25 minutes.

For deadlines, the expression should evaluate to a valid value of xsd:date.

The structure of xsd:dateTime is YYYY-MM-DDThh:mm:ss+hh:mm, where the +hh:mm is optional and is the time period offset from UTC (or GMT, if you prefer). Obviously, the offset can be negative or positive.

For example, 2010-01-19T17:37:47-05:00 is the time 17:37:47 on January 19th 2010, 5 hours behind UTC (that is, Eastern Standard Time in the US).

Using the wait activity

For our purposes, we just need to wait for a relatively short period of time, so set it to wait for one minute.

Now save, deploy, and run the composite. If you now look at the audit trail of the process, you will see that it has paused on the <wait> activity (which will be highlighted in orange).

Improving the stock trade service

We have a very trivial trade service, which always results in a successful trade after one minute. Let's see if we can make it a bit more "realistic".

We will modify the process to call the stockQuote service and compare the actual price against the requested price. If the quote we get back matches or is better than the price specified, then we will return a successful trade (at the quoted price). Otherwise we will wait a minute and loop back round and try again.

Creating the while loop

The bulk of this process will now be contained within a while loop, so from the Process Activities list of the Component Palette, drag a While activity into the process.

Click on the plus symbol to expand the While activity. It will now display an area where you can drop a sequence of one or more activities that will be executed every time the process iterates through the loop.

Creating the while loop

We want to iterate through the loop until the trade has been fulfilled, so let's create a variable of type xsd:Boolean called tradeFulfilled and use an <assign> statement before the while loop to set its value to false.

The first step is to create a variable of type xsd:Boolean. Until now, we've used JDeveloper to automatically create the variables we've required, typically as part of the process of defining an Invoke activity. However, that's not an option here.

If you look at the diagram of your BPEL process, you will see that it is surrounded by a light grey dashed box, and on the top left-hand side there are a number of icons. If you click on the top one of these (x), as shown in the following screenshot, this will open a window that lists all the variables defined in the process:

Creating the while loop

At this stage, it will list just the default inputVariable and outputVariable, which were automatically created with the process. Click on the green plus button. This will bring up the Create Variable window, as shown in the following screenshot:

Creating the while loop

Here we simply specify the Name of the variable (for example, tradeFulfilled) and its Type. In our case, we want an xsd:Boolean, so select Simple Type and click the magnifying glass to the right of it.

This will bring up the Type Chooser, which will list all the simple built-in data types defined by XML Schema. Select Boolean and click OK.

We need to initialize the variable to false, so drag an <assign> statement on to your process just before the while loop. Use the function false(), under the category Logical Functions, to achieve this.

Next, we need to set the condition on the while loop, so that it will execute only while tradeFulfilled equals false. Double-click on the while loop. This will open the While activity window, as shown in the following screenshot:

Creating the while loop

We must now specify an XPath expression, which will evaluate to either true or false. If you click on the expression builder icon, which is circled in the preceding screenshot, this will launch the Expression Builder. Use this to build the following expression:

bpws:getVariableData('tradeFullfilled') = false()

Once we are happy with this, click OK.

Checking the price

The first activity we need to perform within the while loop is to get a quote for the stock that we are trading. For this, we will need to invoke the stock quote process we created earlier. As both of these processes are in the same composite, the simplest way to do this is to wire them together.

Switch to the composite view in JDeveloper, next place your mouse over the yellow arrow on the StockOrder process (the one to add a new Reference). Click and hold your mouse button, then drag the arrow onto the blue arrow on the StockQuote process (the one that represents the Service Interface), then release, as shown in the following screenshot:

Checking the price

This will wire these two processes together and create a corresponding partner link in the StockOrder process. From here, implement the required steps to invoke the process operation of the StockQuote process, making sure that they are included within the while loop.

Using the switch activity

Remember our requirement is that we return success if the price matches or is better than the one specified in the order. Obviously, whether the price is better depends on whether we are selling or buying. If we are selling we need the price to be equal to or greater than the asking price; whereas if we are buying, we need the price to be equal to or less than the asking price.

So for this, we will introduce the <switch> activity. Drag a <switch> activity from the Process Activities list of the Component Palette on to your process after the invoke activity for the StockQuote service. Next, click on the plus symbol to expand the <switch> activity. By default, it will have two branches illustrated as follows:

The first branch contains a <case> condition, with a corresponding area where you can drop a sequence of one or more activities that will be executed if the condition evaluates to true.

The second branch contains an <otherwise> subactivity, with a corresponding area for activities. The activities in this branch will only be executed if all case conditions evaluate to false.

Using the switch activity

We want to cater to two separate tests (one for buying the other for selling), so click on the Add Switch Case arrow (highlighted in the preceding screenshot) to add another <case> branch.

Next, we need to define the test condition for each <case>. To do this, click on the corresponding Expression Builder icon to launch the expression builder (circled in the preceding screenshot). For the first one, use the expression builder to create the following:

bpws:getVariableData (	'inputVariable','payload',
	'/ns1:PlaceOrder/ns1:BuySell') = 'Buy' and 
bpws:getVariableData (	'inputVariable', 'payload', 
	'/ns1:PlaceOrder/ns1:BidPrice') >= 
bpws:getVariableData (	'stockQuoteOutput', 'payload', 
	'/ns1:getQuoteResponse/ns1:Amount')

For the second branch, use the expression builder to define the following:

bpws:getVariableData (	'inputVariable','payload',
	'/ns1:PlaceOrder/ns1:BuySell') = 'Sell' and 
bpws:getVariableData (	'inputVariable', 'payload', 
	'/ns1:PlaceOrder/ns1:BidPrice') <=
bpws:getVariableData (	'stockQuoteOutput', 'payload', 
'/ns1:getQuoteResponse/ns1:Amount')

Once we've defined the condition for each case, we just need to create a single <assign> activity in each branch. This needs to set all the values in the outputVariable to the corresponding values in the inputVariable, except for the ActualPrice element, which we should set to the value returned by the StockQuote process. Finally, we also need to set tradeFullfilled to true, so that we exit the while loop.

The simplest way to do this is by dragging the original <assign> we created in the first version of this process onto the first branch and then modify it as appropriate. Then create a similar <assign> activity in the second branch.

Note

You've probably noticed that you could actually combine the two tests into a single test. However, we took this approach to illustrate how you can add multiple branches to a switch.

If we don't have a match, then we have to wait a minute and then circle back round the while loop and try again. As we've already defined a <wait> activity, simply drag this from its current position within the process into the activity area for the <otherwise> activity.

That completes the process, so try deploying it and running it from the console.

Note

The other obvious thing is that this process could potentially run forever if we don't get a stock quote in our favor. One way to solve this would be to put the while activity in a scope and then set a timeout period on the scope so that it would only run for so long.

Asynchronous service

Following our StockQuote service, another service would be a stock order service, which would enable us to buy or sell a particular stock. For this service, a client would need to specify the stock, whether they wanted to buy or sell, the quantity, and the price.

It makes sense to make this an asynchronous service, as once the order has been placed, it may take seconds, minutes, hours, or even days for the order to be matched.

Now, I'm not aware of any trade services that are free to try (probably for a good reason!). However, there is no reason why we can't simulate one. To do this, we will write a simple asynchronous process.

Drag another BPEL process on to our StockService composite and give it the name StockOrder, but specify that it is an asynchronous BPEL process.

As with the StockQuote process, we also want to specify predefined elements for its input and output. The elements we are going to use are placeOrder for the input and placeOrderResponse for the output, the definitions for which are shown in the following code snippet:

<xsd:element name="placeOrder"         type="tPlaceOrder"/>
<xsd:element name="placeOrderResponse" type="tPlaceOrderResponse"/>

<xsd:complexType name="tPlaceOrder">
  <xsd:sequence>
    <xsd:element name="currency"    type="xsd:string"/>
    <xsd:element name="stockSymbol" type="xsd:string"/>
    <xsd:element name="buySell"     type="xsd:string"/>
    <xsd:element name="quantity"    type="xsd:integer"/>
    <xsd:element name="bidPrice"    type="xsd:decimal"/>
  </xsd:sequence>
</xsd:complexType>

<xsd:complexType name="tPlaceOrderResponse">
  <xsd:sequence>
    <xsd:element name="currency"    type="xsd:string"/>
    <xsd:element name="stockSymbol" type="xsd:string"/>
    <xsd:element name="buySell"     type="xsd:string"/>
    <xsd:element name="quantity"    type="xsd:integer"/>
    <xsd:element name="actualPrice" type="xsd:decimal"/>
  </xsd:sequence>
</xsd:complexType>

These are also defined in the StockService.xsd that we previously imported into the StockService composite. So, for each field, we click on the magnifying glass to bring up the type chooser and select the appropriate element definitions. Then click OK to create the process. This will create a second BPEL process within our composite, so double-click on this to open it.

You will see that, by default, JDeveloper has created a skeleton asynchronous BPEL process, which contains an initial <receive> activity to receive the stock order request. But this time it's followed by an <invoke> activity to send the result back (as opposed to a <reply> activity used by the synchronous process).

If you look at the WSDL for the process, you will see that it defines two operations: process to call the process, and processResponse, which will be called by the process to send back the result. Thus the client that calls the process operation will need to provide the processResponse callback in order to receive the result (this is something we will look at in more detail in Chapter 15, Message Interaction Patterns.

Now, for the purpose of our simulation, we will assume that the StockOrder request is successful and the actualPrice achieved is always the bid price. So to do this, create an assign operation that copies all the original input values to their corresponding output values. Deploy the composite, and run it from the console.

Note

When you click the Test Web Service button for the StockService composite, you will now be presented with two options: stockorder_client_ep and stockquote_client_ep. These correspond to each of the exposed services we have defined in our composite. Ensure you select stockorder_client_ep, which is wired to our StockOrder process.

This time, you will notice that no result is returned (as it's being processed asynchronously); rather it displays a message to indicate that the service was invoked successfully, as shown in the following screenshot:

Asynchronous service

Click on Launch Message Flow Trace to bring up the trace for the composite, and then select StockOrder to bring up the audit trail for the process. Switch to the flow view, and expand the callbackClient activity at the end of the trace. This will pop up a window showing the details of the response sent by our process, as shown in the following screenshot:

Asynchronous service

Using the wait activity

Now you've probably spotted the most obvious flaw with this simulation, in that the process returns a response almost immediately, which negates the whole point of making it asynchronous.

To make it more realistic, we will use the <wait> activity to wait for a period of time. To do this, drag the <wait> activity from the Component Palette onto your BPEL process just before the <assign> activity, and then double-click on it to open the Wait activity window, as shown below.

The <wait> activity allows you to specify that the process wait for a specified duration of time or until a specified deadline. In either case, you can specify a fixed value or an XPath expression that is evaluated at runtime.

If you specify Expression and then click the calculator icon to the right of it, this will launch the Expression Builder that we introduced earlier in the chapter. The result of the expression must evaluate to a valid value of xsd:duration for periods and xsd:dateTime for deadlines. The format of xsd:duration is PnYnMnDTnHnMnS. For example, P1M is a duration of 1 month, and P10DT1H25M is 10 days, 1 hour, and 25 minutes.

For deadlines, the expression should evaluate to a valid xsd:dateTime value.

The structure of xsd:dateTime is YYYY-MM-DDThh:mm:ss+hh:mm, where the +hh:mm part is optional and is the time zone offset from UTC (or GMT, if you prefer). The offset can be either positive or negative.

For example, 2010-01-19T17:37:47-05:00 is the time 17:37:47 on January 19th 2010, 5 hours behind UTC (that is, Eastern Standard Time in the US).

Using the wait activity

For our purposes, we just need to wait for a relatively short period of time, so set it to wait for one minute.
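
In the BPEL source this corresponds to a wait activity with a fixed duration; the deadline variant uses the until attribute instead. A minimal sketch (the activity and variable names are illustrative) is:

<!-- wait for a fixed period of one minute -->
<wait name="WaitOneMinute" for="'PT1M'"/>

<!-- or wait until a deadline computed at runtime; deadlineVar is a
     hypothetical xsd:dateTime variable -->
<wait name="WaitUntilDeadline" until="bpws:getVariableData('deadlineVar')"/>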

Now save, deploy, and run the composite. If you then look at the audit trail of the process, you will see that it has paused on the <wait> activity (which will be highlighted in orange).

Improving the stock trade service

We have a very trivial trade service, which always results in a successful trade after one minute. Let's see if we can make it a bit more "realistic".

We will modify the process to call the stockQuote service and compare the actual price against the requested price. If the quote we get back matches or is better than the price specified, then we will return a successful trade (at the quoted price). Otherwise we will wait a minute and loop back round and try again.

Creating the while loop

The bulk of this process will now be contained within a while loop, so from the Process Activities list of the Component Palette, drag a While activity into the process.

Click on the plus symbol to expand the While activity. It will now display an area where you can drop a sequence of one or more activities that will be executed every time the process iterates through the loop.

Creating the while loop

We want to iterate through the loop until the trade has been fulfilled, so let's create a variable of type xsd:boolean called tradeFulfilled and use an <assign> statement before the while loop to set its value to false.

The first step is to create a variable of type xsd:boolean. Until now, we've used JDeveloper to automatically create the variables we've required, typically as part of the process of defining an Invoke activity. However, that's not an option here.

If you look at the diagram of your BPEL process, you will see that it is surrounded by a light grey dashed box, and on the top left-hand side there are a number of icons. If you click on the top one of these (x), as shown in the following screenshot, this will open a window that lists all the variables defined in the process:

Creating the while loop

At this stage, it will list just the default inputVariable and outputVariable, which were automatically created with the process. Click on the green plus button. This will bring up the Create Variable window, as shown in the following screenshot:

Creating the while loop

Here we simply specify the Name of the variable (for example, tradeFulfilled) and its Type. In our case, we want an xsd:boolean, so select Simple Type and click the magnifying glass to the right of it.

This will bring up the Type Chooser, which will list all the simple built-in data types defined by XML Schema. Select Boolean and click OK.

We need to initialize the variable to false, so drag an <assign> statement on to your process just before the while loop. Use the function false(), under the category Logical Functions, to achieve this.
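
Behind the scenes, this amounts to a variable declaration plus an initialization assign, roughly as follows (the assign name is illustrative):

<variables>
  <!-- inputVariable and outputVariable declarations omitted -->
  <variable name="tradeFulfilled" type="xsd:boolean"/>
</variables>

<assign name="InitialiseTradeFulfilled">
  <copy>
    <from expression="false()"/>
    <to variable="tradeFulfilled"/>
  </copy>
</assign>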

Next, we need to set the condition on the while loop, so that it will execute only while tradeFulfilled equals false. Double-click on the while loop. This will open the While activity window, as shown in the following screenshot:

Creating the while loop

We must now specify an XPath expression, which will evaluate to either true or false. If you click on the expression builder icon, which is circled in the preceding screenshot, this will launch the Expression Builder. Use this to build the following expression:

bpws:getVariableData('tradeFulfilled') = false()

Once we are happy with this, click OK.
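
The resulting loop in the BPEL source looks something like the following; the activity name is illustrative, and the sequence inside will be populated over the next few steps:

<while name="WhileTradeNotFulfilled"
       condition="bpws:getVariableData('tradeFulfilled') = false()">
  <sequence>
    <!-- get a quote, compare prices, and either complete or wait -->
  </sequence>
</while>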

Checking the price

The first activity we need to perform within the while loop is to get a quote for the stock that we are trading. For this, we will need to invoke the stock quote process we created earlier. As both of these processes are in the same composite, the simplest way to do this is to wire them together.

Switch to the composite view in JDeveloper, then place your mouse over the yellow arrow on the StockOrder process (the one used to add a new Reference). Click and hold your mouse button, drag the arrow onto the blue arrow on the StockQuote process (the one that represents the Service Interface), and then release, as shown in the following screenshot:

Checking the price

This will wire these two processes together and create a corresponding partner link in the StockOrder process. From here, implement the required steps to invoke the process operation of the StockQuote process, making sure that they are included within the while loop.
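
Those steps boil down to an assign that populates the quote request from the order, followed by an invoke of the StockQuote partner link. A minimal sketch of the invoke is shown below; the partner link, port type, and variable names are assumptions based on the defaults (stockQuoteOutput is the variable referenced by the conditions in the next section):

<invoke name="InvokeStockQuote"
        partnerLink="StockQuote"
        portType="ns2:StockQuote"
        operation="process"
        inputVariable="stockQuoteInput"
        outputVariable="stockQuoteOutput"/>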

Using the switch activity

Remember our requirement is that we return success if the price matches or is better than the one specified in the order. Obviously, whether the price is better depends on whether we are selling or buying. If we are selling we need the price to be equal to or greater than the asking price; whereas if we are buying, we need the price to be equal to or less than the asking price.

So for this, we will introduce the <switch> activity. Drag a <switch> activity from the Process Activities list of the Component Palette on to your process after the invoke activity for the StockQuote service. Next, click on the plus symbol to expand the <switch> activity. By default, it will have two branches illustrated as follows:

The first branch contains a <case> condition, with a corresponding area where you can drop a sequence of one or more activities that will be executed if the condition evaluates to true.

The second branch contains an <otherwise> subactivity, with a corresponding area for activities. The activities in this branch will only be executed if all case conditions evaluate to false.

Using the switch activity

We want to cater to two separate tests (one for buying, the other for selling), so click on the Add Switch Case arrow (highlighted in the preceding screenshot) to add another <case> branch.

Next, we need to define the test condition for each <case>. To do this, click on the corresponding Expression Builder icon to launch the expression builder (circled in the preceding screenshot). For the first one, use the expression builder to create the following:

bpws:getVariableData('inputVariable', 'payload',
    '/ns1:placeOrder/ns1:buySell') = 'Buy'
and bpws:getVariableData('inputVariable', 'payload',
    '/ns1:placeOrder/ns1:bidPrice') >=
    bpws:getVariableData('stockQuoteOutput', 'payload',
    '/ns1:getQuoteResponse/ns1:Amount')

For the second branch, use the expression builder to define the following:

bpws:getVariableData('inputVariable', 'payload',
    '/ns1:placeOrder/ns1:buySell') = 'Sell'
and bpws:getVariableData('inputVariable', 'payload',
    '/ns1:placeOrder/ns1:bidPrice') <=
    bpws:getVariableData('stockQuoteOutput', 'payload',
    '/ns1:getQuoteResponse/ns1:Amount')

Once we've defined the condition for each case, we just need to create a single <assign> activity in each branch. This needs to set all the values in the outputVariable to the corresponding values in the inputVariable, except for the actualPrice element, which we should set to the value returned by the StockQuote process. Finally, we also need to set tradeFulfilled to true, so that we exit the while loop.

The simplest way to do this is by dragging the original <assign> we created in the first version of this process onto the first branch and then modifying it as appropriate. Then create a similar <assign> activity in the second branch.
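
For the buy branch, the key copy rules in that assign end up looking something like this sketch (the copy rules for the other fields mirror those shown earlier; names assume the defaults used throughout this example):

<assign name="AssignBuyResult">
  <!-- copy rules for currency, stockSymbol, buySell, and quantity omitted -->
  <copy>
    <from variable="stockQuoteOutput" part="payload"
          query="/ns1:getQuoteResponse/ns1:Amount"/>
    <to variable="outputVariable" part="payload"
        query="/ns1:placeOrderResponse/ns1:actualPrice"/>
  </copy>
  <copy>
    <from expression="true()"/>
    <to variable="tradeFulfilled"/>
  </copy>
</assign>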

Note

You've probably noticed that you could actually combine the two tests into a single test. However, we took this approach to illustrate how you can add multiple branches to a switch.

If we don't have a match, then we have to wait a minute and then circle back round the while loop and try again. As we've already defined a <wait> activity, simply drag this from its current position within the process into the activity area for the <otherwise> activity.
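
The body of the while loop now has the following overall shape (conditions abbreviated; they are the two expressions built above):

<switch name="EvaluateQuote">
  <case condition="[buy condition shown above]">
    <!-- assign the buy result and set tradeFulfilled to true -->
  </case>
  <case condition="[sell condition shown above]">
    <!-- assign the sell result and set tradeFulfilled to true -->
  </case>
  <otherwise>
    <!-- no acceptable price yet, so pause before trying again -->
    <wait name="WaitOneMinute" for="'PT1M'"/>
  </otherwise>
</switch>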

That completes the process, so try deploying it and running it from the console.

Note

The other obvious thing is that this process could potentially run forever if we don't get a stock quote in our favor. One way to solve this would be to put the while activity in a scope and then set a timeout period on the scope so that it would only run for so long.

Summary

In this chapter, we've gone beyond individual services and looked at how we can use BPEL to quickly assemble these services into composite services. By using this same approach, we can also implement end-to-end business processes or complete composite applications (something we will do in the second section of this book).

You may have also noticed that although BPEL provides a rich set of constructs for describing the assembly of a set of existing services, it doesn't try to reinvent the wheel where functionality is already provided by existing SOA standards. Rather, it has been designed to fit naturally with, and leverage, existing XML and web services specifications such as XML Schema, XPath, XSLT, and, of course, WSDL and SOAP.

This chapter should have given you a solid introduction to the basic structure of a BPEL process, its key constructs, and the difference between a synchronous and asynchronous service. Building the examples will help to reinforce this as well as give you an excellent grasp of how to use JDeveloper to build BPEL processes.

Even though this chapter will have given you a good introduction to BPEL, we haven't yet looked at much of its advanced functionality, such as its ability to handle long-running processes, its fault and exception management, and how it uses compensation to undo events in the case of failures. These are areas we will cover in more detail in later chapters of the book.

Chapter 6. Adding in Human Workflow

Many business processes require an element of human activity. Common tasks include approving an expense item or purchase order. But even fully automated processes can require human involvement, especially when things go wrong.

In this chapter, we will introduce you to the various parts of the human workflow component of the Oracle SOA Suite and take you through a practical example to create and run your first "simple" workflow. Once we've done that, we will examine how to carry out other basic workflow activities such as how to:

  • Dynamically assign a task to a user or group based on the content of the task
  • Cancel or change a workflow task while it's still in process
  • Enable the workflow user to request additional details about a task
  • Reassign, delegate, or escalate a task, either manually or through the use of user-defined business rules

Workflow overview

The following diagram illustrates the three typical participants in any workflow:

Workflow overview

On the left-hand side we have the BPEL process, which creates the task and submits it to the human workflow service. Once it has initiated the task, the process itself will pause until the completed task is returned.

On the right-hand side we have the user who carries out the task. Tasks can be assigned directly to a user or to a group to which the user belongs; in the latter case, the user needs to claim the task before they can work on it. Users typically work on tasks via the BPM Worklist Application, a web-based application included as part of the SOA Suite.

Sitting between the BPEL process and the worklist application is the human workflow service. It is responsible for routing the task to the appropriate user or group, managing the lifecycle of a task until it completes, and returning the result to the initiator (that is, the BPEL process in the preceding diagram).

Note

The human workflow services have a full set of WSDL and Java APIs that allow us to build our own custom equivalent of the BPM worklist application. This is an area we examine in Chapter 17, Workflow Patterns.

The human workflow service utilizes an external identity store for details of users, their privileges, and which groups they belong to. In a production deployment, you would typically configure the identity store to be an LDAP repository such as Oracle Internet Directory or Active Directory.

Note

For the sake of simplicity, the workflow examples within this book make use of the sample user community provided by Oracle. To install this community, go to http://www.oracle.com/technology/sample_code/products/hwf/index.html and download the file workflow-001-DemoCommunitySeedApp. Unzip this file and follow the instructions in README.txt.

Leave approval workflow

For our first workflow, we will create a very simple BPEL process that takes a leave request and creates a simple approval task for the individual's manager, who can then either approve or reject the request.

The first step is to create a composite containing a simple asynchronous leave approval BPEL process. The input and output schema elements for the process are defined in LeaveRequest.xsd, as shown in the following code snippet (note that the schema is also provided in the samples folder for Chapter 6):

<?xml version="1.0" encoding="windows-1252"?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
           xmlns="http://schemas.packtpub.com/LeaveRequest"
           targetNamespace="http://schemas.packtpub.com/LeaveRequest"
           elementFormDefault="qualified" >
  <xsd:element name="leaveRequest" type="tLeaveRequest"/>
  <xsd:complexType name="tLeaveRequest">
    <xsd:sequence>
      <xsd:element name="employeeId" type="xsd:string"/>
      <xsd:element name="fullName" type="xsd:string" />
      <xsd:element name="startDate" type="xsd:date" />      
      <xsd:element name="endDate" type="xsd:date" />
      <xsd:element name="leaveType" type="xsd:string" />  
      <xsd:element name="leaveReason" type="xsd:string"/>
      <xsd:element name="requestStatus" type="xsd:string"/>
    </xsd:sequence>
  </xsd:complexType>
</xsd:schema>

Make sure you import this file as part of the process of creating the BPEL process and set the input and output schema elements to leaveRequest.

Defining the human task

Once you've created your composite, drag a Human Task Component from the SOA Component Palette onto it. This will pop up the following screen:

Defining the human task

Give the task a meaningful name (for example, LeaveRequest) and click OK. This will add a Human Task with the corresponding name to our composite, as shown in the following screenshot:

Defining the human task

Double-click on the LeaveRequest task tab. This will open up the task definition form as a new tab within JDeveloper (as shown in the following screenshot) where we can configure our task:

Defining the human task

By default, JDeveloper displays the General subtab where we define the basic details about the task.

Note

For readers familiar with Oracle SOA Suite 10gR3, you will notice the task definition form looks a lot simpler. This is because it's been restructured to organize the task configuration parameters into categories, each accessed by a corresponding tab (rather than display them all on the same form as was previously the case).

The key things we need to define for the task are its Title, what the possible Outcomes are (that is, leave request approved or rejected), the Parameters (or payload) of the task, and who to route or assign it to.

On the General tab, give the task a Title, such as Approval Required for Leave Request. Note that this is what a user will see in their work queue if they are allocated the task. For the time being we can leave the other values (Description, Outcomes, Priority, Category, and Owner) with their default values.

Specifying task parameters

Next, we need to define the task data, that is, the content of the task that we want the approver to base their decision upon. For this, we can specify one or more parameters; each parameter can be a standard XML type such as string, integer, or boolean. In addition, we can use any type or element defined in one of our imported XML schemas.

For our purposes, we simply want to pass in the leave request received by the BPEL process. To do this, select the Data tab, click on the plus symbol (circled in the following screenshot), and select Add other payload:

Specifying task parameters

This will launch the Add Task Parameter window:

Specifying task parameters

Ensure that Element is selected as the parameter type and then click on the corresponding search icon to bring up the standard type chooser. From here, just browse the LeaveRequest schema file that we imported at the start, and select the leaveRequest element.

If we check Editable via worklist, anyone who has write access to the task payload will be able to update the content of this parameter. In our case, we will leave it unchecked.

Click OK. We should now have a LeaveRequest parameter defined for our task.

Specifying task assignment and routing policy

Finally, we need to specify who is going to approve the task. We do this by creating an Assignment and Routing Policy. An assignment and routing policy consists of one or more stages that can be executed sequentially or in parallel (or any combination thereof), with each stage consisting of one or more participant types that in turn can also be sequential or in parallel (or any combination thereof). A participant type can be:

  • Single: Used to specify a single user or group to assign the task to
  • Serial: Used when a set of users must work in sequence, for example, when a task has to proceed through several layers of a management chain
  • Parallel: Used when a set of users must work in parallel, a common usage for this is when a group of participants need to vote on an outcome
  • FYI: Used to send a notification to a user or group

For our purposes we need a single stage containing one participant of type Single approver (we will examine the other types in more detail in Chapter 17, Workflow Patterns). Select the Assignment tab. You will see that, by default, our task consists of a single stage named Stage1, as shown in the following screenshot:

Specifying task assignment and routing policy

First, we will give our stage a more meaningful name. To do this, select the stage by clicking on its name. The stage will turn gray to indicate that it has been selected, as shown in the preceding screenshot. Then select Edit (circled in the preceding screenshot). This will bring up the Edit window. Give it an appropriate name, and click OK.

Specifying task assignment and routing policy

Next, we need to add a participant of type Single to our Approval stage. First, select the <Edit Participant> section of our stage by clicking on it. It will turn gray to indicate that it has been selected, as shown in the following screenshot:

Specifying task assignment and routing policy

Select Edit (circled in the preceding screenshot). This will launch the Add Participant Type window.

Note

You will notice that the menu icons in the Assignment tab are context-sensitive, based on whether you have selected one or more stages or participants.

Specifying task assignment and routing policy

By default, a participant type of Single approver is selected, which is fine for our purpose. Labels are used to provide a meaningful description of the routing rules and are also useful if we specify multiple participants for a stage. So for our purpose, just enter a meaningful value (for example, Manager Approval).

We now need to specify the list of participants that the task is going to be assigned to. Each participant can either be a specific user, group, or application role (and we can have any combination of these in our list).

For our purpose, we are going to assume that the CEO of the company is required to approve every holiday, so we will always assign it to cdickens. This is probably not ideal! But we will revisit this later in the chapter to look at how we can make it more realistic.

Click on the plus symbol, and select Add User, as shown in the preceding screenshot. This will add a participant of type User to our participant list, as shown in the following screenshot. We can either directly enter the name of a user into the Value field or click the browse icon to bring up the identity lookup dialog. This allows you to search and browse the users and groups defined in the identity service.

Specifying task assignment and routing policy

Once you've specified the participant details, click OK; this will take us back to the task definition window, which will have been updated with our routing policy. Select Save on JDeveloper to make sure you save the task definition.

Invoking our human task from BPEL

So far, we have defined our human task. The next step is to incorporate it into our LeaveApproval BPEL process. To do this, drag a Human Task activity from the BPEL component palette onto our process, as shown in the following screenshot:

Invoking our human task from BPEL

JDeveloper will prompt us to specify the Task Definition to use for this activity; from the drop-down list, select LeaveRequest. This will present us with the Human Task activity window from where we can configure the task within the context of our BPEL process:

Invoking our human task from BPEL

The first value we need to specify is the Task Title. This is optional, since if we don't specify a value, it will use the task title we specified earlier as part of the task definition. We want to make the task title a bit friendlier, so first type in (without the quotes):

Leave Request for

Then click on the calculator icon to the right of the Task Title field. This will launch the now familiar Expression Builder. Here, from the inputVariable, just select the element:

ns1:leaveRequest/ns1:fullName

This expression will be appended to the end of our title text embedded between <% %> to give the following:

Leave Request for <%bpws:getVariableData('inputVariable', 'payload', '/ns1:leaveRequest/ns1:fullName')%>

At runtime, the BPEL process will evaluate the expression between <% %> and substitute the result. For now, we won't specify a task initiator as this is optional, and we will leave the Priority set to 3.

The final thing to specify is the value of each of the Task Parameters defined for the task. Click on the browse icon for the LeaveRequest parameter; this will bring up the Task Parameters window, which allows you to browse the variables defined in the BPEL process. Select the leaveRequest element passed in as part of the inputVariable for the BPEL process.

This completes the configuration of the task, so click OK. This will return us to the BPEL process, which will now contain our human task activity, followed by an additional switch. If you expand the switch, you will see it contains a Case for each of our task outcomes (APPROVE or REJECT), where we can specify the appropriate next step in our process. For the purpose of this example, we don't need to do anything. However, in a real system we might update the HR system with details of the leave, if it was approved.
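
As a rough illustration of what this looks like in the BPEL source, each Case simply tests the outcome held in the task document returned by the task service. The variable name below is invented for the sake of the example; JDeveloper generates its own names, so treat this as a sketch rather than the exact generated markup:

<switch name="taskSwitch">
  <case condition="bpws:getVariableData('taskOutputVariable', 'payload', '/task:task/task:systemAttributes/task:outcome') = 'APPROVE'">
    <!-- steps to perform when the leave request is approved -->
  </case>
  <case condition="bpws:getVariableData('taskOutputVariable', 'payload', '/task:task/task:systemAttributes/task:outcome') = 'REJECT'">
    <!-- steps to perform when the leave request is rejected -->
  </case>
  <otherwise>
    <!-- the task expired, was withdrawn, or ended in error -->
  </otherwise>
</switch>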

Your composite is now complete, so deploy it to the server in the normal way.

Creating the user interface to process the task

So far, we have defined the task that needs to be carried out and plugged it into a simple BPEL process. What we need to do next is implement the part of the user interface that allows someone to view the details of our specific task and then either approve or reject the leave request.

Out-of-the-box, SOA Suite provides the worklist application with all the main workflow user interface screens and a framework in which to plug your task-specific interface component. This can be developed from scratch if you want, using ADF, but the simplest way is to get JDeveloper to generate an ADF form based on the task definition.

To do this, go back to the task definition form, click on Create Form, and select Auto-Generate Task Form, as shown in the following screenshot:

Creating the user interface to process the task

This will launch the Create Project window, prompting us to specify the name of the project in which to create our form. Specify an appropriate name, such as LeaveRequestForm, and click OK.

This will generate an ADF form plus all the supporting components; JDeveloper will automatically open the form, which can then be customized as required.

Creating the user interface to process the task

To deploy the form, click on the Application menu (circled in the preceding screenshot) and select Deploy | LeaveRequestForm. This will launch the Deployment dialog. Select Deploy to Application Server, and click on Next. On the Select Server page, uncheck the option Deploy to all server instances in the domain and click on Next. On the Server Instances page, select the SOA server instance, click on Next, and then click Finish.

Running the workflow process

Log into the SOA console and launch the composite, ensuring that you specify a valid employee ID (such as jcooper). This will invoke the BPEL process, which in turn will create the LeaveRequest task.

If you browse the audit trail for the composite, you will see it paused at the LeaveRequest activity, as shown in the following screenshot:

Running the workflow process

Click on the LeaveRequest activity, and this will bring up the Activity Audit Trail for the workflow task, showing that it is assigned to cdickens, as shown in the following screenshot:

Running the workflow process

At the moment, the composite will wait forever until the task is either approved or rejected. To action the task, we need to log into the BPM worklist application.

Processing tasks with the worklist application

To launch the worklist application, open up a browser and enter the following URL:

http://<hostname>:<port>/integration/worklistapp
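
For example, on a default single-server installation, where the SOA managed server typically listens on port 8001, this would be something like:

http://localhost:8001/integration/worklistapp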

This will bring up the login screen for the BPM worklist application; log in as cdickens (password welcome1). This will bring you into the My Tasks tab, which provides access to our various tasks and work queues. By default, it displays our inbox, which lists all the tasks currently allocated to us (or any groups that we belong to). We can then filter this based on assignee and task status.

The application also provides a number of other views that enable us to quickly identify high-priority tasks, tasks due soon, or new tasks. In addition, we can define our own views.

Processing tasks with the worklist application

Here, you should see the LeaveRequest task created by our process. Click on the task and it will display details of the task in the bottom pane of the page, like the one shown in the following screenshot:

Processing tasks with the worklist application

If we study this, we can see it is made up of the following five areas:

  • Actions: Contains the actions that can be performed on a task. This is split into two parts. The first is a drop-down list that lists standard actions available for tasks such as Escalate and Suspend, which we will examine later. The second is a set of buttons that correspond to each of the outcomes defined in the task definition (that is, Approve or Reject).
  • Details: Contains the standard header information about the task, a summary of which was displayed for each task in our work queue. In the preceding screenshot this is minimized. To expand it, click on the > sign (circled in the preceding screenshot).
  • Contents: This contains the task-specific payload, in our case, the details of the leave request. This may be editable, depending on how we configure the task.
  • History: Provides a history (in tabular and pictorial form) of when the task was created, who it has been assigned to, and so on. This is useful as it provides a complete audit trail of the task. Note that this is also available in the SOA console.
  • Comments, Attachments: Here we can add comments or attach documents to the task. This can be especially useful when a task is exchanged between multiple participants.

For our purpose, we just want to approve or reject the task, so just click the appropriate button. This will complete the task and remove it from our work queue.

However, if you change the search filter for the task list to show tasks with a completed status, you will see that the task is still there. If you select the task, it will display its details in the task pane, where you can view the content of the task but no longer perform any actions, as it is now complete.

Go back to the SOA console and look at the audit trail for the process; you will see that it has completed.

Improving the workflow

At this point, we have a simple workflow up and running. However, we have the following issues with it:

  • At the moment, all requests go to the CEO, but it would be better if requests went to the applicant's manager.
  • Also, what happens if the requester makes a mistake with their request or changes their mind? How do we let the original requester amend or cancel their request?
  • What if the approver needs additional information about a task, is there a simple way to enable that?

Dynamic task assignment

There are two approaches here. One is to assign the task to a specific group, which may contain one or more individuals. A classic example would be to assign a support request to the customer support group.

The other is to dynamically specify the user to assign the task to at runtime, based on the value of some parameter, which is roughly what we want to do. Specifically, we want to look up the manager of the employee requesting the leave and assign the task to them.

Go back to the Human Task Definition form (refer to the Defining the human task section) and double-click on the Manager Approval step in the routing policy we defined; this will reopen the Edit Participant Type form. For the Data Type, specify that you want to select the participant By Expression, and then click on the browse icon for the Value field (circled in the following screenshot):

Dynamic task assignment

This will open up the Expression Builder, which was introduced in Chapter 5, Using BPEL to Build Composite Services and Business Processes. However, the key thing to notice here is that we only have access to the content of the task we are working on (not the full content of the BPEL process).

We need to create an expression that evaluates to the user ID of the employee's manager. Fortunately, one of the services that come with workflow is the identity service, which provides us with a simple way of querying the underlying identity layer to find out details about a user. In our case, we can use the getManager function to get the ID of the manager.

So within the Expression Builder, select the Identity Service Functions, and from here, select the getManager function and insert it into the expression. We now need to pass it the employee ID of whoever is requesting the leave. Expand the task payload; you will find it contains the content of the leave request. Select the employeeId and insert that as the parameter, as shown in the following screenshot:

Dynamic task assignment
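
The resulting expression will look something like the following; the task and ns1 prefixes are illustrative and depend on the namespace declarations in your project:

ids:getManager(/task:task/task:payload/ns1:LeaveRequest/ns1:employeeId)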

You can now save the task, redeploy it, and run the process. Assuming you specify that the request is for jcooper, you will need to log in as jstein to approve the task.

Assigning tasks to multiple users or groups

So far, we have only looked at scenarios where we assign a task to a single user. However, workflow enables you to either assign a task to multiple users, or to one or more groups (or a combination of the two).

In this case, every user who is a member of the group or has the task assigned to them will be able to see the task on their queue. However, before anyone can work on the task, they must first claim it. Once claimed, other users will still be able to see the task, but only the user who has claimed the task will be able to perform any operations on it.

Note

Although group assignments are more likely to be static, you can also specify them dynamically in the same way we have for the user.

Cancelling or modifying a task

Another common requirement is to cancel or modify a task before it has completed the workflow. If we take our example, suppose that having submitted the leave request we changed our mind. Ideally we would like to be able to withdraw the task or modify it before someone goes to the effort of approving it.

Withdrawing a task

You may remember that when we first added the task to the BPEL process, we had a field where we could specify a task initiator, which we previously left blank. Well, if you specify a task initiator, they are effectively the creator of the task and have the ability to withdraw it.

To specify the task initiator, go back to your BPEL process and double-click on the Human Task. This will reopen the Human Task configuration window (see the Initializing the Workflow Parameter section). Click the icon to the right of the Initiator field to launch the Expression Builder, and use this to specify the employeeId as the task initiator.
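
Assuming the same payload structure we used for the task title, the resulting initiator expression might look like this:

bpws:getVariableData('inputVariable', 'payload', '/ns1:LeaveRequest/ns1:employeeId')

In other words, the task initiator is set to the user ID of whoever submitted the leave request.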

Now save the process, redeploy it, and run the process. Again, specify that the request is for jcooper, and then log into the worklist application as jstein. You should notice that the task creator is jcooper. Don't approve the task; instead, log out and log back into the worklist application as jcooper.

This will take you into the My Tasks tab, which is probably empty, but if you click on the Initiated Tasks tab, it will list all the tasks that you have initiated. If you look at the task, you will see that you can perform a single action on it, which is to withdraw it.

Modifying a task

When we defined the task parameters on the task definition form, we had the option to specify whether the parameters are Editable via Worklist, and at the time we didn't select this option. If this option is selected, then anyone to whom the task is assigned, including the task owner and initiator, has the ability to modify the task payload.

Difference between task owner and initiator

Now, you may have noticed while specifying the various task details that, as well as being able to specify the task initiator, we can also specify the task owner. At this point, you may be asking what the difference is between these two roles.

The simple answer is the task owner has more administrative privileges when it comes to a task. The task initiator is the person who creates a particular instance of a task. Say, in our example, jcooper and jstein both request leave. In this case, they are both initiators and can each withdraw the task they requested (but not each other's).

On the other hand, the task owner may be the holiday administrator. They are responsible for administering all leave requests. This enables them to perform operations on behalf of any of the assigned task participants; additionally they can also reassign or escalate tasks.

The task owner can either be specified as part of the task definition, or on the Advanced tab of the BPEL Human Task Configuration window.

Note

If no task owner is specified, it defaults to the system administrator.

When the task owner logs into the worklist application they will see an additional tab, Administration Tasks, which will list all the tasks for which they are the task owner.

Requesting additional information about a task

Once you have been assigned a task, you sometimes need additional information about it before you can complete it. In our example, the manager may need more information about the reason for the leave request.

If a task initiator has been specified, then on the Task details form we have the option of selecting Request Information. If we select this option we are presented with the Request More Information form, where we can select who we want more information from, and enter details of the information required (which will be added as a comment to the task).

This will then assign the task to the initiator. The task will appear on the task creator's work queue with a state of Info Requested. The task creator can either update the details of the task (if allowed) or add their own comment to provide the additional information. Once done, they can choose the action Submit Information, and the task will be reassigned back to whoever requested the additional information.

This feature is automatically enabled when the task is opened. You can disable this feature if you want by overriding the default access settings for Actions in the Access tab of the task configuration form.

Note

We can request additional information, not just from the person who created the task, but anyone else who has already worked on the task or anyone else that we need further information from.

Dynamic task assignment

There are two approaches here. One is to assign the task to a specific group, which may contain one or more individuals. A classic example would be to assign a support request to the customer support group.

The other is to dynamically specify the user to assign to a task at runtime, based on the value of some parameter, which is roughly what we want to do. Actually, we want to look up the manager of the employee requesting the task and assign it to them.

If we go back to the Human Task Definition form (refer to Defining the human task section), and double-click on the Manager Approval step in the routing policy we defined, this will reopen the Edit Participant Type form. For the Data Type, specify that you want to select the participant By Expression, and then click on the browse icon for the Value field (circled in the following screenshot):

Dynamic task assignment

This will open up the Expression Builder , which was introduced in Chapter 5, Using BPEL to Build Composite Services and Business Processess. However, the key thing to notice here is that we only have access to the content of the task we are working on (not the full content of the BPEL process).

We need to create an expression that evaluates to the user ID of the employee's manager. Fortunately, one of the services that come with workflow is the identity service, which provides us with a simple way of querying the underlying identity layer to find out details about a user. In our case, we can use the getManager function to get the ID of the manager.

So within the Expression Builder, select the Identity Service Functions, and from here, select the getManager function and insert it into the expression. We now need to pass it the employee ID of whoever is requesting the leave. Expand the task payload; you will find it contains the content of the leave request. Select the employeeId and insert that as the parameter, as shown in the following screenshot:

Dynamic task assignment

You can now save the task, redeploy it, and run the process. Assuming you specify that the request is for jcooper, you will need to log in as jstein to approve the task.

Assigning tasks to multiple users or groups

So far, we have only looked at scenarios where we assign a task to a single user. However, workflow enables you to either assign a task to multiple users, or to one or more groups (or a combination of the two).

In this case, every user who is a member of the group or has the task assigned to them will be able to see the task on their queue. However, before anyone can work on the task, they must first claim it. Once claimed, other users will still be able to see the task, but only the user who has claimed the task will be able to perform any operations on it.

Note

Although group assignments are more likely to be static, you can also specify them dynamically in the same way we have for the user.

Cancelling or modifying a task

Another common requirement is to cancel or modify a task before it has completed the workflow. If we take our example, suppose that having submitted the leave request we changed our mind. Ideally we would like to be able to withdraw the task or modify it before someone goes to the effort of approving it.

Withdrawing a task

You may remember that when we first added the task to the BPEL process we had a field where we could specify a task initiator that we previously left blank. Well, if you specify a task initiator they are effectively the creator of the task and have the ability to withdraw the task.

To specify the task initiator, go back to your BPEL process and double-click on the Human Task. This will reopen the Human Task Configuration window (see Initializing the Workflow Parameter section), click the icon to the right of the initiator field, and this will launch the Expression Builder. Use this to specify the employeeId as the task initiator.

Now save the process, redeploy it, and run the process. Again, specify that the request is for jcooper, then log into the worklist application as jstein. You should notice that the task creator is jcooper. Don't approve the task, rather log out and log back into the worklist application as jcooper.

This will take you into the My Tasks tab, which is probably empty. If you click the Initiated Tasks tab, it will list all the tasks that you have initiated. If you look at the task, you will see that you can perform only a single action on it, which is to withdraw it.

Modifying a task

When we defined the task parameters on the task definition form, we had the option to specify whether the parameters are Editable via Worklist, which at the time we didn't select. If this option is selected, then anyone to whom the task is assigned has the ability to modify the task payload, including the task owner and initiator.

Difference between task owner and initiator

Now, you may have noticed while specifying the various task details that, as well as being able to specify the task initiator, we can also specify the task owner. At this point, you may be asking what the difference is between these two roles.

The simple answer is that the task owner has more administrative privileges when it comes to a task. The task initiator is the person who creates a particular instance of a task. Say, in our example, jcooper and jstein both request leave. In this case, they are both initiators and can each withdraw the task they requested (but not each other's).

On the other hand, the task owner may be the holiday administrator. They are responsible for administering all leave requests. This enables them to perform operations on behalf of any of the assigned task participants; additionally they can also reassign or escalate tasks.

The task owner can either be specified as part of the task definition, or on the Advanced tab of the BPEL Human Task Configuration window.

Note

If no task owner is specified, it defaults to the system administrator.

When the task owner logs into the worklist application they will see an additional tab, Administration Tasks, which will list all the tasks for which they are the task owner.

Requesting additional information about a task

Once assigned a task, you sometimes need additional information about it before you can complete it. In our example, the manager may need more information about the reason for the leave request.

If a task initiator has been specified, then on the Task details form we have the option of selecting Request Information. If we select this option we are presented with the Request More Information form, where we can select who we want more information from, and enter details of the information required (which will be added as a comment to the task).

This will then assign the task to the initiator. The task will appear on the task creator's work queue, with a state of Info Requested. The task creator can either update the details of the task (if allowed) or add their own comment to provide the additional information. Once done, they can choose the action Submit Information, and the task will be reassigned back to whoever requested the additional information.

This feature is automatically enabled when the task is opened. You can disable this feature if you want by overriding the default access settings for Actions in the Access tab of the task configuration form.

Note

We can request additional information not just from the person who created the task, but also from anyone else who has already worked on the task, or from any other user we need further information from.

Managing the assignment of tasks

There is often a requirement to reassign tasks; maybe the task approver is about to go on leave themselves. Before they go, they may want to reassign all uncompleted tasks so they can be dealt with by someone else while they are away.

Alternatively, the individual may have already gone on leave (or be indisposed for some other reason) with a series of tasks already on their queue, which their manager may need to reassign to someone else.

Depending on a user's privileges and whether they are a manager, the worklist application provides a number of methods for either reassigning, delegating, or escalating tasks. We will examine these in detail below.

Reassigning reportee tasks

If a user has any direct reports, then the worklist application will treat them as a manager. This will give them additional privileges to work on tasks that are either assigned to any of their direct reports or groups that they own.

Within the worklist application, managers have the additional tab, My Staff Tasks. If they select this, it will list all tasks currently assigned to any of their reports.

The list can be further filtered by selecting Advanced Search and specifying an appropriate query. For example, you could just show tasks assigned to a particular user or high priority tasks about to expire.

The manager has two basic options when it comes to staff tasks. They can work on the task directly themselves, carrying out the same set of actions as the assignee. Alternatively, they can reassign the task to another of their direct reports or to any of the groups that they own.

To see how we do this, log in as wfaulk (jstein's manager), and click on My Staff Tasks. Select the task(s) you want to reassign; then from the Actions drop-down list, select Reassign. This will open the Reassign Task window, as shown in the following screenshot:

Reassigning reportee tasks

Here we have the option to either Reassign or Delegate the task. Stick with the Reassign option for the time being, as we will look at delegation shortly.

The remainder of the screen allows us to search for the users and/or groups that we want to reassign the task to. You can choose to search just Users or Groups. In addition, you can further filter the list on the ID of the user or group, as well as on the first name or last name of the user.

When specifying the search criteria, you can use a * as a wildcard. For example, the pattern st* will bring back the list of users whose user ID, first name, or last name begins with st.

You will also notice that if you select a user, the Details panel will display basic information about the user, including their Manager, Reportees, and any Roles they have.

Use the arrows to move users/groups that you wish to reassign the task to from the search results box to the Selected box, and then click OK.

Reassigning your own task

In addition to reassigning staff tasks, any user can reassign their own tasks. To do this, they simply open the task from their task list as normal and select the Reassign option from the Action drop-down list. This will bring up the Reassign Task form that we just looked at.

An important point here is that the same restrictions on who a user can assign a task to apply regardless of whether it's the user's own task or a task belonging to one of their reportees.

Thus, users who have no direct reports will not be able to reassign their task to any other user. However, if they are a group owner, they will still have the ability to reassign the task to the group.

Note

If a user has the role "BPMWorkflowReassign", then they are allowed to reassign a task to anyone.

Delegating tasks

The other option we have when reassigning a task is to delegate it. This is very similar to reassigning a task, but with a number of key differences as follows:

  • You can only delegate a task to a single user
  • You cannot delegate a task to a group
  • You can delegate a task to anyone regardless of where they are in the organizational hierarchy

When you delegate a task it is assigned to a new user, but it also remains on your work queue so that either you or the delegated user can work on the task.

Escalating tasks

There will often be cases where a user needs to escalate the task. To do this, they simply select the task from their task list as normal and choose Escalate from the Action drop-down list. This will reassign the task to the user's manager.

Note

Tasks can also be automatically escalated, usually if not handled within a specified period of time. This is specified in Expiration and Escalation Policy, which forms part of the task definition.

Using rules to automatically manage tasks

Even though it's possible to manually reassign tasks, this can be inefficient and time-consuming. An alternative approach is to automate this using workflow rules.

You can define a rule to be applied either to a particular task type (for example, our leave request) or to all tasks. In addition, you can also specify when a rule is active, which can be during vacation periods, for a specified time period, or all the time (which is the default).

You can specify various filter criteria that are applied to the task attributes (for example, priority, initiator, acquired by) to further restrict which tasks the rule applies to.

Once you've specified the matching criteria for a rule, you can then specify whether you want to reassign or delegate the task. Essentially, the same restrictions on whom you are allowed to reassign a task to apply as if you were doing it manually (as covered in the previous section), with the added caveat that a rule can only reassign a task to a single user or group.

For rules defined for a particular task type, we have the option of being able to automatically set the task outcome. In the case of our leave request task, we can write a rule to automatically approve all leave requests that are one day in duration.

The final option is to take no action, which may seem a bit strange. However, this serves a couple of useful purposes. Often you only want a rule to be active at certain periods of time. One way to do this is to just specify a date range. An alternative is to use this to turn the rule on and off, as required over time.

The other use comes in when you define multiple rules. Rules are evaluated in order against a task until one is found that matches it.

For example, suppose you want to reassign all tasks except, say, an expense approval task. You would define two rules: a specific rule that matches the expense approval task and takes no action, and a generic rule that reassigns any task. You would then order the rules so that the expense approval rule is evaluated first. This way, the generic reassignment rule is triggered for every task except the expense approval task, as sketched below.
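
In outline, the ordered rule list for this example might look as follows (the task type name is illustrative):

    Rule 1 - Expense approval exception
             Condition: task type is ExpenseApproval
             Action:    no action (evaluation stops here for matching tasks)

    Rule 2 - Catch-all reassignment
             Condition: matches any task
             Action:    reassign to the nominated user or group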

Setting up a sample rule

For example, let's say Robert Stevenson (user ID rsteven) is John Steinbeck's deputy, and we want to create a rule that reassigns all leave requests assigned to jstein to rsteven except for any leave request made by rsteven.

To do this, log on to the worklist application as jstein, and click on the Preferences link in the top-right-hand corner of the worklist title bar. This will bring you into the My Rules tab, where a user can configure various rules for managing the assignment of tasks. By default, it displays the user's currently defined Vacation Period (which in this case is disabled).

Setting up a sample rule

Select the My Rules folder (below Vacation Period (Disabled)), and click on the plus icon (circled in the preceding screenshot). This will display the template for defining a new rule.

Setting up a sample rule

Enter a suitable name for the rule, but leave the checkbox Use as vacation rule unchecked. If we were to check this, then the rule would only be active during the user's vacation period.

Next we want to specify which tasks the rule should apply to. Click on the search icon to the right and this will pop up the Task Type Browser, where we can search for the required task type. Select the LeaveRequest task for the process default/LeaveApproval/1.0.

We will not specify a time period for the rule, as we want it to be active all the time. We now need to specify the conditions that apply to the rule and the appropriate action to take. First let's add the condition to prevent the rule reassigning leave requests made by rsteven.

From the Add Condition drop-down list, select the task attribute to which we want to apply the rule, which is, in our case, the Creator (that is, the task initiator), and then click the plus icon (circled in the following screenshot):

Setting up a sample rule

This will insert a condition line for testing the Creator attribute into our rule, as shown in the following screenshot:

Setting up a sample rule

In the drop-down list, select the test to be applied to the attribute; in our case, select isn't, and finally specify the user (rsteven). You can either directly enter the user ID or click the magnifying glass icon to search for the user with the user search facility we introduced earlier.

Finally, specify the task action, which is to reassign the task to rsteven. Your rule description should now look like the one shown in the following screenshot:

Setting up a sample rule
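
To summarize, the completed rule effectively says the following (the rule name shown is just an example):

    Rule:      Reassign leave requests to deputy
    Task type: LeaveRequest (default/LeaveApproval/1.0)
    Condition: Creator isn't rsteven
    Action:    Reassign to rsteven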

Click on Save to create the rule. Once you have created it, try creating two leave requests, one for jcooper and another for rsteven. You should see that only the request created for jcooper is reassigned to rsteven.

Log in as rsteven, and select the leave request that has been reassigned to that user. If you examine the full task history, you will see that it shows which rule was triggered to cause the task to be reassigned.

Note

A user can also specify rule conditions against the content of the task payload through the use of flex fields, as well as define rules for any groups that they own. We will examine flex fields in Chapter 17, Workflow Patterns.

Reassigning reportee tasks

If a user has any direct reports, then the worklist application will treat them as a manager. This will give them additional privileges to work on tasks that are either assigned to any of their direct reports or groups that they own.

Within the work list application, managers have the additional tab, My Staff Tasks. If they select this, it will list all tasks currently assigned to any of their reports.

The list can be further filtered by selecting Advanced Search and specifying an appropriate query. For example, you could just show tasks assigned to a particular user or high priority tasks about to expire.

The manager has two basic options when it comes to staff tasks, they can either work on the task directly themselves, where they can carry out the same sets of actions as the assignee. Alternatively, they can choose to reassign the task to another of their direct reports or to any of the groups that they own.

To see how we do this, log in as wfaulk (jstein's manager), and click on My Staff Tasks. Select the task(s) you want to reassign; then from the Actions drop-down list, select Reassign. This will open the Reassign Task window, as shown in the following screenshot:

Reassigning reportee tasks

Here we have the option to either Reassign or Delegate the task. Stick with the Reassign option for the time being, as we will look at delegation shortly.

The remainder of the screen allows us to search for the users and or groups that we want to reassign the task to. You can choose to search just Users or Groups. In addition, you can further filter the list on the ID of the user or group, as well as the first name or last name of the user.

When specifying the search criteria, you can use a * to match any character. For example, the pattern st* will bring back the list of users whose user ID, first, or last name begin with st.

You will also notice that if you select a user, the Details panel will display basic information about the user, including their Manager, Reportees, and any Roles they have.

Use the arrows to move users/groups that you wish to reassign the task to from the search results box to the Selected box, and then click OK.

Reassigning your own task

In addition to reassigning staff tasks, any user can reassign their own tasks. To do this, they simply open the task from their task list as normal and select the Reassign option from the Action drop-down list. This will bring up the Reassign Task form that we just looked at.

An important point here is that the same restrictions on who a user can assign a task to apply regardless of whether it's the user's own task or a task belonging to one of their reportees.

Thus, users who have no direct reports will not be able to reassign their task to any other user. However, if they are a group owner, they will still have the ability to reassign the task to the group.

Note

If a user has the role "BPMWorkflowReassign", then they are allowed to reassign a task to anyone.

Delegating tasks

The other option we have when reassigning a task is to delegate it. This is very similar to reassigning a task, but with a number of key differences as follows:

  • You can only delegate a task to a single user
  • You cannot delegate a task to a group
  • You can delegate a task to anyone regardless of where they are in the organizational hierarchy

When you delegate a task it is assigned to a new user, but it also remains on your work queue so that either you or the delegated user can work on the task.

Escalating tasks

There will often be cases where a user needs to escalate the task. To do this, they simply select the task from their task list as normal and choose Escalate from the Action drop-down list. This will reassign the task to the user's manager.

Note

Tasks can also be automatically escalated, usually if not handled within a specified period of time. This is specified in Expiration and Escalation Policy, which forms part of the task definition.

Using rules to automatically manage tasks

Even though it's possible to manually reassign tasks, this can be inefficient and time-consuming. An alternative approach is to automate this using workflow rules.

You can either define a rule to be applied to a particular task type (for example our leave request) or to all tasks. In addition, you can also specify when a rule is active, which can be during vacation periods, for a specified time period, or active all the time (which is the default).

You can specify various filter criteria that are applied to the task attributes (for example, priority, initiator, acquired by) to further restrict which tasks the rule applies to.

Once you've specified the matching criteria for a rule, you can then specify whether you want to reassign or delegate the task. Essentially, the same criteria applies to whomever you are allowed to reassign a task to (if you were to do it manually, as covered in the previous section, with the added caveat that you can only reassign a task to a single user or group).

For rules defined for a particular task type, we have the option of being able to automatically set the task outcome. In the case of our leave request task, we can write a rule to automatically approve all leave requests that are one day in duration.

The final option is to take no action, which may seem a bit strange. However, this serves a couple of useful purposes. Often you only want a rule to be active at certain periods of time. One way to do this is to just specify a date range. An alternative is to use this to turn the rule on and off, as required over time.

The other use comes in when you define multiple rules. Rules are evaluated in order against a task until a rule is found that matches a particular task.

For example, to create a rule that reassigned all tasks, except say an expense approval task, you would do the following. Define two rules, a generic rule to reassign any task and a specific rule that matched the expense approval task that did nothing. We would then order the rules so that the expense approval rule triggered first. This way, the generic rule to reassign a task would be triggered for all tasks except the expense approval task.

Setting up a sample rule

For example, let's say Robert Stevenson (user ID rsteven) is John Steinbeck's deputy, and we want to create a rule that reassigns all leave requests assigned to jstein to rsteven except for any leave request made by rsteven.

To do this, you log onto the worklist application as jstein, and click on the Preferences link on the top-right-hand corner of the worklist title bar. This will bring you into the My Rules tab, where a user can configure various rules for managing the assignment of tasks. By default it displays the users currently defined Vacation Period (which in this case is disabled).

Setting up a sample rule

Select the My Rules folder (below Vacation Period (Disabled)), and click on the plus icon (circled in the preceding screenshot). This will display the template for defining a new rule.

Setting up a sample rule

Enter a suitable name for the rule, but leave the checkbox Use as vacation rule unchecked. If we were to check this, then the rule would only be active during the user's vacation period.

Next we want to specify which tasks the rule should apply to. Click on the search icon to the right and this will pop up the Task Type Browser, where we can search for the required task type. Select the LeaveRequest task for the process default/LeaveApproval/1.0.

We will not specify a time period for the rule, as we want it to be active all the time. We now need to specify the conditions that apply to the rule and the appropriate action to take. First let's add the condition to prevent the rule reassigning leave requests made by rsteven.

From the Add Condition drop-down list, select the task attribute to which we want to apply the rule, which is, in our case, the Creator (that is, the task initiator), and then click the plus icon (circled in the following screenshot):

Setting up a sample rule

This will insert a condition line for testing the Creator attribute into our rule, as shown in the following screenshot:

Setting up a sample rule

In the drop-down list, select the test to be applied to the attribute. So in our case, we select isn't and finally specify the user (rsteven). You can either directly enter the user ID or click the magnifying glass icon to search for the user with the user search facility we introduced earlier.

Finally, specify the task action, which is to reassign the task to rsteven. Your rule description should now look like the one shown in the following screenshot:

Setting up a sample rule

Finally, click on Save to create the rule. Once you have created the rule, try creating two leave requests, one for jcooper and another for rsteven. You should see that only the request created for jcooper is reassigned to rsteven.

Log in as rsteven, and select the leave request that has been reassigned to that user. If you examine the full task history, you will see that it shows which rule was triggered to cause the task to be reassigned.

Note

A user can also specify rule conditions against the content of the task payload through the use of flex fields, as well as define rules for any groups that they own. We will examine flex fields in Chapter 17, Workflow Patterns.

Reassigning your own task

In addition to reassigning staff tasks, any user can reassign their own tasks. To do this, they simply open the task from their task list as normal and select the Reassign option from the Action drop-down list. This will bring up the Reassign Task form that we just looked at.

An important point here is that the same restrictions on who a user can assign a task to apply regardless of whether it's the user's own task or a task belonging to one of their reportees.

Thus, users who have no direct reports will not be able to reassign their task to any other user. However, if they are a group owner, they will still have the ability to reassign the task to the group.

Note

If a user has the role "BPMWorkflowReassign", then they are allowed to reassign a task to anyone.

Delegating tasks

The other option we have when reassigning a task is to delegate it. This is very similar to reassigning a task, but with a number of key differences as follows:

  • You can only delegate a task to a single user
  • You cannot delegate a task to a group
  • You can delegate a task to anyone regardless of where they are in the organizational hierarchy

When you delegate a task it is assigned to a new user, but it also remains on your work queue so that either you or the delegated user can work on the task.

Escalating tasks

There will often be cases where a user needs to escalate the task. To do this, they simply select the task from their task list as normal and choose Escalate from the Action drop-down list. This will reassign the task to the user's manager.

Note

Tasks can also be automatically escalated, usually if not handled within a specified period of time. This is specified in Expiration and Escalation Policy, which forms part of the task definition.

Using rules to automatically manage tasks

Even though it's possible to manually reassign tasks, this can be inefficient and time-consuming. An alternative approach is to automate this using workflow rules.

You can either define a rule to be applied to a particular task type (for example our leave request) or to all tasks. In addition, you can also specify when a rule is active, which can be during vacation periods, for a specified time period, or active all the time (which is the default).

You can specify various filter criteria that are applied to the task attributes (for example, priority, initiator, acquired by) to further restrict which tasks the rule applies to.

Once you've specified the matching criteria for a rule, you can then specify whether you want to reassign or delegate the task. Essentially, the same criteria applies to whomever you are allowed to reassign a task to (if you were to do it manually, as covered in the previous section, with the added caveat that you can only reassign a task to a single user or group).

For rules defined for a particular task type, we have the option of being able to automatically set the task outcome. In the case of our leave request task, we can write a rule to automatically approve all leave requests that are one day in duration.

The final option is to take no action, which may seem a bit strange. However, this serves a couple of useful purposes. Often you only want a rule to be active at certain periods of time. One way to do this is to just specify a date range. An alternative is to use this to turn the rule on and off, as required over time.

The other use comes in when you define multiple rules. Rules are evaluated in order against a task until a rule is found that matches a particular task.

For example, to create a rule that reassigned all tasks, except say an expense approval task, you would do the following. Define two rules, a generic rule to reassign any task and a specific rule that matched the expense approval task that did nothing. We would then order the rules so that the expense approval rule triggered first. This way, the generic rule to reassign a task would be triggered for all tasks except the expense approval task.

Setting up a sample rule

For example, let's say Robert Stevenson (user ID rsteven) is John Steinbeck's deputy, and we want to create a rule that reassigns all leave requests assigned to jstein to rsteven except for any leave request made by rsteven.

To do this, you log onto the worklist application as jstein, and click on the Preferences link on the top-right-hand corner of the worklist title bar. This will bring you into the My Rules tab, where a user can configure various rules for managing the assignment of tasks. By default it displays the users currently defined Vacation Period (which in this case is disabled).

Setting up a sample rule

Select the My Rules folder (below Vacation Period (Disabled)), and click on the plus icon (circled in the preceding screenshot). This will display the template for defining a new rule.

Setting up a sample rule

Enter a suitable name for the rule, but leave the checkbox Use as vacation rule unchecked. If we were to check this, then the rule would only be active during the user's vacation period.

Next we want to specify which tasks the rule should apply to. Click on the search icon to the right and this will pop up the Task Type Browser, where we can search for the required task type. Select the LeaveRequest task for the process default/LeaveApproval/1.0.

We will not specify a time period for the rule, as we want it to be active all the time. We now need to specify the conditions that apply to the rule and the appropriate action to take. First let's add the condition to prevent the rule reassigning leave requests made by rsteven.

From the Add Condition drop-down list, select the task attribute to which we want to apply the rule, which is, in our case, the Creator (that is, the task initiator), and then click the plus icon (circled in the following screenshot):

Setting up a sample rule

This will insert a condition line for testing the Creator attribute into our rule, as shown in the following screenshot:

Setting up a sample rule

In the drop-down list, select the test to be applied to the attribute. So in our case, we select isn't and finally specify the user (rsteven). You can either directly enter the user ID or click the magnifying glass icon to search for the user with the user search facility we introduced earlier.

Finally, specify the task action, which is to reassign the task to rsteven. Your rule description should now look like the one shown in the following screenshot:

Setting up a sample rule

Finally, click on Save to create the rule. Once you have created the rule, try creating two leave requests, one for jcooper and another for rsteven. You should see that only the request created for jcooper is reassigned to rsteven.

Log in as rsteven, and select the leave request that has been reassigned to that user. If you examine the full task history, you will see that it shows which rule was triggered to cause the task to be reassigned.

Note

A user can also specify rule conditions against the content of the task payload through the use of flex fields, as well as define rules for any groups that they own. We will examine flex fields in Chapter 17, Workflow Patterns.

Delegating tasks

The other option we have when reassigning a task is to delegate it. This is very similar to reassigning a task, but with a number of key differences as follows:

  • You can only delegate a task to a single user
  • You cannot delegate a task to a group
  • You can delegate a task to anyone regardless of where they are in the organizational hierarchy

When you delegate a task it is assigned to a new user, but it also remains on your work queue so that either you or the delegated user can work on the task.

Escalating tasks

There will often be cases where a user needs to escalate the task. To do this, they simply select the task from their task list as normal and choose Escalate from the Action drop-down list. This will reassign the task to the user's manager.

Note

Tasks can also be automatically escalated, usually if not handled within a specified period of time. This is specified in Expiration and Escalation Policy, which forms part of the task definition.

Using rules to automatically manage tasks

Even though it's possible to manually reassign tasks, this can be inefficient and time-consuming. An alternative approach is to automate this using workflow rules.

You can either define a rule to be applied to a particular task type (for example our leave request) or to all tasks. In addition, you can also specify when a rule is active, which can be during vacation periods, for a specified time period, or active all the time (which is the default).

You can specify various filter criteria that are applied to the task attributes (for example, priority, initiator, acquired by) to further restrict which tasks the rule applies to.

Once you've specified the matching criteria for a rule, you can then specify whether you want to reassign or delegate the task. Essentially, the same criteria applies to whomever you are allowed to reassign a task to (if you were to do it manually, as covered in the previous section, with the added caveat that you can only reassign a task to a single user or group).

For rules defined for a particular task type, we have the option of being able to automatically set the task outcome. In the case of our leave request task, we can write a rule to automatically approve all leave requests that are one day in duration.

The final option is to take no action, which may seem a bit strange. However, this serves a couple of useful purposes. Often you only want a rule to be active at certain periods of time. One way to do this is to just specify a date range. An alternative is to use this to turn the rule on and off, as required over time.

The other use comes in when you define multiple rules. Rules are evaluated in order against a task until a rule is found that matches a particular task.

For example, to create a rule that reassigned all tasks, except say an expense approval task, you would do the following. Define two rules, a generic rule to reassign any task and a specific rule that matched the expense approval task that did nothing. We would then order the rules so that the expense approval rule triggered first. This way, the generic rule to reassign a task would be triggered for all tasks except the expense approval task.

Setting up a sample rule

For example, let's say Robert Stevenson (user ID rsteven) is John Steinbeck's deputy, and we want to create a rule that reassigns all leave requests assigned to jstein to rsteven except for any leave request made by rsteven.

To do this, you log onto the worklist application as jstein, and click on the Preferences link on the top-right-hand corner of the worklist title bar. This will bring you into the My Rules tab, where a user can configure various rules for managing the assignment of tasks. By default it displays the users currently defined Vacation Period (which in this case is disabled).

Setting up a sample rule

Select the My Rules folder (below Vacation Period (Disabled)), and click on the plus icon (circled in the preceding screenshot). This will display the template for defining a new rule.

Setting up a sample rule

Enter a suitable name for the rule, but leave the checkbox Use as vacation rule unchecked. If we were to check this, then the rule would only be active during the user's vacation period.

Next we want to specify which tasks the rule should apply to. Click on the search icon to the right and this will pop up the Task Type Browser, where we can search for the required task type. Select the LeaveRequest task for the process default/LeaveApproval/1.0.

We will not specify a time period for the rule, as we want it to be active all the time. We now need to specify the conditions that apply to the rule and the appropriate action to take. First let's add the condition to prevent the rule reassigning leave requests made by rsteven.

From the Add Condition drop-down list, select the task attribute to which we want to apply the rule, which is, in our case, the Creator (that is, the task initiator), and then click the plus icon (circled in the following screenshot):

Setting up a sample rule

This will insert a condition line for testing the Creator attribute into our rule, as shown in the following screenshot:

Setting up a sample rule

From the drop-down list, select the test to be applied to the attribute; in our case, select isn't, and then specify the user (rsteven). You can either enter the user ID directly or click the magnifying glass icon to search for the user with the user search facility we introduced earlier.

Next, specify the task action, which is to reassign the task to rsteven. Your rule description should now look like the one shown in the following screenshot:

Setting up a sample rule

Finally, click on Save to create the rule. Once you have created the rule, try creating two leave requests, one initiated by jcooper and another by rsteven. You should see that only the request created by jcooper is reassigned to rsteven.

Log in as rsteven, and select the leave request that has been reassigned to that user. If you examine the full task history, you will see that it shows which rule was triggered to cause the task to be reassigned.

Note

A user can also specify rule conditions against the content of the task payload through the use of flex fields, as well as define rules for any groups that they own. We will examine flex fields in Chapter 17, Workflow Patterns.

Escalating tasks

There will often be cases where a user needs to escalate the task. To do this, they simply select the task from their task list as normal and choose Escalate from the Action drop-down list. This will reassign the task to the user's manager.

Note

Tasks can also be automatically escalated, usually if not handled within a specified period of time. This is specified in Expiration and Escalation Policy, which forms part of the task definition.



Summary

Human workflow is a key requirement for many projects. In this chapter, we saw how easy it is to insert a human task into a BPEL process, as well as implement the corresponding user interface to process the task.

We also looked at how business users can use the BPM worklist application to process their tasks and manage their routing, including reassigning, delegating, and escalating tasks. Finally, we saw how business users can automate much of this task management by defining rules that automatically delegate, reassign, or complete a task.

Chapter 7. Using Business Rules to Define Decision Points

At runtime, there may be many potential paths through a BPEL process, controlled by conditional statements such as switch or while activities. Typically, the business rules that govern which path to take at any given point are written as XPath expressions embedded within the appropriate activity.

Although this is an acceptable approach, we often find that while the process itself may be relatively static, the business rules embedded within the activities may change on a more frequent basis. This will require us to update the BPEL process and redeploy it, even though the process flow itself hasn't changed.

In addition, by embedding the rule directly within the decision point, we often end up having to reimplement the same rule every time it is used, either within the same process or across multiple processes. Apart from being inefficient, this can lead to inconsistent implementations of the rules, as well as requiring us to update the rules in multiple places every time they change.

The Oracle Business Rules engine that comes as part of the SOA Suite provides a declarative mechanism for defining business rules externally to our application. This not only ensures that each rule is used in a consistent fashion, but also makes the rules simpler and quicker to modify. We only have to modify a rule once, and can do so with almost immediate effect, thus increasing the agility of our solution.

For those of you familiar with 10gR3, you will notice that JDeveloper comes with a new rules editor that is a lot more intuitive and simpler to use than the old browser-based editor. In addition, 11gR1 introduces decision tables, which provide a spreadsheet-like format for defining rules. While the rules editor is still very much a developer-oriented tool, these improvements make it a lot friendlier for business analysts, allowing them to better understand the rules that have been written as well as make simple changes.

In this chapter, we will introduce the new rules editor and look at how we can use it to define a decision service to automate the approval of leave requests. Once we've done this, we'll see how to invoke the rule from the leave approval BPEL process. We will first implement these as a standard set of rules and then examine how we can simplify them by using a decision table.

Business rule concepts

Before we implement our first rule, let's briefly introduce the key components which make up a business rule. These are:

  • Facts: Represent the data or business objects that rules are applied to.
  • Rules: A rule consists of two parts, namely, an IF part that consists of one or more tests to be applied to one or more facts, and a THEN part that lists the actions to be carried out should the tests evaluate to true.
  • Rule Set: As the name implies, it is just a set of one or more related rules that are designed to work together.
  • Dictionary: A dictionary is the container of all components that make up a business rule. It holds all the Facts, Rule Sets, and Rules for a business rule.

In addition, a dictionary may also contain decision tables, functions, variables, and constraints. We will introduce these in more detail later in this chapter.

To execute a business rule, you assert (submit) one or more facts to the rules engine. It will apply the rules to the facts; that is, each fact will be tested against the IF part of each rule and, if it evaluates to true, the rule will perform the specified actions for that fact. This may result in the creation of new facts or the modification of existing facts (which may in turn result in further rule evaluation).
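
To make the IF/THEN structure concrete, here is a minimal, purely illustrative Java sketch; it is not the Oracle rules engine or its SDK. The LeaveRequestFact and OneDayVacationRule types are hypothetical stand-ins, based on the leave request example we develop later in this chapter.

    public class BusinessRuleConceptSketch {

        static class LeaveRequestFact {
            String leaveType;     // for example, "Vacation"
            String startDate;     // kept as plain strings to keep the sketch minimal
            String endDate;
            String requestStatus; // for example, "Manual" or "Approved"
        }

        static class OneDayVacationRule {
            // IF part: one or more tests applied to the fact.
            boolean matches(LeaveRequestFact request) {
                return "Vacation".equals(request.leaveType)
                        && request.startDate.equals(request.endDate);
            }
            // THEN part: the actions carried out when the tests evaluate to true.
            void apply(LeaveRequestFact request) {
                request.requestStatus = "Approved";
            }
        }

        public static void main(String[] args) {
            // "Asserting" a fact simply means submitting it to the engine for evaluation.
            LeaveRequestFact fact = new LeaveRequestFact();
            fact.leaveType = "Vacation";
            fact.startDate = "2011-03-01"; // illustrative values
            fact.endDate = "2011-03-01";
            fact.requestStatus = "Manual";

            OneDayVacationRule rule = new OneDayVacationRule();
            if (rule.matches(fact)) { // evaluate the IF part against the asserted fact
                rule.apply(fact);     // perform the THEN actions, modifying the fact
            }
            System.out.println(fact.requestStatus); // Approved
        }
    }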

XML facts

The rule engine supports four types of facts: Java Facts, XML Facts, RL Facts, and ADF Facts. The type of fact that you want to use typically depends on the context in which you will be using the rules engine.

For example, if you are calling the rule engine from Java, then you would work with Java Facts as this provides a more integrated way of combining the two components. As we are using the rule engine within a composite, it makes sense to use XML facts.

The rule editor uses XML schemas to generate JAXB 2.0 classes, which are then imported to implement the corresponding XML facts. Using JAXB, particularly when used in conjunction with BPEL, places a number of constraints on how we define our XML schemas, including:

  • Within BPEL, you can only define variables based on globally defined elements. Thus all input and output facts passed to the decision service must be defined as global elements within our XML schemas.
  • When defining the input and output facts for any complexType (for example, tLeaveRequest), there can only be one global element of that type (for example, leaveRequest), as illustrated in the sketch after this list.
  • The element naming convention for JAXB means that elements or types with underscores in their names can cause compilation errors.
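
As a rough illustration of what the rules editor generates from LeaveRequest.xsd, the Java fragment below sketches the JAXB ObjectFactory and fact class. The real generated classes are larger and contain further annotations and accessors; the namespace URI and property names shown here are assumptions based on this chapter's example.

    package com.packtpub.schemas.leaverequest;

    import javax.xml.bind.JAXBElement;
    import javax.xml.bind.annotation.XmlElementDecl;
    import javax.xml.bind.annotation.XmlRegistry;
    import javax.xml.namespace.QName;

    @XmlRegistry
    class ObjectFactory {

        // Namespace assumed for illustration.
        private static final QName LEAVE_REQUEST_QNAME =
                new QName("http://schemas.packtpub.com/LeaveRequest", "leaveRequest");

        public TLeaveRequest createTLeaveRequest() {
            return new TLeaveRequest();
        }

        // Exactly one global element declaration of type tLeaveRequest, as required above.
        @XmlElementDecl(namespace = "http://schemas.packtpub.com/LeaveRequest", name = "leaveRequest")
        public JAXBElement<TLeaveRequest> createLeaveRequest(TLeaveRequest value) {
            return new JAXBElement<TLeaveRequest>(LEAVE_REQUEST_QNAME, TLeaveRequest.class, value);
        }
    }

    class TLeaveRequest {
        // No underscores in the names, in line with the JAXB naming constraint noted above.
        protected String leaveType;
        protected String startDate;
        protected String endDate;
        protected String requestStatus;
        // getters and setters omitted for brevity
    }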

Decision services

To invoke a business rule within a composite, we need to go through a number of steps. First, we must create a session with the rules engine; then we can assert one or more facts, execute the ruleset, and finally retrieve the results.

We do this via a decision service (or function). This is essentially a web-service wrapper around a rules dictionary, which takes care of managing the session with the rules engine as well as governing which ruleset we wish to apply.

The wrapper allows a composite to assert one or more facts, execute one or more rulesets against the asserted facts, retrieve the results, and then reset the session. This can be done within a single invocation of an operation or over multiple operations.
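
Conceptually, a single invocation of the decision service boils down to the sequence sketched below. This is purely illustrative: RuleSession and its methods are hypothetical names, not the Oracle Rules SDK, and in practice the generated decision service performs these steps for you behind its web service interface.

    interface RuleSession {
        void assertFact(Object fact);              // 1. assert one or more facts
        void executeRuleset(String rulesetName);   // 2. run the chosen ruleset against them
        <T> T retrieveResult(Class<T> resultType); // 3. retrieve the (possibly modified) facts
        void reset();                              // 4. clear the session for the next request
    }

    class DecisionServiceInvocationSketch {
        // A single "execute" style call to the decision service amounts to this sequence.
        static <T> T evaluate(RuleSession session, T fact, Class<T> factType, String rulesetName) {
            session.assertFact(fact);
            session.executeRuleset(rulesetName);
            T result = session.retrieveResult(factType);
            session.reset();
            return result;
        }
    }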

Leave approval business rule

For our first rule, we are going to build on our leave request example from the previous chapter, Adding in Human Workflow. If you remember, we implemented a simple process requiring every leave request to go to an individual's manager for approval. However, what we would like is a rule that automatically approves a request as long as it meets certain company guidelines.

To begin with, we will write a simple rule to automatically approve a leave request that is of the type Vacation and only for one day's duration. This is a pretty trivial example, but once we've done this, we will look at how to extend this rule to handle more complex examples.

Creating a decision service

Within JDeveloper, open up your LeaveApproval application from the previous chapter (or alternatively open the sample provided with the book). Open the composite.xml file for the application and then, from the Component Palette, drag and drop a Business Rule onto the composite, as shown in the following screenshot:

Creating a decision service

This will launch the Create Business Rules dialog, as shown in the following screenshot:

Creating a decision service

The first step is to give our dictionary a name, such as LeaveApprovalRules, and a corresponding Package name.

In addition, we need to specify the Input and Output facts that we will pass to our decision service. For our purpose, we will pass in a single leave request. The rule engine will then apply the rules that we define and update the status of the leave request to either Approved or Manual (to indicate the request needs to be manually approved).

So we need to define a single input fact and output fact, both of type leaveRequest. To do this, click on the plus symbol (marked in the preceding screenshot), and select Input.

This will bring up the standard Type Chooser window; browse the LeaveRequest.xsd and select leaveRequest. Do the same again to specify an Output fact.

Note

When creating facts based on an XML schema, the rules editor will generate corresponding JAXB Java classes and place them in the specified Package. It is a good practice to specify a different package name for every XML schema to prevent conflicting class definitions.

Next, click the Advanced tab. Here we can see that JDeveloper has given the default name LeaveApprovalRules_DecisionService_1 to our decision service. Give it a more meaningful name, such as LeaveApprovalDecisionService.

Creating a decision service

Now click OK. JDeveloper will inform you that it is creating the business rule dictionary for LeaveApprovalRules. Once completed, your composite should now look as shown in the following screenshot:

Creating a decision service

We are now ready to implement our business rules. Double-click on the LeaveApprovalRules component, and this will launch the rules editor, which is shown in the next screenshot.

Implementing our business rules

The rules editor allows you to view/edit the various components which make up your business rules. To select a particular component, such as Facts, Functions, Globals, and so on, just click on the corresponding tab down the left-hand side.

Implementing our business rules

You will see that, by default, JDeveloper has created a skeleton rules dictionary based on the inputs we just specified.

Select the Facts tab (as shown in the preceding screenshot). You will see that it contains two XML facts (TLeaveRequest and com.packtpub.schemas.leaverequest.ObjectFactory), which are based on the inputs/outputs we defined earlier as well as a set of standard Java facts, which are automatically included within a rules dictionary.

Next, select the Decision Functions tab. You will see that it contains a single decision function, LeaveApprovalDecisionService (that is, the name we specified on the Advanced tab when creating our business rule).

We will introduce some of the other tabs later in this chapter, but for the time being, we will start by defining our first rule. By default, the rules editor will have created a single ruleset with the name Ruleset_1. Click on the Ruleset_1 tab to open up the ruleset within the editor.

Expand the ruleset to show its details by clicking on the plus symbol (circled in the following screenshot). We can see that the ruleset has three properties: Name, Description, and Effective Date.

The Effective Date enables us to specify the period of time for which the ruleset will be applied, allowing us to define multiple versions of the same ruleset, for example, a current ruleset and a future version that we wish to come into effect at a defined point in the future.

Rename the ruleset to something more meaningful, for example, Employee Leave Approval Policy; add a description if you want and ensure that Effective Date is set to Always Valid.

Adding a rule to our ruleset

To add a rule, click the green plus symbol in the top-right-hand corner and select Create Rule, as shown in the following screenshot (alternatively, click on the Create Rule button, circled in the same screenshot).

Adding a rule to our ruleset

This will add a rule to our ruleset with the default name Rule_1, as shown in the following screenshot. Here, we can see that a rule consists of two parts, an IF part, which consists of one or more tests to be applied to a fact or facts, and a THEN part, which specifies the actions to be carried out, should the test evaluate to true.

To give the rule a more meaningful name, simply click on the name and enter a new name (for example, One Day Vacation). By clicking on the <enter description> element, you can also add a description for the rule.

Adding a rule to our ruleset

Creating the IF clause

For our leave approval rule, we need to define two tests: one to check that the request is only one day in duration, which we can do by checking that the start date equals the end date, and a second to check that the request is of type Vacation.

To define the first test, click on <insert test>. This will add the line <operand> == <operand> under the IF statement, where we can define the test condition.

Creating the IF clause

Click on the first <operand>. This will display a drop-down list listing the valid facts and their attributes that we can test. From here, we can select the value to be tested, for example, TLeaveRequest.startDate in our case.

Creating the IF clause

Next, from the operator drop-down list, select the test to be applied to the first operand (== in our case). We can choose to compare it either to a specified value or to a second operand. For our purpose, we want to check that TLeaveRequest.startDate equals TLeaveRequest.endDate, so click on the second operand and select TLeaveRequest.endDate from the drop-down list.

To create our second test, we follow pretty much the same process. This time, we want to test that the operand TLeaveRequest.leaveType is equal to the value Vacation, so select the right-hand operand and type the value in directly:

Creating the IF clause

Note that the rule editor has automatically inserted an and clause between our two tests. If you click on it, you have the option of changing it to an or clause.

Creating the Then clause

Now that we have defined our test, we need to define the action to take if the test evaluates to true. Click on <insert action>. This will display a drop-down list where you need to specify the Action Type you wish to carry out.

Creating the Then clause

The rule editor allows us to choose from the following action types:

  • assert new: We use this to create and assert a new fact, for example, a new LeaveRequest. Once asserted, the new fact will be evaluated by the rules engine against the ruleset.
  • modify: We can use this to either assign a value to a variable or a fact attribute; in our case we want to assign a status of Approved to the requestStatus property.
  • retract: This enables you to retract any of the facts matched in the pattern (for example, TLeaveRequest) so that it will no longer be evaluated as part of the ruleset.
  • call: This allows you to call a function to perform one or more actions.

The actions assert new and retract are important when we are dealing with rulesets that involve multiple interdependent facts, as they allow us to control which facts are being evaluated by the rule engine at any particular time. Here, we are only dealing with a single fact, so we don't examine these constructs in this chapter, leaving them to Chapter 18, Using Business Rules to Implement Services.

For our purposes, we want to update the status of our leave, so select modify. Our rule should now look as shown in the following screenshot:

Creating the Then clause

The next step is to specify the fact to be modified. Click on the <target> element and you will be presented with a list of facts that are within scope. In our case, this will only be the TLeaveRequest that has just been matched by the IF clause, so select this. Our rule will now appear, as shown in the following screenshot:

Creating the Then clause

We now need to specify the properties we wish to modify. Click on <add property> to open the Properties dialog; this will display a list of all the fact's properties, allowing us to modify them as appropriate.

Select the Value cell for requestStatus. From here, you can directly enter a value, select a value from the drop-down list, or launch the expression builder. For our purposes, just enter the string Approved, as shown in the following screenshot, and then click Close.

Creating the Then clause

We don't need to specify values for any of the other properties, as the rules engine will only update those properties where a new value has been specified.

This completes the definition of our first rule. The next step is to wire it into our BPEL process.

Creating the Then clause

Creating a decision service

Within JDeveloper, open up your LeaveApproval application from the previous chapter (or alternately open the sample provided with the book). Open up the composite.xml file for the application and then from the Component Palette, drag-and-drop a Business Rule onto the composite, as shown in the following screenshot:

Creating a decision service

This will launch the Create Business Rules dialog, as shown in the following screenshot:

Creating a decision service

The first step is to give our dictionary a name, such as LeaveApprovalRules, and a corresponding Package name.

In addition, we need to specify the Input and Output facts that we will pass to our decision service. For our purpose, we will pass in a single leave request. The rule engine will then apply the rules that we define and update the status of the leave request to either Approved or Manual (to indicate the request needs to be manually approved).

So we need to define a single input fact and output fact, both of type leaveRequest. To do this, click on the plus symbol (marked in the preceding screenshot), and select Input.

This will bring up the standard Type Chooser window; browse the LeaveRequest.xsd and select leaveRequest. Do the same again to specify an Output fact.

Note

When creating facts based on an XML schema, the rules editor will generate corresponding JAXB Java classes and place them in the specified Package. It is a good practice to specify a different package name for every XML schema to prevent conflicting class definitions.

Next, click the Advanced tab. Here we can see that JDeveloper has given the default name LeaveApprovalRules_DecisionService_1 to our decision service. Give it a more meaningful name such as LeaveApprovalDecisonService.

Creating a decision service

Now click OK. JDeveloper will inform you that it is creating the business rule dictionary for LeaveApprovalRules. Once completed, your composite should now look as shown in the following screenshot:

Creating a decision service

We are now ready to implement our business rules. Double-click on the LeaveApprovalRules component, and this will launch the rules editor, which is shown in the next screenshot.

Implementing our business rules

The rules editor allows you to view/edit the various components which make up your business rules. To select a particular component, such as Facts, Functions, Globals, and so on, just click on the corresponding tab down the left-hand side.

Implementing our business rules

You will see that, by default, JDeveloper has created a skeleton rules dictionary-based on the inputs we just specified.

Select the Facts tab (as shown in the preceding screenshot). You will see that it contains two XML facts (TLeaveRequest and com.packtpub.schemas.leaverequest.ObjectFactory), which are based on the inputs/outputs we defined earlier as well as a set of standard Java facts, which are automatically included within a rules dictionary.

Next, select the Decision Functions tab. You will see that it contains a single decision function LeaveApprovalDecisonService (that is, the name we specified on the Advanced tab when creating our business rule).

We will introduce some of the other tabs later in this chapter, but for the time being, we will start by defining our first rule. By default, the rules editor will have created a single ruleset with the name Ruleset_1. Click on the Ruleset_1 tab to open up the ruleset within the editor.

Expand the ruleset to show its details by clicking on the plus symbol (circled in the following screenshot). We can see that the ruleset has three properties: Name, Description, and Effective Date.

The Effective Date enables us to specify a period in time for which the ruleset will be applied, allowing you to define multiple versions of the same ruleset. For example, a current ruleset and a future version that we wish to come into effect at a defined time in the future.

Rename the ruleset to something more meaningful, for example, Employee Leave Approval Policy; add a description if you want and ensure that Effective Date is set to Always Valid.

Adding a rule to our ruleset

To add a rule, click the green plus symbol on the top-right-hand corner, and select Create Rule, as shown in the following screenshot (alternatively click on the Create Rule button, circled in the following screenshot).

Adding a rule to our ruleset

This will add a rule to our ruleset with the default name Rule_1, as shown in the following screenshot. Here, we can see that a rule consists of two parts, an IF part, which consists of one or more tests to be applied to a fact or facts, and a THEN part, which specifies the actions to be carried out, should the test evaluate to true.

To give the rule a more meaningful name, simply click on the name and enter a new name (for example, One Day Vacation). By clicking on the <enter description> element, you can also add a description for the rule.

Adding a rule to our ruleset

Creating the IF clause

For our leave approval rule, we need to define two tests, one to check that the request is only for a day in duration, which we can do by checking that the start date equals the end date, and the second to check that the request is of type Vacation.

To define the first test, click on <insert test>. This will add the line <operand> = = <operand> under the IF statement where we can define the test condition.

Creating the IF clause

Click on the first <operand>. This will display a drop-down list listing the valid facts and their attributes that we can test. From here, we can select the value to be tested, for example, TLeaveRequest.startDate in our case.

Creating the IF clause

Next from the operator drop-down list, select the test to be applied to the first operand (== in our case). We can either choose to compare it to a specified value or a second Operand. For our purpose, we want to check that the request.startDate equals the request.endDate, so click on the operand and select this from the drop-down list.

To create our second test, we follow pretty much the same process. This time we want to test that the operand leaveRequest.leaveType is equal to the value Vacation, so select the right-hand operator and type this in directly:

Creating the IF clause

Note, the rule editor has automatically inserted an and clause between our two tests. If you click on this, you have the option of changing this to an or clause.

Creating the Then clause

Now that we have defined our test, we need to define the action to take if the test evaluates to true. Click on <insert action>. This will display a drop-down list where you need to specify the Action Type you wish to carry out.

Creating the Then clause

The rule editor allows us to choose from the following action types:

  • assert new: We use this to create and assert a new fact, for example, a new LeaveRequest. Once asserted, the new fact will be evaluated by the rules engine against the ruleset.
  • modify: We can use this to either assign a value to a variable or a fact attribute; in our case we want to assign a status of Approved to the requestStatus property.
  • retract: This enables you to retract any of the facts matched in the pattern (for example, TLeaveRequest) so that it will no longer be evaluated as part of the ruleset.
  • call: This allows you to call a function to perform one or more actions.

The actions assert new and retract are important when we are dealing with rulesets that deal with multiple interdependent facts, as this allows us to control which facts are being evaluated by the rule engine at any particular time. Here, we are only dealing with a single fact, so we don't examine these constructs in this chapter, leaving them to Chapter 18, Using Business Rules to Implement Services.

For our purposes, we want to update the status of our leave, so select modify. Our rule should now look as shown in the following screenshot:

Creating the Then clause

The next step is to specify the fact to be modified. Click on the <target> element and you will be presented with a list of facts that are within scope. In our case, this will only be the TLeaveRequest that has just been matched by the IF clause, so select this. Our rule will now appear, as shown in the following screenshot:

Creating the Then clause

We now need to specify the properties we wish to modify, click on <add property> to open the Properties dialog. This will display a list of all the facts properties, allowing us to modify them as appropriate.

Select the Value cell for requestStatus. From here, you can directly enter a value, select a value from the drop-down list, or launch the expression builder. For our purposes, just enter the string Approved, as shown in the following screenshot, and then click Close.

Creating the Then clause

We don't need to specify values for any of the other properties, as the rules engine will only update those properties where a new value has been specified.

This completes the definition of our first rule. The next step is to wire it into our BPEL process.

Creating the Then clause

Implementing our business rules

The rules editor allows you to view/edit the various components which make up your business rules. To select a particular component, such as Facts, Functions, Globals, and so on, just click on the corresponding tab down the left-hand side.

Implementing our business rules

You will see that, by default, JDeveloper has created a skeleton rules dictionary-based on the inputs we just specified.

Select the Facts tab (as shown in the preceding screenshot). You will see that it contains two XML facts (TLeaveRequest and com.packtpub.schemas.leaverequest.ObjectFactory), which are based on the inputs/outputs we defined earlier as well as a set of standard Java facts, which are automatically included within a rules dictionary.

Next, select the Decision Functions tab. You will see that it contains a single decision function LeaveApprovalDecisonService (that is, the name we specified on the Advanced tab when creating our business rule).

We will introduce some of the other tabs later in this chapter, but for the time being, we will start by defining our first rule. By default, the rules editor will have created a single ruleset with the name Ruleset_1. Click on the Ruleset_1 tab to open up the ruleset within the editor.

Expand the ruleset to show its details by clicking on the plus symbol (circled in the following screenshot). We can see that the ruleset has three properties: Name, Description, and Effective Date.

The Effective Date enables us to specify a period in time for which the ruleset will be applied, allowing you to define multiple versions of the same ruleset. For example, a current ruleset and a future version that we wish to come into effect at a defined time in the future.

Rename the ruleset to something more meaningful, for example, Employee Leave Approval Policy; add a description if you want and ensure that Effective Date is set to Always Valid.

Adding a rule to our ruleset

To add a rule, click the green plus symbol on the top-right-hand corner, and select Create Rule, as shown in the following screenshot (alternatively click on the Create Rule button, circled in the following screenshot).

Adding a rule to our ruleset

This will add a rule to our ruleset with the default name Rule_1, as shown in the following screenshot. Here, we can see that a rule consists of two parts, an IF part, which consists of one or more tests to be applied to a fact or facts, and a THEN part, which specifies the actions to be carried out, should the test evaluate to true.

To give the rule a more meaningful name, simply click on the name and enter a new name (for example, One Day Vacation). By clicking on the <enter description> element, you can also add a description for the rule.

Adding a rule to our ruleset

Creating the IF clause

For our leave approval rule, we need to define two tests, one to check that the request is only for a day in duration, which we can do by checking that the start date equals the end date, and the second to check that the request is of type Vacation.

To define the first test, click on <insert test>. This will add the line <operand> = = <operand> under the IF statement where we can define the test condition.

Creating the IF clause

Click on the first <operand>. This will display a drop-down list listing the valid facts and their attributes that we can test. From here, we can select the value to be tested, for example, TLeaveRequest.startDate in our case.

Creating the IF clause

Next from the operator drop-down list, select the test to be applied to the first operand (== in our case). We can either choose to compare it to a specified value or a second Operand. For our purpose, we want to check that the request.startDate equals the request.endDate, so click on the operand and select this from the drop-down list.

To create our second test, we follow pretty much the same process. This time we want to test that the operand leaveRequest.leaveType is equal to the value Vacation, so select the right-hand operator and type this in directly:

Creating the IF clause

Note, the rule editor has automatically inserted an and clause between our two tests. If you click on this, you have the option of changing this to an or clause.

Creating the Then clause

Now that we have defined our test, we need to define the action to take if the test evaluates to true. Click on <insert action>. This will display a drop-down list where you need to specify the Action Type you wish to carry out.

Creating the Then clause

The rule editor allows us to choose from the following action types:

  • assert new: We use this to create and assert a new fact, for example, a new LeaveRequest. Once asserted, the new fact will be evaluated by the rules engine against the ruleset.
  • modify: We can use this to either assign a value to a variable or a fact attribute; in our case we want to assign a status of Approved to the requestStatus property.
  • retract: This enables you to retract any of the facts matched in the pattern (for example, TLeaveRequest) so that it will no longer be evaluated as part of the ruleset.
  • call: This allows you to call a function to perform one or more actions.

The actions assert new and retract are important when we are dealing with rulesets that deal with multiple interdependent facts, as this allows us to control which facts are being evaluated by the rule engine at any particular time. Here, we are only dealing with a single fact, so we don't examine these constructs in this chapter, leaving them to Chapter 18, Using Business Rules to Implement Services.

For our purposes, we want to update the status of our leave, so select modify. Our rule should now look as shown in the following screenshot:

Creating the Then clause

The next step is to specify the fact to be modified. Click on the <target> element and you will be presented with a list of facts that are within scope. In our case, this will only be the TLeaveRequest that has just been matched by the IF clause, so select this. Our rule will now appear, as shown in the following screenshot:

Creating the Then clause

We now need to specify the properties we wish to modify, click on <add property> to open the Properties dialog. This will display a list of all the facts properties, allowing us to modify them as appropriate.

Select the Value cell for requestStatus. From here, you can directly enter a value, select a value from the drop-down list, or launch the expression builder. For our purposes, just enter the string Approved, as shown in the following screenshot, and then click Close.

Creating the Then clause

We don't need to specify values for any of the other properties, as the rules engine will only update those properties where a new value has been specified.

This completes the definition of our first rule. The next step is to wire it into our BPEL process.

Creating the Then clause

Adding a rule to our ruleset

To add a rule, click the green plus symbol on the top-right-hand corner, and select Create Rule, as shown in the following screenshot (alternatively click on the Create Rule button, circled in the following screenshot).

Adding a rule to our ruleset

This will add a rule to our ruleset with the default name Rule_1, as shown in the following screenshot. Here, we can see that a rule consists of two parts, an IF part, which consists of one or more tests to be applied to a fact or facts, and a THEN part, which specifies the actions to be carried out, should the test evaluate to true.

To give the rule a more meaningful name, simply click on the name and enter a new name (for example, One Day Vacation). By clicking on the <enter description> element, you can also add a description for the rule.

Adding a rule to our ruleset

Creating the IF clause

For our leave approval rule, we need to define two tests, one to check that the request is only for a day in duration, which we can do by checking that the start date equals the end date, and the second to check that the request is of type Vacation.

To define the first test, click on <insert test>. This will add the line <operand> = = <operand> under the IF statement where we can define the test condition.

Creating the IF clause

Click on the first <operand>. This will display a drop-down list listing the valid facts and their attributes that we can test. From here, we can select the value to be tested, for example, TLeaveRequest.startDate in our case.

Creating the IF clause

Next from the operator drop-down list, select the test to be applied to the first operand (== in our case). We can either choose to compare it to a specified value or a second Operand. For our purpose, we want to check that the request.startDate equals the request.endDate, so click on the operand and select this from the drop-down list.

To create our second test, we follow pretty much the same process. This time we want to test that the operand leaveRequest.leaveType is equal to the value Vacation, so select the right-hand operator and type this in directly:

Creating the IF clause

Note, the rule editor has automatically inserted an and clause between our two tests. If you click on this, you have the option of changing this to an or clause.

Creating the Then clause

Now that we have defined our test, we need to define the action to take if the test evaluates to true. Click on <insert action>. This will display a drop-down list where you need to specify the Action Type you wish to carry out.

Creating the Then clause

The rule editor allows us to choose from the following action types:

  • assert new: We use this to create and assert a new fact, for example, a new LeaveRequest. Once asserted, the new fact will be evaluated by the rules engine against the ruleset.
  • modify: We can use this to either assign a value to a variable or a fact attribute; in our case we want to assign a status of Approved to the requestStatus property.
  • retract: This enables you to retract any of the facts matched in the pattern (for example, TLeaveRequest) so that it will no longer be evaluated as part of the ruleset.
  • call: This allows you to call a function to perform one or more actions.

The actions assert new and retract are important when we are dealing with rulesets that deal with multiple interdependent facts, as this allows us to control which facts are being evaluated by the rule engine at any particular time. Here, we are only dealing with a single fact, so we don't examine these constructs in this chapter, leaving them to Chapter 18, Using Business Rules to Implement Services.

For our purposes, we want to update the status of our leave, so select modify. Our rule should now look as shown in the following screenshot:

Creating the Then clause

The next step is to specify the fact to be modified. Click on the <target> element and you will be presented with a list of facts that are within scope. In our case, this will only be the TLeaveRequest that has just been matched by the IF clause, so select this. Our rule will now appear, as shown in the following screenshot:

Creating the Then clause

We now need to specify the properties we wish to modify, click on <add property> to open the Properties dialog. This will display a list of all the facts properties, allowing us to modify them as appropriate.

Select the Value cell for requestStatus. From here, you can directly enter a value, select a value from the drop-down list, or launch the expression builder. For our purposes, just enter the string Approved, as shown in the following screenshot, and then click Close.

Creating the Then clause

We don't need to specify values for any of the other properties, as the rules engine will only update those properties where a new value has been specified.

This completes the definition of our first rule. The next step is to wire it into our BPEL process.

Creating the Then clause

Creating the IF clause

For our leave approval rule, we need to define two tests, one to check that the request is only for a day in duration, which we can do by checking that the start date equals the end date, and the second to check that the request is of type Vacation.

To define the first test, click on <insert test>. This will add the line <operand> = = <operand> under the IF statement where we can define the test condition.

Creating the IF clause

Click on the first <operand>. This will display a drop-down list listing the valid facts and their attributes that we can test. From here, we can select the value to be tested, for example, TLeaveRequest.startDate in our case.

Creating the IF clause

Next from the operator drop-down list, select the test to be applied to the first operand (== in our case). We can either choose to compare it to a specified value or a second Operand. For our purpose, we want to check that the request.startDate equals the request.endDate, so click on the operand and select this from the drop-down list.

To create our second test, we follow pretty much the same process. This time we want to test that the operand leaveRequest.leaveType is equal to the value Vacation, so select the right-hand operator and type this in directly:

Creating the IF clause

Note, the rule editor has automatically inserted an and clause between our two tests. If you click on this, you have the option of changing this to an or clause.

Creating the Then clause

Now that we have defined our test, we need to define the action to take if the test evaluates to true. Click on <insert action>. This will display a drop-down list where you need to specify the Action Type you wish to carry out.

Creating the Then clause

The rule editor allows us to choose from the following action types:

  • assert new: We use this to create and assert a new fact, for example, a new LeaveRequest. Once asserted, the new fact will be evaluated by the rules engine against the ruleset.
  • modify: We can use this to either assign a value to a variable or a fact attribute; in our case we want to assign a status of Approved to the requestStatus property.
  • retract: This enables you to retract any of the facts matched in the pattern (for example, TLeaveRequest) so that it will no longer be evaluated as part of the ruleset.
  • call: This allows you to call a function to perform one or more actions.

The actions assert new and retract are important when we are dealing with rulesets that deal with multiple interdependent facts, as this allows us to control which facts are being evaluated by the rule engine at any particular time. Here, we are only dealing with a single fact, so we don't examine these constructs in this chapter, leaving them to Chapter 18, Using Business Rules to Implement Services.

For our purposes, we want to update the status of our leave, so select modify. Our rule should now look as shown in the following screenshot:

Creating the Then clause

The next step is to specify the fact to be modified. Click on the <target> element and you will be presented with a list of facts that are within scope. In our case, this will only be the TLeaveRequest that has just been matched by the IF clause, so select this. Our rule will now appear, as shown in the following screenshot:

Creating the Then clause

We now need to specify the properties we wish to modify, click on <add property> to open the Properties dialog. This will display a list of all the facts properties, allowing us to modify them as appropriate.

Select the Value cell for requestStatus. From here, you can directly enter a value, select a value from the drop-down list, or launch the expression builder. For our purposes, just enter the string Approved, as shown in the following screenshot, and then click Close.

Creating the Then clause

We don't need to specify values for any of the other properties, as the rules engine will only update those properties where a new value has been specified.

This completes the definition of our first rule. The next step is to wire it into our BPEL process.


Calling a business rule from BPEL

Save the rule, and then switch back to our composite and double-click the LeaveRequest BPEL process to edit it. Drag a Business Rule from the BPEL Activities and Components palette into your BPEL process (before the Human Task activity). This will open the Business Rule dialog (as shown in the following screenshot):

Calling a business rule from BPEL

First, we need to specify a name for the Business Rule activity within our BPEL process, so give it a meaningful name such as LeaveApprovalRules.

Next, we need to specify the Business Rule Dictionary that we wish to use. If we click on the drop-down list, it will list all the dictionaries within our composite application; in our case, this is the LeaveApprovalRules dictionary that we have just defined.

Select this and the rule dialog will be updated (as shown in the following screenshot) to enable us to specify additional information about how we want to invoke the rule. First, we need to select the decision service that we want to invoke from BPEL. Our rule only contains a single decision service, LeaveApprovalDecisionService, so select it.

Once we've specified the service, we need to specify how we want to invoke the decision service. We specify this through the Operation attribute. Here we have two options:

  • Execute function and reset the session
  • Execute function

If we choose the option Execute function, and thus don't reset the session, then calling the decision service several times within the same instance of our BPEL process will reuse the same session, and each new invocation will also evaluate facts asserted in previous invocations. For our purposes, we just need to assert a single fact and run the ruleset, so accept the default value of Execute function and reset the session (we will look at the other modes of operation in more detail in Chapter 18, Using Business Rules to Implement Services).

Calling a business rule from BPEL

Assigning facts

The final step to invoke our business rules is to assign BPEL variables to the input and output facts. Click on the green plus symbol (as shown in the preceding screenshot), and this will launch the Decision Fact Map window, as shown in the following screenshot:

Assigning facts

At first glance, this looks like the standard Create Copy Operation window that we use when carrying out assigns within BPEL (which in reality is exactly what it is).

The key difference is that we are using this to assign values to the input facts to be submitted to the rules engine, so the Type on the To side of the copy operation is Business Rule Facts.

The reverse is true for an output fact, where we use this dialog to map the output from the decision service back into a corresponding BPEL variable.

For our purposes, we just want to map the initial LeaveRequest in the process inputVariable into the corresponding fact, as shown in the preceding screenshot. Then we will map the output fact, which will contain our updated LeaveRequest, back into our inputVariable.

Note

When JDeveloper opens the Decision Fact Map window, the Variables folder for the Business Rules Facts (circled in the preceding screenshot) is closed and it appears that there are no input facts. You must double-click on this to open it and expose the facts.

We have now wired the rule invocation into our BPEL process. Before finally running our process, we need to modify it to invoke the workflow only if the leave request hasn't been automatically approved.

To do this, just drag a switch onto your process, and then drag your workflow task into the first branch in the switch and define a test to check that the LeaveRequest hasn't been approved. You are now ready to deploy and run your modified process.


Using functions

Our current rule only approves vacations of one day in duration, requiring all other leave requests to be manually approved. Ideally, we would like to approve holidays of varying duration as long as sufficient notice has been given, for example:

  • Approve vacations of one day with a start date that's two weeks or more in the future
  • Approve vacations of 2-3 days starting more than 30 days in the future
  • Approve vacations of 5 days or less starting more than 60 days in the future
  • Approve vacations of 10 days or less starting more than 120 days in the future

To write these rules, we will need to calculate the duration of the leave period, as well as calculate how long it is before the start date. Out of the box, the rule engine provides the Duration extension methods, which allow us to calculate the number of days between two dates but don't allow us to exclude weekends.

So we will need to write our own logic to calculate these values. Rather than embedding this logic directly in each rule, best practice dictates that we place it into separate functions. This not only ensures that we have a single version of the logic to implement, but also minimizes the size of our rules, making them simpler and easier to maintain. For our purposes, we will create the following functions (a plain-Java sketch of their logic follows the list):

  • startsIn: Which returns the number of days before the specified start date
  • leaveDuration: Which returns the number of days from the start date to the end date, excluding weekends
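
As a rough guide to what these two functions will compute, here is a plain-Java sketch of the equivalent logic, assuming the JAXB mapping of xsd:date to XMLGregorianCalendar described below. The class name and implementation details are illustrative only; the real functions are built in the rule editor, not written as Java.

import java.util.Calendar;
import javax.xml.datatype.XMLGregorianCalendar;

public class LeaveDateUtils {

    // Approximate number of days from today until the given start date
    // (the rule function will use Duration.days between(today, startDate) + 1).
    public static int startsIn(XMLGregorianCalendar startDate) {
        Calendar today = Calendar.getInstance();
        Calendar start = startDate.toGregorianCalendar();
        long diffMillis = start.getTimeInMillis() - today.getTimeInMillis();
        return (int) (diffMillis / (24L * 60 * 60 * 1000)) + 1;
    }

    // Days from the start date to the end date inclusive, excluding weekends.
    public static int leaveDuration(XMLGregorianCalendar startDate,
                                    XMLGregorianCalendar endDate) {
        Calendar day = startDate.toGregorianCalendar();
        Calendar end = endDate.toGregorianCalendar();
        int days = 0;
        while (!day.after(end)) {
            int dayOfWeek = day.get(Calendar.DAY_OF_WEEK);
            if (dayOfWeek != Calendar.SATURDAY && dayOfWeek != Calendar.SUNDAY) {
                days++;
            }
            day.add(Calendar.DAY_OF_MONTH, 1);
        }
        return days;
    }
}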

Creating a function

To create our first function, within the rule editor, click on the Functions tab. This will list all the functions currently defined to our ruleset. To create a new function, click on the green plus icon, as shown in the following screenshot:

Creating a function

This will add a new function with a default name (for example, Function_1) to our list. Click on the function name to select it and update it to startsIn. From the drop-down list, select the Return Type of the function, which is int in our case.

Next, we need to specify the arguments we wish to pass to our function. Click on the green plus sign, as shown in the following screenshot, and this will add an argument to our list. Here we can specify the argument name (for example, startDate), and from the drop-down list, the argument Type, which should be XMLGregorianCalendar (when creating XML facts, the JAXB processor maps the type xsd:date to javax.xml.datatype.XMLGregorianCalendar).

Note

The list of valid types is made up of the basic types (for example, int, double, char, and so on), plus the XML facts (excluding object factories) and the Java Facts (excluding the Rules Extension Method) defined in our rules dictionary.

Creating a function

The final step is to implement the business logic of our function, which consists of one or more actions. We enter these actions in the Body section of the function. The first action we need to create is one that creates a local variable of type Calendar, which holds the current date.

To do this, click on <insert action> within the Body section of our function. The rule editor will display a drop-down list that lists all the available actions.

Creating a function

For our purpose, we want to create a new variable and assign a value to it, so select the assign new action, as shown in the preceding screenshot. This will insert a template for the assign new action into our function body (as shown in the following screenshot). We then configure the action by clicking on each part within the template and defining it as appropriate.

Creating a function

The first part we need to define is the type of variable we wish to create. Click on the <type> element within our <assign> statement, and the rule editor displays a drop-down list displaying all the available types. For our purposes, select Calendar.

Next, click on var. This will prompt us to enter the name of the variable that we want to create. Specify today, and hit enter.

Creating a function

Finally, we need to specify the value we want to initialize our variable with. Click on the <expression> element. The rule editor will display a drop-down box listing all the valid values we can assign to our variable, as shown in the following screenshot:

Creating a function

Select Calendar.getInstance(), which will initialize our variable to hold the current date.

For our second action, we want to calculate the number of days before the specified start date and place the result into the variable duration. To calculate this, we will make use of the Duration extension method provided with the rules engine.

We will do this by defining another assign new action in a similar way to the previous action. The key difference is how we specify the <expression>. This time, instead of selecting a value from the drop-down list, click on the Expression Builder icon (circled in the preceding screenshot) to launch the Expression Builder for the rules editor.

Creating a function

The Expression Builder provides a graphical tool for writing rule expressions and is accessed from various parts of the rule editor. It consists of the following areas:

  • Expression: The top textbox contains the rule expression that you are working on. You can either type data directly in here or use the Expression Builder to insert code fragments to build up the expression required.
  • Variables, Functions, Operators, Constants: This part of the Expression Builder lets you browse the various components that you can insert into your expression. Once you've located the component that you wish to use, click the Insert Into Expression button, and this will insert the appropriate code fragment into the expression.

    Note

    The code fragment is inserted at the point within the expression that the cursor is currently positioned.

  • Content Preview: This box displays a preview of the content that would be inserted into the expression if you clicked the Insert Into Expression button.

So let's use this to build our rules expression. The expression we want to build is a relatively simple one, namely:

Duration.days between(today,startDate) + 1

To build our expression, carry out the following steps. First, within the Functions tab, locate the function Duration.days between and insert this into the expression (as shown in the previous screenshot).

Next, within the Variables tab, locate the variable today. Then within the expression, highlight the first argument of the function (as shown in the following screenshot), and click Insert Into Expression.

Creating a function

This will update the value of the first argument to contain today; repeat this to update the second argument to contain startDate. Next, manually append + 1 to the end of the expression to complete it, and click OK.

Finally, add a third action to return the duration. The completed body of our function looks as shown in the following screenshot:

Creating a function

To implement our leaveDuration function, we follow the same approach (for details of this, see the code samples included with the book).

Testing a function

JDeveloper provides a test option that allows us to run a function in JDeveloper without the need to deploy it first. However, it will only allow us to run functions that take no input parameters and return a type of boolean.

In order to test our startsIn function, we need to write a wrapper function (for example, testStartsIn) which creates the required input parameters for our function, invokes it, and then prints out the result. So the body of our test function will look as shown in the following screenshot:

Testing a function
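
For readers without the screenshot to hand, the wrapper amounts to something like the following plain-Java sketch: a function that takes no arguments, returns a boolean, builds a sample start date, calls startsIn, and prints the result. The class name and date value are assumptions, and it reuses the LeaveDateUtils sketch shown earlier.

import javax.xml.datatype.DatatypeFactory;
import javax.xml.datatype.XMLGregorianCalendar;

public class StartsInTest {

    // Mirrors the shape JDeveloper can test: no parameters, boolean return.
    public static boolean testStartsIn() throws Exception {
        XMLGregorianCalendar startDate = DatatypeFactory.newInstance()
                .newXMLGregorianCalendar("2012-06-01");   // any sample start date
        System.out.println("startsIn = " + LeaveDateUtils.startsIn(startDate));
        return true;
    }

    public static void main(String[] args) throws Exception {
        testStartsIn();
    }
}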

To run this, within the Functions tab, select the testStartsIn function, and click the Test button, as shown in the following screenshot:

Testing a function

Note

If there are any validation errors within our rules dictionary, then the Test button will be disabled.

This will execute the function and open a window displaying the result of the function and any output as shown in the following screenshot:

Testing a function

Testing decision service functions

We can also use this approach to test our decision service. The body for this test function appears as shown in the following screenshot:

Testing decision service functions

A couple of interesting points to note about this: the statement call RL.watch.all() will cause the function to output details about how the facts are being processed and which rules are being activated. This is something we cover in more detail in Chapter 18, Using Business Rules to Implement Services.

The other point to note is that the decision service return type is a result List, so we need to extract our fact from this list and cast it to the appropriate fact type in order to examine its content. We do this with the statement:

assign leaveRequest = (TLeaveRequest) resultList.get(0)
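
In plain Java, that cast is no different from pulling a typed element out of any java.util.List. The following hypothetical sketch illustrates it, assuming the JAXB-generated TLeaveRequest class is on the classpath; the list here is just a stand-in for what the decision service returns.

import java.util.ArrayList;
import java.util.List;

public class ResultListExample {
    public static void main(String[] args) {
        // Stand-in for the list returned by the decision service.
        List<Object> resultList = new ArrayList<Object>();
        resultList.add(new TLeaveRequest());

        // Extract the fact and cast it to its generated type to inspect it.
        TLeaveRequest leaveRequest = (TLeaveRequest) resultList.get(0);
        System.out.println(leaveRequest.getRequestStatus());
    }
}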

Invoking a function from within a rule

The final step is to invoke the functions as required from our ruleset. Before writing the additional rules for vacations of up to 3, 5, and 10 days respectively, we will update our existing rule to use these new functions.

Go back to the One Day Vacation rule, and select the first test (so it has an orange box around it). Right-click and select Delete Test from the drop-down list, as shown in the following screenshot:

Invoking a function from within a rule

Next, click on <insert test> to add a new test to our IF clause. Click on the left operand. This time, instead of selecting an item from the drop-down list, click on the calculator icon to launch the Expression Builder and use it to build the expression:

startsIn(TLeaveRequest.startDate)

Set the value of the operator to >=. Finally, enter the value of 14 for the second operand. Follow the same approach to add another test to check that the leave duration is only one day. Our updated rule should now look as shown in the following screenshot:

Invoking a function from within a rule

Once we have completed our test pattern, we can click validate just to check that its syntax is correct. Having completed this test, we can define similar approval rules for vacations of 3, 5, and 10 days respectively.

When completed, save your dictionary and rerun the leave approval process; you should now see that the vacations that match our leave approval rules are automatically approved.


Using decision tables

Our updated ruleset consists of four rules that are very repetitive in nature. It would make more sense to specify the rule just once and then parameterize it in a tabular fashion. This is effectively what decision tables allow you to do.

Note

Before creating your decision table, you will need to delete the rules we have just defined, otherwise we will end up with two versions of the same rules within our ruleset.

Defining a bucket set

When creating a decision table, you are often required to specify a list of values or a range of values that apply to a particular rule. For example, in the case of our vacation approval rule, we will need to specify the following ranges of leave duration values that we are interested in:

  • 1 day
  • 2-3 days
  • 4-5 days
  • 6-10 days

We define these in a bucketset. To do this, select the Bucketsets tab in the rule editor, then click on the green plus symbol and select List of Ranges from the drop-down list, as shown in the following screenshot:

Defining a bucket set

This will create a new bucketset with a default name (for example, Bucketset_1). Click on the name and change it to something more meaningful, such as LeaveDuration. By default, the bucketset will have a Datatype of int, which is fine for our purposes.

Click on the pencil icon. This will launch the Edit Bucketset - LeaveDuration window, as shown in the following screenshot:

Defining a bucket set

A bucketset, as its name implies, consists of one or more buckets, each corresponding to a range of values. For each bucket, you specify its Endpoint and whether the endpoint is included within the bucket. The range of values covered by a bucket is from the endpoint of the bucket to the endpoint of the next bucket.

You can also choose whether to include the specified endpoint in its corresponding bucket. If you don't, then the endpoint will be included in the preceding bucket.

For example, in the preceding screenshot, the second bucket (with an endpoint of 5) covers the integer values from 6 (as the endpoint 5 isn't included in that bucket) to 10 (the endpoint of the next bucket).

It is good practice to specify a meaningful alias for each bucket, as when you reference a bucket in a decision table, you do so using its alias. If you don't specify an alias, then it will default to the description in the Range.

In preparation for defining our decision table, we have defined two bucketsets: LeaveDuration, as shown in the preceding screenshot, and StartsIn.
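
To make the bucket ranges concrete, the following hypothetical Java method shows the partitioning that the LeaveDuration bucketset is intended to express, using the aliases listed earlier. It is purely illustrative: in the rules dictionary this mapping is handled by the bucketset itself (which covers the full range of int values), not by code.

// Illustrative mapping of a leave duration (in days) onto the bucket aliases.
public static String leaveDurationBucket(int days) {
    if (days == 1) return "1 day";
    if (days >= 2 && days <= 3) return "2-3 days";
    if (days >= 4 && days <= 5) return "4-5 days";
    if (days >= 6 && days <= 10) return "6-10 days";
    return "outside the ranges of interest";
}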

Creating a decision table

To create a decision table, select the Employee Leave Approval ruleset tab. Click on the green plus icon and select Create Decision Table, as shown in the following screenshot:

Creating a decision table

This will add an empty decision table to our ruleset, as shown in the following screenshot:

Creating a decision table

The decision table consists of three areas: the first is for defining our tests (or conditions), the second is for conflict resolution (for resolving overlapping rules within our decision table), and the final area is for defining our actions.

Click on <insert condition>. This will add an empty condition with the name C1 to our ruleset. At the same time, the rule editor will also add an additional column to our decision table. This represents our first rule and is given the name R1. To specify the condition that we want to test, double-click on C1. This will bring up a drop-down list (similar to the one used to define an operand within the test part of a rule), as shown in the following screenshot:

Creating a decision table

As with our original rule, the first condition we want to test is the type of leave request, so select TLeaveRequest.leaveType from the drop-down list.

For our first rule, we want to check that the leave request is of type Vacation, so click on the appropriate cell (the intersection of C1 and R1). The rule editor will present us with a drop-down listing our options. In this case, directly enter Vacation, as shown in the following screenshot:

Creating a decision table

The next step is to add a second condition to test the leave duration. To do this, click on the green plus icon and select Conditions. This will add another condition row to our decision table. Click on <edit condition> and use the expression builder to define the following:

leaveDuration(TLeaveRequest.startDate, TLeaveRequest.endDate)

For each rule, we need to test the result of this function against the appropriate value in our LeaveDuration bucketset. Before we can do this, we must first associate the condition with that bucketset. To do this, ensure that the condition cell is selected and then click on the drop-down list above it and select LeaveDuration, as shown in the following screenshot:

Creating a decision table

The next step is to check that the leave duration is one day, so click on the appropriate cell (the intersection of C2 and R1). The rule editor will present us with a drop-down listing our options, which will be the list of buckets in the LeaveDuration bucketset. From here, select the option 1 day.

Creating a decision table

Add three more rules to our decision table (to add a rule, click on the green plus icon and select Rule). For R2, specify a leave duration of 2..3 days, for R3 4..5 days, and R4 6..10 days.

For each of these rules, we want to check that the leave type is Vacation. Rather than specifying this individually for each rule (which we could do), we can merge these into a single cell and specify the test just once. To do this, select each cell (hold down the Ctrl key while you do this) and then right-click. From the drop-down list, select Merge Selected Cells.

Creating a decision table

Next, to check whether sufficient notice has been given to automatically approve the vacation request, we need to add the final condition as follows:

startsIn(TLeaveRequest.startDate)

Add this in the normal way and associate the condition with the StartsIn bucketset.

For our first rule, we want to approve the leave request if it starts in 14 or more days' time, so select all the appropriate buckets from our bucketset (as shown in the following screenshot). Then complete the tests for rules R2, R3, and R4.

Creating a decision table

The final step is to specify the action we want to take for each of our rules. Click on <insert action>. This will display a drop-down list where you need to specify the Action Type you wish to carry out. Select Modify. This will insert a modify action into our decision table; double-click on this to open the Action Editor (as shown in the following screenshot):

Creating a decision table

The Form option allows us to select from the drop-down list which action we want to perform. For the Modify action, we first need to specify the fact we wish to update, so select TLeaveRequest in the Target section.

The Arguments section will then be populated with a list of all the properties of the selected fact. Select requestStatus and enter a value of Approved. Also select the cell to indicate that the value is to be parameterized; if you don't, then every rule within our decision table is forced to use the same value.

Finally, ensure that the checkbox Always Selected is unchecked (we will see why in a moment) and click OK. This will return us to our decision table, as shown in the following screenshot:

Creating a decision table

At this point, the action will contain an identical configuration for each rule, which we can then modify as appropriate.

Each rule has an associated checkbox for the action, which, by default, is unchecked. This specifies whether that action should be taken for that rule. In our case, we want each rule to update the request status, so ensure that the checkbox is selected for every rule (as shown in the preceding screenshot).

Note

If you had checked the Always Selected checkbox in the Action Editor, then the action would be selected for each rule and would also be read-only to prevent you from modifying it.

The action will also contain a row for every property that we are modifying, which, in our example, is just one (requestStatus). As we selected this property to be parameterized, we could override the specified value for each individual rule.

Conflict resolution

This almost completes our decision table. However, we will add one more rule to handle any other scenario that isn't covered by our current ruleset. Add one more rule, but don't specify any values for any of the conditions, so the rule will apply to everything. In the actions section, specify a value of Manual to indicate that the request requires manual approval.

Upon doing this, the rule editor will add a row to the conflicts section of the decision table, as shown in the following screenshot:

Conflict resolution

This is indicating that R5 is in conflict with R1, R2, R3, and R4; that is, more than one rule can apply to the same scenario. Double-click on the conflict warning for R1, and this will launch the Conflict Resolution window, as shown in the following screenshot:

Conflict resolution

Here, we can specify how we wish to handle the conflict. Click on the drop-down list and select Override to specify that R1 takes precedence over R5. Do the same for rules R2, R3, and R4. The decision table will be updated to show no conflicts and that rules R1 to R4 override R5.

This completes our decision table, so save the rules dictionary and redeploy the leave approval composite to test it.
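
As a closing illustration, here is a hedged plain-Java rendering of the behaviour the finished decision table (rules R1 to R4 plus the catch-all R5) is intended to produce. It reuses the startsIn and leaveDuration sketches from earlier; the notice thresholds mirror the policy listed at the start of the Using functions section, and the method and class names are assumptions, not generated code.

// Illustrative only: the real logic lives in the decision table, not in Java.
public static String decideLeaveRequest(TLeaveRequest request) {
    if ("Vacation".equals(request.getLeaveType())) {
        int duration = LeaveDateUtils.leaveDuration(request.getStartDate(),
                                                    request.getEndDate());
        int notice = LeaveDateUtils.startsIn(request.getStartDate());
        if (duration == 1 && notice >= 14) return "Approved";                     // R1
        if (duration >= 2 && duration <= 3 && notice > 30) return "Approved";     // R2
        if (duration >= 4 && duration <= 5 && notice > 60) return "Approved";     // R3
        if (duration >= 6 && duration <= 10 && notice > 120) return "Approved";   // R4
    }
    return "Manual";                                                              // R5
}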

Defining a bucket set

When creating a decision table, you are often required to specify a list of values or a range of values that apply to a particular rule. For example, in the case of our vacation approval rule, we will need to specify the following ranges of leave duration values that we are interested in:

  • 1 day
  • 2-3 days
  • 4-5 days
  • 6-10 days

We define these in a bucketset. To do this, select the Bucketsets tab in the rule editor, then click on the green plus symbol and select List of Ranges from the drop-down list, as shown in the following screenshot:

Defining a bucket set

This will create a new bucketset called Buckset_1. Click on the name and change it to something more meaningful such as LeaveDuration. By default, the bucketset will have a Datatype of int, which is fine for our purposes.

Click on the pencil icon. This will launch the Edit Bucketset - LeaveDuration window, as shown in the following screenshot:

Defining a bucket set

A bucketset, as its name implies, consists of one or more buckets, each corresponding to a range of values. For each bucket, you specify its Endpoint and whether the endpoint is included within the bucket. The range of values covered by a bucket is from the endpoint of the bucket to the endpoint of the next bucket.

You can also choose whether to include the specified endpoint in its corresponding bucket. If you don't, then the endpoint will be included in the preceding bucket.

For example, in the preceding screenshot, the second bucket (with the endpoint of 5) covers the integer values from 6 (as the endpoint 5 isn't included in the bucket) to 10 (the end point of the next bucket).

It is good practice to specify a meaningful alias for each bucket, as when you reference a bucket in a decision table, you do so using its alias. If you don't specify an alias, then it will default to the description in the Range.

In preparation for defining our decision table, we have defined two bucketsets: LeaveDuration, as shown in the preceding screenshot, and StartsIn.

Creating a decision table

To create a decision table, select the Employee Leave Approval ruleset tab. Click on the green plus icon and select Create Decision Table, as shown in the following screenshot:

Creating a decision table

This will add an empty decision table to our ruleset, as shown in the following screenshot:

Creating a decision table

The decision table consists of three areas: the first is for defining our tests (or conditions), the second is for conflict resolution (for resolving overlapping rules within our decision table), and the final area is for defining our actions.

Click on <insert condition>. This will add an empty condition with the name C1 to our ruleset. At the same time, the rule editor will also add an additional column to our decision table. This represents our first rule and is given the name R1. To specify the condition that we want to test, double-click on C1. This will bring up a drop-down list (similar to the one used to define an operand within the test part of a rule), as shown in the following screenshot:

Creating a decision table

As with our original rule, the first condition we want to test is the type of leave request, so select TLeaveRequest.leaveType from the drop-down list.

For our first rule, we want to check that the leave request is of type Vacation, so click on the appropriate cell (the intersection of C1 and R1). The rule editor will present us with a drop-down listing our options. In this case, directly enter Vacation, as shown in the following screenshot:

Creating a decision table

The next step is to add a second condition to test the leave duration. To do this, click on the green plus icon and select Conditions. This will add another condition row to our decision table. Click on <edit condition> and use the expression builder to define the following:

leaveDuration(TLeaveRequest.startDate, TLeaveRequest.endDate)

For each rule, we need to test the result of this function against the appropriate value in our LeaveDuration bucketset. Before we can do this, we must first associate the condition with that bucketset. To do this, ensure that the condition cell is selected and then click on the drop-down list above it and select LeaveDuration, as shown in the following screenshot:

Creating a decision table

The next step is to check that the leave duration is one day, so click on the appropriate cell (the intersection of C2 and R1). The rule editor will present us with a drop-down listing our options, which will be the list of buckets in the LeaveDuration bucketset. From here, select the option 1 day.

Creating a decision table

Add three more rules to our decision table (to add a rule, click on the green plus icon and select Rule). For R2, specify a leave duration of 2..3 days, for R3 4..5 days, and R4 6..10 days.

For each of these rules, we want to check that the leave type is Vacation. Rather than specifying this individually for each rule (which we could do), we can merge these into a single cell and specify the test just once. To do this, select each cell (hold down the Ctrl key while you do this) and then right-click. From the drop-down list, select Merge Selected Cells.

Creating a decision table

Next, we need to add the final condition as follows:

startsIn(TLeaveRequest.startDate)

To check whether sufficient notice has been given to automatically approve the vacation request, add this in the normal way and associate the condition with the StartsIn bucketset.

For our first rule, we want to approve the leave request if it starts in 14 or more days time, so select ALL the appropriate buckets from our bucketset (as shown in the following screenshot). Complete the test for rules R2, R3, and R4.

Creating a decision table

The final step is to specify the action we want to take for each of our rules. Click on <insert action>. This will display a drop-down list where you need to specify the Action Type you wish to carry out. Select Modify. This will insert a modify action into our decision table; double-click on this to open the Action Editor (as shown in the following screenshot):

Creating a decision table

The Form option allows us to select from the drop-down list which action we want to perform. For the Modify action, we first need to specify the fact we wish to update, so select TLeaveRequest in the Target section.

The Arguments section will then be populated to list all the properties for the selected fact. Select requestStatus and enter a value of Approved. Also select the cell to be parameterized. If you don't specify this, then it forces every rule within our decision table to use the same value.

Finally, ensure that the checkbox Always Selected is unchecked (we will see why in a moment) and click OK. This will return us to our decision table, as shown in the following screenshot:

Creating a decision table

At this point, the action will contain an identical configuration for each rule, which we can then modify as appropriate.

Each rule has an associated checkbox for the action, which, by default, is unchecked. This specifies whether that action should be taken for that rule. In our case, we want each rule to update the request status, so ensure that the checkbox is selected for every rule (as shown in the preceding screenshot).

Note

If you had checked the Always Selected checkbox in the Action Editor, then the action would be selected for each rule and would also be read-only to prevent you from modifying it.

The action will also contain a row for every property that we are modifying, which, in our example, is just one (requestStatus). As we selected this property to be parameterized, we could override the specified value for each individual rule.

Conflict resolution

This almost completes our decision table. However, we will add one more rule to handle any other scenario that isn't covered by our current ruleset. Add one more rule, but don't specify any values for any of the conditions, so the rule will apply to everything. In the actions section, specify a value of Manual to indicate that the request requires manual approval.

Upon doing this, the rule editor will add a row to the conflicts section of the decision table, as shown in the following screenshot:

Conflict resolution

This is indicating that R5 is in conflict with R1, R2, R3, and R4, that is, that they both apply to the same scenario. Double-click on the conflict warning for R1, and this will launch the Conflict Resolution window, as shown in the following screenshot:

Conflict resolution

Here, we can specify how we wish to handle the conflict. Click on the drop-down list and select Override to specify that R1 takes precedence over R5. Do the same for rules R2, R3, and R4. The decision table will be updated to show no conflicts and that rules R1 to R4 override R5.

This completes our decision table, so save the rules dictionary and redeploy the leave approval composite to test it.

Creating a decision table

To create a decision table, select the Employee Leave Approval ruleset tab. Click on the green plus icon and select Create Decision Table, as shown in the following screenshot:

Creating a decision table

This will add an empty decision table to our ruleset, as shown in the following screenshot:

Creating a decision table

The decision table consists of three areas: the first is for defining our tests (or conditions), the second is for conflict resolution (for resolving overlapping rules within our decision table), and the final area is for defining our actions.

Click on <insert condition>. This will add an empty condition with the name C1 to our ruleset. At the same time, the rule editor will also add an additional column to our decision table. This represents our first rule and is given the name R1. To specify the condition that we want to test, double-click on C1. This will bring up a drop-down list (similar to the one used to define an operand within the test part of a rule), as shown in the following screenshot:

Creating a decision table

As with our original rule, the first condition we want to test is the type of leave request, so select TLeaveRequest.leaveType from the drop-down list.

For our first rule, we want to check that the leave request is of type Vacation, so click on the appropriate cell (the intersection of C1 and R1). The rule editor will present us with a drop-down listing our options. In this case, directly enter Vacation, as shown in the following screenshot:

Creating a decision table

The next step is to add a second condition to test the leave duration. To do this, click on the green plus icon and select Conditions. This will add another condition row to our decision table. Click on <edit condition> and use the expression builder to define the following:

leaveDuration(TLeaveRequest.startDate, TLeaveRequest.endDate)

For each rule, we need to test the result of this function against the appropriate value in our LeaveDuration bucketset. Before we can do this, we must first associate the condition with that bucketset. To do this, ensure that the condition cell is selected and then click on the drop-down list above it and select LeaveDuration, as shown in the following screenshot:

Creating a decision table

The next step is to check that the leave duration is one day, so click on the appropriate cell (the intersection of C2 and R1). The rule editor will present us with a drop-down listing our options, which will be the list of buckets in the LeaveDuration bucketset. From here, select the option 1 day.

Creating a decision table

Add three more rules to our decision table (to add a rule, click on the green plus icon and select Rule). For R2, specify a leave duration of 2..3 days, for R3 4..5 days, and R4 6..10 days.

For each of these rules, we want to check that the leave type is Vacation. Rather than specifying this individually for each rule (which we could do), we can merge these into a single cell and specify the test just once. To do this, select each cell (hold down the Ctrl key while you do this) and then right-click. From the drop-down list, select Merge Selected Cells.

Creating a decision table

Next, we need to add the final condition as follows:

startsIn(TLeaveRequest.startDate)

To check whether sufficient notice has been given to automatically approve the vacation request, add this in the normal way and associate the condition with the StartsIn bucketset.

For our first rule, we want to approve the leave request if it starts in 14 or more days time, so select ALL the appropriate buckets from our bucketset (as shown in the following screenshot). Complete the test for rules R2, R3, and R4.

Creating a decision table

The final step is to specify the action we want to take for each of our rules. Click on <insert action>. This will display a drop-down list where you need to specify the Action Type you wish to carry out. Select Modify. This will insert a modify action into our decision table; double-click on this to open the Action Editor (as shown in the following screenshot):

Creating a decision table

The Form option allows us to select from the drop-down list which action we want to perform. For the Modify action, we first need to specify the fact we wish to update, so select TLeaveRequest in the Target section.

The Arguments section will then be populated to list all the properties for the selected fact. Select requestStatus and enter a value of Approved. Also select the checkbox to parameterize this value; if you don't, every rule within our decision table is forced to use the same value.

Finally, ensure that the checkbox Always Selected is unchecked (we will see why in a moment) and click OK. This will return us to our decision table, as shown in the following screenshot:

Creating a decision table

At this point, the action will contain an identical configuration for each rule, which we can then modify as appropriate.

Each rule has an associated checkbox for the action, which, by default, is unchecked. This specifies whether that action should be taken for that rule. In our case, we want each rule to update the request status, so ensure that the checkbox is selected for every rule (as shown in the preceding screenshot).

Note

If you had checked the Always Selected checkbox in the Action Editor, then the action would be selected for each rule and would also be read-only to prevent you from modifying it.

The action will also contain a row for every property that we are modifying, which, in our example, is just one (requestStatus). As we selected this property to be parameterized, we could override the specified value for each individual rule.

Conflict resolution

This almost completes our decision table. However, we will add one more rule to handle any other scenario that isn't covered by our current ruleset. Add one more rule, but don't specify any values for any of the conditions, so the rule will apply to everything. In the actions section, specify a value of Manual to indicate that the request requires manual approval.

Upon doing this, the rule editor will add a row to the conflicts section of the decision table, as shown in the following screenshot:

Conflict resolution

This indicates that R5 is in conflict with R1, R2, R3, and R4; that is, they can apply to the same scenario. Double-click on the conflict warning for R1 to launch the Conflict Resolution window, as shown in the following screenshot:

Conflict resolution

Here, we can specify how we wish to handle the conflict. Click on the drop-down list and select Override to specify that R1 takes precedence over R5. Do the same for rules R2, R3, and R4. The decision table will be updated to show no conflicts and that rules R1 to R4 override R5.
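
Putting it all together, the completed table encodes the logic sketched below. This is purely illustrative Java, not how the Oracle Business Rules engine evaluates the decision table; the helper method is hypothetical, and only R1's 14-day notice threshold is stated above, so the thresholds for R2 to R4 are placeholders.

public class LeaveApprovalSketch {

    // Rules R1-R4 approve a Vacation request of 1-10 days given sufficient notice;
    // rule R5 is the catch-all that routes everything else to manual approval.
    public String decide(String leaveType, int durationDays, int startsInDays) {
        if ("Vacation".equals(leaveType)
                && durationDays >= 1 && durationDays <= 10
                && startsInDays >= requiredNoticeFor(durationDays)) {
            return "Approved";
        }
        return "Manual";
    }

    private int requiredNoticeFor(int durationDays) {
        if (durationDays == 1) {
            return 14;   // R1: a 1-day request needs 14 or more days' notice
        }
        return 14;       // R2-R4: placeholder; use the StartsIn buckets chosen above
    }
}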

This completes our decision table, so save the rules dictionary and redeploy the leave approval composite to test it.

Conflict resolution

This almost completes our decision table. However, we will add one more rule to handle any other scenario that isn't covered by our current ruleset. Add one more rule, but don't specify any values for any of the conditions, so the rule will apply to everything. In the actions section, specify a value of Manual to indicate that the request requires manual approval.

Upon doing this, the rule editor will add a row to the conflicts section of the decision table, as shown in the following screenshot:

Conflict resolution

This is indicating that R5 is in conflict with R1, R2, R3, and R4, that is, that they both apply to the same scenario. Double-click on the conflict warning for R1, and this will launch the Conflict Resolution window, as shown in the following screenshot:

Conflict resolution

Here, we can specify how we wish to handle the conflict. Click on the drop-down list and select Override to specify that R1 takes precedence over R5. Do the same for rules R2, R3, and R4. The decision table will be updated to show no conflicts and that rules R1 to R4 override R5.

This completes our decision table, so save the rules dictionary and redeploy the leave approval composite to test it.

Summary

Business rules are a key component of any application. Traditionally, these rules are buried deep within the code of an application, making them very difficult to change.

Yet, in a typical application, it is the business rules that change most frequently. By separating them out into a specialized service, we can change these rules without having to modify the overall application.

In this chapter, we have looked at how we can use the Oracle Business Rules engine to implement such rules, and how we can invoke these from within BPEL as a decision service.

It's worth noting that you are not restricted to calling these rules from just BPEL, as the rules engine comes with a Java API that allows it to be easily invoked from any Java application, or alternatively, you can expose the rules as web services, which can then be invoked from any web service client.

Finally, while in this chapter we have only looked at very simple rules, the Oracle Business Rules engine implements the industry-standard Rete algorithm, making it ideal for evaluating a large number of interdependent rules and facts. We examine some of these capabilities in more detail in Chapter 18, Using Business Rules to Implement Services.

Chapter 8. Using Business Events

In the previous chapters, we focused on routing messages to the correct destination and managing process flow. All of this requires knowledge of the dependencies between services, and for most business processes and service integrations this is necessary to ensure that everything works reliably. However, even with transformation and service abstraction, there are still dependencies between services. In this chapter, we will look at the tools available in the SOA Suite for completely decoupling providers of messages from consumers of messages. This is useful when we wish to add new capabilities that do not require responses to be returned, for example, fraud detection or usage auditing services. In these cases, we just want message producers to publish events, and allow new services to receive those events by subscribing to them without impacting the publisher. This is the function of the Event Delivery Network (EDN) in the SOA Suite and is the focus of this chapter.

How EDN differs from traditional messaging

Message Oriented Middleware (MOM) uses queuing technologies to isolate producers from consumers. The classic MOM product was IBM MQSeries, but other products in this space include TIBCO Rendezvous and Oracle AQ. Messages may be delivered point-to-point (a single service consumes the message) or one-to-many (multiple services may consume the same message). The Java Message Service (JMS) provides an abstraction over messaging systems and supports both one-to-one interactions through queues and one-to-many interactions through topics. When using JMS to subscribe to an event, a developer must know the format of data associated with the event and the message channel (topic or queue) on which to listen to receive the event. This message channel must be configured for each event, and filters might be added to restrict the messages delivered. The Event Delivery Network (EDN) takes the view that the publishers and subscribers of a message, known as an event, only need to know the subject matter, the event name, and the event data format. All the delivery details can be hidden under the covers. EDN uses JMS to deliver events from publishers to subscribers, but the configuration of JMS queues and topics and any associated filters is hidden from users of the EDN service.

The following comparison highlights the differences between traditional MOM and EDN. As can be seen, the focus of EDN is to make it very easy for event producers to publish an event that can then be received by an arbitrary number of event subscribers. EDN developers only need to be aware of the events themselves, as all the underlying delivery mechanisms are taken care of within the EDN.

Request/Reply

  • Messaging Support: Separate JMS queues are used for request and response messages. JMS message headers are used to correlate requests with responses.
  • Configuration: Request and response queues must be configured with appropriate connection factories and message stores.
  • EDN Notes: EDN does not support request/reply. It is not possible to target the receiver of an event. Event subscribers are not visible to event producers and so cannot be directly targeted. Similarly, event producers are not visible to event subscribers, so it is not possible to send direct replies just to the originator of the event.

One-way guaranteed delivery

  • Messaging Support: A single JMS queue with a single subscriber.
  • Configuration: The queue must be configured with an appropriate connection factory and message store.
  • EDN Notes: EDN does not support guaranteed one-way delivery of events. An event producer has no way of knowing how many subscribers will receive the message, or whether any subscriber will receive it at all.

One-to-many message delivery

  • Messaging Support: A single JMS topic with zero or more subscribers.
  • Configuration: The topic must be configured with an appropriate connection factory and message store.
  • EDN Notes: EDN supports exactly this interaction pattern without the need to configure any JMS artifacts. EDN uses JMS, but this is hidden from the developer.
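
To make the contrast concrete, the following is a minimal sketch of what an event producer has to do with plain JMS: it must know (and someone must have configured) the connection factory and topic on which bid messages are published. Both JNDI names used here are hypothetical examples.

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.jms.Topic;
import javax.naming.InitialContext;

public class JmsNewBidPublisher {
    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext();
        // With plain JMS, the publisher is coupled to a specific, pre-configured channel;
        // the connection factory and topic names below are hypothetical.
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/AuctionConnectionFactory");
        Topic topic = (Topic) ctx.lookup("jms/NewBidTopic");
        Connection connection = cf.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(topic);
            TextMessage message = session.createTextMessage("<NewBid>...</NewBid>");
            producer.send(message);
        } finally {
            connection.close();
        }
    }
}

With EDN, as we will see in the rest of this chapter, the producer only names the event and supplies its XML payload; the queues, topics, and filters behind the scenes are managed by the SOA infrastructure.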

A sample use case

Consider an auction process. The basic auction process accepts new items for auction from a seller service, accepts bids from a bidder service, and identifies the winning bid in an auction service. All these operations require co-ordination between the services involved.

There may be concerns about the propriety of some bids, but validating each bid to ensure that it is legitimate may be viewed as too lengthy a step to put in the main flow. Instead, it may be preferable to validate bids in a background process so that we do not slow down the normal bid-taking process.

This is an extension to the business functionality that does not require a conversation between the bid process and the validation service. In fact, a conversation may slow down the auction process and increase the time taken to accept and confirm bids and winners. This is an excellent use case for the EDN because of the following features:

  • No conversation is required between the event consumer (bid legitimacy process) and the event producer (auction process).
  • The event consumer can be added to the system without any changes to the event producer. As long as the auction process is publishing bid events, adding the bid validator will not impact the auction process.
  • Additional producers may be added to the system without impacting existing producers and/or consumers. For example, different auction systems may raise the same event.
  • Additional consumers may also be added independently of either existing producers and/or consumers. For example, an automated bidding service may make use of this event later without impacting the existing services.

Event Delivery Network essentials

The EDN is a very simple but very powerful concept, and we will now explain the basic principles associated with it.

Events

An event is the message that will be published and consumed on the Event Delivery Network. Events consist of three parts:

  1. A namespace that identifies the general area that the event is associated with and helps to avoid clashes between event names
  2. A name that identifies the type of event within a namespace
  3. A data type that defines the data associated with the event

Namespaces of events behave in the same way as namespaces in XML and identify the solution area of the events and avoid clashes between events with the same name that belong to different solution areas. For example, a namespace would differentiate an event called NewOrder in a military command and control system from an event called NewOrder in a logistics system.

Events are defined using an XML-based language called the Event Description Language (EDL).

We can model business events from within JDeveloper by clicking on the event icon in the top-left-hand corner of the composite editor, as shown in the following screenshot.

Events

This brings up the Create Event Definition File dialog to allow us to create a new EDL file to contain our event definitions.

Events

After defining the EDL File Name, the Directory it resides in, and the Namespace of the EDL file, we can add events to it by clicking on the green plus symbol. This takes us to the Add an Event dialog, where we can choose the XML Element that represents the data content of the event and give the event a Name.

Event elements and data types should always be defined in an XML schema file, which is separate from other SOA XML artifacts such as message schemas. This is because the events may be used across many areas, and they should not have dependencies on other SOA artifacts.

Events

After completing the definition of our EDL file, it is displayed in JDeveloper, where we can continue to add or remove events.

Events

The EDL file itself is a very simple format, shown as follows:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<definitions xmlns="http://schemas.oracle.com/events/edl"
             targetNamespace="AuctionEventDefinitions">
    <schema-import namespace="auction.events" location="xsd/AuctionEvents.xsd"/>
    <event-definition name="NewAuctionEvent">
        <content xmlns:ns0="auction.events" element="ns0:NewAuction"/>
    </event-definition>
    <event-definition name="NewBidEvent">
        <content xmlns:ns0="auction.events" element="ns0:NewBid"/>
    </event-definition>
</definitions>

The targetNamespace attribute of the <definitions> element defines the namespace for the events. XML-type definitions are imported using <schema-import>, and <event-definition> is used to define events and their associated element.
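
Note how the names defined here line up with the Java publishing example later in this chapter: the qualified name of an event appears to combine the EDL namespace with the event name. A small sketch of the resulting qualified names, assuming the EDL file shown above:

import javax.xml.namespace.QName;

public class AuctionEventNames {
    // Qualified names for the two events defined in the EDL file above
    public static final QName NEW_AUCTION_EVENT = new QName(
        "http://schemas.oracle.com/events/edl/AuctionEventDefinitions", "NewAuctionEvent");
    public static final QName NEW_BID_EVENT = new QName(
        "http://schemas.oracle.com/events/edl/AuctionEventDefinitions", "NewBidEvent");
}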

Event publishers

Events are created by event publishers. An event publisher may be a Mediator or BPEL component in an SCA Assembly or it may be a custom Java application. The event publisher raises the event by creating an XML element and passing it to the Event Delivery Network. Once passed to the EDN, the publisher is unaware of how many subscribers consume the event or even if the event is consumed by any subscriber. This provides a degree of isolation between the publisher and subscribers. The EDN is responsible for keeping track of who is subscribing and ensuring that they receive the new event.

Publishing an event using the Mediator component

When using a Mediator to publish an event, we usually want to publish the event in parallel with other processing that the Mediator is doing. Typically, we want the Mediator to take the request from some service call and publish the event.

In the following example, we have a Mediator NewAuctionMediator that routes to a dummy service implemented by a Mediator ServiceMediator that uses Echo functionality to provide a synchronous response to its caller. We will raise a NewAuction event as part of the inbound processing in the Mediator. Note that although the Mediator will route a reply to the specific caller of the composite, the event will be routed to all subscribers of that event type.

Publishing an event using the Mediator component

We can raise an event based on the input request in the following fashion.

We open the routing Mediator component, in this case, NewAuctionMediator, and add a new sequential static routing rule. When the Target Type dialog comes up, select the Event button to generate an event.

Publishing an event using the Mediator component

In the Event Chooser dialog, we can choose the Event Definition File that contains our event. We can either browse for an existing EDL file using the magnifying glass icon or we can create a new EDL file by using the thunderbolt icon. Once an EDL file has been chosen, we can then select the specific event that we wish to produce.

Publishing an event using the Mediator component

If we choose an existing EDL file that is not in our project, JDeveloper will assume that we want to copy the EDL file and its associated XML schemas into the project. On the Localize Files dialog, we have the choice of keeping existing directory relationships between files or flattening any existing directory structure.

Publishing an event using the Mediator component

Once the event is selected, we then just need to add a transformation to transform the input of the request to the event format.

Publishing an event using BPEL

We can also publish events using BPEL. Using the previous example, we may decide that rather than just publishing a NewAuction event that contains the input data used to create the auction, we also wish to include the auction identifier generated by the response to the new auction request. This is best achieved using the BPEL event publisher.

Using the previous example, we insert a BPEL process between the service requestor, in this case, NewAuctionMediator, and the service provider, in this case, ServiceMediator. We use the Base on a WSDL template to create the process and select the WSDL of the target service as our WSDL, in this case the WSDL for ServiceMediator. We then rewire the composite so that the target service is now invoked by our new BPEL process, and the original client of the service is now a client of the BPEL process, as shown in the following diagram:

Publishing an event using BPEL

We then edit the BPEL process to invoke the original target service. Because we have used the target service WSDL for the WSDL of the BPEL process, we can use the input and output variables of the process as parameters to invoke the target service. With this done, we are ready to publish the event as part of our BPEL process.

Publishing an event using BPEL

We publish an event from within BPEL by using an invoke and setting the Interaction Type to be Event rather than the more usual Partner Link.

Publishing an event using BPEL

This allows us to select an event and a variable to hold the data for that event. The event itself is chosen from the Event Chooser dialog, which was introduced in the previous section. We then need to add an assign statement to initialize the variable used to populate the event. The fact that this invoke is actually raising an event is identified by the lightning symbol on the Invoke.

Publishing an event using BPEL

Note that when we raise an event, there is no indication provided as to how to route or deliver the event. In the next section, we will look at how events are consumed.

Publishing an event using BPEL

Publishing an event using Java

We can also publish and consume events using Java. In this section, we will look at how Java code can be used to publish an event.

To publish an event, we need to go through the following steps:

  1. Create the event
  2. Connect to the Event Delivery Network
  3. Publish the event on the connection

Creating the event

We create the event in Java by using the oracle.integration.platform.blocks.event.BusinessEventBuilder class. An instance of this class is created by a static factory method called newInstance. We need to provide a qualified name (a QName that includes the namespace and the event name) and a body for the event through setter methods on the builder class. Once these have been set, we can call createEvent to generate an instance of a BusinessEvent.

// Build the event from its qualified name and XML payload
BusinessEventBuilder builder = BusinessEventBuilder.newInstance();
QName name = new QName(
    "http://schemas.oracle.com/events/edl/AuctionEventDefinitions",
    "NewAuctionEvent");
builder.setEventName(name);
XMLElement content = …   // the XML element holding the event content
builder.setBody(content);
BusinessEvent event = builder.createEvent();

The event name is a combination of the event schema, acting as an XML namespace, and the event name. The content is the XML element containing the event content.

Once we have a BusinessEvent, we need a connection to the Event Delivery Network in order to publish the event.

Creating the event connection

The event connection can be created using either a JMS queue connection to the Event Delivery Network or an Oracle AQ connection. The latter requires the use of a data source and is the approach we will show. We obtain an oracle.fabric.blocks.event.BusinessEventConnection from a BusinessEventConnectionFactory. We will use the AQ version of this connection factory, which is provided by the oracle.integration.platform.blocks.event.saq.SAQRemoteBusinessEventConnectionFactory class.

DataSource ds = …   // data source connected to the SOA infrastructure schema
BusinessEventConnectionFactory factory =
    new SAQRemoteBusinessEventConnectionFactory(ds, ds, null);
BusinessEventConnection conn =
    factory.createBusinessEventConnection();

We use the connection factory to create a connection to Event Delivery Network. The data source we provide must be configured to connect to the SOA infrastructure schema in the database, which by default is called <PREFIX>_SOAInfra.

Publishing the event

Now that we have an EDN connection, we can publish our event on it by calling publishEvent:

conn.publishEvent(event, EVENT_PRIORITY);

This publishes our event on our previously created connection. The event priority is usually set to 3, but it is not used in this release.
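
Pulling the three steps together, the following is a minimal sketch of a complete publisher. The JNDI name of the data source and the package names of BusinessEvent, BusinessEventConnectionFactory, and XMLElement are assumptions; the builder and connection factory classes and the publishEvent call are as described above.

import javax.naming.InitialContext;
import javax.sql.DataSource;
import javax.xml.namespace.QName;

import oracle.integration.platform.blocks.event.BusinessEventBuilder;
import oracle.integration.platform.blocks.event.saq.SAQRemoteBusinessEventConnectionFactory;
// The packages of the following classes are assumptions based on the names used in this chapter
import oracle.fabric.blocks.event.BusinessEvent;
import oracle.fabric.blocks.event.BusinessEventConnection;
import oracle.fabric.blocks.event.BusinessEventConnectionFactory;
import oracle.xml.parser.v2.XMLElement;

public class NewAuctionEventPublisher {

    private static final int EVENT_PRIORITY = 3;   // conventional value; not used in this release

    public void publish(XMLElement content) throws Exception {
        // 1. Build the event from its qualified name and XML payload
        BusinessEventBuilder builder = BusinessEventBuilder.newInstance();
        builder.setEventName(new QName(
            "http://schemas.oracle.com/events/edl/AuctionEventDefinitions",
            "NewAuctionEvent"));
        builder.setBody(content);
        BusinessEvent event = builder.createEvent();

        // 2. Connect to the EDN over AQ, using a data source that points at the
        //    SOA infrastructure schema (the JNDI name here is an assumption)
        InitialContext ctx = new InitialContext();
        DataSource ds = (DataSource) ctx.lookup("jdbc/SOADataSource");
        BusinessEventConnectionFactory factory =
            new SAQRemoteBusinessEventConnectionFactory(ds, ds, null);
        BusinessEventConnection conn = factory.createBusinessEventConnection();

        // 3. Publish the event on the connection
        conn.publishEvent(event, EVENT_PRIORITY);
    }
}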

Event subscribers

Events are consumed by event subscribers. In a similar fashion to event publishers, event subscribers may be BPEL processes or Mediators. Subscribers subscribe to a specific event within an event namespace and can limit the instances of that event that they receive by applying a filter. Only events for which the filter evaluates to true are delivered to the subscriber. Event filters are XPath expressions.

Consuming an event using Mediator

To consume an event, we can add an event subscriber to a Mediator. We do this by clicking on the green plus sign next to the Event Subscriptions label under Routing Rules.

Consuming an event using Mediator

This brings us to the Subscribed Events dialog where, by clicking on the green plus sign, we add a new event subscription using the Event Chooser dialog introduced in the section on publishing an event.

Consuming an event using Mediator

Having chosen an event, we can determine how the event is to be delivered using the Consistency option. Transactions are discussed later in Chapter 15, Advanced SOA Suite Architecture. The consistency options are:

  • Exactly once, by selecting the one and only one option. This makes the event delivery transaction part of the Mediator transaction. If the Mediator transaction completes successfully, then the event will also be marked as read; otherwise it will be rolled back and thus appear not to have been delivered. Transaction boundaries are explained in more detail in Chapter 15, Advanced SOA Suite Architecture.
  • At least once, by selecting the guaranteed option. This keeps the event delivery transaction separate from the Mediator transaction. The event will be delivered to the subscriber, but any errors in the Mediator may cause that event to be lost to that subscriber.
  • Immediately, by selecting the immediate option, which makes the delivery transaction part of the event publishing transaction. This option should be avoided, as it couples the subscriber and the publisher.

    Tip

    Avoid using the immediate delivery option as it tightly couples the publisher to a subscriber that it should be unaware of. The Java API for this is marked as deprecated, and it is likely that this option will disappear in the future.

The Run as publisher option allows the Mediator to run in the same security context as the event publisher. This allows the subscriber component to perform any actions that the publisher could perform.

The Filter option brings up the Expression Builder dialog when clicked on. This allows us to construct an XPath expression to limit the delivered event to only those for which the XPath expression resolves to true.

Consuming an event using Mediator

A component in a composite that has subscribed to an event has a lightning bolt on its service side to identify it within a composite.

Consuming an event using Mediator

Consuming an event using BPEL

To subscribe to an event using BPEL, we can create a BPEL process using the Subscribe to Events template. This allows us to add the events to which we wish to subscribe, in a similar fashion to having the Mediator subscribe to events, by using the Event Chooser dialog and adding quality of service options and filters.

Consuming an event using BPEL

This creates a BPEL process with a single receive activity that identifies itself as subscribing to an event by the lightning icon on the receive.

Consuming an event using BPEL

Events may also be subscribed to by adding a BPEL Receive activity to the process and choosing an Interaction Type of Event.

Consuming an event using BPEL

Events

An event is the message that will be published and consumed on the Event Delivery Network. Events consist of three parts:

  1. A namespace that identifies the general area that the event is associated with and helps to avoid clashes between event names
  2. A name that identifies the type of event within a namespace
  3. A data type that defines the data associated with the event

Namespaces of events behave in the same way as namespaces in XML and identify the solution area of the events and avoid clashes between events with the same name that belong to different solution areas. For example, a namespace would differentiate an event called NewOrder in a military command and control system from an event called NewOrder in a logistics system.

Events are defined using an XML-based language called the Event Description Language (EDL).

We can model business events from within JDeveloper by clicking on the event icon Events on the top-left-hand corner of the composite editor as shown in the following screenshot.

Events

This brings up the Create Event Definition File dialog to allow us to create a new EDL file to contain our event definitions.

Events

After defining the EDL File Name, the Directory it resides in, and the Namespace of the EDL file, we can add events to it by clicking on the green plus symbol Events. This takes us to the Add an Event dialog, where we can choose the XML Element that represents the data content of the event and give the event a Name.

Event elements and data types should always be defined in an XML schema file, which is separate from other SOA XML artifacts such as message schemas. This is because the events may be used across many areas, and they should not have dependencies on other SOA artifacts.

Events

After completing the definition of our EDL file, it is displayed in JDeveloper, where we can continue to add or remove events.

Events

The EDL file itself is a very simple format, shown as follows:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<definitions xmlns=http://schemas.oracle.com/events/edltargetNamespace="AuctionEventDefinitions">
    <schema-import namespace="auction.events"location="xsd/AuctionEvents.xsd"/>
        <event-definition name="NewAuctionEvent">
        <content xmlns:ns0=" auction.events"element="ns0:NewAuction"/>
        </event-definition>
        <event-definition name="NewBidEvent">
          <content xmlns:ns0=" auction.events" element="ns0:NewBid"/>
        </event-definition>
</definitions>

The targetNamespace attribute of the <definitions> element defines the namespace for the events. XML-type definitions are imported using <schema-import>, and <event-definition> is used to define events and their associated element.

Event publishers

Events are created by event publishers. An event publisher may be a Mediator or BPEL component in an SCA Assembly or it may be a custom Java application. The event publisher raises the event by creating an XML element and passing it to the Event Delivery Network. Once passed to the EDN, the publisher is unaware of how many subscribers consume the event or even if the event is consumed by any subscriber. This provides a degree of isolation between the publisher and subscribers. The EDN is responsible for keeping track of who is subscribing and ensuring that they receive the new event.

Publishing an event using the Mediator component

When using a Mediator to publish an event, we usually want to publish the event in parallel with other processing that the Mediator is doing. Typically, we want the Mediator to take the request from some service call and publish the event.

In the following example, we have a Mediator NewAuctionMediator that routes to a dummy service implemented by a Mediator ServiceMediator that uses Echo functionality to provide a synchronous response to its caller. We will raise a NewAuction event as part of the inbound processing in the Mediator. Note that although the Mediator will route a reply to the specific caller of the composite, the event will be routed to all subscribers of that event type.

Publishing an event using the Mediator component

We can raise an event based on the input request in the following fashion.

We open the routing Mediator component, in this case, NewAuctionMediator, and add a new sequential static routing rule. When the Target Type dialog comes up, select the Event button to generate an event.

Publishing an event using the Mediator component

In the Event Chooser dialog, we can choose the Event Definition File that contains our event. We can either browse for an existing EDL file using the magnifying glass icon or we can create a new EDL file by using the thunderbolt icon. Once an EDL file has been chosen, we can then select the specific event that we wish to produce.

Publishing an event using the Mediator component

If we choose an existing EDL file that is not in our project, JDeveloper will assume that we want to copy the EDL file and its associated XML schemas into the project. On the Localize Files dialog, we have the choice of keeping existing directory relationships between files or flattening any existing directory structure.

Publishing an event using the Mediator component

Once the event is selected, we then just need to add a transformation to transform the input of the request to the event format.

Publishing an event using BPEL

We can also publish events using BPEL. Using the previous example, we may decide that rather than just publishing a NewAuction event that contains the input data used to create the auction, we also wish to include the auction identifier generated by the response to the new auction request. This is best achieved using the BPEL event publisher.

Using the previous example, we insert a BPEL process between the service requestor, in this case, NewAuctionMediator, and the service provider, in this case ServiceProvider. We use the Base on a WSDL template to create the process and select the WSDL of the target service as our WSDL, in this case the WSDL for ServiceMediator. We then rewire the composite so that the target service is now invoked by our new BPEL process, and the original client of the service is now a client of the BPEL process, as shown in the following diagram:

Publishing an event using BPEL

We then edit the BPEL process to invoke the original target service. Because we have used the target service WSDL for the WSDL of the BPEL process, we can use the input and output variables of the process as parameters to invoke the target service. With this done, we are ready to publish the event as part of our BPEL process.

Publishing an event using BPEL

We publish an event from within BPEL by using an invoke and setting the Interaction Type to be Event rather than the more usual Partner Link.

Publishing an event using BPEL

This allows us to select an event and a variable to hold the data for that event. The event itself is chosen from the Event Chooser dialog, which was introduced in the previous section. We then need to add an assign statement to initialize the variable used to populate the event. The fact that this invoke is actually raising an event is identified by the lightning symbol on the Invoke.

Publishing an event using BPEL

Note that when we raise an event, there is no indication provided as to how to route or deliver the event. In the next section, we will look at how events are consumed.

Publishing an event using BPEL

Publishing an event using Java

We can also publish and consume events using Java. In this section, we will look at how Java code can be used to publish an event.

To publish an event, we need to go through the following steps:

  1. Create the event
  2. Connect to the Event Delivery Network
  3. Publish the event on the connection

Creating the event

We create the event in Java by using the oracle.integration.platform.blocks.event.BusinessEventBuilder class. An instance of this class is created by a static factory method called newInstance. We need to provide a qualified name (QName that includes the namespace and the entity name) and a body for the event through setter methods on the builder class. Once these have been set, we can call createEvent to generate an instance of a BusinessEvent.

BusinessEventBuilder builder = BusinessEventBuilder.newInstance();
QName name = new QName("http://schemas.oracle.com/events/edl/AuctionEventDefinitions","NewAuctionEvent");
builder.setEventName(name);
XMLElement content = …
builder.setBody(content);
BusinessEvent event = builder.createEvent();

The event name is a combination of the event schema, acting as an XML namespace, and the event name. The content is the XML element containing the event content.

Once we have a BusinessEvent, we need a connection to the Event Delivery Network in order to publish the event.

Creating the event connection

The event connection can be created using either a JMS queue connection to the Event Delivery Network or an Oracle AQ connection. The latter requires the use of a data source and is the approach we will show. We obtain an oracle.fabric.blocks.event.BusinessEventConnection from a BusinessEventConnectionFactory. We will use the AQ version of this connection factory, which is provided by the oracle.integration.platform.blocks.event.saq. SAQRemoteBusinessEventConnectionFactory class.

DataSource ds = …
  BusinessEventConnectionFactory factory = new
    SAQRemoteBusinessEventConnectionFactory(ds, ds, null);
  BusinessEventConnection conn =
    factory.createBusinessEventConnection();

We use the connection factory to create a connection to Event Delivery Network. The data source we provide must be configured to connect to the SOA infrastructure schema in the database, which by default is called <PREFIX>_SOAInfra.

Publishing the event

Now that we have an EDN connection, we can publish our event on it by calling publishEvent:

conn.publishEvent(event, EVENT_PRIORITY);

This publishes our event on our previously created connection. The event priority is usually set to 3, but it is not used in this release.

Event subscribers

Events are consumed by event subscribers. In a similar fashion to event publishers, event subscribers may be BPEL processes or Mediators. When subscribing to an event, the subscriber can filter the events. Subscribers subscribe to a specific event within an event namespace. They can limit the instances of an event that they receive by applying a filter. Only events for which the filter evaluates to true are delivered to the event subscriber. Event filters are XPath expressions.

Consuming an event using Mediator

To consume an event, we can add an event subscriber to a Mediator. We do this by clicking on the green plus sign next to the Event Subscriptions label under Routing Rules.

Consuming an event using Mediator

This brings us to the Subscribed Events dialog where, by clicking on the green plus sign, we add a new event subscription using the Event Chooser dialog introduced in the section on publishing an event.

Consuming an event using Mediator

Having chosen an event, we can determine how the event is to be delivered using the Consistency option. Transactions are discussed later in Chapter 15, Advanced SOA Suite Architecture. The consistency options are:

  • Exactly once by selecting the one and only one option. This makes the event delivery transaction part of the Mediator transaction. If the Mediator transaction completes successfully, then the event will also be marked as read, otherwise it will be rolled back and thus appear not to have been delivered. Transaction boundaries are explained in more detail in Chapter 15, Advanced SOA Suite Architecture
  • At least once by selecting guaranteed. This keeps the Mediator transaction separate from the delivery transaction. The event will be delivered to the subscriber, but any errors in the Mediator may cause that event to be lost to that subscriber.
  • immediate makes the delivery transaction part of the event publishing transaction. This option should be avoided as it couples the subscriber and the publisher.

    Tip

    Avoid using the immediate delivery option as it tightly couples the publisher to a subscriber that it should be unaware of. The Java API for this is marked as deprecated, and it is likely that this option will disappear in the future.

The Run as publisher option allows the Mediator to run in the same security context as the event publisher. This allows the subscriber component to perform any actions that the publisher could perform.

The Filter option brings up the Expression Builder dialog when clicked on. This allows us to construct an XPath expression to limit the delivered event to only those for which the XPath expression resolves to true.

Consuming an event using Mediator

A component in a composite that has subscribed to an event has a lightning bolt Consuming an event using Mediator on its service side to identify it within a composite.

Consuming an event using Mediator

Consuming an event using BPEL

To subscribe to an event using BPEL, we can create a BPEL process using the Subscribe to Events template. This allows us to add the events to which we wish to subscribe to in a similar fashion to having the Mediator subscribe to events by using the Event Chooser dialog and adding quality of service options and filters.

Consuming an event using BPEL

This creates a BPEL process with a single receive activity that identifies itself as subscribing to an event by the lightening icon Consuming an event using BPEL on the receive.

Consuming an event using BPEL

Events may also be subscribed to by adding a BPEL Receive activity to the process and choosing an Interaction Type of Event.

Consuming an event using BPEL

Event publishers

Events are created by event publishers. An event publisher may be a Mediator or BPEL component in an SCA Assembly or it may be a custom Java application. The event publisher raises the event by creating an XML element and passing it to the Event Delivery Network. Once passed to the EDN, the publisher is unaware of how many subscribers consume the event or even if the event is consumed by any subscriber. This provides a degree of isolation between the publisher and subscribers. The EDN is responsible for keeping track of who is subscribing and ensuring that they receive the new event.

Publishing an event using the Mediator component

When using a Mediator to publish an event, we usually want to publish the event in parallel with other processing that the Mediator is doing. Typically, we want the Mediator to take the request from some service call and publish the event.

In the following example, we have a Mediator NewAuctionMediator that routes to a dummy service implemented by a Mediator ServiceMediator that uses Echo functionality to provide a synchronous response to its caller. We will raise a NewAuction event as part of the inbound processing in the Mediator. Note that although the Mediator will route a reply to the specific caller of the composite, the event will be routed to all subscribers of that event type.

Publishing an event using the Mediator component

We can raise an event based on the input request in the following fashion.

We open the routing Mediator component, in this case, NewAuctionMediator, and add a new sequential static routing rule. When the Target Type dialog comes up, select the Event button to generate an event.

Publishing an event using the Mediator component

In the Event Chooser dialog, we can choose the Event Definition File that contains our event. We can either browse for an existing EDL file using the magnifying glass icon or we can create a new EDL file by using the thunderbolt icon. Once an EDL file has been chosen, we can then select the specific event that we wish to produce.

Publishing an event using the Mediator component

If we choose an existing EDL file that is not in our project, JDeveloper will assume that we want to copy the EDL file and its associated XML schemas into the project. On the Localize Files dialog, we have the choice of keeping existing directory relationships between files or flattening any existing directory structure.

Publishing an event using the Mediator component

Once the event is selected, we then just need to add a transformation to transform the input of the request to the event format.

Publishing an event using BPEL

We can also publish events using BPEL. Using the previous example, we may decide that rather than just publishing a NewAuction event that contains the input data used to create the auction, we also wish to include the auction identifier generated by the response to the new auction request. This is best achieved using the BPEL event publisher.

Using the previous example, we insert a BPEL process between the service requestor, in this case, NewAuctionMediator, and the service provider, in this case ServiceProvider. We use the Base on a WSDL template to create the process and select the WSDL of the target service as our WSDL, in this case the WSDL for ServiceMediator. We then rewire the composite so that the target service is now invoked by our new BPEL process, and the original client of the service is now a client of the BPEL process, as shown in the following diagram:

Publishing an event using BPEL

We then edit the BPEL process to invoke the original target service. Because we have used the target service WSDL for the WSDL of the BPEL process, we can use the input and output variables of the process as parameters to invoke the target service. With this done, we are ready to publish the event as part of our BPEL process.

Publishing an event using BPEL

We publish an event from within BPEL by using an invoke and setting the Interaction Type to be Event rather than the more usual Partner Link.

Publishing an event using BPEL

This allows us to select an event and a variable to hold the data for that event. The event itself is chosen from the Event Chooser dialog, which was introduced in the previous section. We then need to add an assign statement to initialize the variable used to populate the event. The fact that this invoke is actually raising an event is identified by the lightning symbol on the Invoke.

Publishing an event using BPEL

Note that when we raise an event, there is no indication provided as to how to route or deliver the event. In the next section, we will look at how events are consumed.

Publishing an event using BPEL

Publishing an event using Java

We can also publish and consume events using Java. In this section, we will look at how Java code can be used to publish an event.

To publish an event, we need to go through the following steps:

  1. Create the event
  2. Connect to the Event Delivery Network
  3. Publish the event on the connection

Creating the event

We create the event in Java by using the oracle.integration.platform.blocks.event.BusinessEventBuilder class. An instance of this class is created by a static factory method called newInstance. We need to provide a qualified name (QName that includes the namespace and the entity name) and a body for the event through setter methods on the builder class. Once these have been set, we can call createEvent to generate an instance of a BusinessEvent.

BusinessEventBuilder builder = BusinessEventBuilder.newInstance();
QName name = new QName("http://schemas.oracle.com/events/edl/AuctionEventDefinitions","NewAuctionEvent");
builder.setEventName(name);
XMLElement content = …
builder.setBody(content);
BusinessEvent event = builder.createEvent();

The event name is a combination of the event schema, acting as an XML namespace, and the event name. The content is the XML element containing the event content.

Once we have a BusinessEvent, we need a connection to the Event Delivery Network in order to publish the event.

Creating the event connection

The event connection can be created using either a JMS queue connection to the Event Delivery Network or an Oracle AQ connection. The latter requires the use of a data source and is the approach we will show. We obtain an oracle.fabric.blocks.event.BusinessEventConnection from a BusinessEventConnectionFactory. We will use the AQ version of this connection factory, which is provided by the oracle.integration.platform.blocks.event.saq. SAQRemoteBusinessEventConnectionFactory class.

DataSource ds = …
  BusinessEventConnectionFactory factory = new
    SAQRemoteBusinessEventConnectionFactory(ds, ds, null);
  BusinessEventConnection conn =
    factory.createBusinessEventConnection();

We use the connection factory to create a connection to Event Delivery Network. The data source we provide must be configured to connect to the SOA infrastructure schema in the database, which by default is called <PREFIX>_SOAInfra.

Publishing the event

Now that we have an EDN connection, we can publish our event on it by calling publishEvent:

conn.publishEvent(event, EVENT_PRIORITY);

This publishes our event on our previously created connection. The event priority is usually set to 3, but it is not used in this release.

Event subscribers

Events are consumed by event subscribers. In a similar fashion to event publishers, event subscribers may be BPEL processes or Mediators. When subscribing to an event, the subscriber can filter the events. Subscribers subscribe to a specific event within an event namespace. They can limit the instances of an event that they receive by applying a filter. Only events for which the filter evaluates to true are delivered to the event subscriber. Event filters are XPath expressions.

Consuming an event using Mediator

To consume an event, we can add an event subscriber to a Mediator. We do this by clicking on the green plus sign next to the Event Subscriptions label under Routing Rules.

Consuming an event using Mediator

This brings us to the Subscribed Events dialog where, by clicking on the green plus sign, we add a new event subscription using the Event Chooser dialog introduced in the section on publishing an event.

Consuming an event using Mediator

Having chosen an event, we can determine how the event is to be delivered using the Consistency option. Transactions are discussed later in Chapter 15, Advanced SOA Suite Architecture. The consistency options are:

  • Exactly once by selecting the one and only one option. This makes the event delivery transaction part of the Mediator transaction. If the Mediator transaction completes successfully, then the event will also be marked as read, otherwise it will be rolled back and thus appear not to have been delivered. Transaction boundaries are explained in more detail in Chapter 15, Advanced SOA Suite Architecture
  • At least once by selecting guaranteed. This keeps the Mediator transaction separate from the delivery transaction. The event will be delivered to the subscriber, but any errors in the Mediator may cause that event to be lost to that subscriber.
  • immediate makes the delivery transaction part of the event publishing transaction. This option should be avoided as it couples the subscriber and the publisher.

    Tip

    Avoid using the immediate delivery option as it tightly couples the publisher to a subscriber that it should be unaware of. The Java API for this is marked as deprecated, and it is likely that this option will disappear in the future.

The Run as publisher option allows the Mediator to run in the same security context as the event publisher. This allows the subscriber component to perform any actions that the publisher could perform.

The Filter option brings up the Expression Builder dialog when clicked on. This allows us to construct an XPath expression to limit the delivered event to only those for which the XPath expression resolves to true.

Consuming an event using Mediator

A component in a composite that has subscribed to an event has a lightning bolt Consuming an event using Mediator on its service side to identify it within a composite.

Consuming an event using Mediator

Consuming an event using BPEL

To subscribe to an event using BPEL, we can create a BPEL process using the Subscribe to Events template. This allows us to add the events to which we wish to subscribe to in a similar fashion to having the Mediator subscribe to events by using the Event Chooser dialog and adding quality of service options and filters.

Consuming an event using BPEL

This creates a BPEL process with a single receive activity that identifies itself as subscribing to an event by the lightening icon Consuming an event using BPEL on the receive.

Consuming an event using BPEL

Events may also be subscribed to by adding a BPEL Receive activity to the process and choosing an Interaction Type of Event.

Consuming an event using BPEL

Publishing an event using the Mediator component

When using a Mediator to publish an event, we usually want to publish the event in parallel with other processing that the Mediator is doing. Typically, we want the Mediator to take the request from some service call and publish the event.

In the following example, we have a Mediator NewAuctionMediator that routes to a dummy service implemented by a Mediator ServiceMediator that uses Echo functionality to provide a synchronous response to its caller. We will raise a NewAuction event as part of the inbound processing in the Mediator. Note that although the Mediator will route a reply to the specific caller of the composite, the event will be routed to all subscribers of that event type.

Publishing an event using the Mediator component

We can raise an event based on the input request in the following fashion.

We open the routing Mediator component, in this case, NewAuctionMediator, and add a new sequential static routing rule. When the Target Type dialog comes up, select the Event button to generate an event.

Publishing an event using the Mediator component

In the Event Chooser dialog, we can choose the Event Definition File that contains our event. We can either browse for an existing EDL file using the magnifying glass icon or we can create a new EDL file by using the thunderbolt icon. Once an EDL file has been chosen, we can then select the specific event that we wish to produce.

Publishing an event using the Mediator component

If we choose an existing EDL file that is not in our project, JDeveloper will assume that we want to copy the EDL file and its associated XML schemas into the project. On the Localize Files dialog, we have the choice of keeping existing directory relationships between files or flattening any existing directory structure.

Publishing an event using the Mediator component

Once the event is selected, we then just need to add a transformation to transform the input of the request to the event format.

Publishing an event using BPEL

We can also publish events using BPEL. Using the previous example, we may decide that rather than just publishing a NewAuction event that contains the input data used to create the auction, we also wish to include the auction identifier generated by the response to the new auction request. This is best achieved using the BPEL event publisher.

Using the previous example, we insert a BPEL process between the service requestor, in this case, NewAuctionMediator, and the service provider, in this case ServiceProvider. We use the Base on a WSDL template to create the process and select the WSDL of the target service as our WSDL, in this case the WSDL for ServiceMediator. We then rewire the composite so that the target service is now invoked by our new BPEL process, and the original client of the service is now a client of the BPEL process, as shown in the following diagram:

Publishing an event using BPEL

We then edit the BPEL process to invoke the original target service. Because we have used the target service WSDL for the WSDL of the BPEL process, we can use the input and output variables of the process as parameters to invoke the target service. With this done, we are ready to publish the event as part of our BPEL process.

Publishing an event using BPEL

We publish an event from within BPEL by using an invoke and setting the Interaction Type to be Event rather than the more usual Partner Link.

Publishing an event using BPEL

This allows us to select an event and a variable to hold the data for that event. The event itself is chosen from the Event Chooser dialog, which was introduced in the previous section. We then need to add an assign statement to initialize the variable used to populate the event. The fact that this invoke is actually raising an event is identified by the lightning symbol on the Invoke.

Publishing an event using BPEL

Note that when we raise an event, there is no indication provided as to how to route or deliver the event. In the next section, we will look at how events are consumed.

Publishing an event using BPEL

Publishing an event using Java

We can also publish and consume events using Java. In this section, we will look at how Java code can be used to publish an event.

To publish an event, we need to go through the following steps:

  1. Create the event
  2. Connect to the Event Delivery Network
  3. Publish the event on the connection

Creating the event

We create the event in Java by using the oracle.integration.platform.blocks.event.BusinessEventBuilder class. An instance of this class is created by a static factory method called newInstance. We need to provide a qualified name (QName that includes the namespace and the entity name) and a body for the event through setter methods on the builder class. Once these have been set, we can call createEvent to generate an instance of a BusinessEvent.

BusinessEventBuilder builder = BusinessEventBuilder.newInstance();
QName name = new QName("http://schemas.oracle.com/events/edl/AuctionEventDefinitions","NewAuctionEvent");
builder.setEventName(name);
XMLElement content = …
builder.setBody(content);
BusinessEvent event = builder.createEvent();

The event name is a combination of the event schema, acting as an XML namespace, and the event name. The content is the XML element containing the event content.

Once we have a BusinessEvent, we need a connection to the Event Delivery Network in order to publish the event.

Creating the event connection

The event connection can be created using either a JMS queue connection to the Event Delivery Network or an Oracle AQ connection. The latter requires the use of a data source and is the approach we will show. We obtain an oracle.fabric.blocks.event.BusinessEventConnection from a BusinessEventConnectionFactory. We will use the AQ version of this connection factory, which is provided by the oracle.integration.platform.blocks.event.saq. SAQRemoteBusinessEventConnectionFactory class.

DataSource ds = …
  BusinessEventConnectionFactory factory = new
    SAQRemoteBusinessEventConnectionFactory(ds, ds, null);
  BusinessEventConnection conn =
    factory.createBusinessEventConnection();

We use the connection factory to create a connection to Event Delivery Network. The data source we provide must be configured to connect to the SOA infrastructure schema in the database, which by default is called <PREFIX>_SOAInfra.

Publishing the event

Now that we have an EDN connection, we can publish our event on it by calling publishEvent:

conn.publishEvent(event, EVENT_PRIORITY);

This publishes our event on our previously created connection. The event priority is usually set to 3, but it is not used in this release.

Event subscribers

Events are consumed by event subscribers. In a similar fashion to event publishers, event subscribers may be BPEL processes or Mediators. When subscribing to an event, the subscriber can filter the events. Subscribers subscribe to a specific event within an event namespace. They can limit the instances of an event that they receive by applying a filter. Only events for which the filter evaluates to true are delivered to the event subscriber. Event filters are XPath expressions.

Consuming an event using Mediator

To consume an event, we can add an event subscriber to a Mediator. We do this by clicking on the green plus sign next to the Event Subscriptions label under Routing Rules.

Consuming an event using Mediator

This brings us to the Subscribed Events dialog where, by clicking on the green plus sign, we add a new event subscription using the Event Chooser dialog introduced in the section on publishing an event.

Consuming an event using Mediator

Having chosen an event, we can determine how the event is to be delivered using the Consistency option. Transactions are discussed later in Chapter 15, Advanced SOA Suite Architecture. The consistency options are:

  • Exactly once, by selecting the one and only one option. This makes the event delivery transaction part of the Mediator transaction. If the Mediator transaction completes successfully, the event is marked as read; otherwise, it is rolled back and thus appears not to have been delivered. Transaction boundaries are explained in more detail in Chapter 15, Advanced SOA Suite Architecture.
  • At least once, by selecting the guaranteed option. This keeps the Mediator transaction separate from the delivery transaction. The event is delivered to the subscriber, but any errors in the Mediator may cause the event to be lost to that subscriber.
  • Immediately, by selecting the immediate option, which makes the delivery transaction part of the event publishing transaction. This option should be avoided, as it couples the subscriber and the publisher.

    Tip

    Avoid using the immediate delivery option as it tightly couples the publisher to a subscriber that it should be unaware of. The Java API for this is marked as deprecated, and it is likely that this option will disappear in the future.

The Run as publisher option allows the Mediator to run in the same security context as the event publisher. This allows the subscriber component to perform any actions that the publisher could perform.

The Filter option brings up the Expression Builder dialog when clicked. This allows us to construct an XPath expression that limits the delivered events to only those for which the expression resolves to true.

Consuming an event using Mediator

A component in a composite that has subscribed to an event has a lightning bolt on its service side to identify it within the composite.

Consuming an event using Mediator

Consuming an event using BPEL

To subscribe to an event using BPEL, we can create a BPEL process using the Subscribe to Events template. This allows us to add the events to which we wish to subscribe, in a similar fashion to the Mediator, using the Event Chooser dialog and adding quality of service options and filters.

Consuming an event using BPEL

This creates a BPEL process with a single receive activity that identifies itself as subscribing to an event by the lightning icon on the receive.

Consuming an event using BPEL

Events may also be subscribed to by adding a BPEL Receive activity to the process and choosing an Interaction Type of Event.

Consuming an event using BPEL

EDN publishing patterns with SOA Suite

The following list summarizes the different ways in which events may be published within the SOA Suite, depending on the requirement:

  • Publish an event on receipt of a message: A Mediator can achieve this by implementing the target service interface and passing the message through to the target, publishing the event as an additional sequential routing rule.
  • Publish an event on a synchronous message response: A BPEL process can achieve this by implementing the target service interface, passing the message through to the target, and passing the response back to the caller. Either before or after the return to the caller, the process can publish an event using data from the response.
  • Publish an event on a synchronous message request and reply: A BPEL process can achieve this in the same way, publishing an event that uses data from both the request and the response.
  • Publish an event on an asynchronous response: A BPEL process can achieve this by implementing the asynchronous interface; before or after passing the message from the target back to the caller, it can publish an event using data from the response.
  • Publish an event on an asynchronous message request and reply: A BPEL process can achieve this by implementing the target service interface and the callback interface, passing the message through to the target and the callback back to the caller. Either before or after the callback to the caller, the process can publish an event using data from the request and the response.
  • Publish an event on an event: A Mediator can achieve this by subscribing to an event and then publishing an event.

We will now look at how each of these patterns may be implemented.

Publishing an event on receipt of a message

If we receive a message, either a one-way or a request/reply interaction, we can use the Mediator to publish an event based on the content of the inbound message, using a static routing rule to raise the event before or after forwarding the request to a target service, as shown in the following screenshot:

Publishing an event on receipt of a message

Publishing an event on a synchronous message response

If we wish to raise an event based on the response to a request/reply interaction, then we need to use a BPEL process to invoke the target service and then raise the event based on the content of the response, as shown in the following screenshot:

Publishing an event on a synchronous message response

Publishing an event on a synchronous message request and reply

When an event needs to be raised, based on the content of both the request and reply parts of a synchronous interaction, a BPEL process can be used to do this. The pattern is essentially the same as the previous pattern, except that in the <assign> to the event variable, we include data from both the request message and the reply message.

Publishing an event on an asynchronous response

When an event needs to be raised based on the content of an asynchronous response, we can use a BPEL process to do this. We invoke the target service and get the reply. Then, either before or after sending the reply back to the initiator of the service interaction, we can raise the event, as shown in the following screenshot:

Publishing an event on an asynchronous response

Publishing an event on an asynchronous message request and reply

When an event needs to be raised based on the content of both the request and reply parts of an asynchronous interaction, a BPEL process can be used to do this. The pattern is essentially the same as the previous pattern, except that in the <assign> to the event variable, we include data from both the request message and the reply message.

Publishing an event on an event

We can use a Mediator to raise an event based on an incoming event. We may want to do this to map events from one namespace to another or to manage backwards compatibility between different versions of an event without having to change subscribers or publishers. The Mediator can simply raise the outgoing event based on the incoming event by using a sequential routing rule, as shown in the following screenshot:

Publishing an event on an event

Monitoring event processing in Enterprise Manager

We can monitor what is happening with events from within Enterprise Manager. We can also create new events from the EM console.

We can track what is happening with events by using the Business Events menu item of the soa_infra tree node. This brings up the Business Events screen.

Monitoring event processing in Enterprise Manager

On the Events tab of this screen, we can see the list of events registered with the server and the number of subscriptions and failed deliveries for each event. We can also create database event subscriptions from this screen by selecting an event and clicking on the Subscribe… link.

Monitoring event processing in Enterprise Manager

Selecting an event and clicking the Test… button allows us to publish a new event. No assistance is provided with the format of the event, which should be laid out as shown in the following example:

<business-event
  xmlns:ns1="http://soa.suite.book/events/edl/AuctionEvents"
  xmlns="http://oracle.com/fabric/businessEvent">
  <name>ns1:NewAuction</name>
  <id>e4196227-806c-4680-a6b4-6f8df931b3f3</id>
  <content>
    <NewAuction xmlns="http://soa.suite.book/AuctionEvents">
      <seller>Antony</seller>
      <item>Used Running Shoes</item>
      <id>12345</id>
    </NewAuction>
  </content>
</business-event>

Note that the event content inside the <content> tag is the data associated with our new event. The <business-event> element identifies the namespace of the event and, under it, the <name> element identifies the specific event.

Monitoring event processing in Enterprise Manager

The Subscriptions tab gives us more information about subscriptions, identifying the composite and component within the composite that are subscribing to a particular event. We can also see the transaction consistency level and any filter that is being applied.

Subscriptions can either be database subscriptions, which are linked to a stored procedure in the database, or subscriptions within components in a composite.

Monitoring event processing in Enterprise Manager

The Faults tab allows us to see the details of any faults generated by subscriptions when trying to receive an event.

Summary

In this chapter, we have explored how EDN differs from traditional MOM systems and also how it is used to allow seamless extension of business functionality without requiring any modification of business processes and services. We have looked at the different ways in which Mediator and BPEL may be used to publish events and taken a brief overview of the event monitoring abilities of Enterprise Manager.

Chapter 9. Building Real-time Dashboards

The key objective driving service-oriented architecture is to move the IT organization closer to the business. Creation of services and their assembly into composite applications and processes is how IT can become more responsive to the business. However, it is the provision of real-time business information via dashboards that really gives the business the confidence that IT can add value. In this chapter, we will examine how to use Business Activity Monitoring (BAM) to provide real-time dashboards that give the business an insight into what is currently happening with their processes, not what happened yesterday or last week.

How BAM differs from traditional business intelligence

The Oracle SOA Suite stores the state of all processes in a database in documented schemas, so why do we need yet another reporting tool to provide insight into our processes and services? In other words, how does BAM differ from traditional BI? In traditional BI, reports are generated and delivered either on a scheduled basis or in response to a user request. Any changes to the information will not be reflected until the next scheduled run or until a user requests the report to be rerun. BAM is an event-driven reporting tool that generates alerts and reports in real-time, based on a continuously changing data stream, some of whose data may not be in the database. For example, BAM may gather data from the currently executing state of BPEL processes to track how many orders are at each step of the order process. As events occur in services and processes, they are captured by BAM, transformed into business-friendly reports and views, and delivered and updated in real-time. Where necessary, these updated reports are delivered to users, and this delivery can take several forms. The best known is the dashboard on the user's desktop that updates automatically without any need for the user to refresh the screen. Reports can also be delivered to the end user by other means, including text message or e-mail.

Traditional reporting tools such as Oracle Reports and Oracle Discoverer, as well as Oracle's latest Business Intelligence Suite, can be used to meet some real-time reporting needs, but they do not provide the event-driven reporting that gives the business a continuously updating view of the current business situation.

Tip

Event-Driven Architecture (EDA) is about building business solutions around responsiveness to events. Events may be simple triggers such as a stock out event or they may be more complex triggers such as the calculations to realize that a stock out will occur in three days. An event-driven architecture will often take a number of simple events and then combine them through a complex event-processing sequence to generate complex events that could not have been raised without aggregation of several simpler events.

Oracle BAM scenarios

Oracle Business Activity Monitoring is typically used to monitor two distinct types of real-time data. Firstly, it may be used to monitor the overall state of processes in the business. For example, it may be used to track how many auctions are currently running, how many have bids on them, and how many have been completed in the last 24 hours (or other time periods). Secondly, it may be used to track Key Performance Indicators (KPIs) in real-time. For example, it may be used to provide a continuously updating dashboard to a seller, showing the current total value of all of that seller's auctions and tracking this against an expected target.

In the first case, we are interested in how business processes are progressing and are using BAM to identify bottlenecks and failure points within those processes. Bottlenecks can be identified by processes spending too much time in given steps of the process. Currently, BAM requires us to identify key points in a process and capture data at those key points. There is no direct linkage back to the process models in the current release of SOA Suite or Oracle's Business Process Analyst tool. BAM allows us to compute the time taken between two points in a process, such as the time between order placement and shipping, and provides real-time feedback on those times. Similarly, BAM can be used to track the percentage drop-out rate between steps in a sales process, allowing the business to take the appropriate action. For example, it can do this by tracking the number of shopping carts created, then the number of carts that continue to get a shipping cost, and finally the number of carts that result in an order being placed. The business can then use this real-time information to assess the impact of, say, a free shipping offer.

In the second case, our interest is in some aggregate number, such as our total liabilities, should we win all the auctions we are bidding on. This requires us to aggregate results from many events, possibly performing some kind of calculation on them, to provide a single KPI that gives the business an indication of how things are going. BAM allows us to continuously update this number on a dashboard in real-time, without the need for continual polling. It also allows us to trigger alerts, perhaps delivered through e-mail or SMS, to notify an individual when a threshold is breached.

In both cases, reports delivered can be customized based on the individual receiving the report.

BAM architecture

It may seem odd to have a section on architecture in the middle of a chapter about how to effectively use BAM, but the key to successfully utilizing BAM is an understanding of how the different tiers relate to each other.

Logical view

Logical view

The preceding diagram represents a logical view of how BAM operates. Events are acquired from one or more sources through event acquisition and then normalized, correlated, and stored in event storage (generally a memory area in BAM that is backed up to the disk). The report cache generates reports based on events in storage and then delivers those reports, together with real-time updates through the report delivery layer. Event processing is also performed on events in storage, and when defined conditions are met, alerts will be delivered through the alert delivery service.

Physical view

To understand the physical view of the architecture of BAM better, we have divided this section into four parts.

Acquire

This logical view maps onto the physical BAM components, as shown in the following diagram. Data acquisition in the SOA Suite is primarily handled by a BAM Adapter. BAM can also receive events from JMS message queues. BAM exposes a web service interface to allow any web service-capable application to act as an event source. Finally, there is an Oracle Data Integrator (ODI) knowledge module that can be used to feed BAM. BAM has the ability to query data in databases (useful for historical comparison and reference data) but does not detect changes in that data. For complex data formats, such as master-detail record relationships, or for other data sources, Oracle recommends using the ODI knowledge module in conjunction with Oracle Data Integrator.

As an alternative to using ODI, it is possible to use adapters to acquire data from multiple sources and feed it into BAM through SCA Assemblies or OSB. This is more work for the developer, but it avoids an investment in ODI if it is not used elsewhere in the business.

For high volume, real-time data capture, Oracle provides a Complex Event Processing Engine (CEP) that can batch events before forwarding them to BAM. This reduces the number of calls into BAM, allowing it to scale better.

Finally, it is possible to send messages straight from applications into BAM using a JMS queue or direct web service call. This, however, tightly couples the application and BAM and generally requires reworking the application to support BAM. Using the middleware approaches, which were shown earlier, allows us to avoid this coupling.

At the data capture level, we need to think of the data items that we can provide to feed the reports and alerts that we desire to generate. We must also consider the sources of that data and the best way to load it into BAM. If all the data we require passes through the composite engine, then we can use the BAM adapter within SOA Suite to capture our BAM data. If there is some data that is not visible through the composites, then we need to consider the other mechanisms discussed earlier, such as using ODI, creating new composites to capture the data, or directly wiring the sources of the data to BAM.

Acquire

Store

Once the data is captured, it is stored in a normalized form in memory in a component called the Active Data Cache (ADC). This storage facility has the ability to do simple correlation based on fields within the data, and multiple data items received from the acquisition layer may update just a single object in the data cache. For example, the state of a given BPEL process instance may be represented by a single object in the ADC and all updates to that process state will just update that single data item, rather than creating multiple data items. The ADC contents are also stored in the BAM data store to avoid losing data across restarts and to avoid running out of memory.

Process

Reports are run based on user demand. Once a report is run, it will update the user's screen in real-time. Where multiple users are accessing the same report, only one instance of the report is maintained by the report server. As events are captured and stored in real-time, the report engine continuously monitors them for any changes that need to be made to the currently active reports. When changes are detected that impact active reports, the appropriate report is updated in memory and the updates are sent to the user's screen.

In addition to the event processing required to correctly insert and update items in the ADC, there is also a requirement to monitor items in the ADC for events that require some sort of action to be taken. This is the job of the event processor. This will monitor data in the ADC to see if registered thresholds on values have been exceeded or if certain timeouts have expired. The event processor will often need to perform calculations across multiple data items to do this.

This monitoring of events in the event processor is accomplished through BAM rules, which are used to trigger BAM alerts. A BAM rule may be to monitor the percentage of aborted sales processes in the last 30 minutes and to raise an alert when the percentage exceeds a threshold value.

Deliver

Delivery of reports takes place in two ways. First, users can view reports on their desktop by selecting views within BAM. These reports are delivered as HTML pages within a browser and are updated whenever the underlying data used in the report changes. The other approach is that reports are sent out as a result of alerts being raised by the Event Processing Engine. In this latter case, the report may be delivered by e-mail, SMS, or voice messaging using the notifications service. A final option available for these alerts is to invoke a web service to take some sort of automated action.

Tip

Closing the Loop

While monitoring what is happening is all very laudable, it is only beneficial if we actually do something about what we are monitoring. BAM not only provides real-time monitoring, but it also provides the facility to invoke other services to respond to undesirable events such as stock outs. The ability to invoke external services is crucial to the concept of a closed-loop control environment where, as a result of monitoring, we are able to reach back into the processes and either alter their execution or start new ones. For example, when a stock out or low stock event is raised, rather than just notifying a manager about the stock out, the message centre could invoke a web service requesting a supplier to send more stock to replenish inventory. Placing this kind of feedback mechanism in BAM allows us to trigger events across multiple applications and locations in a way that may not be possible within a single application or process, because they do not have sufficient visibility. For example, in response to a stock out, we may be monitoring stock levels in independent systems and, based on stock levels elsewhere, may redirect stock from one location to another rather than requesting our supplier to provide more stock. By invoking web services, we avoid the need for manual intervention in responding to these alerts.

Another way of accessing BAM reports is through Application Development Framework (ADF, Oracle's UI development framework) BAM data controls. These controls can be used on ADF pages to provide custom applications and portals with access to BAM data, and they update in real-time on a user desktop in the same way as reports retrieved directly from BAM.

Steps in using BAM

The following steps are used in creating BAM reports:

  1. Decide what reports are desired
  2. Decide what data is required to provide those reports
  3. Define suitable data objects
  4. Capture events to populate the data objects
  5. Create reports from the data objects

The first two steps are paper-based exercises to define the requirements. The remaining steps involve creating suitable artifacts in BAM to support the business reports defined in step 1.

User interface

Development in Oracle BAM is done through a web-based user interface.

User interface

This user interface gives access to four different applications that allow you to interact with different parts of BAM:

  • Active Viewer: For giving access to reports, this relates to the deliver stage for user-requested reports.
  • Active Studio: For building reports, this relates to the 'process' stage for creating reports.
  • Architect: For setting up both inbound and outbound events. Data elements are defined here, as are data sources. Alerts are also configured here. This covers setting up acquire and store stages as well as the deliver stage for alerts.
  • Administrator: For managing users and roles as well as defining the types of message sources.

We will not examine the applications individually, but we will take a task-focused look at how to use them as a part of providing some specific reports.

Monitoring process state

Now that we have examined how BAM is constructed, let us use this knowledge to construct some simple dashboards that track the state of a business process. We will create a simple version of an auction process. The process is shown as follows:

Monitoring process state

An auction is started, then bids are placed until the time runs out, at which point the auction is completed. This is modeled in BPEL. The process has three distinct states, as follows:

  1. Started
  2. Bid received
  3. Completed

Defining reports and data required

We are interested in the number of auctions in each state as well as the total value of auctions in progress. This leads us to the following reporting requirements:

  • Display current number of auctions in each state
  • Display value of all auctions in each state
  • Allow filtering of reports by bidder and seller
  • Allow filtering of reports by auction end date

These reports will require the following data:

  • Auction identifier, so that we can correlate status changes back to a particular auction
  • Auction state, so that we can track the number of auctions in each state
  • Current highest bid, so that we can calculate the worth of all auctions
  • Current highest bidder, so that we can filter reports by a particular bidder
  • Seller, so that we can filter reports by a particular seller
  • Auction end date, so that we can filter auctions by completion date

Having completed our analysis, we can proceed to define our data objects, capture events, and build our reports.

We will follow a middle-out approach to building our dashboard. We will take the following steps:

  1. Define our data within the Active Data Cache
  2. Create sensors in BPEL and map to data in the ADC
  3. Create suitable reports
  4. Run the reports

Defining data objects

Data in BAM is stored in data objects. Individual data objects contain the information that is reported in BAM dashboards and may be updated by multiple events. Generally, BAM will report against aggregations of objects, but there is also the ability for reports to drill down into individual data objects.

Before defining our data objects, let's group them into an auction folder so that they are easy to find. To do this, we use the BAM Architect application and select Data Objects.

We select Create subfolder to create the folder and give it a name (Auction).

We then click on Create folder to actually create the folder, and we get a confirmation message to tell us that it has been created. Notice that once created, the folder also appears in the Folders window on the left-hand side of the screen.

Now that we have our folder, we can create a data object. Again, we select Data Objects from the drop-down list. To define the data objects that are to be stored in our Active Data Cache, we open the Auction folder, if it is not already open, and select Create Data Object. If we don't open the Auction folder first, we can pick the folder later when filling in the details of the data object.

We need to give our object a name that is unique within the folder, and we can optionally provide tip text that explains what the object does when the mouse hovers over it in object listings. Having named our object, we can now create the data fields by selecting Add a field. When adding fields, we need to provide a name and type, as well as indicating whether they must contain data; the default, Nullable, does not require a field to be populated. We may also optionally indicate whether a field should be publicly available for display and whether it should have any tool tip text.
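
As an illustration, the auction data object used in this example could be laid out along the following lines; the field names and types are suggestions derived from the reporting requirements above, not a prescribed layout, and AuctionID is the field we will later use as the key for Update and Upsert operations.

    Field           Type       Nullable
    AuctionID       String     No
    State           String     No
    HighestBid      Decimal    Yes
    HighestBidder   String     Yes
    Seller          String     Yes
    Reserve         Decimal    Yes
    Expires         DateTime   Yes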

Once all the data fields have been defined, we can click Create Data Object to actually create the object as we have defined it. We are then presented with a confirmation screen that the object has been created.

Tip

Grouping data into hierarchies

When creating a data object, it is possible to specify "Dimensions" for the object. A dimension is based on one or more fields within the object. A given field can only participate in one dimension. This gives the ability to automatically group the object by the fields in the given dimension. If multiple fields are selected for a single dimension, then they can be layered into a hierarchy; for example, to allow analysis by country, region, and city. In this case, all three elements would be selected into a single dimension, perhaps called geography. Within geography, a hierarchy could be set up with country at the top, region next, and finally city at the bottom, allowing drill down to occur in views. Just as a data object can have multiple dimensions, a dimension can also have multiple hierarchies.

A digression on populating data object fields

In the previous discussion, we mentioned the Nullable attribute that can be attached to fields. This is very important because a single event will not usually populate all, or even most, of the fields in a data object; failing to initialize a field will generate an error unless it is Nullable. Do not confuse data objects with the low-level events that are used to populate them. Data objects in BAM do not have a one-to-one correspondence with the low-level events that populate them. In our auction example, there will be just one auction object for every auction. However, there will be at least two, and usually more, messages for every auction: one message for the auction starting, another for the auction completing, and additional messages for each bid received. These messages will all populate, or in some cases overwrite, different parts of the auction data object. The following table shows how the three messages populate different parts of the data object.

Message            Auction ID   State      Highest Bid   Reserve    Expires    Seller     Highest Bidder
Auction Started    Inserted     Inserted   Inserted      Inserted   Inserted   Inserted   -
Bid Received       -            Updated    Updated       -          -          -          Updated
Auction Finished   -            Updated    -             -          -          -          -

Instrumenting BPEL and SCA

Having defined the data we wish to capture in BAM, we now need to make our auction process generate appropriate events. We can instrument BPEL and SCA by making explicit calls to a BAM adapter as we would to any other adapter. Within BPEL, we may also take advantage of the sensor framework to raise BAM events from within an activity.

Tip

Sensors versus explicit calls

Explicit calls are available within both SCA and BPEL. Within BPEL, they make it more obvious where the BAM events are being generated. BPEL sensors, however, provide the ability to generate events at a finer grained level than explicit calls. For example, a BAM sensor in a BPEL activity could be set to fire not just on activation and completion (which could be captured by an explicit call just before and after the event), but also on events that are harder to catch with an explicit invoke, such as faults and compensation. Finally, sensors can fire on retry events that are impossible to capture in any other way. BAM sensors do not use partner links or references, but refer to the adapter JNDI location directly.

Sensors are not part of the normal BPEL executable flow. They can be thought of as event generators. They can be attached to almost any kind of activity in BPEL, including partner link operations (invoke, receive, reply) and assigns. They can also be attached to variables and will fire whenever the variable is modified.

Invoking the BAM adapter as a regular service

When using the BAM adapter, we first need to configure an adapter instance.

Creating a BAM adapter

Let us start by creating a new BAM adapter. We begin by creating a new BAM Connection from the Connection section of the New Gallery (File | New, and then select Connection under General).

We provide a name for the connection and identify if we wish it to be local to this application or available to all applications. We then define the connection characteristics of hostnames and port numbers for the Active Data Cache (BAM Server Host) and web applications (Web Server). Generally, these will be the same hostname and port number. We also provide a username and password for the BAM server. Finally, we can test our connection to ensure that it works.

Having created our connection, we can now create a BAM partner link for use in BPEL or SCA. We do this in the same way as we create any other adapter-based link. We can drag a BAM Adapter from the Service Adapters section of the Component Palette onto either the External References section of an SCA or the Partner Links section of a BPEL process. This will launch the Adapter Configuration Wizard. After providing a name for our service, we are asked to select a BAM Data Object and determine the Operation to perform on the object. We must also provide an Operation Name and determine the batching behavior.

The Data Object may be selected directly from the BAM server by using the Browse… button to pop up the BAM Data Object Chooser dialog box, which allows selection of the correct data object.

Depending on the operation, we may need to provide a key to locate the correct data object instance. Update, Upsert, and Delete all require a key; only Insert does not.

Tip

Upsert: the universal update mechanism

When using upsert, if the key already exists then that object is updated. If the object does not exist, then it is inserted. This enables upsert to cover both insert and update operations and is generally the most useful operation to perform on BAM objects, as it requires only one BAM adapter instance to provide two different operations.

Having identified the update characteristics of our adapter, we now must map it onto a resource in the underlying application server by providing the JNDI location of the BAM connection. Once this is completed, we can complete the wizard and finish creating our BAM adapter.

Invoking the BAM adapter

Invoking the BAM adapter is the same as invoking any other adapter from BPEL or the Mediator. The BAM adapter provides an interface that allows a collection of data objects to be submitted at the same time; each field in the data object is represented by an XML element in the interface to the adapter. XSLT or copy operations may be used to populate the fields of the input variable.
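
As a sketch, assume the wizard created a partner link named AuctionStatusBAM with an operation named Upsert and an input variable called BAMStatus_InputVariable; all of these names, and the payload element names below, are illustrative rather than generated values. The call from a BPEL 2.0 process might then look like this:

    <assign name="PopulateAuctionStatus">
      <copy>
        <from>$auctionStart/auctionId</from>
        <to>$BAMStatus_InputVariable.payload/AuctionID</to>
      </copy>
      <copy>
        <from>'Started'</from>
        <to>$BAMStatus_InputVariable.payload/State</to>
      </copy>
    </assign>
    <invoke name="PublishAuctionStatus"
            partnerLink="AuctionStatusBAM"
            operation="Upsert"
            inputVariable="BAMStatus_InputVariable"/>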

Invoking the BAM adapter through BPEL sensors

In this section, we will examine how to use BPEL sensors to invoke the BAM adapter.

Within JDeveloper, there are several modes in which we can view the BPEL process. On the right-hand side of the title bar for the BPEL editor, there is a drop-down list that allows us to select the viewing and editing mode.

The drop-down list shows us the three modes available:

  • BPEL: Lets us edit and view the BPEL activities
  • Monitor: Lets us edit and view sensor annotations to the BPEL process
  • BPA: Only used with the Oracle BPA Suite

After choosing Monitor, we can right-click on a BPEL activity to start creating the sensor. This brings up a pop-up menu from which we can select the Create | Sensor item. The Create menu offers the following monitoring items:

  • Counter: Creates a count of the number of times an activity has been reached
  • Business Indicator: Evaluates an XPath expression when an activity has been reached
  • Interval: Calculates the elapsed time between two activities
  • Sensor: Creates a BAM sensor

When creating a new sensor, we need to provide it with a name and indicate when it should fire. The options are as follows:

  • Activation: When the activity is started
  • Completion: When the activity is completed
  • Fault: When the activity raises a fault
  • Compensation: When compensation is invoked on a surrounding scope
  • Retry: When the activity is retried, such as retrying an invoke
  • All: All of the above

We must also provide a variable that contains the data we want to be part of the sensor-generated event. This variable must be an element variable, not a simple type or a message type.
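
Behind the dialog, JDeveloper stores the sensor definitions in a sensor file alongside the BPEL process. A rough sketch of what an activity sensor firing on completion looks like is shown below; the exact attributes are generated for us, and the names used here (AuctionStateSensor, the AssignAuctionState activity, and the auctionStatus variable) are illustrative.

    <sensors xmlns="http://xmlns.oracle.com/bpel/sensor">
      <sensor sensorName="AuctionStateSensor" kind="activity"
              target="AssignAuctionState">
        <activityConfig evalTime="completion">
          <!-- The element variable whose contents form the sensor event payload -->
          <variable target="$auctionStatus"/>
        </activityConfig>
      </sensor>
    </sensors>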

Sensors can have a number of sensor actions associated with them. Sensor actions can be thought of as the targets for the sensor event. One option is to send the events into the BPEL repository, which is useful for testing purposes. Another option is to send them to BAM. Other options revolve around JMS Queues and Topics.

Unfortunately, we cannot add a BAM sensor action from the Create Activity Sensor dialog. It can only be created by using the structure pane for the BPEL process. To do this, we navigate to Sensor Actions in the structure pane, right-click, and select Bam Sensor Action. This brings up the Create Sensor Action dialog.

We provide a name for the sensor action and then select an eligible sensor from the drop-down list. There is a one-to-one relationship between BAM sensor actions and sensors; this is not the case for other types of sensor action. The reason for the one-to-one relationship is that a BAM sensor action transforms the variable associated with the action into the relevant fields of the BAM data object. This is done through an XSLT transform.

Having selected our sensor, we then click the torch next to the Data Object so that we can choose the BAM data object that we will map the sensor variable onto.

Having selected the BAM data object, we need to select the operation to be performed on the data object. The drop-down list gives us four options:

  • Insert
  • Update
  • Delete
  • Upsert

The Insert operation creates a new instance of the BAM data object. This may result in multiple data objects having the same field values.

The Insert operation does not use a key as it always creates a new data object. The remaining three operations require a key because they may operate on an existing data object. The key must uniquely identify a data object and may consist of one or more data object fields.

The Update operation will update an existing data object, overwriting some or all of the fields, as desired. If the object cannot be found from the key, then no data is updated in the ADC.

The Delete operation will remove a data object from the ADC. If the key does not identify an object, then no object will be deleted.

The Upsert operation behaves as an update operation if the key does identify an existing data object in the ADC. If the key does not identify an existing object in the ADC, then it behaves as an Insert operation.

Generally, we use the Insert operation when we know we are creating an object for the first time, and we use the Update operation when we know that the object already exists. We use the Upsert operation when we are unsure if an object exists.

For example, we may use an Insert operation to create an instance of a process status object and then use an update to change the status value of the object as the process progresses. When tracking process state, it is a good idea to use the process instance identifier as a key field in the data object.
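
The following summary, derived from the descriptions above, may help when choosing an operation:

    Operation  Key required  If the key matches an object    If the key matches nothing
    Insert     No            Always creates a new object     Always creates a new object
    Update     Yes           Overwrites the chosen fields    No change is made in the ADC
    Delete     Yes           Removes the object              No object is deleted
    Upsert     Yes           Behaves as an Update            Behaves as an Insert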

Having chosen our operation, an Insert operation for example, we then need to map the fields in the sensor variable defined in BPEL to the BAM data object. We do this by creating a new XSLT transformation by clicking the green cross next to the Map File field.

Within the XSLT transformation editor, we can map the BPEL variable to the BAM data object. In addition to the variable itself, there is a host of other information available to us in the BPEL variable source document. This can be categorized as follows:

  • Header Information
    • This relates to the process instance and the specific sensor that is firing
  • Payload
    • This contains not only the sensor variable contents but also information about the activity and any fault associated with it

Useful data includes the instance ID of the process and also the time the sensor fired as well as the elapsed times for actions. Once we have wired up the variable data, we can save the transform file.
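
A trimmed-down map might look like the following sketch. The source-side paths are indicative of the structure just described (a header plus a payload carrying the sensor variable); in practice, start from the skeleton that the mapper generates, because the exact element names on both sides come from the generated schemas, and the auction element names here are again illustrative.

    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
        xmlns:s="http://xmlns.oracle.com/bpel/sensor"
        xmlns:au="http://www.example.org/auction">
      <xsl:template match="/">
        <!-- Root element corresponding to the AuctionStatus data object -->
        <AuctionStatus>
          <AuctionID>
            <xsl:value-of select="/s:actionData/s:payload/s:variableData/s:data/au:auctionStatus/au:auctionId"/>
          </AuctionID>
          <State>
            <xsl:value-of select="/s:actionData/s:payload/s:variableData/s:data/au:auctionStatus/au:state"/>
          </State>
          <!-- Header information, such as the process instance ID and the time the
               sensor fired, is available under the actionData header in the same way -->
        </AuctionStatus>
      </xsl:template>
    </xsl:stylesheet>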

When we have finished creating the sensor action, we can deploy it to the BPEL server and events will be fired to populate the BAM active data cache.

Testing the events

After creating our BAM sensors, we can test them by executing a process in BPEL and ensuring that the events appear in the Active Data Cache. We can find the actual event data by selecting the object in BAM Architect and then clicking Contents, which lists the actual data object instances.

Creating a simple dashboard

Now that our sensors are in place and working, we can use the BAM Active Studio application to create a report based on the sensor information. To help organize our reports, it is possible to create folders to hold reports in a similar fashion to the way we created folders to hold data objects.

Let us create a report that shows the status of auctions in the system and also shows the value of all auctions currently open. We will start by creating the report itself. The report is just a holder for views, and we create it by selecting the Create A New Report button.

We can select a report that has the right number of panes for the number of views we want. Note that it is possible to change the number of panes on the report, so if we get it wrong, it does not matter. For now, we will choose a simple split-screen report with two panes, one above the other.

We can provide a title for a report by editing the title section directly. Having updated the title, we can then proceed to create the views.

Defining reports and data required

We are interested in the number of auctions in each state as well as the total value of auctions in progress. This leads us to the following reporting requirements:

  • Display current number of auctions in each state
  • Display value of all auctions in each state
  • Allow filtering of reports by bidder and seller
  • Allow filtering of reports by auction end date

These reports will require the following data:

  • Auction identifier, so that we can correlate status changes back to a particular auction
  • Auction state, so that we can track the number of auctions in each state
  • Current highest bid, so that we can calculate the worth of all auctions
  • Current highest bidder, so that we can filter reports by a particular bidder
  • Seller, so that we can filter reports by a particular seller
  • Auction end date, so that we can filter auctions by completion date

Having completed our analysis, we can proceed to define our data objects, capture events, and build our reports.

We will follow a middle-out approach to building our dashboard. We will take the following steps:

  1. Define our data within the Active Data Cache
  2. Create sensors in BPEL and map to data in the ADC
  3. Create suitable reports
  4. Run the reports

Defining data objects

Data in BAM is stored in data objects. Individual data objects contain the information that is reported in BAM dashboards and may be updated by multiple events. Generally, BAM will report against aggregations of objects, but there is also the ability for reports to drill down into individual data objects.

Before defining our data objects, let's group them into an auction folder so that they are easy to find. To do this, we use the BAM Architect application, and select Data Objects, which gives us the following screenshot:

Defining data objects

We select Create subfolder to create the folder and give it a name (Auction).

Defining data objects

We then click on Create folder to actually create the folder, and we get a confirmation message to tell us that it has been created. Notice that once created, the folder also appears in the Folders window on the left-hand side of the screen.

Now that we have our folder, we can create a data object. Again, we select Data Objects from the drop-down list. To define the data objects that are to be stored in our Active Data Cache, we open the Auction folder, if it is not already open, and select Create Data Object. If we don't select the Auction folder, then we pick it later when filling in the details of the data object.

We need to give our object a unique name within the folder and optionally provide it with a tip text that helps explain what the object does when the mouse is moved over it in object listings. Having named our object, we can now create the data fields by selecting Add a field. When adding fields, we need to provide a name and type as well as indicating if they must contain data; the default Nullable does not require a field to be populated. We may also optionally indicate if a field should be publically available for display and whether it should have any tool tip text.

Defining data objects

Once all the data fields have been defined, we can click Create Data Object to actually create the object as we have defined it. We are then presented with a confirmation screen that the object has been created.

Tip

Grouping data into hierarchies

When creating a data object, it is possible to specify "Dimensions" for the object. A dimension is based on one or more fields within the object. A given field can only participate in one dimension. This gives the ability to automatically group the object by the fields in the given dimension. If multiple fields are selected for a single dimension, then they can be layered into a hierarchy; for example, to allow analysis by country, region, and city. In this case, all three elements would be selected into a single dimension, perhaps called geography. Within geography, a hierarchy could be set up with country at the top, region next, and finally city at the bottom, allowing drill down to occur in views. Just as a data object can have multiple dimensions, a dimension can also have multiple hierarchies.

A digression on populating data object fields

In the previous discussion, we mentioned the Nullable attribute that can be attached to fields. This is very important as we do not expect to populate all or even most of the fields in a data object one at a time. Failing to initialize a field will generate an error unless it is Nullable. Do not confuse data objects with the low-level events that are used to populate them. Data objects in BAM do not have a one-to-one correspondence with the low-level events that populate them. In our auction example, there will be just one auction object for every auction. However, there will be at least two, and usually more, messages for every auction; one message for the auction starting, another for the auction completing, and additional messages for each bid received. These messages will all populate, or in some cases overwrite, different parts of the auction data object. The table shows how the three messages populate different parts of the data object.

Message

Auction ID

State

Highest Bid

Reserve

Expires

Seller

Highest Bidder

Auction Started

Inserted

Inserted

Inserted

Inserted

Inserted

Inserted

 

Bid Received

 

Updated

Updated

   

Updated

Auction Finished

 

Updated

     

Instrumenting BPEL and SCA

Having defined the data we wish to capture in BAM, we now need to make our auction process generate appropriate events. We can instrument BPEL and SCA by making explicit calls to a BAM adapter as we would to any other adapter. Within BPEL, we may also take advantage of the sensor framework to raise BAM events from within an activity.

Tip

Sensors versus explicit calls

Explicit calls are available within both SCA and BPEL. Within BPEL, they make it more obvious where the BAM events are being generated. BPEL sensors, however, provide the ability to generate events at a finer grained level than explicit calls. For example, a BAM sensor in a BPEL activity could be set to fire not just on activation and completion (which could be captured by an explicit call just before and after the event), but also on events that are harder to catch with an explicit invoke, such as faults and compensation. Finally, sensors can fire on retry events that are impossible to capture in any other way. BAM sensors do not use partner links or references, but refer to the adapter JNDI location directly.

Sensors are not part of the normal BPEL executable flow. They can be thought of as event generators. They are attached to almost any kind of activity in BPEL, including partner link operations (invoke, receive, reply) and assigns. They can also be attached to variables and will fire whenever the variable is modified.

Invoking the BAM adapter as a regular service

When using the BAM adapter, we first need to configure an adapter instance.

Creating a BAM adapter

Let us start by creating a new BAM adapter. We begin by creating a new BAM Connection from the Connection section of the New Gallery (File | New, and then select Connection under General).

Creating a BAM adapter

We provide a name for the connection and identify if we wish it to be local to this application or available to all applications. We then define the connection characteristics of hostnames and port numbers for the Active Data Cache (BAM Server Host) and web applications (Web Server). Generally, these will be the same hostname and port number. We also provide a username and password for the BAM server. Finally, we can test our connection to ensure that it works.

Creating a BAM adapter

Having created our connection, we can now create a BAM partner link for use in BPEL or SCA. We do this in the same way as we create any other adapter-based link. We can drag a BAM Adapter from the Service Adapters section of the Component Palette onto either the External References section of an SCA or the Partner Links section of a BPEL process. This will launch the Adapter Configuration Wizard. After providing a name for our service, we are asked to select a BAM Data Object and determine the Operation to perform on the object. We must also provide an Operation Name and determine the batching behavior.

Creating a BAM adapter

The Data Object may be selected directly from the BAM server by using the Browse… button to pop up the BAM Data Object Chooser dialog box, which allows selection of the correct data object.

Creating a BAM adapter

Depending on the operation, we may need to provide a key to locate the correct data object instance. Update, Upsert, and Delete all require a key, only Insert does not.

Tip

Upsert the universal update mechanism

When using upsert, if the key already exists then that object is updated. If the object does not exist, then it is inserted. This enables upsert to cover both insert and update operations and is generally the most useful operation to perform on BAM objects, as it requires only one BAM adapter instance to provide two different operations.

Having identified the update characteristics of our adapter, we now must map it onto a resource in the underlying application server by providing the JNDI location of the BAM connection. Once this is completed, we can complete the wizard and finish creating our BAM adapter.

Invoking the BAM adapter

Invoking the BAM adapter is the same as invoking any other adapter from BPEL or the Mediator. The BAM adapter provides an interface to allow a collection of data objects to be submitted at the same time, each field in the data object is represented by an XML element in the interface to the adapter. XSLT or copy operations may be used to populate the fields of the input variable.

Invoking the BAM adapter

Invoking the BAM adapter through BPEL sensors

In this section, we will examine how to use BPEL sensors to invoke the BAM adapter.

Within JDeveloper, there are several modes in which we can view the BPEL process. On the right-hand side of the title bar for the BPEL editor, there is a drop-down list that allows us to select the viewing and editing mode.

The drop-down list shows us the three modes available:

  • BPEL: Lets us edit and view the BPEL activities
  • Monitor: Lets us edit and view sensor annotations to the BPEL process
  • BPA: Is only used with the Oracle BPA suite.
    Invoking the BAM adapter through BPEL sensors

After choosing Monitor, we can right-click on a BPEL activity to start creating the sensor. This brings up a pop-up menu from which we can select the Create | Sensor item. Note that there are also options to create other monitoring items.

  • Counter: Creates a count of the number of times an activity has been reached
  • Business Indicator: Evaluates an XPath expression when an activity has been reached
  • Interval: Calculates the elapsed time between two activities
  • Sensor: Creates a BAM sensor
    Invoking the BAM adapter through BPEL sensors

When creating a new sensor we need to provide it with a name and indicate when it should fire. The options are as follows:

  • Activation: When the activity is started
  • Completion: When the activity is completed
  • Fault: When the activity raises a fault
  • Compensation: When compensation is invoked on a surrounding scope
  • Retry: When the activity is retried, such as retrying an invoke
  • All: All of the above

We must also provide a variable that contains the data we want to be part of the sensor-generated event. This variable must be an element variable, not a simple type or a message type.

Invoking the BAM adapter through BPEL sensors

Sensors can have a number of sensor actions associated with them. Sensor actions can be thought of as the targets for the sensor event. One option is to send the events into the BPEL repository, which is useful for testing purposes. Another option is to send them to BAM. Other options revolve around JMS Queues and Topics.

Unfortunately, we cannot add a BAM sensor from the Create Activity Sensor dialog. They can only be created by using the structure pane for the BPEL process. To do this, we navigate to Sensor Actions in the structure pane, right-click, and select Bam Sensor Action. This brings up the Create Sensor Action dialog.

Invoking the BAM adapter through BPEL sensors

We provide a name for the sensor action and then select an eligible sensor from the drop-down list. There is a one-to-one relationship between BAM sensor sections and sensors. This is not the case for other types of sensors. The reason for the one-to-one relationship is that BAM sensor actions transform the variable associated with the action into the relevant fields for the BAM data object. This is done through an XSLT transform.

Having selected our sensor, we then click the torch next to the Data Object so that we can choose the BAM data object that we will map the sensor variable onto.

Having selected the BAM data object, we need to select the operation to be performed on the data object. The drop-down list gives us four options:

  • Insert
  • Update
  • Delete
  • Upsert

The Insert operation creates a new instance of the BAM data object. This may result in multiple data objects having the same field values.

The Insert operation does not use a key as it always creates a new data object. The remaining three operations require a key because they may operate on an existing data object. The key must uniquely identify a data object and may consist of one or more data object fields.

The Update operation will update an existing data object, overwriting some or all of the fields, as desired. If the object cannot be found from the key, then no data is updated in the ADC.

The Delete operation will remove a data object from the ADC. If the key does not identify an object, then no object will be deleted.

The Upsert operation behaves as an update operation if the key does identify an existing data object in the ADC. If the key does not identify an existing object in the ADC, then it behaves as an Insert operation.

Generally, we use the Insert operation when we know we are creating an object for the first time, and we use the Update operation when we know that the object already exists. We use the Upsert operation when we are unsure if an object exists.

For example, we may use an Insert operation to create an instance of a process status object and then use an update to change the status value of the object as the process progresses. When tracking process state, it is a good idea to use the process instance identifier as a key field in the data object.

Having chosen our operation, an Insert operation for example, we then need to map the fields in the sensor variable defined in BPEL to the BAM data object. We do this by creating a new XSLT transformation by clicking the green cross next to the Map File field.

Within the XSLT transformation editor, we can map the BPEL variable to the BAM data object. In addition to the variable itself, there is a host of other information available to us in the BPEL variable source document. This can be categorized as follows:

  • Header Information
    • This relates to the process instance and the specific sensor that is firing
  • Payload
    • This contains not only the sensor variable contents but also information about the activity and any fault associated with it

Useful data includes the instance ID of the process and also the time the sensor fired as well as the elapsed times for actions. Once we have wired up the variable data, we can save the transform file.

Invoking the BAM adapter through BPEL sensors

When we have finished creating the sensor action, we can deploy it to the BPEL server and events will be fired to populate the BAM active data cache.

Testing the events

After creating our BAM sensors, we can test them by executing a process in BPEL and ensuring that the events appear in the Active Data Cache. We can find the actual event data by selecting the object in BAM architect and then clicking Contents, which will then list the actual data object instances.

Creating a simple dashboard

Now that our sensors are in place and working, we can use the BAM Active Studio application to create a report based on the sensor information. To help organize our reports, it is possible to create folders to hold reports in a similar fashion to the way we created folders to hold data objects.

Let us create a report that shows the status of auctions in the system and also shows the value of all auctions currently open. We will start by creating the report itself. The report is just a holder for views, and we create it by selecting the Create A New Report button.

Creating a simple dashboard

We can select a report that has the right number of panes for the number of views we want. Note that it is possible to change the number of panes on the report, so if we get it wrong, it does not matter. For now, we will choose a simple split-screen report with two panes, one above the other.

We can provide a title for a report by editing the title section directly. Having updated the title, we can then proceed to create the views.

Defining data objects

Data in BAM is stored in data objects. Individual data objects contain the information that is reported in BAM dashboards and may be updated by multiple events. Generally, BAM will report against aggregations of objects, but there is also the ability for reports to drill down into individual data objects.

Before defining our data objects, let's group them into an auction folder so that they are easy to find. To do this, we use the BAM Architect application, and select Data Objects, which gives us the following screenshot:

Defining data objects

We select Create subfolder to create the folder and give it a name (Auction).

Defining data objects

We then click on Create folder to actually create the folder, and we get a confirmation message to tell us that it has been created. Notice that once created, the folder also appears in the Folders window on the left-hand side of the screen.

Now that we have our folder, we can create a data object. Again, we select Data Objects from the drop-down list. To define the data objects that are to be stored in our Active Data Cache, we open the Auction folder, if it is not already open, and select Create Data Object. If we don't select the Auction folder, then we pick it later when filling in the details of the data object.

We need to give our object a unique name within the folder and optionally provide it with a tip text that helps explain what the object does when the mouse is moved over it in object listings. Having named our object, we can now create the data fields by selecting Add a field. When adding fields, we need to provide a name and type as well as indicating if they must contain data; the default Nullable does not require a field to be populated. We may also optionally indicate if a field should be publically available for display and whether it should have any tool tip text.

Defining data objects

Once all the data fields have been defined, we can click Create Data Object to actually create the object as we have defined it. We are then presented with a confirmation screen that the object has been created.

Tip

Grouping data into hierarchies

When creating a data object, it is possible to specify "Dimensions" for the object. A dimension is based on one or more fields within the object. A given field can only participate in one dimension. This gives the ability to automatically group the object by the fields in the given dimension. If multiple fields are selected for a single dimension, then they can be layered into a hierarchy; for example, to allow analysis by country, region, and city. In this case, all three elements would be selected into a single dimension, perhaps called geography. Within geography, a hierarchy could be set up with country at the top, region next, and finally city at the bottom, allowing drill down to occur in views. Just as a data object can have multiple dimensions, a dimension can also have multiple hierarchies.

A digression on populating data object fields

In the previous discussion, we mentioned the Nullable attribute that can be attached to fields. This is very important as we do not expect to populate all or even most of the fields in a data object one at a time. Failing to initialize a field will generate an error unless it is Nullable. Do not confuse data objects with the low-level events that are used to populate them. Data objects in BAM do not have a one-to-one correspondence with the low-level events that populate them. In our auction example, there will be just one auction object for every auction. However, there will be at least two, and usually more, messages for every auction; one message for the auction starting, another for the auction completing, and additional messages for each bid received. These messages will all populate, or in some cases overwrite, different parts of the auction data object. The table shows how the three messages populate different parts of the data object.

Message

Auction ID

State

Highest Bid

Reserve

Expires

Seller

Highest Bidder

Auction Started

Inserted

Inserted

Inserted

Inserted

Inserted

Inserted

 

Bid Received

 

Updated

Updated

   

Updated

Auction Finished

 

Updated

     

Instrumenting BPEL and SCA

Having defined the data we wish to capture in BAM, we now need to make our auction process generate appropriate events. We can instrument BPEL and SCA by making explicit calls to a BAM adapter as we would to any other adapter. Within BPEL, we may also take advantage of the sensor framework to raise BAM events from within an activity.

Tip

Sensors versus explicit calls

Explicit calls are available within both SCA and BPEL. Within BPEL, they make it more obvious where the BAM events are being generated. BPEL sensors, however, provide the ability to generate events at a finer grained level than explicit calls. For example, a BAM sensor in a BPEL activity could be set to fire not just on activation and completion (which could be captured by an explicit call just before and after the event), but also on events that are harder to catch with an explicit invoke, such as faults and compensation. Finally, sensors can fire on retry events that are impossible to capture in any other way. BAM sensors do not use partner links or references, but refer to the adapter JNDI location directly.

Sensors are not part of the normal BPEL executable flow. They can be thought of as event generators. They are attached to almost any kind of activity in BPEL, including partner link operations (invoke, receive, reply) and assigns. They can also be attached to variables and will fire whenever the variable is modified.

Invoking the BAM adapter as a regular service

When using the BAM adapter, we first need to configure an adapter instance.

Creating a BAM adapter

Let us start by creating a new BAM adapter. We begin by creating a new BAM Connection from the Connection section of the New Gallery (File | New, and then select Connection under General).

Creating a BAM adapter

We provide a name for the connection and identify if we wish it to be local to this application or available to all applications. We then define the connection characteristics of hostnames and port numbers for the Active Data Cache (BAM Server Host) and web applications (Web Server). Generally, these will be the same hostname and port number. We also provide a username and password for the BAM server. Finally, we can test our connection to ensure that it works.

Creating a BAM adapter

Having created our connection, we can now create a BAM partner link for use in BPEL or SCA. We do this in the same way as we create any other adapter-based link. We can drag a BAM Adapter from the Service Adapters section of the Component Palette onto either the External References section of an SCA or the Partner Links section of a BPEL process. This will launch the Adapter Configuration Wizard. After providing a name for our service, we are asked to select a BAM Data Object and determine the Operation to perform on the object. We must also provide an Operation Name and determine the batching behavior.

Creating a BAM adapter

The Data Object may be selected directly from the BAM server by using the Browse… button to pop up the BAM Data Object Chooser dialog box, which allows selection of the correct data object.

Creating a BAM adapter

Depending on the operation, we may need to provide a key to locate the correct data object instance. Update, Upsert, and Delete all require a key, only Insert does not.

Tip

Upsert the universal update mechanism

When using upsert, if the key already exists then that object is updated. If the object does not exist, then it is inserted. This enables upsert to cover both insert and update operations and is generally the most useful operation to perform on BAM objects, as it requires only one BAM adapter instance to provide two different operations.

Having identified the update characteristics of our adapter, we now must map it onto a resource in the underlying application server by providing the JNDI location of the BAM connection. Once this is completed, we can complete the wizard and finish creating our BAM adapter.

Invoking the BAM adapter

Invoking the BAM adapter is the same as invoking any other adapter from BPEL or the Mediator. The BAM adapter provides an interface to allow a collection of data objects to be submitted at the same time, each field in the data object is represented by an XML element in the interface to the adapter. XSLT or copy operations may be used to populate the fields of the input variable.

Invoking the BAM adapter

Invoking the BAM adapter through BPEL sensors

In this section, we will examine how to use BPEL sensors to invoke the BAM adapter.

Within JDeveloper, there are several modes in which we can view the BPEL process. On the right-hand side of the title bar for the BPEL editor, there is a drop-down list that allows us to select the viewing and editing mode.

The drop-down list shows us the three modes available:

  • BPEL: Lets us edit and view the BPEL activities
  • Monitor: Lets us edit and view sensor annotations to the BPEL process
  • BPA: Is only used with the Oracle BPA suite.
    Invoking the BAM adapter through BPEL sensors

After choosing Monitor, we can right-click on a BPEL activity to start creating the sensor. This brings up a pop-up menu from which we can select the Create | Sensor item. Note that there are also options to create other monitoring items.

  • Counter: Creates a count of the number of times an activity has been reached
  • Business Indicator: Evaluates an XPath expression when an activity has been reached
  • Interval: Calculates the elapsed time between two activities
  • Sensor: Creates a BAM sensor
    Invoking the BAM adapter through BPEL sensors

When creating a new sensor we need to provide it with a name and indicate when it should fire. The options are as follows:

  • Activation: When the activity is started
  • Completion: When the activity is completed
  • Fault: When the activity raises a fault
  • Compensation: When compensation is invoked on a surrounding scope
  • Retry: When the activity is retried, such as retrying an invoke
  • All: All of the above

We must also provide a variable that contains the data we want to be part of the sensor-generated event. This variable must be an element variable, not a simple type or a message type.

Invoking the BAM adapter through BPEL sensors

Sensors can have a number of sensor actions associated with them. Sensor actions can be thought of as the targets for the sensor event. One option is to send the events into the BPEL repository, which is useful for testing purposes. Another option is to send them to BAM. Other options revolve around JMS Queues and Topics.

Unfortunately, we cannot add a BAM sensor from the Create Activity Sensor dialog. They can only be created by using the structure pane for the BPEL process. To do this, we navigate to Sensor Actions in the structure pane, right-click, and select Bam Sensor Action. This brings up the Create Sensor Action dialog.

Invoking the BAM adapter through BPEL sensors

We provide a name for the sensor action and then select an eligible sensor from the drop-down list. There is a one-to-one relationship between BAM sensor sections and sensors. This is not the case for other types of sensors. The reason for the one-to-one relationship is that BAM sensor actions transform the variable associated with the action into the relevant fields for the BAM data object. This is done through an XSLT transform.

Having selected our sensor, we then click the torch next to the Data Object so that we can choose the BAM data object that we will map the sensor variable onto.

Having selected the BAM data object, we need to select the operation to be performed on the data object. The drop-down list gives us four options:

  • Insert
  • Update
  • Delete
  • Upsert

The Insert operation creates a new instance of the BAM data object. This may result in multiple data objects having the same field values.

The Insert operation does not use a key as it always creates a new data object. The remaining three operations require a key because they may operate on an existing data object. The key must uniquely identify a data object and may consist of one or more data object fields.

The Update operation will update an existing data object, overwriting some or all of the fields, as desired. If the object cannot be found from the key, then no data is updated in the ADC.

The Delete operation will remove a data object from the ADC. If the key does not identify an object, then no object will be deleted.

The Upsert operation behaves as an update operation if the key does identify an existing data object in the ADC. If the key does not identify an existing object in the ADC, then it behaves as an Insert operation.

Generally, we use the Insert operation when we know we are creating an object for the first time, and we use the Update operation when we know that the object already exists. We use the Upsert operation when we are unsure if an object exists.

For example, we may use an Insert operation to create an instance of a process status object and then use an update to change the status value of the object as the process progresses. When tracking process state, it is a good idea to use the process instance identifier as a key field in the data object.

Having chosen our operation, an Insert operation for example, we then need to map the fields in the sensor variable defined in BPEL to the BAM data object. We do this by creating a new XSLT transformation by clicking the green cross next to the Map File field.

Within the XSLT transformation editor, we can map the BPEL variable to the BAM data object. In addition to the variable itself, there is a host of other information available to us in the BPEL variable source document. This can be categorized as follows:

  • Header Information
    • This relates to the process instance and the specific sensor that is firing
  • Payload
    • This contains not only the sensor variable contents but also information about the activity and any fault associated with it

Useful data includes the instance ID of the process and also the time the sensor fired as well as the elapsed times for actions. Once we have wired up the variable data, we can save the transform file.

Invoking the BAM adapter through BPEL sensors

When we have finished creating the sensor action, we can deploy it to the BPEL server and events will be fired to populate the BAM active data cache.

Testing the events

After creating our BAM sensors, we can test them by executing a process in BPEL and ensuring that the events appear in the Active Data Cache. We can find the actual event data by selecting the object in BAM architect and then clicking Contents, which will then list the actual data object instances.

Creating a simple dashboard

Now that our sensors are in place and working, we can use the BAM Active Studio application to create a report based on the sensor information. To help organize our reports, it is possible to create folders to hold reports in a similar fashion to the way we created folders to hold data objects.

Let us create a report that shows the status of auctions in the system and also shows the value of all auctions currently open. We will start by creating the report itself. The report is just a holder for views, and we create it by selecting the Create A New Report button.

Creating a simple dashboard

We can select a report that has the right number of panes for the number of views we want. Note that it is possible to change the number of panes on the report, so if we get it wrong, it does not matter. For now, we will choose a simple split-screen report with two panes, one above the other.

We can provide a title for a report by editing the title section directly. Having updated the title, we can then proceed to create the views.

A digression on populating data object fields

In the previous discussion, we mentioned the Nullable attribute that can be attached to fields. This is very important as we do not expect to populate all or even most of the fields in a data object one at a time. Failing to initialize a field will generate an error unless it is Nullable. Do not confuse data objects with the low-level events that are used to populate them. Data objects in BAM do not have a one-to-one correspondence with the low-level events that populate them. In our auction example, there will be just one auction object for every auction. However, there will be at least two, and usually more, messages for every auction; one message for the auction starting, another for the auction completing, and additional messages for each bid received. These messages will all populate, or in some cases overwrite, different parts of the auction data object. The table shows how the three messages populate different parts of the data object.

Message

Auction ID

State

Highest Bid

Reserve

Expires

Seller

Highest Bidder

Auction Started

Inserted

Inserted

Inserted

Inserted

Inserted

Inserted

 

Bid Received

 

Updated

Updated

   

Updated

Auction Finished

 

Updated

     
Instrumenting BPEL and SCA

Having defined the data we wish to capture in BAM, we now need to make our auction process generate appropriate events. We can instrument BPEL and SCA by making explicit calls to a BAM adapter as we would to any other adapter. Within BPEL, we may also take advantage of the sensor framework to raise BAM events from within an activity.

Tip

Sensors versus explicit calls

Explicit calls are available within both SCA and BPEL. Within BPEL, they make it more obvious where the BAM events are being generated. BPEL sensors, however, provide the ability to generate events at a finer grained level than explicit calls. For example, a BAM sensor in a BPEL activity could be set to fire not just on activation and completion (which could be captured by an explicit call just before and after the event), but also on events that are harder to catch with an explicit invoke, such as faults and compensation. Finally, sensors can fire on retry events that are impossible to capture in any other way. BAM sensors do not use partner links or references, but refer to the adapter JNDI location directly.

Sensors are not part of the normal BPEL executable flow. They can be thought of as event generators. They are attached to almost any kind of activity in BPEL, including partner link operations (invoke, receive, reply) and assigns. They can also be attached to variables and will fire whenever the variable is modified.

Invoking the BAM adapter as a regular service

When using the BAM adapter, we first need to configure an adapter instance.

Creating a BAM adapter

Let us start by creating a new BAM adapter. We begin by creating a new BAM Connection from the Connection section of the New Gallery (File | New, and then select Connection under General).

Creating a BAM adapter

We provide a name for the connection and identify if we wish it to be local to this application or available to all applications. We then define the connection characteristics of hostnames and port numbers for the Active Data Cache (BAM Server Host) and web applications (Web Server). Generally, these will be the same hostname and port number. We also provide a username and password for the BAM server. Finally, we can test our connection to ensure that it works.

Creating a BAM adapter

Having created our connection, we can now create a BAM partner link for use in BPEL or SCA. We do this in the same way as we create any other adapter-based link. We can drag a BAM Adapter from the Service Adapters section of the Component Palette onto either the External References section of an SCA or the Partner Links section of a BPEL process. This will launch the Adapter Configuration Wizard. After providing a name for our service, we are asked to select a BAM Data Object and determine the Operation to perform on the object. We must also provide an Operation Name and determine the batching behavior.

Creating a BAM adapter

The Data Object may be selected directly from the BAM server by using the Browse… button to pop up the BAM Data Object Chooser dialog box, which allows selection of the correct data object.

Creating a BAM adapter

Depending on the operation, we may need to provide a key to locate the correct data object instance. Update, Upsert, and Delete all require a key, only Insert does not.

Tip

Upsert the universal update mechanism

When using upsert, if the key already exists then that object is updated. If the object does not exist, then it is inserted. This enables upsert to cover both insert and update operations and is generally the most useful operation to perform on BAM objects, as it requires only one BAM adapter instance to provide two different operations.

Having identified the update characteristics of our adapter, we now must map it onto a resource in the underlying application server by providing the JNDI location of the BAM connection. Once this is completed, we can complete the wizard and finish creating our BAM adapter.

Invoking the BAM adapter

Invoking the BAM adapter is the same as invoking any other adapter from BPEL or the Mediator. The BAM adapter provides an interface to allow a collection of data objects to be submitted at the same time, each field in the data object is represented by an XML element in the interface to the adapter. XSLT or copy operations may be used to populate the fields of the input variable.

Invoking the BAM adapter

Invoking the BAM adapter through BPEL sensors

In this section, we will examine how to use BPEL sensors to invoke the BAM adapter.

Within JDeveloper, there are several modes in which we can view the BPEL process. On the right-hand side of the title bar for the BPEL editor, there is a drop-down list that allows us to select the viewing and editing mode.

The drop-down list shows us the three modes available:

  • BPEL: Lets us edit and view the BPEL activities
  • Monitor: Lets us edit and view sensor annotations to the BPEL process
  • BPA: Is only used with the Oracle BPA suite.
    Invoking the BAM adapter through BPEL sensors

After choosing Monitor, we can right-click on a BPEL activity to start creating the sensor. This brings up a pop-up menu from which we can select the Create | Sensor item. Note that there are also options to create other monitoring items.

  • Counter: Creates a count of the number of times an activity has been reached
  • Business Indicator: Evaluates an XPath expression when an activity has been reached
  • Interval: Calculates the elapsed time between two activities
  • Sensor: Creates a BAM sensor
    Invoking the BAM adapter through BPEL sensors

When creating a new sensor, we need to provide it with a name and indicate when it should fire. The options are as follows:

  • Activation: When the activity is started
  • Completion: When the activity is completed
  • Fault: When the activity raises a fault
  • Compensation: When compensation is invoked on a surrounding scope
  • Retry: When the activity is retried, such as retrying an invoke
  • All: All of the above

We must also provide a variable that contains the data we want to be part of the sensor-generated event. This variable must be an element variable, not a simple type or a message type.
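
To make the distinction concrete (the variable names and types below are hypothetical), only the first of these BPEL variable declarations could be used by a sensor; the message-type and simple-type declarations could not:

    <!-- Element variable: can carry sensor data -->
    <variable name="auctionStatus" element="ns:AuctionStatus"/>

    <!-- Message-type and simple-type variables: not usable by a sensor -->
    <variable name="auctionRequestMsg" messageType="client:AuctionRequestMessage"/>
    <variable name="auctionCount" type="xsd:int"/>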

Sensors can have a number of sensor actions associated with them. Sensor actions can be thought of as the targets for the sensor event. One option is to send the events into the BPEL repository, which is useful for testing purposes. Another option is to send them to BAM. Other options revolve around JMS Queues and Topics.

Unfortunately, we cannot add a BAM sensor action from the Create Activity Sensor dialog. It can only be created by using the structure pane for the BPEL process. To do this, we navigate to Sensor Actions in the structure pane, right-click, and select BAM Sensor Action. This brings up the Create Sensor Action dialog.

We provide a name for the sensor action and then select an eligible sensor from the drop-down list. There is a one-to-one relationship between BAM sensor actions and sensors. This is not the case for other types of sensors. The reason for the one-to-one relationship is that BAM sensor actions transform the variable associated with the action into the relevant fields for the BAM data object. This is done through an XSLT transform.

Having selected our sensor, we then click the torch icon next to the Data Object field so that we can choose the BAM data object that we will map the sensor variable onto.

Having selected the BAM data object, we need to select the operation to be performed on the data object. The drop-down list gives us four options:

  • Insert
  • Update
  • Delete
  • Upsert

The Insert operation creates a new instance of the BAM data object. This may result in multiple data objects having the same field values.

The Insert operation does not use a key as it always creates a new data object. The remaining three operations require a key because they may operate on an existing data object. The key must uniquely identify a data object and may consist of one or more data object fields.

The Update operation will update an existing data object, overwriting some or all of the fields, as desired. If the object cannot be found from the key, then no data is updated in the ADC.

The Delete operation will remove a data object from the ADC. If the key does not identify an object, then no object will be deleted.

The Upsert operation behaves as an update operation if the key does identify an existing data object in the ADC. If the key does not identify an existing object in the ADC, then it behaves as an Insert operation.

Generally, we use the Insert operation when we know we are creating an object for the first time, and we use the Update operation when we know that the object already exists. We use the Upsert operation when we are unsure if an object exists.

For example, we may use an Insert operation to create an instance of a process status object and then use an update to change the status value of the object as the process progresses. When tracking process state, it is a good idea to use the process instance identifier as a key field in the data object.

Having chosen our operation, an Insert operation for example, we then need to map the fields in the sensor variable defined in BPEL to the BAM data object. We do this by creating a new XSLT transformation by clicking the green cross next to the Map File field.

Within the XSLT transformation editor, we can map the BPEL variable to the BAM data object. In addition to the variable itself, there is a host of other information available to us in the source document alongside the BPEL variable. This can be categorized as follows:

  • Header Information
    • This relates to the process instance and the specific sensor that is firing
  • Payload
    • This contains not only the sensor variable contents but also information about the activity and any fault associated with it

Useful data includes the instance ID of the process and also the time the sensor fired as well as the elapsed times for actions. Once we have wired up the variable data, we can save the transform file.
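
As a sketch of what such a map might look like (the source and target element names and namespaces are placeholders; the real ones come from the sensor's source document and the chosen BAM data object), the transform simply copies header and payload values into the fields of the data object:

    <?xml version="1.0" encoding="UTF-8"?>
    <xsl:stylesheet version="1.0"
                    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                    xmlns:src="http://example.com/sensor/source"
                    xmlns:tgt="http://example.com/bam/AuctionState">
      <xsl:template match="/">
        <tgt:AuctionState>
          <!-- Instance ID taken from the header information -->
          <tgt:ProcessInstanceId>
            <xsl:value-of select="/src:actionData/src:header/src:instanceId"/>
          </tgt:ProcessInstanceId>
          <!-- Business data taken from the sensor variable in the payload -->
          <tgt:State>
            <xsl:value-of select="/src:actionData/src:payload/src:AuctionStatus/src:State"/>
          </tgt:State>
        </tgt:AuctionState>
      </xsl:template>
    </xsl:stylesheet>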

When we have finished creating the sensor action, we can deploy the process to the BPEL server, and events will be fired to populate the BAM Active Data Cache.

Testing the events

After creating our BAM sensors, we can test them by executing a process in BPEL and ensuring that the events appear in the Active Data Cache. We can find the actual event data by selecting the object in BAM Architect and then clicking Contents, which will list the actual data object instances.

Creating a simple dashboard

Now that our sensors are in place and working, we can use the BAM Active Studio application to create a report based on the sensor information. To help organize our reports, it is possible to create folders to hold reports in a similar fashion to the way we created folders to hold data objects.

Let us create a report that shows the status of auctions in the system and also shows the value of all auctions currently open. We will start by creating the report itself. The report is just a holder for views, and we create it by selecting the Create A New Report button.

We can select a report that has the right number of panes for the number of views we want. Note that it is possible to change the number of panes on the report, so if we get it wrong, it does not matter. For now, we will choose a simple split-screen report with two panes, one above the other.

We can provide a title for a report by editing the title section directly. Having updated the title, we can then proceed to create the views.

Monitoring process status

For our first view, let us monitor how many auctions are in particular states. We are interested in a count of the number of auctions with a given state value. This would be well represented with a histogram-style chart, so we select a 3D bar chart from the view pane.

A wizard appears at the bottom of the screen, which gives us the opportunity to select a data object to be used as the basis of the view. We navigate to the Auction folder and select the AuctionState object. Note that it is possible to have multiple data objects in a view, but additional data objects are added later.

Having selected the data object, we choose the fields we will need in order to present the current state of each auction. We select the state field from the Chart Values column as a value we want to use in our report. We can also group the data by particular fields, in this case the state of the auction. By default, only date and string fields can be grouped; by selecting Include Value Fields, any field becomes available for selection in the Group By column. By selecting a summary function (Count) for our state field, we can count the number of auctions in a given state.
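
To make the aggregation concrete, the following is a minimal Java sketch of the calculation this view performs: counting auctions grouped by their state. The Auction record, field names, and sample values are hypothetical illustrations rather than part of the oBay sample code; in practice BAM performs this aggregation for us inside the Active Data Cache.

    import java.util.List;
    import java.util.Map;
    import java.util.stream.Collectors;

    public class AuctionStateCount {
        // Hypothetical, simplified stand-in for the AuctionState data object
        record Auction(String auctionId, String state) {}

        public static void main(String[] args) {
            List<Auction> auctions = List.of(
                    new Auction("A1", "Open"),
                    new Auction("A2", "Open"),
                    new Auction("A3", "Completed"),
                    new Auction("A4", "Cancelled"));

            // Group by the state field and apply the Count summary function;
            // this is the value plotted per state by the 3D bar chart view
            Map<String, Long> countByState = auctions.stream()
                    .collect(Collectors.groupingBy(Auction::state, Collectors.counting()));

            countByState.forEach((state, count) ->
                    System.out.println(state + " : " + count));
        }
    }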

Finally, the wizard gives us the opportunity to further modify the view by:

  • Creating a filter to restrict the range of data objects included in the view
  • Adding additional calculated fields to the view
  • Adding additional data objects to the view to be displayed alongside the existing data object
  • Changing the visual properties of the view

We will create a filter to restrict the display to those processes that are either currently running or have completed in the last seven days. To do this, we select the filter link and then add a new entry to the filter.

We can now choose a date field (Expires) and specify that we want to include any data object whose Expires field falls within the last week. This prevents the view from accumulating an ever-increasing number of completed processes. When the filter expression is complete, we click Update Entry to add the entry to the filter.

Tip

Update Entry link

Always remember to click the Update Entry or Add Entry link after making changes in your filter expressions. Only after clicking this can you select OK to complete your changes; otherwise, your changes will be lost.

When we have clicked Update Entry, we can review the filter and select Apply. This updates the underlying view, and we can verify that the data looks as we expect.
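
As a rough illustration of what the filter achieves, the following Java sketch applies the equivalent selection logic: keep only those auctions whose Expires field falls within the last week (or in the future). The Auction record and field names are hypothetical and purely illustrative; in BAM the filtering happens inside the view itself.

    import java.time.Instant;
    import java.time.temporal.ChronoUnit;
    import java.util.List;
    import java.util.stream.Collectors;

    public class ExpiresFilter {
        // Hypothetical, simplified stand-in for the auction data object
        record Auction(String auctionId, String state, Instant expires) {}

        public static void main(String[] args) {
            Instant oneWeekAgo = Instant.now().minus(7, ChronoUnit.DAYS);

            List<Auction> auctions = List.of(
                    new Auction("A1", "Open", Instant.now().plus(2, ChronoUnit.DAYS)),
                    new Auction("A2", "Completed", Instant.now().minus(3, ChronoUnit.DAYS)),
                    new Auction("A3", "Completed", Instant.now().minus(30, ChronoUnit.DAYS)));

            // Keep only auctions whose Expires field falls within the last week
            // (or lies in the future), mirroring the BAM filter entry
            List<Auction> recent = auctions.stream()
                    .filter(a -> a.expires().isAfter(oneWeekAgo))
                    .collect(Collectors.toList());

            recent.forEach(a -> System.out.println(a.auctionId() + " " + a.state()));
        }
    }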

Monitoring KPIs

In the previous section, we looked at monitoring the state of a process. In this section, we will use BAM to give a real-time view of our KPIs. For example, we may be interested in monitoring the current value of all open auctions. This can be done by creating a view using, for example, a dial gauge, which shows a value in the context of acceptable and unacceptable bounds. Creating the view is done in the same fashion as before, and again we may make use of filters to restrict the range of data objects that are included in the view.
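
The KPI behind such a gauge is simply an aggregate over the filtered data objects. The following Java sketch illustrates the kind of calculation involved, summing the bid value of auctions whose state is Open; the Auction record, field names, and figures are hypothetical illustrations, not the book's sample code.

    import java.math.BigDecimal;
    import java.util.List;

    public class OpenAuctionValueKpi {
        // Hypothetical, simplified stand-in for the auction data object
        record Auction(String auctionId, String state, BigDecimal currentBid) {}

        public static void main(String[] args) {
            List<Auction> auctions = List.of(
                    new Auction("A1", "Open", new BigDecimal("120.00")),
                    new Auction("A2", "Open", new BigDecimal("45.50")),
                    new Auction("A3", "Completed", new BigDecimal("300.00")));

            // Filter to open auctions and apply a Sum summary function over the
            // bid value; this total is what the dial gauge would display
            BigDecimal openValue = auctions.stream()
                    .filter(a -> "Open".equals(a.state()))
                    .map(Auction::currentBid)
                    .reduce(BigDecimal.ZERO, BigDecimal::add);

            System.out.println("Total value of open auctions: " + openValue);
        }
    }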

When we have completed the views in our report and saved the report, we may view it through the Active Viewer application and watch the values change in real time.

Note that we can drill down into the reports to gain additional information. By default, this only gives a list of individual data objects with the same values as those displayed in the top-level view. To gain more control over drill-down, it is necessary to use the Drilling tab in the view editor to specify the drill-down parameters.

Summary

In this chapter, we have explored how business activity monitoring differs from, and is complementary to, more traditional business intelligence solutions such as Oracle Reports and Business Objects. We have explored how BAM allows the business to monitor business targets and Key Performance Indicators, such as the current most popular products in a retail environment or the current time taken to serve customers in a service environment. We also looked at how BAM can be used to allow the business to monitor the current state of processes, both in aggregate and by drilling down to individual process instances.

Key benefits

  • A hands-on, best-practice guide to using and applying the Oracle SOA Suite in the delivery of real-world SOA applications
  • Detailed coverage of the Oracle Service Bus, BPEL PM, Rules, Human Workflow, Event Delivery Network, and Business Activity Monitoring
  • Master the best way to use and combine each of these different components in the implementation of a SOA solution
  • Illustrates key techniques and best practices using a working example of an online auction site (oBay)

Description

We are moving towards a standards-based Service-Oriented Architecture (SOA), where IT infrastructure is continuously adapted to keep up with the pace of business change. Oracle is at the forefront of this vision, with the Oracle SOA Suite providing the most comprehensive, proven, and integrated tool kit for building SOA-based applications.

Developers and architects using the Oracle SOA Suite, whether working on integration projects, building composite applications, or specializing in implementations of Oracle Applications, need a hands-on guide on how best to harness and apply this technology. This book will guide you in using and applying the Oracle SOA Suite to solve real-world problems, enabling you to quickly learn and master the technology and its applications.

This book is a major update to Oracle SOA Suite Developer's Guide, which covered 10gR3. It is completely updated for Oracle SOA Suite 11gR1, with 40% new material, including detailed coverage of newer components such as the Mediator, the new Rules Editor, the Event Delivery Network, Service Data Objects, and the Meta Data Repository. There is also a complete additional chapter on advanced SOA architecture, including message delivery, transaction handling, and clustering considerations.

The initial section of the book provides you with a detailed hands-on tutorial for each of the core components that make up the Oracle SOA Suite. Once you are familiar with the various pieces of the SOA Suite and what they do, the next question will typically be: "What is the best way to use and combine all of these different components to implement a real-world SOA solution?" Answering this question is the goal of the next section. Using a working example of an online auction site (oBay), it leads you through key SOA design considerations in implementing a robust solution that is designed for change.

The final section addresses non-functional considerations and covers the packaging, deployment, and testing of SOA applications. It then details how to secure and administer SOA applications.

Who is this book for?

If you are a developer or a technical architect who works in the SOA domain, this book is for you. The primary purpose of the book is to provide you with a hands-on, practical guide to using and applying the Oracle SOA Suite in the delivery of real-world composite applications. You need a basic understanding of the concepts of SOA, as well as some of the key standards in this field, including web services (SOAP, WSDL), XML Schemas, and XSLT (and XPath).

What you will learn

  • Implement SOA composites using standards like the Service Component Architecture (SCA) of the Oracle SOA Suite
  • Build implementation-agnostic services using the Oracle Service Bus and Mediator
  • Learn to use key technology adapters to service-enable existing systems
  • Assemble services to build composite services and long-running business processes using BPEL
  • Implement Service Data Objects (SDOs) and embed them as Entity Variables within a BPEL Process using ADF-Business Components
  • Implement Business Rules and Decision Tables using the new Rules Editor
  • Incorporate Human Workflow into your processes and use Business Rules to provide greater agility
  • Leverage the Meta Data Service (new in 11gR1) to share XML resources between composites.
  • Design XML schemas and WSDL service contracts for improved agility, reuse, and interoperability
  • Transform the look and feel of the workflow within your solution using the Workflow APIs
  • Handle errors within your application using Fault Policies
  • Create, deploy, and run test cases that automate the testing of composite applications
  • Secure and administer SOA applications using Web Service Manager
  • Learn best practices to architect, design, and implement your overall SOA Solution

Product Details

Publication date : Jul 01, 2010
Length: 720 pages
Edition : 1st
Language : English
ISBN-13 : 9781849680189
Vendor : Oracle


Table of Contents

4 Chapters
I. Getting Started
II. Putting it All Together
III. Other Considerations
Index

Customer reviews

Rating distribution: 4.3 out of 5 (11 Ratings)
5 star: 72.7%
4 star: 9.1%
3 star: 0%
2 star: 9.1%
1 star: 9.1%

ST, Dec 17, 2010 (5 stars)
This is one of the best book covering Oracle SOA Suite 11g for both developers and hands-on architects. Every SOA Suite topic is covered in a simple and effective way. You can use it in multiple ways - either read it from the beginning to end as a supplement for self study, or just as a reference guide going right into the specific SOA Suite 11g topic you are looking for. I would also suggest you to take a look at Antony's blog including many topics covered in this book and other advance areas for mastering Oracle SOA Suite 11g. Download the Oracle SOA Suite 11g and start your project - this book on your desk you will master it fast.
Amazon Verified review
Lei Zhang, Sep 14, 2010 (5 stars)
This book is great guide for all people who want to know all new features of Oracle SOA suite 11g, very detailed and easy to understand. Highly recommend!
Amazon Verified review
alena, Dec 20, 2014 (5 stars)
very useful and practical. I used the solutions in our real project.
Amazon Verified review
Rafiq, Oct 28, 2015 (5 stars)
I bought this book when I started learning SOA and have bought several other books after that but none compares to it. It is the most practical/hands-on book I have ever read on Oracle SOA. I can't wait for the 12c version. To the authors, if you are planning a 12c version (I think you should), I will suggest that you use a real life project to illustrate the concepts and technologies. Maybe a bank that is building a SOA solution. There is one thing that is mostly missing in SOA books though...How do end users interact with SOA applications? I know that Oracle Fusion Applications are built on the SOA architecture with nice looking web forms? How are these web applications built on-top of or as part of SOA solutions? Please do include this in the 12c version. I haven't read the whole book though so I am leaving room for correction if the requested features in the 12c version are already in this book.
Amazon Verified review
Konstantin, Jan 14, 2013 (5 stars)
Great book ! Everything's explained in details ! Fantastic book for people with hands on SOA and for beginners, as well!
Amazon Verified review

FAQs

What is the delivery time and cost of the print book?

Shipping Details

USA:

Economy: Delivery to most addresses in the US within 10-15 business days

Premium: Trackable Delivery to most addresses in the US within 3-8 business days

UK:

Economy: Delivery to most addresses in the U.K. within 7-9 business days.
Shipments are not trackable

Premium: Trackable delivery to most addresses in the U.K. within 3-4 business days!
Add one extra business day for deliveries to Northern Ireland and Scottish Highlands and islands

EU:

Premium: Trackable delivery to most EU destinations within 4-9 business days.

Australia:

Economy: Can deliver to P. O. Boxes and private residences.
Trackable service with delivery to addresses in Australia only.
Delivery time ranges from 7-9 business days for VIC and 8-10 business days for Interstate metro
Delivery time is up to 15 business days for remote areas of WA, NT & QLD.

Premium: Delivery to addresses in Australia only
Trackable delivery to most P. O. Boxes and private residences in Australia within 4-5 days based on the distance to a destination following dispatch.

India:

Premium: Delivery to most Indian addresses within 5-6 business days

Rest of the World:

Premium: Countries in the American continent: Trackable delivery to most countries within 4-7 business days

Asia:

Premium: Delivery to most Asian addresses within 5-9 business days

Disclaimer:
All orders received before 5 PM U.K. time will start printing from the next business day, so the estimated delivery times also start from the next day. Orders received after 5 PM U.K. time (in our internal systems) on a business day, or at any time on the weekend, will begin printing two business days later. For example, an order placed at 11 AM today will begin printing tomorrow, whereas an order placed at 9 PM tonight will begin printing the day after tomorrow.


Unfortunately, due to several restrictions, we are unable to ship to the following countries:

  1. Afghanistan
  2. American Samoa
  3. Belarus
  4. Brunei Darussalam
  5. Central African Republic
  6. The Democratic Republic of Congo
  7. Eritrea
  8. Guinea-Bissau
  9. Iran
  10. Lebanon
  11. Libyan Arab Jamahiriya
  12. Somalia
  13. Sudan
  14. Russian Federation
  15. Syrian Arab Republic
  16. Ukraine
  17. Venezuela
What is a customs duty/charge?

A customs duty is a charge levied on goods when they cross international borders; it is a tax imposed on imported goods. These duties are charged by special authorities and bodies created by local governments and are meant to protect local industries, economies, and businesses.

Do I have to pay customs charges for the print book order?

Orders shipped to the countries listed under the EU27 will not bear customs charges; these are paid by Packt as part of the order.

List of EU27 countries: www.gov.uk/eu-eea

For shipments to countries outside the EU27, a customs duty or localized taxes may be applied by the recipient country. These charges must be paid by the customer and are not included in the shipping charges on the order.

How do I know my customs duty charges?

The amount of duty payable varies greatly depending on the imported goods, the country of origin, and several other factors, such as the total invoice amount, dimensions such as weight, and other criteria applicable in your country.

For example:

  • If you live in Mexico and the declared value of your ordered items is over $50, you will have to pay an additional import tax of 19% (in this case $9.50) to the courier service to receive your package.
  • Whereas if you live in Turkey and the declared value of your ordered items is over €22, you will have to pay an additional import tax of 18% (in this case €3.96) to the courier service to receive your package.
How can I cancel my order?

Cancellation Policy for Published Printed Books:

You can cancel any order within 1 hour of placing the order. Simply contact customercare@packt.com with your order details or payment transaction id. If your order has already started the shipment process, we will do our best to stop it. However, if it is already on the way to you, then once you receive it, you can contact us at customercare@packt.com using the returns and refund process.

Please understand that Packt Publishing cannot provide refunds or cancel any order except in the cases described in our Return Policy (i.e., where Packt Publishing agrees to replace your printed book because it arrives damaged or with a material defect); otherwise, Packt Publishing will not accept returns.

What is your returns and refunds policy?

Return Policy:

We want you to be happy with your purchase from Packtpub.com. We will not hassle you with returning print books to us. If the print book you receive from us is incorrect, damaged, doesn't work, or is unacceptably late, please contact our Customer Relations Team at customercare@packt.com with the order number and issue details as explained below:

  1. If you ordered (eBook, Video, or Print Book) incorrectly or accidentally, please contact the Customer Relations Team at customercare@packt.com within one hour of placing the order and we will replace/refund you the item cost.
  2. If your eBook or Video file is faulty, or a fault occurs while the eBook or Video is being made available to you (i.e., during download), you should contact the Customer Relations Team within 14 days of purchase at customercare@packt.com, who will be able to resolve this issue for you.
  3. You will have a choice of replacement or refund for the problem items (damaged, defective, or incorrect).
  4. Once the Customer Care Team confirms that you will be refunded, you should receive the refund within 10 to 12 working days.
  5. If you are only requesting a refund of one book from a multiple order, then we will refund you the appropriate single item.
  6. Where the items were shipped under a free shipping offer, there will be no shipping costs to refund.

On the off chance your printed book arrives damaged or with a material defect, contact our Customer Relations Team at customercare@packt.com within 14 days of receipt of the book with appropriate evidence of damage, and we will work with you to secure a replacement copy, if necessary. Please note that each printed book you order from us is individually made on a print-on-demand basis by Packt's professional book-printing partner.

What tax is charged?

Currently, no tax is charged on the purchase of any print book (subject to change based on the laws and regulations). A localized VAT fee is charged only to our European and UK customers on eBooks, Video and subscriptions that they buy. GST is charged to Indian customers for eBooks and video purchases.

What payment methods can I use?

You can pay with the following card types:

  1. Visa Debit
  2. Visa Credit
  3. MasterCard
  4. PayPal