
How-To Tutorials - Servers

95 Articles

Optimizing Lighttpd

Packt
16 Oct 2009
5 min read
If our Lighttpd runs on a multi-processor machine, it can take advantage of that by spawning multiple versions of itself. Also, most Lighttpd installations will not have a machine to themselves; therefore, we should not only measure speed but also resource usage.

Optimizing compilers: gcc with the usual settings (-O2) already does quite a good job of creating a fast Lighttpd executable. However, -O3 may nudge the speed up a tiny bit (or slow it down, depending on our system) at the cost of a bigger executable. If there are optimizing compilers for our platform (for example, Intel and Sun Microsystems each have compilers that optimize for their CPUs), they might give another tiny speed boost. If we do not want to invest money in commercial compilers but want to maximize what gcc has to offer, we can use Acovea, an open source project that employs genetic algorithms and trial-and-error to find the best individual gcc settings for our platform. Get it from http://www.coyotegulch.com/products/acovea/

Finally, optimization should stop where security (or, to a lesser extent, maintainability) is compromised. A slower web server that does what we want is far better than a fast web server obeying the commands of a script kiddie.

Before we optimize blindly, we need a way to measure "speed". A useful measure most administrators will agree on is served requests per second. http_load is a tool that measures exactly this; we can get it from http://www.acme.com/software/http_load/. http_load is very simple: give it a site to request, and it will flood the site with requests, measuring how many are served in a given amount of time. This allows a very simplistic approach to optimizing Lighttpd: tweak some settings, run http_load with a sufficiently realistic scenario, and see if our Lighttpd handles more or fewer requests than before. We do not yet know where to spend time optimizing. 
For this, we need the timing log instrumentation included with Lighttpd 1.5.0, or even a profiler, to see where the most time is spent. However, there are some "big knobs" to turn that can increase performance, and http_load will help us find a good setting.

Installing http_load

http_load can be downloaded as a source .tar file (which was named .tar.gz for me, though it is not gzipped). The version as of this writing is 12Mar2006. Unpack it to /usr/src (or another path by changing /usr/src) with:

$ cd /usr/src && tar xf /path/to/http_load-12Mar2006.tar.gz
$ cd http_load-12Mar2006

We can optionally add SSL support; skip this if you do not need it. To add SSL support, we need to find out where the SSL libraries and includes are. I assume they are in /usr/lib and /usr/include, respectively, but they may not be the same on your system. Additionally, there is an "SSL tree" directory, usually /usr/ssl or /usr/local/ssl, that contains certificates, revocation lists, and so on. Open the Makefile with a text editor and look at lines 11 to 14, which read:

#SSL_TREE = /usr/local/ssl
#SSL_DEFS = -DUSE_SSL
#SSL_INC = -I$(SSL_TREE)/include
#SSL_LIBS = -L$(SSL_TREE)/lib -lssl -lcrypto

Change them to the following (assuming the given directories are correct):

SSL_TREE = /usr/ssl
SSL_DEFS = -DUSE_SSL
SSL_INC = -I/usr/include
SSL_LIBS = -L/usr/lib -lssl -lcrypto

Now compile and install http_load with the following command:

$ make all install

Now we're all set to load-test our Lighttpd.

Running http_load tests

We just need a URL file containing URLs that lead to pages our Lighttpd serves. http_load will then fetch these pages at random, as long as, or as often as, we ask it to. For example, we may have a front page with links to different articles. To get started, we can just put a link to our front page into the URL file, which we will name urls; for example, http://localhost/index.html. 
Note that the file contains URLs, nothing more, nothing less (for example, http_load does not support blank lines). Now we can make our first test run:

$ http_load -parallel 10 -seconds 60 urls

This will run for one minute, keeping up to 10 connections open in parallel. Let's see if our Lighttpd keeps up:

343 fetches, 10 max parallel, 26814 bytes, in 60 seconds
78.1749 mean bytes/connection
5.71667 fetches/sec, 446.9 bytes/sec
msecs/connect: 290.847 mean, 9094 max, 15 min
msecs/first-response: 181.902 mean, 9016 max, 15 min
HTTP response codes:
  code 200 - 327

As we can see, it does. http_load needs one of the two start conditions and one of the two stop conditions, plus a URL file, to run. We can create the URL file manually or crawl our document root(s) with the following Python script, called crawl.py:

#!/usr/bin/python
# run from document root, pipe into URLs file. For example:
# /path/to/docroot$ crawl.py > urls
import os, re, sys
hostname = "http://localhost/"
for (root, dirs, files) in os.walk("."):
    for name in files:
        filepath = os.path.join(root, name)
        # anchor the pattern so only the leading "./" is replaced
        print re.sub(r"^\./", hostname, filepath)

You can download the crawl.py file from http://www.packtpub.com/files/code/2103_Code.zip. Capture the output into a file to use as the URL file. For example, start the script from within our document root with:

$ python crawl.py > urls

This gives us a urls file that will make http_load try to get all files (given that we have specified enough requests). Then we can start http_load as discussed in the preceding example. http_load takes the following options, among others: -parallel or -rate as the start condition, and -seconds or -fetches as the stop condition.
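The summary http_load prints can be compared across tuning runs; here is a small, illustrative parser for those summary lines (the field names and the parse_http_load helper are my own, not part of http_load):

```python
import re

def parse_http_load(output):
    """Pull the headline numbers out of an http_load summary."""
    metrics = {}
    m = re.search(r"(\d+) fetches, (\d+) max parallel, (\d+) bytes, in ([\d.]+) seconds", output)
    if m:
        metrics["fetches"] = int(m.group(1))
        metrics["max_parallel"] = int(m.group(2))
        metrics["bytes"] = int(m.group(3))
        metrics["seconds"] = float(m.group(4))
    m = re.search(r"([\d.]+) fetches/sec", output)
    if m:
        metrics["fetches_per_sec"] = float(m.group(1))
    # response-code lines look like "code 200 - 327"
    for code, count in re.findall(r"code (\d+) -+ (\d+)", output):
        metrics.setdefault("codes", {})[int(code)] = int(count)
    return metrics
```

Feeding it the sample output above yields fetches=343 and fetches_per_sec=5.71667, so before/after runs can be compared programmatically instead of by eye.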


Messaging with WebSphere Application Server 7.0 (Part 2)

Packt
05 Oct 2009
7 min read
WebSphere MQ overview

WebSphere MQ, formerly known as MQSeries, is IBM's enterprise messaging solution. In a nutshell, MQ provides mechanisms for both point-to-point and publish-subscribe messaging, and it guarantees to deliver a message once and only once. This is important for critical business applications that implement messaging. An example of a critical system could be a banking payments system, where messages pertain to money transfers between banking systems, so guaranteeing delivery of a debit/credit is paramount. Aside from guaranteed delivery, WMQ is often used for messaging between dissimilar systems, and the WMQ software provides programming interfaces in most of the common languages, such as Java, C, C++, and so on. It is common to find WMQ used alongside WebSphere when WebSphere is hosting message-enabled applications, so it is important that the WebSphere administrator understands how to configure WebSphere resources so that applications can be coupled to MQ queues.

Overview of the WebSphere MQ example

To demonstrate messaging using WebSphere MQ, we are going to re-configure the previously deployed JMS Tester application so that it uses a connection factory that communicates with a queue on a WMQ queue manager, as opposed to the default provider we demonstrated earlier.

Installing WebSphere MQ

Before we can install our demo messaging application, we need to download and install WebSphere MQ 7.0. A free 90-day trial can be found at the following URL: http://www.ibm.com/developerworks/downloads/ws/wmq/. Click the download link as shown below. You will be prompted to register as an IBM website user before you can download the WebSphere MQ trial. Once you have registered and logged in, the download link above will take you to a page which lists downloads for different operating systems. 
Select WebSphere MQ 7.0 90-day trial from the list of available options as shown below, and click continue to go to the download page. You may be asked to fill out a questionnaire detailing why you are evaluating WebSphere MQ (WMQ); fill it out as you see fit and submit it to move to the download page. As shown above, make sure you use the IBM HTTP Download Director, as it will ensure that your download can resume even if your Internet connection drops. If you do not have a high-speed Internet connection, you can try downloading the 90-day trial overnight while you are asleep. Download the trial to a temp folder, for example c:\temp, on your local machine; the screenshot above shows how the IBM HTTP Downloader prompts for the location to download to. Once the WMQ install file has been downloaded, upload it with an appropriate secure-copy utility such as WinSCP to a suitable folder, such as /apps/wmq_install, on your Linux machine. Once you have the file uploaded to Linux, you can decompress it and run the installer to install WebSphere MQ.

Running the WMQ installer

Now that you have uploaded the WMQv700Trial-x86_linux.tar.gz file to your Linux machine, follow these steps. Decompress the file using the following command:

gunzip ./WMQv700Trial-x86_linux.tar.gz

Then run the un-tar command:

tar -xvf ./WMQv700Trial-x86_linux.tar

Before we can run the WMQ installation, we need to accept the license agreement by running the following command:

./mqlicense.sh -accept

To run the WebSphere MQ installation, type the following commands:

rpm -ivh MQSeriesRuntime-7.0.0-0.i386.rpm
rpm -ivh MQSeriesServer-7.0.0-0.i386.rpm
rpm -ivh MQSeriesSamples-7.0.0-0.i386.rpm

As a result of running the MQSeriesServer installation, a new user called mqm was created. 
Before running any WMQ command, we need to switch to this user using the following command:

su - mqm

Then we can run commands like dspmqver, which checks that WMQ was installed correctly:

/opt/mqm/bin/dspmqver

The result will be a version message as shown in the screenshot below.

Creating a queue manager

Before we can complete our WebSphere configuration, we need to create a WMQ queue manager and a queue; then we will use some MQ command-line tools to put a test message on an MQ queue and get a message back from it. To create a new queue manager called TSTDADQ1, use the following command:

crtmqm TSTDADQ1

The result will be as shown in the image below. We can now type the following command to list queue managers:

dspmq

To start the queue manager (QM), type the following command:

strmqm TSTDADQ1

The result of starting the QM will be similar to the image below. Now that we have successfully created a QM, we need to add a queue called LQ.TEST where we can put and get messages. To create a local queue on the TSTDADQ1 QM, type the following commands in order:

runmqsc TSTDADQ1

You are now running the MQ scripting command line, where you can issue MQ commands to configure the QM. To create the queue, type the following command and hit Enter:

define qlocal(LQ.TEST)

Then immediately type the following command and hit Enter to complete the QM configuration, as shown by the following screenshot:

end

You can use the following command to see if your LQ.TEST queue exists:

echo "dis QLOCAL(*)" | runmqsc TSTDADQ1 | grep -i test

You have now added a local queue called LQ.TEST to the TSTDADQ1 queue manager. Next, define and start a listener so that clients can connect over TCP:

runmqsc TSTDADQ1
DEFINE LISTENER(TSTDADQ1.listener) TRPTYPE(TCP) PORT(1414)
START LISTENER(TSTDADQ1.listener)
end

You can type the following command to ensure that your QM listener is running: 
ps -ef | grep mqlsr

The result will be similar to the image below. To create a default server-connection channel, you can run the following commands:

runmqsc TSTDADQ1
DEFINE CHANNEL(SYSTEM.ADMIN.SVRCONN) CHLTYPE(SVRCONN)
end

We can now use a sample MQ program called amqsput to put a test message on a queue, to ensure that our MQ configuration is working before we continue to configure WebSphere. Type the following command to put a test message on the LQ.TEST queue:

/opt/mqm/samp/bin/amqsput LQ.TEST TSTDADQ1

Then type a test message, for example Test Message, and hit Enter; a blank line ends the input, puts the message on the LQ.TEST queue, and exits the amqsput tool. Now that we have put a message on the queue, we can read it using the MQ sample tool called amqsget. Type the following command to get the message you posted earlier:

/opt/mqm/samp/bin/amqsget LQ.TEST TSTDADQ1

The result is that all messages on the LQ.TEST queue are listed, and the tool then times out after a few seconds, as shown below. Two final steps remain to complete the setup. First, add the root user to the mqm group. This is not standard practice in an enterprise, but we have to do it because our WebSphere installation is running as root. If we did not, we would have to reconfigure the user that the WebSphere process runs under and then add that user to MQ security. To keep things simple, ensure that root is a member of the mqm group by typing the following command:

usermod -a -G mqm root

Second, we need to change WMQ security to ensure that all users of the mqm group have access to all the objects of the TSTDADQ1 queue manager. To do so, type the following command:

setmqaut -m TSTDADQ1 -t qmgr -g mqm +all

Now we are ready to continue configuring WebSphere and create the appropriate QCF and queue destinations to access WMQ from WebSphere.
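Checks like the echo "dis QLOCAL(*)" | runmqsc pipeline above are easy to mistype; as a sketch, they can be generated from Python (the mqsc_pipeline helper is my own invention, and the generated string assumes runmqsc is on the PATH when actually executed):

```python
import shlex

def mqsc_pipeline(qmgr, commands, grep=None):
    """Build a shell pipeline that feeds MQSC commands to runmqsc,
    optionally filtering the output with a case-insensitive grep."""
    script = "\n".join(commands)
    pipeline = f"echo {shlex.quote(script)} | runmqsc {shlex.quote(qmgr)}"
    if grep:
        pipeline += f" | grep -i {shlex.quote(grep)}"
    return pipeline

# reproduces the queue-existence check used above
print(mqsc_pipeline("TSTDADQ1", ["dis QLOCAL(*)"], grep="test"))
# echo 'dis QLOCAL(*)' | runmqsc TSTDADQ1 | grep -i test
```

Using shlex.quote keeps MQSC syntax such as `(*)` from being interpreted by the shell.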


Messaging with WebSphere Application Server 7.0 (Part 1)

Packt
01 Oct 2009
6 min read
Messaging in a large enterprise is common, and a WebSphere administrator needs to understand what WebSphere Application Server can do for Java messaging and/or WebSphere Message Queuing (WMQ) based messaging. Here, we will learn how to create Queue Connection Factories (QCF) and Queue Destinations (QD), which we will use in a demonstration application to show the Java Message Service (JMS) and how WMQ can be used as part of a messaging implementation. In this two-part article by Steven Charles Robinson, we will cover the following topics:

Java messaging
Java Message Service (JMS)
WebSphere messaging
Service integration bus (SIB)
WebSphere MQ
Message providers
Queue connection factories
Queue destinations

Java messaging

Messaging is a method of communication between software components or applications. A messaging system is often peer-to-peer, meaning that a messaging client can send messages to, and receive messages from, any other client. Each client connects to a messaging service that provides a system for creating, sending, receiving, and reading messages. So why do we have Java messaging? Messaging enables distributed communication that is loosely coupled: a client sends a message to a destination, and the recipient retrieves the message from the destination. A key point of Java messaging is that the sender and the receiver do not have to be available at the same time in order to communicate; the term communication here means an exchange of messages between software components. In fact, the sender does not need to know anything about the receiver, nor does the receiver need to know anything about the sender. The sender and the receiver need to know only what message format and what destination to use. Messaging also differs from electronic mail (email), which is a method of communication between people or between software applications and people. 
Messaging is used for communication between software applications or software components. Java messaging relaxes tightly coupled communication (such as TCP network sockets, CORBA, or RMI), allowing software components to communicate indirectly with each other.

Java Message Service

Java Message Service (JMS) is an application programming interface (API) from Sun. JMS provides a common interface to standard messaging protocols and also to special messaging services in support of Java programs. Messages can carry crucial data between systems and contain information such as event notifications and service requests. Messaging is often used to coordinate programs in dissimilar systems or written in different programming languages. By using the JMS interface, a programmer can invoke messaging services like IBM's WebSphere MQ (WMQ), formerly known as MQSeries, and other popular messaging products. In addition, JMS supports messages that contain serialized Java objects and messages that contain XML-based data. A JMS application is made up of the following parts, as shown in the following diagram:

A JMS provider is a messaging system that implements the JMS interfaces and provides administrative and control features.
JMS clients are the programs or components, written in the Java programming language, that produce and consume messages.
Messages are the objects that communicate information between JMS clients.
Administered objects are preconfigured JMS objects created by an administrator for the use of clients. The two kinds of objects are destinations and Connection Factories (CF).

As shown in the diagram above, administrative tools allow you to create destination and connection factory resources and bind them into a Java Naming and Directory Interface (JNDI) API namespace. A JMS client can then look up the administered objects in the namespace and establish a logical connection to the same objects through the JMS provider. 
JMS features

Application clients, Enterprise JavaBeans (EJB), and web components can send or synchronously receive JMS messages. Application clients can, in addition, receive JMS messages asynchronously. A special kind of enterprise bean, the message-driven bean, enables the asynchronous consumption of messages. A JMS message can also participate in distributed transactions.

JMS concepts

The JMS API supports two models:

Point-to-point or queuing model

As shown below, in the point-to-point or queuing model, the sender posts messages to a particular queue and a receiver reads messages from the queue. Here, the sender knows the destination of the message and posts the message directly to the receiver's queue. Only one consumer gets the message. The producer does not have to be running at the time the consumer consumes the message, nor does the consumer need to be running at the time the message is sent. Every message successfully processed is acknowledged by the consumer. Multiple queue senders and queue receivers can be associated with a single queue, but an individual message can be delivered to only one queue receiver. If multiple queue receivers are listening for messages on a queue, the Java Message Service determines which one will receive the next message on a first-come, first-served basis. If no queue receivers are listening on the queue, messages remain in the queue until a queue receiver attaches to the queue.

Publish and subscribe model

As shown by the above diagram, the publish/subscribe model supports publishing messages to a particular message topic. Unlike the point-to-point messaging model, the publish/subscribe messaging model allows multiple topic subscribers to receive the same message. JMS retains the message until all topic subscribers have received it. The publish/subscribe messaging model supports durable subscribers, allowing you to assign a name to a topic subscriber and associate it with a user or application. 
Subscribers may register interest in receiving messages on a particular message topic. In this model, neither the publisher nor the subscriber knows about the other. JMS provides a way of separating the application from the transport layer that delivers the data: the same Java classes can be used to communicate with different JMS providers by using the JNDI information for the desired provider. The classes first use a connection factory to connect to the queue or topic, and then populate and send or publish the messages. On the receiving side, the clients receive or subscribe to the messages.
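The delivery difference between the two models — exactly one consumer per message on a queue, every subscriber on a topic — can be sketched in a few lines of plain Python (a toy model, not the JMS API; the class and variable names are invented for illustration):

```python
class Queue:
    """Point-to-point: each message is consumed by exactly one receiver."""
    def __init__(self):
        self.pending = []
        self.receivers = []

    def send(self, msg):
        self.pending.append(msg)   # held until a receiver attaches
        self._deliver()

    def attach(self, inbox):
        self.receivers.append(inbox)
        self._deliver()

    def _deliver(self):
        while self.pending and self.receivers:
            inbox = self.receivers.pop(0)   # first come, first served
            self.receivers.append(inbox)    # then move to the back of the line
            inbox.append(self.pending.pop(0))


class Topic:
    """Publish/subscribe: every subscriber receives every message."""
    def __init__(self):
        self.subscribers = []

    def subscribe(self, inbox):
        self.subscribers.append(inbox)

    def publish(self, msg):
        for inbox in self.subscribers:
            inbox.append(msg)


q = Queue()
q.send("order-1")                 # sender runs before any receiver exists
a, b = [], []
q.attach(a)                       # order-1 is delivered on attach
q.attach(b)
q.send("order-2")
# each queued message landed in exactly one inbox

t = Topic()
x, y = [], []
t.subscribe(x)
t.subscribe(y)
t.publish("price-update")         # both x and y receive a copy
```

Note how the Queue holds messages while no receiver is attached, mirroring the point above that producer and consumer need not run at the same time.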


Deploying Your Applications on WebSphere Application Server 7.0

Packt
01 Oct 2009
9 min read
Data access applications

We have just deployed an application that did not require database connectivity. Often, applications in the business world require access to an RDBMS to fulfill their business objective. If an application needs to retrieve information from, or store information in, a database, then you will need to create a data source which allows the application to connect to and use the database (DB). Looking at the figure below, we can see the logical flow of the sample data access application that we are going to install. The basic idea of the application is to display a list of tables that exist in a database schema. Since the application requires a database connection, we need to configure WebSphere before we can deploy the application. We will now cover the preparation work before we install our application.

Data sources

Each data source is associated with a JDBC provider that is configured for access to a specific database type. The data source provides the connectivity which allows an application to communicate with the database.

Preparing our sample database

Before you create a data source, you need to ensure that the appropriate client database driver software is installed. For our demonstration, we are going to use Oracle Express Edition (Oracle XE) for Linux, the free version of Oracle. We are using Oracle XE 10g for Linux; the download size is about 210 MB, so it will take some time to download. We installed Oracle XE using the default RPM install option. The administration process is fully documented on Oracle's website and in the documentation installed with the product. We could have chosen any of many open source/free databases, but their explanations and configurations would detract from the point. We have chosen Oracle's free RDBMS, Oracle XE, because JDBC with Oracle XE is quite easy to configure. 
By following these steps, you will be able to apply the same logic to any of the major vendors' full RDBMS products, that is, DB/2, Oracle, SQL Server, and so on. Another reason why we chose Oracle XE is that it is an enterprise-ready DB, is administered by a simple web interface, and comes with sample databases. We need to test that we can connect to our database without WebSphere so that we can evaluate the DB design. To do this, we will need to install Oracle XE. We will now cover the following steps one by one.

Download Oracle XE from Oracle's website using the following URL: http://www.oracle.com/technology/products/database/xe/index.html.

Transfer the oracle-xe-10.2.0.1-1.0.i386.rpm file to an appropriate directory on your Linux server using WinSCP (Secure Copy) or your chosen secure FTP client. Since the XE installer uses X Windows, ensure that you have Xming running. Then install Oracle XE by using the rpm command, as shown here:

rpm -ivh oracle-xe-10.2.0.1-1.0.i386.rpm

Follow the installer steps as prompted:

HTTP port = 8080
Listener port = 1521
SYS & SYSTEM password = oracle
Autostart = y

Oracle XE requires a minimum of 1024 MB of swap space and 1.5 GB of disk space to install. Ensure that Oracle XE is running. You can now access the web interface via a browser from the local machine; by default, XE will only accept a connection locally. As shown in the following figure, we have a screenshot of using Firefox to connect to Oracle XE using the URL http://localhost:8080/apex. The reason we use Firefox on Linux is that it is the most commonly installed default browser on newer Linux distributions. When the administration application loads, you will be presented with a login screen as seen in the following screenshot. You can log in using the username SYSTEM and password oracle, as set during installation. Oracle XE comes with a pre-created user called HR which is granted ownership of the HR schema. 
However, the account is locked by default for security reasons, so we need to unlock the HR user account. To unlock an account, we need to navigate to the Database Users | Manage Users screen, as demonstrated in the following screenshot. You will notice that the icon for the HR user is locked; you will see a small padlock on the HR icon, as seen in this figure. Click on the HR user icon and unlock the account as shown in the following figure: reset the password, change Account Status to Unlocked, and then click Alter User to set the new password. The following figure shows that the HR account is now unlocked. Log out and log back into the administration interface using the HR user to ensure that the account is unlocked.

Another good test of connectivity to Oracle is an admin tool called sqlplus, a command-line tool which database administrators use to administer Oracle. We are going to use sqlplus to run a simple query listing the tables in the HR schema. Before running sqlplus, we need to set an environment variable called $ORACLE_HOME, which sqlplus requires. To set $ORACLE_HOME, type the following command in a Linux shell:

export ORACLE_HOME=/usr/lib/oracle/xe/app/oracle/product/10.2.0/server

If you have installed Oracle XE in a non-default location, you may have to use a different path. To run sqlplus, type the following command:

$ORACLE_HOME/bin/sqlplus

The result will be a login screen as shown below. You will be prompted for a username; type:

hr@xe

For the password, type:

hr

When you have successfully logged in, you can type the following at the SQL prompt:

SELECT TABLE_NAME FROM user_tables
/

The / command means "execute the command buffer". 
The result will be a list of tables in the HR schema, as shown in the following screenshot. We have now verified that Oracle works from the command line, and thus it is very likely that WebSphere will also be able to communicate with Oracle. Next, we will cover how to configure WebSphere to communicate with Oracle.

JDBC providers

Deployed applications use JDBC providers to communicate with an RDBMS. The JDBC provider object supplies the actual JDBC driver implementation class for access to a specific database type, that is, Oracle, SQL Server, DB/2, and so on. You associate a data source with a JDBC provider; the data source provides the connection to the RDBMS. Together, the JDBC provider and the data source provide connectivity to a database.

Creating a JDBC provider

Before creating a JDBC provider, you will need to understand the application's resource requirements, that is, the data sources that the application references. You should know the answers to the following questions:

Does your application require a data source? Not all applications use a database.
What security credentials are required to connect to the database? Databases are often secured, and you will need a username and password to access them.
Are there any web components (servlets, JSPs, and so on) or EJBs which need to access a database?

Answering these questions will determine the amount of configuration required for your database connectivity. To create a JDBC provider, log into the administration console and click on the JDBC Providers link in the JDBC category of the Resources section, located in the left-hand panel of the administration console as shown below. We need to choose an appropriate scope from the Scope drop-down list. Scope determines how the provider will be seen by applications; we will talk more about scope in the JNDI section. For now, please choose the Cell scope as seen below. Click New and the new JDBC provider wizard is displayed. 
Select the Database type as Oracle, Provider type as Oracle JDBC Driver, Implementation type as Connection pool data source, and enter a Name for the new JDBC provider. We are going to enter MyJDBCDriver as the provider name, as seen in the previous screenshot. We also have to choose an Implementation type; there are two for Oracle JDBC drivers, explained below:

Connection pool data source — use this if your application does not require connections that support two-phase commit transactions.
XA data source — use this if your application requires two-phase commit transactions.

Click Next to go to the database classpath screen. As shown in the following screenshot, enter the database classpath information for the JDBC provider. As long as you have installed Oracle XE using the default paths, you will be able to use the following path in the Directory location field:

/usr/lib/oracle/xe/app/oracle/product/10.2.0/server/jdbc/lib

Click Next to proceed to the next step, where you will be presented with a summary as shown in the following screenshot. Review the JDBC provider information that you have entered and click Finish. You will now be prompted to save the JDBC provider configuration. Click Save, as shown in the following screenshot. Saving will persist the configuration to resources.xml on disk. Before we finish, we need to update the JDBC provider with the correct JAR file, because the default assumes a later Oracle driver than the one we are using. 
To change the driver, we must first select the driver that we created earlier, called MyJDBCDriver, as shown in the following screenshot. In the screen presented, we are going to change the Classpath field from:

${ORACLE_JDBC_DRIVER_PATH}/ojdbc6.jar

to:

${ORACLE_JDBC_DRIVER_PATH}/ojdbc14.jar

Since WAS 7.0 is the latest version of WebSphere, the wizard already knows about the newer Oracle 11g JDBC driver; however, we are connecting to Oracle XE 10g, and the driver for it is ojdbc14.jar. The Classpath field can contain a list of paths or JAR file names which together form the location for the resource provider classes. Classpath entries are separated by pressing the Enter key and must not contain path separator characters (such as ; or :). Classpaths can contain variable (symbolic) names that can be substituted using a variable map. Check your driver installation notes for the specific JAR file names that are required. Click Apply and save the configuration.
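The two rules in that last paragraph — one entry per line with no ; or : separators, and ${VAR} symbolic names resolved through a variable map — can be sketched as a small validator (my own illustrative helper, not a WebSphere API):

```python
import re

def expand_classpath(entries, variables):
    """Reject entries containing path separators, then expand
    ${VAR} symbolic names using the supplied variable map."""
    expanded = []
    for entry in entries:
        if ";" in entry or ":" in entry:
            raise ValueError(f"path separators are not allowed: {entry!r}")
        expanded.append(re.sub(r"\$\{(\w+)\}", lambda m: variables[m.group(1)], entry))
    return expanded

print(expand_classpath(
    ["${ORACLE_JDBC_DRIVER_PATH}/ojdbc14.jar"],
    {"ORACLE_JDBC_DRIVER_PATH": "/usr/lib/oracle/xe/app/oracle/product/10.2.0/server/jdbc/lib"},
))
# ['/usr/lib/oracle/xe/app/oracle/product/10.2.0/server/jdbc/lib/ojdbc14.jar']
```

The variable map plays the role of the WebSphere variable ORACLE_JDBC_DRIVER_PATH mentioned earlier, so the same entry resolves differently per environment.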


Deploying your Applications on WebSphere Application Server 7.0 (Part 1)

Packt
30 Sep 2009
10 min read
Inside the Application Server

Before we look at deploying an application, we will quickly run over the internals of WebSphere Application Server (WAS). The anatomy of WebSphere Application Server is quite detailed, so for now we will briefly explain its important parts. The figure below shows the basic architecture model for a WebSphere Application Server JVM. An important thing to remember is that the WebSphere product code base is the same for all operating systems (platforms). The Java applications that are deployed are written once and can be deployed to all versions of a given WebSphere release without any code changes.

JVM

All WebSphere Application Servers are essentially Java Virtual Machines (JVMs). IBM has implemented the J2EE application server model in a way which maximizes the J2EE specification and also provides many enhancements, creating features specific to WAS. J2EE applications are deployed to an application server.

Web container

A common type of business application is a web application. The WAS web container is essentially a Java-based web server contained within an application server's JVM, which serves the web components of an application to the client browser.

Virtual hosts

A virtual host is a configuration element which is required for the web container to receive HTTP requests. As in most web server technologies, a single machine may be required to host multiple applications and appear to the outside world as multiple machines. Resources that are associated with a particular virtual host are designed not to share data with resources belonging to another virtual host, even if the virtual hosts share the same physical machine. Each virtual host is given a logical name and assigned one or more DNS aliases by which it is known. A DNS alias is the TCP/IP host name and port number used to request a web resource, for example: <hostname>:9080/<servlet>. 
By default, two virtual host aliases are created during installation: one for the administration console, called admin_host, and another called default_host, which is assigned as the default virtual host for all application deployments unless overridden during the deployment phase. All web applications must be mapped to a virtual host; otherwise, web browser clients cannot access the application being served by the web container.

Environment settings

WebSphere uses Java environment variables to control settings and properties relating to the server environment. WebSphere variables are used to configure product path names, such as the location of a database driver (for example, ORACLE_JDBC_DRIVER_PATH), and environmental values required by internal WebSphere services and/or applications.

Resources

Configuration data is stored in XML files in the underlying configuration repository of the WebSphere Application Server. Resource definitions are a fundamental part of J2EE administration. Application logic can vary depending on the business requirement, and there are several resource types that can be used by an application. Some of the most commonly used resource types are:

- JDBC (Java Database Connectivity): Used to define providers and data sources.
- URL providers: Used to define end-points for external services, for example web services.
- JMS providers: Used to define messaging configurations for the Java Message Service, MQ connection factories, queue destinations, and so on.
- Mail providers: Enable applications to send and receive mail, typically using the SMTP protocol.

JNDI

The Java Naming and Directory Interface (JNDI) is employed to make applications more portable. JNDI is essentially an API for a directory service that allows Java applications to look up data and objects via a name. JNDI is a lookup service where each resource can be given a unique name.
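In application code, the indirection looks like this: the application asks an initial context for a logical name, and the container returns whatever resource the administrator bound to that name. A minimal sketch follows; the lookup method assumes it runs inside a container such as WAS, where an initial context is available, while the main method only demonstrates how JNDI composite names parse, which works in any JVM. The name jdbc/mydatasource is an illustrative example:

```java
import javax.naming.CompositeName;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class JndiLookupSketch {

    // Inside a container (for example, WAS), the environment supplies the
    // initial context, so no provider properties are needed here.
    static DataSource lookupDataSource(String jndiName) throws NamingException {
        InitialContext ctx = new InitialContext();
        return (DataSource) ctx.lookup(jndiName);
    }

    public static void main(String[] args) throws NamingException {
        // JNDI names are composite names: atomic components separated by "/".
        CompositeName name = new CompositeName("jdbc/mydatasource");
        System.out.println(name.size());
        System.out.println(name.get(0));
        System.out.println(name.get(1));
    }
}
```

Because the application only ever refers to the logical name, an administrator can rebind it to a different data source without any code change, which is exactly the portability benefit described above.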
Naming operations, such as lookups and binds, are performed on contexts. All naming operations begin with obtaining an initial context; you can view the initial context as a starting point in the namespace. Applications use JNDI lookups to find a resource using a known naming convention. Administrators can override the resource the application actually connects to without requiring a reconfiguration or code change in the application. This level of abstraction using JNDI is fundamental and required for the proper use of WebSphere by applications.

Application file types

There are three file types we work with in Java applications. Two can be installed via the WebSphere deployment process: one is known as an EAR file and the other is a WAR file. The third is a JAR file (often re-usable common code), which is contained in either the WAR or the EAR. These file types are explained below:

JAR file: A JAR file (Java ARchive) is used for organising many files into one. The internal physical layout is much like a ZIP file. A JAR is generally used to distribute Java classes and associated metadata. In J2EE applications, the JAR file often contains utility code, shared libraries, and EJBs. An EJB is a server-side model that encapsulates the business logic of an application and is one of several Java APIs in the Java Platform, Enterprise Edition, with its own specification. You can visit http://java.sun.com/products/ejb/ for information on EJBs.

EAR file: An Enterprise ARchive file represents a J2EE application that can be deployed to a WebSphere application server. EAR files are standard Java archive files (JAR) and have the file extension .ear. An EAR file can consist of the following:

- One or more web modules packaged in WAR files
- One or more EJB modules packaged in JAR files
- One or more application client modules
- Additional JAR files required by the application
- Any combination of the above

The modules that make up the EAR file are themselves packaged in archive files specific to their types; for example, a web module contains web archive files and an EJB module contains Java archive files. EAR files also contain a deployment descriptor (an XML file called application.xml) that describes the contents of the application and contains instructions for the entire application, such as security settings to be used in the run-time environment.

WAR file: A WAR file (Web ARchive) is essentially a JAR file used to encapsulate a collection of JavaServer Pages (JSP), servlets, Java classes, HTML, and other related files, which may include XML and other file types depending on the web technology used. For information on JSP and servlets, you can visit http://java.sun.com/products/jsp/. Servlets can support dynamic web page content; they provide dynamic server-side processing and can connect to databases. JavaServer Pages (JSP) files can be used to separate HTML code from the business logic in web pages. Essentially, they too can generate dynamic pages; however, they employ Java beans (classes) which contain specific, detailed server-side logic. A WAR file also has its own deployment descriptor, called web.xml, which is used to configure the WAR file and can contain instructions for resource mapping and security.

When an EJB module or web module is installed as a standalone application, it is automatically wrapped in an Enterprise Archive (EAR) file by the WebSphere deployment process and is managed on disk by WebSphere as an EAR file structure. So, if a WAR file is deployed, WebSphere will convert it into an EAR file.

Deploying an application

As WebSphere administrators, we are asked to deploy applications. These applications may be written in-house or delivered by a third-party vendor.
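Whatever their origin, archives in these formats are plain JAR/ZIP files, so they can be inspected with the standard java.util.jar API before deployment. The self-contained sketch below builds a tiny EAR-like archive in a temporary file (the entry names are illustrative, not a complete EAR) and lists its entries:

```java
import java.io.FileOutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;
import java.util.jar.JarOutputStream;

public class EarInspector {

    public static void main(String[] args) throws Exception {
        Path ear = Files.createTempFile("DefaultApplication", ".ear");

        // Build a minimal EAR-like archive: a deployment descriptor plus a
        // placeholder web module. Real EARs are produced by build tools.
        try (JarOutputStream out = new JarOutputStream(new FileOutputStream(ear.toFile()))) {
            out.putNextEntry(new JarEntry("META-INF/application.xml"));
            out.write("<application/>".getBytes("UTF-8"));
            out.closeEntry();
            out.putNextEntry(new JarEntry("DefaultWebApplication.war"));
            out.closeEntry();
        }

        // EARs are ordinary JAR archives, so JarFile can enumerate them.
        try (JarFile jar = new JarFile(ear.toFile())) {
            jar.stream().map(JarEntry::getName).sorted().forEach(System.out::println);
        }
        Files.delete(ear);
    }
}
```

Listing the entries of a vendor-supplied EAR this way is a quick sanity check that the expected modules and the application.xml descriptor are present before you start the deployment wizard.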
Either way, they will most often be provided as an EAR file for deployment into WebSphere. For the purpose of understanding a manual deployment, we are now going to install a default application. The default application can be located in the <was_root>/installableApps folder. The following steps show how we deploy the EAR file.

Open the administration console, navigate to the Applications section, and click on New Application as shown below. You will see the option to create one of the following three types of applications:

- Enterprise Application: An EAR file on a server configured to hold installable web applications (WAR), Java archives, library files, and other resource files.
- Business Level Application: A business-level application is an administration model similar to a server or cluster. However, it lends itself to the configuration of applications as a single grouping of modules.
- Asset: An asset represents one or more application binary files stored in an asset repository, such as Java archives, library files, and other resource files. Assets can be shared between applications.

Click on New Enterprise Application. As seen in the following screenshot, you will be presented with the option to browse for the file either locally on your machine or remotely on the application server's file system. Since the EAR file we wish to install is on the server, we will choose the Remote file system option.

It can sometimes be quicker to deploy large applications by first using Secure File Transfer Protocol (SFTP) to move the file to the application server's file system and then deploying from the remote file system, as opposed to transferring via the local browse option, which performs an HTTP file transfer that takes more resources and can be slower. The following screenshot depicts the path to the new application.

Click Browse.... You will see the name of the application server node. If there is more than one profile, select the appropriate instance. You will then be able to navigate through a web-based version of the Linux file system, as seen in the following screenshot. Locate the DefaultApplication.ear file. It will be in a folder called installableApps located in the root WebSphere install folder, for example <was_root>/installableApps, as shown in the previous screenshot. Click Next to begin installing the EAR file.

On the Preparing for the application installation page, there are two options to choose from:

- Fast Path: The deployment wizard skips advanced settings and prompts only for the absolute minimum settings required for the deployment.
- Detailed: The wizard allows the user, at each stage of the installation, to override any of the J2EE properties and configurations available to an EAR file.

Choose the Fast Path option. The Choose to generate default bindings and mappings setting allows the user to accept the default settings for resource mappings or override them with specific values. Resource mappings will exist depending on the complexity of the EAR. Bindings are JNDI-to-resource mappings: each EAR file has pre-configured XML descriptors which specify the JNDI name the application resource uses to map to a matching resource provided by the application server. An example would be a JDBC data source the application refers to as jdbc/mydatasource, whereas the actual data source created in the application server might be called jdbc/datasource1. By choosing the Detailed option, you are prompted by the wizard to decide how you want to map the resource bindings. By choosing the Fast Path option, you allow the application to use its pre-configured default JNDI names. We will select Fast Path, as demonstrated in the following screenshot.

Click on Next. In the next screen, we are given the ability to fill out some specific deployment options. Below is a list of the options presented on this page.