
How-To Tutorials - Servers

95 Articles

Using Nginx as a Reverse Proxy

Packt
23 May 2011
7 min read
Nginx 1 Web Server Implementation Cookbook: Over 100 recipes to master using the Nginx HTTP server and reverse proxy

Introduction

Nginx is most often deployed as a reverse proxy. A reverse proxy is a type of proxy server that retrieves resources for a client from one or more backend servers; these resources are returned to the client as though they originated from the proxy server itself. Thanks to its event-driven architecture and C codebase, Nginx consumes significantly less CPU and memory than many better-known alternatives. This article covers the use of Nginx as a reverse proxy in several common scenarios. We will look at how to set up a Rails application behind Nginx, how to configure load balancing, and how to add a caching layer that can improve the performance of an existing site without any changes to its codebase.

Using Nginx as a simple reverse proxy

In its simplest form, Nginx can be used as a reverse proxy for any site; it acts as an intermediary layer for security, load distribution, caching, and compression. In effect, it can improve the overall quality of the site for the end user without any change to the application source code, by distributing incoming requests across multiple backend servers and by caching static as well as dynamic content.

How to do it...

You will first need to define proxy.conf, which will later be included in the main configuration of the reverse proxy that we are setting up:

proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
client_max_body_size 10m;
client_body_buffer_size 128k;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffers 32 4k;

To use Nginx as a reverse proxy for a site running on a local port of the server, the following configuration will suffice:

server {
  listen 80;
  server_name example1.com;
  access_log /var/www/example1.com/log/nginx.access.log;
  error_log /var/www/example1.com/log/nginx_error.log debug;
  location / {
    include proxy.conf;
    proxy_pass http://127.0.0.1:8080;
  }
}

How it works...

In this recipe, Nginx simply acts as a proxy for the defined backend server, which is listening on port 8080 of the same machine and can be any HTTP web application. Later in this article, other recipes will look at how to define more backend servers and how to set them up to respond to requests.

Setting up a Rails site using Nginx as a reverse proxy

In this recipe, we will set up a working Rails site with Nginx running in front of the application. This assumes that the reader has some knowledge of Rails and Thin. There are other ways of running Rails behind Nginx as well, such as Phusion Passenger.

How to do it...

This requires you to install Thin first, then configure it for your application, and finally configure Nginx. If you already have RubyGems installed, the following command will install Thin; otherwise you will need to install it from source:

sudo gem install thin

Now you need to generate the Thin configuration. This will create a configuration file in the /etc/thin directory:

sudo thin config -C /etc/thin/myapp.yml -c /var/rails/myapp --servers 5 -e production

Now you can start the Thin service. Depending on your operating system, the startup command will vary.
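On a typical Linux setup, the Thin gem can install an init script and the cluster can then be started through it. The commands below are only a sketch: the init script location and the availability of a service wrapper are assumptions that vary by distribution.

sudo thin install              # installs an init.d script for Thin (assumed to land in /etc/init.d/thin)
sudo /etc/init.d/thin start    # starts all servers configured under /etc/thin/
# or, where a service wrapper exists:
sudo service thin start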
Assuming that you have Nginx installed, you will need to add the following to the configuration file:

upstream thin_cluster {
  server unix:/tmp/thin.0.sock;
  server unix:/tmp/thin.1.sock;
  server unix:/tmp/thin.2.sock;
  server unix:/tmp/thin.3.sock;
  server unix:/tmp/thin.4.sock;
}

server {
  listen 80;
  server_name www.example1.com;
  root /var/www.example1.com/public;

  location / {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    try_files $uri $uri/index.html $uri.html @thin;
  }

  location @thin {
    include proxy.conf;
    proxy_pass http://thin_cluster;
  }

  error_page 500 502 503 504 /50x.html;
  location = /50x.html {
    root html;
  }
}

How it works...

This is a fairly simple Rails stack: we configure and run five upstream Thin servers that communicate with Nginx through Unix socket connections. The try_files rule ensures that Nginx serves static files directly, while all dynamic requests are passed to the Rails backend. Note how the proxy headers are set so that the client IP is forwarded correctly to the Rails application. Many applications need the client IP to show geo-located information, and logging it is useful for identifying whether geography is a factor when the site misbehaves for specific clients.

Setting up correct reverse proxy timeouts

In this section we will set up reverse proxy timeouts, which determine what your users experience when the backend application is unable to respond to a request. In such cases it is advisable to set up sensible timeout pages, so that the user understands that further refreshing may only aggravate the problems on the web application.

How to do it...

You will first need to set up proxy.conf, which will later be included in the configuration:

proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
client_max_body_size 10m;
client_body_buffer_size 128k;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffers 32 4k;

The reverse proxy timeouts themselves are a few simple directives that we set in the Nginx configuration, as in the following example:

server {
  listen 80;
  server_name example1.com;
  access_log /var/www/example1.com/log/nginx.access.log;
  error_log /var/www/example1.com/log/nginx_error.log debug;
  #set your default location
  location / {
    include proxy.conf;
    proxy_read_timeout 120;
    proxy_connect_timeout 120;
    proxy_pass http://127.0.0.1:8080;
  }
}

How it works...

In the preceding configuration, proxy_connect_timeout sets how long Nginx waits while establishing a connection to the backend, and proxy_read_timeout sets how long it waits for the backend to send a response; both have been raised to 120 seconds here.

Setting up caching on the reverse proxy

In a setup where Nginx acts as the layer between the client and the backend web application, caching is one of the clear benefits to be had. In this recipe, we will look at setting up caching for any site for which Nginx is acting as a reverse proxy. Due to its extremely small footprint and modular architecture, Nginx has become quite the Swiss Army knife of the modern web stack.

How to do it...
This example configuration shows how we can enable caching when using Nginx as a reverse proxy web server:

http {
  proxy_cache_path /var/www/cache levels=1:2 keys_zone=my-cache:8m max_size=1000m inactive=600m;
  proxy_temp_path /var/www/cache/tmp;
  ...
  server {
    listen 80;
    server_name example1.com;
    access_log /var/www/example1.com/log/nginx.access.log;
    error_log /var/www/example1.com/log/nginx_error.log debug;
    #set your default location
    location / {
      include proxy.conf;
      proxy_pass http://127.0.0.1:8080/;
      proxy_cache my-cache;
      proxy_cache_valid 200 302 60m;
      proxy_cache_valid 404 1m;
    }
  }
}

How it works...

This configuration implements a simple cache with a maximum size of 1000 MB; it keeps pages returned with HTTP response codes 200 and 302 in the cache for 60 minutes, and pages returned with HTTP response code 404 for 1 minute. The proxy_cache_path directive creates the cache storage on initialization, and the directives inside the location block configure what is cached for that location. It is possible to set up more than one cache path for multiple locations.

There's more...

This was a relatively small demonstration of what can be achieved with the caching side of the proxy module. There are more directives that can be really useful in optimizing and making your stack faster and more efficient.
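The list of additional directives is not included in this excerpt. As an illustrative sketch, a few commonly used proxy-module directives are shown below; the directive names are standard Nginx, but the values are only assumptions to be adapted to your site:

proxy_cache_key "$scheme$host$request_uri";    # what identifies a cached object
proxy_cache_use_stale error timeout updating;  # serve stale content while the backend is down or being refreshed
proxy_cache_min_uses 2;                        # only cache an object after it has been requested twice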

Improve Your Surfing Experience – Try Squid

Packt
25 Apr 2011
7 min read
Squid Proxy Server 3.1: Beginner's Guide: Improve the performance of your network using the caching and access control capabilities of Squid

Squid proxy server enables you to cache your web content and return it quickly on subsequent requests. In this article we will learn about the different configuration options available, and about the transparent and accelerated modes that let you focus on particular areas of your network. In this article by Kulbir Saini, author of Squid Proxy Server 3.1: Beginner's Guide, we will cover:

Configuring Squid to use DNS servers
A few directives related to logging
Other important or commonly used configuration directives

DNS server configuration

For every request received from a client, Squid needs to resolve the domain name before it can contact the target web server. For this purpose, Squid can use either its built-in internal DNS client or an external DNS program to resolve hostnames. The default behavior is to use the internal DNS client, unless we passed the --disable-internal-dns option to the configure program before compiling Squid, as shown:

$ ./configure --disable-internal-dns

Let's have a quick look at the DNS-related configuration directives provided by Squid.

Specifying the DNS program path

The directive cache_dns_program is used to specify the path of the external DNS program built with Squid. If we have not moved the Squid-related files after installing, this directive will have the correct value by default. However, if the DNS program is located elsewhere, we can specify the path using the following directive:

cache_dns_program /path/to/dnsprogram

Controlling the number of DNS client processes

The number of parallel instances of the DNS program specified by cache_dns_program can be controlled with the directive dns_children. Its syntax is as follows:

dns_children max startup=n idle=n

The parameter max determines the maximum number of DNS programs which can run at any one time. We should set it reasonably high, because Squid has to wait for a response from the DNS program before it can proceed any further, and setting it too low will keep Squid waiting. The default value is 32. The value of the parameter startup determines the number of DNS programs started when Squid starts. This can be set to zero, in which case Squid will not start any processes up front; the first request to Squid will result in the creation of the first child process. The value of the parameter idle determines the number of idle processes kept available at any one time. More requests will result in the creation of more processes, subject to a total of max processes. The minimum acceptable value for this parameter is 1.

Setting the DNS name servers

By default, Squid picks up the name servers from the file /etc/resolv.conf. However, if we want to specify a different list of name servers, we can use the directive dns_nameservers.

Time for action – adding DNS name servers

A list of IP addresses can be passed to this directive on one line, or several IP addresses can be written on separate lines, like the following:

dns_nameservers 192.0.2.25 198.51.100.25
dns_nameservers 203.0.113.25

The previous configuration lines will set the name servers to 192.0.2.25, 198.51.100.25, and 203.0.113.25.
What just happened?

We added three DNS name servers to the Squid configuration file, which Squid will use to resolve the domain names in the requests received from clients.

Setting the hosts file

Squid can read hostname and IP address associations from the hosts file, generally found at /etc/hosts. This file normally contains hostnames for the machines or servers in the local area network. We can specify the location of the hosts file using the directive hosts_file, as shown:

hosts_file /etc/hosts

If we don't want Squid to read the hosts file, we can set the value to none.

Default domain name for requests

Using the directive append_domain, we can append a default domain name to hostnames without any period (.) in them. This is generally useful for handling local domain names. The value of append_domain must begin with a period (.). For example:

append_domain .example.com

Timeout for DNS queries

If the DNS servers do not respond to a query within the time specified by the directive dns_timeout, they are assumed to be unavailable. The default timeout value is two minutes. Considering ever-increasing network speeds, we can set this to a somewhat lower value; for example, if there is no response within one minute, we can consider the DNS service to be unavailable.

Caching the DNS responses

The IP addresses of most domains change quite rarely, so it's safe to cache positive responses from DNS servers for a few hours. This doesn't save much bandwidth, but caching DNS responses can reduce latency quite significantly, because a DNS query is performed for every request. For caching DNS responses while using an external DNS program, Squid provides two directives, positive_dns_ttl and negative_dns_ttl, to tune the caching of DNS responses. The directive positive_dns_ttl determines the maximum time for which a positive DNS response will be cached, while negative_dns_ttl determines the time for which a negative DNS response will be cached. The directive negative_dns_ttl also serves as the minimum time for which positive DNS responses can be cached. Let's see example values for both directives:

positive_dns_ttl 8 hours
negative_dns_ttl 30 seconds

We should keep the time to live (TTL) for negative responses low, as negative responses may be due to transient problems with the DNS servers.

Setting the size of the DNS cache

Squid performs domain name to address lookups for all MISS requests, and address to domain name lookups for requests involving ACLs such as dstdomain. These lookups are cached. To control the size of these caches, Squid exposes four directives: ipcache_size (number), ipcache_low (percent), ipcache_high (percent), and fqdncache_size (number). Let's see what these directives mean. The directive ipcache_size determines the maximum number of entries that can be cached for domain name to address lookups. As these entries take a very small amount of memory and the amount of main memory available these days is large, we can cache tens of thousands of them. The default value for this directive is 1024, but we can easily push it to 15,000 on busy caches. The directives ipcache_low (say 95) and ipcache_high (say 97) are low and high water marks for the IP cache; Squid will try to keep the number of entries in the cache between 95 percent and 97 percent of ipcache_size. Using fqdncache_size, we can set the maximum number of address to domain name lookups that can be in the cache at any time.
These entries also take a very small amount of memory, so we can cache a large number of them. The default value is 1024, but we can easily push it to 10,000 on busy caches.
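Pulling the directives above together, a squid.conf fragment tuned for a busy cache might look like the following sketch; the values simply restate the examples from the text and should be adapted to your own deployment:

# DNS resolution and caching (illustrative values)
dns_timeout 1 minute
positive_dns_ttl 8 hours
negative_dns_ttl 30 seconds
ipcache_size 15000
ipcache_low 95
ipcache_high 97
fqdncache_size 10000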

Squid Proxy Server: Fine Tuning to Achieve Better Performance

Packt
25 Apr 2011
12 min read
Squid Proxy Server 3.1: Beginner's Guide: Improve the performance of your network using the caching and access control capabilities of Squid

Whether you run only one site or are in charge of a whole network, Squid is an invaluable tool that improves performance considerably. Caching and performance optimization usually require a lot of work on the developer's part, but Squid does much of that for you. In this article we will learn to fine-tune our cache to achieve a better HIT ratio, to save bandwidth and reduce the average page load time. In this article by Kulbir Saini, author of Squid Proxy Server 3.1: Beginner's Guide, we will take a look at the following:

Cache peers or neighbors
Caching web documents in the main memory and on hard disk
Tuning Squid to enhance bandwidth savings and reduce latency

Cache peers or neighbors

Cache peers or neighbors are other proxy servers with which our Squid proxy server can share its cache, to reduce bandwidth usage and access time, or which it can use as parent or sibling proxy servers to satisfy its clients' requests. We normally deploy more than one proxy server in the same network to share the load of a single server for better performance. The proxy servers can use each other's caches to retrieve cached web documents locally, improving performance. Let's have a brief look at the directives provided by Squid for communication among different cache peers.

Declaring cache peers

The directive cache_peer is used to tell Squid about proxy servers in our neighborhood. Let's have a quick look at the syntax for this directive:

cache_peer HOSTNAME_OR_IP_ADDRESS TYPE PROXY_PORT ICP_PORT [OPTIONS]

Here, HOSTNAME_OR_IP_ADDRESS is the hostname or IP address of the target proxy server or cache peer. TYPE specifies the type of the proxy server, which in turn determines how that proxy server will be used by ours. The other proxy server can be used as a parent, a sibling, or a member of a multicast group.

Time for action – adding a cache peer

Let's add a proxy server (parent.example.com) that will act as a parent proxy to our proxy server:

cache_peer parent.example.com parent 3128 3130 default proxy-only

3130 is the standard ICP port. If the other proxy server is not using the standard ICP port, we should change the line accordingly. This line directs Squid to use parent.example.com as a proxy server to satisfy client requests when it is not able to do so itself. The option default specifies that this cache peer should be used as a last resort when other peers can't be contacted. The option proxy-only specifies that content fetched through this peer should not be cached locally. This is helpful when we don't want to replicate cached web documents, especially when the two peers are connected by a high-bandwidth backbone.

What just happened?

We added parent.example.com as a cache peer, or parent proxy, to our Squid proxy server. We also used the option proxy-only, which means that requests fetched through this cache peer will not be cached on our proxy server. There are several other options with which cache peers can be added, for various purposes such as building a hierarchy.

Quickly restricting access to domains using peers

If we have added a few proxy servers as cache peers to our Squid server, we may want a little control over the requests being forwarded to the peers.
The directive cache_peer_domain is a quick way to achieve the desired control. The syntax of this directive is quite simple:

cache_peer_domain CACHE_PEER_HOSTNAME [!]DOMAIN1 [[!]DOMAIN2 ...]

Here, CACHE_PEER_HOSTNAME is the hostname or IP address of the cache peer, as used when declaring it with the cache_peer directive. We can specify any number of domains which may be fetched through this cache peer. Adding a bang (!) as a prefix to a domain name will prevent the use of this cache peer for that particular domain. Let's say we want to use the videoproxy.example.com cache peer for browsing video portals like YouTube, Netflix, Metacafe, and so on:

cache_peer_domain videoproxy.example.com .youtube.com .netflix.com
cache_peer_domain videoproxy.example.com .metacafe.com

These two lines will configure Squid to use the videoproxy.example.com cache peer only for requests to the domains youtube.com, netflix.com, and metacafe.com. Requests to other domains will not be forwarded through this peer.

Advanced control on access using peers

We just learned about cache_peer_domain, which provides a way to control access using cache peers. However, it's not very flexible in granting or revoking access. That's where cache_peer_access comes into the picture; it provides a very flexible way to control access through cache peers using ACLs. The syntax and implications are similar to other access directives such as http_access:

cache_peer_access CACHE_PEER_HOSTNAME allow|deny [!]ACL_NAME

Let's write the following configuration lines, which will allow only clients on the network 192.0.2.0/24 to use the cache peer acadproxy.example.com for accessing YouTube, Netflix, and Metacafe:

acl my_network src 192.0.2.0/24
acl video_sites dstdomain .youtube.com .netflix.com .metacafe.com
cache_peer_access acadproxy.example.com allow my_network video_sites
cache_peer_access acadproxy.example.com deny all

In the same way, we can use other ACL types to achieve finer control over access to various websites through cache peers.

Caching web documents

All this time we have been talking about caching web documents and how it helps in saving bandwidth and improving the end user experience; now it's time to learn how and where Squid actually keeps these cached documents so that they can be served on demand. Squid uses the main memory (RAM) and hard disks for storing or caching web documents. Caching is a complex process, but Squid handles it beautifully and exposes directives in squid.conf so that we can control how much should be cached and what should be given the highest priority. Let's have a brief look at the caching-related directives provided by Squid.

Using main memory (RAM) for caching

Web documents cached in the main memory, or RAM, can be served very quickly, as the data read/write speed of RAM is very high compared to hard disks with mechanical parts. However, the space available for caching in RAM is very small compared to the cache space available on hard disks, so only very popular objects, or documents with a very high probability of being requested again, are stored there. Because cache space in memory is precious, documents are stored on a priority basis. Let's have a look at the different types of objects which can be cached.

In-transit objects or current requests

These are the objects related to the current requests, and they have the highest priority for being kept in the cache space in RAM.
These objects must be kept in RAM; if the incoming request rate is so high that we are about to overflow the cache space in RAM, Squid will try to move the served part (the part which has already been sent to the client) to disk to create free space in RAM.

Hot or popular objects

These objects or web documents are popular and are requested quite frequently compared to others. They are stored in the cache space left after storing the in-transit objects, as they have a lower priority than in-transit objects. They are generally pushed to disk when more RAM cache space is needed for storing in-transit objects.

Negatively cached objects

Negatively cached objects are error messages which Squid has encountered while fetching a page or web document on behalf of a client. For example, if a request for a web page has resulted in an HTTP error 404 (page not found), and Squid receives a subsequent request for the same web page, then Squid will check whether the cached response is still fresh and, if so, will return a reply from the cache itself. If a request for the same page arrives after the negatively cached object corresponding to that page has expired, Squid will check again whether the page is available. Negatively cached objects have the same priority as hot or popular objects, and they can be pushed to disk at any time in favor of in-transit objects.

Specifying cache space in RAM

So far we have learned how the available cache space is used for storing or caching different types of objects with different priorities. Now it's time to learn about specifying the amount of RAM we want to dedicate to caching. While deciding on the RAM space for caching, we should be neither greedy nor paranoid. If we dedicate a large percentage of RAM to caching, overall system performance will suffer, as the system will start swapping processes once there is no free RAM left for other processes. If we use a very low percentage of RAM for caching, we won't be able to take full advantage of Squid's caching mechanism. The default size of the memory cache is 256 MB.

Time for action – specifying space for memory caching

We can use the extra RAM available on a running system, after sparing a chunk of memory that the running processes may need under heavy load. To find out the amount of free RAM available on our system, we can use either the top or the free command. To find the free RAM in megabytes, we can use the free command as follows:

$ free -m

For more details, please check the top(1) and free(1) man pages. Now, let's say we have 4 GB of total RAM on the server and all the processes run comfortably in 1 GB of RAM. After reserving another 512 MB for situations where the running processes may take extra memory, we can safely allocate 2.5 GB of RAM for caching. To specify the cache size in main memory, we use the directive cache_mem. It has a very simple format. As we have learned before, we can specify the memory size in bytes, KB, MB, or GB. Let's specify the cache memory size for the previous example:

cache_mem 2500 MB

The value specified with cache_mem here is in megabytes.

What just happened?

We learned how to calculate the approximate space in main memory that can be used to cache web documents and therefore enhance the performance of the Squid server by a significant margin.
Have a go hero – calculating cache_mem for your machine

Note down the total RAM on your machine and calculate the approximate space in megabytes that you can allocate for memory caching.

Maximum object size in memory

As we have limited space in memory for caching objects, we need to use that space in an optimized way. We should plan to keep this value fairly low: setting it too high means fewer cached objects will fit in memory and the HIT rate (the fraction of requests found in the cache) will suffer significantly. The default maximum size used by Squid is 512 KB, but we can change it depending on our value for cache_mem. So, if we want to set it to 1 MB, because we have a lot of RAM available for caching (as in the previous example), we can use the maximum_object_size_in_memory directive as follows:

maximum_object_size_in_memory 1 MB

This sets the maximum allowed object size in the memory cache to 1 MB.

Memory cache mode

With newer versions of Squid, we can control which objects are kept in the memory cache, to optimize performance. Squid offers the directive memory_cache_mode to set the mode Squid should use for the space available in the memory cache. There are three different modes available:

always: keep all the most recently fetched objects that can fit in the available space. This is the default mode used by Squid.
disk: only objects which are already cached on a hard disk and have received a HIT (that is, they were requested again after being cached) are stored in the memory cache.
network: only objects which have been fetched from the network (including neighbors) are kept in the memory cache.

Setting the mode is easy and can be done using the memory_cache_mode directive, as shown:

memory_cache_mode always

This configuration line sets the memory cache mode to always, meaning that the most recently fetched objects will be kept in memory.
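Taken together, the memory-caching directives discussed above might be combined in squid.conf as in the sketch below; the numbers simply restate the examples from the text (a server with 4 GB of RAM) and are not recommendations for every machine:

# Memory caching (illustrative values for a 4 GB server)
cache_mem 2500 MB
maximum_object_size_in_memory 1 MB
memory_cache_mode always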

How to Configure Squid Proxy Server

Packt
25 Apr 2011
8 min read
Squid Proxy Server 3.1: Beginner's Guide: Improve the performance of your network using the caching and access control capabilities of Squid

In this article by Kulbir Saini, author of Squid Proxy Server 3.1: Beginner's Guide, we are going to learn how to configure Squid according to the requirements of a given network. We will learn about the general syntax used in a Squid configuration file. Specifically, we will cover the following:

Quick exposure to Squid
Syntax of the configuration file
HTTP port, the most important configuration directive
Access Control Lists (ACLs)
Controlling access to various components of Squid

Quick start

Let's have a look at the minimal configuration you will need to get started. Get ready with the configuration file located at /opt/squid/etc/squid.conf, as we are going to make the changes and additions necessary to quickly set up a minimal proxy server:

cache_dir ufs /opt/squid/var/cache/ 500 16 256
acl my_machine src 192.0.2.21 # Replace with your IP address
http_access allow my_machine

We should add the previous lines at the top of our current configuration file (changing the IP address accordingly). Now we need to create the cache directories. We can do that with the following command:

$ /opt/squid/sbin/squid -z

We are now ready to run our proxy server, which can be done by running the following command:

$ /opt/squid/sbin/squid

Squid will start listening on port 3128 (the default) on all network interfaces on our machine. Now we can configure our browser to use Squid as an HTTP proxy server, with the host set to the IP address of our machine and the port set to 3128. Once the browser is configured, try browsing to http://www.example.com/. That's it! We have configured Squid as an HTTP proxy server! Now try to browse to http://www.example.com:897/ and observe the message you receive. The message shown is an access denied message sent to you by Squid. Now, let's move on to understanding the configuration file in detail.

Syntax of the configuration file

Squid's configuration file can normally be found at /etc/squid/squid.conf, /usr/local/squid/etc/squid.conf, or ${prefix}/etc/squid.conf, where ${prefix} is the value passed to the --prefix option of the configure command before compiling Squid. In newer versions of Squid, a documented version of squid.conf, known as squid.conf.documented, can be found alongside squid.conf. In this article, we'll cover some of the important directives available in the configuration file. For a detailed description of all the directives used in the configuration file, please check http://www.squid-cache.org/Doc/config/. The syntax of Squid's configuration file is similar to that of many other Linux/Unix programs. Generally, there are a few lines of comments containing useful documentation before every directive used in the configuration file. This makes it easier to understand and configure directives, even for people who are not familiar with configuring applications using configuration files. Normally, we just need to read the comments and use the appropriate options available for a particular directive. Lines beginning with the character # are treated as comments and are completely ignored by Squid while parsing the configuration file. Additionally, any blank lines are also ignored.

# Test comment. This and the above blank line will be ignored by Squid.
Let's see a snippet from the documented configuration file (squid.conf.documented):

# TAG: cache_effective_user
# If you start Squid as root, it will change its effective/real
# UID/GID to the user specified below. The default is to change
# to UID of nobody.
# see also; cache_effective_group
#Default:
# cache_effective_user nobody

In the previous snippet, the first line gives the name of the directive, in this case cache_effective_user. The lines following the tag line provide brief information about the usage of the directive. The last line shows the default value used if none is specified.

Types of directives

Now, let's have a brief look at the different types of directives and the values that can be specified.

Single-valued directives

These are directives which take only one value. They should not be used multiple times in the configuration file, because the last occurrence of the directive will override all previous declarations. For example, logfile_rotate should be specified only once:

logfile_rotate 10
# Few lines containing other configuration directives
logfile_rotate 5

In this case, five logfile rotations will be performed when we trigger Squid to rotate logfiles.

Boolean-valued or toggle directives

These are also single-valued directives, but they are generally used to toggle features on or off:

query_icmp on
log_icp_queries off
url_rewrite_bypass off

We use these directives when we need to change the default behavior.

Multi-valued directives

Directives of this type generally take one or more values. We can either specify all the values on a single line after the directive, or write them on multiple lines with the directive repeated each time. All the values for a directive are aggregated from the different lines:

hostname_aliases proxy.example.com squid.example.com

Optionally, we can pass them on separate lines as follows:

hostname_aliases proxy.example.com
hostname_aliases squid.example.com

Both of the previous snippets will instruct Squid to use proxy.example.com and squid.example.com as aliases for the hostname of our proxy server.

Directives with time as a value

A few directives take values with time units. Squid understands the words seconds, minutes, hours, and so on, which can be suffixed to numerical values to specify actual durations. For example:

request_timeout 3 hours
persistent_request_timeout 2 minutes

Directives with file or memory size as values

The values passed to these directives are generally suffixed with file or memory size units like bytes, KB, MB, or GB. For example:

reply_body_max_size 10 MB
cache_mem 512 MB
maximum_object_size_in_memory 8192 KB

Now that we are familiar with the configuration file syntax, let's open the squid.conf file and learn about the frequently used directives.

Have a go hero – categorize the directives

Open the documented Squid configuration file and find at least three directives of each type discussed above. Don't use the directives already used in the examples.

HTTP port

This directive is used to specify the port on which Squid will listen for client connections. The default behavior is to listen on port 3128 on all the available interfaces on a machine.

Time for action – setting the HTTP port

Now, we'll see the various ways to set the HTTP port in the squid.conf file. In its simplest form, we just specify the port on which we want Squid to listen:

http_port 8080

We can also specify the IP address and port combination on which we want Squid to listen.
We normally use this approach when we have multiple interfaces on our machine and we want Squid to listen only on the interface connected to the local area network (LAN):

http_port 192.0.2.25:3128

This will instruct Squid to listen on port 3128 on the interface with the IP address 192.0.2.25. Another form in which we can specify http_port is a hostname and port combination:

http_port myproxy.example.com:8080

The hostname will be translated to an IP address by Squid, and Squid will then listen on port 8080 on that particular IP address. Another aspect of this directive is that it can take multiple values on separate lines. Let's see what the following lines will do:

http_port 192.0.2.25:8080
http_port lan1.example.com:3128
http_port lan2.example.com:8081

These lines will make Squid listen on three different IP address and port combinations. This is generally helpful when we have clients in different LANs which are configured to use different ports for the proxy server. In newer versions of Squid, we may also specify the mode of operation, such as intercept, tproxy, accel, and so on. Intercept mode supports the interception of requests without needing to configure the client machines:

http_port 3128 intercept

tproxy mode is used to enable Linux Transparent Proxy support for spoofing outgoing connections using the client's IP address:

http_port 8080 tproxy

We should note that enabling intercept or tproxy mode disables any configured authentication mechanism. Also, IPv6 is supported for tproxy but requires very recent kernel versions; IPv6 is not supported in intercept mode. Accelerator mode is enabled using the mode accel. It's a good idea to listen on port 80 if we are configuring Squid in accelerator mode. This mode can't be used on its own; we must specify at least one website we want to accelerate:

http_port 80 accel defaultsite=website.example.com

We should choose the HTTP port carefully, as standard ports like 3128 or 8080 can pose a security risk if the port is not secured properly. If we don't want to spend time on securing the port, we can use an arbitrary port number above 10000.

What just happened?

In this section, we learned about the usage of one of the most important directives, http_port. We learned about the various ways in which the HTTP port can be specified, depending on the requirements. We can make Squid listen on multiple interfaces and on different ports on different interfaces.
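After changing http_port, a quick way to confirm that Squid is actually listening where we expect is to reload the configuration and inspect the listening sockets. The commands below are a generic sketch: the binary path follows the /opt/squid prefix used in the quick start above, and ss may need to be replaced with netstat on older systems.

$ /opt/squid/sbin/squid -k reconfigure   # tell the running Squid to reload the edited squid.conf
$ ss -lntp | grep squid                  # list the TCP ports Squid is listening on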

Squid Proxy Server 3: getting started

Packt
06 Apr 2011
12 min read
What is a proxy server?

A proxy server is a computer system sitting between the client requesting a web document and the target server (another computer system) serving the document. In its simplest form, a proxy server facilitates communication between client and target server without modifying requests or replies. When we initiate a request for a resource from the target server, the proxy server takes over our connection and represents itself as a client to the target server, requesting the resource on our behalf. If a reply is received, the proxy server returns it to us, giving the impression that we have communicated directly with the target server. In more advanced forms, a proxy server can filter requests based on various rules, and may allow communication only when requests can be validated against the available rules. The rules are generally based on the IP address of the client or target server, the protocol, the content type of web documents, and so on. In such a setup, clients can't make direct requests to the web servers; to facilitate communication between clients and web servers, we connect them through a proxy server which acts as the medium of communication. Sometimes a proxy server can modify requests or replies, or can even store the replies from the target server locally, to fulfil the same request from the same or other clients at a later stage. Storing replies locally for later use is known as caching. Caching is a popular technique used by proxy servers to save bandwidth, reduce load on web servers, and improve the end user's browsing experience.

Proxy servers are mostly deployed to perform the following:

Reduce bandwidth usage
Enhance the user's browsing experience by reducing page load time, which in turn is achieved by caching web documents
Enforce network access policies
Monitor user traffic or report Internet usage for individual users or groups
Enhance user privacy by not exposing a user's machine directly to the Internet
Distribute load among different web servers to reduce the load on a single server
Empower a poorly performing web server
Filter requests or replies using an integrated virus/malware detection system
Load balance network traffic across multiple Internet connections
Relay traffic within a local area network

In simple terms, a proxy server is an agent between a client and a target server that has a list of rules against which it validates every request or reply, and then allows or denies access accordingly.

What is a reverse proxy?

Reverse proxying is a technique of storing the replies or resources from a web server locally, so that subsequent requests for the same resource can be satisfied from the local copy on the proxy server, sometimes without even contacting the web server. The proxy server, or web cache, checks whether the locally stored copy of the web document is still valid before serving it. The lifetime of the locally stored web document is calculated from additional HTTP headers received from the web server. Using HTTP headers, web servers can control whether a given document/response should be cached by a proxy server or not.

Web caching is mostly used:

To reduce bandwidth usage. A large number of static web documents like CSS and JavaScript files, images, videos, and so on can be cached, as they don't change frequently and constitute the major part of a response from a web server.
By ISPs, to reduce the average page load time and enhance the browsing experience for their customers on dial-up or broadband.
To take load off a very busy web server by serving static pages/documents from a proxy server's cache.

How to download Squid

Squid is available in several forms (compressed source archives, source code from a version control system, binary packages such as RPM, DEB, and so on) from Squid's official website, various Squid mirrors worldwide, and the software repositories of almost all popular operating systems. Squid is also shipped with many Linux/Unix distributions. There are various versions and releases of Squid available for download from Squid's official website. To get the most out of a Squid installation, it's best to check out the latest source code from a Version Control System (VCS), so that we get the latest features and fixes. But be warned: the latest source code from a VCS is generally leading edge and may not be stable, or may not even work properly. Though code from a VCS is good for learning or testing Squid's new features, you are strongly advised not to use it for production deployments. If we want to play safe, we should probably download the latest stable version, or a stable version from the older releases. Stable versions are generally tested before they are released and are supposed to work out of the box; they can be used directly in production deployments.

Time for action – identifying the right version

A list of available versions of Squid is maintained here. For production environments, we should only use versions listed under the Stable Versions section. If we want to test new Squid features in our environment, or if we intend to provide feedback to the Squid community about a new version, then we should use one of the Beta Versions. The version list shows the First Production Release Date and Latest Release Date for the stable versions. If we click on any of the versions, we are directed to a page containing a list of all the releases in that particular version. Looking at the page for version 3.1, for every release there is a release date along with links for downloading compressed source archives. Different versions of Squid may have different features. For example, all the features available in Squid version 2.7 may or may not be available in newer versions such as Squid 3.x. Some features may have been deprecated or have become redundant over time, and they are generally removed. On the other hand, Squid 3.x may have several new features, or existing features in an improved and revised form. Therefore, we should always aim for the latest version, but depending on the environment we may choose a stable or a beta version. Also, if we need specific features that are not available in the latest version, we may choose from the available releases in a different branch.

What just happened?

We had a brief look at the pages on Squid's official website listing the different versions and releases of Squid. We also learned which versions and releases we should download and use for different purposes.

Methods of obtaining Squid

Having identified the version of Squid we should use for compiling and installation, let's have a look at the ways in which we can obtain Squid release 3.1.10.

Using source archives

Compressed source archives are the most popular way of getting Squid.
To download the source archive, please visit the Squid download page, http://www.squid-cache.org/Download/. This web page has links for downloading the different versions and releases of Squid, either from the official website or from mirrors worldwide. We can use either HTTP or FTP to get the Squid source archive.

Time for action – downloading Squid

Now we are going to download Squid 3.1.10 from Squid's official website. Let's go to the download page and click on the link to Version 3.1. We'll be taken to a page displaying the various releases in version 3.1. The link with the display text tar.gz in the Download column is a link to the compressed source archive for Squid release 3.1.10. To download Squid 3.1.10 using the web browser, just click on the link. Alternatively, we can use wget to download the source archive from the command line as follows:

wget http://www.squid-cache.org/Versions/v3/3.1/squid-3.1.10.tar.gz

What just happened?

We successfully retrieved Squid version 3.1.10 from Squid's official website. The process for retrieving other stable or beta versions is very similar.

Obtaining the latest source code from Bazaar VCS

Advanced users may be interested in getting the very latest source code from the Squid code repository using Bazaar. We can safely skip this section if we are not familiar with VCS in general. Bazaar is a popular version control system used to track project history and facilitate collaboration. From version 3.x onwards, the Squid source code has been migrated to Bazaar; therefore, we should ensure that we have Bazaar installed on our system in order to check out the source code from the repository. To find out more about Bazaar, or for Bazaar installation and configuration manuals, please visit Bazaar's official website. Once we have set up Bazaar, we should head to the Squid code repository mirrored on Launchpad. From here we can browse all the versions and branches of Squid. In the repository listing, Series: trunk represents the development branch, which contains code that is still in development and is not ready for production use. The branches with the status Mature are stable and can be used right away in production environments.

Time for action – using Bazaar to obtain source code

Now that we are familiar with the various branches, versions, and releases, let's proceed to checking out the source code with Bazaar. To download code from any branch, the syntax of the command is as follows:

bzr branch lp:squid[/branch[/version]]

branch and version are optional parameters in the previous command. So, if we want to get branch 3.1, the command will be:

bzr branch lp:squid/3.1

This command will fetch the source code from Launchpad and may take a considerable amount of time, depending on the Internet connection. If we want to download the source code for Squid version 3.1.10, the command will be:

bzr branch lp:squid/3.1/3.1.10

Here, 3.1 is the branch name and 3.1.10 is the specific version of Squid that we want to check out.

What just happened?

We learned to fetch the source code for any Squid branch or release using Bazaar, from Squid's source code hosted on Launchpad.

Have a go hero – fetching the source code

Using the command syntax we learned in the previous section, fetch the source code for Squid version 3.0.stable25 from Launchpad.
Solution:

bzr branch lp:squid/3.0/3.0.stable25

Explanation: if we browse to that particular version on Launchpad, the version number used in the command becomes obvious.

Using binary packages

Squid binary packages are pre-compiled, ready-to-install software bundles. Binary packages are available in the software repositories of almost all Linux/Unix-based operating systems. Depending on the operating system, only stable and sometimes well-tested beta versions make it into the software repositories, so they are ready for production use.

Installing Squid

Squid can be installed using the source code we obtained in the previous section, or using a package manager which, in turn, uses the binary package available for our operating system. Let's have a detailed look at the ways in which we can install Squid.

Installing Squid from source code

Installing Squid from source code is a three-step process:

Select the features and operating-system-specific settings.
Compile the source code to generate the executables.
Place the generated executables and other required files in their designated locations for Squid to function properly.

We can perform some of these steps using automated tools that make the compilation and installation process relatively easy.

Compiling Squid

Compiling Squid is the process of compiling several files containing C/C++ source code and generating the executables. It is really easy and can be done in a few steps. For compiling Squid, we need an ANSI C/C++ compliant compiler. If we already have the GNU C/C++ compilers (GNU Compiler Collection (GCC) and g++, which are available on almost every Linux/Unix-based operating system by default), we are ready to begin the actual compilation.

Why compile?

Compiling Squid is a bit more work than installing it from a binary package. However, we recommend compiling Squid from source instead of using pre-compiled binaries. Let's walk through a few advantages of compiling Squid from source:

While compiling, we can enable extra features which may not be enabled in the pre-compiled binary package.
When compiling, we can also disable extra features that are not needed for a particular environment. For example, we may not need authentication helpers or ICMP support.
configure probes the system for several features and enables or disables them accordingly, while pre-compiled binary packages have the features detected for the system the source was compiled on.
Using configure, we can specify an alternate location for installing Squid. We can even install Squid without root or superuser privileges, which may not be possible with a pre-compiled binary package.

Though compiling Squid from source has a lot of advantages over installing from a binary package, the binary package has its own advantages. For example, when we are in damage-control mode or a crisis situation and need to get the proxy server up and running quickly, installing from a binary package will be faster.

Uncompressing the source archive

If we obtained Squid as a compressed archive, we must extract it before we can proceed any further. If we obtained Squid from Launchpad using Bazaar, we don't need to perform this step.

tar -xvzf squid-3.1.10.tar.gz

tar is a popular command used to extract compressed archives of various types; it can also be used to compress many files into a single archive. The preceding command will extract the archive to a directory named squid-3.1.10.
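The excerpt ends before the build itself. As a rough sketch, a GNU-style build of the extracted source would look like the following; the --prefix and --sysconfdir paths are illustrative assumptions (they echo the /opt/squid layout and the --sysconfdir tip used elsewhere in these articles), not recommendations:

cd squid-3.1.10
./configure --prefix=/opt/squid --sysconfdir=/etc/squid/   # choose install location and configuration directory
make                                                        # compile the sources
sudo make install                                           # copy the executables and support files into place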

Nginx HTTP Server FAQs

Packt
25 Mar 2011
4 min read
Nginx HTTP Server: Adopt Nginx for your web applications to make the most of your infrastructure and serve pages faster than ever

Q: What is Nginx and how is it pronounced?
A: Nginx is a lightweight HTTP server originating from Russia, pronounced "engine X".

Q: Where can one download Nginx and find resources related to it?
A: Although Nginx is a relatively new and growing project, there are already a good number of resources available on the World Wide Web (WWW) and an active community of administrators and developers. The official website, at www.nginx.net, is rather simple and does not provide much information or documentation other than links for downloading the latest versions. On the other hand, you will find a lot of interesting documentation and examples on the official wiki, wiki.nginx.org.

Q: Which different versions are currently available?
A: There are currently three version branches in the project:

Stable version: This version is usually recommended, as it is approved by both developers and users, but it is usually a little behind the development version. The current latest stable version is 0.7.66, released on June 07, 2010.
Development version: This is the latest version available for download. Although it is generally solid enough to be installed on production servers, you may run into the occasional bug. As such, the stable version is recommended, even though you do not get to use the latest features. The current latest development version is 0.8.40, released on June 07, 2010.
Legacy version: If for some reason you are interested in looking at the older versions, you will find two of them: a legacy version and a legacy stable version, coming as the 0.5.38 and 0.6.39 releases respectively.

Q: Are the development versions stable enough to be used on production servers?
A: Cliff Wells, founder and maintainer of the nginx.org wiki website and community, believes so: "I generally use and recommend the latest development version. It's only bit me once!" Early adopters rarely report critical problems. It is up to you to select the version you will be using on your server. The Nginx developers have decided to maintain backwards compatibility in new versions. You can find more information on version changes, new additions, and bug fixes on the dedicated change log page on the official website.

Q: How can one upgrade Nginx without losing a single connection?
A: There are many situations where you need to replace the Nginx binary, for example, when you compile a new version and wish to put it in production, or simply after having enabled new modules and rebuilt the application. What most administrators would do in this situation is stop the server, copy the new binary over the old one, and start Nginx again. While this is not considered a problem for most websites, there may be cases where uptime is critical and connection losses must be avoided at all costs. Fortunately, Nginx embeds a mechanism allowing you to switch binaries with uninterrupted uptime; zero percent request loss is guaranteed if you follow these steps carefully:

Replace the old Nginx binary (by default, /usr/local/nginx/sbin/nginx) with the new one.
Find the pid of the Nginx master process, for example with ps x | grep nginx | grep master, or by looking at the value found in the pid file.
Send a USR2 (12) signal to the master process: kill -USR2 ***, replacing *** with the pid found in step 2. This will initiate the upgrade by renaming the old .pid file and running the new binary.
Send a WINCH (28) signal to the old master process: kill -WINCH ***, replacing *** with the pid found in step 2. This will trigger a graceful shutdown of the old worker processes.
Make sure that all the old worker processes are terminated, and then send a QUIT signal to the old master process: kill -QUIT ***, replacing *** with the pid found in step 2.
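Put together as a shell session, the upgrade sequence might look like the sketch below. The binary path, the location of the freshly compiled binary (objs/nginx in the source tree), and the default pid file location are assumptions that depend on how Nginx was built:

# 1. install the new binary over the old one (keep a backup of the old binary)
cp /usr/local/nginx/sbin/nginx /usr/local/nginx/sbin/nginx.old
cp objs/nginx /usr/local/nginx/sbin/nginx

# 2. find the master process pid (default pid file location assumed)
PID=$(cat /usr/local/nginx/logs/nginx.pid)

# 3. start the new binary, then gracefully retire the old workers
kill -USR2 "$PID"
kill -WINCH "$PID"

# 4. once the old workers have exited, stop the old master
kill -QUIT "$PID"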
article-image-squid-proxy-server-tips-and-tricks
Packt
16 Mar 2011
6 min read

Squid Proxy Server: Tips and Tricks

Packt
16 Mar 2011
6 min read
Rotating log files frequently
Tip: For better performance, it is good practice to rotate log files frequently instead of going with large files.

--sysconfdir=/etc/squid/ option
Tip: It's a good idea to use the --sysconfdir=/etc/squid/ option with configure, so that you can share the configuration across different Squid installations while testing.

tproxy mode
Tip: We should note that enabling intercept or tproxy mode disables any configured authentication mechanism. Also, IPv6 is supported for tproxy but requires very recent kernel versions. IPv6 is not supported in the intercept mode.

Securing the port
Tip: We should set the HTTP port carefully as the standard ports like 3128 or 8080 can pose a security risk if we don't secure the port properly. If we don't want to spend time on securing the port, we can use any arbitrary port number above 10000.

ACL naming
Tip: We should carefully note that one ACL name can't be used with more than one ACL type.
acl destination dstdomain example.com
acl destination dst 192.0.2.24
The above code is invalid as it uses the ACL name destination across two different ACL types.

HTTP access control
Tip: The default behavior of HTTP access control is a bit tricky if access for a client can't be identified by any of the access rules. In such cases, the default behavior is to do the opposite of the last access rule. If the last access rule is deny, then the action will be to allow access and vice-versa. Therefore, to avoid any confusion or undesired behavior, it's a good practice to add a deny all line after the access rules (a minimal sketch follows this set of tips).

Using the http_reply_access directive
Tip: We should be really careful while using the http_reply_access directive. When a request is allowed by http_access, Squid will contact the original server, even if a rule with the http_reply_access directive denies the response. This may lead to serious security issues. For example, consider a client receiving a malicious URL, which can submit a client's critical private information using the HTTP POST method. If the client's request passes through http_access rules but the response is denied by an http_reply_access rule, then the client will be under the impression that nothing happened, but a hacker will have cleverly stolen our client's private information.

refresh_pattern directive
Tip: Using refresh_pattern to cache the non-cacheable responses or to alter the lifetime of the cached objects may lead to unexpected behavior or responses from the web servers. We should use this directive very carefully.

Expires HTTP header
Tip: We should note that the Expires HTTP header overrides min and max values.

Overriding directives
Tip: Please note that the directive never_direct overrides hierarchy_stoplist.

Path of the PID file
Tip: Setting the path of the PID file to none will prevent regular management operations like automatic log rotation or restarting Squid. The operating system will not be able to stop Squid at the time of a shutdown or restart.

Parsing the configuration file
Tip: It's good practice to parse the configuration file for any errors or warnings using the -k parse option before issuing the reconfigure signal.

Squid signals
Tip: Please note that shutdown, interrupt, and kill are Squid signals and not the system kill signals which are emulated.

Squid process in debug mode
Tip: The Squid process running in debug mode may write a lot of debugging output to the cache.log file and may quickly consume a lot of disk space.
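As a quick illustration of the access control and configuration parsing tips above, here is a minimal sketch (the ACL name localnet and its address range are assumptions; adapt them to your own network):

    # squid.conf: allow the local network, then fall through to an explicit deny
    acl localnet src 192.168.0.0/16
    http_access allow localnet
    http_access deny all

    # check the configuration before asking the running Squid to reload it
    squid -k parse && squid -k reconfigure

Because the last rule is an explicit deny all, any client that matches none of the earlier rules is refused, instead of relying on Squid's "opposite of the last rule" default.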
Access Control List (ACL) elements with dst
Tip: ACL elements configured with dst as an ACL type work slower compared to ACLs with the src ACL type, as Squid will have to resolve the destination domain name before evaluating the ACL, which will involve a DNS query.

ACL elements with srcdomain
Tip: ACL elements with srcdomain as an ACL type work slower compared to ACLs with the dstdomain ACL type, because Squid will have to perform a reverse DNS lookup before evaluating the ACL. This will introduce significant latency. Moreover, the reverse DNS lookup may not work properly with local IP addresses.

Adding port numbers
Tip: We should note that the port numbers we add to the SSL ports list should be added to the safe ports list as well.

Take care while using the ident protocol
Tip: The ident protocol is not really secure and it's very easy to spoof an ident server. So, it should be used carefully.

ident lookups
Tip: Please note that ident lookups are blocking calls and Squid will wait for the reply before it can proceed with processing the request, and that may increase the delays by a significant margin.

Denied access by http_access
Tip: If a client is denied access by an http_access rule, it'll never match an http_reply_access rule. This is because, if a client's request is denied, then Squid will not fetch a reply.

Authentication helpers
Tip: Configuring authentication helpers is of no use unless we use the proxy_auth ACL type to control access.

basic_pop3_auth helper
Tip: The basic_pop3_auth helper uses the Net::POP3 Perl package. So, we should make sure that we have this package installed before using the authentication helper.

--enable-ssl option
Tip: Please note that we should use the --enable-ssl option with the configure program before compiling, if we want Squid to accept HTTPS requests. Also note that several operating systems don't provide packages capable of HTTPS reverse-proxy due to GPL and policy constraints.

URL redirector programs
Tip: We should be careful while using URL redirector programs because Squid passes the entire URL along with query parameters to the URL redirector program. This may lead to leakage of sensitive client information, as some websites use HTTP GET methods for passing clients' private information.

Using the url_rewrite_access directive to block request types
Tip: Please note that certain request types such as POST and CONNECT must not be rewritten as they may lead to errors and unexpected behavior. It's a good idea to block them using the url_rewrite_access directive.

In this article we saw some tips and tricks on Squid Proxy Server to enhance the performance of your network.

Further resources on this subject:
Configuring Apache and Nginx [Article]
Different Ways of Running Squid Proxy Server [Article]
Lighttpd [Book]
VirtualBox 3.1: Beginner's Guide [Book]
Squid Proxy Server 3.1: Beginner's Guide [Book]

article-image-different-ways-running-squid-proxy-server
Packt
24 Feb 2011
10 min read

Different Ways of Running Squid Proxy Server

Packt
24 Feb 2011
10 min read
  Squid Proxy Server 3.1: Beginner's Guide
Improve the performance of your network using the caching and access control capabilities of Squid
Get the most out of your network connection by customizing Squid's access control lists and helpers
Set up and configure Squid to get your website working quicker and more efficiently
No previous knowledge of Squid or proxy servers is required
Part of Packt's Beginner's Guide series: lots of practical, easy-to-follow examples accompanied by screenshots

Command line options
Normally, all of the Squid configuration options reside within the squid.conf file (the main Squid configuration file). To tweak the Squid functionality, the preferred method is to change the options in the squid.conf file. However, there are some options which can also be controlled using additional command line options while running Squid. These options are not very popular and are rarely used, but they are very useful for debugging problems with the Squid proxy server. Before exploring the command line options, let's see how Squid is run from the command line. The location of the Squid binary file depends on the --prefix option passed to the configure command before compiling. So, depending upon the value of the --prefix option, the location of the Squid executable may be one of /usr/local/sbin/squid or ${prefix}/sbin/squid, where ${prefix} is the value of the option --prefix passed to the configure command. Therefore, to run Squid, we need to run one of the following commands on the terminal:
When the --prefix option was not used with the configure command, the default location of the Squid executable will be /usr/local/sbin/squid.
When the --prefix option was used and was set to a directory, then the location of the Squid executable will be ${prefix}/sbin/squid.
It's painful to type the absolute path for Squid to run. So, to avoid typing the absolute path, we can include the path to the Squid executable in our PATH shell variable, using the export command as shown in the following example:
$ export PATH=$PATH:/usr/local/sbin/
Alternatively, we can use the following command:
$ export PATH=$PATH:/opt/squid/sbin/
We can also add the preceding command to our ~/.bashrc or ~/.bash_profile file to avoid running the export command every time we enter a new shell. After setting the PATH shell variable appropriately, we can run Squid by simply typing the following command in the shell:
$ squid
This command will run Squid after loading the configuration options from the squid.conf file. We'll be using the squid command without an absolute path for running the Squid process. Please use the appropriate path according to the installation prefix which you have chosen. Now that we know how to run Squid from the command line, let's have a look at the various command line options.

Getting a list of available options
Before actually moving forward, we should first check the available set of options for our Squid installation.

Time for action – listing the options
Like a lot of other Linux programs, Squid also provides the option -h which can be used to retrieve a list of options:
squid -h
The previous command will result in the following output:
Usage: squid [-cdhvzCFNRVYX] [-s | -l facility] [-f config-file] [-[au] port] [-k signal]
-a port   Specify HTTP port number (default: 3128).
-d level  Write debugging to stderr also.
-f file   Use given config-file instead of /opt/squid/etc/squid.conf.
-h        Print help message.
-k reconfigure|rotate|shutdown|interrupt|kill|debug|check|parse
          Parse configuration file, then send signal to running copy (except -k parse) and exit.
-s | -l facility
          Enable logging to syslog.
-u port   Specify ICP port number (default: 3130), disable with 0.
-v        Print version.
-z        Create swap directories.
-C        Do not catch fatal signals.
-F        Don't serve any requests until store is rebuilt.
-N        No daemon mode.
-R        Do not set REUSEADDR on port.
-S        Double-check swap during rebuild.
...
We will now have a look at a few important options from the preceding list. We can also have a look at the squid(8) man page or http://linux.die.net/man/8/squid for more details.

What just happened?
We have just used the squid command to list the available options which we can use on the command line.

Getting information about our Squid installation
Various features may vary across different versions of Squid. Before proceeding any further, it's a good idea to know the version of Squid installed on our machine.

Time for action – finding out the Squid version
Just in case we want to check which version of Squid we are using or the options we used with the configure command before compiling, we can use the option -v on the command line. Let's run Squid with this option:
squid -v
If we try to run the preceding command in the terminal, it will produce an output similar to the following:
configure options: '--config-cache' '--prefix=/opt/squid/' '--enable-storeio=ufs,aufs' '--enable-removal-policies=lru,heap' '--enable-icmp' '--enable-useragent-log' '--enable-referer-log' '--enable-cache-digests' '--with-large-files' --enable-ltdl-convenience

What just happened?
We used the squid command with the -v option to find out the version of Squid installed on our machine, and the options used with the configure command before compiling Squid.

Creating cache or swap directories
The cache directories specified using the cache_dir directive in the squid.conf file must already exist before Squid can actually use them.

Time for action – creating cache directories
Squid provides the -z command line option to create the swap directories. Let's see an example:
squid -z
If this option is used and the cache directories don't exist already, the output will look similar to the following:
2010/07/20 21:48:35| Creating Swap Directories
2010/07/20 21:48:35| Making directories in /squid_cache/00
2010/07/20 21:48:35| Making directories in /squid_cache/01
2010/07/20 21:48:35| Making directories in /squid_cache/02
2010/07/20 21:48:35| Making directories in /squid_cache/03
...
We should use this option whenever we add new cache directories in the Squid configuration file.

What just happened?
When the squid command is run with the option -z, Squid reads all the cache directories from the configuration file and checks if they already exist. It will then create the directory structure for all the cache directories that don't exist.

Have a go hero – adding cache directories
Add two or three test cache directories with different values of level 1 (8, 16, and 32) and level 2 (64, 256, and 512) to the configuration file. Then try creating them using the squid command. Now study the difference in the directory structure.

Using a different configuration file
The default location for Squid's main configuration file is ${prefix}/etc/squid/squid.conf. Whenever we run Squid, the main configuration is read from the default location.
While testing or deploying a new configuration, we may want to use a different configuration file just to check whether it will work or not. We can achieve this by using the option -f, which allows us to specify a custom location for the configuration file. Let's see an example: squid -f /etc/squid.minimal.conf # OR squid -f /etc/squid.alternate.conf If Squid is run this way, Squid will try to load the configuration from /etc/squid.minimal.conf or /etc/squid.alternate.conf, and it will completely ignore the squid.conf from the default location. Getting verbose output When we run Squid from the terminal without any additional command line options, only warnings and errors are displayed on the terminal (or stderr). However, while testing, we would like to get a verbose output on the terminal, to see what is happening when Squid starts up. Time for action – debugging output in the console To get more information on the terminal, we can use the option -d. The following is an example: squid -d 2 We must specify an integer with the option -d to indicate the verbosity level. Let's have a look at the meaning of the different levels: Only critical and fatal errors are logged when level 0 (zero) is used. Level 1 includes the logging of important problems. Level 2 and higher includes the logging of informative details and other actions. Higher levels result in more output on the terminal. A sample output on the terminal with level 2 would look similar to the following: 2010/07/20 21:40:53| Starting Squid Cache version 3.1.10 for i686-pc-linux-gnu... 2010/07/20 21:40:53| Process ID 15861 2010/07/20 21:40:53| With 1024 file descriptors available 2010/07/20 21:40:53| Initializing IP Cache... 2010/07/20 21:40:53| DNS Socket created at [::], FD 7 2010/07/20 21:40:53| Adding nameserver 192.168.36.222 from /etc/resolv.conf 2010/07/20 21:40:53| User-Agent logging is disabled. 2010/07/20 21:40:53| Referer logging is disabled. 2010/07/20 21:40:53| Unlinkd pipe opened on FD 13 2010/07/20 21:40:53| Local cache digest enabled; rebuild/rewrite every 3600/3600 sec 2010/07/20 21:40:53| Store logging disabled 2010/07/20 21:40:53| Swap maxSize 0 + 262144 KB, estimated 20164 objects 2010/07/20 21:40:53| Target number of buckets: 1008 2010/07/20 21:40:53| Using 8192 Store buckets 2010/07/20 21:40:53| Max Mem size: 262144 KB 2010/07/20 21:40:53| Max Swap size: 0 KB 2010/07/20 21:40:53| Using Least Load store dir selection 2010/07/20 21:40:53| Current Directory is /opt/squid/sbin 2010/07/20 21:40:53| Loaded Icons. As we can see, Squid is trying to dump a log of actions that it is performing. The messages shown are mostly startup messages and there will be fewer messages when Squid starts accepting connections. Starting Squid in debug mode is quite helpful when Squid is up and running and users complain about poor speeds or being unable to connect. We can have a look at the debugging output and the appropriate actions to take. What just happened? We started Squid in debugging mode and can now see Squid writing an output on the command line, which is basically a log of the actions which Squid is performing. If Squid is not working, we'll be able to see the reasons on the command line and we'll be able to take actions accordingly. Full debugging output on the terminal The option -d specifies the verbosity level of the output dumped by Squid on the terminal. If we require all of the debugging information on the terminal, we can use the option -X, which will force Squid to write debugging information at every single step. 
If the option -X is used, we'll see information about parsing the squid.conf file and the actions taken by Squid, based on the configuration directives encountered. Let's see a sample output produced when option -X is used:
...
2010/07/21 21:50:51.515| Processing: 'acl my_machines src 172.17.8.175 10.2.44.46 127.0.0.1 172.17.11.68 192.168.1.3'
2010/07/21 21:50:51.515| ACL::Prototype::Registered: invoked for type src
2010/07/21 21:50:51.515| ACL::Prototype::Registered: yes
2010/07/21 21:50:51.515| ACL::FindByName 'my_machines'
2010/07/21 21:50:51.515| ACL::FindByName found no match
2010/07/21 21:50:51.515| aclParseAclLine: Creating ACL 'my_machines'
2010/07/21 21:50:51.515| ACL::Prototype::Factory: cloning an object for type 'src'
2010/07/21 21:50:51.515| aclParseIpData: 172.17.8.175
2010/07/21 21:50:51.515| aclParseIpData: 10.2.44.46
2010/07/21 21:50:51.515| aclParseIpData: 127.0.0.1
2010/07/21 21:50:51.515| aclParseIpData: 172.17.11.68
2010/07/21 21:50:51.515| aclParseIpData: 192.168.1.3
...
Let's see what this output means. In the first line, Squid encountered a line defining an ACL my_machines. The next few lines in the output describe Squid invoking different methods to parse the line, creating a new ACL, and then assigning values to it. This option can be very helpful while debugging ambiguous ACLs.

Running as a normal process
Sometimes during testing, we may not want Squid to run as a daemon. Instead, we may want it to run as a normal process which we can interrupt easily by pressing CTRL-C. To achieve this, we can use the option -N. When this option is used, Squid will not run in the background; it will run in the current shell instead.

Parsing the Squid configuration file for errors or warnings
It's a good idea to parse or check the configuration file (squid.conf) for any errors or warnings before we actually try to run Squid, or reload a Squid process which is already running in a production deployment. Squid provides an option -k with an argument parse, which, if supplied, will force Squid to parse the current Squid configuration file and report any errors and warnings. The -k option is also used to check and report directive and option changes when we upgrade our Squid version.
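Putting the options above together, a typical testing workflow might look like the following shell sketch (the configuration path /etc/squid.test.conf is an assumption used for illustration):

    squid -f /etc/squid.test.conf -k parse   # check the candidate configuration for errors and warnings
    squid -f /etc/squid.test.conf -z         # create any cache directories it declares
    squid -f /etc/squid.test.conf -N -d 2    # run in the foreground with informative debug output

Once the configuration behaves as expected in the foreground, the same file can be moved into place and Squid started normally as a daemon.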

article-image-ibm-websphere-application-server-security-threefold-view
Packt
21 Feb 2011
8 min read

IBM WebSphere Application Server Security: A Threefold View

Packt
21 Feb 2011
8 min read
  IBM WebSphere Application Server v7.0 Security
Secure your IBM WebSphere applications with Java EE and JAAS security standards using this book and eBook

Imagine yourself at an athletic event. Hey! No, no, you are at the right place. Yes, this is a technical article. Just bear with me for a minute. Well, now that the little misunderstanding is out of the way, let's go back to the beginning. The home crowd is really excited about the performance of its team. However, that superb performance has not yet been reflected on the scoreboard. When that performance finally pays off with the long-awaited score, 'it' happens! The score gets called off. It is not at all unlikely that a controversial call would be made, or worse yet, not made! Or so we think. There is a group of players and fans of the team that just scored that 'see' the play as a masterpiece of athletic execution. Then there is another group, that of players and coaches of the visiting team, who clearly see a violation of the rules just before the score. And there is a third group, the referees. Well, who knows what they see! The fact is that for the same action, there may be several perceptions of the same set of events. Albert Einstein and other scientists provided a great example of multi-perception with the wave-particle duality concept. In a similar fashion, a WebSphere-based environment could be analyzed in a number of forms. None of the forms or views is absolutely correct or incorrect. Each view, however, helps to focus on the appropriate set of components and their relationships for a given situation or need. WebSphere Application Server technology is a broad and complex subject. This article provides three WAS ND environment views, emphasizing security, which will help the reader connect individual security tasks to the big picture. One view aids the WebSphere administrator to relate isolated security tasks to the overall middleware infrastructure (for example, messaging systems, directory services, and backend databases, to name a few). This is useful in possible interactions with teams responsible for such technologies. On the other hand, a second view helps the administrator to link specific security configuration tasks to a particular Enterprise Application (for example, EJB applications, Service Integration Bus, and many more) set of components. This view will help the administrator to relate to possible development team needs. The article also includes a third view, one that focuses on the J2EE technology stack as it relates to security. This view could help blend the former two views.

Enterprise Application-Server infrastructure architecture view
This article starts with the Application Server infrastructure architecture view. The actual order of each of these major article sub-sections is really unimportant. However, since there needs to be a beginning, the infrastructure architecture view is thus selected. A possibly more formal name for what is being conveyed in this section would be the Enterprise J2EE Application Server infrastructure architecture. In this way, the scope of technologies that make up the application-centric architecture is well defined as that pertaining to J2EE applications. Nevertheless, this type of architecture is not exclusive to a WebSphere Application Server Network Deployment environment. Well, it's not, in a way. If the architecture does not mention specific implementations of a function, it is a generic view of the architecture.
On the other hand, if the architecture view defines or includes specific branded technologies of a function (for example, IHS for a web server function), then it is a specialized architecture. The point is that other J2EE application server products not related to the WebSphere umbrella may use the same generic type of infrastructure architecture. Therefore, this view has to do with J2EE application servers and the enterprise infrastructure components needed to sustain such application servers in a way that they can host a variety of enterprise applications (also known as J2EE applications). The following diagram provides an example of a basic WebSphere Application Server infrastructure architecture topology: The use of multiple user registries is new in version 7.0 Simple infrastructure architecture characteristics The architecture is basic since it only shows the minimum infrastructure components needed by a WebSphere Application Server infrastructure to become functional. In this diagram, the infrastructure elements are presented as they relate to each other functionally. In other words, the diagram is generic enough that it only shows and identifies the components by their main function. For instance, the infrastructure diagram includes, among others, proxy and messaging servers. Nothing in the diagram implies the mapping of a given functional component to a specific physical element such as an OS server or a specialized appliance. Branded infrastructure elements The infrastructure architecture presented in the diagram depicts a WebSphere clustered environment. The only technologies identified by their brand are the IBM HTTP Server (IHS) web server component (represented by the two rectangles (light blue) labeled IHS) and the WebSphere Application Server (WAS) nodes (represented by the rectangles (green) labeled WAS). These two simple components offer a variety of architectural choices, such as: Hosting both components in a single OS host under a WAS node Host each component in their own OS host in the same sub-network (normally an intranet) Host each component in different OS hosts in different sub-network (normally a DMZ for the IHS and intranet for the WAS) The choice for a specific architecture will be made in terms of a variety of requirements for your environment, including security requirements. Generic infrastructure components The infrastructure diagram also includes a number of components that are only identified by their function but no information is provided as to the specific technology/product implementing the function. For instance, there are four shapes (light yellow) labeled DB, Messaging, Legacy Systems, and Service Providers. In your environment, there may be choices to make in terms of the specific component. Take for instance, the DB component. Identifying what DB server or servers will be part of the architecture is dependent on the type of database employed by the enterprise application being hosted. Some corporations limit the number of database types to less than a handful. Nevertheless, the objective of the WebSphere Administrator responsible for the environment is to identify which type of databases will be interfacing with the WAS environment. Once that fact is determined, the appropriate brand/product could be added to the architecture diagram. 
Other technologies/components that need to be identified in a similar way are the user registry (represented by the shape (light purple) labeled User Registry), the security access component (represented in the diagram by the oval (yellow) labeled Security Access). A common type of user registry used in WebSphere environments is an LDAP server. Furthermore, a popular security access product is SiteMinder (formerly by Netegrity, now offered by CA). The remaining group of elements in the architecture has the function to front-end the IHS/WAS environment in order to provide high availability and added security. Proxy servers may be used or not, depending on whether the IHS function can be brought to the DMZ in its own OS host. Specialized appliances offered by companies such as CISCO or F5 normally implement load balancers. However, some software products can be used to implement this function. An example to the latter is the IBM WebSphere Edge suite. In general, most corporations already own and use firewalls and load balancers; so for the WebSphere administrator, it is just a matter of integrating them to the WebSphere infrastructure. Using the infrastructure architecture view Some of the benefits of picturing your WebSphere environment using the infrastructure architecture view come from realizing the following important points: Identify the technology or technology choices to be used to implement a specific function. For instance, what type of user registry to use. An immediate result of the previous point is identifying the corporate group the WebSphere administrator would be working with in order to integrate (that is, configure) said technology and WebSphere. Once the initial architecture has been laid out, the WebSphere administrator will be responsible to identify the type of security involved to secure the interactions between the various infrastructure architecture components. For instance, what type of communication will take place between the IHS and the Security Access component, if any. What is the best way to secure the communication channel? How is the IHS component authenticated to the Security Access component?  

article-image-nginx-http-server-base-module-directives
Packt
21 Jul 2010
7 min read

Nginx HTTP Server: Base Module Directives

Packt
21 Jul 2010
7 min read
(For more resources on Nginx, see here.)
We are particularly interested in answering two questions: what are base modules, and what directives do they make available?

What are base modules?
The base modules offer directives that allow you to define parameters of the basic functionality of Nginx. They cannot be disabled at compile time; as a result, the directives and blocks they offer are always available. Three base modules are distinguished:
Core module: Essential features and directives such as process management and security
Events module: Lets you configure the inner mechanisms of the networking capabilities
Configuration module: Enables the inclusion mechanism
These modules offer a large range of directives; we will be detailing them individually with their syntaxes and default values.

Nginx process architecture
Before we start detailing the basic configuration directives, it's necessary to understand the process architecture, that is, how Nginx works behind the scenes. Although the application comes as a simple binary file (an apparently lightweight background process), the way it functions at runtime is rather intricate. At the very moment of starting Nginx, one unique process exists in memory: the master process. It is launched with the current user and group permissions, usually root/root if the service is launched at boot time by an init script. The master process itself does not process any client request; instead, it spawns processes that do, the worker processes, which are assigned a customizable user and group. From the configuration file, you are able to define the number of worker processes, the maximum connections per worker process, and more.

Core module directives
Below is the list of directives made available by the core module. Most of these directives must be placed at the root of the configuration file and can only be used once. However, some of them are valid in multiple contexts. If that is the case, the list of valid contexts is mentioned below the directive name. Each entry lists the directive name and context, followed by its syntax, default value, and description.

daemon
Accepted values: on or off
Syntax: daemon on;
Default value: on
Enables or disables daemon mode. If you disable it, the program will not be started in the background; it will stay in the foreground when launched from the shell.

debug_points
Accepted values: stop or abort
Syntax: debug_points stop;
Default value: None
Activates debug points in Nginx. Use stop to interrupt the application when a debug point comes about in order to attach a debugger. Use abort to abort the debug point and create a core dump file. To disable this option, simply do not use the directive.

env
Syntax: env MY_VARIABLE; env MY_VARIABLE=my_value;
Lets you (re)define environment variables.

error_log
Context: main, http, server, and location
Syntax: error_log /file/path level;
Default value: logs/error.log error
Where level is one of the following values: debug, info, notice, warn, error, and crit (from most to least detailed: debug provides frequent log entries, crit only reports critical errors). Enables error logging at different levels: application, HTTP server, virtual host, and virtual host directory. By redirecting the log output to /dev/null, you can disable error logging. Use the following directive at the root of the configuration file: error_log /dev/null crit;

lock_file
Syntax: File path
lock_file logs/nginx.lock;
Default value: Defined at compile time
Use a lock file for mutual exclusion.
Disabled by default, unless you enabled it at compile time.

log_not_found
Context: main, http, server, and location
Accepted values: on or off
log_not_found on;
Default value: on
Enables or disables logging of 404 not found HTTP errors. If your logs get filled with 404 errors due to missing favicon.ico or robots.txt files, you might want to turn this off.

master_process
Accepted values: on or off
master_process on;
Default value: on
If enabled, Nginx will start multiple processes: a main process (the master process) and worker processes. If disabled, Nginx works with a unique process. This directive should be used for testing purposes only, as it disables the master process; clients thus cannot connect to your server.

pid
Syntax: File path
pid logs/nginx.pid;
Default value: Defined at compile time
Path of the pid file for the Nginx daemon. The default value can be configured at compile time.

ssl_engine
Syntax: Character string
ssl_engine enginename;
Default value: None
Where enginename is the name of an available hardware SSL accelerator on your system. To check for available hardware SSL accelerators, run this command from the shell: openssl engine -t

thread_stack_size
Syntax: Numeric (size)
thread_stack_size 1m;
Default value: None
Defines the size of the thread stack; please refer to the worker_threads directive below.

timer_resolution
Syntax: Numeric (time)
timer_resolution 100ms;
Default value: None
Controls the interval between system calls to gettimeofday() to synchronize the internal clock. If this value is not specified, the clock is refreshed after each kernel event notification.

user
Syntax: user username groupname; user username;
Default value: Defined at compile time. If still undefined, the user and group of the Nginx master process are used.
Lets you define the user account, and optionally the user group, used for starting the Nginx worker processes.

worker_threads
Syntax: Numeric
worker_threads 8;
Default value: None
Defines the amount of threads per worker process. Warning! Threads are disabled by default. The author stated that "the code is currently broken".

worker_cpu_affinity
Syntax: worker_cpu_affinity 1000 0100 0010 0001; worker_cpu_affinity 10 10 01 01; worker_cpu_affinity;
Default value: None
This directive works in conjunction with worker_processes. It lets you assign worker processes to CPU cores. There are as many series of digit blocks as worker processes; there are as many digits in a block as your CPU has cores. If you configure Nginx to use three worker processes, there are three blocks of digits. For a dual-core CPU, each block has two digits.
worker_cpu_affinity 01 01 10;
The first block (01) indicates that the first worker process should be assigned to the second core. The second block (01) indicates that the second worker process should be assigned to the second core. The third block (10) indicates that the third worker process should be assigned to the first core. Note that affinity is only recommended for multi-core CPUs, not for processors with hyper-threading or similar technologies.

worker_priority
Syntax: Numeric
worker_priority 0;
Default value: 0
Defines the priority of the worker processes, from -20 (highest) to 19 (lowest). The default value is 0. Note that kernel processes run at priority level -5, so it's not recommended that you set the priority to -5 or less.

worker_processes
Syntax: Numeric
worker_processes 4;
Default value: 1
Defines the amount of worker processes. Nginx offers to separate the treatment of requests into multiple processes.
The default value is 1, but it's recommended to increase this value if your CPU has more than one core. Besides, if a process gets blocked due to slow I/O operations, incoming requests can be delegated to the other worker processes.

worker_rlimit_core
Syntax: Numeric (size)
worker_rlimit_core 100m;
Default value: None
Defines the size of core files per worker process.

worker_rlimit_nofile
Syntax: Numeric
worker_rlimit_nofile 10000;
Default value: None
Defines the amount of files a worker process may use simultaneously.

worker_rlimit_sigpending
Syntax: Numeric
worker_rlimit_sigpending 10000;
Default value: None
Defines the amount of signals that can be queued per user (user ID of the calling process). If the queue is full, signals are ignored past this limit.

working_directory
Syntax: Directory path
working_directory /usr/local/nginx/;
Default value: The prefix switch defined at compile time
Working directory used for worker processes; only used to define the location of core files. The worker process user account (user directive) must have write permissions on this folder in order to be able to write core files.
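To tie a few of these directives together, here is a minimal sketch of the top of an nginx.conf using only core module directives described above (the user name, log and pid paths, and worker count are assumptions; adjust them to your own system):

    user              nginx nginx;
    worker_processes  2;
    worker_priority   0;
    error_log         logs/error.log warn;
    pid               logs/nginx.pid;

Each of these lines sits at the root of the configuration file, outside any block, which matches the context rules listed above.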
article-image-configuring-apache-and-nginx
Packt
19 Jul 2010
8 min read

Configuring Apache and Nginx

Packt
19 Jul 2010
8 min read
(For more resources on Nginx, see here.) There are basically two main parts involved in the configuration, one relating to Apache and one relating to Nginx. Note that while we have chosen to describe the process for Apache in particular, this method can be applied to any other HTTP server. The only point that differs is the exact configuration sections and directives that you will have to edit. Otherwise, the principle of reverse-proxy can be applied, regardless of the server software you are using. Reconfiguring Apache There are two main aspects of your Apache configuration that will need to be edited in order to allow both Apache and Nginx to work together at the same time. But let us first clarify where we are coming from, and what we are going towards. Configuration overview At this point, you probably have the following architecture set up on your server: A web server application running on port 80, such as Apache A dynamic server-side script processing application such as PHP, communicating with your web server via CGI, FastCGI, or as a server module The new configuration that we are going towards will resemble the following: Nginx running on port 80 Apache or another web server running on a different port, accepting requests coming from local sockets only The script processing application configuration will remain unchanged As you can tell, only two main configuration changes will be applied to Apache as well as the other web server that you are running. Firstly, change the port number in order to avoid conflicts with Nginx, which will then be running as the frontend server. Secondly, (although this is optional) you may want to disallow requests coming from the outside and only allow requests forwarded by Nginx. Both configuration steps are detailed in the next sections. Resetting the port number Depending on how your web server was set up (manual build, automatic configuration from server panel managers such as cPanel, Plesk, and so on) you may find yourself with a lot of configuration files to edit. The main configuration file is often found in /etc/httpd/conf/ or /etc/apache2/, and there might be more depending on how your configuration is structured. Some server panel managers create extra configuration files for each virtual host. There are three main elements you need to replace in your Apache configuration: The Listen directive is set to listen on port 80 by default. You will have to replace that port by another such as 8080. This directive is usually found in the main configuration file. You must make sure that the following configuration directive is present in the main configuration file: NameVirtualHost A.B.C.D:8080, where A.B.C.D is the IP address of the main network interface on which server communications go through. The port you just selected needs to be reported in all your virtual host configuration sections, as described below. The virtual host sections must be transformed from the following template <VirtualHost A.B.C.D:80> ServerName example.com ServerAlias www.example.com [...]</VirtualHost> to the following: <VirtualHost A.B.C.D:8080> ServerName example.com:8080 ServerAlias www.example.com [...]</VirtualHost> In this example, A.B.C.D is the IP address of the virtual host and example.com is the virtual host's name. The port must be edited on the first two lines. Accepting local requests only There are many ways you can restrict Apache to accept only local requests, denying access to the outside world. But first, why would you want to do that? 
As an extra layer positioned between the client and Apache, Nginx provides a certain comfort in terms of security. Visitors no longer have direct access to Apache, which decreases the potential risk regarding all security issues the web server may have. Globally, it's not necessarily a bad idea to only allow access to your frontend server. The first method consists of changing the listening network interface in the main configuration file. The Listen directive of Apache lets you specify a port, but also an IP address, although, by default, no IP address is selected, resulting in communications being accepted on all interfaces. All you have to do is replace the Listen 8080 directive with Listen 127.0.0.1:8080; Apache should then only listen on the local IP address. If you do not host Apache on the same server, you will need to specify the IP address of the network interface that can communicate with the server hosting Nginx. The second alternative is to establish per-virtual-host restrictions:
<VirtualHost A.B.C.D:8080>
    ServerName example.com:8080
    ServerAlias www.example.com
    [...]
    Order deny,allow
    allow from 127.0.0.1
    allow from 192.168.0.1
    deny from all
</VirtualHost>
Using the allow and deny Apache directives, you are able to restrict the allowed IP addresses accessing your virtual hosts. This allows for a finer configuration, which can be useful in case some of your websites cannot be fully served by Nginx. Once all your changes are done, don't forget to reload the server to make sure the new configuration is applied, for example with service httpd reload or /etc/init.d/httpd reload.
This example passes requests for .php files to the proxy:
server {
    server_name .example.com;
    root /home/example.com/www;
    [...]
    location ~* \.php\d*$ {
        # Proxy all requests with a URI ending with .php*
        # (includes PHP, PHP3, PHP4, PHP5...)
        proxy_pass http://127.0.0.1:8080;
    }
    location / {
        # Your other options here for static content
        # for example cache control, alias...
        expires 30d;
    }
}
This method, although simple, will cause trouble with websites using URL rewriting. Most Web 2.0 websites now use links that hide file extensions, such as http://example.com/articles/us-economy-strengthens/; some even replace file extensions with links resembling the following: http://example.com/us-economy-strengthens.html. When building a reverse-proxy configuration, you have two options:
Port your Apache rewrite rules to Nginx (usually found in the .htaccess file at the root of the website), in order for Nginx to know the actual file extension of the request and proxy it to Apache correctly.
If you do not wish to port your Apache rewrite rules, the default behavior shown by Nginx is to return 404 errors for such requests. However, you can alter this behavior in multiple ways, for example, by handling 404 requests with the error_page directive or by testing the existence of files before serving them. Both solutions are detailed below.
Here is an implementation of this mechanism, using the error_page directive:
server {
    server_name .example.com;
    root /home/example.com/www;
    [...]
    location / {
        # Your static files are served here
        expires 30d;
        [...]
        # For 404 errors, submit the query to the @proxy
        # named location block
        error_page 404 @proxy;
    }
    location @proxy {
        proxy_pass http://127.0.0.1:8080;
    }
}
Alternatively, making use of the if directive from the Rewrite module:
server {
    server_name .example.com;
    root /home/example.com/www;
    [...]
    location / {
        # If the requested file extension ends with .php,
        # forward the query to Apache
        if ($request_filename ~* \.php\d*$) {
            break; # prevents further rewrites
            proxy_pass http://127.0.0.1:8080;
        }
        # If the requested file does not exist,
        # forward the query to Apache
        if (!-f $request_filename) {
            break; # prevents further rewrites
            proxy_pass http://127.0.0.1:8080;
        }
        # Your static files are served here
        expires 30d;
    }
}
There is no real performance difference between both solutions, as they will transfer the same number of requests to the backend server. You should work on porting your Apache rewrite rules to Nginx if you are looking to get optimal performance.
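Once both servers have been reloaded, it is worth confirming that each layer answers on the port you expect. A quick check from the machine itself might look like the following sketch (the host name and ports assume the example configuration above):

    curl -I http://127.0.0.1:8080/   # should be answered by Apache directly
    curl -I http://example.com/      # should be answered by Nginx, which proxies to Apache as needed

Comparing the Server response headers of the two requests is an easy way to confirm which piece of software handled each one.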

article-image-microsoft-chart-xml-data
Packt
18 Nov 2009
4 min read

Microsoft Chart with XML Data

Packt
18 Nov 2009
4 min read
Introduction SQL 2000 Server provided T-SQL language extensions to operate bi-directionally with relational and XML sources. It also provided two system stored procedures, sp_XML_preparedocument and sp_XML_removedocument, that assist the XML to Relational transformation. This support for returning XML data from relational data using the For XML clause is continued in SQL Server 2005 and SQL Server 2008 although the support for XML is lot more extensive. The shape of the data returned by the For XML clause is further modified by choosing the following modes, raw, auto, explicit, or path. As a preparation for this article we will be creating an XML document starting from the PrincetonTemp table used in a previous article, Binding MS Chart Control to LINQ Data Source Control, on this site. Creating an XML document from an SQL Table Open the SQL Server Management and create a new query [SELECT * from PrincetonTemp for XML auto]. You can use the For XML Auto clause to create a XML document (actually what you create is a fragment - a root-less XML without a processing directive) as shown in Figure 01. Figure 01: For XML Auto clause of a SELECT statement The result shown in a table has essentially two columns with the second column containing the document fragment shown in the next listing. Listing 01: <PrincetonTemp Id="1" Month="Jan " Temperature="4.000000000000000e+001" RecordHigh="6.000000000000000e+001"/> <PrincetonTemp Id="2" Month="Feb " Temperature="3.200000000000000e+001" RecordHigh="5.000000000000000e+001"/> <PrincetonTemp Id="3"Month="Mar " Temperature="4.300000000000000e+001" RecordHigh="6.500000000000000e+001"/> <PrincetonTemp Id="4" Month="Apr " Temperature="5.000000000000000e+001" RecordHigh="7.000000000000000e+001"/> <PrincetonTemp Id="5" Month="May " Temperature="5.300000000000000e+001" RecordHigh="7.400000000000000e+001"/> <PrincetonTemp Id="6" Month="Jun " Temperature="6.000000000000000e+001" RecordHigh="7.800000000000000e+001"/> <PrincetonTemp Id="7" Month="Jul " Temperature="6.800000000000000e+001" RecordHigh="7.000000000000000e+001"/> <PrincetonTemp Id="8" Month="Aug " Temperature="7.100000000000000e+001" RecordHigh="7.000000000000000e+001"/> <PrincetonTemp Id="9" Month="Sep " Temperature="6.000000000000000e+001" RecordHigh="8.200000000000000e+001"/> <PrincetonTemp Id="10" Month="Oct " Temperature="5.500000000000000e+001" RecordHigh="6.700000000000000e+001"/> <PrincetonTemp Id="11" Month="Nov " Temperature="4.500000000000000e+001" RecordHigh="5.500000000000000e+001"/> <PrincetonTemp Id="12" Month="Dec " Temperature="4.000000000000000e+001" RecordHigh="6.200000000000000e+001"/> This result is attribute-centric as each row of data corresponds to a row in the relational table with each column represented as an XML attribute. The same data can be extracted in an element centric manner by using the directive elements in the SELECT statement as shown in the next figure. Figure 02: For XML auto, Elements clause of a Select statement This would still give us an XML fragment but now it is displayed with element nodes as shown in the next listing (only two nodes 1 and 12 are shown). Listing 02: <PrincetonTemp><Id>1</Id><Month>Jan </Month><Temperature>4.000000000000000e+001</Temperature> <RecordHigh>6.000000000000000e+001</RecordHigh> </PrincetonTemp> ... 
<PrincetonTemp><Id>12</Id><Month>Dec </Month><Temperature>4.000000000000000e+001</Temperature> <RecordHigh>6.200000000000000e+001 </RecordHigh></PrincetonTemp>
To make a clear distinction between the results returned by the two select statements, the first row of data is shown in blue. This has returned elements and not attributes. As you can see, the returned XML still lacks a root element as well as the XML processing directive. To continue with displaying this data in MS Chart, save Listing 2 as PrincetonXMLDOC.xml to a location of your choice.
Create a Framework 3.5 Web Site project
Let us create a web site project and display the chart on the Default.aspx page. Open Visual Studio 2008 from its shortcut on the desktop. Click File | New | Web Site... (or Shift+Alt+N) to open the New Web Site window. Change the default name of the site to a name of your choice (herein Chart_XMLWeb) as shown. Make sure you are creating a .NET Framework 3.5 web site as shown here.
Figure 03: New Framework 3.5 Web Site Project
Click on the App_Data folder in the solution explorer as shown in the next figure and click on the Add Existing Item… menu item.
Figure 04: Add an existing item to the web site folder
In the interactive window that gets displayed, browse to the location where you saved the PrincetonXMLDOC.xml file and click the Add button. This will add the XML file to the App_Data folder of the web site project. Double click PrincetonXMLDOC.xml in the web site project folder to display and verify its contents as shown in the next figure. Only nodes 1 and 12 are shown expanded. As mentioned previously, this is an XML fragment.
Figure 05: Imported PrincetonXMLDOC.xml
Modify this document by adding the <root/> as well as the XML processing instruction as shown in the next figure. Build the project.
Figure 06: Modified PrincetonXMLDOC.xml (valid XML document)
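As an aside, the manual step of adding a root element can also be avoided on the SQL Server side. The ROOT directive of the For XML clause wraps the generated fragment in a root element for you; here is a sketch against the same table (the root element name is an arbitrary choice):

    SELECT * FROM PrincetonTemp
    FOR XML AUTO, ELEMENTS, ROOT('root');

The result is then a well-formed XML document rather than a fragment, although you would still add the XML processing instruction yourself if the consuming application requires it.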

article-image-ground-sql-azure-migration-using-ms-sql-server-integration-services
Packt
18 Nov 2009
5 min read

Ground to SQL Azure migration using MS SQL Server Integration Services

Packt
18 Nov 2009
5 min read
Enterprise data can be of very different kinds ranging from flat files to data stored in relational databases with the recent trend of storing data in XML data sources. The extraordinary number of database related products, and their historic evolution, makes this task exacting. The entry of cloud computing has turned this into one of the hottest areas as SSIS has been one of the methods indicated for bringing ground based data to cloud storage in SQL Azure, the next milestone in Microsoft Data Management. The reader may review my book on this site, "Beginners Guide to Microsoft SQL Server Integration Services" to get a jump start on learning this important product from Microsoft. SQL Azure SQL Azure is one of the three pillars of Microsoft's Azure cloud computing platform. It is a relational database built on SQL Server Technologies maintained on Microsoft's physical site where subscribers like you and me can rent out storage of data. Since the access is over the internet it is deemed to be in the cloud and Microsoft would provide all data maintenance. Some of the key benefits of this 'Database usage as a service' are: Manageability High Availability Scalability Which in other words means taking away a lot headache from you like worrying about hardware and software (SQL Azure Provisioning takes care of this), replication, DBAs with attitudes etc. Preparation for this tutorial You need some preparation to work with this tutorial. You must have a SQL Server 2008 installed to start with. You also need to register yourself with Microsoft to get an invitation to use SQL Azure by registering for the SQL Azure CTP. Getting permission is not immediate and may take days. After you register agreeing to the license terms, you get the permission (You become a subscriber to the service) to use the Azure Platform components (SQL Azure is one of them). After subscribing you can create a database on the SQL Azure instance. You will be the administrator of your instance (Your login will be known as the server level principal equivalent to the landbased sa login), and you can web access the server with a specific connection string provided to you and a strong password which you create. When you access the Azure URL, you provide the authentication to get connected to your instance of the server by signing in. Therein, you can create a database or delete an existing database. You have couple of tools available to work with this product. Read the blog post mentioned in the summary. Overview of this tutorial In this tutorial you will be using MS SQL Server Integration Services to create a package that can transfer a table from SQL Server 2008 to SQL Azure for which you have established your credentials. In my case the credentials are: Server: tcp:XXXXXX.ctp.database.windows.net User ID: YYYYY Password: ZZZZZ Database: PPPPPP Trusted_Connection=False; Here XXXXXX, YYYY,ZZZZZ, and PPPPPP are all the author's personal authentication values and you would get yours when you register as previously mentioned. Table to be migrated on SQL Server 2008 The table to be migrated on the SQL Server 2008 (Enterprise server, evaluation edition is shown in the next figure). PrincetonTemp is a simple table in the TestNorthwind database on the default instance of the local server on a Windows XP machine, with a few columns and no primary key. 
Create a SQL Server Integration Services Package Open BIDS (a Visual Studio add-in extending support to build database applications with SQL Server) and create a new SQL Server Integration Services project[Use File |New |Project...in the IDE]. Herein the Visual Studio 2008 with SP1 is used. You need to provide a name which for this project is GroundToCloud. The program creates the project for you which you can see in the Solution Explorer. By default it creates a package for you, Package.dtsx. You may rename the package (herein ToAzure.dtsx)and the project folders and file appear as shown. Add an ADO.NET Source component Drag and drop a Data Flow Task to the tabbed page Control Flow in the package designer. Into the Data flow tabbed page drag and drop an ADO.NET Source component from the Toolbox. Double click the component you just added, from the pop-up menu choose Edit... The ADO.NET Source editor gets displayed. If there are previously configured connections one of them may show up in this window. We will be creating a new connection and therefore click the New... button to display an empty Configure ADO.NET Connection Manager as shown (again, if there are existing connections they all will show up in this window). A connection is needed in connecting to a source outside the IDE. Double click the New... button to display the Connection Manager window which is all but empty. Fill in the details for your instance of ground based server as shown (the ones shown are for this article at the author's site). You may test the connection by hitting the Test Connection button. Clicking the OK buttons on the Connection Manager and the Configure ADO.NET Connection Manager will bring you back to the ADO.NET Source Editor displaying the connection you have just made as shown. A connection string also gets added to the bottom pane of the package designer as well as to the Configure ADO.NET Connection Manager. Click on the drop-down and pick the table (PrincetonTemp) that needs to be migrated to the cloud based server, SQL Azure. Click OK. The Columns navigation on the left would reveal all the columns in the table if it were to be clicked. The Preview button would return the data returned by a SELECT query on the columns as shown.
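At this point the ADO.NET source is ready. For the destination side of the transfer, the connection to SQL Azure is typically made with a .NET-provider connection string built from the credentials shown earlier. As a rough sketch using the placeholders from this article (the exact values come from your own SQL Azure provisioning, and the Encrypt setting is an assumption worth keeping on since the connection travels over the public internet):

    Server=tcp:XXXXXX.ctp.database.windows.net;Database=PPPPPP;User ID=YYYYY;Password=ZZZZZ;Trusted_Connection=False;Encrypt=True;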

Creating a VB.NET application with EnterpriseDB

Packt
27 Oct 2009
5 min read
Overview of the tutorial
You will begin by creating an ODBC data source for accessing data on the Postgres server. Using the User DSN you create, you will connect to the Postgres server data. You will derive a dataset from a table, which you will display in a DataGridView on a form in a Windows application. We start with the Categories table that was migrated from MS SQL Server 2008. This table, with all of its columns, is shown in the Postgres studio in the next figure.

Creating the ODBC DSN
Navigate to Start | Control Panel | Administrative Tools | Data Sources (ODBC) to bring up the ODBC Data Source Administrator window. Click on Add.... In the Create New Data Source window, scroll down to EnterpriseDB 8.2 under the list heading Name as shown. Click Finish. The EnterpriseDB ODBC Driver page gets displayed as shown. Accept the default name for the Data Source (DSN) or, if you prefer, change it; here the default is accepted. The Database, Server, User Name, Port, and Password should all be available to you [read article 1]. If you click on the option button Datasource, you display a window with two pages as shown. Make no changes to the pages and accept the defaults, but make sure you review them. Click OK and you will be back in the EnterpriseDB Driver window. If you click on the Global button, the Global Settings window gets displayed (not shown); these are logging options, as the page describes. Click Cancel in the Global Settings window. Click on the Test button and verify that the connection was successful. Click on the Save button and save the DSN under the list heading User DSN. The DSN EnterpriseDB enters the list of DSNs created, as shown here.

Create a Windows Forms application and establish a connection to Postgres
Open Visual Studio 2008 from its shortcut. Click File | New | Project... to open the New Project window. Choose a Windows Forms project for Framework 2.0 (besides Framework 2.0, you can also create projects targeting other versions in Visual Studio 2008). In the Server Explorer window, double-click the Connection icon as shown. This brings up the Add Connection window as shown. Click on the Change... button to display the Change Data Source window. Scroll up and select Microsoft ODBC Data Source as shown. Click OK. Click on the drop-down handle for the option Use user or system data source name and choose the EnterpriseDB DSN you created earlier as shown. Insert the User Name and Password and click on the Test Connection button. You should get a connection succeeded message as shown. Click OK on the message screen as well as on the Add Connection window. The connection appears in Visual Studio 2008 in the Server Explorer as shown.

Displaying data from the table
Drag and drop a DataGridView (under Data in the Toolbox) onto the form as shown (shown with its SmartTasks handle clicked). Click on the Choose Data Source handle to display a drop-down menu as shown below. Click on Add Project Data Source at the bottom. This displays the Choose a Data Source Type page of the Data Source Configuration Wizard. Accept the default data source type and click Next. In the Choose Your Data Connection page of the wizard, choose the ODBC.localhost.PGNorthwind connection from the drop-down list as shown. Click Next on the page that gets displayed and accept the default to save the connection string to the application configuration file as shown. Click Next. In the Choose Your Database Objects page, expand Tables and choose the categories table as shown. The default dataset name can be changed; herein the default is accepted.
Click Finish. The DataGridView on Form1 gets displayed with two columns and a row, but it can be extended to the right by using the drag handles to reveal all four columns as shown. Three other objects, PGNorthwindDataSet, CategoriesBindingSource, and CategoriesTableAdapter, are also added to the control tray as shown, and the PGNorthwindDataSet.xsd file gets added to the project. Now build the project and run it. Form1 gets displayed with the data from the PGNorthwind database as shown.

In the design view of the form a few more tasks have been added as shown. Here you can Add Query... to filter the data displayed, Edit the details of the columns, and choose to add a column if you had chosen fewer columns from the original table. For example, Edit Column brings up its editor as shown, where you can make changes to the styles if you desire to do so. The next figure shows the form slightly modified by editing the columns and resizing the cell heights as shown.

Summary
A step-by-step procedure was described to display the data stored in a table in the Postgres database in a Windows Forms application. The procedure to create an ODBC DSN was also described, and using this ODBC DSN a connection was established to the Postgres server in Visual Studio 2008.
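Outside the designer, the same User DSN can be queried directly, which is a quick way to check what the grid, or an Add Query... filter, should return. The Python sketch below is illustrative only: the user name, password, and the filtered column name are assumptions, not values taken from the article.

import pyodbc

# Connect through the User DSN saved in the ODBC Data Source Administrator.
conn = pyodbc.connect("DSN=EnterpriseDB;UID=postgres;PWD=your_password;")
cursor = conn.cursor()

# Unfiltered: the rows the bound DataGridView displays.
cursor.execute("SELECT * FROM categories")
print(len(cursor.fetchall()), "rows in categories")

# Filtered: roughly what an Add Query... filter would produce (column name assumed).
cursor.execute("SELECT * FROM categories WHERE category_name LIKE ?", ("B%",))
for row in cursor.fetchall():
    print(row)

conn.close()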


Working with SBS Services as a User: Part 2

Packt
26 Oct 2009
10 min read
Managing files
One service that SBS 2008 provides for users is a secure place to store files. Both web sites and file shares are provided by default to assist with this. Enabling collaboration on documents, where multiple people will want to read or update a file, is best delivered using the CompanyWeb site. The CompanyWeb site is the internal web site, and it is built on Windows SharePoint Services technologies. In this section, I will explore:

File management aspects of CompanyWeb
Searching across the network for information
User file recovery

Internal Web Site Access
SBS 2008 provides an intranet for sharing information. This site is called CompanyWeb and can be accessed internally by visiting http://companyweb. To access it remotely, click on the Internal Web Site button, which will open the URL https://remote.yourdomain.co.uk:987. It is important to note the full URL with :987 on the end; otherwise you will not see your CompanyWeb.

CompanyWeb, in its simplest form, is a little like a file share, but it has considerably more functionality, such as the ability to store more than just files, be accessible over the Internet and your local network, host applications, and much more. For file management, it enables flow control such as document check-in and check-out for locking of updates, and an approval process for those updates. It can also inform users when changes have taken place, so that they do not need to check the web site, as it will tell them. Finally, it enables multiple people to work on a document, and it will arbitrate the updates so the owner can see all the comments and changes.

While we are looking at CompanyWeb from a file management perspective, it is worth pointing out that any Windows SharePoint Services site also has the capability to run surveys, provide groups and web-based calendars, run web-based applications built on top of the SharePoint services, host blog and wiki pages, and perform as your fax center.

In looking at file management, I will briefly explain how to:

Upload a document via the web interface
Add a document via email attachment
Edit a document stored in CompanyWeb
Check Out/In a document
Recover a deleted document

Uploading documents
Navigate to http://CompanyWeb in your browser and then to the Shared Documents section. You can create other document libraries by clicking on Site Actions in the righthand corner of the screen and then selecting Create. From here, you can upload documents in three different ways. You can upload single or multiple documents from the Upload menu. If you choose this option, you will be prompted to Browse for a single file and then click on OK to upload the file. If you choose Upload Multiple Documents from the menu or the Upload Document screen, you will be presented with the multiple upload tool. Navigate to the folder with the files you wish to upload, check the items, and click OK to start the upload. The final mechanism for loading documents is to choose Open with Windows Explorer from the Actions menu. This opens an Explorer window that you can copy and paste into, as if you had two local folders open on your computer (a scripted equivalent is sketched at the end of this article).

Uploading using email
I know this might sound a little strange, but the process of emailing documents backwards and forwards between people, for ideas and changes, can make "keeping up to date" very confusing for everyone.
Using CompanyWeb in this way enables each user to update their copy of the document and then merge them all together, so the differences can be accepted or rejected by the owner. To upload a document via email, create a new email in Outlook and attach a document as per normal. Then go to the Insert tab and click on the small arrow on the bottom right of the Include section. In the task pane that opens on the righthand side, change the Attachment Options to Shared attachments and type http://CompanyWeb into the box labeled Create Document Workspace at:. This creates the additional text in the mail and includes a link to the site that was created under CompanyWeb. This site is secured so that only the people on the To line and the person who sent the mail have access. Send the email, and the attachment will be loaded to the special site. Each user can open the attachment as per normal, save it to their hard disk, and edit the document. The user can make as many changes as they like and finally save the updates to the CompanyWeb site. If their changes are to an earlier version, they will be asked to either overwrite or merge the changes. The following sample shows the writing from Molly and Lizzy in two different colors, so that the document owner can read and consider all the changes and then accept all or some of them.

Opening documents and Checking Out and In
Once you have documents stored on the CompanyWeb site, you can open them by simply clicking on the links. You will be prompted whether you want to open a Read Only copy or Edit the document; click OK once you have selected the right option. This simple mechanism is fine where there is no control, but you might want to ensure that no one else can modify the document while you are doing so. The previous section showed the conflict resolution process, but this can be avoided by individuals checking documents in and out. When a document is checked out, you can only view the document unless you are the person who checked it out, in which case you can edit it. To check a document out, hover over the document and click on the downward arrow that appears to the right of the filename. A menu will appear, and you can select Check Out from that menu. You can then edit the document while others cannot. Once you are finished, you need to check the document back in; this can be done from Word, or back on the web site from the same drop-down menu where you checked it out.

Recovering a deleted document in CompanyWeb
If you delete a document in CompanyWeb, there is a recycle bin to recover documents from. On almost every lefthand navigation pane is the Recycle Bin link. Click this, select the documents to recover, and then click on Restore Selection.

Searching for information
You can search for any file, email, calendar appointment, or document stored on your hard disk with SBS 2008 and Windows Vista, or with Windows XP and Windows Search. Just as with the email search facility, you can also search for any file, or the contents of any file, on both the CompanyWeb site and on your computer. To search on CompanyWeb, type the key words that you are interested in into the search box in the top right corner and then click on the magnifying glass. This displays a varied set of results, as you can see in the following example. If you are using Vista, you can type a search into the Start menu, or select Search from the Start menu and again type the key words you are looking for into the search box in the top right corner.
The Windows search will search your files, emails, calendar and contacts, and browser history to find a list of matches for you. You can get the latest version of Desktop Search for Windows Vista and Windows XP by following http://davidoverton.com/r.ashx?1K.

User file recovery
We have already covered how to recover deleted emails and documents in CompanyWeb, but users need something a little more sophisticated for file recovery on their desktop. Generally, when an administrator is asked to recover a file for a user, it is either because they have just deleted it and it is not in the recycle bin, or they still have the file but it has become corrupt, or they wish to undo changes made over the last day or two. When you turn on folder redirection, or when you are using Windows Vista, users get the ability to roll back time to a version of the file or folder that was copied over the previous few days. This means that not only can we undelete files from the recycle bin, but we can also revert to an earlier copy of a file that has not been deleted, from 3-7 days previous, without needing to access the backups. If the file has been deleted, we can look into the folder from an earlier time snapshot, as opposed to just the still-existing files. To access this facility, right-click on the folder for which you want to get an earlier version and select Properties. Now move to the Previous Versions tab. You can Open the folder to view it, as shown on the right below, Copy the folder to a new location, or Revert the folder to the selected version, overwriting the current files.

Remote access
Now that the client computers are configured to work with SBS 2008, you need to check that the remote access tools are working. These are:

Remote Web Workplace
Outlook Web Access
Internal Web Site Access
Connecting to a PC on the SBS 2008 LAN
Connecting via a Virtual Private Network (VPN)

Remote Web Workplace, remote email, and intranet access
The Remote Web Workplace is the primary location for accessing computers and services inside your SBS 2008 network when you are not connected to it yourself. To access the site, open your browser and go to https://remote.yourdomain.co.uk/remote. If you forget the /remote from the URL, you will get a 403 – Forbidden: Access is denied error. You will be presented with a sign-in screen where you enter your user name and password. Once you are through the login screen, you will see options for the three provided sections and a number of links.

Customizing Remote Web Workplace
You can customize the information presented on the Welcome screen of the Remote Web Workplace, including the links shown, the background bitmaps, and company icons. Two of the links shown on the Welcome page have a URL that starts with https://sites, which will not work from the Internet, so these will need to be changed. To do this, go to the Shared Folders and Web Sites tab and select Web Sites. Click on the View site properties button in the righthand task pane and navigate to the Home page links section. From here, you can choose what is displayed on the front page, removing options if desired. To alter the URLs of the links, click on the Manage links… button.
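Returning to the document libraries discussed earlier in this article (see the Uploading documents section): uploads do not have to go through the browser or the Explorer view. The following Python sketch is an illustrative aside rather than part of the SBS walkthrough; it assumes the Shared Documents library accepts a WebDAV-style HTTP PUT, that the third-party requests and requests_ntlm packages are installed, and the file name, domain, user, and password shown are all hypothetical.

import requests
from requests_ntlm import HttpNtlmAuth

# Hypothetical names: replace the file, domain, user, and password with real values.
local_file = "report.docx"
target_url = "http://companyweb/Shared%20Documents/report.docx"

with open(local_file, "rb") as f:
    response = requests.put(
        target_url,
        data=f,
        auth=HttpNtlmAuth("DOMAIN\\username", "password"),  # Windows (NTLM) credentials
    )

# A 200 or 201 status indicates the document landed in the library.
print(response.status_code)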