
How-To Tutorials - Servers

95 Articles

Microsoft SQL Server 2012 Performance Tuning: Implementing Physical Database Structure

Packt
20 Jul 2012
8 min read
In this article we will cover:

- Configuring a data file and log file on multiple physical disks
- Using files and filegroups
- Moving an existing large table to a separate physical disk
- Moving non-clustered indexes to a separate physical disk
- Configuring the tempdb database on a separate physical disk

Configuring data file and log file on multiple physical disks

If you know the exact difference between the ways in which data files and log files of a database are accessed, you can understand why you should place data files and log files on separate physical disks for better performance.

The data file of a database, which is normally a file with an .mdf or .ndf extension, is used to store the actual data in the database. The data is stored in pages that are 8 KB in size. When particular data is queried by the user, SQL Server reads the required data pages from the data file on disk into memory. In case SQL Server needs to make any modification to the existing data, it reads the required data pages into the buffer cache, updates those cached data pages in memory, writes the modifications to the log file when the transaction is committed, and then writes the updated data pages back to disk when a checkpoint operation is performed. SQL Server performs configurable checkpoint operations at regular intervals. Modified in-memory data pages are called dirty pages; when a checkpoint is performed, these dirty pages are written permanently to disk.

The log file is used to record any change that is made to the database. It is intended for recovery of the database in case of disaster or failure. Because a log file is intended to record changes, it is not designed to be read randomly, as a data file is. Rather, it is designed to be written to and accessed in a sequential manner.

SQL Server is designed to handle and process multiple I/O requests simultaneously, provided we have enough hardware resources. But even though SQL Server can handle simultaneous I/O requests in parallel, it may face disk contention when one request reads a large amount of data from a data file while another writes a large number of transaction log records to a log file, if the data files and log files reside on the same physical disk. However, if the data file and log file are located on separate physical disks, SQL Server gracefully handles and processes such requests in parallel. Because simultaneous requests for reading data and writing transaction logs are common in OLTP database environments, placing data files and log files on separate physical drives greatly improves the performance of the database.

Let's suppose that you are a DBA and, in your organization, you maintain and administer a production database called AdventureWorks2012. The database was created/installed by an inexperienced team and has been residing in the default location for SQL Server. You are required to separate the data files and log files for this database and place them on different physical disks to achieve maximum I/O performance. How would you perform this task? The goal of this recipe is to teach you how to separate the data files and log files for an existing database to improve the I/O response time and database performance.
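The dirty-page behaviour described above can be observed directly. The following is a minimal T-SQL sketch (not part of the original recipe) that counts modified, not-yet-checkpointed pages per database using the sys.dm_os_buffer_descriptors DMV; it requires VIEW SERVER STATE permission and is for illustration only.

    -- Count dirty (modified, not yet checkpointed) pages currently in the buffer pool.
    SELECT  DB_NAME(database_id) AS database_name,
            COUNT(*)             AS dirty_pages,
            COUNT(*) * 8 / 1024  AS dirty_mb     -- pages are 8 KB each
    FROM    sys.dm_os_buffer_descriptors
    WHERE   is_modified = 1
    GROUP BY database_id
    ORDER BY dirty_pages DESC;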
Getting ready

This recipe refers to the following physical disk volumes:

- E drive—to store the data file
- L drive—to store the log file

In this article, wherever "separate disk volume" or "separate drive" is mentioned, consider it a separate physical drive and not a logically partitioned drive. The following are the prerequisites for completing this recipe:

- An instance of SQL Server 2012 Developer or Enterprise Evaluation edition.
- The sample AdventureWorks2012 database on the instance of SQL Server. For more details on how to install the AdventureWorks2012 database, please refer to the Preface of this book.
- The E drive should be available on your machine.
- The L drive should be available on your machine.

How to do it...

The following are the steps you need to perform for this recipe:

1. Start SQL Server Management Studio and connect to SQL Server.
2. In the query window, type and execute the following script to verify the existing path for the data files and log files of the AdventureWorks2012 database:

    --Switch the current database
    --context to AdventureWorks2012
    USE AdventureWorks2012
    GO
    --Examine the current
    --location of the database.
    SELECT physical_name FROM sys.database_files
    GO

3. Assuming that the AdventureWorks2012 database resides in its default location, depending upon your SQL Server installation path, you may see a result in the output of the previous query similar to the one given here:
4. Now, execute the following query to bring the database offline:

    USE master
    GO
    --Bring database offline
    ALTER DATABASE AdventureWorks2012
    SET OFFLINE WITH ROLLBACK IMMEDIATE
    GO

5. Once the database is offline, you can detach it without any problem. In Object Explorer, right-click on AdventureWorks2012, select Tasks, and then Detach…, as shown in the following screenshot:
6. This step brings up the Detach Database dialog box, as shown in the following screenshot. Press the OK button on this dialog box. This will detach the AdventureWorks2012 database from the SQL Server instance and it will no longer appear in Object Explorer:
7. Create the following two directories to place the data files (.mdf files) and log files (.ldf files), respectively, for the AdventureWorks2012 database, on different physical disks:

    E:\SQL_Data
    L:\SQL_Log

8. Now, using Windows Explorer, move the AdventureWorks2012_Data.mdf and AdventureWorks2012_Log.ldf database files manually from their original location to their respective new directories. The following paths should be the respective destinations:

    E:\SQL_Data\AdventureWorks2012_Data.mdf
    L:\SQL_Log\AdventureWorks2012_Log.ldf

9. After the data and log files are copied to their new locations, we will attach them and bring our AdventureWorks2012 database back online. To do this, in Object Explorer, right-click on the Databases node and select Attach…. You will see the following Attach Databases dialog box. In this dialog box, click on the Add… button:
10. The previous step opens the Locate Database Files dialog box. In this dialog box, locate the .mdf data file E:\SQL_Data\AdventureWorks2012_Data.mdf and click on the OK button, as shown in the following screenshot:
11. After locating the .mdf data file, the Attach Databases dialog box should look similar to the following screenshot. Note that the log file (.ldf file) could not be located at this stage and there is a Not Found message against AdventureWorks2012_log.ldf, under the AdventureWorks2012 database details section.
12. This happens because we have moved the log file to our new location, L:\SQL_Log, and SQL Server tries to find it in its default location. To locate the log file, click on the … button in the Current File Path column for the AdventureWorks2012_log.ldf log file. This will bring up the Locate Database Files dialog box. Locate the file L:\SQL_Log\AdventureWorks2012_log.ldf and click on the OK button. Refer to the following screenshot:
13. To verify the new location of the AdventureWorks2012 database, run the following query in SSMS:

    --Switch the current database
    --context to AdventureWorks2012
    USE AdventureWorks2012
    GO
    --Verify the new location of
    --the database.
    SELECT physical_name, name FROM sys.database_files
    GO

14. In the query result, examine the new locations of the data files and log files for the AdventureWorks2012 database; see the following screenshot:

How it works...

In this recipe, we first queried the sys.database_files system catalog view to verify the current location of the AdventureWorks2012 database. Because we wanted to move the .mdf and .ldf files to new locations, we had to bring the database offline. We brought the database offline with the ALTER DATABASE command. Note that, in the ALTER DATABASE command, we included the ROLLBACK IMMEDIATE option. This rolls back any transactions that have not completed and closes the current connections to the AdventureWorks2012 database. After bringing the database offline, we detached the AdventureWorks2012 database from the instance of SQL Server.

You cannot move a database file to a new location if the database is online. If a database is to be moved, it must not be in use by SQL Server. In order to move a database, you can either stop the SQL Server service or bring the database offline. Bringing the database offline is the preferable option, because stopping the SQL Server service stops the functioning of the whole SQL Server instance. Alternatively, you can also select the Drop Connections checkbox in the Detach Database dialog box, which does not require bringing the database offline.

We then created two new directories—E:\SQL_Data and L:\SQL_Log—to hold the data and log files for AdventureWorks2012, and moved AdventureWorks2012_Data.mdf and AdventureWorks2012_Log.ldf over there. We then attached the AdventureWorks2012 database by attaching the .mdf and .ldf files from their new locations. Finally, we verified the new location of the database by querying sys.database_files.

You can script your Attach Database and Detach Database actions by clicking on the Script button in the wizard. This allows you to save and re-use the script for future purposes.
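The note about the Script button suggests a fully scripted equivalent of the detach/attach steps. Below is a minimal T-SQL sketch of that approach; it is not taken from the book, and it assumes the file paths used in this recipe and the default file names of the AdventureWorks2012 sample database, so verify both with the sys.database_files query above before running anything like it.

    USE master;
    GO
    -- 1. Bring the database offline, rolling back open transactions.
    ALTER DATABASE AdventureWorks2012 SET OFFLINE WITH ROLLBACK IMMEDIATE;
    GO
    -- 2. Detach it (skipchecks = 'false' keeps the UPDATE STATISTICS step).
    EXEC sp_detach_db @dbname = N'AdventureWorks2012', @skipchecks = 'false';
    GO
    -- 3. Move the .mdf and .ldf files with Windows Explorer or a copy tool,
    --    then re-attach them from their new locations.
    CREATE DATABASE AdventureWorks2012
        ON (FILENAME = N'E:\SQL_Data\AdventureWorks2012_Data.mdf'),
           (FILENAME = N'L:\SQL_Log\AdventureWorks2012_log.ldf')
        FOR ATTACH;
    GO

SQL Server also supports relocating files without detaching, by running ALTER DATABASE with the MODIFY FILE option to point each logical file at its new path while the database is offline; the detach/attach route shown here simply mirrors the wizard used in the recipe.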


Troubleshooting WebSphere Security-related Issues

Packt
11 Aug 2011
8 min read
IBM WebSphere Application Server v7.0 Security: Secure your IBM WebSphere applications with Java EE and JAAS security standards.

Troubleshooting general security configuration exceptions

The selected cases in this subsection concern situations in which various aspects of configuring security are carried out and, as a result, error conditions occur.

Identifying problems with the Deployment Manager—node agent communication blues

Several of the problems that may take place due to either wrong or incomplete security configuration are found in the communication between the administrative layers of the WebSphere environment, that is, between the Deployment Manager and the node agent(s). A couple of the most common situations are shown below, along with recommendations as to how to correct the condition.

Receiving the message HMGR0149E: node agent rejected

The message HMGR0149E is the result of the Deployment Manager rejecting a request to connect from the node agent. This type of error, and the display of this message, normally takes place when security changes made on the Deployment Manager were not synchronized with the node in question. An example log file clip where this message appears can be seen in the following screenshot:

One way to fix this problem is by using the syncNode.sh command. The syntax for this command is:

    syncNode.sh dmgr_host [dmgr_port] [-conntype <type>] [-stopservers]
        [-restart] [-quiet] [-nowait] [-logfile <filename>] [-replacelog]
        [-trace] [-username <username>] [-password <password>]
        [-localusername <localusername>] [-localpassword <localpassword>]
        [-profileName <profile>]
    syncNode.sh [-help]

Furthermore, a very simple procedure to correct this problem is given next:

1. Stop the affected node agent(s).
2. Execute, on the node agent OS host, the syncNode.sh command.
3. Monitor the SystemOut.log file for both the dmgr and nodeagent processes.
4. Start the node agent.

For additional information on messages from the high availability manager, refer to the WAS ND7 Information Center link: http://publib.boulder.ibm.com/infocenter/wasinfo/v7r0/topic/com.ibm.websphere.messages.doc/com.ibm.ws.hamanager.nls.HAManagerMessages.html

Receiving the message ADMS0005E: node agent unable to synchronize

The message ADMS0005E is the result of the node agent attempting to synchronize its configuration with the Deployment Manager. It is most likely caused when changes in security-related configuration occurred while the node agent was not available. The following screenshot shows an example of this type of error.

One way to solve the issue is to shut down the node agent and then manually execute the syncNode.sh command from the node OS host, using a user ID and password that have administrative privileges on the Deployment Manager. For syntax or usage information about this command, refer to the previous example. In case this action does not solve the problem, follow the next procedure:

1. Stop the node agent(s).
2. Using the ISC, disable global security.
3. Restart the Deployment Manager.
4. Start the node agent(s).
5. Perform a full synchronization using the ISC.
6. Using the ISC, enable global security.
7. Synchronize changes with all nodes.
8. Stop the node agent(s).
9. Restart the Deployment Manager to activate global security.
10. Start the node agent(s).

For additional information on messages about administrative synchronization, refer to the WAS ND7 Information Center link: http://publib.boulder.ibm.com/infocenter/wasinfo/v7r0/topic/com.ibm.websphere.messages.doc/com.ibm.ws.management.resources.sync.html
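As a concrete illustration of the recovery steps above, here is a hedged shell sketch that is not taken from the book; the profile path, Deployment Manager host, SOAP port, and credentials are placeholders that you would replace with the values for your own cell.

    # Hypothetical paths, host, port, and credentials -- adjust for your cell.
    NODE_BIN=/opt/IBM/WebSphere/AppServer/profiles/Custom01/bin

    # 1. Stop the affected node agent.
    $NODE_BIN/stopNode.sh -username wasadmin -password secret

    # 2. Synchronize the node with the Deployment Manager over SOAP.
    $NODE_BIN/syncNode.sh dmgr.example.com 8879 -conntype SOAP \
        -username wasadmin -password secret

    # 3. Start the node agent again and watch its log for errors.
    $NODE_BIN/startNode.sh
    tail -f /opt/IBM/WebSphere/AppServer/profiles/Custom01/logs/nodeagent/SystemOut.log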
Troubleshooting runtime security exceptions

To close the section on troubleshooting, this subsection presents several cases of error or exception conditions that occur due to the security configuration of various WAS ND7 environment components. Such components can all be within WAS, or some may be external, for example, the IHS/WebSphere Plug-in.

Troubleshooting HTTPS communication between WebSphere Plug-in and Application Server

When setting up HTTPS communication between the WebSphere Plug-in and the WebSphere Application Server, there may be instances in which exceptions and errors occur during the configuration phase. Some of the most common are listed next.

Receiving the message SSL0227E: SSL handshake fails

The message SSL0227E is a common one when the main IHS process is attempting to retrieve the SSL certificate indicated by the property SSLServerCert located in the httpd.conf file. What this message is stating is that the intended SSL certificate cannot be found, by its label, in the key ring indicated by the directive KeyFile in the same configuration file. An example of this type of message is shown in the following screenshot.

In order to correct this error, there are two possibilities that can be explored. On the one hand, one needs to ensure that the directive KeyFile is pointing to the correct key ring file, that is, that the key ring file actually stores the target SSL certificate to be used with this IHS server. On the other hand, there may be a typographic error in the value of the property SSLServerCert. In other words, the label that is mapped to the target SSL certificate was misspelled in the httpd.conf file. In both cases, the command gsk7capicmd can be used to list the contents of the key ring file. The syntax for listing the contents of a key ring file is:

    <IHS_ROOT_Directory>/bin/gsk7capicmd -cert -list all -db <Path_To_kdb_File> -pw <kdb_File_Password>

For additional information on messages about handshaking issues, refer to the IHS v7 Information Center link: http://publib.boulder.ibm.com/infocenter/wasinfo/v7r0/topic/com.ibm.websphere.ihs.doc/info/ihs/ihs/rihs_troubhandmsg.html
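For reference, a hypothetical invocation of the listing command above, assuming a default IHS installation path and an example key database; the paths and password are placeholders, not values from the book.

    # List the certificate labels stored in the key ring referenced by KeyFile.
    /opt/IBM/HTTPServer/bin/gsk7capicmd -cert -list all \
        -db /opt/IBM/HTTPServer/ssl/webserver-key.kdb -pw WebAS

    # Compare the labels printed above with what is configured in httpd.conf.
    grep -iE "KeyFile|SSLServerCert" /opt/IBM/HTTPServer/conf/httpd.conf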
Receiving ws_config_parser errors while loading the plugin configuration file

The configParserParse message of the ws_config_parser component may be observed in the error log file of the IBM HTTP Server; the following screenshot is an example of the output that may be found in the error logs.

There may be a couple of reasons for this type of message to appear in the logs. One reason is that it occurs at the time at which the IHS process is being brought down. The WebSphere Plug-in module is in its cycle to reparse the plugin-cfg.xml file while the IHS process is shutting down; therefore, the ws_config_parser component does not have enough resources to perform the parsing of the configuration file and throws this message, possibly multiple times in a row. In order to ensure that this is the correct interpretation of the message, it is necessary to find an indicator, such as a 'shutting down' type of message like the one shown in the next screenshot:

The other likely reason why this message may appear in the logs is that the process owner of the IHS process does not have the correct privileges to read the plugin-cfg.xml file. In this case, ensure that the user defined by the User property in the httpd.conf file has enough privileges to read the plug-in configuration file defined by the WebSpherePluginConfig property of the httpd.conf file. For additional information on messages about WebSphere Plug-in issues, refer to the article Error message definitions for WebSphere Application Server's webserver plugin component.

Receiving the message GSK_ERROR_BAD_CERT: No suitable certificate found

The message GSK_ERROR_BAD_CERT appears in log files when the WebSphere Plug-in is attempting to establish an SSL connection with the back-end WebSphere Application Server and it does not have a way to validate the SSL certificate sent by the WebSphere Application Server. An example of this type of message is shown in the next screenshot:

One way to solve this problem is by adding the signer certificate from the WebSphere Application Server to the IHS key ring file. When doing this, care must be taken to correctly select the WebSphere trust store. In other words, the correct scope for your target Application Server needs to be identified so that the appropriate trust store can be accessed. For instance, if you wanted to obtain the root certificate (also known as the signer certificate) used by the Chap7AppServer Application Server, you would need to identify the scope for that application server. Therefore, one should start with the following breadcrumb in the ISC (Deployment Manager console): Security | SSL certificate and key management | Manage endpoint security configurations. The following screenshot illustrates a portion of the resulting page:

Once the appropriate scope is identified, continue by completing the breadcrumb: Security | SSL certificate and key management | Manage endpoint security configurations | Chap7AppServer | Key stores and certificates | NodeDefaultTrustStore | Signer certificates. The following screenshot shows a portion of the resulting page. You are now in a position to extract the Application Server signer SSL certificate. Once this certificate is extracted, it needs to be imported into the IHS key ring file as a root certificate.
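Once the signer certificate has been extracted from the NodeDefaultTrustStore (for example, to an ASCII .arm file), the import into the IHS key ring can again be done with the GSKit command-line tool. This is a hedged sketch rather than book text; the paths, label, and password are placeholders, and you should confirm the exact gsk7capicmd options against your GSKit version.

    # Add the extracted signer (root) certificate to the plug-in/IHS key ring.
    /opt/IBM/HTTPServer/bin/gsk7capicmd -cert -add \
        -db /opt/IBM/HTTPServer/Plugins/config/webserver1/plugin-key.kdb \
        -pw WebAS \
        -label "Chap7AppServer signer" \
        -file /tmp/Chap7AppServer_root.arm -format ascii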


Tuning WebSphere Security

Packt
11 Aug 2011
8 min read
IBM WebSphere Application Server v7.0 Security: Secure your IBM WebSphere applications with Java EE and JAAS security standards.

Tuning general security

There are several general security aspects of a WebSphere environment that can be tweaked to either loosen or tighten the security level. This tweaking will have an inversely proportional effect on the performance of the WebSphere system. This section briefly describes some of them.

Tightening security using the administrative connector

The administrative connectors are used for communication between various WAS ND7 components such as the Deployment Manager, node agents, application servers, and the wsadmin interface. On the one hand, by default the connector for communication between WAS ND7 components located on different physical hosts (the remote connector) uses the SOAP protocol. On the other hand, the connector for communication between WAS ND7 components located on the same physical host (the local connector) by default uses the IPC protocol.

The recommendation for the remote connector is to use the RMI connector, because the RMI API uses stateful connections, whereas the SOAP protocol uses stateless communication. This parameter can be changed on the application server's Administration services page. In order to get to an Administration services page, one needs to follow a breadcrumb similar to: Servers | Server Types | Application servers | AppServer_Name | Administration services. The resulting page should be similar to the one shown in the following screenshot:

It is always a good idea to perform a benchmark to ensure that performance is not being significantly affected.

Disabling security attribute propagation

Security Attribute Propagation (SAP) is the capability of WAS ND7 to carry a principal's (the caller's) static and dynamic security-related information from one server to another in the infrastructure, according to your configuration. Static security information may include data normally derived from the user registry. On the other hand, dynamic security information may include information about the principal context, such as the identity, IP, and so on.

If enterprise applications are not using this type of propagation, it is recommended to disable SAP in order to avoid its overhead. In SAP, security attributes need to be serialized using Java serialization to carry out the propagation; therefore, by disabling this feature, the serialization portion of the process is eliminated. Disabling SAP is accomplished by adding the property com.ibm.CSI.disablePropagationCallerList and setting it to false. The property must be defined at the global security level; therefore, follow the breadcrumb Security | Global security | Custom properties. On that page you need to click on the New button and you will be presented with a page similar to the one shown in the following screenshot:

For additional information on Security Attribute Propagation, refer to the WAS ND7 Information Center link: http://publib.boulder.ibm.com/infocenter/wasinfo/v7r0/topic/com.ibm.websphere.nd.doc/info/ae/ae/csec_secattributeprop.html

Using unrestricted Java Cryptographic Extensions

The JCE has been part of the Java SDK since version 1.4.x. JCE, very succinctly, is the Java technology that offers a scheme and realization for encryption, key generation, and key agreement. In addition, the JCE policy files are the portion of the JCE that determines the strength of the encryption to be supported.
Furthermore, due to the laws of several countries, the JCE policy files that are included with the WAS ND7 SDK only enable strong but limited cryptography, in a way that can be shipped to any country in the world. For instance, the local policy file limits encryption of various methods to the values shown in the following screenshot:

IBM states that there is a possibility that the restricted policy files may affect performance. Therefore, it is strongly advised to use the unrestricted encryption JCE policy files.

Warning: Before you replace the JCE policy libraries with their unrestricted version, it is imperative that you check your local laws regarding encryption.

Should you determine that it is permissible to use unrestricted encryption, the following procedure describes how to obtain and install the Unrestricted JCE policy files. In order to download the JAR files, you must be registered with IBM. Use your company's authentication credentials when they are requested.

Obtaining the Unrestricted JCE policy files

The first stage in the procedure is to obtain from IBM the ZIP file unrestricted.zip that contains the Unrestricted JCE policy files.

1. Open the URL. Using a browser, open the Unrestricted JCE files page at https://www14.software.ibm.com/webapp/iwm/web/reg/pick.do?source=jcesdk&lang=en_US.
2. Select the libraries for version 1.4.2+. From the choices presented, select Unrestricted JCE Policy files for SDK for all newer versions, Version 1.4.2+, and click on the Continue button.
3. Log on with your company's credentials. Provide or update information as needed, check the I agree check box, and click on the I confirm button.
4. Download the unrestricted.zip file by clicking on the Download now link.

Installing the Unrestricted JCE policy files

Once the policy files have been downloaded, you can proceed to install them. Log on to the host where WAS ND7 is installed. Carry out this procedure on each WAS ND node (the Deployment Manager host and the node hosts), that is, every host on which you have installed the WAS ND7 binaries for your environment.

1. Stop all profiles associated with the binary installation.
2. Extract the JAR files. Create a temporary directory and in it un-archive the content of unrestricted.zip. The content is two JAR files: local_policy.jar and US_export_policy.jar.
3. Change the working directory to the security Java directory, <WAS_BIN_ROOT>/java/jre/lib/security.
4. Back up the existing policy files. Make a copy of the files local_policy.jar and US_export_policy.jar located in the security directory.
5. Install the Unrestricted JCE policy files. Copy the policy files obtained in the previous subsection into the security directory.
6. Restart the WAS ND7 environment.
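The installation steps can be condensed into a short shell session. This is an illustrative sketch only, assuming a Linux host and a hypothetical <WAS_BIN_ROOT> of /opt/IBM/WebSphere/AppServer; adjust the paths for your installation and stop all profiles first.

    # Hypothetical WAS installation root -- adjust for your environment.
    SEC_DIR=/opt/IBM/WebSphere/AppServer/java/jre/lib/security

    # Unpack the downloaded archive into a temporary directory.
    mkdir -p /tmp/jce && cd /tmp/jce && unzip ~/unrestricted.zip

    # Back up the restricted policy files before overwriting them.
    cp "$SEC_DIR/local_policy.jar"     "$SEC_DIR/local_policy.jar.restricted"
    cp "$SEC_DIR/US_export_policy.jar" "$SEC_DIR/US_export_policy.jar.restricted"

    # Install the unrestricted policy files, then restart the WAS ND7 environment.
    cp /tmp/jce/local_policy.jar /tmp/jce/US_export_policy.jar "$SEC_DIR/"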
Tuning CSIv2 connectivity

In WAS ND7, Common Secure Interoperability Version 2 (CSIv2) is the authentication protocol used by EJBs. CSIv2 is the security protocol that implements the stipulations of CORBA security for interoperable authentication, delegation, and entitlements. Therefore, if your environment is using EJBs, the following tasks can improve performance without compromising security.

Using Active Authentication Protocol: set it only to CSI

When an enterprise WebSphere environment is made up of WebSphere Application Server nodes of multiple versions, there may be a need to set the CSIv2 authentication protocol to both CSI and SAS (Security Authentication Service). However, in WAS ND7, SAS has been deprecated for communicating with WAS versions 5 and newer. Therefore, it is highly recommended to set the property com.ibm.CSI.protocol to the value csiv2. When the protocol is set to CSI, WebSphere ND7 eliminates, on both server and client, a call to an interceptor for each request that a client makes. In order to configure the protocol to CSI, the file <Profile_Root_Directory>/properties/sas.client.props must be edited by adding the line shown below:

    com.ibm.CSI.protocol=csiv2

Other possible values for this property are:

- ibm: should be used if the clients connecting to the WAS ND7 environment are hosted in a WebSphere Application Server version 4 or earlier setup
- both: should be used if the clients that communicate with the WAS ND7 environment are hosted in WebSphere Application Server installations of versions 4 or earlier as well as versions 5 or newer

In order to make the change effective, the complete WAS ND7 cell needs to be restarted.

Enforcing client certificates using SSL

When a WebSphere client sends secure requests to an enterprise application hosted in a WAS ND7 setup, the requester can be authenticated either using a user ID and password combination or an SSL certificate. Since the channel is already secure, employing an ID and password to validate the communication adds overhead to both client and server. Therefore, it is recommended to use client SSL certificates to perform the authentication of client requests.

The configuration to enforce the use of certificates for authentication can be done at the global security level or at the security domain level. The recommendation is to do it at the global security level and use this setting in all security domains. The procedure to set the use of certificates over user IDs and passwords at the global security level is as follows:

1. Log on to the ISC (Deployment Manager).
2. Follow the breadcrumb Security | Global security | (Authentication | RMI/IIOP security) CSIv2 inbound communications. Refer to the next screenshot to identify the link to CSIv2 inbound communications.
3. Enforce the use of client SSL certificates. Set the following parameters as shown in the following screenshot: Client certificate authentication (Required), Transport (SSL-required), and Message layer authentication (Never). Note, however, that if a client fails when message layer authentication is set to Never, it may need to be changed to Supported.
4. Ensure the setting is not overridden in security domains. For each security domain defined in your WAS ND7 environment, it is recommended to set the RMI/IIOP security to use the global security settings, as shown in the following screenshot. However, if additional customization of the security domain RMI/IIOP settings is needed, ensure that the values for the CSIv2 Transport Layer and CSIv2 Message Layer are set as shown in step 3.
5. Save the configuration changes.
6. Log off.


Squid Proxy Server: debugging problems

Packt
29 Jul 2011
4 min read
Mostly, we encounter problems that are well known and are a result of configuration glitches or operating system limitations. Those problems can be fixed easily by tweaking the configuration files. However, sometimes we may face problems that cannot be solved directly, or we may not even be able to identify them by simply looking at the log files.

By default, Squid only logs the essential information to cache.log. To inspect or debug problems, we need to increase the verbosity of the logs so that Squid can tell us more about the actions it's taking, which may help us find the source of the problem. We can extract information from Squid about its actions at our convenience by using the debug_options directive in the Squid configuration file. Let's have a look at the format of the debug_options directive:

    debug_options rotate=N section,verbosity [section,verbosity]...

The parameter rotate (rotate=N) specifies the number of cache.log files that will be maintained when Squid logs are rotated. The default value of N is 1. The rotate option helps prevent disk space from being wasted due to excessive log messages when the verbosity level is high.

The parameter section is an integer identifying a particular component of Squid. It can have a special value, ALL, which represents all components of Squid. The verbosity parameter is also an integer representing the verbosity level for each section. Let's have a look at the meaning of the different verbosity levels:

    Verbosity level   Description
    0                 Only critical or fatal messages will be logged.
    1                 Warnings and important problems will be logged.
    2                 Minor problems, recovery, and regular high-level actions will be logged.
    3-5               Almost everything useful is covered by verbosity level 5.
    6-9               Extremely verbose; individual events, signals, and so on are described in detail.

The following is the default configuration:

    debug_options rotate=1 ALL,1

The preceding configuration line sets the verbosity level for all sections of Squid to 1, which means that Squid will try to log the minimum amount of information possible.

The section number can be determined by looking at the source file. In most source files, we can locate a commented line, as shown in the following example, which is from access_log.cc:

    /*
     * DEBUG: section 46    Access Log
     */

The previous comment tells us that the section number for the Access Log is 46. A list of section numbers and corresponding Squid components can be found at doc/debug-sections.txt in Squid's source code.
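For example, to debug an access-logging issue without flooding cache.log with everything else, you could raise the verbosity only for section 46 (Access Log) identified above. A small illustrative snippet for squid.conf, with values chosen arbitrarily:

    # Keep all components at level 1, but log the Access Log component (section 46)
    # at level 5, and keep two rotated copies of cache.log.
    debug_options rotate=2 ALL,1 46,5

After editing squid.conf, the configuration can be reloaded with squid -k reconfigure so the new debug settings take effect without a full restart.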
The following table lists some of the important section numbers for Squid version 3.1.10:

    Section number   Squid components
    0                Announcement Server, Client Database, Debug Routines, DNS Resolver Daemon, UFS Store Dump Tool
    1                Main Loop, Startup
    2                Unlink Daemon
    3                Configuration File Parsing, Configuration Settings
    4                Error Generation
    6                Disk I/O Routines
    9                File Transfer Protocol (FTP)
    11               Hypertext Transfer Protocol (HTTP)
    12               Internet Cache Protocol (ICP)
    14               IP Cache, IP Storage, and Handling
    15               Neighbor Routines
    16               Cache Manager Objects
    17               Request Forwarding
    18               Cache Manager Statistics
    20               Storage Manager, Storage Manager Heap-based replacement, Storage Manager Logging Functions, Storage Manager MD5 Cache Keys, Storage Manager Swapfile Metadata, Storage Manager Swapfile Unpacker, Storage Manager Swapin Functions, Storage Manager Swapout Functions, Store Rebuild Routines, Swap Dir base object
    23               URL Parsing, URL Scheme parsing
    28               Access Control
    29               Authenticator, Negotiate Authenticator, NTLM Authenticator
    31               Hypertext Caching Protocol
    32               Asynchronous Disk I/O
    34               Dnsserver interface
    35               FQDN Cache
    44               Peer Selection Algorithm
    46               Access Log
    50               Log file handling
    51               Filedescriptor Functions
    55               HTTP Header
    56               HTTP Message Body
    57               HTTP Status-line
    58               HTTP Reply (Response)
    61               Redirector
    64               HTTP Range Header
    65               HTTP Cache Control Header
    66               HTTP Header Tools
    67               String
    68               HTTP Content-Range Header
    70               Cache Digest
    71               Store Digest Manager
    72               Peer Digest Routines
    73               HTTP Request
    74               HTTP Message
    76               Internal Squid Object handling
    78               DNS lookups; interacts with lib/rfc1035.c
    79               Disk IO Routines, Squid-side DISKD I/O functions, Squid-side Disk I/O functions, Storage Manager COSS Interface, Storage Manager UFS Interface
    84               Helper process maintenance
    89               NAT / IP Interception
    90               HTTP Cache Control Header, Storage Manager Client-Side Interface
    92               Storage File System

Summary

In this article we took a look at some debugging problems which we may come across while configuring or running Squid.

Further resources on this subject:

- Squid Proxy Server: Tips and Tricks [Article]
- Squid Proxy Server 3: Getting Started [Article]
- Configuring Squid to Use DNS Servers [Article]
- Different Ways of Running Squid Proxy Server [Article]
- Squid Proxy Server: Fine Tuning to Achieve Better Performance [Article]


Microsoft SQL Server 2008 R2: Hierarchies, Collections, and MDS Metadata

Packt
21 Jul 2011
9 min read
Microsoft SQL Server 2008 R2 Master Data Services: Manage and maintain your organization's master data effectively with Microsoft SQL Server 2008 R2 Master Data Services.

The reader is advised to refer to the previous article on Creating and Using Models, since this article is related to it.

Master Data Services includes a Hierarchy Management feature, where we can:

- Browse all levels of a hierarchy
- Move members within a hierarchy
- Access the Explorer grid and all its functionality for all members of a given hierarchy

As we've seen already, there are two types of hierarchies in MDS—Derived Hierarchies and Explicit Hierarchies. We will look at how to create and use both types of hierarchies now.

Derived Hierarchies

In our example scenario, as we have stores in many different cities and states, we have a requirement to create a "Stores by Geography" hierarchy. In order to create the hierarchy, carry out the following steps:

1. Navigate to the System Administration function, which can be accessed using the Master Data Manager home page.
2. Hover over the Manage menu and click on the Derived Hierarchies menu item, which will open the Derived Hierarchy Maintenance page.
3. Click on the green plus icon to add a Derived Hierarchy, which will open the Add Derived Hierarchy page.
4. Enter Stores By Geography as the Derived Hierarchy Name and click on Save. The Edit Derived Hierarchy page will now be displayed, where we can build the hierarchy. On the left-hand side of the screen we can pick entities to be in our hierarchy, whereas the middle pane of the screen displays the hierarchy in its current state. A preview of the hierarchy with real data is also available on the right-hand side.
5. Drag the Store entity from the left-hand side of the screen and drop it onto the red Current Levels : Stores By Geography node in the center of the screen. The choice of entities on the left-hand side will now change to the only two entities that are related to Store, namely City and StoreType.
6. Repeat the drag-and-drop process, but this time drag the City entity onto the red Current Levels node, so that the Current Levels hierarchy is as follows:
7. The Available Entities and Hierarchies pane will now be updated to show the State entity, as this is the only entity related to the City entity. Drag the State entity over to the red Current Levels node, above the City entity.
8. The Available Entities and Hierarchies pane will now be updated to show the Country entity. Drag the Country entity over to the red Current Levels node, above the State entity.

This is the last step in building our Stores By Geography hierarchy, which will now be complete. We will now look at how we can browse and edit our new hierarchy.

Exploring Derived Hierarchies

Before we make any changes to the Derived Hierarchy, we will explore the user interface, so that we are comfortable with how it is used. Carry out the following steps in order to browse the new hierarchy features:

1. Navigate to the home page and select the Explorer function.
2. Within the Explorer function, hover over the Hierarchies menu, where a menu item called Derived: Stores By Geography should appear. Click on this new item, which will display the Derived Hierarchy, as shown below:

The buttons above the hierarchy tree structure are as follows (from left to right):

- Pin Selected Item—Hides all members apart from the selected item and all of its descendants. This option can be useful when browsing large hierarchies.
- Locate Parent of Selected Item—The immediate parent of the selected member could be hidden if someone has chosen to pin an item (as above). Locate Parent of Selected Item will locate and display the member's parent, as well as any other children of the parent.
- Refresh Hierarchy—Refreshes the hierarchy tree to display the latest version, as edits could occur outside the immediate tree structure.
- Show/Hide Names—Toggles the hierarchy view between the member code and name, or just the code. The default is to show the member name and code.
- Show/Hide Attributes—On the right-hand side of the screen (not shown) the children of the selected item are shown in the Explorer grid, along with all their attributes. This button shows or hides the Explorer grid.
- View Metadata—Displays a pop-up window showing the metadata for the selected member. We will discuss metadata towards the end of this article.

3. Select the DE {Germany} member by clicking on it. Note: the checkboxes are not how members are selected; instead, clicking on the member name selects the member.
4. Use the Pin Selected Item button to pin the DE {Germany} member, which will hide the siblings of Germany, as shown below:
5. To now locate the parent of DE {Germany}, and display the parent's other children (for example, USA and United Kingdom), click on DE {Germany}, then click on the Locate Parent of Selected Item button. The hierarchy tree will revert back to the original structure that we encountered.
6. Now that we have returned to the original hierarchy structure, expand the US member until the member CA {California} is visible. Click on this member, which will display some of the cities that we have loaded for the State of California.

Editing multiple entities: the above point illustrates one of the useful features of the hierarchy editor. Although we can edit the individual entities using their respective Explorer grids, with a Derived Hierarchy we can edit multiple entities on a single page.

7. We don't actually need to edit the cities for the moment, but we do want to look at showing and hiding the Explorer grid. Click on the Show/Hide Attributes button to hide the Explorer grid. Click on the button again to make the Explorer grid reappear.
8. Finally, we're able to look at the Metadata for the Derived Hierarchy. Click on the View Metadata button to open the Metadata Explorer, which is shown below. This is where we would look for any auxiliary information about the Derived Hierarchy, such as a description to explain what is in the hierarchy. We'll look at metadata in detail at the end of this article.

We will now look at how we add a new member in a Derived Hierarchy.

Adding a member in a Derived Hierarchy

Adding a member in a Derived Hierarchy achieves exactly the same thing as adding a member in the entity itself. The difference is that the member addition process, when carried out in a Derived Hierarchy, is slightly simplified, as the domain attribute (for example, City in the case of the Store entity) gets automatically completed by MDS. This is because in a Derived Hierarchy we choose to add a Store in a particular City, which negates the need to specify the City itself.

In our example scenario, we wish to open a new Store in Denver. Carry out the following steps to add the new Store:

1. Expand the US {United States} member of the Stores By Geography hierarchy, and then expand the CO {Colorado} member.
2. Click on the 136 {Denver} member.
3. On the far right-hand side of the screen, the Stores for Denver (of which there are none) will be shown. Click on the green plus icon to begin the process of adding a Store.
4. Enter the Name as AW Denver and enter the Code as 052.
5. Click on the save icon to create the member.
6. Click on the pencil icon to edit the attributes of the new member. Note that the City attribute is already completed for us.
7. Complete the remaining attributes with test data of your choice.
8. Click on the save icon to save the attribute values.
9. Click on the green back arrow button at the top of the screen in order to return to the Derived Hierarchy.

Notice that we now have a new Store that exists in the Derived Hierarchy, as well as a new row in the Explorer grid on the right-hand side of the screen. We will now continue to explore the functionality in the hierarchy interface by using Explicit Hierarchies.

Explicit Hierarchies

Whereas Derived Hierarchies rely on the relationships between different entities in order to exist, all the members within an Explicit Hierarchy come from a single entity. The hierarchy is made by creating explicit relationships between leaf members and the consolidated members that are used to give the hierarchy more than one level. Explicit Hierarchies are useful for representing a ragged hierarchy, which is a hierarchy where the leaf members exist at different levels across the hierarchy.

In our example scenario, we wish to create a hierarchy that shows the reporting structures for our stores. Most stores report to a regional center, with the regional centers reporting to Head Office. However, some stores that are deemed to be important report directly to Head Office, which is why we need the Explicit Hierarchy.

Creating an Explicit Hierarchy

As we saw when creating the original Store entity in the previous article, an Explicit Hierarchy can be created for us automatically when we create an entity. While that is always an option, right now we will cover how to do this manually. In order to create the Explicit Hierarchy, carry out the following steps:

1. Navigate to the System Administration function.
2. Hover over the Manage menu and click on the Entities menu item.
3. Select the Store entity and then click on the pencil icon to edit the entity.
4. Select Yes from the Enable explicit hierarchies and collections drop-down.
5. Enter Store Reporting as the Explicit hierarchy name.
6. Uncheck the checkbox called Include all leaf members in mandatory hierarchy. If the checkbox is unchecked, a special hierarchy node called Unused will be created, where leaf members that are not required in the hierarchy will reside. If the checkbox is checked, then all leaf members will be included in the Explicit Hierarchy. This is shown next:
7. Click on the save icon to save the changes to the entity, which will return us to the Entity Maintenance screen, and conclude the creation of the hierarchy.


Microsoft SQL Server 2008 R2 MDS: Creating and Using Models

Packt
21 Jul 2011
7 min read
Microsoft SQL Server 2008 R2 Master Data Services: Manage and maintain your organization's master data effectively with Microsoft SQL Server 2008 R2 Master Data Services.

MDS object model overview

The full MDS object model contains objects such as entities, attributes, members, and collections, as well as a number of other objects. The full MDS object model can be summarized by the following diagram:

As indicated in the diagram, the Model object is the highest-level object in the MDS object model. It represents an MDM subject area, such as Product or Customer. Therefore, it would be normal to have a model called Product, which would contain several different Entities, such as Color, Brand, Style, and an entity called Product itself. Entities have one or more Attributes, and instances of entities are known as Members. Instead of an entity just containing attributes, it is possible within MDS to categorize the attributes into groups, which are known as Attribute Groups.

There are two types of hierarchies within MDS, namely Derived Hierarchies and Explicit Hierarchies. Derived Hierarchies are hierarchies that are created based upon the relationships that exist between entities. For example, the Customer entity may have an attribute called City that is based upon the City entity. In turn, the City entity itself may have a State attribute that is based upon the State entity. The State entity would then have a Country attribute, and so on. This is shown below:

Using the relationships between the entities, a Derived Hierarchy called Customer Geography, for example, can be created that would break customers down by their geographic locations. A visual representation of this hierarchy is as follows:

A Derived Hierarchy relies on separate entities that share relationships with one another. This is in contrast to an Explicit Hierarchy, whose hierarchy members must all come from a single entity. The point to note, though, is that the members in an Explicit Hierarchy can be both Leaf Members and also Consolidated Members. The idea behind Explicit Hierarchies is that they can represent ragged data structures whereby the members cannot be easily categorized as belonging to any one given entity. For example, Profit and Loss accounts can exist at different levels. Both the accounts "Cash in Transit" and "Net Profit" are valid accounts, but one is a low-level account and one is a high-level account, plus there are many more accounts that sit in between the two. One solution is to create "Net Profit" as a consolidated member and "Cash In Transit" as a leaf member, all within the same entity. This way, a full ragged Chart of Accounts structure may be created within MDS.

Models

The first step to get started with Master Data Services is to create a model, as the model is the overall container object for all other objects. We are going to start by creating a practice model called Store—meaning that it will be our single source of retail stores in our fictitious company. Models are created in the Master Data Manager application, within the System Administration functional area. Carry out the following steps to create our Store model:

1. Ensure that the Master Data Manager application is open. To do this, type the URL into your web browser, which will typically be http://servername/MDS/.
2. Click on the System Administration option on the home page, which will take you to the following screen:

The tree view that is shown breaks down each of the models that are installed in the MDS environment, showing entities, attributes, and other objects within the system. The tree view provides an alternative way to edit some of the MDS objects. Rather than using the tree view, we are going to create a model via the menu at the top of the screen.

3. Hover over the Manage menu and choose the Models menu item. The resulting screen will show a list of the current models within MDS, with the ability to add or edit a model.
4. Click on the familiar Master Data Manager green plus icon in order to add our new model, which will produce the following screen:
5. Enter Store as the Model name. Next there are some settings to configure for the model, via three checkboxes. The settings are:

- Create entity with same name as model—This is essentially a shortcut to save creating an entity manually after creating the model. We want an entity called Store to be created, so leave this option checked.
- Create explicit hierarchy with same name as model—Another shortcut, this time to create an Explicit Hierarchy. We will manually create an Explicit Hierarchy later on, so uncheck this box.
- Include all leaf members in mandatory hierarchy—This setting only applies if we are creating an Explicit Hierarchy. If checked, it will ensure that all leaf members must exist within an Explicit Hierarchy. By unchecking the previous option, this option will be unchecked.

6. Click the "Disk" icon to save and create the new model.

Entities and attributes

After creating the new model, we are now in a position to create all the entities and attributes that we need to manage our store data. However, before doing this, we need to plan what entities and attributes are needed. This can be summarized by the following table:

Attributes

Attributes are the building blocks of entities in MDS, and there are three different types, which we need to cover before we can start creating and altering entities. The different types are explained in the following table:

Domain attributes and relationships between entities

As we left the Create entity with same name as model option checked when creating the model, we will have an entity already created called Store. The task now is to edit the Store entity, in order to set up the attributes that we have defined previously. However, before we can complete the Store entity, we actually need to create several of the other entities. This is because we want some of the Store's attributes to be based on other entities, which is exactly why we need domain attributes.

Domain attributes are the method that allows us to create relationships between entities, which we need, for example, in order to create Derived Hierarchies. In addition, they also allow for some good data entry validation as, for example, it would not be possible for a user to enter a new attribute value of "New York" in the City attribute if the member "New York" did not exist in the City entity. If we look at our table of entities and attributes again, there is a column to indicate where we need a domain attribute. If we were building an entity relationship diagram, we would say that a Store has a City and that a City belongs to a State.

Creating an entity

To focus on the Store entity first, before we can say that its structure is complete, we first need to create any entities that the Store entity wants to use as domain attributes.
We can clearly see from the previous table that these are City and StoreType. The Country and State entities are also required as domain attributes by other entities. Therefore, carry out the following steps in order to create the remaining entities:

1. Ensure that you are in the System Administration function.
2. Hover over the Manage menu and click on the Entities menu item.
3. Ensure that the Store model is selected from the Model drop-down.
4. Click the green "plus" icon. This will open the Add Entity screen.
5. Enter the following information on the Add Entity screen, as shown below: enter City as the Entity Name, and choose No to not Enable explicit hierarchies and collections.
6. Click the save icon to create the entity and return to the Entity Maintenance screen.
7. Repeat this process to create the StoreType, Country, and State entities. Once you have finished, the Entity Maintenance screen should look as follows:

Microsoft SQL Server 2008 R2 Master Data Services Overview

Packt
19 Jul 2011
5 min read
Microsoft SQL Server 2008 R2 Master Data Services

Master Data Services overview

Master Data Services is Microsoft's Master Data Management product that ships with SQL Server 2008 R2. Much like other parts of SQL Server, such as Analysis Services (SSAS) or Reporting Services (SSRS), MDS doesn't get installed along with the database engine, but is a separate product in its own right. Unlike SSAS or SSRS, it's worth noting that MDS is only available in the Enterprise and Data Centre Editions of SQL Server, and that the server must be 64-bit.

MDS is a product that has grown from acquisition, as it is based on the +EDM product that Microsoft obtained when it bought the Atlanta-based company called Stratature in 2007. A great deal of work has been carried out since the acquisition, including changing the user interface and putting in a web service layer. At a high level, the new product has the following features:

- Entity maintenance—MDS supports data stewardship by allowing users to add, edit, and delete members. The tool is not specific to a particular industry or area, but instead is generic enough to work across a variety of subject domains.
- Modeling capability—MDS contains interfaces that allow administrative users to create data models to hold entity members.
- Hierarchy management—Relationships between members can be utilized to produce hierarchies that users can alter.
- Version management—Copies of entity data and related metadata can be archived to create an entirely separate version of the data.
- Business rules and workflow—A comprehensive business rules engine is included in order to enforce data quality and assist with data stewardship via workflow. Alerts can be sent to users by e-mail when the business rules encounter a particular condition.
- Security—A granular security model is included, where it is possible, for example, to prevent a given user from accessing certain entities, attributes, and members.

Master Data Services architecture

Technically, Master Data Services consists of the following components:

- SQL Server Database—The database holds the entities, such as Customer or Product, whether they are imported from other systems or created in MDS.
- Master Data Manager—A web-based data stewardship and administration portal that, amongst many other features, allows data stewards to add, edit, and delete entity members.
- Web Service Layer—All calls to the database from the front-end go through a WCF (Windows Communication Foundation) web service. Internet Information Services (IIS) is used to host the web services and the Master Data Manager application.
- Workflow Integration Service—A Windows service that acts as a broker between MDS and SharePoint in order to allow MDS business rules to use SharePoint workflows.
- Configuration Manager—A Windows application that allows key settings to be altered by an administrator.

The following diagram shows how the components interact with one another:

MDS SQL Server database

The MDS database uses a mix of components in order to be the master data store and to support the functionality found in the Master Data Manager, including stored procedures, views, and functions. Separate tables are created both for entities and for their supporting objects, all of which happens on the fly when a new object gets created in Master Data Manager. The data for all entities across all subject areas is stored in the same database, meaning that the database could get quite big if several subject domains are being managed.
The tables themselves are created with a code name. For example, on my local installation, the Product entity is not stored in a table called "Product" as you might expect, but in a table called "tbl_2_10_EN".

Locating entity data: the exact table that contains the data for a particular entity can be found by writing a SELECT statement against the view called viw_SYSTEM_SCHEMA_ENTITY in the mdm schema.

As well as containing a number of standard SQL Server table-valued and scalar functions, the MDS database also contains a handful of .NET CLR (Common Language Runtime)-based functions, which can be found in the mdq schema. The functions utilize the Microsoft.MasterDataServices.DataQuality assembly and are used to assist with data quality and the merging, de-duplication, and survivorship exercises that are often required in a master data management solution.

Some of the actions in MDS, such as the e-mail alerts or the loading of large amounts of data, need to happen in an asynchronous manner. Service Broker is utilized to allow users to continue to use the front-end without having to wait for long-running processes to complete. Although strictly outside the MDS database, SQL Server Database Mail, which resides in the system msdb database, is used as the mechanism to send e-mail alerts to subscribing users.

In addition to the tables that hold the master data entities, the MDS database also contains a set of staging tables that should be used when importing data into MDS. Once the staging tables have been populated correctly, the staged data can be loaded into the master data store in a single batch.

Internet Information Services (IIS)

During the initial configuration of MDS, an IIS Web Application will get created within an IIS website. The name of the application that gets created is "MDS", although this can be overridden if needed. The Web Application contains a Virtual Directory that points to the physical path of <Drive>:\<install location>\WebApplication, where the various components of the Master Data Manager application are stored.

The MDS WCF Service, called Service.svc, is also located in the same directory. The service can be exposed in order to provide a unified access point to any MDS functionality (for example, creating an entity or member, or retrieving all entity members) that is needed by other applications. Master Data Manager connects to the WCF service, which then connects to the database, so this is the route that should be taken by other applications, instead of connecting to the database directly.
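Tying back to the "Locating entity data" tip above, the lookup can be done with a simple query. This is an illustrative sketch only; the view and schema names come from the article, but the exact columns it returns are not listed there, so inspect the full output to identify the column that holds the physical table name for each entity.

    -- Run against the MDS database to map each entity to its physical table.
    -- Column names are not documented in the article, so start with SELECT *.
    SELECT *
    FROM mdm.viw_SYSTEM_SCHEMA_ENTITY;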

The Core HTTP Module in Nginx

Packt
08 Jul 2011
8 min read
Nginx 1 Web Server Implementation Cookbook Over 100 recipes to master using the Nginx HTTP server and reverse proxy
Setting up the number of worker processes correctly
Nginx, like any other UNIX-based server software, works by spawning multiple processes and allows the configuration of various parameters around them as well. One of the basic configurations is the number of worker processes spawned; it is one of the first things that you have to configure in Nginx.
How to do it...
This particular configuration can be found at the top of the sample configuration file nginx.conf:
user www www;
worker_processes 5;
error_log logs/error.log;
pid logs/nginx.pid;
worker_rlimit_nofile 8192;
events {
worker_connections 4096;
}
In the preceding configuration, we can see how the various process configurations work. You first set the UNIX user under which the process runs, then you set the number of worker processes that Nginx needs to spawn, and after that come the file locations where the errors are logged and the PID (process ID) is saved.
How it works...
By default, worker_processes is set at 2. It is a crucial setting in a high-performance environment, as Nginx uses it for the following reasons:
It uses SMP, which allows you to use multi-core or multi-processor systems effectively and gain a definite performance benefit.
An increased number of processes decreases latency when workers get blocked on disk I/O.
It limits the number of connections per process when any of the various supported event types are used. A worker process cannot have more connections than specified by the worker_connections directive.
There's more...
It is recommended that you set worker_processes to the number of cores available on your server. If you know the values of worker_processes and worker_connections, you can easily calculate the maximum number of connections that Nginx can handle in the current setup:
Maximum clients = worker_processes * worker_connections
Increasing the size of uploaded files
Usually, when you are running a site where users upload a lot of files, you will see that when they upload a file that is more than 1MB in size, you get an Nginx error stating "Request Entity Too Large" (413), as shown in the following screenshot. We will look at how Nginx can be configured to handle larger uploads.
How to do it...
This is controlled by one simple part of the Nginx configuration. You can simply paste this in the server part of the Nginx configuration:
client_max_body_size 100M; # M stands for megabytes
The preceding configuration will allow you to upload a 100 megabyte file. Anything more than that, and you will receive a 413. You can set this to any value that is less than the disk space available to Nginx, primarily because Nginx stores the file in a temporary location before forwarding it to the backend application.
There's more...
Nginx also lets us control other factors related to people uploading files to the web application, like timeouts in case the client has a slow connection. A slow client can keep one of your application threads busy and thus potentially slow down your application. This is a problem experienced on all heavy multimedia, user-driven sites, where consumers upload all kinds of rich data such as images, documents, and videos. So it is sensible to set low timeouts.
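Putting the two recipes above together, a minimal sketch of the relevant nginx.conf fragments might look like the following; the numbers are illustrative and should be adjusted to your own core count and upload limits:
worker_processes 4; # one worker per CPU core, as recommended above
events {
    worker_connections 4096; # maximum clients = 4 * 4096 = 16384
}
http {
    server {
        client_max_body_size 100M; # uploads larger than 100 megabytes are rejected with a 413
        ...
    }
}
With four cores and 4,096 connections per worker, this setup tops out at roughly 16,384 concurrent connections, which is the "Maximum clients" calculation shown above.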
client_body_timeout 60; # parameter in seconds
client_body_buffer_size 8k;
client_header_timeout 60; # parameter in seconds
client_header_buffer_size 1k;
Here, the first two settings control the timeout and buffer size used while reading the client request body (the timeout applies when the body is not received within one read step), and the next two do the same for the client request headers. The following table lists out the various directives and limits you can set around client uploading.
Using dynamic SSI for simple sites
With the advent of modern, feature-full web servers, most of them have Server-Side Includes (SSI) built in. Nginx provides easy SSI support that lets you do pretty much all the basic templating work.
How to do it...
Let's take a simple example and start understanding what one can achieve with it.
Add the following code to the nginx.conf file:
server {
.....
location / {
ssi on;
root /var/www/www.example1.com;
}
}
Add the following code to the index.html file:
<html>
<body>
<!--# block name="header_default" -->
the header testing
<!--# endblock -->
<!--# include file="header.html" stub="header_default" -->
<!--# echo var="name" default="no" -->
<!--# include file="footer.html"-->
</body>
</html>
Add the following code to the header.html file:
<h2>Simple header</h2>
Add the following code to the footer.html file:
<h2>Simple footer</h2>
How it works...
This is a simple example where you can see that you can include partials in the larger page and, in addition to that, create blocks within the page as well. The block directive allows you to create silent blocks that can be included later, while the include directive can be used to include HTML partials from other files, or even URL end points. The echo directive is used to output certain variables from within the Nginx context.
There's more...
You can utilize this feature for all kinds of interesting setups where:
You are serving different blocks of HTML for different browser types
You want to optimize and speed up certain common blocks of the site
You want to build a simple site with template inheritance without installing any other scripting language
Adding content before and after a particular page
Today, in most of the sites that we visit, the webpage structure is formally divided into a set of boxes. Usually, all sites have a static header and a footer block. In the following page, you can see the YUI builder generating the basic framework of such a page. In such a scenario, Nginx has a really useful way of adding content before and after it serves a certain page. This potentially allows you to separate the various blocks and optimize their performance individually as well. Let's have a look at an example page: here we want to insert the header block before the content, and then append the footer block.
How to do it...
The sample configuration for this particular page would look like this:
server {
listen 80;
server_name www.example1.com;
location / {
add_before_body /red_block;
add_after_body /blue_block;
...
}
location /red_block/ {
...
}
location /blue_block/ {
....
}
}
This can act as a performance enhancer by allowing you to serve things such as browser-specific CSS from the header block only. There can also be cases where you want to introduce something into the header or the footer at short notice, without modifying your backend application. This provides an easy fix for those situations. This module (http_addition_module) is not installed by default, and it is necessary to enable it when building Nginx.
./configure --with-http_addition_module
Enabling auto indexing of a directory
Nginx has an inbuilt auto-indexing module. Any request where the index file is not found will route to this module. This is similar to the directory listing that Apache displays.
How to do it...
Here is an example of one such Nginx directory listing. It is pretty useful when you want to share some files over your local network. To start auto indexing on any directory, all you need to do is place the following example in the server section of the Nginx configuration file:
server {
listen 80;
server_name www.example1.com;
location / {
root /var/www/test;
autoindex on;
}
}
How it works...
This will simply enable auto indexing when the user types in http://www.example1.com. You can also control some other things in the listings in this way:
autoindex_exact_size off;
This will turn off the exact file size listing and will only show the estimated sizes. This can be useful when you are worried about file privacy issues.
autoindex_localtime on;
This will represent the timestamps on the files as your local server time (it is GMT by default).
This image displays a sample index auto-generated by Nginx using the preceding configuration. You can see the filenames, timestamp, and the file sizes as the three data columns.
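Putting the three directives together, a complete server block for a simple file-sharing setup might look like the sketch below; the root path and location are illustrative:
server {
    listen 80;
    server_name www.example1.com;
    location /share/ {
        root /var/www/test; # files under /var/www/test/share/ will be listed
        autoindex on;
        autoindex_exact_size off; # show human-friendly, estimated sizes
        autoindex_localtime on; # show timestamps in server local time instead of GMT
    }
}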

Nginx Web Services: Configuration and Implementation

Packt
08 Jul 2011
6 min read
Nginx 1 Web Server Implementation Cookbook
Installing new modules and compiling Nginx
Today, most software is designed to be modular and extensible. Nginx, with its great community, has an amazing set of modules out there that let it do some pretty interesting things. Although most operating system distributions have Nginx binaries in their repositories, it is a necessary skill to be able to compile new, bleeding-edge modules and try them out. Now we will outline how one can go about compiling and installing Nginx with its numerous third-party modules.
How to do it...
The first step is to get the latest Nginx distribution, so that you are in sync with the security and performance patches (http://sysoev.ru/nginx/nginx-0.7.67.tar.gz). Do note that you will require sudo or root access to do some of the installation steps going ahead.
Un-tar the Nginx source code. This is simple; you will need to enter the following command:
tar -xvzf nginx-0.7.67.tar.gz
Go into the directory and configure it. This is essential, as here you can enable and disable the core modules that already come with Nginx. The following is a sample configure command:
./configure --with-debug --with-http_ssl_module --with-http_realip_module --with-http_perl_module --with-http_stub_status_module
You can find out more about the other modules and configuration flags by using:
./configure --help
If you get an error, then you will need to install the build dependencies, depending on your system. For example, if you are running a Debian-based system, you can enter the following command:
apt-get build-dep nginx
This will install all the required build dependencies, like the PCRE and TLS libraries. After this, you can simply go ahead and build and install it:
sudo make install
This was the plain vanilla installation! If you want to install some new modules, we take the example of the HTTP subscribe-publish module. Download your module (http://pushmodule.slact.net/downloads/nginx_http_push_module-0.692.tar.gz). Un-tar it at a certain location: /path/to/module. Reconfigure the Nginx installation:
./configure ..... --add-module=/path/to/module
The important part is to point the --add-module flag to the right module path. The rest is handled by the Nginx configuration script. You can continue to build and install Nginx as shown in step 5:
sudo make install
If you have followed steps 1 to 10, it will be really easy for you to install any Nginx module.
There's more...
If you want to check that the module is installed correctly, you can enter the following command:
nginx -V
A sample output is shown in the following screenshot. This basically gives you the compilation flags that were used to install this particular binary of Nginx, indirectly listing the various modules that were compiled into it.
Running Nginx in debug mode
Nginx is a fairly stable piece of software that has been running in production for years and has built a very strong developer community around it. But, like all software, there are issues and bugs that crop up under the most critical of situations. When that happens, it's usually best to reload Nginx with higher levels of error logging and, if possible, in debug mode.
How to do it...
If you want debug mode, then you will need to compile Nginx with the debug flag (--with-debug). In most cases, distributions have packages where Nginx is precompiled with the debug flag.
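Tying the two recipes together, a single configure invocation can enable debug support and pull in a third-party module at the same time; the module path below is illustrative:
./configure --with-debug --with-http_ssl_module --with-http_stub_status_module --add-module=/path/to/nginx_http_push_module
sudo make install
After the binary is installed, nginx -V should list both --with-debug and the --add-module path among the compile flags.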
Here are the various levels of debugging that you can utilize:
error_log LOGFILE [debug | info | notice | warn | error | crit | debug_core | debug_alloc | debug_mutex | debug_event | debug_http | debug_imap];
If you do not set the error log location, it will log to a compiled-in default log location. This logging is in addition to the normal error logging that you can do per site. Here is what the various specific debug flags do:
There's more...
Nginx allows us to log errors for specific IP addresses. Here is a sample configuration that will log errors from 192.168.1.1 and the IP range of 192.168.10.0/24:
error_log logs/error.log;
events {
debug_connection 192.168.1.1;
debug_connection 192.168.10.0/24;
}
This is extremely useful when you want to debug in the production environment, as logging for all cases has unnecessary performance overheads. This feature allows you to avoid setting a global debug level on the error_log, while still being able to see the debug output for requests that match specific IP blocks.
Easy reloading of Nginx using the CLI
Depending on the system that you have, it will offer one clean way of reloading your Nginx setup:
Debian based: /etc/init.d/nginx reload
Fedora based: service nginx reload
FreeBSD/BSD: service nginx reload
Windows: nginx -s reload
All the preceding commands reload Nginx; they send a HUP signal to the main Nginx process. You can send quite a few control signals to the Nginx master process, as outlined in the following table. These let you manage some of the basic administrative tasks.
How to do it...
Let me run you through the simple steps of how you can reload Nginx from the command line.
Open a terminal on your system. Most UNIX-based systems already have fairly powerful terminals, while you can use PuTTY on Windows systems.
Type in ps auxww | grep nginx. This will output something as shown in the following screenshot. If nothing comes up, it means that Nginx is not running on your system.
If you get the preceding output, then you can see the master process and the two worker processes (there may be more, depending on your worker_processes configuration). The important number is 3322, which is the PID of the master process.
To reload Nginx, you can issue the command kill -HUP <PID of the nginx master process>. In this case, the PID of the master process is 3322. This will basically read the configurations again, gracefully close your current connections, and start new worker processes.
You can issue another ps auxww | grep nginx to see the new PIDs for the worker processes (4582, 4583). If the worker PIDs do not change, it means that you may have had a problem while reloading the configuration files. Go ahead and check the Nginx error log.
This is very useful when writing scripts that control the Nginx configuration. A good example is when you are deploying code to production; you will temporarily point the site to a static landing page.
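For scripted reloads, the HUP signal can also be sent using the PID file instead of grepping the process list. The path below assumes the pid logs/nginx.pid setting shown in the earlier worker-process recipe under the default /usr/local/nginx prefix, so adjust it to your own configuration:
# reload nginx gracefully using the PID recorded in the pid file
kill -HUP $(cat /usr/local/nginx/logs/nginx.pid)
# confirm that new worker processes have been spawned
ps auxww | grep nginx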

SQL Server 2008 R2: Multiserver Management Using Utility Explorer

Packt
30 Jun 2011
6 min read
  Microsoft SQL Server 2008 R2 Administration Cookbook Over 70 practical recipes for administering a high-performance SQL Server 2008 R2 system with this book and eBook         Read more about this book       (For more resources on Microsoft SQL Server, see here.) The UCP collects configuration and performance information that includes database file space utilization, CPU utilization, and storage volume utilization from each enrolled instance. Using Utility Explorer helps you to troubleshoot the resource health issues identified by SQL Server UCP. The issues might include mitigating over-utilized CPU on a single instance or multiple instances. UCP also helps in reporting troubleshooting information using SQL Server Utility on issues that might include resolving a failed operation to enroll an instance of SQL Server with a UCP, troubleshooting failed data collection resulting in gray icons in the managed instance list view on a UCP, mitigating performance bottlenecks, or resolving resource health issues. The reader will benefit by referring the previous articles on Best Practices for SQL Server 2008 R2 Administration and Managing the Core Database Engine before proceeding ahead. Getting ready The UCP and all managed instances of SQL Server must satisfy the following prerequisites: UCP SQL Server instance version must be SQL Server 2008 SP2[10.00.4000.00] or higher The managed instances must be a database engine only and the edition must be Datacenter or Enterprise on a production environment UCP managed account must operate within a single Windows domain or domains with two-way trust relationships The SQL Server service accounts for UCP and managed instances must have read permission to Users in Active Directory To set up the SQL Server Utility you need to: Create a UCP from the SQL Server Utility Enroll data-tier applications Enroll instances of SQL Server with the UCP Define Global and Instance level policies, and manage and monitor the instances. Since the UCP itself becomes a managed instance automatically, once the UCP wizard is completed, the Utility Explorer content will display a graphical view of various parameters, as follows: How to do it... To define the global and instance level policies to monitor the multiple instances, use the Utility Explorer from SSMS tool and complete the following steps: Click on Utility Explorer; populate the server that is registered as utility control point. On the right-hand screen, click on the Utility Administration pane. The evaluable time period and tolerance for percent violations are configurable using Policy tab settings. The default upper threshold utilization is 70 percent for CPU, data file space, and storage volume utilization values. To change the policies use the slider-controls (up or down) to the right of each policy description. For this recipe, we have modified the upper thresholds for CPU utilization as 50 percent and data file space utilization as 80 percent. We have also reduced the upper limit for the storage volume utilization parameter. The default lower threshold utilization is 0 percent for CPU, data file space, and storage volume utilization values. To change the policies, use the slider-controls (up only) to the right of each policy description. For this recipe, we have modified (increased) the lower threshold for CPU utilization to 5 percent. Once the threshold parameters are changed, click Apply to take into effect. 
For the default system settings, either click on the Restore Defaults button or the Discard button, as shown in the following screenshot:
Now, let us test whether the defined global policies are working or not. From the Query Editor, open a new connection against a SQL Server instance that is registered as a managed instance on the UCP, and execute the following time-intensive TSQL statements:
create table test (
x int not null,
y char(896) not null default (''),
z char(120) not null default('')
)
go
insert test (x)
select r from (
select row_number() over (order by (select 1)) r
from master..spt_values a, master..spt_values b
) p
where r <= 4000000
go
create clustered index ix_x on test (x, y) with fillfactor=51
go
The script simulates a data load process that will lead to slow performance on the managed SQL Server instance. After a few minutes, right-click on the Managed Instances option in Utility Explorer, which will produce the following screenshot of managed instances:
In addition to the snapshot of utilization information, click on the Managed Instances option in Utility Explorer to obtain information on over-utilized database files on an individual instance (see the next screenshot):
We should now have completed the strategic steps to manage multiple instances using the Utility Explorer tool.
How it works...
The unified view of instances from Utility Explorer is the starting point of application and multi-server management that helps DBAs manage multiple instances efficiently. Within the UCP, each managed instance of SQL Server is instrumented with a data collection set that queries configuration and performance data and stores it back in the UMDW on the UCP every 15 minutes. By default, the data-tier applications automatically become managed by the SQL Server Utility. Both of these entities are managed and monitored based on the global policy definitions or individual policy definitions.
Troubleshooting resource health issues identified by an SQL Server UCP might include mitigating over-utilized CPU on a computer or on an instance of SQL Server, or mitigating over-utilized CPU for a data-tier application. Other issues might include resolving over-utilized file space for database files or resolving over-utilization of allocated disk space on a storage volume.
The managed instances health parameter collects the following system resource information:
CPU utilization for the instance of SQL Server
Database files utilization
Storage volume space utilization
CPU utilization for the computer
Status for each parameter is divided into four categories:
Well-utilized: Number of managed instances of SQL Server that are not violating resource utilization policies.
Under-utilized: Number of managed resources that are violating resource under-utilization policies.
Over-utilized: Number of managed resources that are violating resource over-utilization policies.
No Data Available: Data is not available for managed instances of SQL Server, either because the instance of SQL Server has just been enrolled and the first data collection operation is not completed, or because there is a problem with the managed instance of SQL Server collecting and uploading data to the UCP.
The data collection process begins immediately, but it can take up to 30 minutes for data to appear in the dashboard and viewpoints in the Utility Explorer content pane. However, the data collection set for each managed instance of SQL Server will send relevant configuration and performance data to the UCP every 15 minutes.
Summary This article on SQL Server 2008 R2 covered Multiserver Management Using Utility Explorer. Further resources on this subject: Best Practices for Microsoft SQL Server 2008 R2 Administration Microsoft SQL Server 2008 R2: Managing the Core Database Engine [Article] Managing Core Microsoft SQL Server 2008 R2 Technologies [Article] SQL Server 2008 R2 Technologies: Deploying Master Data Services [Article] Getting Started with Microsoft SQL Server 2008 R2 [Article] Microsoft SQL Server 2008 - Installation Made Easy [Article] Creating a Web Page for Displaying Data from SQL Server 2008 [Article] Ground to SQL Azure migration using MS SQL Server Integration Services [Article] Microsoft SQL Server 2008 High Availability: Understanding Domains, Users, and Security [Article]

Best Practices for Microsoft SQL Server 2008 R2 Administration

Packt
30 Jun 2011
12 min read
  Microsoft SQL Server 2008 R2 Administration Cookbook Over 70 practical recipes for administering a high-performance SQL Server 2008 R2 system with this book and eBook The reader would benefit by referring to the previous article on Managing the Core Database Engine since the following recipes are related to it. Implementing Utility & Non-utility collection sets The Utility information data collection set is installed and automatically started on each instance of SQL Server 2008 R2 when you complete the Utility Control Point (UCP) as we have seen in the previous article. The data is stored in the UMDW database, which is created during the UCP creation. The SQL Server utility collection set is supported side-by-side with Utility collection sets and non-SQL Server utility collection sets. In this recipe, we will go through the implementation tasks to set up the UCP data collection sets for utility and non-utility categories. SQL Server 2008 R2 introduces the Utility Control Point (UCP) with a set of pre-defined utility collection sets that are managed by UMDW. Similarly, SQL Server 2008 manages the data collection to monitor CPU, disk, and memory resources of an instance using a Data Collector that is managed by Management Data Warehouse (MDW). For this recipe, it is necessary to introduce the MDW feature that stands as a non-utility collection set. The Management Data Warehouse is a relational database that contains all the data that is retained. This database can be on the same system as the data collector, or it can be on another computer. The MDW collection set is run in one of the following collection and upload modes: Non-cached mode: Data collection and upload are on the same schedule. The packages start, collect, and upload data at their configured frequency, and run until they are finished. After the packages finish, they are unloaded from memory. Cached mode: Data collection and upload are on different schedules. The packages collect and cache data until they receive a signal to exit from a loop control-flow task. This ensures that the data flow can be executed repeatedly, which enables continuous data collection. Getting ready The new feature of SQL Server 2008 R2—Utility Control Point (UCP)—allows DBAs to set up and collect the utility collection sets. Once the instances are enrolled, the default capacity policies of utilization across the instances or applications are set. It is essential to check that you are using a SQL Server 2008 R2 instance to register the UCP to design the multi-server management feature. How to do it... Using SQL Server Management Studio, these are the steps to implement the utility and nonutility data collection sets: To implement the utility data collection sets, connect to the Utility Explorer where the UCP is registered. Right-click on Managed Instances and choose Enroll instance (refer to the next screenshot). Specify the instance name of SQL Server to enroll. Specify the service account to run the utility collection set. To specify the account to collect data, you can choose SQL Server Agent service account, but for security precautions, it is recommended to propose a new account or existing domain user account with the required privileges. Review prerequisite validation results and selections. Enroll the instance. After completing the Enroll Instance wizard, click on the Managed Instances node in the Utility Explorer navigation pane. On the right-hand side of the Utility Explorer content pane, the enrolled SQL Server instances are displayed. 
Next, to implement the non-utility collection sets, from the SSMS tool, use the Configure Management Data Warehouse wizard to configure storage for collected data.
Create the management data warehouse. You can install the management data warehouse on the same instance of SQL Server that runs the data collector for the utility collection set.
Select the configuration task to install the predefined System Data collection sets.
Configure the MDW storage by selecting the SQL Server instance to host and collect the non-utility collection sets.
Map logins to management data warehouse roles.
Once you have completed the MDW wizard, the data collection information for the utility and non-utility collection sets is displayed under the Management folder, as shown in the next screenshot:
Before we proceed to enable the data collection, it is essential to restart and upload the non-utility collection sets to the Data Collection. To upload and pass validation of the non-utility collection sets, execute the following TSQL from Query Editor:
exec msdb.dbo.sp_syscollector_set_warehouse_database_name NULL
exec msdb.dbo.sp_syscollector_set_warehouse_instance_name NULL
Under the Management folder, right-click on Data Collection and choose Enable the data collector from SSMS, which is shown in the following screenshot:
Once we have completed the MDW wizard, the data collection information will be stored in the data warehouse databases. To ensure that both the utility collection sets exist, review the Data Collection option from SSMS, as shown in the preceding screenshot, which completes the process as a successful implementation of utility and non-utility collection sets on the same instance.
How it works...
The utility data collection sets are installed and automatically started on each instance of SQL Server 2008 R2 when they are configured using Utility Control Point. The UMDW database is created on the instance where the UCP is configured, and the following collection set and items are stored:
Utility Information—DAC Information
Utility Information—SMO Information
Utility Information—Utility Allocated CPU Info
Utility Information—Utility CPU-Memory Related Info
Utility Information—Utility Database FilesInfo
Utility Information—Utility Performance Counters Items
Utility Information—Utility Performance Counters Items1
Utility Information—Utility Volumes Information
The non-utility data collection sets are installed when the MDW wizard is completed, but not started until they are enabled. The required schemas and their objects for the predefined system collection sets are created when MDW is configured. The various UCP and MDW jobs are created under the SQL Server Agent | Jobs folder as follows:
collection_set_1_noncached_collect_and_upload
collection_set_2_collection
collection_set_2_upload
collection_set_3_collection
collection_set_3_upload
collection_set_4_noncached_collect_and_upload
mdw_purge_data_[MDW]
sysutility_get_cache_tables_data_into_aggregate_tables_daily
sysutility_get_views_data_into_cache_tables
sysutility_mi_collect_performance
sysutility_get_cache_tables_data_into_aggregate_tables_hourly
syspolicy_purge_history
sysutility_mi_collect_and_upload
mdw_purge_data_[sysutility_mdw]
The core schema is prefixed by 'core', which describes the tables, stored procedures, and views that are used to manage and identify the collected data. These objects are locked and can only be modified by the owner of the MDW database.
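If you prefer to script the Enable the data collector step above rather than use the right-click option in SSMS, the data collector and an individual collection set can also be enabled from TSQL. This is a minimal sketch using the msdb data collector procedures; the collection set name is an assumption and should be checked against msdb.dbo.syscollector_collection_sets on your instance:
-- enable the data collector on the instance
exec msdb.dbo.sp_syscollector_enable_collector;
-- start one of the predefined system collection sets by name (name assumed; verify it first)
-- select name from msdb.dbo.syscollector_collection_sets;
exec msdb.dbo.sp_syscollector_start_collection_set @name = N'Server Activity';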
The parallel management of SQL Server Utility collection sets (utility and non-utility) requires a preparation on the instance where UCP information is stored and the best practice is to customize the data-collection frequency to avoid any overlap with the MDW data collection schedule. The data collection store contains server activity for all the instances that are configured to manage and host the operating system, such as percent CPU, memory usage, disk I/O usage, network usage, SQL Server waits, and SQL Server activity. Designing and refreshing a Scalable Shared database Designing a Scalable Shared Database (SSD) feature in SQL Server 2008 R2, allows the DBAs to scale out a read-only database (reporting database), which is a copy of a production database, built exclusively for reporting purposes. SSD feature has been part of SQL Server from 2005 Enterprise Edition onwards, that has been enhanced since SQL Server 2008 and this is supported in Enterprise edition and Data Center editions only. To host this reporting database, the disk volumes must be dedicated and read-only, and the scalable shared database feature will permit the smooth update process from production database to the reporting database. The internals behind such a process of building or refreshing a reporting database are known as the build phase or refresh phase, depending on whether a new reporting database is being built or a stale reporting database is being refreshed. The validity of a scalable shared database begins from building a reporting database on a set of reporting volumes and that reporting data eventually becomes too outdated to be useful, which means that the stale database requires a data-refresh as part of each update cycle. Refreshing a stale reporting database involves either updating its data or building a completely new, fresh version of the database. This scalability feature is supported in Enterprise Edition and Data Center editions only. This recipe will cover how to design and refresh a reporting database that is intended for use as a scalable shared database. Getting ready Keeping the reporting database refreshed is a prerequisite as part of each update cycle. The key aspect of having an updated reporting database can be achieved by using the data-copy method, which requires the following: Create or copy a database by designing a SSIS package to use. Execute SQL Task method or Transfer Database task method. From SSMS, use SQL Server Import and Export wizard to copy required objects for the reporting purpose. Restore a backup of the production database into the reporting volume, which will involve a full database backup file to be used. The essential components such as, SAN storage hardware, processing environment, and data access environment must be used. The reporting database must have the same layout as the production database, so we need to use the same drive letter for the reporting volume and the same directory path for the database. Additionally, verify that the reporting servers and the associated reporting database are running on identical platforms. How to do it... To design and refresh a reporting database, you will need to complete the following steps on the production SQL Server instance: Unmask the Logical Unit Number (LUN) on the disks where the Production database is stored. (Refer to the hardware vendor's manual). Mount each reporting volume and mark it as read-write. Obtain the disk volume information. 
Logon remotely to the server and open a command prompt window to run the following: DiskPart list volumes Use the DiskPart utility to mount the volumes, then on that command prompt window run the following: DISKPART The DiskPart utility will open a prompt for you to enter the following commands: DISKPART> select volume=<drive-number> DISKPART> assign letter=<drive-letter> DISKPART> attribute clear readonly DISKPART> exit The <drive-number> is the volume number assigned by the Windows operating system. The <drive-letter> is the letter assigned to the reporting volume. To ensure that data files are accessible and disks are correctly mounted, list the contents of the directory using the following command from the command prompt: DIR <drive-letter>:<database directory> As we are refreshing an existing reporting database, attach the database to that server instance using SSMS. On Query Editor, enter the following TSQL statements: ALTER DATABASE AdventureWorks2008R2 SET READ_WRITE GO ALTER DATABASE AdventureWorks2008R2 SET RECOVERY FULL, PAGE_VERIFY CHECKSUM; GO Detach the database from that server instance using the sp_detach_db statement from Query Editor. Now, we have to mark each reporting volume as read-only and dismount from the server. Go to the command prompt window and enter the following commands: DiskPart DISKPART> select volume=<drive-number> DISKPART> attribute set readonly DISKPART> remove DISKPART> exit To ensure that the reporting volume is read-only, you should attempt to create a file on the volume. This attempt must return an error. Next, go to the command prompt window and enter the following commands: DiskPart DISKPART> select volume=<drive-number> DISKPART> assign letter = <drive letter> DISKPART> exit The <drive-letter> is the letter assigned to the reporting volume. Attach the database to one or more server instances on each of the reporting servers using the sp_attach_db statement or SSMS tool. Now, the reporting database is made available as a scalable shared database to process the queries from the application. How it works... Using the available hardware vendor-specific servers and disk volumes, the scalable shared database features allow the application to scale out a read-only database built exclusively for reporting purposes. The 'build' phase is the process of mounting the reporting volume on the production server and building the reporting database. After the reporting database is built on the volume, using the defined data-copy methods, the data is updated. Once it is completed, the process of setting each reporting volume to read-only and dismount begins. The 'attach' phase is the process of making the reporting database available as a scalable shared database. After the reporting database is built on a set of reporting volumes, the volumes are marked as read-only and mounted across multiple reporting servers. The individual reporting server service instance will use the reporting database that is attached. There's more... The Scalable Shared Database feature's best practice recommendation: On the basis of hardware, there is no limit on the number of server instances per database; however, for the shared database configuration, ensure that a maximum of eight servers per database are hosted. The SQL Server instance collation and sort order must be similar across all the instances. If the relational or reporting database is spread across the shared servers, then ensure to test and deploy a synchronized update then a rolling update of the scalable shared database. 
Also, scaling out this solution is possible in SQL Server 2008 Analysis Services with the Read-Only Database capability.  
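As a concrete illustration of the data-copy and attach steps described above, the following TSQL sketch refreshes the reporting copy from a full backup and then attaches it on a reporting server. The file paths, backup location, and logical file names are illustrative assumptions and must match your own reporting volume layout:
-- on the production server: refresh the reporting copy from a full backup
RESTORE DATABASE AdventureWorks2008R2
FROM DISK = N'G:\Backups\AdventureWorks2008R2.bak'
WITH MOVE N'AdventureWorks2008R2_Data' TO N'R:\ReportingDB\AdventureWorks2008R2_Data.mdf',
     MOVE N'AdventureWorks2008R2_Log' TO N'R:\ReportingDB\AdventureWorks2008R2_Log.ldf',
     REPLACE;
-- on each reporting server: attach the database from the read-only reporting volume
CREATE DATABASE AdventureWorks2008R2
ON (FILENAME = N'R:\ReportingDB\AdventureWorks2008R2_Data.mdf'),
   (FILENAME = N'R:\ReportingDB\AdventureWorks2008R2_Log.ldf')
FOR ATTACH;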

Microsoft SQL Server 2008 R2: Managing the Core Database Engine

Packt
29 Jun 2011
9 min read
Microsoft SQL Server 2008 R2 Administration Cookbook Over 70 practical recipes for administering a high-performance SQL Server 2008 R2 system with this book and eBook         Implementing Central Management feature enhancements Central Management Server (CMS) is a feature that enables the DBA to administer multiple servers by designating a set of server groups, which maintain the connection information for one or more instances of SQL Server. In this recipe, we will go through the implementation strategy for the CMS feature in your SQL Server data platform. Before we continue into the recipe, it is essential for you to understand the core functionality of this feature, which uses the multiple features from the SQL Server core database engine: SQL Server system repository Manage the SQL Server farm from one location Easy reporting about system state, configuration, and performance Getting ready As per the feature characteristics described previously, there are three important components in the system. These components are: Storage Execution Reporting Storage is used to store the system repository, Execution manages the SQL Server instances, and Reporting shows the statistical information. The CMS feature implementation is a dual-process (setup and configuration) making use of SQL Server components. The prerequisite of services is to install SQL Server 2008 R2 and Reporting Services. To support the extensive features of the core database engine Enterprise Edition, it is recommended that the CMS be a single point of resource to manage the multiple instances of SQL Server. However, for cost purposes, you can still make use of the Standard Edition. CMS is introduced in SQL Server 2008, as earlier versions of SQL Server cannot be designated as a Central Management Server. The main characteristic of CMS is to execute TSQL and Policy-based management policies at the same time against these designated server groups. How to do it... To get started, we will need to complete the following steps to implement the central management server feature to the existing SQL Server data platform: SQL Server Management Studio is the main tool to designate the CMS feature. Open SQL Server Management Studio, on the View menu, click Registered Servers. In Registered Servers, expand Database Engine, right-click Central Management Servers, point to New, and then click Central Management Servers. In the New Server Registration dialog box, register the instance of SQL Server that will be the central management server. In Registered Servers, right-click the central management server, point to New, and then click New Server Group. Type a group name and description, and then click OK. In Registered Servers, right-click the central management server group, and then click New Server Registration. In the New Server Registration dialog box, register one or more instances of SQL Server that will be members of this server group, see the following screenshot: After you have registered a server, the Central Management Server (CMS) will be able to execute queries against all servers in the group at the same time. TSQL statements and Policy-based management policies can be executed at the same time against the server groups; CMS maintains the connection information for one of more instances of SQL Server. It is essential to consider the effective permissions on the servers in the server groups, which might vary; hence, it is best to use Windows Authentication in this case. 
To enable the single-point of resource to execute a single query on multiple CMS servers, it is easier to access the registration properties of a group of servers. Using that specific server group, there are multiple options of new query, object explorer, evaluate policies, and import policies. The CMS feature is a powerful manageability feature and is easy to set up. However, it is potentially dangerous to implement a policy that may not be suitable for a set of server groups. Therefore, it is best to represent the CMS query editor with a specific color that can easily catch our eye to avoid any 'unintentional execution of queries or policies'. To enable such a representation follow these steps: Open SQL Server Management Studio, on the Tools menu, click on Options. In Options, select Editor Tab and Status Bar. Select the Status Bar Layout and Colors section from the right-hand side of the options window. In the following screenshot, we can see that we have selected the Pink color, which highlights that extra care is needed before executing any queries, as shown in the next screenshot: Now, let us evaluate how to create a CMS server group, evaluate policies, and execute queries on multiple servers. Open SQL Server Management Studio; on the View menu, click on Registered Servers, expand Database Engine, right-click Central Management Servers, point to New, and then click Central Management Servers. In the New Server Registration dialog box, register the instance of SQL Server that will be the Central Management Server. In Registered Servers, right-click on Central Management Server. Point to New and click on New Server Group. Now type a group name and description, for instance, UAT or Production and click OK. See the following screenshot: To register the servers to these individual server groups, in Registered Servers right-click on Central Management Server Group and then click New Server Registration. In the New Server Registration dialog box, register one or multiple instances that are applicable to the relevant server groups, such as Development or Production. As a best practice, it is ideal to create individual groups within CMS to enable a single-query execution against multiple servers as in the following screenshot: SQL Server 2008 R2 provides a 'Policy Based Management framework' that consists of multiple sets of policy files that you can import as best practice policies. You can then evaluate the policies against a target set that includes instances, instance objects, databases, or database objects. You can evaluate policies manually, set policies to evaluate a target set according to a schedule, or set policies to evaluate a target set according to an event. To evaluate a policy complete the following steps: To evaluate the policy against multiple configuration targets in the SQL Server Management Studio, click on the View menu to select Registered Servers and expand the Central Management Server. Right-click on the Production server group and click on Evaluate Policies. The Evaluate Policies screen will be presented as shown in the next screenshot. Click on Source to Choose Source. 
The relevant best practices policies are stored as XML files under %:Program FilesMicrosoft SQL Server100ToolsPoliciesDatabaseEngine1033 directory, which is shown in the next screenshot: For this recipe, we have chosen to select the policy from SQL Server 2008 R2 Best Practices Analyzer that is downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=0fd439d7-4bff-4df7-a52f-9a1be8725591 and installed on the server. Let us evaluate the policy of Backup and Data File Location against the production instances. This policy will perform the steps to: Check if the database and the backups are on separate backup devices. If they are on the same backup device, and the device that contains the database fails, your backups will be unavailable. Putting the data and backups on separate devices optimizes the I/O performance for both the production use of the database and writing the backups, as shown in the following screenshot. (This is one of the Best Practices of Maintenance policies from Microsoft SQL Server 2008 R2 Best Practices Analyzer.) The ease of executing a single query against multiple-servers at the same time is one of my favorite manageability features. The results that are returned with this query can be combined into a single results pane, or in a separate results pane, which can include additional columns for the server name and the login that is used by the query on each server. To accomplish this task, let us execute a query to obtain the Product Version, Product Level, Edition, and EngineEdition that are registered against the Production server group. In SQL Server Management Studio, on the View menu, click Registered Servers. Expand a Central Management Server, right-click a server group, point to Connect, and then click New Query. In Query Editor, execute the following TSQL statement: SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion, SERVERPROPERTY('ProductLevel') AS ProductLevel, SERVERPROPERTY('Edition') AS Edition, SERVERPROPERTY('EngineEdition') AS EngineEdition; GO The results are displayed as follows: From the preceding screenshot, the results with a pink bar at the bottom state that this is a Production group. To change the multi-server results options, in the Management Studio, on the Tools menu, click Options. Expand the Query Results, expand SQL Server, and then click Multi-server Results. On the Multi-server Results page, specify the option settings that you want, and then click OK. We now should have successfully completed the CMS feature implementation into the SQL Server data platform. How it works... The SQL Server instance that is designated as a Central Management Server maintains server groups, which maintain the connection information for one or more instances of SQL Server. CMS and all subsidiary servers on the network are registered using Windows Authentication only; in case local server groups are registered using Windows Authentication and SQL Server Authentication. The storage for centralized management server groups, connection information, and authentication details are available from themsdb system database in: dbo.sysmanagement_shared_registered_servers_internal dbo.sysmanagement_shared_server_groups_internal system tables dbo.sysmanagement_shared_registered_servers dbo.sysmanagement_shared_server_groups system views The user and permissions information on the registered servers may vary based on the user permission that uses Windows Authentication connection. 
For instance, in our recipe, on the dbia-ssqa\SQL2K8 instance the login is a member of the SYSADMIN fixed server role, and on the DBIA-SSQA\SSQA instance the login is a member of the DBCREATOR fixed server role. In this case, we may not be able to run certain SYSADMIN-privileged statements on the DBIA-SSQA\SSQA instance.
There's more...
Using SQL Server 2008 R2 Management Studio with CMS, we can manage instances of lower SQL Server versions, such as 2005 (90), 2000 (80), and 7.0 (70), which makes it truly a central management point for instances.
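If you want to check what has been registered under a CMS without opening the Registered Servers window, the shared registration views listed above can be queried directly on the CMS instance. A minimal sketch (SELECT * is used because the exact column layout should be inspected on your build before filtering):
-- list the server groups and registered servers stored by the Central Management Server
SELECT * FROM msdb.dbo.sysmanagement_shared_server_groups;
SELECT * FROM msdb.dbo.sysmanagement_shared_registered_servers;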

Getting Started with Microsoft SQL Server 2008 R2

Packt
09 Jun 2011
9 min read
  Microsoft SQL Server 2008 R2 Administration Cookbook Over 70 practical recipes for administering a high-performance SQL Server 2008 R2 system Introduction Microsoft SQL Server 2008 has opened up a new dimension within data platforms and SQL Server 2008 R2 has been developed on the areas of core Database Platform and rich Business Intelligence. On the core database environment, SQL Server 2008 R2 advances the new enhancements as a primary goal of scalability and availability for highly transactional applications on enterprise-wide networks. On Business Intelligence platforms, the new features that are elevated include Master Data Management (MDM), StreamInsight, PowerPivot for Excel 2010, and Report Builder 3.0. The SQL Server 2008 R2 Installation Center includes system configuration checker rules to ensure the deployment and installation completes successfully. Further, the SQL Server setup support files will help to reduce the software footprint for installation of multiple SQL instances. This article begins with SQL Server 2008 R2 version's new features and enhancements, and adding the service pack features using Slipstream technology. Then an explanation towards how best the master data services can help in designing and adopting key solutions, working with data-tier applications to integrate development into deployment, and an explanation of how best the federated servers enhancement can help to design highly scalable applications for data platforms. Adding SQL Server 2008 R2 Service Pack features using Slipstream technology The success of any project relies upon the simpler methods of implementation and a process to reduce the complexity in testing to ensure a successful outcome. This can be applied directly to the process of SQL Server 2008 R2 installation that involves some downtime, such as the reboot of servers. This is where the Slipstream process allows other changes to the databases or database server. This method offers the extension of flexibility to upgrade the process as an easier part, if there are minimal changes to only those required for the upgrade process. The following recipe is prepared to enable you to get to know Slipstream. Slipstream is the process of combining all the latest patch packages into the initial installation. The major advantage of this process is time, and the capability to include all the setup files along with service pack and hotfixes. The single-click deployment of Slipstream helps us to merge the original source media with updates in memory and then install the update files to enable multiple deployments of SQL Server 2008 R2. Getting Ready In order to begin adding features of SQL Server using Slipstream, you need to ensure you have the following in place: .NET Framework 3.5 Service Pack 1: It helps improvements in the area of data platform, such as ADO.NET Entity Framework, ADO.NET data services, and support for new features of SQL Server 2008 version onwards. You can download .NET Framework 3.5 Service Pack 1 from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=ab99342f-5d1a-413d-8319-81da479ab0d7&displaylang=en. Windows Installer 4.5: It helps the application installation and configuration service for Windows, which works as an embedded chainer to add packages to a multiple package transaction. The major advantage of this feature enables an update to add or change custom action, so that the custom action is called when an update is uninstalled. 
You can download the Windows Installer 4.5 redistributable package from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=5A58B56F-60B6-4412-95B9-54D056D6F9F4.
SQL Server setup support files: These install SQL Server Native Client, which contains the SQL OLE DB provider and the SQL ODBC driver as a native dynamic link library (DLL) supporting applications that use native code APIs to SQL Server.
How to do it...
Slipstream is a built-in ability of the Windows operating system and, since the release of SQL Server 2008 Service Pack 1, it is included. The best practice is to use the Slipstream Service Pack as an independent process for Service Pack installation, Cumulative Update patching, and Hotfix patching. The key to Slipstream success is to ensure the following steps succeed:
The prerequisite steps (mentioned in the earlier sections) are completed.
In the case of multiple language instances of SQL Server, we need to ensure that we download the correct service pack language from http://www.microsoft.com/downloads/en/ that suits the instance. The Service Pack files are downloaded separately for each platform, such as X86 for 32-bit, X64 for 64-bit, and IA64 for the Itanium platform.
To perform the Slipstream Service Pack process, you need to complete the following steps:
Create two folders on the local server: SQL2K8R2_FullSP and SQL2K8R2SP.
Obtain the original SQL Server 2008 R2 setup source media and copy it to the SQL2K8R2_FullSP folder.
Download Service Pack 1 from the Microsoft Downloads site and save it in the SQL2K8R2SP folder, as per the platform architecture:
SQLServer2008SP1-KB968369-IA64-ENU.exe
SQLServer2008SP1-KB968369-x64-ENU.exe
SQLServer2008SP1-KB968369-x86-ENU.exe
Extract the package file using Windows Explorer or using a command prompt operation, as shown in the following screenshot:
In case the platform consists of multiple SQL instances with different architectures, for instance SQL Server 2008 R2 Enterprise Edition 64-bit as a default instance and SQL Server 2008 R2 Standard Edition as a named instance, then make sure you download the relevant architecture file from http://www.microsoft.com/downloads/en/ as stated previously and extract it to the relevant folders. This is the first checkpoint to proceed further and the key to ensuring the original setup media is updated correctly.
Copy the executable and localized resource file from the extracted location to the original source media location using the robocopy utility, which is available from Windows Server 2008 onwards (a robocopy sketch follows below). Copy all the files except the module program file that is executed by various programs and applications in Windows operating systems. It is important to ensure the correct architecture files are copied, such as the X64- and X86-related files.
In addition to the initial checkpoint, this additional checkpoint is required in order to ensure the correct path is specified, which will be picked up by Slipstream during the setup of SQL Server 2008 R2 and the Service Pack installation. The defaultsetup.ini file is the key that guides the Slipstream process to install the RTM version and Service Pack files. The file can be located within the SQL2K8R2_FullSP folder as per the architecture. From Windows Explorer, go to the SQL2K8R2_FullSP folder and open the defaultsetup.ini file to add the correct path for the PCUSOURCE parameter. The file can be located from the SQL Server setup folder location for the processor; for instance, for the 32-bit platform the file is available in the \\servername\directory\SQL Server 2008 R2\X86 folder.
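For the copy step mentioned above, a robocopy sketch along these lines is commonly used to merge the extracted service pack files into the original media. The folder names follow this recipe's SQL2K8R2_FullSP and SQL2K8R2SP convention (assuming the package was extracted into SQL2K8R2SP), and the excluded file name is an assumption based on the usual Slipstream guidance, so verify it against your extracted package:
rem copy the service pack's Setup.exe and localized resource file over the original media
robocopy C:\SQL2K8R2SP C:\SQL2K8R2_FullSP Setup.exe Setup.rll
rem copy the remaining architecture-specific files, excluding the chainer package data file
robocopy C:\SQL2K8R2SP\x64 C:\SQL2K8R2_FullSP\x64 /XF Microsoft.SQL.Chainer.PackageData.dll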
The previous screenshot represents the file existence within the server, to ensure that the matching SQL Server Product ID (license key) is supplied. There is more attached to the process if the file does not exist, there is no harm to the Slipstream process, the file can be created at the original folder defined in the following steps. It is essential that the license key (product ID) and PCUSource information is included as follows: ;SQLSERVER2008 Configuration File [SQLSERVER2008] PID="??" PCUSOURCE=?? Now, the PCUSOURCE value should consist of the full path of Service pack files that are copied during the initial step, the entry should be as follows: add PCUSOURCE="{Full path}PCU". The full path must include the absolute path to the PCU folder, for instance, if the setup files exist in local folder the path must be as follows: <drivename>SQLServer2008R2_FullSP If that folder is shared out, then the full path must be: MyServerSQLServer2008_FullSP1 The final step of this Slipstream process is to execute the setup.exe from SQL2K8R2_FullSP folder. How it works... The Slipstream steps and installation process are a two-fold movement. Slipstream uses the Remote Installation Services (RIS) technology of Windows Server services to allow configuration management to be automated. The RIS process is capable of downloading the required files or images from the specific path to complete the installation process. The SQL Server 2008 R2 setup runs a pre-check before preceding the installation. The System Configuration Check (SCC) application scans the computer where the SQL Server will be installed. The SCC checks for a set of conditions that prevent a successful installation of SQL Server services. Before the setup starts the SQL Server installation wizard, the SCC executes as a background process and retrieves the status of each item. It then compares the result with the required conditions and provides guidance for the removal of blocking issues. The SQL Server Setup validates your computer configuration using a System Configuration Checker (SCC) before the Setup operation completes using a set of check-parameters that will help to resolve the blocking issues. The sample list of check-parameters is as follows: The following are some of the additional checks that SCC performs to determine if the SQL Server editions in an in-place upgrade path are valid: Checks the system databases for features that are not supported in the SQL Server edition to which you are upgrading Checks that neither SQL Server 7.0 nor SQL Server 7.0 OLAP Services is installed on the server SQL Server 2008 or higher versions are not supported on the server that has SQL Server 7.0. Checks all user databases for features that are not supported by the SQL Server edition Checks if the SQL Server service can be restarted Checks that the SQL Server service is not set to Disabled Checks if the selected instance of SQL Server meets the upgrade matrix requirements Checks if SQL Server Analysis Services is being upgraded to a valid edition SCC checks if the edition of the selected instance of SQL Server is supported for 'Allowable Upgrade Paths' There's more... As the prerequisite process of Slipstream is completed, we need to ensure that the installation of SQL Server 2008 R2, Service Pack, and Hotfixes patches are applied with the setup steps. 
To confirm that the workflow has been followed correctly, double-click the setup.exe file in the SQL2K8R2_FullSP folder to continue the installation of the RTM version, Service Pack, and required hotfix patches. While continuing the setup, at the Installation Rules screen the SCC rule checks the Update Setup Media Language Compatibility value, which must be passed in order to proceed, as shown in the following screenshot.

If you do not see the update setup media language rule, the same information can be obtained once the installation process is completed. The complete steps and final result of setup are logged to a text file under the folder C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log. The log file is saved as Summary_<MachineName>_Date_Time.txt, for example, 'Summary_DBiASSQA_20100708_200214.txt'.
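Once setup finishes, the slipstream result can be double-checked from PowerShell by reading the latest summary log and querying the patched instance's version. This is a hedged sketch: the instance name is an assumption, and Invoke-Sqlcmd assumes the SQL Server 2008 R2 cmdlet snap-in is registered on the machine.

# Show the most recent setup summary log.
$logDir = "C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log"
Get-ChildItem $logDir -Filter "Summary*.txt" |
    Sort-Object LastWriteTime -Descending |
    Select-Object -First 1 |
    Get-Content

# Confirm the product version and level (for example RTM versus SP1) on the
# patched instance; the instance name below is an assumption.
Add-PSSnapin SqlServerCmdletSnapin100 -ErrorAction SilentlyContinue
Invoke-Sqlcmd -ServerInstance "DBIA-SSQA\SQL2K8R2" -Query "SELECT SERVERPROPERTY('ProductVersion') AS Version, SERVERPROPERTY('ProductLevel') AS Level;"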
article-image-managing-core-microsoft-sql-server-2008-r2-technologies
Packt
27 May 2011
10 min read
Save for later

Managing Core Microsoft SQL Server 2008 R2 Technologies

Microsoft SQL Server 2008 R2 Administration Cookbook
Over 70 practical recipes for administering a high-performance SQL Server 2008 R2 system

Introduction
SQL Server 2008 R2 has a flurry of new enhancements added to the core database engine and the business intelligence suite. The new enhancements within the core database engine are: SQL Azure connectivity (SQLAzure), Data-Tier Application (DAC PACK), SQL Server Utility (UCP), and network connectivity. In addition to the new features and internal enhancements, SQL Server 2008 R2 includes new developments in the asynchronous messaging subsystem, such as Service Broker (SB), and external components such as Master Data Services (MDS), StreamInsight, Reporting Services with SharePoint integration, and PowerPivot for Analysis Services. These recipes involve the planning, design, and implementation of the features that have been added, and they are important to the management of the core technologies of SQL Server 2008 R2.

Planning and implementing Self-Service Business Intelligence services
Self-Service Business Intelligence (BI) is the new buzzword in the data platform, a new paradigm alongside the existing BI functionalities. Using Microsoft's Self-Service BI, anyone can easily build BI applications using traditional desktop tools such as Office Excel and specialized services such as SharePoint. The BI application can be built to manage the published applications in a common way and track data usage, with the analytical data connected to its source. Data customization can be accomplished easily by sharing data in a controlled way, where customers can access it from a web browser (intranet or extranet) without using Office applications or server applications. External tasks such as security administration and deployment of new hardware are accomplished using the features of SQL Server 2008 R2 and the Windows Server 2008 operating system.

Self-Service BI can be implemented using PowerPivot, which has two components working together: PowerPivot for Excel and PowerPivot for SharePoint. PowerPivot for Excel is an add-in that enhances the capabilities of Excel for users and brings the full power of SQL Server Analysis Services right into Excel, whereas PowerPivot for SharePoint extends Office SharePoint Services to share and manage the PowerPivot applications that are created with PowerPivot for Excel. In this recipe, we will go through the steps that are required to plan and implement PowerPivot for Excel and PowerPivot for SharePoint.

Getting ready
PowerPivot for Excel is a component of SQL Server 2008 R2 and an add-in for Excel 2010 from the Office 2010 suite, installed along with the Office Shared Features. To get started you will need to do the following:

- Download the PowerPivot for Excel 2010 add-in from: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=e081c894-e4ab-42df-8c87-4b99c1f3c49b&displaylang=en. If you install the 32-bit version of Excel, you must use the 32-bit version of PowerPivot; if you install the 64-bit version of Excel, you must use the 64-bit version of PowerPivot.
- To test the PowerPivot features, in addition to the Excel add-in you need to download the sample databases for SQL Server 2008 R2. They can be downloaded from: http://msftdbprodsamples.codeplex.com/wikipage?title=Installing%20SQL%20Server%202008R2%20Databases.

In this recipe, the sample database built on SQL Server Analysis Services 2008 R2, based on an imaginary company, will be referred to.
It is available for download from: http://www.microsoft.com/downloads/en/details.aspx?displaylang=en&FamilyID=868662dc-187a-4a85-b611-b7df7dc909fc. Using the Windows Installer (MSI) package, you can install the SQL Server 2008 R2 sample databases. However, you must make sure that your SQL Server instance meets the following prerequisites:

- Full-Text Search must be installed, with the SQL Full-text Filter Daemon Launcher service running.
- FILESTREAM must be enabled.

To install these prerequisites on existing SQL Server instances, refer to http://msftdbprodsamples.codeplex.com/wikipage?title=Database%20Prerequisites&referringTitle=Installing%20SQL%20Server%202008R2%20Databases.

For the PowerPivot for SharePoint component, keep the following in mind:

- PowerPivot for SharePoint can be installed using the SharePoint 2010 setup program.
- SharePoint 2010 is only supported on Windows Server 2008 Service Pack 2 or higher and Windows Server 2008 R2, and only on the x64 platform.
- There are two setup options: a New Farm install and an Existing Farm install. The New Farm install is typically used for a single-machine install, where PowerPivot takes care of installing and configuring all the relevant services for you.
- To view PowerPivot workbooks published to the PowerPivot Gallery, you will need Silverlight. Silverlight is not available in a 64-bit version, so you must use a 32-bit web browser to see PowerPivot workbooks using the PowerPivot Gallery's special views.
- The Self-Service Analytics solution describes the steps required to analyze sales and promotions data and share the analysis with other users. This solution consists of two documents and one sample data file (an Access database and Excel workbooks), which can be downloaded from: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=fa8175d0-157f-4e45-8053-6f5bb7eb6397&displaylang=en.

How to do it...
In this recipe, we will go through the steps required to plan and implement PowerPivot for Excel and PowerPivot for SharePoint:

1. Connect to your SQL Server relational database server using SQL Server Management Studio and restore the Contoso retail sample database on the SQL Server instance.
2. Office Shared Features installs VSTO 4.0, which is needed as a prerequisite for PowerPivot for Excel. To install the client application once the download is completed, run the setup program (PowerPivot_for_Excel.msi), which is a self-explanatory wizard installation.
3. The initial PowerPivot installation for the Excel 2010 program requires COM Add-ins activation. To do so, in the Excel worksheet click File | Options and select the Add-Ins page. On the Add-Ins page, from the Manage drop-down list select COM Add-ins and click on the Go button. Finally, select the PowerPivot for Excel option and click the OK button to display the PowerPivot tab in the Excel 2010 sheet.
4. You can open the PowerPivot window from an Excel file by clicking on the PowerPivot tab on the Excel ribbon. Launch Excel 2010 from All Programs | Microsoft Office | Microsoft Excel 2010. On the PowerPivot tab, click on the PowerPivot Window button, which will open a PowerPivot window, as shown in the preceding screenshot. The PowerPivot window helps you with the key operations of importing, filtering, and analyzing data, as well as creating certain Data Analysis Expressions (DAX) calculations. Let us see how PowerPivot provides several methods to import and enhance data by building a Self-Service BI application.
5. PowerPivot for SharePoint configures your SharePoint 2010 as part of the New Farm install option.
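Before moving on to verify the SharePoint side, the database prerequisites listed above and the Contoso restore from step 1 can be scripted. This is only a sketch under stated assumptions: the instance name, the backup path, the ContosoRetailDW database and file names, and the unattended msiexec call for PowerPivot_for_Excel.msi are all illustrative and should be adjusted for your environment.

# Check the sample-database prerequisites on the target instance
# (Full-Text Search installed, FILESTREAM access level).
Add-PSSnapin SqlServerCmdletSnapin100 -ErrorAction SilentlyContinue
$instance = "DBIA-SSQA\SQL2K8R2"   # assumed instance name

Invoke-Sqlcmd -ServerInstance $instance -Query @"
SELECT SERVERPROPERTY('IsFullTextInstalled') AS FullTextInstalled;
EXEC sp_configure 'filestream access level';  -- run_value 0 means FILESTREAM is disabled
"@

# Restore the Contoso retail sample database; the database and backup file
# names are assumptions, and WITH MOVE may be needed for your file paths.
Invoke-Sqlcmd -ServerInstance $instance -QueryTimeout 0 -Query @"
RESTORE DATABASE ContosoRetailDW
FROM DISK = N'C:\Samples\ContosoRetailDW.bak'
WITH REPLACE;
"@

# Install the PowerPivot for Excel add-in silently on the client machine
# (the download path is an assumption).
msiexec /i "C:\Downloads\PowerPivot_for_Excel.msi" /qn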
After successfully installing PowerPivot for SharePoint, you should verify the installation. Open the web browser with administrator's privileges and enter http://<machinename>, where <machinename> is the name of the server on which you performed the PowerPivot for SharePoint install. You should see a link to the PowerPivot Gallery on the left side of the page, as shown in the following screenshot.

Next, configure the content size limits:

1. Launch the SharePoint Central Administration page from Start | All Programs | Microsoft SharePoint 2010 Products | SharePoint 2010 Central Administration.
2. Click on the Manage web applications link under the Application Management heading.
3. In the Web Application General Settings dialog, navigate to the Maximum upload size value. For this recipe, choose the value 3192 MB. The default value for Maximum upload size is 50 MB; for ideal content deployment, change the upload size to a minimum of 3192 MB.
4. Navigate back to SharePoint Central Administration | Application Management | Manage Service Applications. Select ExcelServiceApp1 (Excel Services Application Web Service Application) and click Manage, then choose Trusted File Locations. Click on the http:// entry to change the Excel Services settings.
5. Navigate to the Workbook Properties section. For this recipe, choose a Maximum Workbook Size of 3192 MB. The new settings will take effect once the IIS services are restarted.

We have now successfully completed the PowerPivot for SharePoint installation, and the instance is ready to publish and share PowerPivot workbooks. To ensure that the PowerPivot for SharePoint installation is successful and that a workbook can be shared, we can test the process by publishing a PowerPivot workbook, as follows:

1. Switch to the machine with Excel and PowerPivot for Excel installed, and open a workbook.
2. Click on File | Save and Send | Save to SharePoint | Save As.
3. Enter http://<yourPowerPivotserver>/PowerPivot Gallery in the folder path of the Save As dialog, and click Save.

How it works...
Microsoft Excel is a popular Business Intelligence tool on the client side, and PowerPivot for Excel is required to present data from multiple sources. The installation process of the PowerPivot add-in on the client side is straightforward, and there is no requirement for SQL Server 2008 R2 components on the client side. Behind the scenes, the Analysis Services VertiPaq engine from PowerPivot for Excel runs 'in-process' for Excel. Connectivity to the Analysis Services data source is managed by the MDX, XMLA Source, AMO, and ADOMD.NET libraries, which in turn use the Analysis Services OLE DB provider to connect to the PowerPivot data within the workbook. On the workstation, the Analysis Services VertiPaq engine issues queries and receives data from a variety of data sources, including relational or multidimensional databases, documents, public data stores, or web services. During data import and client-side data refresh, an ATOM data feed provider is used for importing and refreshing data in the ATOM format. For connectivity to non-Microsoft data sources such as Oracle, Teradata, DB2, Sybase, SQL Azure, OLE DB, and ODBC sources, and to commonly used file sources such as Excel or flat files, we must acquire and install those drivers manually.

PowerPivot for SharePoint installs on top of SharePoint 2010, and adds services and functionality to SharePoint.
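The Maximum upload size setting described earlier in this recipe can also be applied from the SharePoint 2010 Management Shell rather than Central Administration. This is a hedged sketch, not the recipe's own method; the web application URL is an assumption, and the Excel Services workbook size is only listed here because the exact cmdlet parameter name should be verified on your build.

# Load the SharePoint cmdlets if not already running in the Management Shell.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Raise the web application's maximum upload size (the value is in MB and
# corresponds to the Maximum upload size setting in Central Administration).
$webApp = Get-SPWebApplication "http://<machinename>"
$webApp.MaximumFileSize = 3192
$webApp.Update()

# List the Excel Services trusted file locations; the Maximum Workbook Size
# for each location is still changed through Central Administration as
# described above (or with Set-SPExcelFileLocation; check Get-Help for the
# exact parameter name on your build).
Get-SPExcelServiceApplication | ForEach-Object {
    Get-SPExcelFileLocation -ExcelServiceApplication $_
}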
As we have seen, PowerPivot for Excel is an effective tool to create and edit PowerPivot applications, while PowerPivot for SharePoint provides data collaboration, sharing, and reporting. Behind the scenes, SQL Server 2008 R2 features a SharePoint 2010 integrated mode for Analysis Services, which includes the VertiPaq engine to provide in-memory data storage. It also helps in processing very large amounts of data, where high performance is accomplished through columnar storage and data compression.

PowerPivot workbooks can be quite large, and SharePoint 2010 has a default maximum file size limit of 50 MB. Depending on your enterprise storage policies, you need to change the file storage setting in SharePoint to publish and upload PowerPivot workbooks. Internally, the PowerPivot for SharePoint components, the PowerPivot System Service, and Analysis Services in VertiPaq mode provide the hosting capability for PowerPivot applications. For client connectivity and publishing, a web service component allows applications to connect to the PowerPivot workbook data from outside the SharePoint farm.

There's more...
Creating an Excel workbook that contains PowerPivot data requires both Excel 2010 and the PowerPivot for Excel add-in. After you create the workbook, you can publish it to a SharePoint Server 2010 farm that has Excel Services and a deployment of SQL Server PowerPivot for SharePoint. PowerPivot workbooks can be opened in Excel 2007; however, Excel 2007 cannot be used to create or modify PowerPivot data, or to interact with PivotTables or PivotCharts that use PowerPivot data. You must use Excel 2010 to get full access to all PowerPivot features.

article-image-core-sql-server-2008-r2-technologies-deploying-master-data-services
Packt
27 May 2011
8 min read
Save for later

Core SQL Server 2008 R2 Technologies: Deploying Master Data Services

Microsoft SQL Server 2008 R2 Administration Cookbook
Over 70 practical recipes for administering a high-performance SQL Server 2008 R2 system

Installing and configuring a Master Data Services solution
The administrative control of data is a primary task in any line-of-business management; as long as the data flow is intended to meet enterprise-level business needs, it serves the purpose. The essence of Master Data Management (MDM) becomes apparent when the data is used for operational and analytical processes; those management processes must be able to clearly define the business concepts, identify the different ways the data sets represent commonly understood concepts, and integrate the data into a consistent view that is available across the organization.

SQL Server 2008 R2 introduces Master Data Services (MDS), which is designed to provide hierarchies that can be customized to group and summarize the master data. This kind of hierarchical representation helps to change the reporting structures and incorporate new aspects of the business rules while reducing data duplication. Used well, MDS helps us maintain a central database of master data that is managed for reporting and analysis. In this recipe, we will go through the important steps to install and configure an MDS solution on an existing data platform. The core components of MDS are the MDS Configuration Manager, MDS Manager, and the MDS web service.

Getting ready
There are certain pre-installation tasks required to install Master Data Services (MDS), and the setup must meet minimum requirements. The installation requirements are divided into MDS setup, MDM web application and web services, and the MDS database.

The MDS setup prerequisites are as follows:

- MDS setup is possible only on the 64-bit versions of SQL Server 2008 R2 Datacenter, SQL Server 2008 R2 Enterprise, and SQL Server 2008 R2 Developer editions.
- The supported operating systems are the Enterprise editions of Windows Server 2008 R2 and Windows Server 2008, the Ultimate editions of Windows 7 and Windows Vista, and also the Professional editions of Windows 7 and Windows Vista.
- Microsoft .NET Framework 3.5 Service Pack 1 is required; to download it, refer to http://go.microsoft.com/fwlink/?LinkId=128220.
- The user account used to install MDS must be a member of the Administrators group on the local server.

The MDM web application and web services prerequisites are as follows:

- MDM is a web application hosted by IIS.
- On a Windows Server 2008 or Windows Server 2008 R2 machine, use Server Manager to install the Web Server (IIS) role, and refer to http://msdn.microsoft.com/en-us/library/ee633744.aspx for the required role services.

The MDS database prerequisites are as follows:

- MDS requires a database to support the MDM web application and web services.
- In this recipe, the machine that hosts the MDS database is using an instance of the SQL Server 2008 R2 database engine.

How to do it...
To implement the Master Data Management solution, MDS must be installed; it has two main components, an MDS database component and an MDS web component. Ensure that all of the prerequisites (mentioned in the earlier sections) are in place; by default, MDS is not installed as a part of the regular SQL Server 2008 R2 installation.
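Before running the separate MDS installer, the Web Server (IIS) role prerequisite can be checked or added from PowerShell on Windows Server 2008 R2. This is a minimal sketch, assuming the ServerManager module is available; the individual role services still need to be compared against the MSDN list referenced above.

# Check for, and if necessary add, the Web Server (IIS) role.
Import-Module ServerManager

$iis = Get-WindowsFeature Web-Server
if (-not $iis.Installed) {
    # Installs the base IIS role; individual role services can be added the same way.
    Add-WindowsFeature Web-Server
}

# Confirm that the .NET Framework 3.5.1 features are present as well.
Get-WindowsFeature NET-Framework | Format-Table Name, Installed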
We need to install MDS separately, as follows:

1. On the SQL Server 2008 R2 installation media, navigate to the MasterDataServices\X64\1033_ENU folder and double-click on the masterdataservices.msi file, which will present a welcome screen.
2. Similar to the SQL Server installation screens, the MDS setup requires default information on the remaining steps, which are self-explanatory. The screens presented are License Agreement, Registration Information, Feature Selection, and Ready to Install, completing the Master Data Services installation.

Now that we have installed the Master Data Services components on the server, we need to configure MDS to make it available for use. The MDM configuration is implemented in two phases: Databases and Web Configuration.

1. To launch the MDS Configuration Manager, go to Start | All Programs | SQL Server 2008 R2 | Master Data Services | Configuration Manager. The tabs on the left-hand side represent Databases and Web Configuration. The Databases page configures and stores the MDS configuration; Web Configuration covers the MDS web application and the web service for application integration.
2. Click on the Databases page to create an MDS database that will configure and store MDS objects. The database that is used or created must be on SQL Server 2008 R2. To ensure that the supplied configuration is working, click on the Test Connection button. Click Next to create a new Master Data Services database.
3. On the Create Database screen, provide the database name and collation (choose the SQL Server default collation or a Windows collation). The collation is a single selection; when we choose the SQL Server default collation, the Windows collation and other options (binary/case-sensitive) are disabled. Collation is an important consideration when Unicode data is stored and managed within the database, so it is essential to plan proper collation settings during the SQL Server installation.
4. The remaining two options, Service Account and Administrator Account, are self-explanatory. The user name specified on the Service Account screen is added as a member of the dbo role for the DBIASSQA_MDS_DB database. The user name specified on the Administrator Account screen will be the site administrator (super user) and will also have permissions to log on and add users to MDM applications. The account must be a domain account, which is required for the application pools to connect to the database; the database is then accessed through the MDS Manager site and web services based on those permissions.
5. The Summary screen is an informative window that helps to review the details; click Next to start the configuration. The status for the MDS database creation and configuration will be shown as Success. Click Finish to complete the Create Database wizard.
6. To hold the metadata for the MDS repository, proceed to the system settings in the Master Data Services Configuration Manager. It is ideal to leave all the System Settings at their defaults except the Master Data Manager URL for notifications value.
7. The MDS Configuration Manager work for Databases is complete, so we will now proceed to the Web Configuration tab. Click on the Web Configuration option, which will present the following screenshot.
The Web Configuration page provides various options, such as:

- Select the required website (from the Web site drop-down list) and a web application that already holds MDS and can be configured.
- Select a required website that does not have MDS and click on Create Application.
- Create a new website by clicking on Create Site (specific to MDS only), which automatically creates a web application.

To choose an existing SQL Server instance and database for the MDS web application to access, click on the Select button. For this recipe, we will use an existing website, DBIASSQA-MDS, so select the existing SQL Server instance DBIA-SSQA\SQL2K8R2 and the database DBIASSQA_MDS_DB. To configure the web services to enable programmatic access to MDS, click on the Enable Web services for this Web application option. Click on Apply to complete the MDS configuration.

This completes the configuration and presents a popup indicating that the MDS configuration is complete. On the Configuration Complete screen, choose the Launch Web application in browser option and click OK to open the MDS Getting Started page in the browser. This completes the configuration of the Master Data Services database and web configuration, enabling you to use the MDS solution on an existing data platform.

How it works...
The Master Data Services setup hosts the MDS web application, installs the relevant MDS folders and files at the installation location, and assigns permissions to those objects. The setup registers the MDS assemblies in the Global Assembly Cache (GAC), registers the MDS snap-in for Windows PowerShell, and installs the MDS Configuration Manager. A new Windows group called MDS_ServiceAccount is created to contain the MDS service accounts for application pools. The MDS installation path contains a folder, MDSTempDir, where temporary compilation files are generated for the MDM web application, and permissions on it are assigned to the MDS_ServiceAccount group.

The web application configuration section follows a different workflow method by including the Create Site option: you specify settings for the new site, and the MDS web application is configured as the root web application in that site. This process will not allow you to configure the MDS web application under any virtual path or specify an alias for the application. The Create New Application process, by contrast, enables you to select a website in which to create an MDS application, and the Create Application screen allows you to specify settings for the new application. The MDS application will be configured as an application in the selected site at the specified virtual path and alias.
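To confirm the result without opening a browser, the MDS site and the Windows group created by setup can be checked from PowerShell. This is a minimal sketch; the URL is an assumption and should match the site you configured above.

# Request the MDS Getting Started page and report the outcome.
$url = "http://<machinename>"   # assumed MDS web application URL
$client = New-Object System.Net.WebClient
$client.UseDefaultCredentials = $true
try {
    $null = $client.DownloadString($url)
    Write-Output "MDS web application responded at $url"
}
catch {
    Write-Warning "MDS web application did not respond: $($_.Exception.Message)"
}

# Verify that setup created the MDS_ServiceAccount local group.
net localgroup MDS_ServiceAccount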