
How-To Tutorials - Servers

95 Articles

Acting as a proxy (HttpProxyModule)

Packt
24 Dec 2013
9 min read
(For more resources related to this topic, see here.)

The HttpProxyModule allows Nginx to act as a proxy and pass requests to another server:

location / {
  proxy_pass        http://app.localhost:8000;
}

Note that when using the HttpProxyModule (or even when using FastCGI), the entire client request will be buffered in Nginx before being passed on to the proxy server.

Explaining directives

Some of the important directives of the HttpProxyModule are as follows.

proxy_pass

The proxy_pass directive sets the address of the proxy server and the URI to which the location will be mapped. The address may be given as a hostname or an address and port, for example:

proxy_pass http://localhost:8000/uri/;

Or, the address may be given as a UNIX socket path:

proxy_pass http://unix:/path/to/backend.socket:/uri/;

The path is given between two colons, after the word unix.

You can use the proxy_set_header directive to forward headers from the client request to the proxied server:

proxy_set_header Host $host;

While passing requests, Nginx replaces the location part of the URI with the one specified by the proxy_pass directive. If the URI is changed inside the proxied location by the rewrite directive, this configuration will be used to process the request. For example:

location  /name/ {
  rewrite      /name/([^/]+)  /users?name=$1  break;
  proxy_pass   http://127.0.0.1;
}

A request URI is passed to the proxy server after normalization as follows: double slashes are replaced by a single slash, any references to the current directory ("./") are removed, and any references to the previous directory ("../") are removed.

If proxy_pass is specified without a URI (for example, in "http://example.com/request", /request is the URI part), the request URI is passed to the server in the same form as sent by the client:

location /some/path/ {
  proxy_pass http://127.0.0.1;
}

If you need the proxy connection to an upstream server group to use SSL, your proxy_pass rule should use https:// and you will also have to set your SSL port explicitly in the upstream definition. For example:

upstream https-backend {
  server 10.220.129.20:443;
}

server {
  listen 10.220.129.1:443;
  location / {
    proxy_pass https://https-backend;
  }
}

proxy_pass_header

The proxy_pass_header directive allows passing header lines from the proxied server that are otherwise not transferred in the response. For example:

location / {
  proxy_pass_header X-Accel-Redirect;
}

proxy_connect_timeout

The proxy_connect_timeout directive sets a connection timeout to the upstream server. You can't set this timeout value to be more than 75 seconds. Please remember that this is not the response timeout, but only a connection timeout; the time until the server returns the page is configured through the proxy_read_timeout directive. If your upstream server is up but hanging, this statement will not help, as the connection to the server has already been made.
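To make the distinction between the connection timeout and the response timeout concrete, the following minimal sketch combines proxy_connect_timeout with the separate read and send timeouts (the backend address and the values shown are purely illustrative):

location / {
  proxy_pass            http://app.localhost:8000;
  # Give up quickly if the backend cannot be reached at all
  # (this value may not exceed 75 seconds).
  proxy_connect_timeout 5s;
  # These govern how long Nginx waits while sending the request and
  # reading the response once the connection has been established.
  proxy_send_timeout    30s;
  proxy_read_timeout    30s;
}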
proxy_next_upstream

The proxy_next_upstream directive determines in which cases the request will be transmitted to the next server:

error: An error occurred while connecting to the server, sending a request to it, or reading its response
timeout: A timeout occurred during the connection with the server, while transferring the request, or while reading the response from the server
invalid_header: The server returned an empty or incorrect response
http_500: The server responded with code 500
http_502: The server responded with code 502
http_503: The server responded with code 503
http_504: The server responded with code 504
http_404: The server responded with code 404
off: Disables request forwarding

Transferring the request to the next server is only possible if there is an error sending the request to one of the servers. If sending the request was interrupted due to an error or some other reason, the transfer of the request will not take place.

proxy_redirect

The proxy_redirect directive allows you to manipulate HTTP redirection by replacing text in the response from the upstream server. Specifically, it replaces text in the Location and Refresh headers. The HTTP Location header field is returned in the response from a proxied server for the following reasons: to indicate that a resource has moved temporarily or permanently, or to provide information about the location of a newly created resource (this could be the result of an HTTP PUT).

Let us suppose that the proxied server returned the following:

Location: http://localhost:8080/images/new_folder

If you have the proxy_redirect directive set to the following:

proxy_redirect http://localhost:8080/images/ http://xyz/;

The Location text will be rewritten to be similar to the following:

Location: http://xyz/new_folder

It is possible to use some variables in the redirected address:

proxy_redirect http://localhost:8000/ http://$location:8000;

You can also use regular expressions in this directive:

proxy_redirect ~^(http://[^:]+):\d+(/.+)$ $1$2;

The value off disables all the proxy_redirect directives at its level:

proxy_redirect off;

proxy_set_header

The proxy_set_header directive allows you to redefine and add new HTTP headers to the request sent to the proxied server. You can use a combination of static text and variables as the value of the proxy_set_header directive. By default, the following two headers will be redefined:

proxy_set_header Host $proxy_host;
proxy_set_header Connection close;

You can forward the original Host header value to the server as follows:

proxy_set_header Host $http_host;

However, if this header is absent in the client request, nothing will be transferred. It is better to use the variable $host; its value is equal to the request header Host, or to the basic name of the server in case the header is absent from the client request:

proxy_set_header Host $host;

You can transmit the name of the server together with the port of the proxied server:

proxy_set_header Host $host:$proxy_port;

If you set the value to an empty string, the header is not passed to the upstream proxied server. For example, if you want to disable gzip compression on the upstream, you can do the following:

proxy_set_header  Accept-Encoding  "";
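Putting several of these request-passing directives together, a minimal sketch of a proxied location backed by an upstream group might look like the following (the upstream name, server addresses, and the chosen status codes are illustrative):

upstream app_servers {
  server 10.0.0.11:8000;
  server 10.0.0.12:8000;
}

server {
  listen 80;
  location / {
    proxy_pass          http://app_servers;
    # Try the other server on connection errors, timeouts, or bad gateways.
    proxy_next_upstream error timeout http_502 http_503;
    # Leave the Location and Refresh headers from the backend untouched.
    proxy_redirect      off;
    proxy_set_header    Host $host;
    proxy_set_header    X-Real-IP $remote_addr;
  }
}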
proxy_store

The proxy_store directive sets the path in which upstream files are stored, with paths corresponding to the directives alias or root. The off directive value disables local file storage. Please note that proxy_store is different from proxy_cache; it is just a method to store proxied files on disk. It may be used to construct cache-like setups (usually involving error_page-based fallback). The proxy_store directive is off by default. The value can contain a mix of static strings and variables:

proxy_store   /data/www$uri;

The modification date of the file will be set to the value of the Last-Modified header in the response. A response is first written to a temporary file in the path specified by proxy_temp_path and then renamed. It is recommended to keep this temporary path and the path where files are stored on the same file system, to make sure it is a simple rename instead of creating two copies of the file. Example:

location /images/ {
  root                 /data/www;
  error_page           404 = /fetch;
}

location /fetch {
  internal;
  proxy_pass           http://backend;
  proxy_store          on;
  proxy_store_access   user:rw  group:rw  all:r;
  proxy_temp_path      /data/temp;
  alias                /data/www;
}

In this example, proxy_store_access defines the access rights of the created file. In the case of a 404 error, the internal fetch location proxies to the remote server, writes the response to the /data/temp folder first, and then stores the local copy under /data/www.

proxy_cache

The proxy_cache directive either turns off caching when you use the value off, or sets the name of the cache. This name can then be used subsequently in other places as well. Let's look at the following example to enable caching on the Nginx server:

http {
  proxy_cache_path  /var/www/cache levels=1:2 keys_zone=my-cache:8m max_size=1000m inactive=600m;
  proxy_temp_path /var/www/cache/tmp;
  server {
    location / {
      proxy_pass http://example.net;
      proxy_cache my-cache;
      proxy_cache_valid  200 302  60m;
      proxy_cache_valid  404      1m;
    }
  }
}

The previous example creates a named cache called my-cache. It sets up the validity of the cache for response codes 200 and 302 to 60m, and for 404 to 1m, respectively. The cached data is stored in the /var/www/cache folder. The levels parameter sets the number of subdirectory levels in the cache; you can define up to three levels. The keys_zone parameter names the cache zone, and the inactive parameter controls expiry: all the inactive items in my-cache will be purged after 600m. The default value for the inactive interval is 10 minutes.

Chapter 5 of the book, Creating Your Own Module, is inspired by the work of Mr. Evan Miller, which can be found at http://www.evanmiller.org/nginx-modules-guide.html.

Summary

In this article we looked at several standard HTTP modules. These modules provide a very rich set of functionality by default. You can disable these modules at configuration time if you wish; otherwise, they will be installed by default. The list of modules and their directives in this chapter is by no means exhaustive. Nginx's online documentation can provide you with more details.

Resources for Article:

Introduction to nginx [Article]
Nginx Web Services: Configuration and Implementation [Article]
Using Nginx as a Reverse Proxy [Article]


Installing Apache Karaf

Packt
31 Oct 2013
7 min read
Before Apache Karaf can provide you with an OSGi-based container runtime, we'll have to set up our environment first. The process is quick, requiring a minimum of integration work beyond a normal Java setup. In this article we'll review:

The prerequisites for Apache Karaf
Obtaining Apache Karaf
Installing Apache Karaf and running it for the first time

Prerequisites

As a lightweight container, Apache Karaf has sparse system requirements. You will need to check that you have all of the below specifications met or exceeded:

Operating System: Apache Karaf runs on recent versions of Windows, AIX, Solaris, HP-UX, and various Linux distributions (RedHat, Suse, Ubuntu, and so on).
Disk space: It requires at least 20 MB of free disk space. You will require more free space as additional resources are provisioned into the container. As a rule of thumb, you should plan to allocate 100 to 1000 MB of disk space for logging, the bundle cache, and the repository.
Memory: At least 128 MB of memory is required; however, more than 2 GB is recommended.
Java Runtime Environment (JRE): A runtime environment such as JRE 1.6 or JRE 1.7 is required. The location of the JRE should be made available via the JAVA_HOME environment setting.

At the time of writing, Java 1.6 is "end of life". For our demos we'll use Apache Maven 3.0.x and Java SDK 1.7.x; these tools should be obtained for future use. However, they will not be necessary to operate the base Karaf installation. Before attempting to build the demos, please set the MAVEN_HOME environment variable to point towards your Apache Maven distribution.

After verifying you have the above prerequisite hardware, operating system, JVM, and other software packages, you will have to set up your environment variables for JAVA_HOME and MAVEN_HOME. Both of these will be added to the system PATH.

Setting up JAVA_HOME Environment Variable

Apache Karaf honors the setting of JAVA_HOME in the system environment; if this is not set, it will pick up and use Java from the PATH. For users unfamiliar with setting environment variables, the following batch setup script will set up your Windows environment:

@echo off
REM execute setup.bat to setup environment variables.
set JAVA_HOME=C:\Program Files\Java\jdk1.6.0_31
set MAVEN_HOME=c:\x1\apache-maven-3.0.4
set PATH=%JAVA_HOME%\bin;%MAVEN_HOME%\bin;%PATH%
echo %PATH%

The script creates and sets the JAVA_HOME and MAVEN_HOME variables to point to their local installation directories, and then adds their values to the system PATH. The initial echo off directive reduces console output as the script executes; the final echo command prints the value of PATH.

Managing Windows System Environment Variables

Windows environment settings can be managed via the System Properties control panel. Access to these controls varies according to the Windows release.

Conversely, in a Unix-like environment, a script similar to the following one will set up your environment:

# execute setup.sh to setup environment variables.
JAVA_HOME=/path/to/jdk1.6.0_31
MAVEN_HOME=/path/to/apache-maven-3.0.4
PATH=$JAVA_HOME/bin:$MAVEN_HOME/bin:$PATH
export PATH JAVA_HOME MAVEN_HOME
echo $PATH

The first two directives create and set the JAVA_HOME and MAVEN_HOME environment variables, respectively. These values are added to the PATH setting, and then made available to the environment via the export command.
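Before moving on, it is worth confirming that the variables really are in place; a quick check on a Unix-like system (the expected versions follow from the prerequisites above) might look like this:

echo $JAVA_HOME
echo $MAVEN_HOME
# Both tools should now resolve from the updated PATH.
java -version    # expect a 1.6 or 1.7 JVM
mvn -version     # expect Apache Maven 3.0.x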
Obtaining Apache Karaf distribution

As an Apache open source project, Apache Karaf is made available in both binary and source distributions. The binary distribution comes in a Linux-friendly, GNU-compressed archive and in Windows ZIP format. Your selection of distribution kit will affect which set of scripts is available in Karaf's bin folder. So, if you're using Windows, select the ZIP file; on Unix-like systems, choose the tar.gz file.

Apache Karaf distributions may be obtained from http://karaf.apache.org/index/community/download.html. The following screenshot shows this link:

The primary download site for Apache Karaf provides a list of available mirror sites; it is advisable that you select a server nearer to your location for faster downloads. For the purposes of this article, we will be focusing on Apache Karaf 2.3.x, with notes on the 3.0.x release series.

Apache Karaf 2.3.x versus 3.0.x series

The major difference between the Apache Karaf 2.3 and 3.0 lines is the core OSGi specification supported. Karaf 2.3 utilizes OSGi rev4.3, while Karaf 3.0 uses rev5.0. Karaf 3 also introduces several command name changes. There are a multitude of other internal differences between the code bases, and wherever appropriate, we'll highlight those changes that impact users throughout this text.

Installing Apache Karaf

The installation of Apache Karaf only requires you to extract the tar.gz or .zip file in your desired target folder. The following command is used on Windows:

unzip apache-karaf-<version>.zip

The following command is used on Unix:

tar -zxf apache-karaf-<version>.tar.gz

After extraction, the following folder structure will be present:

The LICENSE, NOTICE, README, and RELEASE-NOTES files are plain text artifacts contained in each Karaf distribution. The RELEASE-NOTES file is of particular interest, as upon each major and minor release of Karaf, this file is updated with a list of changes.

The bin folder contains the Karaf scripts for the interactive shell (karaf), starting and stopping the background Karaf service, a client for connecting to running Karaf instances, and additional utilities.

The data folder is home to Karaf's logfiles, bundle cache, and various other persistent data.

The demos folder contains an assortment of sample projects for Karaf. It is advisable that new users explore these examples to gain familiarity with the system. For the purposes of this book we strived to create new sample projects to augment those existing in the distribution.

The instances folder will be created when you use Karaf child instances. It stores the child instance folders and files.

The deploy folder is monitored for hot deployment of artifacts into the running container.

The etc folder contains the base configuration files of Karaf; it is also monitored for dynamic configuration updates to the configuration admin service in the running container.

An HTML and PDF format copy of the Karaf manual is included in each kit.

The lib folder contains the core libraries required for Karaf to boot upon a JVM.

The system folder contains a simple repository of the dependencies Karaf requires for operating at runtime. This repository has each library jar saved under a Maven-style directory structure, consisting of the library's Maven group ID, artifact ID, version, artifact ID-version, any classifier, and extension.
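Before the first boot, it can be useful to glance at the extracted kit and skim the RELEASE-NOTES file mentioned above; for example, on a Unix-like system (the installation path and version number here are assumptions):

ls /opt/apache-karaf-2.3.3
# Skim the list of changes for this release.
head -n 40 /opt/apache-karaf-2.3.3/RELEASE-NOTES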
First boot!

After extracting the Apache Karaf distribution kit and setting our environment variables, we are now ready to start up the container. The container can be started by invoking the Karaf script provided in the bin directory.

On Windows, use the following command:

bin\karaf.bat

On Unix, use the following command:

./bin/karaf

The following image shows the first boot screen:

Congratulations, you have successfully booted Apache Karaf! To stop the container, issue the following command in the console:

karaf@root> shutdown -f

The inclusion of the -f or --force flag to the shutdown command instructs Karaf to skip asking for confirmation of container shutdown. Pressing Ctrl + D will shut down Karaf when you are on the shell; however, if you are connected remotely (using SSH), this action will just log off the SSH session, it won't shut down Karaf.

Summary

We have discovered the prerequisites for installing Karaf, which distribution to obtain, how to install the container, and finally how to start it.

Resources for Article:

Further resources on this subject:
Apache Felix Gogo [Article]
WordPress 3 Security: Apache Modules [Article]
Configuring Apache and Nginx [Article]


Linux Desktop Environments

Packt
29 Oct 2013
7 min read
(For more resources related to this topic, see here.) A computer desktop is normally composed of windows, icons, directories/folders, a toolbar, and some artwork. A window manager handles what the user sees and the tasks that are performed. A desktop is also sometimes referred to as a graphical user interface (GUI). There are many different desktops available for Linux systems. Here is an overview of some of the more common ones. GNOME 2 GNOME 2 is a desktop environment and GUI that is developed mainly by Red Hat, Inc. It provides a very powerful and conventional desktop interface. There is a launcher menu for quicker access to applications, and also taskbars (called panels). Note that in most cases these can be located on the screen where the user desires. The screenshot of GNOME 2 running on Fedora 14 is as follows: This shows the desktop, a command window, and the Computer folder. The top and bottom "rows" are the panels. From the top, starting on the left, are the Applications, Places, and System menus. I then have a screensaver, the Firefox browser, a terminal, Evolution, and a Notepad. In the middle is the lock-screen app, and on the far right is a notification about updates, the volume control, Wi-Fi strength, battery level, the date/time, and the current user. Note that I have customized several of these, for example, the clock. Getting ready If you have a computer running the GNOME 2 desktop, you may follow along in this section. A good way to do this is by running a Live Image, available from many different Linux distributions. The screenshot showing the Add to Panel window is as follows: How to do it... Let's work with this desktop a bit: Bring this dialog up by right-clicking on an empty location on the task bar. Let's add something cool. Scroll down until you see Weather Report, click on it and then click on the Add button at the bottom. On the panel you should now see something like 0 °F. Right-click on it. This will bring up a dialog, select Preferences. You are now on the General tab. Feel free to change anything here you want, then select the Location tab, and put in your information. When done, close the dialog. On my system the correct information was displayed instantly. Now let's add something else that is even more cool. Open the Add to Panel dialog again and this time add Workspace Switcher. The default number of workspaces is two, I would suggest adding two more. When done, close the dialog. You will now see four little boxes on the bottom right of your screen. Clicking on one takes you to that workspace. This is a very handy feature of GNOME 2. There's more... I find GNOME 2 very intuitive and easy to use. It is powerful and can be customized extensively. It does have a few drawbacks, however. It tends to be somewhat "heavy" and may not perform well on less powerful machines. It also does not always report errors properly. For example, using Firefox open a local file that does not exist on your system (that is, file:///tmp/LinuxBook.doc). A File Not Found dialog should appear. Now try opening another local file that does exist, but which you do not have permissions for. It does not report an error, and in fact doesn't seem to do anything. Remember this if it happens to you. KDE desktop The KDE desktop was designed for desktop PCs and powerful laptops. It allows for extensive customization and is available on many different platforms. The following is a description of some of its features. 
Getting ready If you have a Linux machine running the KDE desktop you can follow along. These screenshots are from KDE running on a Live Media image of Fedora 18. The desktop icon on the far right allows the user to access Tool Box: You can add panels, widgets, activities, shortcuts, lock the screen, and add a lot more using this dialog. The default panel on the bottom begins with a Fedora icon. This icon is called a Kickoff Application Launcher and allows the user to access certain items quickly. These include Favorites, Applications, a Computer folder, a Recently Used folder, and a Leave button. If you click on the next icon it will bring up the Activity Manager. Here you can create the activities and monitor them. The next icon allows you to select which desktop is currently in the foreground, and the next items are the windows that are currently open. Over to the far right is the Clipboard. Here is a screenshot of the clipboard menu: Next is the volume control, device notifier, and networking status. Here is a screenshot of Interfaces and Connections dialog: Lastly, there is a button to show the hidden icons and the time. How to do it... Let's add a few things to this desktop: We should add a console. Right-click on an empty space on the desktop. A dialog will come up with several options; select Konsole. You should now have a terminal. Close that dialog by clicking on some empty space. Now let's add some more desktops. Right-click on the third icon on the bottom left of the screen. A dialog will appear, click on Add Virtual Desktop. I personally like four of these. Now let's add something to the panel. Right-click on some empty space on the panel and hover the mouse over Panel Options; click on AddWidgets. You will be presented with a few widgets. Note that the list can be scrolled to see a whole lot more. For example, scroll over to Web Browser and double-click on it. The web browser icon will appear on the panel near the time. There's more... You can obviously do quite a bit of customization using the KDE desktop. I would suggest trying out all of the various options, to see which ones you like the best. KDE is actually a large community of open source developers, of which KDE Plasma desktop is a part. This desktop is probably the heaviest of the ones reviewed, but also one of the most powerful. I believe this is a good choice for people who need a very elaborate desktop environment. xfce xfce is another desktop environment for Linux and UNIX systems. It tends to run very crisply and not use as many system resources. It is very intuitive and user-friendly. Getting ready The following is a screenshot of xfce running on the Linux machine I am using to write this article: If you have a machine running the xfce desktop, you can perform these actions. I recommend a Live Media image from the Fedora web page. While somewhat similar to GNOME 2, the layout is somewhat different. Starting with the panel on the top (panel 1) is the Applications Menu, following by a LogOut dialog. The currently open windows are next. Clicking on one of these will either bring it up or minimize it depending on its current state. The next item is the Workspaces of which I have four, then the Internet status. To complete the list is the volume and mixer apps and the date and time. The screen contents are mostly self-explanatory; I have three terminal windows open and the File Manager folder. The smaller panel on the bottom of the screen is called panel 2. How to do it... 
Let's work with the panels a bit:

In order to change panel 2 we must unlock it first. Right-click on the top panel, and go to Panel | Panel Preferences. Use the arrows to change to panel 2. See the screenshot below:

You can see it is locked. Click on Lock panel to unlock it and then close this dialog.

Now go to panel 2 (on the bottom) and right-click on one of the sides. Click on Add New Items.... Add the applications you desire.

There's more...

This is by no means an exhaustive list of what xfce can do. The features are modular and can be added as needed. See http://www.xfce.org for more information.


Let's Breakdown the Numbers

Packt
24 Oct 2013
8 min read
(For more resources related to this topic, see here.) John Kirkland is an awesome "accidental" SQL Server DBA for Red Speed Bicycle LLC—a growing bicycle startup based in the United States. The company distributes bikes, bicycle parts, and accessories to various distribution points around the world. To say that they are performing well financially is an understatement. They are booming! They've been expanding their business to Canada, Australia, France, and the United Kingdom in the last three years. The company has upgraded their SQL Server 2000 database recently to the latest version of SQL Server 2012. Linda, from the Finance Group, asked John if they can migrate their Microsoft Access Reports into the SQL Server 2012 Reporting Services. John installed SSRS 2012 in a native mode. He decided to build the reports from the ground up so that the report development process would not interrupt the operation in the Finance Group. There is only one caveat; John has never authored any reports in SQL Server Reporting Services (SSRS) before. Let's give John a hand and help him build his reports from the ground up. Then, we'll see more of his SSRS adventures as we follow his journey throughout this article. Here's the first report requirement for John: a simple table that shows all the sales transactions in their database. Linda wants to see a report with the following data: Date Sales Order ID Category Subcategory Product Name Unit Price Quantity Line Total We will build our report, and all succeeding reports in this article, using the SQL Server Data Tools (SSDT). SSDT is Visual Studio shell which is an integrated environment used to build SQL Server database objects. You can install SSDT from the SQL Server installation media. In June 2013, Microsoft released SQL Server Data Tools-Business Intelligence (SSDTBI). SSDTBI is a component that contains templates for SQL Server Analysis Services (SSAS), SQL Server Integration Services (SSIS), and SQL Server Reporting Services (SSRS) for Visual Studio 2012. SSDTBI replaced Business Intelligence Development Studio (BIDS) from the previous versions of SQL Server. You have two options in creating your SSRS reports: SSDT or Visual Studio 2012. If you use Visual Studio, you have to install the SSDTBI templates. Let's create a new solution and name it SSRS2012Blueprints. For the following exercises, we're using SSRS 2012 in native mode. Also, make a note that we're using the AdventureWorks2012 Sample database all throughout this article unless otherwise indicated. You can download the sample database from CodePlex. Here's the link: http://msftdbprodsamples.codeplex.com/releases/view/55330. Defining a data source for the project Now, let's define a shared data source and shared dataset for the first report. A shared dataset and data source can be shared among the reports within the project: Right-click on the Shared Data Sources folder under the SSRS2012Bueprints solution in the Solution Explorer window, as shown in the following illustration. If the Solution Explorer window is not visible, access it by navigating to Menu | View | Solution Explorer, or press Ctrl + Alt + L: Select Add New Data Source which displays the Shared Data Source Properties window. Let's name our data source DS_SSRS2012Blueprint. For this demonstration, let's use the wizard to create the connection string. As a good practice, I use the wizard for setting up connection strings for my data connections. 
Aside from convenience, I'm quite confident that I'm getting the right connections that I want. Another option for setting the connection is through the Connection Properties dialog box, as shown in the next screenshot. Clicking on the Edit button next to the connection string box displays the Connection Properties dialog box: Shared versus embedded data sources and datasets: as a good practice, always use shared data sources and shared datasets where appropriate. One characteristic of a productive development project is using reusable objects as much as possible. For the connection, one option is to manually specify the connection string as shown: Data Source=localhost;Initial Catalog=AdventureWorks2012 We may find this option as a convenient way of creating our data connections. But if you're new to the report environment you're currently working on, you may find setting up the connection string manually more cumbersome than setting it up through the wizard. Always test the connection before saving your data source. After testing, click on the OK buttons on both dialog boxes. Defining the dataset for the project Our next step is to create the shared dataset for the project. Before doing that, let's create a stored procedure named dbo.uspSalesDetails. This is going to be the query for our dataset. Download the T-SQL codes included in this article if you haven't done so already. We're going to use the T-SQL file named uspSalesDetails_Ch01.sql for this article. We will use the same stored procedure for this whole article, unless otherwise indicated. Right-click on the Shared Datasets folder in Solution Explorer, just like we did when we created the data source. That displays the Shared Datasets Properties dialog. Let's name our dataset ds_SalesDetailReport. We use the query type stored procedure, and select or type uspSalesDetails on the Select or enter stored procedure name drop-down combo box. Click on OK when you're done: Before we work on the report itself, let's examine our dataset. In the Solution Explorer window, double-click on the dataset ds_SalesDetailReport.rsd, which displays the Shared Dataset Properties dialog box. Notice that the fields returned by our stored procedure have been automatically detected by the report designer. You can rename the field as shown: Ad-hoc Query (Text Query Type) versus Stored Procedure: as a good practice, always use a stored procedure where a query is used. The primary reason for this is that a stored procedure is compiled into a single execution plan. Using stored procedures will also allow you to modify certain elements of your reports without modifying the actual report. Creating the report file Now, we're almost ready to build our first report. We will create our report by building it from scratch by performing the following steps: Going back to the Solution Explorer window, right-click on the Reports folder. Please take note that selecting the Add New Report option will initialize Report Wizard. Use the wizard to build simple tabular or matrix reports. Go ahead if you want to try the wizard but for the purpose of our demonstration, we'll skip the wizard. Select Add, instead of Add New Report, then select New Item: Selecting New Item displays the Add New Item dialog box as shown in the following screenshot. Choose the Report template (default report template) in the template window. Name the report SalesDetailsReport.rdl. 
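The actual uspSalesDetails procedure ships with the article's code download (uspSalesDetails_Ch01.sql). Purely as an illustration of the kind of parameterized query that sits behind such a dataset, a sketch against AdventureWorks2012 might look like the following; the parameter names and the exact joins are assumptions, not the book's script:

-- Hypothetical sketch only; the real script is uspSalesDetails_Ch01.sql
-- from the code download.
CREATE PROCEDURE dbo.uspSalesDetails
    @StartDate DATE,   -- assumed parameter
    @EndDate   DATE    -- assumed parameter
AS
BEGIN
    SET NOCOUNT ON;

    SELECT  soh.OrderDate    AS [Date],
            sod.SalesOrderID AS [Sales Order ID],
            pc.Name          AS Category,
            psc.Name         AS Subcategory,
            p.Name           AS [Product Name],
            sod.UnitPrice    AS [Unit Price],
            sod.OrderQty     AS Quantity,
            sod.LineTotal    AS [Line Total]
    FROM    Sales.SalesOrderDetail AS sod
            INNER JOIN Sales.SalesOrderHeader AS soh
                ON soh.SalesOrderID = sod.SalesOrderID
            INNER JOIN Production.Product AS p
                ON p.ProductID = sod.ProductID
            INNER JOIN Production.ProductSubcategory AS psc
                ON psc.ProductSubcategoryID = p.ProductSubcategoryID
            INNER JOIN Production.ProductCategory AS pc
                ON pc.ProductCategoryID = psc.ProductCategoryID
    WHERE   soh.OrderDate BETWEEN @StartDate AND @EndDate;
END

Note that the inner joins in this sketch silently drop products that have no subcategory; whether that is acceptable depends on the report requirement, which is one more reason to validate the query in SSMS before wiring it to the dataset.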
Click on the Add button to add the report to our project: Clicking on the Add button displays the empty report in the report designer. It looks similar to the following screenshot: Creating a parameterized report You may have noticed that the stored procedure we created for the shared dataset is parameterized. It has the following parameters: It's a good practice to test all the queries on the database just to make sure we get the datasets that we need. Doing so will eliminate a lot of data quality issues during report execution. This is also the best time to validate all our data. We want our report consumers to have the correct data that is needed for making critical decisions. Let's execute the stored procedure in SQL Server Management Studio (SSMS) and take a look at the execution output. We want to make sure that we're getting the results that we want to have on the report. Now, we add a dataset to our report based on the shared dataset that we had previously created: Right-click on the Datasets folder in the Report Data window. If it's not open, you can open it by navigating to Menu | View | Report Data, or press Ctrl + Alt + D: Selecting Add Dataset displays the Dataset Properties. Let's name our report dataset tblSalesReport. We will use this dataset as the underlying data for the table element that we will create to hold our report data. Indicate that we want to use a shared dataset. A list of the project shared datasets is displayed. We only have one at this point, which is the ds_SalesDetailsReport. Let's select that one, then click on OK. Going back to the Report Data window, you may notice that we now have more objects under the Parameters and Datasets folders. Switch to the Toolbox window. If you don't see it, then go to Menu | View | Toolbox, or press Ctrl + Alt + X. Double-click or drag a table to the empty surface of the designer. Let's add more columns to the table to accommodate all eight dataset fields. Click on the table, then right-click on the bar on the last column and select Insert Column | Right. To add data to the report, let's drag each element from the dataset to their own cell at the table data region. There are three data regions in SSRS: table, matrix, and list. In SSRS 2012, a fourth data region has been added but you can't see that listed anywhere. It's called tablix. Tablix is not shown as an option because it is built into those three data regions. What we're doing in the preceding screenshot is essentially dragging data into the underlying tablix data region. But how can I add my parameters into the report? you may ask. Well, let's switch to the Preview tab. We should now see our parameters already built into the report because we specified them in our stored procedure. Our report should look similar to the following screenshot:


Choosing the right flavor of Debian (Simple)

Packt
08 Oct 2013
7 min read
(For more resources related to this topic, see here.) Getting ready At any point in time, Debian has three different branches available for use: stable, testing, and unstable. Think of unstable as the cutting edge of free software; it has reasonably modern software packages, and sometimes those packages introduce changes or features that may break the user experience. After an amount of time has passed (usually 10 days, but it depends on the package's upload priority), the new software is considered to be relatively safe to use and is moved to testing. Testing can provide a good balance between modern software and relatively reliable software. Testing goes through several iterations during the course of several years, and eventually it's frozen for a new stable release. This stable release is supported by the Debian Project for a number of years, including feature and security updates. Chances are you are building something that has an interesting team of people to back it up. In such scenarios, web development teams have chosen to go with testing, or even unstable, in order to get the latest software available. In other cases, conservative teams or groups with less savvy staff have resorted to stable because it's consistent for years. It is up to you to choose between any, but this book will get you started with stable. You can change your Advanced Packaging Tool (APT ) configuration later and upgrade to testing and unstable, but the initial installation media that we will use will be stable. Also, it is important that developers target the production environment as closely as possible. If you use stable for production, using stable for development will save a lot of time debugging mismatches. You should know which versions of programming languages, modules, libraries, frameworks, and databases your application will be targeting, as this will influence the selection of your branch. You can go to packages.debian.org to check the versions available for a specific package across different branches. Choosing testing (outside a freeze period) and unstable will also mean that you'll need to have an upgrade strategy where you continuously check for new updates (with tools such as cron-apt) and install them if you want to take advantage of new bug fixes and so on. How to do it… Debian offers a plethora of installation methods for the operating system. From standard CDs and DVDs, Debian also offers reduced-size installation media, bootable USB images, network boot, and other methods. The complexity of installation is a relative factor that usually is of no concern for DevOps since installation only happens once, while configuration and administration are continuously happening. Before you start considering replication methods (such as precooked images, network distribution, configuration management, and software delivery), you and your team can choose from the following installation methods: If you are installing Debian on a third-party provider (such as a cloud vendor), they will either provide a Debian image for you, or you can prepare your own in virtualization software and upload the disk later. If you are installing on your own hardware (including virtualized environments), it's advisable to get either the netinst ISO or the full first DVD ISO. 
It all depends on whether you are installing several servers over the course of several months (thus making the DVD obsolete as new updates come out) or have a good Internet connection (or proxies and caching facilities, nearby CDNs, and so on) for downloading any additional packages that the netinst disk might not contain. In general, if you are only deploying a handful of servers and have a good Internet connection at hand, I'd suggest you choose the amd64 netinst ISO, which we will use in this book. There's more… There are several other points that you need to consider while choosing the right flavor of Debian. One of them is the architecture you're using and targeting for development. Architectures There are tens of computer architectures available in the market. ARM, Intel, AMD, SPARC, and Alpha are all different types of architectures. Debian uses the architecture codenames i386 and amd64 for historical reasons. i386 actually means an Intel or Intel-compatible, 32-bit processor (x86), while amd64 means an Intel or Intel-compatible, 64-bit processor (x86_64). The brand of the processor is irrelevant. A few years ago, choosing between the two was tricky as some binary-only, non-free libraries and software were not always available for 64-bit processors, and architecture mismatches happened. While there were workarounds (such as running a 32-bit-only software using special libraries), it was basically a matter of time until popular software such as Flash caught up with 64-bit versions—thus, the concern was mainly about laptops and desktops. Nowadays, if your CPU (and/or your hypervisor) has 64-bit capabilities (most Intel do), it's considered a good practice to use the amd64 architecture. We will use amd64 in this book. And since Debian 7.0, the multiarch feature has been included, allowing more than one architecture to be installed and be active on the same hardware. While the market seems to settle around 64-bit Intel processors, the choice of an architecture is still important because it determines the future availability of software that you can choose from Debian. There might be some software that is not compiled for or not compatible with your specific architecture, but there is software that is independent of the architecture. DevOps are usually pragmatic when it comes to choosing architectures, so the following two questions aim to help you understand what to expect when it comes to it: Will you run your web applications on your own hardware? If so, do you already have this hardware or will you procure it? If you need to procure hardware, take a look at the existing server hardware in your datacenter. Factors such as a preferred vendor, hardware standardization, and so on are all important when choosing the right architecture. From the most popular 32- or 64-bit Intel and AMD processors, the growing ARM ecosystem, and also the more venerable but declining SPARC or Itanium, Debian is available for lots of architectures. If you are out in the market for new hardware, your options are most likely based on an Intel- or AMD-compatible, 32- or 64-bit, server-grade processor. Your decisions will be influenced by factors such as the I/O capacity (throughput and speed), memory, disk, and so on, and the architecture will most likely be covered by Debian. Will you run your web applications on third-party hardware, such as a Virtual Private Server (VPS ) provider or a cloud Infrastructure as a Service (IaaS ) provider? Most providers will provide you with prebuilt images for Debian. 
They are either 32- or 64-bit, x86 images that have some sort of community support—but, be aware they might have no vendor support, or in some cases waive warranties and/or other factors such as the SLA. You should be able to prepare your own Debian installation using virtualization software (such as KVM, VirtualBox, or Hyper-V) and then upload the virtual disk (VHD, VDI, and so on) to your provider. Summary In this article, we learned about selecting the right flavor of Debian for our system. We also learned about the different architectures available in the market that we can use for Debian. Resources for Article : Further resources on this subject: Installation of OpenSIPS 1.6 [Article] Installing and customizing Redmine [Article] Installing and Using Openfire [Article]


Learning the Bukkit API

Packt
26 Sep 2013
6 min read
(For more resources related to this topic, see here.) Introduction to APIs API is an acronym for Application Programming Interface. An API helps to control how various software components are used. CraftBukkit includes the Minecraft code in a form that is easier for developers to utilize in creating plugins. CraftBukkit has a lot of code that we do not need to access for creating plugins. It also includes code that we should not use as it could cause the server to become unstable. Bukkit provides us with the classes that we can use to properly modify the game. Basically, Bukkit acts as a bridge between our plugin and the CraftBukkit server. The Bukkit team adds new classes, methods, and so on, to the API as new features develop in Minecraft, but the preexisting code rarely changes. This ensures that our Bukkit plugins will still function correctly months or even years from now. Even though new versions of Minecraft/CraftBukkit are being released. For example, if Minecraft were to change how an entity's health is handled, we would notice no difference. The CraftBukkit jar would account for this change and when our plugin calls the getHealth() method it would function exactly as it had before the update. Another example of how great the Bukkit API is would be the addition of new Minecraft features, such as new items. Let's say that we've created a plugin that gives food an expiration date. To see if an item is food we'd use the isEdible() method. Minecraft continues to create new items. If one of these new items was Pumpkin Bread, CraftBukkit would flag that type of item as edible and would therefore be given an expiration date by our plugin. A year from now, any new food items would still be given expiration dates without us needing to change any of our code. The Bukkit API documentation Documentation of the Bukkit API can be found at jd.bukkit.org. You will see several links regarding the status of the build (Recommended, Beta, or Development) and the form of the documentation (JavaDocs or Doxygen). If you are new to reading documentation of Java code, you may prefer Doxygen. It includes useful features, such as a search bar and collapsible lists and diagrams. If you are already familiar with reading documentation then you may be more comfortable using the JavaDocs. In the following screenshot, both API docs are side by side for comparison. The traditional JavaDocs are on the left and the Doxygen documentation is on the right. The following figure is the inheritance diagram for LivingEntity from the Doxygen site. Take note that on the site you are able to zoom in and click a box to go to that class. I encourage you to browse through each documentation to decide which one you prefer. They are simply displayed differently. When using the Doxygen API docs, you will have to navigate to the bukkit package to see a list of classes and packages. It can be found navigating to the following links within the left column: Bukkit | Classes | Class List | org | bukkit, as shown in the following screenshot: Navigating the Bukkit API Documentation We can look through this documentation to get a general idea of what we are able to modify on a CraftBukkit server. Server-side plugins are different from client-side mods. We are limited with what we are able to modify in the game using server-side plugins. For example, we cannot create a new type of block but we can make lava blocks rain from the sky. 
We cannot make zombies look and sound like dinosaurs but we can put a zombie on a leash, change its name to Fido and have it not burn in the daylight. For the most part you cannot change the visual aspect of the game, but you can change how it functions. This ensures that everyone who connects to the server with a standard Minecraft client will have the same experience. For some more examples on what we can do, we will view various pages of the API docs. You will notice that the classes are organized into several packages. These packages help group similar classes together. For example, a Cow , a Player, and a Zombie are all types of entities and thus can be found in the org.bukkit.entity package. So if I were to say that the World interface can be found at org.bukkit. World then you will know that the World class can be found within the bukkit package, which is inside the org package. Knowing this will help you find the classes that you are looking for. The search bar near the top right corner of the Doxygen site is another way to quickly find a class. Let's look at the World class and see what it has to offer. The classes are listed in alphabetical order so we will find World near the end of the list within the bukkit package. Once you click on the World class link, all of its methods will be displayed in the main column of the site under the header Public Member Functions as shown in the following screenshot: A World object is an entire world on your server. By default, a Minecraft server has multiple worlds including the main world, nether, and end. CraftBukkit even allows you to add additional worlds. The methods that are listed in the World class apply to the specific world object. For example, the Bukkit.getWorlds() method will give you a list of all the worlds that are on the server; each one is unique. Therefore if you were to call the getName() method on the first world it may return world while calling the same method on the second world may return world_nether. Summary In this article we learnt about what the reader can do by programming plugins. We also learnt the difference between Bukkit and CraftBukkit and how they relate to Minecraft. The term acronym API was also explained. Resources for Article : Further resources on this subject: Coding with Minecraft [Article] Instant Minecraft Designs – Building a Tudor-style house [Article] CryENGINE 3: Breaking Ground with Sandbox [Article]

Coding with Minecraft

Packt
17 Sep 2013
7 min read
(For more resources related to this topic, see here.)

Getting ready

Before you begin, you will need a running copy of Minecraft: Pi Edition. Start a game in a new or existing world, and wait for the game world to load.

How to do it...

Follow these steps to connect to the running Minecraft game:

Open a fresh terminal by double-clicking on the LXTerminal icon on the desktop.
Type cd ~/mcpi/api/python/mcpi into the terminal.
Type python to begin the Python interpreter.
Enter the following Python code:

import minecraft
mc = minecraft.Minecraft.create()
mc.postToChat("Hello, world!")

In the Minecraft window, you should see a message appear!

How it works...

First, we used the cd command, which we have seen previously, to move to the location where the Python API is located. The application programming interface (API) consists of code provided by the Minecraft developers that handles some of the more basic functionality you might need. We used the ~ character as a shortcut for your home directory (/home/pi). Typing in cd /home/pi/mcpi/api/python/mcpi would have exactly the same effect, but requires more typing.

We then start the Python interpreter. An interpreter is a program that executes code line by line as it is being typed. This allows us to get instant feedback on the code we are writing. You may like to explore the IDLE interpreter by typing idle into the terminal instead of python. IDLE is more advanced, and is able to color your code based on its meaning (so you can spot errors more easily), and can graphically suggest functions available for use.

Then we started writing real Python code. The first line, import minecraft, gives us access to the necessary parts of the API by loading the minecraft module. There are several Python code files inside the directory we moved to, one of which is called minecraft.py, each containing a different code module. The main module we want access to is called minecraft.

We then create a connection to the game using mc = minecraft.Minecraft.create(). mc is the name we have given to the connection, which allows us to use the same connection in any future code. minecraft. tells Python to look in the minecraft module. Minecraft is the name of a class in the minecraft module that groups together related data and functions. create() is a function of the Minecraft class that creates a connection to the game.

Finally, we use the connection we have created, and its postToChat method, to display a message in Minecraft.

The way that our code interacts with the game is completely hidden from us to increase flexibility: we can use almost exactly the same code to interact with any game of Minecraft: Pi Edition, and it is possible to use many different programming languages. If the developers want to change the way the communication works, they can do so, and it won't affect any of the code we have written. Behind the scenes, some text describing our command is sent across a network connection to the game, where the command is interpreted and performed. By default, the connection is to the very Raspberry Pi that we are running the code on, but it is also possible to send these commands over the Internet from any computer to any network-connected Raspberry Pi running Minecraft. A description of all of these text-based messages can be found in ~/mcpi/api/spec: the message sent to the game when we wrote mc.postToChat("Hello, world!") was chat.post("Hello, world!"). This way of doing things allows any programming language to communicate with the running Minecraft game.
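Because everything travels as text over that network connection, the same Python code can drive a game running on a different Raspberry Pi on your network. A minimal sketch, assuming the Pi's address is 192.168.1.50 (substitute your own):

import minecraft

# Connect to a Minecraft: Pi Edition game on another machine.
# The address below is an assumption; create() with no arguments
# defaults to the local machine and the API's standard port.
mc = minecraft.Minecraft.create("192.168.1.50")
mc.postToChat("Hello from another computer!")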
As well as Python, a Java API is included that is capable of all the same tasks, and the community has created versions of the API in several other languages.

There's more...

There are many more functions provided in the Python API; some of the main ones are described here. You can explore the available commands using Python's help function: after importing the minecraft module, help(minecraft) will list the contents of that module, along with any text descriptions provided by the writer of the module. You can also use help to provide information on classes and functions. It is also possible to create your own API by building on top of the existing functions. For example, if you find yourself wanting to create a lot of spheres, you could write your own function that makes use of those provided, and import your module wherever you need it.

The minecraft module

The following code assumes that you have imported the minecraft module and created a connection to the running Minecraft game using mc = minecraft.Minecraft.create(). Whenever x, y, and z coordinates are used, x and z are both different directions that follow the ground, and y is the height, with 0 at sea level.

mc.getBlock(x,y,z): This gets the type of a block at a particular location as a number. These numbers are all provided in the block module.
mc.setBlock(x,y,z, type): This sets the block at a particular position to a particular type. There is also a setBlocks function that allows a cuboid to be filled - this will be faster than setting blocks individually.
mc.getHeight(x,z): This gets the height of the world at the given location.
mc.getPlayerEntityIds(): This gets a list of IDs of all connected players.
mc.saveCheckpoint(): This saves the current state of the world.
mc.restoreCheckpoint(): This restores the state of the world from the saved checkpoint.
mc.postToChat(message): This posts a message to the game chat.
mc.setting(setting, status): This changes a game setting (such as "world_immutable" or "nametags_visible") to True or False.
mc.camera.setPos(x,y,z): This moves the game camera to a particular location. Other options are setNormal(player_id), setFixed(), and setFollow(player_id).
mc.player.getPos(): This gets the position of the host player.
mc.player.setPos(x,y,z): This moves the host player.
mc.events.pollBlockHits(): This gets a list of all blocks that have been hit since the last time the events were requested. Each event describes the position of the block that was hit.
mc.events.clearAll(): This clears the list of events. At the time of writing, only block hits are recorded, but more event types may be included in the future.

The block module

Another useful module is the block module: use import block to gain access to its contents. The block module has a list of all available blocks, and the numbers used to represent them. For example, a block of dirt is represented by the number 3. You can use 3 directly in your code if you like, or you can use the helpful name block.DIRT, which will help to make your code more readable. Some blocks, such as wool, have additional information to describe their color. This data can be provided after the block's ID in all functions. For example, to create a block of red wool, where 14 is the data value representing "red":

mc.setBlock(x, y, z, block.WOOL, 14)

Full information on the additional data values can be found online at http://www.minecraftwiki.net/wiki/Data values(Pocket_Edition).
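As a small illustration of how these functions combine, the following sketch (run from the same API directory as before; the coordinates and block choice are arbitrary) places a short column of dirt next to the player and announces it in chat:

import minecraft
import block

mc = minecraft.Minecraft.create()

# Find where the player is standing.
pos = mc.player.getPos()

# Build a three-block-high column of dirt two blocks away along the x axis.
for height in range(3):
    mc.setBlock(int(pos.x) + 2, int(pos.y) + height, int(pos.z), block.DIRT)

mc.postToChat("A dirt column has appeared!")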
It also explained how Python communicates with the game, and gave an overview of the other API functions, which you can use to build your own, more useful functions on top of the existing ones.

Resources for Article:

Further resources on this subject:
Creating a file server (Samba) [Article]
Webcam and Video Wizardry [Article]
Instant Minecraft Designs – Building a Tudor-style house [Article]


Securing data at the cell level (Intermediate)

Packt
01 Aug 2013
4 min read
(For more resources related to this topic, see here.)

Getting ready

The following prerequisites are essential for this recipe:

SQL Server 2012 Management Studio (SSMS).
The AdventureWorks2012 database. We can obtain the necessary database files and database product samples from the SQL Server Database Product Samples landing page (http://msftdbprodsamples.codeplex.com/releases/view/55330). These sample databases cannot be installed on any version of SQL Server other than SQL Server 2012 RTM or higher. Ensure you install the databases to your specified 2012 version instance. For this article I have created a new OLAP database using the AdventureWorksDM.xmla file.

Also, ensure that the user who is granting permissions is a member of the Analysis Services server role or a member of an Analysis Services database role that has Administrator permissions.

How to do it...

The following steps are continued from the previous recipe, but I believe it is necessary to reiterate them from the beginning. Hence, this recipe's steps are listed as follows:

Start SQL Server Management Studio and connect to the SQL Server 2012 Analysis Services instance.
Expand the Databases folder.
Choose the AdventureWorksDM database (created within the Getting ready section as previously mentioned) and expand the Roles folder. If you are reading this recipe directly without the previous recipes, you can create the necessary roles as per the Creating security roles (Intermediate) recipe.
Right-click on the role (here I have selected the DBIA_Processor role) and choose Role Properties.
Click on Cell Data in the Select a page option to present a relevant permissions list. In some cases, if you observe that there is no option available in the Cube drop-down list in the Cell Data option, ensure that the relevant cube is set with the appropriate Access and Local Cube/Drillthrough options by choosing the Cubes option on the left-hand side in Select a page. Refer to the following screenshot:

Now let us continue with the Cell Data options:

Click on Cell Data in the Select a page option to present a relevant permissions list.
Select the appropriate cube from the drop-down list; here I have selected the Adventure Works DW2012 cube.
Choose the Enable read permissions option and then click on the Edit MDX button. You will be presented with the MDX Builder screen. Then, choose the presented Metadata measure value to grant this permission.
Similarly, for the Enable read-contingent permissions option, follow the previous step.
Finally, click on the Enable read/write permissions option.

As a final check, we can click on either the Check button or the OK button, which will verify that valid syntax is parsed from the MDX expressions previously mentioned. If there are any syntax errors, you can fix them by choosing the relevant Edit MDX button. This completes the steps to secure the data at the cell level using a defined role in the Analysis Services database.

How it works...

There are a few guidelines and some contextual information that will help us understand how we can best secure the data in a cell. Nevertheless, whether the database role has read, read-contingent, or read/write permissions to the cell data, we need to ensure that we are granting permissions to derived cells correctly. By default, a derived cell obtains the relevant data from the other cells.
So, the appropriate database role may have the required permissions to the derived cell but not to the cells from which the derived cell obtains its values. Irrespective of whether the members of a database role have read or write permissions on some or all of the cells within a cube, the members of that role have no permission to view any other cube data. Once denied permissions on certain dimensions are in effect, cell-level security cannot expand the rights of the database role members to include cell members from that dimension. A blank expression in the relevant box will have no effect, even if you click on Enable read/write permissions.

Summary

Many databases insufficiently implement security through row- and column-level restrictions. Column-level security is only sufficient when the data schema is static, well known, and aligned with security concerns. Row-level security breaks down when a single record conveys multiple levels of information. The ability to control access at the cell level based on security labels, intrinsically within the relational engine, is an unprecedented capability. It has the potential to markedly improve the management of sensitive information in many sectors, and to enhance the ability to leverage data quickly and flexibly for operational needs. This article showed us just how to secure the data at the cell level.

Resources for Article:

Further resources on this subject:
Getting Started with Microsoft SQL Server 2008 R2 [Article]
Microsoft SQL Server 2008 High Availability: Installing Database Mirroring [Article]
SQL Server and PowerShell Basic Tasks [Article]


Insight into Hyper-V Storage

Packt
14 May 2013
14 min read
Types of Hyper-V virtual storage

In the previous section, we discovered what virtual storage is and how it contributes to the server virtualization architecture. With all the information we got from the previous section about virtual storage, let's now move ahead and see the virtual machine storage options that Hyper-V offers us. In this section, we will go through the different types of virtual machine storage options, such as VHD, VHDX, fixed disk, dynamic disk, differencing disk, and pass-through disk. We will discuss each of them in detail so that you understand all their ins and outs for better planning and sizing. Each of these virtual storage options has a different set of properties than the others, and the administrator must choose the virtual storage that best fits the server operating system and application needs. Let's now start by discussing each of these virtual storage options for virtual machines based on Hyper-V.

Virtual disk formats

First we must select the virtual disk format before we go ahead and create the virtual hard disk file. There are two possible virtual disk formats available with Windows Server 2012 Hyper-V: VHD and VHDX. Let's first make a short comparison between the two virtual hard disk formats, which will provide us with some quick guidance on making the appropriate selection according to the usage.

[Comparison table: VHD versus VHDX formats]

Virtual hard disk (VHD)

After the release of Hyper-V, Microsoft provided its customers with a native hypervisor for their server virtualization needs. But this newly released product had to face challenges for being called an enterprise virtualization platform, and the virtual hard disk (VHD) limitation was one of the caveats that customers faced. There are a few limitations to the VHD format; one is the limited size of the virtual hard disk (which we faced before) and another is the possibility of data inconsistency due to power failures.

A virtual hard disk (VHD) is file-based storage for your Hyper-V virtual machine; this is the default and basic level of storage functionality for a virtual machine. An administrator can create a virtual hard disk (VHD) file for a virtual machine within Hyper-V for a specific size, where defining the size is mandatory. This VHD file can have a different set of properties based on its type. A virtual hard disk (VHD) file is similar to the VMDK file format, which is the VMware virtual machine hard disk extension. VHD files also existed in the earlier versions of server and desktop virtualization software from Microsoft, for example, Virtual PC and Virtual Server.

Virtual hard disk (VHDX)

As we saw, the two main limitations of VHD format based virtual hard disks are size and data inconsistency due to power failure; Microsoft addressed these two main limitations of the VHD file format and introduced a new virtual hard disk format called VHDX. This virtual hard disk format allows customers to create virtual hard disks of up to 64 TB, where the earlier virtual hard disk format (VHD) only allowed virtual hard disks up to a size of 2 TB. Also, as this new format has a resilient architecture, the possibility of data corruption due to power failure is also reduced.

Virtual disk types

The virtual hard disk format decides the maximum size of a virtual hard disk, while the virtual hard disk type decides the functionality and features a virtual hard disk will provide. Microsoft Windows Server 2012 Hyper-V provides four types of virtual hard disks for virtual machines based on Hyper-V.
These four virtual hard disk types are as follows:

Dynamic disk
Fixed disk
Differencing disk
Pass-through disk

You should choose the virtual hard disk type based on your server and application requirements. Each type of virtual hard disk provides a different set of disk performance characteristics and functionalities, so proper planning is highly important to ensure that you select the right virtual storage for your workload. Let's now discover each of these storage types in detail.

Dynamic disk

When you create a new virtual machine and create a new virtual hard disk from New Virtual Machine Wizard, the wizard chooses the dynamic disk as the virtual hard disk type for you. Dynamic disks, as they sound, are dynamic; this means they change over time or due to the occurrence of certain events. Dynamic disks are the best choice for economic usage of the server's storage. Whatever size of dynamic disk you create, it won't immediately deduct the same amount of disk space from the physical storage of the Hyper-V server; instead it is created with a very small size and, over a period of time, keeps growing as you put data and content on the disk. This dynamic growth of the disk is the actual concept behind this type of virtual hard disk. Since dynamic disks are not of a fixed size and are actually small in size, they cannot deliver good disk I/O for storage-intensive applications.

Real-world example

Over the years, I have seen many cases where a production workload (VM) has had performance bottlenecks, especially for the disk subsystem of a virtual machine. And among these cases of performance problems related to the virtual machine disk subsystem, the majority of the time I saw people using dynamic virtual hard disks for their production workloads. Since dynamic disks do not have fixed size storage for virtual machines, they are not a good choice for disk-intensive applications and server roles. A dynamic disk has another problem of not being able to provide good results for disk fragmentation or other similar activities, due to its design. Dynamic disks are good for the testing and research and development types of virtual machines where the performance factor is not very important.

Now let's see how we can create a dynamic disk for a virtual machine. As we have said previously, when you create a virtual machine via New Virtual Machine Wizard, it also gives you the option to create a virtual hard disk for the virtual machine by default. This wizard for creating a new virtual machine, along with the creation of the virtual hard disk, provides a dynamic disk by default, and so the disk type option is not provided as a selection option. In the following brief steps, we will see how to create a dynamic disk:

Open Hyper-V Manager from Administrative Tools.
From the Hyper-V Manager snap-in, find the New button on the right action pane and click on Hard Disk.
New Hard Disk Wizard will open; it will first ask you to select the hard disk format, which could be either VHD or VHDX, depending on the size of your hard disk.
Then you will be prompted to select the disk type; here we will select Dynamic Disk.
The next section of New Hard Disk Wizard will ask you the name of the virtual hard disk and the location where you want this virtual hard disk to be created and stored.
Now the last section of this wizard will ask you the size of the disk you want to create.
This section also gives you the functionality to copy content from a physical disk of the server or any other virtual disk that has already been created.

Fixed disk

A fixed disk is like a static disk that has a fixed size and doesn't grow over time as we go on adding content to it. Fixed disks provide better performance compared to dynamic disks, because when we create a fixed disk of 100 GB, Hyper-V creates a VHD or VHDX file of 100 GB. It should be noted here that creating this 100 GB fixed disk will take a long time as it has to create a VHD/VHDX file of 100 GB, and the larger the disk you create, the longer it will take.

A fixed disk allocates a fixed size from the physical storage of the Hyper-V server, and this big chunk of allocated disk space allows the virtual machine to receive better I/O performance from this type of virtual hard disk. Fixed disks are always recommended for production workloads because their better performance allows administrators to perform faster read/write operations on virtual disks. Fixed disks are mainly created for virtual machines that run disk-intensive applications, where high disk I/O is required for virtual machine storage, for example, the virtual hard disk you will create if you are going to virtualize a file server. Here you will store all the files on the hard disk and so it should be a fixed disk, but at the same time the operating system disk of the file server can be kept as a dynamic disk because there will not be much disk activity on it.

To create a fixed disk, you need to perform the following steps:

Open Hyper-V Manager from Administrative Tools.
From the Hyper-V Manager snap-in, find the New button on the right action pane and click on Hard Disk.
New Hard Disk Wizard will open; it will first ask you to select the hard disk format, which could be either VHD or VHDX, depending on the size of your hard disk.
You will then be prompted to select the disk type; here we will select Fixed Disk.
The next section of New Hard Disk Wizard will ask you for the name of your virtual hard disk and the location where you want this virtual hard disk to be created and stored.
Now the last section of this wizard will ask you the size of the disk you want to create. This section also gives you the functionality to copy content from a physical disk of the server or any other virtual disk that has already been created.

Differencing disk

A differencing disk has a parent-child model associated with its architecture. Mainly, it comes into use when an administrator takes a snapshot of a virtual machine; after creating the snapshot, Hyper-V leaves the original parent VHD intact and creates a new child disk that gets linked to the parent virtual hard disk. Both parent and child disks always have the same disk format; this means that if the parent disk is created as VHD, the child disk cannot be VHDX.

A differencing disk is usually never recommended for production workloads because if you create a snapshot of a production workload, you will stop writing to the production virtual hard disk. Differencing disks are the same in nature as dynamic disks, where the disk size grows over a period of time as we go on adding more data to the disk; this nature of the disk may not give you good performance for the disk subsystem of the production workload.
Another problem with the differencing disk is that when we create a snapshot of the virtual machine, from that point in time all the data gets written to the differencing disk and your parent VHD/VHDX becomes idle and isolated from the new data changes. And in the case of multiple snapshots, you will have data written on multiple differencing disks. So if any of the differencing disks (snapshots) get misplaced or deleted, you will lose all the data that was written on it during that particular period of time. So in a nutshell, it is highly recommended not to create a snapshot of a production virtual machine; but if you have taken one already, make sure that you restore it to its parent VHD/VHDX as early as possible.

To create a differencing disk, you may perform the following steps:

Previously, we saw the steps for creating other disk types; follow those same steps until we reach the step where we need to select the disk type.
When we are prompted to select the disk type, select Differencing Disk.
The next section of New Hard Disk Wizard will ask you the name of the virtual hard disk and the location where you want this virtual hard disk to be created and stored.
Now the last section of this wizard will ask you the size of the disk you want to create. This section also gives you the functionality to copy content from a physical disk of the server or any other virtual disk that has already been created.

Pass-through disk

A pass-through disk is a storage type in which an administrator presents a physical hard disk, which is associated or attached to the Hyper-V host server, to the virtual machine as a raw disk. This type of virtual machine storage is called a pass-through or raw disk; in this type of storage, the physical disk or LUN passes through the hypervisor and on to the virtual machine guest system. The physical disk that is associated or attached to the Hyper-V server could be a SAN LUN bound to the Hyper-V server, or it could be a locally installed physical hard disk.
This means that if you have taken a backup of a virtual machine using a VSS-based backup solution, all the pass-through disks of the virtual machine will not be backed up and only the VHD-based/VHDX-based disks will be included in the VSS snapshot backup. The following steps need to be carried out to provide a pass-through disk to a virtual machine within the Hyper-V server: Open Hyper-V Manager from Administrative Tools. Take the virtual machine settings that you want to configure and add a pass-through disk. Add SCSI Controller from the Add Hardware section of the virtual machine settings, or if you wish, you can also add a physical disk to an IDE controller. Then select the hard drive and click on the Add button. Once you click on the Add button at the controller level, you will be prompted to select the physical hard drive that you want to connect to the virtual machine. After selecting the appropriate disk, you need to connect it to the virtual machine. First click on Apply and then click on the OK button to commit the changes. Image Virtual Fibre Channel SAN With the release of Windows Server 2012, Hyper-V now offers virtual FC SAN connectivity to virtual machines, to allow virtual machines to connect to a virtual SAN. The administrator first connects a virtual SAN network setup on the Hyper-V server, just like what we would do to create a virtual switch for different network segments. Once the virtual fibre SAN network gets set up, the Fibre Channel adapter can be added to the virtual machine that needs to be associated with the Fibre Channel SAN network. Hyper-V allows the Hyper-V administrator to configure WWNs and other settings related to Fibre Channel SAN; all these settings can be customized from the virtual machine Fibre Channel adapter or as global settings from the Fibre Channel network of the virtual SAN manager. Image The preceding screenshot describes the connectivity of a standalone Hyper-V server to the FC SAN. In the first phase of connectivity, our Hyper-V server gets connected to the FC SAN switch through a Fibre Channel medium. Then in the second phase of connectivity, we configure the virtual Fibre Channel switch on the Hyper-V server. Once the virtual fabric switch is configured, we can simply add the virtual Fibre Channel adapter to the virtual machine and connect the adapter to the virtual Fibre Channel switch.


Instant Minecraft Designs – Building a Tudor-style house

Packt
04 Apr 2013
6 min read
(For more resources related to this topic, see here.)

Tudor-style house

In this recipe, we'll be building a Tudor-style house. We'll be employing some manual building methods, and we'll also introduce some WorldEdit CUI commands and VoxelSniper actions into our workflow.

Getting ready

Once you have installed the recommended mods, you will need to have an Arrow tool equipped on your action bar. This is used by VoxelSniper to perform its functions. You will also need to equip a Wooden Axe, as this item becomes the WorldEdit tool and will be used for making selections. Don't try to use these tools to break blocks, especially if you have made a selection that you don't want to lose. Not only will they not break the block, they will also wreck your selection or worse.

How to do it...

Let's get started with building our Tudor-style house by performing the following steps:

Find a nice area or clear one with roughly 40 x 40 squares of flat land.
Mark out a selection of 37 x 13 blocks by left-clicking with the Wooden Axe to set the first point and then right-clicking for the second point.
Hit your T key and type the //set 5:1 command. This will make all of the blocks in the selected area turn into Spruce Wood Planks. If you make a mistake, you can do //undo. The //undo command does not alter the selection itself, only the changes made to blocks.
Now create a selection of 20 x 13 that will complete the L shape of the mansion's bottom floor. Remember to left-click and right-click with the Wooden Axe tool. Now type //set 5:1.
In the corner that will be at the end of the outside wall of the longest wing, place a stack of three Spruce Wood blocks on top of each other. Right beside this, place two stacked Wool blocks and one Spruce Wood block on top of them, as shown in the inset of the following screenshot:
With the selection in place, we will now stack these six blocks horizontally along the 37-block wall. The stack command works in the direction you face. So face directly down the length of the floor and type //stack 17. If you make a mistake, do //undo.
Go to the opposite end of the wall you just made and place a stack of three Spruce Wood blocks in the missing spot at the end. Then just like before, put two blocks of White Wool on the side of the corner Spruce Wood pole with one Spruce Wood block on top.
Select these six blocks and, facing along the short end wall, type //stack 5.
Go to the end of this wall and complete it with the three Spruce Logs and two blocks of Wool with one Spruce block on top where the next wall will go.
Select these six blocks. Remember! Wooden Axe, left-click, right-click. Facing down the inside wall, type //stack 11.
Place another three Spruce Wood blocks upright in the corner and two Wool blocks with one Spruce block on top for the adjacent inner wall. Make a selection, face in the correct direction, and then type //stack 9.
Repeat this same process of placing the six blocks, selecting them, facing in the correct direction for the next wall, and typing //stack 5.
Finally, type //stack 15 and your base should now be complete.
On the corner section, we're going to make some bay windows. So let's create the reinforcing structure for those: inset by two blocks from the corner, place five of these reinforcement structures. They consist of one Spruce Wood upright and two upside-down Nether Brick steps, each aligned to the Spruce Wood uprights behind them.
Now we'll place the wall sections of the bay windows.
You should be able to create these by referring to the right-hand section of the following screenshot:

Now comes the use of the VoxelSniper GUI. So let's add some windows using it:

Hit your V key to bring up the VoxelSniper GUI. We're going to "snipe" some windows into place.
The first section, Place, in the top left-hand side represents the block you wish to place. For this we will select Glass.
The section directly below Place is the Replace panel. As the name suggests, this is the block you wish to replace. We wish to replace White Wool, so we'll select that. Scroll through and locate the Wool block. In the right-hand side, under the Ink panel scroll box, select the White Wool block. Make sure the No-Physics checkbox is not selected.
In the right-hand panel, we will select the tool we wish to use. If it's not already selected, click on the Sized tab and choose Snipe. If you get lost, just follow the preceding screenshot.
Choose your Arrow tool and right-click on the White Wool blocks you wish to change to Glass. VoxelSniper works from a distance, hence the "Sniper" part of the name, so be careful when experimenting with this tool. If you make a mistake in VoxelSniper, use /u to undo. You can also do /u 5, or /u 7, or /u 22, and so on if you wish to undo multiple actions.
The upcoming screenshots should illustrate the sort of pattern we will implement along each of the walls. The VoxelSniper GUI tool retains the last settings used, so you can just fill in all the Glass sections of the wall with Wool initially, and then replace them using VoxelSniper once you are done. For now, just do it for the two longest outer walls. The following screenshot shows the 37 and 33 block length walls:
On the short wing end wall, we'll fill the whole area with White Wool. So let's type //set 35.
On the short side, make a 21 x 4 selection like the one shown in the following screenshot (top-left section), and stand directly on the block as indicated by the player in the top-left section of the screenshot. Do //copy and then move to the pole on the opposite side.
Once you are on the corner column, as in the bottom-left section of the preceding screenshot, do //paste. To be sure that you are standing exactly on the right block, turn off flying (double-click the Space bar), knock out the block below your feet, and make sure you fall down to the block below. Then jump up and replace the block.
Do the same for the other wing. Select the wall section with the windows, repeat the process as you did for the previous wall, and then fill in the end wall with Wool blocks for now.
Add a wooden floor that is level with the three Wool blocks below the Spruce Window frames. You can use the //set 5:1 command to fill in the large rectangular areas.
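As a quick reference, these are the WorldEdit CUI commands used so far in this recipe; the block IDs shown (5:1 for Spruce Wood Planks and 35 for White Wool) are the ones used in the steps above.

//set 5:1    fill the current selection with Spruce Wood Planks
//set 35     fill the current selection with White Wool
//stack 17   repeat the selection 17 times in the direction you are facing
//copy       copy the selection, relative to where you are standing
//paste      paste the copied selection, relative to where you are standing
//undo       undo the last WorldEdit change (use /u to undo VoxelSniper actions)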

Customizing your IBM Lotus Notes 8.5.3 experience

Packt
02 Apr 2013
4 min read
(For more resources related to this topic, see here.)

So you are using Lotus Notes 8.5.3 for e-mail, calendar, and collaboration, and you want to know how to go from just using Lotus Notes to letting Lotus Notes work for you. Lotus Notes is highly customizable, adapting to the way you want to work. We will show you how to make Lotus Notes look and behave in the manner you choose.

Getting ready

Your IBM Lotus Notes 8.5.3 client should be installed and connected to your home mail server to receive mail.

How to do it...

Let's start with the Home page. The Home page is the first page you will see when setting up a new client. You can also access it in many different ways if your client is already set up. One way to get to it is from the Open list, as shown in the following screenshot:

Here is what the default Home page looks like after you open it:

How it works...

To customize the Home page, click on Click here for Home Page options. Then click on Create a new Home Page. This will bring up the Home Page wizard. Give your new Home page a name, and then you can choose how you want your important data to be displayed via your new Home page. As you can see, there are many ways to customize your Home page to display exactly what you need on your screen.

There's more...

Now we will look at more ways to customize your IBM Lotus Notes 8.5.3 experience.

Open list

By clicking on the Open button in the upper left corner of the Notes client, you can access the Open list. The Open list is a convenient place to launch your mail, calendar, contacts, to-dos, websites, and applications. You can also access your workspace and Home page from the Open list. Applications added to your workspace are dynamically added to the Open list. The contextual search feature will help you efficiently find exactly what you are looking for.

One option when using the Open list is to dock it. When the Open list is docked, it will appear permanently on the left-hand side of the Lotus Notes client. To dock it, right-click on the Open list and select Dock the Open list. To undock it, right-click in an empty area of the docked list and uncheck Dock the Open list.

Windows and Themes

You can choose how you want your windows in Lotus Notes 8.5.3 to look. In the Windows and Themes preference panel, you can control how you want Notes to behave. First, decide if you want your open documents to appear as tabs or windows. Then decide if you want the tabs that you had left open when you exited the client to be retained when you open it again. The option to Group documents from each application on a tab will group any documents or views opened from one application. You can see these options in the following screenshot:

New mail notification

By checking the preference setting called Sending and Receiving under Preferences | Mail, you can display a pop-up alert when a new mail arrives. The pop up displays the sender and the subject of the message. You can then open the e-mail from the pop up. You can also drag the pop up to pin it open. To turn this off, uncheck the preference setting.

Workspace

The workspace has been around for a long time, and this is where icons representing Domino applications are found. You can choose to stack icons or have them un-stacked. Stacking the icons places replicas of the same applications on top of each other. The icon on the top of the stack dictates which replica is opened. For example, for mail, if the server replica is on top, then the local replica will be ignored, causing potential slowness.
If you would like to make your workspace look more three-dimensional and add texture to the background, select this setting in the Basic Notes Client preference. You can also add new pages, change the color of pages, and name them by right-clicking on the workspace.

Summary

This article has provided a brief overview of Lotus Notes 8.5.3. It also explained how you can customize your Lotus Notes client and make it look and behave in the manner you choose.

Resources for Article:

Further resources on this subject:
Feeds in IBM Lotus Notes 8.5 [Article]
Lotus Notes 8 — Productivity Tools [Article]
IBM Lotus Quickr Services Overview [Article]


Active Directory migration

Packt
28 Mar 2013
6 min read
(For more resources related to this topic, see here.)

Getting ready

The following prerequisites have to be met before we can introduce the first Windows Server 2012 Domain Controller into the existing Active Directory domain:

In order to add a Windows Server 2012 Domain Controller, the Forest Functional Level (FFL) must be at least Windows Server 2003.
ADPREP is now part of the domain controller promotion process, and the schema will be upgraded during this process. So the account must have Schema Admins and Enterprise Admins privileges to install the first Windows Server 2012 Domain Controller.
If there is a firewall between the new server and the existing domain controllers, make sure all the RPC high ports are open between these servers. The domain controller installation and replication can be restricted to a static port or a range of RPC ports by modifying the registry on the domain controllers.
The new Windows Server 2012 server's primary DNS IP address must be the IP address of an existing domain controller.
The new server must be able to access the existing Active Directory domain and controllers by NetBIOS and Fully Qualified Domain Name (FQDN).
If the new domain controller will be in a new site or in a new subnet, make sure to update Active Directory Sites and Services with this information.

In Windows Server 2012, domain controllers can be deployed remotely by using Server Manager. The following recipe provides step-by-step instructions on how to deploy a domain controller in an existing Active Directory environment.

How to do it...

Install and configure a Windows Server 2012 server.
Join the new Windows Server 2012 server to the existing Active Directory domain.
Open Server Manager.
Navigate to the All Servers group in the left-hand side pane.
From the Server Name box, right-click on the appropriate server and select the Add Roles and Features option. You can also select Add Roles and Features from the Manage menu in the command bar. If the correct server is not listed here, you can manually add it from the Manage tab on the top right-hand side and select Add Server.
Click on Next on the Welcome window.
In the Select Installation Type window, select Role based or Feature based installation. Click on Next.
In the Select destination server window, select the Select a server from the server pool option and the correct server from the Server Pool box. Click on Next.
In the Select server roles window, select Active Directory Domain Services. You will see a pop-up window to confirm the installation of the Group Policy Management Tool. It is not required to install the administrative tools on a domain controller. However, this tool is required for Group Policy Object management and administration. Click on Next.
Click on Next in the Select features window.
Click on Next on the Active Directory Domain Services window.
In the Confirm Installation Selections window, select the Restart the destination server automatically if required option. In the pop-up window click on Yes to confirm the restart option and click on Install.
This will begin the installation process. You will see the progress on the installation window itself. This window can be closed without interrupting the installation process. You can get the status update from the notification section in the command bar as shown in the following screenshot:

The Post-deployment Configuration task needs to be completed after the Active Directory Domain Services role installation. This process will promote the new server to a domain controller.
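The role installation performed in the preceding steps can also be scripted with the AD DS deployment cmdlets that ship with Windows Server 2012. The following is a minimal sketch, not the wizard's own output: the domain name, site name, paths, and credentials are placeholders, and the promotion switches simply mirror the options chosen in the wizard steps that follow.

# Install the AD DS role and the management tools on the new server
Install-WindowsFeature AD-Domain-Services -IncludeManagementTools

# Promote the server as an additional domain controller in an existing domain
# (similar to the script produced by the wizard's View Script option)
Install-ADDSDomainController `
    -DomainName 'corp.contoso.com' `
    -InstallDns:$true `
    -SiteName 'Default-First-Site-Name' `
    -DatabasePath 'C:\Windows\NTDS' `
    -LogPath 'C:\Windows\NTDS' `
    -SysvolPath 'C:\Windows\SYSVOL' `
    -Credential (Get-Credential 'CORP\Administrator') `
    -SafeModeAdministratorPassword (Read-Host 'DSRM password' -AsSecureString)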
From the notification window, select the Promote this server to a domain controller hyperlink.

In the Deployment Configuration window, you should be able to:

Install a new forest
Install a new child domain
Add an additional domain controller to an existing domain
Specify alternative credentials for the domain controller promotion, and so on

Since our goal is to install an additional domain controller in an existing domain, select the Add a domain controller to an existing domain option. Click on Next.

In the Domain Controller Options window, you will see the following options:

Domain Name System (DNS) server
Global Catalog (GC)
Read-Only Domain Controller (RODC)
Site name
Type the Directory Services Restore Mode (DSRM) password

Select the Domain Name System (DNS) server and Global Catalog (GC) checkboxes and provide the Directory Services Restore Mode (DSRM) password. Click on Next.

Click on Next on the DNS Options window.

In the Additional Options window you will see the following options:

Install from media
Replicate from

Accept the default options unless you have technical reasons to modify these. Click on Next.

In the Paths window, you can specify the AD Database, Log, and SYSVOL locations. Select the appropriate locations and then click on Next. Review the Microsoft Infrastructure Planning and Design (IPD) guides for best-practice recommendations. For performance improvements, it is recommended to place the database, logs, and so on on separate drives.

Click on Next on the Preparation Options window. During this process the Active Directory schema and domain preparation will happen in the background.

You should be able to review the selected options on the next screen. You can export these settings and configurations to a PowerShell script by clicking on the View Script option in the bottom-right corner of the screen. This script can be used for future domain controller deployments. Click on Next to continue with the installation.

The prerequisite checking process will happen in the background. You will see the result in the Prerequisites Check window. This is a new enhancement in Windows Server 2012. Review the result and click on Install.

The progress of the domain controller promotion will be displayed on the Installation window. The following warning message will be displayed on the destination server before it restarts:

You can review the %systemroot%\debug\dcpromo.log and %SystemRoot%\debug\netsetup.log log files to get more information about DCPROMO and domain join-related issues.

Summary

Thus we learned how to perform an Active Directory migration and its prerequisites, the schema upgrade procedure, verification of the schema version, and installation of the first Windows Server 2012 Domain Controller in an existing Windows Server 2008 or Windows Server 2008 R2 domain.

Resources for Article:

Further resources on this subject:
Migrating from MS SQL Server 2008 to EnterpriseDB [Article]
Moving a Database from SQL Server 2005 to SQL Server 2008 in Three Steps [Article]
Authoring an EnterpriseDB report with Report Builder 2.0 [Article]


Cross-premise Connectivity

Packt
08 Feb 2013
14 min read
Evolving remote access challenges

In order to increase the productivity of employees, every company wants to provide its employees with access to their applications from anywhere. Users are no longer tied to working from a single location. They need access to their data from any location and from any device they have, and they want to access their applications irrespective of where those applications are hosted. Allowing this remote connectivity to increase productivity is in constant conflict with keeping the edge secure. As we allow more applications, the edge device becomes porous, and keeping the edge secure is a constant battle for administrators. The network administrators have to ensure that remote access is always available to their remote users, and that those users can access their applications in the same way as they would while in the office. Otherwise the users would need to be trained on how to access an application while they are remote, and this is bound to increase the support cost of maintaining the infrastructure. Another important challenge for the network administrator is the ability to manage the remote connections and ensure they are secure.

Migration to dynamic cloud

In a modern enterprise, there is a constant need to optimize the infrastructure based on workload. Most of the time we want to know how to plan for the correct capacity rather than taking a bet on the number of servers that are needed for a given workload. If the business needs are seasonal, we need to bet on a certain level of infrastructure expenses. If we don't get the expected traffic, the investment may go underutilized. At the same time, if the incoming traffic volume is too high, the organization may lose the opportunity to generate additional revenue. In order to reduce the risk of losing additional revenue and at the same time reduce large capital expenses, organizations may deploy virtualized solutions. However, this still requires the organization to take a bet on the initial infrastructure. What if the organization could deploy their infrastructure based on need? Then they could expand on demand. This is where moving to the cloud helps to move the capital expense (CapEx) to operational expense (OpEx). If you tell your finance department that you are moving to an OpEx model for your infrastructure needs, you will definitely be greeted by cheers and offered cake (or at least, a fancy calculator).

The needs of modern data centers

As we said, reducing capital expense is on everyone's to-do list these days, and being able to invest in your infrastructure based on business needs is key to achieving that goal. If your company is expecting seasonal workload, you would probably want to be able to dynamically expand your infrastructure based on needs. Moving your workloads to the cloud allows you to do this. If you are dealing with sensitive customer data or intellectual property, you probably want to be able to maintain secure connectivity between your premise and the cloud. You might also need to move workloads between your premise and the cloud as your business demands, and so establishing secure connectivity between corporate and the cloud must be dynamic and transparent to your users. That means the gateway you use at the edge of your on-premise network and the gateway your cloud provider uses must be compatible. Another consideration is that you must also be able to establish or tear down the connection quickly, and it needs to be able to recover from outages very quickly.
In addition, today's users are mobile and the data they access is also dynamic (the data itself may move from your on-premise servers to the cloud or back). Ideally, the users need not know where the data is and from where they are accessing it, and they should not have to change their behavior depending on where they access the data from and where the data resides. All these are the needs of the modern data center. Things may get even more complex if you have multiple branch offices and multiple cloud locations.

Dynamic cloud access with URA

Let's see how these goals can be met with Windows Server 2012. In order for mobile users to connect to the organizational network, they can use either DirectAccess or VPN. When you move resources to the cloud, you need to maintain the same address space for the resources so that your users are impacted by this change as little as possible. When you move a server or an entire network to the cloud, you can establish a Site-to-Site (S2S) connection through an edge gateway.

Imagine you have a global deployment with many remote sites, a couple of public cloud data centers, and some of your own private cloud. As the number of these remote sites grows, the number of Site-to-Site links needed will grow exponentially. If you have to maintain a gateway server or device for the Site-to-Site connections and another gateway for remote access such as VPN or DirectAccess, the maintenance cost associated with it can increase dramatically.

One of the most significant new abilities of Windows Server 2012 Unified Remote Access is the combination of DirectAccess and the traditional Routing and Remote Access Server (RRAS) in the same Remote Access role. With this, you can now manage all your remote access needs from one unified console. As we've seen, only certain versions of Windows (Windows 7 Enterprise and Ultimate, Windows 8 Enterprise) can be DirectAccess clients, but what if you have to accommodate some Vista or XP clients, or if you have third-party clients that need CorpNet connectivity? With Windows Server 2012, you can enable the traditional VPN from the Remote Access console and allow the down-level and third-party clients to connect via VPN. The Unified Remote Access console also allows the remote access clients to be monitored from the same console. This is very useful as you can now configure, manage, monitor, and troubleshoot all remote access needs from the same place.

In the past, you might have used Site-to-Site demand-dial connections to connect and route to your remote offices, but until now the demand-dial Site-to-Site connections used either the Point-to-Point Tunneling Protocol (PPTP) or the Layer Two Tunneling Protocol (L2TP). However, these involved manual steps that needed to be performed from the console. They also posed challenges working with similar gateway devices from other vendors, and because the actions needed to be performed through the console, they did not scale well if the number of Site-to-Site connections increased beyond a certain number. Some products attempted to overcome the limits of the built-in Site-to-Site options in Windows. For example, Microsoft's Forefront Threat Management Gateway 2010 used the Internet Key Exchange (IKE) protocol, which allowed it to work with other gateways from Cisco and Juniper. However, the limit of that solution was that in case one end of the IPsec connection failed for some reason, Dead Peer Detection (DPD) took some time to realize the failure.
The time it took for the recovery or fallback to an alternate path caused some applications that were communicating over the tunnel to fail, and this disruption to the service could cause significant losses.

Thanks to the ability to combine both VPN and DirectAccess in the same box, as well as the ability to add the Site-to-Site IPsec connection in the same box, Windows Server 2012 allows you to reduce the number of unique gateway servers needed at each site. Also, the Site-to-Site connections can be established and torn down with a simple PowerShell command, making it easier to manage multiple connections. The S2S tunnel mode IPsec link uses the industry-standard IKEv2 protocol for IPsec negotiation between the end points, which is great because this protocol is the current interoperability standard for almost any VPN gateway. That means you don't have to worry about what the remote gateway is; as long as it supports IKEv2, you can confidently create the S2S IPsec tunnel to it and establish connectivity easily and with a much better recovery speed in case of a connection drop.

Now let's look at the options and see how we can quickly and effectively establish the connectivity using URA. Let's start with a headquarters location and a branch office location and then look at the high-level needs and steps to achieve the desired connectivity. Since this involves just two locations, our typical needs are that clients in either location should be able to connect to the other site. The connection should be secure, and we need the link only when there is a need for traffic flow between the two locations. We don't want to use dedicated links such as T1 or fractional T1 lines, as we do not want to pay the high cost associated with them. Instead, we can use our pre-existing Internet connection and establish Site-to-Site IPsec tunnels that provide us a secure way to connect the two locations. We also want users from public Internet locations to be able to access any resource in any location.

We have already seen how DirectAccess can provide us with seamless connectivity to the organizational network for domain-joined Windows 7 or Windows 8 clients, and how to set up a multisite deployment. We also saw how multisite allows Windows 8 clients to connect to the nearest site, while Windows 7 clients connect to the site they are configured to connect to. Because the same URA server can also be configured as a S2S gateway and the IPsec tunnel allows both IPv4 and IPv6 traffic to flow through it, it will now allow our DirectAccess clients in public Internet locations to connect to any one of the sites and also reach the remote site through the Site-to-Site tunnel.

Adding the site in the cloud is very similar to adding a branch office location, and it can be either your private cloud or the public cloud. Typically, the cloud service provider provides its own gateway and will allow you to build your infrastructure behind it. The provider could typically provide you an IP address to use as a remote end point, and they will allow you to connect to your resources by NATting the traffic to your resource in the cloud.

Adding a cloud location using Site-to-Site

In the following diagram, we have a site called Headquarters with a URA server (URA1) at the edge. The clients on the public Internet can access resources in the corporate network through DirectAccess or through the traditional VPN, using URA1 at the edge.
We have a cloud infrastructure provider, and we need to build our CloudNet in the cloud and provide connectivity between the corporate network at the Headquarters and CloudNet in the cloud. The clients on the Internet should be able to access resources in the corporate network or CloudNet, and the connection should be transparent to them. The CloudGW is the typical edge device in the cloud that your cloud provider owns, and it is used to control and monitor the traffic flow to each tenant.

Basic setup of cross-premise connectivity

The following steps outline the various options and scenarios you might want to configure:

Ask your cloud provider for the public IP address of the cloud gateway they provide.
Build a virtual machine running Windows Server 2012 with the Remote Access role and place it in your cloud location. We will refer to this server as URA2.
Configure URA2 as a S2S gateway with two interfaces:
The interface towards the CloudGW will be the IPsec tunnel endpoint for the S2S connection. The IP address for this interface could be a public IPv4 address assigned by your cloud provider or a private IPv4 address of your choice. If it is a private IPv4 address, the provider should send all the IPsec traffic for the S2S connection from the CloudGW to the Internet-facing interface of URA2, and the remote tunnel endpoint configuration in URA1 for the remote site will be the public address that you got in step 1. If the Internet-facing interface of URA2 is a routable public IPv4 address, the remote tunnel endpoint configuration in URA1 for the remote site will be this public address of URA2.
The second interface on URA2 will be a private address that you are going to use in your CloudNet towards the servers you are hosting there.
Configure the cloud gateway to allow the S2S connections to your gateway (URA2).
Establish S2S connectivity between URA2 and URA1. This will allow you to route all traffic between CloudNet and CorpNet (a PowerShell sketch of this step is shown after the next section).

The preceding steps provide full access between the CloudNet and CorpNet and also allow your DirectAccess and VPN clients on the Internet to access any resource in CorpNet or CloudNet without having to worry whether the resource is in CorpNet or in CloudNet.

DirectAccess entry point in the cloud

Building on the basic setup, you can further extend the capabilities of the clients on the Internet to reach the CloudNet directly without having to go through the CorpNet. To achieve this, we can add a URA server in the CloudNet (URA3). Here is an overview of the steps to achieve this (assuming your URA server URA3 is already installed with the Remote Access role):

Place a domain controller in CloudNet. It can communicate with your domain through the Site-to-Site connection to do Active Directory replication and perform just like any other domain controller.
Enable the multisite configuration on your primary URA server (URA1).
Add URA3 as an additional entry point. It will be configured as a URA server with the single NIC topology.
Register the IP-HTTPS site name in DNS for URA3.
Configure your cloud gateway to forward the HTTPS traffic to URA2 and in turn to URA3 to allow clients to establish the IP-HTTPS connections.

Using this setup, clients on the Internet can connect to either entry point, URA1 or URA3. No matter which they choose, they can access all resources either directly or via the Site-to-Site tunnel.
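As noted above, the "Establish S2S connectivity between URA2 and URA1" step can be scripted with the RemoteAccess cmdlets in Windows Server 2012. The following is only a minimal sketch run on URA1 using a PSK; the interface name, remote endpoint address, shared secret, and CloudNet subnet are placeholders for your own values.

# Enable the Site-to-Site VPN functionality on the URA server
Install-RemoteAccess -VpnType VpnS2S

# Define the IKEv2 tunnel to the cloud gateway, with a route to the CloudNet subnet behind it
Add-VpnS2SInterface -Name 'CloudNet' `
    -Protocol IKEv2 `
    -Destination '131.107.0.20' `
    -AuthenticationMethod PSKOnly `
    -SharedSecret 'ReplaceWithYourPSK' `
    -IPv4Subnet '10.2.0.0/24:100' `
    -Persistent

# Bring the tunnel up immediately instead of waiting for demand-dial traffic
Connect-VpnS2SInterface -Name 'CloudNet'

# When the cloud site is no longer needed, the link can be torn down just as quickly
Remove-VpnS2SInterface -Name 'CloudNet'

Once computer certificates are in place at both ends, switching the same interface to certificate authentication (as described in the next section) should simply be a matter of changing -AuthenticationMethod from PSKOnly to MachineCertificates.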
Authentication

The Site-to-Site connection between the two end points (URA1 and URA2) can be configured with a Pre-Shared Key (PSK) for authentication, or you can further secure the IPsec tunnel with certificate authentication. The certificates you will need for certificate authentication are computer certificates that match the names of the end points. You could use either certificates issued by a third-party provider or certificates issued from your internal Certificate Authority (CA). As with any certificate authentication, the two end points need to trust the certificates used at either end, so you need to make sure the certificate of the root CA is installed on both servers. To make things simpler, you can start with a simple PSK-based tunnel and, once the basic scenario works, change the authentication to computer certificates. We will see the steps to use both PSK and certificates in the detailed steps in the following section.

Configuration steps

Even though the Site-to-Site IPsec tunnel configuration is possible via the console, we highly recommend that you get familiar with the PowerShell commands for this configuration, as they make it a lot easier to manage multiple configurations. If you have multiple remote sites, having to set up and tear down each site based on workload demand is not scalable when configured through the console.

Summary

We have seen how, by combining the DirectAccess and Site-to-Site VPN functionalities, we are now able to use one single box to provide all remote access features. With virtual machine live migration options, you can move any workload from your corporate network to the cloud network and back over the S2S connection and keep the same names for the servers. This way, clients from any location can access your applications in the same way as they would access them if they were on the corporate network.

Resources for Article:

Further resources on this subject:
Creating and managing user accounts in Microsoft Windows SBS 2011 [Article]
Disaster Recovery for Hyper-V [Article]
Windows 8 and Windows Server 2012 Modules and Cmdlets [Article]

SQL Server and PowerShell Basic Tasks

Packt
07 Jan 2013
6 min read
(For more resources related to this topic, see here.)

Listing SQL Server instances

In this recipe, we will list all SQL Server instances in the local network.

Getting ready

Log in to the server that has your SQL Server development instance, as an administrator.

How to do it...

Open the PowerShell console by going to Start | Accessories | Windows PowerShell | Windows PowerShell ISE.

Let's use the Start-Service cmdlet to start SQLBrowser:

Import-Module SQLPS -DisableNameChecking

#sql browser must be installed and running
Start-Service "SQLBrowser"

Next, you need to create a ManagedComputer object to get access to instances. Type the following script and run it:

$instanceName = "KERRIGAN"
$managedComputer = New-Object 'Microsoft.SqlServer.Management.Smo.Wmi.ManagedComputer' $instanceName

#list server instances
$managedComputer.ServerInstances

Your result should look similar to the one shown in the following screenshot:

Note that $managedComputer.ServerInstances gives you not only instance names, but also additional properties such as ServerProtocols, Urn, State, and so on.

Confirm that these are the same instances you see in Management Studio. Open up Management Studio. Go to Connect | Database Engine. In the Server Name drop-down, click on Browse for More. Select the Network Servers tab, and check the instances listed. Your screen should look similar to this:

How it works...

All services in a Windows operating system are exposed and accessible using Windows Management Instrumentation (WMI). WMI is Microsoft's framework for listing, setting, and configuring any Microsoft-related resource. This framework follows Web-based Enterprise Management (WBEM). Distributed Management Task Force, Inc. defines WBEM as follows (http://www.dmtf.org/standards/wbem):

a set of management and internet standard technologies developed to unify the management of distributed computing environments. WBEM provides the ability for the industry to deliver a well-integrated set of standard-based management tools, facilitating the exchange of data across otherwise disparate technologies and platforms.

In order to access SQL Server WMI-related objects, you can create a WMI ManagedComputer instance:

$managedComputer = New-Object 'Microsoft.SqlServer.Management.Smo.Wmi.ManagedComputer' $instanceName

The ManagedComputer object has access to a ServerInstances property, which in turn lists all available instances in the local network. These instances, however, are only identifiable if the SQL Server Browser service is running. SQL Server Browser is a Windows service that can provide information on installed instances in a box. You need to start this service if you want to list the SQL Server-related services.

There's more...

An alternative to using the ManagedComputer object is using the System.Data.Sql.SqlDataSourceEnumerator class to list all the SQL Server instances in the local network, thus:

[System.Data.Sql.SqlDataSourceEnumerator]::Instance.GetDataSources() |
Select ServerName, InstanceName, Version |
Format-Table -AutoSize

When you execute this, your result should look similar to the following screenshot:

Yet another way to get a handle to the SQL Server WMI object is by using the Get-WmiObject cmdlet. This will not, however, expose exactly the same properties exposed by the Microsoft.SqlServer.Management.Smo.Wmi.ManagedComputer object.
Yet another way to get a handle to the SQL Server WMI object is by using the Get-WmiObject cmdlet. This will not, however, expose exactly the same properties exposed by the Microsoft.SqlServer.Management.Smo.Wmi.ManagedComputer object.

To do this, you will first need to discover what namespace is available in your environment, thus:

$hostName = "KERRIGAN"
$namespace = Get-WmiObject -ComputerName $hostName -Namespace root\Microsoft\SQLServer -Class "__NAMESPACE" |
Where Name -Like "ComputerManagement*"

If you are using PowerShell V2, you will have to change the Where cmdlet usage to use the curly braces ({}) and the $_ variable, thus:

Where {$_.Name -Like "ComputerManagement*"}

For SQL Server 2012, this value is:

ROOT\Microsoft\SQLServer\ComputerManagement11

Once you have the namespace, you can use this value with Get-WmiObject to retrieve the instances. One property we can use to filter is SqlServiceType. According to MSDN (http://msdn.microsoft.com/en-us/library/ms179591.aspx), the following are the values of SqlServiceType:

SqlServiceType  Description
1               SQL Server service
2               SQL Server Agent service
3               Full-text Search Engine service
4               Integration Services service
5               Analysis Services service
6               Reporting Services service
7               SQL Server Browser service

Thus, to retrieve the SQL Server instances, you need to filter for the SQL Server service, or SQLServiceType = 1:

Get-WmiObject -ComputerName $hostName `
-Namespace "$($namespace.__NAMESPACE)\$($namespace.Name)" `
-Class SqlService |
Where SQLServiceType -eq 1 |
Select ServiceName, DisplayName, SQLServiceType |
Format-Table -AutoSize

If you are using PowerShell V2, you will have to change the Where cmdlet usage to use the curly braces ({}) and the $_ variable:

Where {$_.SQLServiceType -eq 1}

Your result should look similar to the following screenshot.

Discovering SQL Server services

In this recipe, we enumerate all SQL Server services and list their status.

Getting ready

Check which SQL Server services are installed in your instance. Go to Start | Run and type services.msc. You should see a screen similar to this:

How to do it...

Let's assume you are running this script on the server box.

Open the PowerShell console by going to Start | Accessories | Windows PowerShell | Windows PowerShell ISE.

Add the following code and execute it:

Import-Module SQLPS
#replace KERRIGAN with your instance name
$instanceName = "KERRIGAN"
$managedComputer = New-Object 'Microsoft.SqlServer.Management.Smo.Wmi.ManagedComputer' $instanceName
#list services
$managedComputer.Services |
Select Name, Type, Status, DisplayName |
Format-Table -AutoSize

Your result will look similar to the one shown in the following screenshot. Items listed on your screen will vary depending on the features installed and running in your instance.

Confirm that these are the services that exist on your server. Check your services window.

How it works...

Services that are installed on a system can be queried using WMI. Specific services for SQL Server are exposed through SMO's WMI ManagedComputer object. Some of the exposed properties include:

ClientProtocols
ConnectionSettings
ServerAliases
ServerInstances
Services

There's more...

An alternative way to get SQL Server-related services is by using Get-WmiObject. We will need to pass in the hostname as well as the SQL Server WMI provider for the Computer Management namespace. For SQL Server 2012, this value is:

ROOT\Microsoft\SQLServer\ComputerManagement11

The script to retrieve the services is provided in the following code. Note that we are dynamically composing the WMI namespace here.
$hostName = "KERRIGAN" $namespace = Get-WMIObject -ComputerName $hostName -NameSpace root MicrosoftSQLServer -Class "__NAMESPACE" | Where Name -Like "ComputerManagement*" Get-WmiObject -ComputerName $hostname -Namespace "$($namespace.__ NAMESPACE)$($namespace.Name)" -Class SqlService | Select ServiceName Yet another alternative but less accurate way of listing possible SQL Server-related services is the following snippet of code: #alterative - but less accurate Get-Service *SQL* It uses the Get-Service cmdlet and filters based on the service name. It is less accurate because this cmdlet grabs all processes that have SQL in the name but may not necessarily be SQL Server-related. For example, if you have MySQL installed, that will get picked up as a process. Conversely, this cmdlet will not pick up SQL Server-related services that do not have SQL in the name, such as ReportServer.

article-image-weblogic-security-realm
Packt
03 Jan 2013
7 min read
Save for later

WebLogic Security Realm

Packt
03 Jan 2013
7 min read
(For more resources related to this topic, see here.)

Configuration of local LDAP server: user/roles/lockout

The simplest way to configure your security realm is through the WebLogic Administration Console; you can find everything about security under the Security Realms section of the main tree, where the default configuration called myrealm is placed. Under Security Realms, we have a preconfigured subset of Users, Groups, Authentication methods, Role Mapping, Credential Mapping providers, and some other security settings. You can configure many security realms, but only one will be active. In the myrealm section, we find all security parameters of the internal LDAP server configuration, including users and groups.

Consider this: Oracle states that the embedded WebLogic LDAP server works well with fewer than 10,000 users; for more users, consider using a different LDAP server and Authentication Provider, for example, an Active Directory server.

Users and groups

Here you can add and configure internal users and internal groups. A user is an entity that can be authenticated and used to protect our application resources. A group is an aggregation of users who usually have something in common, such as a subset of permissions and authorizations.

Users section

The console path for the Users section is as follows: click on Security Realms | myrealm | Users and Groups | Users.

In this section, by default, you will find your administrator account, used to log in to the WebLogic Administration Console and configured in the wizard during the installation phase; you can also create other users (note that the names are case insensitive) and set the following settings:

User Description: An internal string description tag
User Password: The user password, subject to some rules
View User Attributes: Some user attributes
Associate groups: Groups predefined in the Groups section

Please take care to preserve the integrity of the administrative user created in the installation configuration wizard; this user is vital for the WebLogic Server startup process. Don't remove this user unless you have advanced knowledge of what you are doing and how to roll back changes.

Take care also to change the admin user's password after the installation phase; if you use the automatic startup process without providing a user and password (required when you need to start the admin server as an OS service, without prompting any interactive request), you will need to reconfigure the credentials file to start up the admin server at boot. The following file needs to be changed:

$DOMAIN_HOME/servers/AdminServer/security/boot.properties

username=weblogic
password=weblogicpassword

After the first boot, the WebLogic admin server will encrypt this file with its internal encryption method.

Groups section

The console path for the Groups section is as follows: Security Realms | myrealm | Users and Groups | Groups.

In this section, by default, you will find some groups used to profile user grants (only the Administrators and Oracle System groups are populated); their names are case insensitive. Define new groups before creating a user to associate with them. The most important groups are as follows:

Administrators: This is the most powerful group, which can do everything in the WebLogic environment. Do not add plenty of people to it, otherwise you will have too many users with the power to modify your server configuration.
Deployers: This group can manage applications and resources (for example, JDBC, web services) and is very appropriate for the operations team that needs to deploy and update different versions of applications often during the day.
Monitors: This group provides read-only access to WebLogic and is convenient for monitoring WebLogic resources and status.
Operators: This group grants the privilege to stop, start, and resume WebLogic nodes.

All users without an associated group are assigned the Anonymous role. In this case, the implicit group (not present in the list) will be the everyone group.

Security role condition

The console paths for Roles and Policies are as follows:

Go to Security Realms | myrealm | Users and Groups | Realm Roles | Realm Policies | Roles
Go to Security Realms | myrealm | Users and Groups | Realm Roles | Realm Policies | Policies

In WebLogic, you can configure advanced collections of rules to dynamically grant or deny access through the role security configuration; all conditions need to be true if you want to grant a security role. Several conditions are available in WebLogic role mapping, which we will now explore in the next sections.

Basic

The available options are as follows:

User: This option adds the user to a specific role if his username matches the specified string
Group: This option adds the specified group to the role in the same way as the previous rule
Server is in development mode: This option adds the user or group to a role if the server is started in development mode
Allow access to everyone: This option adds all users and groups to the role
Deny access to everyone: This option rejects all users from being in the role

Date and time-based

When used, this role condition configures a rule based on a date or a time (between, after, before, and specified) to grant a role assignment.

Context element

The server retrieves information from the ContextHandler object and allows you to define role conditions based on the values of HTTP servlet request attributes, HTTP session attributes, and EJB method parameters.

User lockout

The console path for User Lockout is Security Realms | myrealm | User Lockout.

User Lockout is enabled by default; this process prevents user intrusion and dictionary attacks. It also improves server security and lets you configure policies to lock your locally configured users. This option is applied globally to any configured security provider. In this section, you can define the maximum number of consecutive invalid login attempts that can occur before a user's account is locked out, and how long the lock lasts. After that period, the account is automatically re-enabled. If you are using an Authentication Provider that has its own mechanism for protecting user accounts, disable the Lockout Enabled option.

When a user is locked, you can find a message similar to the following in the server logs:

<Apr 6, 2012 11:10:00 AM CEST> <Notice> <Security> <BEA-090078> <User Test in security realm myrealm has had 5 invalid login attempts, locking account for 30 minutes.>

Unlocking user

The result of the lockout settings is a blocked user; if you need to unlock the user immediately, go to the section named after your domain (as created in the wizard installation phase) in the left pane, under the Security section. Here, you can find the Unlock User tab, where you can specify the username to be re-enabled. Remember to click on the Lock & Edit button before you make any changes.
When you manually unlock a user, you can find a message similar to the following in the server logs:

... .<1333703687507> <BEA-090022> <Explicitly unlocked, user Test.>

Summary

In this article, we focused on the key steps to configure the WebLogic security realm and protect application resources in a fast and easy way.

Resources for Article:

Further resources on this subject:

Oracle Enterprise Manager Key Concepts and Subsystems [Article]
Configuring and Deploying the EJB 3.0 Entity in WebLogic Server [Article]
Developing an EJB 3.0 entity in WebLogic Server [Article]