
How-To Tutorials - CMS and E-Commerce

830 Articles

Creating a New iOS Social Project

Packt
08 Oct 2013
8 min read
Creating a New iOS Social Project

In this article, by Giuseppe Macri, author of Integrating Facebook iOS SDK with Your Application, we start our coding journey: we are going to build our social application from the ground up. In this article we will learn about:

- Creating a Facebook App ID: This is a key used with the APIs to communicate with the Facebook Platform.
- Downloading the Facebook SDK: The iOS SDK can be downloaded from two different channels. We will look into both of them.
- Creating a new Xcode project: A brief introduction on how to create a new Xcode project and a description of the IDE environment.
- Importing the Facebook iOS SDK into our Xcode project: A step-by-step walkthrough of importing the Facebook SDK into our Xcode project.
- Getting familiar with Storyboard to build a better interface: A brief introduction to the Apple tool used to build our application interface.

Creating a Facebook App ID

In order to communicate with the Facebook Platform using their SDK, we need an identifier for our application. This identifier, also known as the Facebook App ID, will give us access to the Platform; at the same time, we will be able to collect a lot of information about its usage, impressions, and ads.

To obtain a Facebook App ID, we need a Facebook account. If you don't have one, you can create a Facebook account via the sign-up form at https://www.facebook.com, shown in the previous screenshot. Fill out all the fields and you will be able to access the Facebook Developer Portal.

Once we are logged into Facebook, we need to visit the Developer Portal. You can find it at https://developers.facebook.com/. I have already mentioned the important role the Developer Portal plays in developing our social application. The previous screenshot shows the Facebook Developer Portal; the main section, the top part, is dedicated to the current SDKs. On the top blue bar, click on the Apps link, and it will redirect us to the Facebook App Dashboard, shown in the previous screenshot. To the left, we have a list of apps; in the center of the page, we can see the details of the currently selected app from our list. The page shows the application's settings and analytics (Insights).

In order to create a new Facebook App ID, click on Create New App in the top-right part of the App Dashboard. The previous screenshot shows the first step in creating a Facebook App ID. When providing the App Name, be sure the name does not already exist or violate any copyright laws; otherwise, Facebook will remove your app. App Namespace is something we need if we want to define custom objects and/or actions in the Open Graph structure; the App Namespace topic is not part of this book. Web hosting is really useful when creating a social web application, and Facebook, in partnership with other providers, can create web hosting for us if needed. This part is not going to be discussed in this book; therefore, do not check this option for your application. Once all the information is provided, fill out the form and move forward to the next step.

At the top of the page, we can see both the App ID and the App Secret. These are the most important pieces of information about our new social application. The App ID is a piece of information that we can share, unlike the App Secret. At the center of our new Facebook Application page, we have the basic information fields.
Do not worry about Namespace, App Domains, and Hosting URL; these fields are for web applications. Sandbox Mode only allows developers to use the current application; developers are specified through the Developer Roles link on the left sidebar.

Moving down, select the type of app. For our goal, select Native iOS App. You can select multiple types and create multiplatform social applications. Once you have checked Native iOS App, you will be prompted with the following form. The only field we need to provide for now is the Bundle ID. The Bundle ID is related to the Xcode settings; be sure that the Facebook Bundle ID matches our Xcode social app's bundle identifier. The format for the bundle identifier is always something like com.MyCompany.MyApp. The iPhone/iPad App Store IDs are the App Store identifiers of your application, if you have already published your app in the App Store. If you don't provide any of them, you will receive a warning message after you save your changes; however, don't worry, our new App ID is now ready to be used. Save your changes and get ready to start our development journey.

Downloading the Facebook iOS SDK

The iOS Facebook SDK can be downloaded through two different channels:

- Facebook Developer Portal: For downloading the installation package
- GitHub: For downloading the SDK source code

Using the Facebook Developer Portal, we can download the iOS SDK as an installation package. Visit https://developers.facebook.com/ios/ as shown in the following screenshot and click on Download the SDK to download the installation package. The package, once installed, will create a new FacebookSDK folder within our Documents folder. The previous screenshot shows the content of the iOS SDK installation package. Here, we can see four elements:

- FacebookSDK.framework: The framework that we will import into our Xcode social project
- LICENSE: Information about licensing and usage of the framework
- README: All the necessary information about the framework installation
- Samples: A useful set of sample projects that use the iOS framework's features

With the installation package, we only get the compiled files to use, with no original source code. It is possible to download the source code using the GitHub channel. To clone the Git repo, you will need a Git client, either terminal-based or GUI. The iOS SDK framework Git repo is located at https://github.com/facebook/facebook-ios-sdk.git. I prefer the terminal client, which I use in the following command:

git clone https://github.com/facebook/facebook-ios-sdk.git

After we have cloned the repo, the target folder will look like the following screenshot. Two new elements are present in this repo: src and scripts. The src folder contains the framework source code that needs to be compiled, and the scripts folder has all the scripts needed to compile the source code. Using the GitHub version allows us to keep the framework in our social application up to date, but for the scope of this book, we will be using the installation package.

Creating a new Xcode project

We created a Facebook App ID and downloaded the iOS Facebook SDK. It's time for us to start our social application using Xcode. The application will show the welcome dialog if Show this window when Xcode launches is enabled; choose the Create a new Xcode project option. If the welcome dialog is disabled, navigate to File | New | Project….
Choosing the type of project to work with is the next step, as shown in the following screenshot. The bar on the left defines whether the project targets a desktop or a mobile device. Navigate to iOS | Application and choose the Single View Application project type. The previous screenshot shows our new project's details. Provide the following information for your new project:

- Product Name: The name of our application.
- Organization Name: I strongly recommend filling out this part even if you don't belong to an organization, because this field will be part of our bundle identifier.
- Company Identifier: It is optional, but we should definitely fill it out to respect the best-practice format for the bundle identifier.
- Class Prefix: This prefix will be prepended to every class we create in our project.
- Devices: We can select the target device of our application; in this case it is iPhone, but we could also have chosen iPad or Universal.
- Use Storyboards: We are going to use storyboards to create the user interface for our application.
- Use Automatic Reference Counting: With this enabled, the compiler inserts the retain and release calls for us, so we don't manage object memory manually (Objective-C garbage collection is not involved).
- Include Unit Tests: If it is selected, Xcode will also create a separate project target to unit-test our app; this is not part of this book.

Save the new project. I strongly recommend checking the Create a local git repository for this project option in order to keep track of changes. Once the project is under version control, we can also decide to use GitHub as the remote host to store our source code.


Planning Your Store

Packt
07 Oct 2013
11 min read
Defining the catalogue

The type of products you are selling will determine the structure of your store. Different types of products will have different requirements in terms of the information presented to the customer, and the data that you will need to collect in order to fulfill an order.

Base product definition

Every product needs to have the following fields, which are added by default:

- Title
- Stock Keeping Unit (SKU)
- Price (in the default store currency)
- Status (a flag indicating if the product is live on the store)

This is the minimum you need to define a product in Drupal Commerce—everything else is customized for your store. You can define multiple Product Types (Product Entity Bundles), which can contain different fields depending on your requirements.

Physical products

If you are dealing with physical products, such as books, CDs, or widgets, you may want to consider these additional fields:

- Product images
- Description
- Size
- Weight
- Artist/Designer/Author
- Color

You may want to consider setting up multiple Product Types for your store. For example, if you are selling CDs, you may want to have a field for Artist, which would not be relevant for a T-shirt (where Designer may be a more appropriate field). Whenever you imagine having distinct pieces of data available, adding them as individual fields is well worth doing at the planning stage so that you can use them for detailed searching and filtering later.

Digital downloads

If you are selling a digital product such as music or e-books, you will need additional fields to contain the actual downloadable file. You may also want to consider including:

- Cover image
- Description
- Author/Artist
- Publication date
- Permitted number of downloads

Tickets

Selling tickets is a slightly more complex scenario, since there is usually a related event associated with the product. You may want to consider including:

- Related event (which would include date, venue, and so on)
- Ticket Type / Level / Seat Type

Content access and subscriptions

Selling content access and subscriptions through Drupal Commerce usually requires associating the product with a Drupal role. The customer is buying membership of the role, which in turn allows them to see content that would usually be restricted. You may want to consider including:

- Associated role(s)
- Duration of membership
- Initial cost (for example, first month free)
- Renewal cost (for example, £10/month)

Customizing products

The next consideration is whether products can be customized at the point of purchase. Some common examples of this are:

- Specifying size
- Specifying color
- Adding a personal message (for example, embossing)
- Selecting a specific seat (in the event example)
- Selecting a subscription duration
- Specifying the language version of an e-book
- Gift wrapping or gift messaging

It is important to understand what additional user input you will need from the customer to fulfill the order, over and above the SKU and quantity. When looking at these options, also consider whether the price changes depending on the options that the customer selects. For example:

- Larger sizes cost more than smaller sizes
- A premium for the "red" color choice
- Extra cost for adding an embossed message
- Different pricing for different seating levels
- A monthly subscription is cheaper if you commit to a longer duration

Classifying products

Now that you have defined your Product Types, the next step is to consider the classification of products using Drupal's in-built Taxonomy system.
A basic store will usually have a catalog taxonomy vocabulary where you can allocate a product to one or more catalog sections, such as books, CDs, clothing, and so on. The taxonomy can also be hierarchical; however, individual vocabularies for the classification of your products are often more workable, especially when providing the customer with a faceted search or filtering facility later. The following are examples of common taxonomy vocabularies:

- Author/Artist/Designer
- Color
- Size
- Genre
- Manufacturer/Brand

It is considered best practice to define a taxonomy vocabulary rather than have a simple free text field, as this provides consistency during data entry. For example, a free text field for size may end up being populated with S, Small, and Sm, all meaning the same thing. A dropdown taxonomy selector would ensure that the value entered is the same for every product. Do not be tempted to use List type fields to provide dropdown menus of choices. List fields are necessarily the preserve of the developer, and using them excludes the less technical site owner or administrator from managing them.

Pricing

Drupal Commerce has a powerful pricing engine, which calculates the actual selling price for the customer depending on one or more predefined rules. This gives enormous flexibility in planning your pricing strategy.

Currency

Drupal Commerce allows you to specify a default currency for the store, but also allows you to enter multiple price fields or calculate a different price based on other criteria, such as the preferred currency of the customer. If you are going to offer multiple currencies, you need to consider how the currency exchange will work. Do you want to enter a set price for each product and currency you offer, or a base price in the default currency and calculate the other currencies based on a conversion rate? If you use a conversion rate, how often is it updated?

Variable pricing

Prices do not have to be fixed. Consider scenarios where the prices for your store will vary over time, or will depend on other factors such as volume-based discounts. Will some preferred customers get a special price deal on one or more products?

Customers

You cannot complete an order without a customer, and it is important to consider all of their needs during the planning process. By default, a customer profile in Drupal Commerce contains an address type field which works to the Name and Address Standard (xNAL) format, collecting international addresses in a standard way. However, you may want to extend this profile type to collect more information about the customer, for example:

- Telephone number
- Delivery instructions
- E-mail opt-in permission

Do any of the following apply?

- Is the store open to the public, or open by invitation only?
- Do customers have to register before they can purchase?
- Do customers have to enter an e-mail address in order to purchase?
- Is there a geographical limit to where products can be sold/shipped?
- Can a customer access their account online?
- Can a customer cancel an order once it is placed? What are the time limits on this?
- Can a customer track the progress of their order?

Taxes

Many stores are subject to Sales tax or Value Added Tax (VAT) on products sold. However, these taxes often vary depending on the type of product sold and the final destination of the physical goods. During your planning you should consider the following:

- What are the sales tax / VAT rules for the store?
- Are there different tax rules depending on the shipping destination?
- Are there different tax rules depending on the type of product?

If you are in a situation where different types of products in your store will incur different rates of tax, then it is a very good idea to set up different Product Types so that it's easy to distinguish between them. For example, in the UK, physical books are zero-rated for VAT, whereas the same book in digital format will have 20% VAT added.

Payments

Drupal Commerce can connect to many different payment gateways in order to create a transaction for an order. While many of the popular payment gateways, such as PayPal and Sage Pay, have fully functional payment gateway modules on Drupal.org, it's worth checking whether the one you want is available, because creating a new one is no small undertaking. The following should also be considered:

- Is there a minimum spend limit?
- Will there be multiple payment options?
- Are there surcharges for certain payment types?
- Will there be account customers that do not have to enter a payment card?
- How will a customer be refunded if they cancel or return their order?

Shipping

Not every product will require shipping support, but for physical products, shipping can be a complex area. Even a simple product store can have complex shipping costs based on factors such as weight, destination, total spend, and special offers. Ensure the following points are considered during your planning:

- Is shipping required?
- How is the cost calculated? By value/weight/destination?
- Are there geographical restrictions?
- Is express delivery an option?
- Can the customer track their order?

Stock

With physical products and some virtual products such as event tickets, stock control may be a requirement. Stock control is a complex area and beyond the scope of this book, but the following questions will help uncover the requirements:

- Are stock levels managed in another system, for example, MRP?
- If the business has other sales channels, is there dedicated stock for the online store?
- When should stock levels be updated (at the point of adding to the cart or at the point of completing the order)?
- How long should stock be reserved?
- What happens when a product is out of stock?
- Can a customer order an out-of-stock product (back order)?
- What happens if a product goes out of stock during the customer checkout process?
- If stock is controlled by an external system, how often should stock levels be updated in the e-store?

Legal compliance

It is important to understand the legal requirements of the country where you operate your store. It is beyond the scope of this book to detail the legal requirements of every country, but some examples of e-commerce regulation that you should research and understand are included here:

- PCI-DSS Compliance—Worldwide
- The Privacy and Electronic Communications (EC Directive) (also known as the EU cookie law)—European Union
- Distance Selling Regulations—UK

Customer communication

Once the customer has placed their order, how much communication will there be? A standard expectation of the customer will be to receive a notification that their order has been placed, but how much information should that e-mail contain? Should the e-mail be plain text or graphical? Does the customer receive an additional e-mail when the order is shipped? If the product has a long lead time, should the customer receive interim updates? What communication should take place if a customer cancels their order?

Back office

In order for the store to run efficiently, it is important to consider the requirements of the back office system.
This will often be managed by a different group of people to those specifying the e-store. Identify the different types of users involved in the order fulfillment process. These roles may include:

- Sales order processing
- Warehouse and order handling
- Customer service for order enquiries
- Product managers

These roles may all have different information available to them when trying to locate the order or product they need, so it's important for the interface to cater to different scenarios:

- Does the website need to integrate with a third-party system for management of orders?
- How are order status codes updated on the website so that customers can track progress? In a batch, manually, or automatically?

User experience

How will the customer find the product that they are looking for? Well-structured navigation? Search by SKU? Free text search? Faceted search?

The source of product data

When you are creating a store with more than a trivial number of products, you will probably want to work on a method of mass importing the product data. Find out where the product data will be coming from, and in what format it will be delivered. You may want to define your Product Types taking into account the format of the data coming in—especially if the incoming data format is fixed. You may also want to define different methods of importing taxonomy terms from the supplied data.

Summary

Once you have gone through all of these checklists with the business stakeholders, you should have enough information to start your Drupal Commerce build. Drupal Commerce is very flexible, but it is crucial that you understand the outcome that you are trying to achieve before you start installing modules and setting up Product Types.

Resources for Article:

Further resources on this subject:

- Drupal Web Services: Twitter and Drupal [Article]
- Introduction to Drupal Web Services [Article]
- Drupal Site Configuration: Performance, Maintenance, Logging and Errors and Reports [Article]


Getting Started with OMNeT++

Packt
30 Sep 2013
5 min read
(For more resources related to this topic, see here.)

What this book will cover

This book will show you how you can get OMNeT++ up and running on your Windows or Linux operating system. It will then take you through the components that make up an OMNeT++ network simulation, including models written in the NED (Network Description) language, initialization files, C++ source files, arrays, and queues, and then configuring and running a simulation. The book will show you how these components make up a simulation using different examples, which can all be found online. At the end of the book, I will focus on a method of debugging your network simulation using a particular type of data visualization known as a sequence chart, and what the visualization means.

What is OMNeT++?

OMNeT++ stands for Objective Modular Network Testbed in C++. It is a component-based simulation library written in C++ and designed to simulate communication networks. OMNeT++ is not a network simulator itself, but a framework that allows you to create your own network simulations.

The need for simulation

Understanding the need for simulation is a big factor in deciding if this book is for you. Have a look at this comparison of a real network versus a network simulation:

- A real network: The cost of all the hardware, servers, switches, and so on has to be borne. A network simulation: Only the cost of a single standalone machine with OMNeT++ installed (which is free).
- A real network: It takes a lot of time to set up the big specialist networks used for business or academia. A network simulation: It takes time to learn how to create simulations, though once you know how it's done, it's much easier to create new ones.
- A real network: Making changes to a pre-existing network takes planning, and if a change is made in error, it may cause the network to fail. A network simulation: Making changes to a simulated model of a real pre-existing network doesn't pose any risk, and the outcome of the simulation can be analyzed to determine how the real network will be affected.
- A real network: You get the real thing, so what you observe from the real network is actually happening. A network simulation: If there is a bug in the simulation software, it could cause the simulation to act incorrectly.

As you can see, there are benefits to using both real networks and network simulations when creating and testing your network. The point I want to convey, though, is that network simulations can make network design cheaper and less risky.

Examples of simulation in the industry

After looking into different industries, we can see that there is obviously a massive need for simulation, where the aim is to solve real-world problems ranging from how a ticketing system should work in a hospital to what to do when a natural disaster strikes. Simulation allows us to forecast potential problems without having to first live through those problems.
Different uses of simulation in the industry are as follows:

- Manufacturing: To show how labor management will work, such as worker efficiency and how rotas and various other factors will affect production; and to show what happens when a component fails on a production line.
- Crowd management: To show the length of queues at theme parks and how that will affect business; and to show how people will get themselves seated at an event in a stadium.
- Airports: To show the effects of flight delays on air-traffic control; and to show how many bags can be processed at any one time on a baggage handling system, and what happens when it fails.
- Weather forecasting: To predict forthcoming weather, and to predict the effect of climate change on the weather.

That's just to outline a few, but hopefully you can see how and where simulation is useful. Simulating your network will allow you to test the network against a myriad of network attacks, and to test all the constraints of the network without damaging it in real life.

What you will learn

After reading this book you will know the following things:

- How to get a free copy of OMNeT++
- How to compile and install OMNeT++ on Windows and Linux
- What makes up an OMNeT++ network simulation
- How to create network topologies with NED
- How to create your own network simulations using the OMNeT++ IDE
- How to use pre-existing libraries in order to make robust and realistic network simulations without reinventing the wheel

Learning how to create and run network simulations is definitely a big goal of the book. Another goal of this book is to teach you how you can learn from the simulations you create. That's why this book will also show you how to set up your simulations and collect data about the events that occur during the runtime of the simulation. Once you have collected data from the simulation, you will learn how to debug your network by using the data visualization tools that come with OMNeT++. Then you will be able to grasp what you learned from debugging the simulated network and apply it to the actual network you would like to create.

Summary

You should now know that this book is intended for people who want to get network simulations up and running with OMNeT++ as soon as possible. You'll know by now, roughly, what OMNeT++ is, the need for simulation, and therefore for OMNeT++. You'll also know what you can expect to learn from this book.

Resources for Article:

Further resources on this subject:

- Installing VirtualBox on Linux [Article]
- Fedora 8 — More than a Linux Distribution [Article]
- Linux Shell Scripting – various recipes to help you [Article]


Plugins and Extensions

Packt
30 Sep 2013
11 min read
(For more resources related to this topic, see here.)

In this modern world of JavaScript, Ext JS is the best JavaScript framework, including a vast collection of cross-browser utilities, UI widgets, charts, data object stores, and much more. When developing an application, we mostly look for the best functionality and components that the framework has to offer. But we usually face situations wherein the framework lacks the specific functionality or component that we need. Fortunately, Ext JS has a powerful class system that makes it easy to extend an existing functionality or component, or to build new ones altogether.

What is a plugin?

An Ext JS plugin is a class that is used to provide additional functionality to an existing component. Plugins must implement a method named init, which is called by the component at initialization time, at the beginning of the component's lifecycle, with the component passed in as the parameter. The destroy method is invoked by the owning component at the time of the component's destruction. We don't need to instantiate a plugin class ourselves; plugins are attached to a component using that component's plugins configuration option. Plugins are usable not only by the components to which they are attached, but also by all the subclasses derived from those components. We can also use multiple plugins in a single component, but we need to make sure that the plugins do not conflict with each other.

What is an extension?

An Ext JS extension is a derived class, or a subclass, of an existing Ext JS class, which is designed to allow the inclusion of additional features. An Ext JS extension is mostly used to add custom functionality or modify the behavior of an existing Ext JS class. An Ext JS extension can be as basic as a preconfigured Ext JS class, which simply supplies a set of default values for an existing class configuration. This type of extension is really helpful in situations where the required functionality is repeated in several places. Let us assume we have an application where several Ext JS windows have the same help button in the bottom bar. We can create an extension of the Ext JS window, add this help button to it, and then use this extension window without repeating the code for the button. The advantage is that we can easily maintain the code for the help button in one place and have any change reflected everywhere it is used.

Differences between an extension and a plugin

Ext JS extensions and plugins are used for the same purpose: they add extended functionality to Ext JS classes. But they mainly differ in how they are written and the reasons for which they are used. Ext JS extensions are subclasses of Ext JS classes. To use an extension, we need to instantiate it by creating an object. We can provide additional properties and functions, and can even override any parent member to change its behavior. Extensions are very tightly coupled to the classes from which they are derived. Ext JS extensions are mainly used when we need to modify the behavior of an existing class or component, or when we need to create a fully new class or component. Ext JS plugins are also Ext JS classes, but they include the init function. To use a plugin, we don't instantiate the class directly; instead, we register the plugin in the plugins configuration option of the component.
Once added, the plugin's options and functions become available to the component itself. Plugins are loosely coupled with the components they are plugged into, and they are easily detachable and interoperable with multiple components and derived components. Plugins are used when we need to add features to an existing component. As plugins must be attached to an existing component, creating a fully new component, as is done with extensions, is not something they are suited to.

Choosing the best option

When we need to enhance or change the functionality of an existing Ext JS component, we have several ways to do it, each of which has both advantages and disadvantages. Let us assume we need to develop an SMS text field with a simple piece of functionality: the text color changes to red whenever the text length exceeds the allocated length for a message, so the user can see that they are typing more than one message. This functionality can be implemented in three different ways in Ext JS, which are discussed in the following sections.

By configuring an existing class

We can choose to apply configuration to an existing class. For example, we can create a text field and provide the required SMS functionality within the listeners configuration, or we can attach event handlers after the text field is instantiated, using the on method. This is the easiest option when the same functionality is used in only a few places, but as soon as the functionality is repeated in several places or situations, code duplication arises.

By creating a subclass or an extension

By creating an extension, we can easily solve the problem discussed in the previous section. If we create an extension for the SMS text field by extending the Ext JS text field, we can use this extension in as many places as we need, and we can also create other extensions based on it. The code is centralized in this extension, and a change in one place is reflected in all the places where the extension is used. But there is a problem: if the same functionality is needed in other subclasses of the Ext JS text field, such as the Ext JS text area field, we can't reuse the SMS text field extension to take advantage of the SMS functionality. Also, consider a situation where there are two subclasses of a base class, each providing its own facility, and we want to use both features in a single class; that is not possible with this approach.

By creating a plugin

By creating a plugin, we gain the maximum reuse of code. A plugin written for one class is usable by the subclasses of that class, and we also have the flexibility to use multiple plugins in a single component. This is why, if we create a plugin for the SMS functionality, we can use the SMS plugin both in the text field and in the text area field; we can also use other plugins alongside this SMS plugin in the same class.

Building an Ext JS plugin

Let us start developing an Ext JS plugin. In this section we will develop a simple SMS plugin, targeting the Ext JS textareafield component. The feature we wish to provide for the SMS functionality is that it should show the number of characters and the number of messages at the bottom of the containing field. Also, the color of the message text should change in order to notify users whenever they exceed the allowed length for a message.
Here, in the following code, the SMS plugin class has been created within the Examples namespace of an Ext JS application:

Ext.define('Examples.plugin.Sms', {
    alias : 'plugin.sms',

    config : {
        perMessageLength : 160,
        defaultColor : '#000000',
        warningColor : '#ff0000'
    },

    constructor : function(cfg) {
        Ext.apply(this, cfg);
        this.callParent(arguments);
    },

    init : function(textField) {
        this.textField = textField;
        if (!textField.rendered) {
            textField.on('afterrender', this.handleAfterRender, this);
        } else {
            this.handleAfterRender();
        }
    },

    handleAfterRender : function() {
        this.textField.on({
            scope : this,
            change : this.handleChange
        });
        var dom = Ext.get(this.textField.bodyEl.dom);
        Ext.DomHelper.append(dom, {
            tag : 'div',
            cls : 'plugin-sms'
        });
    },

    handleChange : function(field, newValue) {
        if (newValue.length > this.getPerMessageLength()) {
            field.setFieldStyle('color:' + this.getWarningColor());
        } else {
            field.setFieldStyle('color:' + this.getDefaultColor());
        }
        this.updateMessageInfo(newValue.length);
    },

    updateMessageInfo : function(length) {
        var tpl = ['Characters: {length}<br/>', 'Messages: {messages}'].join('');
        var text = new Ext.XTemplate(tpl);
        var messages = parseInt(length / this.getPerMessageLength());
        if ((length / this.getPerMessageLength()) - messages > 0) {
            ++messages;
        }
        Ext.get(this.getInfoPanel()).update(text.apply({
            length : length,
            messages : messages
        }));
    },

    getInfoPanel : function() {
        return this.textField.el.select('.plugin-sms');
    }
});

In the preceding plugin class, you can see that we have defined the mandatory function called init. Within the init function, we check whether the component to which this plugin is attached has rendered or not, and call the handleAfterRender function either immediately or as soon as rendering has finished. Within that function, we register a handler so that when the change event fires on the textareafield component, the handleChange function of this class is executed; we also append an HTML <div> element, within handleAfterRender, where we want to show the message information for the character and message counters. The handleChange function is the handler that checks the message length in order to show the colored warning text, and it calls the updateMessageInfo function to update the message information text with the character count and the number of messages. Now we can easily add the plugin to a component:

{
    xtype : 'textareafield',
    plugins : ['sms']
}

Also, we can supply configuration options when we are inserting the plugin within the plugins configuration option, to override the default values, as follows:

plugins : [Ext.create('Examples.plugin.Sms', {
    perMessageLength : 20,
    defaultColor : '#0000ff',
    warningColor : '#00ff00'
})]

Building an Ext JS extension

Let us start developing an Ext JS extension. In this section we will develop an SMS extension that satisfies exactly the same requirements as the SMS plugin developed earlier. We already know that an Ext JS extension is a derived class of an existing Ext JS class; we are going to extend Ext JS's textarea field, which facilitates typing multiline text and provides event handling, rendering, and other functionality.
Here is the code where we have created the Extension class under the sms view, within the Examples namespace of an Ext JS application:

Ext.define('Examples.view.sms.Extension', {
    extend : 'Ext.form.field.TextArea',
    alias : 'widget.sms',

    config : {
        perMessageLength : 160,
        defaultColor : '#000000',
        warningColor : '#ff0000'
    },

    constructor : function(cfg) {
        Ext.apply(this, cfg);
        this.callParent(arguments);
    },

    afterRender : function() {
        this.on({
            scope : this,
            change : this.handleChange
        });
        var dom = Ext.get(this.bodyEl.dom);
        Ext.DomHelper.append(dom, {
            tag : 'div',
            cls : 'extension-sms'
        });
    },

    handleChange : function(field, newValue) {
        if (newValue.length > this.getPerMessageLength()) {
            field.setFieldStyle('color:' + this.getWarningColor());
        } else {
            field.setFieldStyle('color:' + this.getDefaultColor());
        }
        this.updateMessageInfo(newValue.length);
    },

    updateMessageInfo : function(length) {
        var tpl = ['Characters: {length}<br/>', 'Messages: {messages}'].join('');
        var text = new Ext.XTemplate(tpl);
        var messages = parseInt(length / this.getPerMessageLength());
        if ((length / this.getPerMessageLength()) - messages > 0) {
            ++messages;
        }
        Ext.get(this.getInfoPanel()).update(text.apply({
            length : length,
            messages : messages
        }));
    },

    getInfoPanel : function() {
        return this.el.select('.extension-sms');
    }
});

As seen in the preceding code, the extend keyword is used as a class property to extend the Ext.form.field.TextArea class in order to create the extension class. Within the afterRender event handler, we register a handler so that when the change event fires on the textarea field, the handleChange function of this class is executed, and we also create an HTML <div> element within this afterRender handler, where we want to show the message information for the character counter and message counter. From this point on, the logic that shows the warning, the character counter, and the message counter is the same as we used in the SMS plugin. Now we can easily create an instance of this extension:

Ext.create('Examples.view.sms.Extension');

Also, we can supply configuration options when we are creating the instance of this class to override the default values:

Ext.create('Examples.view.sms.Extension', {
    perMessageLength : 20,
    defaultColor : '#0000ff',
    warningColor : '#00ff00'
});

The following is the screenshot where we've used the SMS plugin and the extension. In the preceding screenshot we have created an Ext JS window and incorporated the SMS extension and the SMS plugin. As we have already discussed regarding the benefits of writing a plugin, we can use the SMS plugin not only with the text area field, but also with the text field.

Summary

We have learned from this article what a plugin and an extension are, the differences between the two, the facilities they offer, how to use them, and how to decide between an extension and a plugin for the needed functionality. In this article we've also developed a simple SMS plugin and an SMS extension.

Resources for Article:

Further resources on this subject:

- So, what is Ext JS? [Article]
- Ext JS 4: Working with the Grid Component [Article]
- Custom Data Readers in Ext JS [Article]


Managing content (Must know)

Packt
27 Sep 2013
8 min read
(For more resources related to this topic, see here.)

Getting ready

Content in Edublogs can take many different forms: posts, pages, uploaded media, and embedded media. The first step is to develop an understanding of what each of these types of content is, and how they fit into the Edublogs framework.

- Pages: Pages are generally static content, such as an About or a Frequently Asked Questions page.
- Posts: Posts are the content that is continually updated on a blog. When you write an article, it is referred to as a post.
- Media [uploaded]: Edublogs has a media manager that allows you to upload pictures, videos, audio files, and other files that readers can interact with or download.
- Media [embedded]: Embedded media is different from internal media in that it is not stored on your Edublogs account. If you record a video and upload it, the video resides on your website and is considered internal to that website. If you want to add a YouTube video, a Prezi presentation, a slideshow, or any content that actually resides on another website, that is considered embedding.

How to do it...

Posts and pages are very similar. When you click on the Pages link in the left navigation column, if you are just beginning, you will see an empty list or the Sample Page that Edublogs provides. Otherwise, this page will show a list of all of the pages that you have written, as shown in the following screenshot. Click on any column header (Title, Author, Comments, and Date) to sort the pages by that criterion.

A page can be any of several types: Published (anyone can see it), Draft, Private, Password Protected, or in the Trash. You can filter by those types as well, but you will only see the types of pages that you are currently using. For example, in the following screenshot, I have 3 Draft pages; if I had none, Drafts would not show as an option.

When you hover over a page, you are provided with several options, such as Edit, Quick Edit, Trash, and View:

- View: This option shows you the actual live page, the same way that a reader would see it.
- Trash: This deletes the page.
- Edit: This brings you back to the main editing screen, where you can change the actual body of the page.
- Quick Edit: This allows you to change some of the main options of the page: Title, Slug (the end of the URL used to access the page), Author, whether the page has a parent, and whether it should be published. The following screenshot demonstrates these options.

How it works...

Everything above about Pages also applies to Posts. Posts, though, have several additional options, and it's also more common to use the additional options to customize Posts than Pages. Right away, hovering over Posts shows two new links: Categories and Tags. These tools are optional, and serve the dual purpose of aiding the author by providing an organizational structure, and helping the reader to find posts more effectively. A Category is usually very general; on one of my educational blogs, I limit my categories to a few: technology integration, assessment, pedagogy, and lessons. If I happen to write a post that does not fit, I do not categorize it. Tags are becoming ubiquitous in many applications and operating systems. They provide an easy way to browse a store of information thematically. On my educational blog, I have over 160 tags. On one post about Facebook's new advertising system, I added the following tags: Digital Literacy, Facebook, Privacy.
Utilizing tags can help you to see trends in your writing. It also makes it much easier for new readers to find posts that interest them, and for regular readers to find old posts that they want to re-reference.

Let's take a look at some of the advanced features. When adding or editing a post, the following features are all located in the right-hand column:

- Publish: The Publish box is necessary any time you want to take your Post (or Page) out of the draft stage and allow readers to see it. Most new bloggers simply click on Publish/Update when they are done writing a Post, which works fine, but it is limited. People often find that there are certain times of day that result in higher readership; if you click on Edit next to Publish Immediately, you can choose a date and time to schedule the publication. In addition, the Visibility line allows you to set a Post as private, password protected, or always at the top of the page (if you have a post you particularly want to highlight, for example).
- Format: Most of the time, changing the format is not necessary, particularly if you run a normal, text-driven blog. However, different formats lend themselves to different types of content. For example, if publishing a picture as a Post, as is often done on the microblogging site Tumblr, choosing Image would format the post more effectively.
- Categories: Click on + Add New Category, or check any existing categories to append them to the Post.
- Tags: Type any tags that you want to use, separated by commas (such as writing, blogging, Edublogs).
- Featured Image: Uploading and choosing a featured image adds a thumbnail image, to provide a more engaging browsing experience for the viewer.

All of these features are optional, but they are useful for improving the experience, both for yourself and your readers.

There's more...

For most people, the heart of a blog is the actual writing that they do. Media helps to make the experience more memorable and engaging, as well as to illustrate a point more effectively than text would alone. Media is anything other than text that a user can interact with; primarily, it is video, audio, or pictures. As teachers know, not everyone learns best through a text-based medium; media is an important part of engaging readers, just as it is an important part of engaging students.

There are a few ways to get media into your posts. The first is through the Media Library. On a free account, space is limited to 32 MB, a relatively small amount; Pro accounts get 10 GB of space. Click on Media in the navigation menu on the left to bring up the library. This will show a list of your media, similar to that used for Posts and Pages. To add media, simply click on Add New and choose an image, audio file, or video from your computer. This will then be available for any post or page to use. The following screenshot shows the Media Library page.

If you are already in a post, you have even more options. Click on the Add Media button above the text editor, as shown in the following screenshot. Following are some of the options you have to embed media:

- Insert Media: This allows you to directly upload a file or choose one from the Media Library.
- Create Gallery: Creating a gallery allows you to create a set of images that users can browse through.
- Set Featured Image: As described above, this sets a thumbnail image representative of the post.
- Insert from URL: This allows you to insert an image by pasting in its direct URL.
Make sure you give attribution if you use someone else's image.

- Insert Embed Code: Embed code is extremely helpful. Many sites provide embed code (often referred to as share code) to allow people to post their content on other websites. One of the most common examples is adding a YouTube video to a post. The following screenshot is from the Share menu of a YouTube video. Copying the code provided and pasting it into the Insert Embed Code field will put the YouTube video right in the post, as shown in the following screenshot. This is much more effective than just providing a link, because readers can watch the video without ever having to leave the blog. Embedding is an Edublogs Pro feature only.

Utilizing media effectively can dramatically improve the experience for your readers.

Summary

This article on managing content provided details about managing different types of content, in the form of posts, pages, uploaded media, and embedded media. It covered features such as publish, format, categories, tags, and featured images.

Resources for Article:

Further resources on this subject:

- Customizing WordPress Settings for SEO [Article]
- Getting Started with WordPress 3 [Article]
- Dynamic Menus in WordPress [Article]


Creating Dynamic UI with Android Fragments

Packt
26 Sep 2013
2 min read
(For more resources related to this topic, see here.)

Many applications involve several screens of data that a user might want to browse or flip through to view each screen. As an example, think of an application where we list a catalogue of books, with each book in the catalogue appearing on a single screen. A book's screen contains an image, title, and description, like the following screenshot.

To view each book's information, the user needs to move to each screen. We could put a next button and a previous button on the screen, but a more natural action is for the user to use their thumb or finger to swipe the screen from one edge of the display to the other and have the screen with the next book's information slide into place, as represented in the following screenshot. This creates a very natural navigation experience and, honestly, is a more fun way to navigate through an application than using buttons (a minimal code sketch of such swipe navigation appears after this excerpt).

Summary

Fragments are the foundation of modern Android app development, allowing us to display multiple application screens within a single activity. Thanks to the flexibility provided by fragments, we can now incorporate rich navigation into our apps with relative ease. Using these rich navigation capabilities, we're able to create a more dynamic user interface experience that makes our apps more compelling and that users find more fun to work with.

Resources for Article:

Further resources on this subject:

- So, what is Spring for Android? [Article]
- Android Native Application API [Article]
- Animating Properties and Tweening Pages in Android 3-0 [Article]
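The excerpt above describes the swipe behavior only at a high level, so here is a minimal, hypothetical sketch, not taken from the book, of one common way to wire up swipe navigation between fragment screens using ViewPager and FragmentStatePagerAdapter from the Android support library of that era. The BookFragment stand-in and the BOOK_COUNT value are assumptions made purely for illustration.

import android.os.Bundle;
import android.support.v4.app.Fragment;
import android.support.v4.app.FragmentActivity;
import android.support.v4.app.FragmentStatePagerAdapter;
import android.support.v4.view.ViewPager;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import android.widget.TextView;

public class BookPagerActivity extends FragmentActivity {

    private static final int BOOK_COUNT = 10; // hypothetical catalogue size

    // Minimal stand-in for one book's screen; a real app would show the
    // book's image, title, and description here.
    public static class BookFragment extends Fragment {
        @Override
        public View onCreateView(LayoutInflater inflater, ViewGroup container,
                                 Bundle savedInstanceState) {
            TextView text = new TextView(getActivity());
            int position = getArguments().getInt("position");
            text.setText("Book #" + (position + 1));
            return text;
        }
    }

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // The ViewPager hosts one fragment per book and handles the swipe gesture itself.
        ViewPager pager = new ViewPager(this);
        pager.setId(1); // the pager needs a view id so the FragmentManager can tag its fragments
        setContentView(pager);

        pager.setAdapter(new FragmentStatePagerAdapter(getSupportFragmentManager()) {
            @Override
            public Fragment getItem(int position) {
                // Build the fragment for the book at this position.
                BookFragment fragment = new BookFragment();
                Bundle args = new Bundle();
                args.putInt("position", position);
                fragment.setArguments(args);
                return fragment;
            }

            @Override
            public int getCount() {
                return BOOK_COUNT;
            }
        });
    }
}

With this wiring, swiping left or right slides the next or previous book's screen into place, which is the navigation experience the excerpt describes.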

Self-service Business Intelligence, Creating Value from Data

Packt
20 Sep 2013
15 min read
(For more resources related to this topic, see here.)

Over the years, most businesses have spent a considerable amount of time, money, and effort in building databases, reporting systems, and Business Intelligence (BI) systems. IT often thinks that it is providing the necessary information to the business users for them to make the right decisions. However, when I meet the users they tell me a different story. Most often they say that they do not have the information they need to do their job, or that they have to spend a lot of time getting the relevant information. Many users state that they spend more time getting access to the data than understanding the information.

This divide between IT and business is very common; it causes a lot of frustration and can cost a lot of money, and it is a real issue that companies need to solve in order to be profitable in the future. Research shows that by 2015, companies that build a good information management system will be 20 percent more profitable compared to their peers. You can read the entire research publication at http://download.microsoft.com/download/7/B/8/7B8AC938-2928-4B65-B1B3-0B523DDFCDC7/Big%20Data%20Gartner%20information_management_in_the_21st%20Century.pdf.

So how can an organization avoid the pitfalls in business intelligence systems and create an effective way of working with information? This article will cover the following topics:

- Common user requirements related to BI
- Understanding how these requirements can be solved by Analysis Services
- An introduction to self-service reporting

Identifying common user requirements for a business intelligence system

In many cases, companies that struggle with information delivery do not have a dedicated reporting system or data warehouse. Instead, the users have access only to the operational reports provided by each line-of-business application. This is extremely troublesome for users who want to compare information from different systems. As an example, think of a salesperson who wants a report that shows the sales pipeline from the Customer Relationship Management (CRM) system together with the actual sales figures from the Enterprise Resource Planning (ERP) system.

Without a common reporting system, the users have to combine the information themselves with whatever tools are available to them. Most often this tool is Microsoft Excel. While Microsoft Excel is an application that can be used to effectively display information to the users, it is not the best system for data integration. To perform the steps of extracting, transforming, and loading (ETL) the data from the source systems, the users have to write tedious formulas and macros to clean the data before they can start comparing the numbers and taking actual decisions based on the information.

The lack of a dedicated reporting system can also cause trouble with the performance of the Online Transaction Processing (OLTP) system. When I worked in the SQL Server support group at Microsoft, we often had customers contacting us about performance issues caused by users running heavy reports directly on the production system. To solve this problem, many companies invest in a dedicated reporting system or a data warehouse. The purpose of this system is to provide a database customized for reporting, where the data can be transformed and combined once and for all from all source systems. The data warehouse also serves another purpose, and that is to act as the storage of historic data.
Many companies that have invested in a common reporting database or data warehouse still require a person with IT skills to create a report. The main reason for this is that the organizations that have invested in a reporting system have had the expert users define the requirements for the system. Expert users have totally different requirements than the majority of the users in the organization, and an expert tool is often very hard to learn. An expert tool that is too hard for the normal users puts a strain on the IT department, which has to produce all the reports. This results in the end users waiting for their reports for weeks and even months. One large corporation that I worked with had invested millions of dollars in a reporting solution, but to get a new report the users had to wait between nine and twelve months before they got the report in their hands. Imagine the frustration and the grief that waiting this long for the right information causes the end users.

To many users, business intelligence means simple reports with only the ability to filter data in a limited way. While simple reports such as the one in the preceding screenshot can provide valuable information, they do not give the users the possibility to examine the data in detail. The users cannot slice and dice the information, and they cannot drill down to the details if the aggregated level that the report shows is insufficient for decision making. If users would like to have these capabilities, they need to export the information into a tool that enables them to easily do so. In general, this means that the users bring the information into Excel to be able to pivot the information and add their own measures. This often results in a situation where there are thousands of Excel spreadsheets floating around in the organization, all with their own data, and with different formulas calculating the same measures.

When analyzing data, the data itself is the most important thing. But if you cannot understand the values, the data is of no benefit to you. Many users find that it is easier to understand information if it is presented in a way that they can consume efficiently. This means different things to different users. If you are a CEO, you probably want to consume aggregated information in a dashboard such as the one you can see in the following screenshot. On the other hand, if you are a controller, you want to see the numbers at a very detailed level that enables you to analyze the information; a controller needs to be able to find the root cause, which in most cases involves analyzing information at the transaction level. A sales representative probably does not want to analyze the information; instead, he or she would like a pre-canned report, filtered on customers and time, showing what goods the customers have bought in the past, and maybe some suggested products that could be recommended to the customers.

Creating a flexible reporting solution

What companies need is a way for the end users to access information in a user-friendly interface, where they can create their own analytical reports. Analytical reporting gives the user the ability to see trends, look at information on an aggregated level, and drill down to the detailed information with a single click. In most cases this will involve building a data warehouse of some kind, especially if you are going to reuse the information in several reports.
The main reason for creating a data warehouse is the ability to combine different sources into one infrastructure once. If you build reports that do the integration and cleaning of the data in the reporting layer, you will end up doing the same data modification tasks in every report. This is both tedious and error-prone, as the developer would have to repeat all the integration efforts in every report that needs to access the data. If you do it in the data warehouse, you can create an ETL program that moves the data and prepares it for the reports once, and all the reports can access this data.

A data warehouse is also beneficial from many other angles. With a data warehouse, you have the ability to offload the burden of running the reports from the transactional system, a system that is built mainly for high transaction rates at high speed, and not for providing summarized data in a report to the users. From a report authoring perspective, a data warehouse is also easier to work with. Consider the simple static report shown in the first screenshot. This report is built against a data warehouse that has been modeled using dimensional modeling. This means that the query used in the report is very simple compared to getting the information from a transactional system. In this case, the query is a join between six tables containing all the information that is available about dates, products, sales territories, and sales.

select
    f.SalesOrderNumber,
    s.EnglishProductSubcategoryName,
    SUM(f.OrderQuantity) as OrderQuantity,
    SUM(f.SalesAmount) as SalesAmount,
    SUM(f.TaxAmt) as TaxAmt
from FactInternetSales f
join DimProduct p on f.ProductKey = p.ProductKey
join DimProductSubcategory s on p.ProductSubcategoryKey = s.ProductSubcategoryKey
join DimProductCategory c on s.ProductCategoryKey = c.ProductCategoryKey
join DimDate d on f.OrderDateKey = d.DateKey
join DimSalesTerritory t on f.SalesTerritoryKey = t.SalesTerritoryKey
where c.EnglishProductCategoryName = @ProductCategory
  and d.CalendarYear = @Year
  and d.EnglishMonthName = @MonthName
  and t.SalesTerritoryCountry = @Country
group by f.SalesOrderNumber, s.EnglishProductSubcategoryName

You can download the example code files for all Packt books you have purchased from your account at http://www.packtpub.com. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.

The preceding query is included for illustrative purposes. As you can see, it is very simple to write for someone who is well versed in Transact-SQL. Compare this to getting all the information necessary to produce this report from the operational system, where the same information is spread across many more tables; it would be a daunting task. Even though the AdventureWorks sample database is very simple, we still need to query a lot of tables to get to the information. The following figure shows the tables from the OLTP system you would need to query to get the information available in the six tables of the data warehouse. Now imagine creating the same query against a real system; hundreds of tables could easily be involved in extracting the data that ends up in a simple data model used for sales reporting. As you can see clearly now, working against a model that has been optimized for reporting is much simpler when creating the reports. Even with a well-structured data warehouse, many users would struggle with writing the select query driving the report shown earlier.
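If you want to try the preceding warehouse query by hand, outside of the report, a minimal sketch is to run it from the shell with the sqlcmd utility. The server name, database name, and the literal values standing in for the report parameters below are assumptions for illustration only; adjust them to your own environment:

# Assumed server, database, and parameter values; adjust to your environment
sqlcmd -S localhost -d AdventureWorksDW2012 -E -Q "
select f.SalesOrderNumber, s.EnglishProductSubcategoryName,
       SUM(f.OrderQuantity) as OrderQuantity,
       SUM(f.SalesAmount) as SalesAmount,
       SUM(f.TaxAmt) as TaxAmt
from FactInternetSales f
join DimProduct p on f.ProductKey = p.ProductKey
join DimProductSubcategory s on p.ProductSubcategoryKey = s.ProductSubcategoryKey
join DimProductCategory c on s.ProductCategoryKey = c.ProductCategoryKey
join DimDate d on f.OrderDateKey = d.DateKey
join DimSalesTerritory t on f.SalesTerritoryKey = t.SalesTerritoryKey
where c.EnglishProductCategoryName = 'Bikes'
  and d.CalendarYear = 2008
  and d.EnglishMonthName = 'June'
  and t.SalesTerritoryCountry = 'United States'
group by f.SalesOrderNumber, s.EnglishProductSubcategoryName"

The -E switch uses Windows authentication; replace it with -U and -P if you connect with a SQL login.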
The users, in general, do not know SQL. They typically do not understand the database schema, since the table and column names usually consist of abbreviations that can be cryptic to the casual user. What if a user would like to change the report so that it shows data in a matrix, with the ability to drill down to lower levels? They would most probably need to contact IT. IT would need to rewrite the query and change the entire report layout, causing a delay between the need for the data and its availability. What is needed is a tool that enables the users to work with business attributes instead of tables and columns, with simple, understandable objects instead of a complex database engine. Fortunately for us, SQL Server contains this functionality; it is up to us database professionals to learn how to bring these capabilities to the business. That is what this article is all about: creating a flexible reporting solution that allows the end users to create their own reports.

I have assumed that you as the reader have knowledge of databases and are well versed with your data. What you will learn in this article is how to use a component of SQL Server 2012 called SQL Server Analysis Services to create a cube or semantic model that exposes data as simple business attributes, allowing the users to use different tools to create their own ad hoc reports. Think of the cube as a PivotTable spreadsheet in Microsoft Excel. From the user's perspective, they have full flexibility when analyzing the data. You can drag and drop whichever column you want into the rows, columns, or filter boxes. The PivotTable spreadsheet also summarizes the information depending on the different attributes added to it. The same capabilities are provided through the semantic model or the cube.

When you are using the semantic model, the data is not stored locally within the PivotTable spreadsheet, as it is when you are using the normal PivotTable functionality in Microsoft Excel. This means that you are not limited to the number of rows that Microsoft Excel is able to handle. Since the semantic model sits in a layer between the database and the end user reporting tool, you have the ability to rename fields, add calculations, and enhance your data. It also means that whenever new data is available in the database and you have processed your semantic model, all the reports accessing the model will be updated.

The semantic model is available in SQL Server Analysis Services. It has been part of the SQL Server package since Version 7.0 and has had major revisions in the SQL Server 2005, 2008 R2, and 2012 versions. This article will focus on how to create semantic models or cubes through practical step-by-step instructions.

Getting user value through self-service reporting

SQL Server Analysis Services is an application that allows you to create a semantic model that can be used to analyze very large amounts of data with great speed. The models can either be user-created, or created and maintained by IT. If users want to create a model themselves, they can do so by using a component of Microsoft Excel 2010 and later called PowerPivot. If you run Microsoft Excel 2013, it is included in the installed product, and you just need to enable it. In Microsoft Excel 2010, you have to download it as a separate add-in, which you can find either on the Microsoft homepage or at http://www.powerpivot.com.
PowerPivot creates and uses a client-side semantic model that runs in the context of the Microsoft Excel process; you can only use Microsoft Excel as a way of analyzing the data. If you just want to run a user-created model, you do not need SQL Server at all; you just need Microsoft Excel. On the other hand, if you would like to maintain user-created models centrally, then you need both SQL Server 2012 and SharePoint.

If, instead, you would like IT to create and maintain a central semantic model, then IT needs to install SQL Server Analysis Services. In most cases, IT will not use Microsoft Excel to create the semantic models. Instead, IT will use Visual Studio as their tool. Visual Studio is much more suitable for IT than Microsoft Excel. Not only will they use it to create and maintain SQL Server Analysis Services semantic models, they will also use it for other database-related tasks. It is a tool that can be connected to a source control system, allowing several developers to work on the same project. The semantic models that they create from Visual Studio will run on a server that several clients can connect to simultaneously. The benefit of running a server-side model is that it can use the computational power of the server, which means that you can access more data. It also means that you can use a variety of tools to display the information.

Both approaches enable users to do their own self-service reporting. Where PowerPivot is used, the users have complete freedom, but they also need the necessary knowledge to extract the data from the source systems and build the model themselves. Where IT maintains the semantic model, the users only need the knowledge to connect an end user tool such as Microsoft Excel to query the model. The users are, in this case, limited to the data that is available in the predefined model, but on the other hand, it is much simpler for them to do their own reporting. This is something that can be seen in the preceding figure, which shows Microsoft Excel 2013 connected to a semantic model.

SQL Server Analysis Services is available in the Standard edition with limited functionality, and in the BI and Enterprise editions with full functionality. For smaller departmental solutions, the Standard edition can be used, but in many cases you will find that you need either the BI or the Enterprise edition of SQL Server. If you would like to create in-memory models, you cannot use the Standard edition, since this functionality is not available in it.

Summary

In this article, you learned about the requirements that most organizations have when it comes to an information management platform. You were introduced to SQL Server Analysis Services, which provides the capabilities needed to create a self-service platform that can serve as the central place for all information handling. SQL Server Analysis Services allows users to work with data in the form of business entities, instead of through accessing a database schema. It allows users to use easy-to-learn query tools such as Microsoft Excel to analyze large amounts of data with subsecond response times. The users can easily create different kinds of reports and dashboards with the semantic model as the data source.
Resources for Article : Further resources on this subject: MySQL Linked Server on SQL Server 2008 [Article] Connecting to Microsoft SQL Server Compact 3.5 with Visual Studio [Article] FAQs on Microsoft SQL Server 2008 High Availability [Article]

Oracle GoldenGate- Advanced Administration Tasks - I

Packt
20 Sep 2013
19 min read
(For more resources related to this topic, see here.) Upgrading Oracle GoldenGate binaries In this recipe you will learn how to upgrade GoldenGate binaries. You will also learn about GoldenGate patches and how to apply them. Getting ready For this recipe, we will upgrade the GoldenGate binaries from version 11.2.1.0.1 to 11.2.1.0.3 on the source system, that is prim1-ol6-112 in our case. Both of these binaries are available from the Oracle Edelivery website under the part number V32400-01 and V34339-01 respectively. 11.2.1.0.1 binaries are installed under /u01/app/ggate/112101. How to do it... The steps to upgrade the Oracle GoldenGate binaries are: Make a new directory for 11.2.1.0.3 binaries: mkdir /u01/app/ggate/112103 Copy the binaries ZIP file to the server in the new directory. Unzip the binaries file: [ggate@prim1-ol6-112 112103]$ cd /u01/app/ggate/112103 [ggate@prim1-ol6-112 112103]$ unzip V34339-01.zip Archive: V34339-01.zip inflating: fbo_ggs_Linux_x64_ora11g_64bit.tar inflating: Oracle_GoldenGate_11.2.1.0.3_README.doc inflating: Oracle GoldenGate_11.2.1.0.3_README.txt inflating: OGG_WinUnix_Rel_Notes_11.2.1.0.3.pdf Install the new binaries in /u01/app/ggate/112103: [ggate@prim1-ol6-112 112103]$ tar -pxvf fbo_ggs_Linux_x64_ora11g_64bit.tar Stop the processes in the existing installation: [ggate@prim1-ol6-112 112103]$ cd /u01/app/ggate/112101 [ggate@prim1-ol6-112 112101]$ ./ggsci Oracle GoldenGate Command Interpreter for Oracle Version 11.2.1.0.1 OGGCORE_11.2.1.0.1_PLATFORMS_120423.0230_FBO Linux, x64, 64bit (optimized), Oracle 11g on Apr 23 2012 08:32:14 Copyright (C) 1995, 2012, Oracle and/or its affiliates. All rights reserved. GGSCI (prim1-ol6-112.localdomain) 1> stop * Sending STOP request to EXTRACT EGGTEST1 ... Request processed. Sending STOP request to EXTRACT PGGTEST1 ... Request processed. Stop the manager process: GGSCI (prim1-ol6-112.localdomain) 2> STOP MGRManager process is required by other GGS processes.Are you sure you want to stop it (y/n)? ySending STOP request to MANAGER ...Request processed.Manager stopped. Copy the subdirectories to the new binaries: [ggate@prim1-ol6-112 112101]$ cp -R dirprm /u01/app/ggate/112103/[ggate@prim1-ol6-112 112101]$ cp -R dirrpt /u01/app/ggate/112103/[ggate@prim1-ol6-112 112101]$ cp -R dirchk /u01/app/ggate/112103/[ggate@prim1-ol6-112 112101]$ cp -R BR /u01/app/ggate/112103/[ggate@prim1-ol6-112 112101]$ cp -R dirpcs /u01/app/ggate/112103/ [ggate@prim1-ol6-112 112101]$ cp -R dirdef /u01/app/ggate/112103/[ggate@prim1-ol6-112 112101]$ cp -R dirout /u01/app/ggate/112103/[ggate@prim1-ol6-112 112101]$ cp -R dirdat /u01/app/ggate/112103/[ggate@prim1-ol6-112 112101]$ cp -R dirtmp /u01/app/ggate/112103/ Modify any parameter files under dirprm if you have hardcoded old binaries path in them. Edit the ggate user profile and update the value of the GoldenGate binaries home: vi .profile export GG_HOME=/u01/app/ggate/112103 Start the manager process from the new binaries: [ggate@prim1-ol6-112 ~]$ cd /u01/app/ggate/112103/ [ggate@prim1-ol6-112 112103]$ ./ggsci Oracle GoldenGate Command Interpreter for Oracle Version 11.2.1.0.3 14400833 OGGCORE_11.2.1.0.3_PLATFORMS_120823.1258_FBO Linux, x64, 64bit (optimized), Oracle 11g on Aug 23 2012 20:20:21 Copyright (C) 1995, 2012, Oracle and/or its affiliates. All rights reserved. GGSCI (prim1-ol6-112.localdomain) 1> START MGR Manager started. Start the processes: GGSCI (prim1-ol6-112.localdomain) 18> START EXTRACT * Sending START request to MANAGER ... 
EXTRACT EGGTEST1 starting Sending START request to MANAGER ... EXTRACT PGGTEST1 starting How it works... The method to upgrade the GoldenGate binaries is quite straightforward. As seen in the preceding section, you need to download and install the binaries on the server in a new directory. After this, you would stop the all GoldenGate processes that are running from the existing binaries. Then you would copy all the important GoldenGate directories with parameter files, trail files, report files, checkpoint files, and recovery files to the new binaries. If your trail files are kept on a separate filesystem which is linked to the dirdat directory using a softlink, then you would just need to create a new softlink under the new GoldenGate binaries home. Once all the files are copied, you would need to modify the parameter files if you have the path of the existing binaries hardcoded in them. The same would also need to be done in the OS profile of the ggate user. After this, you just start the manager process and rest of the processes from the new home. GoldenGate patches are all delivered as full binaries sets. This makes the procedure to patch the binaries exactly the same as performing major release upgrades. Table structure changes in GoldenGate environments with similar table definitions Almost all of the applications systems in IT undergo some change over a period of time. This change might include a fix of an identified bug, an enhancement or some configuration change required due to change in any other part of the system. The data that you would replicate using GoldenGate will most likely be part of some application schema. These schemas, just like the application software, sometimes require some changes which are driven by the application vendor. If you are replicating DDL along with DML in your environment then these schema changes will most likely be replicated by GoldenGate itself. However, if you are only replicating only DML and there are any DDL changes in the schema particularly around the tables that you are replicating, then these will affect the replication and might even break it. In this recipe, you will learn how to update the GoldenGate configuration to accommodate the schema changes that are done to the source system. This recipe assumes that the definitions of the tables that are replicated are similar in both the source and target databases. Getting ready For this recipe we are making the following assumptions: GoldenGate is set up to replicate only DML changes between the source and target environments. The application will be stopped for making schema changes in the source environment. The table structures in the source and target database are similar. The replication is configured for all objects owned by a SCOTT user using a SCOTT.* clause. The GoldenGate Admin user has been granted SELECT ANY TABLE in the source database and INSERT ANY TABLE, DELETE ANY TABLE, UPDATE ANY TABLE, SELECT ANY TABLE in the target database. The schema changes performed in this recipe are as follows: Add a new column called DOB (DATE) to the EMP table. Modify the DNAME column in the DEPT table to VARCHAR(20). Add a new table called ITEMS to the SCOTT schema: ITEMS ITEMNO NUMBER(5) PRIMARY KEY NAME VARCHAR(20) Add a new table called SALES to the SCOTT schema: SALES INVOICENO NUMBER(9) PRIMARY KEY ITEMNO NUMBER(5) FOREIGN KEY ITEMS(ITEMNO) EMPNO NUMBER(4) FOREIGN KEY EMP(EMPNO) Load the values for the DOB column in the EMP table. Load a few records in the ITEMS table. 
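If the GoldenGate Admin user in your environment does not yet have the privileges listed under Getting ready, a quick way to put them in place is sketched below. The use of SQL*Plus heredocs and the GGATE_ADMIN user name follow the examples used throughout this article; treat the snippet as an illustration rather than part of the recipe, and run each block as a DBA user on the respective database:

# On the source database: the extract only needs to read the replicated tables
sqlplus -S / as sysdba <<'EOF'
GRANT SELECT ANY TABLE TO GGATE_ADMIN;
EXIT
EOF

# On the target database: the replicat needs to read and modify the tables
sqlplus -S / as sysdba <<'EOF'
GRANT INSERT ANY TABLE, UPDATE ANY TABLE, DELETE ANY TABLE, SELECT ANY TABLE TO GGATE_ADMIN;
EXIT
EOF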
How to do it… Here are the steps that you can follow to implement the preceding schema changes in the source environment: Ensure that the application accessing the source database is stopped. There should not be any process modifying the data in the database. Once you have stopped the application, wait for 2 to 3 minutes so that all pending redo is processed by the GoldenGate extract. Check the latest timestamp read by the Extract and Datapump processes and ensure it is the current timestamp: GGSCI (prim1-ol6-112.localdomain) 9> INFO EXTRACT EGGTEST1 GGSCI (prim1-ol6-112.localdomain) 10> INFO EXTRACT * EXTRACT EGGTEST1 Last Started 2013-03-25 22:24 Status RUNNING Checkpoint Lag 00:00:00 (updated 00:00:07 ago) Log Read Checkpoint Oracle Redo Logs 2013-03-25 22:35:06 Seqno 350, RBA 11778560 SCN 0.11806849 (11806849) EXTRACT PGGTEST1 Last Started 2013-03-25 22:24 Status RUNNING Checkpoint Lag 00:00:00 (updated 00:00:04 ago) Log Read Checkpoint File /u01/app/ggate/dirdat/st000010 2013-03-25 22:35:05.000000 RBA 7631 Stop the Extract and Datapump processes in the source environment: GGSCI (prim1-ol6-112.localdomain) 1> STOP EXTRACT * Sending STOP request to EXTRACT EGGTEST1 ... Request processed. Sending STOP request to EXTRACT PGGTEST1 ... Request processed. Check the status of the Replicat process in the target environment and ensure that it has processed the timestamp noted in step 3: GGSCI (stdby1-ol6-112.localdomain) 54> INFO REPLICAT * REPLICAT RGGTEST1 Last Started 2013-03-25 22:25 Status RUNNING Checkpoint Lag 00:00:00 (updated 00:00:04 ago) Log Read Checkpoint File ./dirdat/rt000061 2013-03-25 22:37:04.950188 RBA 10039 Stop the Replicat process in the target environment: GGSCI (stdby1-ol6-112.localdomain) 48> STOP REPLICAT * Sending STOP request to REPLICAT RGGTEST1 ... Request processed. Apply the schema changes to the source database: SQL> ALTER TABLE SCOTT.EMP ADD DOB DATE; Table altered. SQL> ALTER TABLE SCOTT.DEPT MODIFY DNAME VARCHAR(20); Table altered. SQL> CREATE TABLE SCOTT.ITEMS ( ITEMNO NUMBER(5) PRIMARY KEY, NAME VARCHAR(20)); Table created. SQL> CREATE TABLE SCOTT.SALES ( INVOICENO NUMBER(9) PRIMARY KEY, ITEMNO NUMBER(5) REFERENCES SCOTT.ITEMS(ITEMNO), EMPNO NUMBER(4) REFERENCES SCOTT.EMP(EMPNO)); Table created. SQL> UPDATE SCOTT.EMP SET DOB=TO_DATE('01-01-1980','DD-MM-YYYY'); 14 rows updated. SQL> INSERT INTO SCOTT.ITEMS VALUES (1,'IRON'); 1 row created. SQL> INSERT INTO SCOTT.ITEMS VALUES (2,'COPPER'); 1 row created. SQL> INSERT INTO SCOTT.ITEMS VALUES (3,'GOLD'); 1 row created. SQL> INSERT INTO SCOTT.ITEMS VALUES (4,'SILVER'); 1 row created. SQL> COMMIT; Commit complete. Apply the schema changes to the target database: SQL> ALTER TABLE SCOTT.EMP ADD DOB DATE; Table altered. SQL> ALTER TABLE SCOTT.DEPT MODIFY DNAME VARCHAR(20); Table altered. SQL> CREATE TABLE SCOTT.ITEMS ( ITEMNO NUMBER(5) PRIMARY KEY, NAME VARCHAR(20)); Table created. SQL> CREATE TABLE SCOTT.SALES ( INVOICENO NUMBER(9) PRIMARY KEY, ITEMNO NUMBER(5) REFERENCES SCOTT.ITEMS(ITEMNO), EMPNO NUMBER(4) REFERENCES SCOTT.EMP(EMPNO)); Table created. SQL> UPDATE SCOTT.EMP SET DOB=TO_DATE('01-01-1980','DD-MM-YYYY'); 14 rows updated. SQL> INSERT INTO SCOTT.ITEMS VALUES (1,'IRON'); 1 row created. SQL> INSERT INTO SCOTT.ITEMS VALUES (2,'COPPER'); 1 row created. SQL> INSERT INTO SCOTT.ITEMS VALUES (3,'GOLD'); 1 row created. SQL> INSERT INTO SCOTT.ITEMS VALUES (4,'SILVER'); 1 row created. SQL> COMMIT; Commit complete. 
Add supplemental logging for the newly added tables: GGSCI (prim1-ol6-112.localdomain) 4> DBLOGIN USERID GGATE_ADMIN@ DBORATEST Password: Successfully logged into database. GGSCI (prim1-ol6-112.localdomain) 5> ADD TRANDATA SCOTT.ITEMS Logging of supplemental redo data enabled for table SCOTT.ITEMS. GGSCI (prim1-ol6-112.localdomain) 6> ADD TRANDATA SCOTT.SALES Logging of supplemental redo data enabled for table SCOTT.SALES. Alter the Extract and Datapump processes to skip the changes generated by the Application Schema Patch: GGSCI (prim1-ol6-112.localdomain) 7> ALTER EXTRACT EGGTEST1 BEGIN NOW EXTRACT altered. GGSCI (prim1-ol6-112.localdomain) 8> ALTER EXTRACT PGGTEST1 BEGIN NOW EXTRACT altered. Start the Extract and Datapump in the source environment: GGSCI (prim1-ol6-112.localdomain) 9> START EXTRACT * Sending START request to MANAGER ... EXTRACT EGGTEST1 starting Sending START request to MANAGER ... EXTRACT PGGTEST1 starting Start the Replicat process in the target environment: GGSCI (stdby1-ol6-112.localdomain) 56> START REPLICAT RGGTEST1 Sending START request to MANAGER ... REPLICAT RGGTEST1 starting How it works... The preceding steps cover a high level procedure that you can follow to modify the structure of the replicated tables in your GoldenGate configuration. Before you start to alter any processes or parameter file, you need to ensure that the applications are stopped and no user sessions in the database are modifying the data in the tables that you are replicating. Once the application is stopped, we check that all the redo data has been processed by GoldenGate processes and then stop. At this point we run the scripts that need to be run to make DDL changes to the database. This step needs to be run on both the source and target database as we will not be replicating these changes using GoldenGate. Once this is done, we alter the GoldenGate processes to start from the current time and start them. There's more... Some of the assumptions made in the earlier procedure might not hold true for all environments. Let's see what needs to be done in such cases where the environment does not satisfy these conditions: Specific tables defined in GoldenGate parameter files Unlike the earlier example, where the tables are defined in the parameter files using a schema qualifier for example SCOTT.*, if you have individual tables defined in the GoldenGateparameterfiles, you would need to modify the GoldenGate parameter files to add these newly created tables to include them in replication. Individual table permissions granted to the GoldenGate Admin user If you have granted table-specific permissions to the GoldenGate Admin user in the source and target environments, you would need to grant them on the newly added tables to allow the GoldenGate user to read their data in the source environment and also to apply the changes to these tables in the target environment. Supplemental logging for modified tables without any keys If you are adding or deleting any columns from the tables in the source database which do not have any primary/unique keys, you would then need to drop the existing supplemental log group and read them. This is because when there are no primary/unique keys in a table, GoldenGate adds all columns to the supplemental log group. This supplemental log group will have to be modified when the structure of the underlying table is modified. 
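To make the previous point concrete, the following sketch rebuilds the supplemental log group for a keyless table after its structure has been changed. The table name SCOTT.AUDIT_LOG is hypothetical, the credentials are the placeholders used earlier, and piping a here-document into ggsci is just a convenience; you can equally type the same commands at the GGSCI prompt:

# Rebuild the supplemental log group for a (hypothetical) keyless table
cd /u01/app/ggate/112103
./ggsci <<'EOF'
DBLOGIN USERID GGATE_ADMIN@DBORATEST PASSWORD XXXX
INFO TRANDATA SCOTT.AUDIT_LOG
DELETE TRANDATA SCOTT.AUDIT_LOG
ADD TRANDATA SCOTT.AUDIT_LOG
EOF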
Supplemental log groups with all columns for modified tables In some cases, you would need to enable supplemental logging on all columns of the source tables that you are replicating. This is mostly applicable for consolidation replication topologies where all changes are captured and converted into INSERTs in the target environment, which usually is a Data warehouse. In such cases, you need to drop and read the supplemental logging on the tables in which you are adding or removing any columns. Table structure changes in GoldenGate environments with different table definitions In this recipe you will learn how to perform table structure changes in a replication environment where the table structures in the source and target environments are not similar. Getting ready For this recipe we are making the following assumptions: GoldenGate is set up to replicate only DML changes between the source and target environments. The application will be stopped for making schema changes in the source environment. The table structures in the source and target databases are not similar. The GoldenGate Admin user has been granted SELECT ANY TABLE in the source database and INSERT ANY TABLE, DELETE ANY TABLE, UPDATE ANY TABLE, SELECT ANY TABLE in the target database. The definition file was generated for the source schema and is configured in the replicat parameter file. The schema changes performed in this recipe are as follows: Add a new column called DOB (DATE) to the EMP table. Modify the DNAME column in the DEPT table to VARCHAR(20). Add a new table called ITEMS to the SCOTT schema: ITEMS ITEMNO NUMBER(5) PRIMARY KEY NAME VARCHAR(20) Add a new table called SALES to the SCOTT schema: SALES INVOICENO NUMBER(9) PRIMARY KEY ITEMNO NUMBER(5) FOREIGN KEY ITEMS(ITEMNO) EMPNO NUMBER(4) FOREIGN KEY EMP(EMPNO) Load the values for the DOB column in the EMP table. Load a few records in the ITEMS table. How to do it... Here are the steps that you can follow to implement the previous schema changes in the source environment: Ensure that the application accessing the source database is stopped. There should not be any process modifying the data in the database. Once you have stopped the application, wait for 2 to 3 minutes so that all pending redo is processed by the GoldenGate extract. Check the latest timestamp read by the Extract and Datapump process, and ensure it is the current timestamp: GGSCI (prim1-ol6-112.localdomain) 9> INFO EXTRACT EGGTEST1 GGSCI (prim1-ol6-112.localdomain) 10> INFO EXTRACT * EXTRACT EGGTEST1 Last Started 2013-03-28 10:12 Status RUNNING Checkpoint Lag 00:00:00 (updated 00:00:07 ago) Log Read Checkpoint Oracle Redo Logs 2013-03-28 10:16:06 Seqno 352, RBA 12574320 SCN 0.11973456 (11973456) EXTRACT PGGTEST1 Last Started 2013-03-28 10:12 Status RUNNING Checkpoint Lag 00:00:00 (updated 00:00:04 ago) Log Read Checkpoint File /u01/app/ggate/dirdat/st000010 2013-03-28 10:15:43.000000 RBA 8450 Stop the Extract and Datapump processes in the source environment: GGSCI (prim1-ol6-112.localdomain) 1> STOP EXTRACT * Sending STOP request to EXTRACT EGGTEST1 ... Request processed. Sending STOP request to EXTRACT PGGTEST1 ... Request processed. 
Check the status of the Replicat process in the target environment and ensure that it has processed the timestamp noted in step 3: GGSCI (stdby1-ol6-112.localdomain) 54> INFO REPLICAT * REPLICAT RGGTEST1 Last Started 2013-03-28 10:15 Status RUNNING Checkpoint Lag 00:00:00 (updated 00:00:04 ago) Log Read Checkpoint File ./dirdat/rt000062 2013-03-28 10:15:04.950188 RBA 10039 Stop the Replicat process in the target environment: GGSCI (stdby1-ol6-112.localdomain) 48> STOP REPLICAT * Sending STOP request to REPLICAT RGGTEST1 ... Request processed. Apply the schema changes to the source database: SQL> ALTER TABLE SCOTT.EMP ADD DOB DATE; Table altered. SQL> ALTER TABLE SCOTT.DEPT MODIFY DNAME VARCHAR(20); Table altered. SQL> CREATE TABLE SCOTT.ITEMS ( ITEMNO NUMBER(5) PRIMARY KEY, NAME VARCHAR(20)); Table created. SQL> CREATE TABLE SCOTT.SALES ( INVOICENO NUMBER(9) PRIMARY KEY, ITEMNO NUMBER(5) REFERENCES SCOTT.ITEMS(ITEMNO), EMPNO NUMBER(4) REFERENCES SCOTT.EMP(EMPNO)); Table created. SQL> UPDATE SCOTT.EMP SET DOB=TO_DATE('01-01-1980','DD-MM-YYYY'); 14 rows updated. SQL> INSERT INTO SCOTT.ITEMS VALUES (1,'IRON'); 1 row created. SQL> INSERT INTO SCOTT.ITEMS VALUES (2,'COPPER'); 1 row created. SQL> INSERT INTO SCOTT.ITEMS VALUES (3,'GOLD'); 1 row created. SQL> INSERT INTO SCOTT.ITEMS VALUES (4,'SILVER'); 1 row created. SQL> COMMIT; Commit complete. Apply the schema changes to the target database: SQL> ALTER TABLE SCOTT.EMP ADD DOB DATE; Table altered. SQL> ALTER TABLE SCOTT.DEPT MODIFY DNAME VARCHAR(20); Table altered. SQL> CREATE TABLE SCOTT.ITEMS ( ITEMNO NUMBER(5) PRIMARY KEY, NAME VARCHAR(20)); Table created. SQL> CREATE TABLE SCOTT.SALES ( INVOICENO NUMBER(9) PRIMARY KEY, ITEMNO NUMBER(5) REFERENCES SCOTT.ITEMS(ITEMNO), EMPNO NUMBER(4) REFERENCES SCOTT.EMP(EMPNO)); Table created. SQL> UPDATE SCOTT.EMP SET DOB=TO_DATE('01-01-1980','DD-MM-YYYY'); 14 rows updated. SQL> INSERT INTO SCOTT.ITEMS VALUES (1,'IRON'); 1 row created. SQL> INSERT INTO SCOTT.ITEMS VALUES (2,'COPPER'); 1 row created. SQL> INSERT INTO SCOTT.ITEMS VALUES (3,'GOLD'); 1 row created. SQL> INSERT INTO SCOTT.ITEMS VALUES (4,'SILVER'); 1 row created. SQL> COMMIT; Commit complete. Add supplemental logging for the newly added tables: GGSCI (prim1-ol6-112.localdomain) 4> DBLOGIN USERID GGATE_ADMIN@DBORATEST Password: Successfully logged into database. GGSCI (prim1-ol6-112.localdomain) 5> ADD TRANDATA SCOTT.ITEMS Logging of supplemental redo data enabled for table SCOTT.ITEMS. GGSCI (prim1-ol6-112.localdomain) 6> ADD TRANDATA SCOTT.SALES Logging of supplemental redo data enabled for table SCOTT.SALES. Update the parameter file for generating definitions as follows: vi $GG_HOME/dirprm/defs.prm DEFSFILE ./dirdef/defs.def USERID ggate_admin@dboratest, PASSWORD XXXX TABLE SCOTT.EMP; TABLE SCOTT.DEPT; TABLE SCOTT.BONUS; TABLE SCOTT.DUMMY; TABLE SCOTT.SALGRADE; TABLE SCOTT.ITEMS; TABLE SCOTT.SALES; Generate the definitions in the source environment: ./defgen paramfile ./dirprm/defs.prm Push the definitions file to the target server using scp: scp ./dirdef/defs.def stdby1-ol6-112:/u01/app/ggate/dirdef/ Edit the Extract and Datapump process parameter to include the newly created tables if you have specified individual table names in them. Alter the Extract and Datapump processes to skip the changes generated by the Application Schema Patch: GGSCI (prim1-ol6-112.localdomain) 7> ALTER EXTRACT EGGTEST1 BEGIN NOW EXTRACT altered. GGSCI (prim1-ol6-112.localdomain) 8> ALTER EXTRACT PGGTEST1 BEGIN NOW EXTRACT altered. 
Start the Extract and Datapump in the source environment: GGSCI (prim1-ol6-112.localdomain) 9> START EXTRACT * Sending START request to MANAGER ... EXTRACT EGGTEST1 starting Sending START request to MANAGER ... EXTRACT PGGTEST1 starting Edit the Replicat process parameter file to include the tables: ./ggsci EDIT PARAMS RGGTEST1 REPLICAT RGGTEST1 USERID GGATE_ADMIN@TGORTEST, PASSWORD GGATE_ADMIN DISCARDFILE /u01/app/ggate/dirrpt/RGGTEST1.dsc,append,MEGABYTES 500 SOURCEDEFS ./dirdef/defs.def MAP SCOTT.BONUS, TARGET SCOTT.BONUS; MAP SCOTT.SALGRADE, TARGET SCOTT.SALGRADE; MAP SCOTT.DEPT, TARGET SCOTT.DEPT; MAP SCOTT.DUMMY, TARGET SCOTT.DUMMY; MAP SCOTT.EMP, TARGET SCOTT.EMP; MAP SCOTT.EMP,TARGET SCOTT.EMP_DIFFCOL_ORDER; MAP SCOTT.EMP, TARGET SCOTT.EMP_EXTRACOL, COLMAP(USEDEFAULTS, LAST_UPDATE_TIME = @DATENOW ()); MAP SCOTT.SALES, TARGET SCOTT.SALES; MAP SCOTT.ITEMS, TARGET SCOTT.ITEMS; Start the Replicat process in the target environment: GGSCI (stdby1-ol6-112.localdomain) 56> START REPLICAT RGGTEST1 Sending START request to MANAGER ... REPLICAT RGGTEST1 starting How it works... You can follow the previously mentioned procedure to apply any DDL changes to the tables in the source database. This procedure is valid for environments where existing table structures between the source and the target databases are not similar. The key things to note in this method are: The changes should only be made when all the changes extracted by GoldenGate are applied to the target database, and the replication processes are stopped. Once the DDL changes have been performed in the source database, the definitions file needs to be regenerated. The changes that you are making to the table structures needs to be performed on both sides. There's more… Some of the assumptions made in the earlier procedure might not hold true for all environments. Let's see what needs to be done in cases where the environment does not satisfy these conditions: Individual table permissions granted to the GoldenGate Admin user If you have granted table-specific permissions to the GoldenGate Admin user in the source and target environments, you would need to grant them on the newly added tables to allow the GoldenGate user to read their data in the source environment and also to apply the changes to these tables in the target environment. Supplemental logging for modified tables without any keys If you are adding or deleting any columns from the tables in the source database which do not have any primary/unique keys, you would then need to drop the existing supplemental log group and read them. This is because when there are no primary/unique keys in a table, GoldenGate adds all columns to the supplemental log group. This supplemental log group will need to be modified when the structure of the underlying table is modified. Supplemental log groups with all columns for modified tables In some cases, you would need to enable supplemental logging on all columns of the source tables that you are replicating. This is mostly applicable for consolidation replication topologies where all changes are captured and converted into INSERTs in the target environment, which usually is a Data warehouse. In such cases, you need to drop and read the supplemental logging on the tables in which you are adding or removing any columns.
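As a final convenience, the definition-file steps from the previous recipe can be wrapped in a small script. This is only a sketch built from the commands shown earlier (defgen, the defs.prm parameter file, and the scp push to the target host); the GoldenGate home and hostname are the ones used in this article and should be adjusted for your environment:

#!/bin/bash
# Regenerate the source table definitions and push them to the target host
GG_HOME=/u01/app/ggate/112103
cd "$GG_HOME" || exit 1
# defgen reads the table list from dirprm/defs.prm and writes dirdef/defs.def
./defgen paramfile ./dirprm/defs.prm || exit 1
# Copy the fresh definitions to the target server used by the replicat
scp ./dirdef/defs.def stdby1-ol6-112:/u01/app/ggate/dirdef/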

Using different jQuery event listeners for responsive interaction

Packt
16 Sep 2013
9 min read
(For more resources related to this topic, see here.) Getting Started First we want to create JavaScript that transforms a select form element into a button widget that changes the value of the form element when a button is pressed. So the first part of that task is to build a form with a select element. How to do it This part is simple; start by creating a new web page. Inside it, create a form with a select element. Give the select element some options. Wrap the form in a div element with the class select. See the following example. I have added a title just for placement. <h2>Super awesome form element</h2><div class="select"> <form> <select> <option value="1">1</option> <option value="Bar">Bar</option> <option value="3">3</option> </select> </form></div> Next, create a new CSS file called desktop.css and add a link to it in your header. After that, add a media query to the link for screen media and min-device-width:321px. The media query causes the browser to load the new CSS file only on devices with a screen larger than 320 pixels. Copy and paste the link to the CSS, but change the media query to screen and min-width:321px. This will help you test and demonstrate the mobile version of the widget on your desktop. <link rel="stylesheet" media="screen and (min-device-width:321px)" href="desktop.css" /><link rel="stylesheet" media="screen and (min-width:321px)" href="desktop.css" /> Next, create a script tag with a link to a new JavaScript file called uiFunctions.js and then, of course, create the new JavaScript file. Also, create another script element with a link to the recent jQuery library. <script src = "http://code.jquery.com/jquery-1.8.2.min.js"></script><script src = "uiFunctions.js"></script> Now open the new JavaScript file uiFunctions.js in your editor and add instructions to do something on a document load. $(document).ready(function(){ //Do something}); The first thing your JavaScript should do when it loads is determine what kind of device it is on—a mobile device or a desktop. There are a few logical tests you can utilize to determine whether the device is mobile. You can test navigator.userAgent; specifically, you can use the .test() method, which in essence tests a string to see whether an expression of characters is in it, checks the window width, and checks whether the screen width is smaller than 600 pixels. For this article, let's use all three of them. Ultimately, you might just want to test navigator.userAgent. Write this inside the $(document).ready() function. if( /Android|webOS|iPhone|iPad|iPod|BlackBerry/i.test(navigator.userAgent) || $(window).width()<600 ||window.screen.width<600) { //Do something for mobile} else { //Do something for the desktop} Inside, you will have to create a listener for the desktop device interaction event and the mouse click event, but that is later. First, let's write the JavaScript to create the UI widget for the select element. Create a function that iterates for each select option and appends a button with the same text as the option to the select div element. This belongs inside the $(document).ready() function, but outside and before the if condition. The order of these is important. $('select option').each(function(){ $('div.select').append('<button>'+$(this).html()+'</button>');}); Now, if you load the page on your desktop computer, you will see that it generates new buttons below the select element, one for each select option. You can click on them, but nothing happens. 
What we want them to do is change the value of the select form element. To do so, we need to add an event listener to the buttons inside the else condition. For the desktop version, you need to add a .click() event listener with a function. Inside the function, create two new variables, element and itemClicked. Make element equal the string button, and itemClicked, the jQuery object event target, or $(event.target). The next line is tricky; we're going to use the .addClass() method to add a selected class to the element variable :nth-child(n). Also, the n of the :nth-child(n) should be a call to a function named .eventAction(), to which we will add the integer 2. We will create the function next. $('button').click(function(){ var element = 'button'; var itemClicked = $(event.target); $(element+':nth-child(' + (eventAction(itemClicked,element) + 2) + ')').addClass('selected');}); Next, outside the $(document).ready() function, create the eventAction() function. It will receive the variables itemClicked and element. The reason we make this function is because it performs the same functions for both the desktop click event and the mobile tap or long tap events. function eventAction(itemClicked,element){ //Do something!}; Inside the eventAction() function, create a new variable called choiceAction. Make choiceAction equal to the index of the element object in itemClicked, or just take a look at the following code: var choiceAction = $(element).index(itemClicked); Next, use the .removeClass() method to remove the selected class from the element object. $(element).removeClass('selected'); There are only two more steps to complete the function. First, add the selected attribute to the select field option using the .eq() method and the choiceAction variable. Finally, remember that when the function was called in the click event, it was expecting something to replace the n in :nth-child(n); so end the function by returning the value of the choiceAction variable. $('select option').eq(choiceAction).attr('selected','selected');return choiceAction; That takes care of everything but the mobile event listeners. The button style will be added at the end of the article. See how it looks in the following screenshot: This will be simple. First, using jQuery's $.getScript() method, add a line to retrieve the jQuery library in the first if condition where we tested navigator.userAgent and the screen sizes to see whether the page was loaded into the viewport of a mobile device. The jQuery Mobile library will transform the HTML into a mobile, native-looking app. $.getScript("http://code.jquery.com/mobile/1.2.0/jquery.mobile-1.2.0.min.js"); The next step is to copy the desktop's click event listener, paste it below the $.getScript line, and change some values. Replace the .click() listener with a jQuery Mobile event listener, .tap() or .taphold(), change the value of the element variable to the string .uti-btn, and append the daisy-chained .parent().prev() methods to the itemClicked variable value, $(event.target). Replace the line that calls the eventAction() function in the :nth-child(n) selector with a more simple call to eventAction(), with the variables itemClicked and element. $('button').click(function(){ var element = '.ui-btn'; var itemClicked = $(event.target).parent().prev(); eventAction(itemClicked,element);}); When you click on the buttons to update the select form element in the mobile device, you will need to instruct jQuery Mobile to refresh its select menu. 
jQuery Mobile has a method to refresh its select element. $('select').selectmenu("refresh",true); That is all you need for the JavaScript file. Now open the HTML file and add a few things to the header. First, add a style tag to make the select form element and .ui-select hidden with the CSS display:none;. Next, add links to the jQuery Mobile stylesheets and desktop.css with a media query for media screen and max-width: 600px; or max-device-width:320px;. <style> select,.ui-select{display:none;}</style><link rel="stylesheet" media="screen and (max-width:600px)" href="http://code.jquery.com/mobile/1.2.0/jquery.mobile-1.2.0.min.css"><link rel="stylesheet" media="screen and (min-width:600px)" href="desktop.css" /> When launched on a mobile device, the widget will look like this: Then, open the desktop.css file and create some style for the widget buttons. For the button element, add an inline display, padding, margins, border radius, background gradient, box shadow, font color, text shadow, and a cursor style. button { display:inline; padding:8px 15px; margin:2px; border-top:1px solid #666; border-left:1px solid #666; border-bottom:1px solid #333; border-right:1px solid #333; border-radius:5px; background: #7db9e8; /* Old browsers */ background:-moz-linear-gradient(top, #7db9e8 0%,#207cca 49%,#2989d8 50%,#1e5799 100%); /* FF3.6+ */ background:-webkit-gradient(linear,left top,left bottom, color-stop(0%,#7db9e8), color-stop(49%,#207cca), color-stop(50%,#2989d8), color-stop(100%,#1e5799)); /* Chrome,Safari4+ */ background:-webkit-linear-gradient(top,#7db9e8 0%,#207cca 49%, #2989d8 50%,#1e5799 100%); /* Chrome10+,Safari5.1+ */ background:-o-linear-gradient(top,#7db9e8 0%,#207cca 49%,#2989d8 50%,#1e5799 100%); /* Opera 11.10+ */ background:-ms-linear-gradient(top,#7db9e8 0%,#207cca 49%,#2989d8 50%,#1e5799 100%); /* IE10+ */ background:linear-gradient(to bottom,#7db9e8 0%,#207cca 49%,#2989d8 50%,#1e5799 100%); /* W3C */ filter:progid:DXImageTransform.Microsoft.gradient ( startColorstr='#7db9e8', endColorstr='#1e5799',GradientType=0 ); /* IE6-9 */ color:white; text-shadow: -1px -1px 1px #333; box-shadow: 1px 1px 4px 2px #999; cursor:pointer;} Finally, add CSS for the .selected class that was added by the JavaScript. This CSS will change the button to look as if the button has been pressed in. .selected{ border-top:1px solid #333; border-left:1px solid #333; border-bottom:1px solid #666; border-right:1px solid #666; color:#ffff00; box-shadow:inset 2px 2px 2px 2px #333; background: #1e5799; /* Old browsers */ background:-moz-linear-gradient(top,#1e5799 0%,#2989d8 50%, #207cca 51%, #7db9e8 100%); /* FF3.6+ */ background:-webkit-gradient(linear,left top,left bottom, color-stop(0%,#1e5799),color-stop(50%,#2989d8), color-stop(51%,#207cca),color-stop(100%,#7db9e8)); /* Chrome,Safari4+ */ background:-webkit-linear-gradient(top, #1e5799 0%,#2989d8 50%, #207cca 51%,#7db9e8 100%); /* Chrome10+,Safari5.1+ */ background:-o-linear-gradient(top, #1e5799 0%,#2989d8 50%,#207cca 51%,#7db9e8 100%); /* Opera 11.10+ */ background:-ms-linear-gradient(top, #1e5799 0%,#2989d8 50%,#207cca 51%,#7db9e8 100%); /* IE10+ */ background: linear-gradient(to bottom, #1e5799 0%,#2989d8 50%,#207cca 51%,#7db9e8 100%); /* W3C */ filter: progid:DXImageTransform.Microsoft.gradient( startColorstr='#1e5799', endColorstr='#7db9e8',GradientType=0 ); /* IE6-9 */} How it works This uses a combination of JavaScript and media queries to build a dynamic HTML form widget. 
The JavaScript tests the user agent and screen size to see if it is a mobile device and responsively delivers a different event listener for the different device types. In addition to that, the look of the widget will be different for different devices. Summary In this article we learned how to create an interactive widget that uses unobtrusive JavaScript, which uses different event listeners for desktop versus mobile devices. This article also helped you build your own web app that can transition between the desktop and mobile versions without needing you to rewrite your entire JavaScript code. Resources for Article : Further resources on this subject: Video conversion into the required HTML5 Video playback [Article] LESS CSS Preprocessor [Article] HTML5 Presentations - creating our initial presentation [Article]

Linux Shell Scripting – various recipes to help you

Packt
16 Sep 2013
16 min read
(For more resources related to this topic, see here.) The shell scripting language is packed with all the essential problem-solving components for Unix/Linux systems. Text processing is one of the key areas where shell scripting is used, and there are beautiful utilities such as sed, awk, grep, and cut, which can be combined to solve problems related to text processing. Various utilities help to process a file in fine detail of a character, line, word, column, row, and so on, allowing us to manipulate a text file in many ways. Regular expressions are the core of pattern-matching techniques, and most of the text-processing utilities come with support for it. By using suitable regular expression strings, we can produce the desired output, such as filtering, stripping, replacing, and searching. Using regular expressions Regular expressions are the heart of text-processing techniques based on pattern matching. For fluency in writing text-processing tools, one must have a basic understanding of regular expressions. Using wild card techniques, the scope of matching text with patterns is very limited. Regular expressions are a form of tiny, highly-specialized programming language used to match text. A typical regular expression for matching an e-mail address might look like [a-z0-9_]+@[a-z0-9]+\.[a-z]+. If this looks weird, don't worry, it is really simple once you understand the concepts through this recipe. How to do it... Regular expressions are composed of text fragments and symbols, which have special meanings. Using these, we can construct any suitable regular expression string to match any text according to the context. As regex is a generic language to match texts, we are not introducing any tools in this recipe. Let's see a few examples of text matching: To match all words in a given text, we can write the regex as follows: ( ?[a-zA-Z]+ ?) ? is the notation for zero or one occurrence of the previous expression, which in this case is the space character. The [a-zA-Z]+ notation represents one or more alphabet characters (a-z and A-Z). To match an IP address, we can write the regex as follows: [0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3} Or: [[:digit:]]{1,3}\.[[:digit:]]{1,3}\.[[:digit:]]{1,3}\.[[:digit:]]{1,3} We know that an IP address is in the form 192.168.0.2. It is in the form of four integers (each from 0 to 255), separated by dots (for example, 192.168.0.2). [0-9] or [:digit:] represents a match for digits from 0 to 9. {1,3} matches one to three digits and \. matches the dot character (.). This regex will match an IP address in the text being processed. However, it doesn't check for the validity of the address. For example, an IP address of the form 123.300.1.1 will be matched by the regex despite being an invalid IP. This is because when parsing text streams, usually the aim is to only detect IPs. How it works... Let's first go through the basic components of regular expressions (regex): regex Description Example ^ This specifies the start of the line marker. ^tux matches a line that starts with tux. $ This specifies the end of the line marker. tux$ matches a line that ends with tux. . This matches any one character. Hack. matches Hack1, Hacki, but not Hack12 or Hackil; only one additional character matches. [] This matches any one of the characters enclosed in [chars]. coo[kl] matches cook or cool. [^] This matches any one of the characters except those that are enclosed in [^chars]. 9[^01] matches 92 and 93, but not 91 and 90. 
[-] This matches any character within the range specified in []. [1-5] matches any digits from 1 to 5. ? This means that the preceding item must match one or zero times. colou?r matches color or colour, but not colouur. + This means that the preceding item must match one or more times. Rollno-9+ matches Rollno-99 and Rollno-9, but not Rollno-. * This means that the preceding item must match zero or more times. co*l matches cl, col, and coool. () This treats the terms enclosed as one entity ma(tri)?x matches max or matrix. {n} This means that the preceding item must match n times. [0-9]{3} matches any three-digit number. [0-9]{3} can be expanded as [0-9][0-9][0-9]. {n,} This specifies the minimum number of times the preceding item should match. [0-9]{2,} matches any number that is two digits or longer. {n, m} This specifies the minimum and maximum number of times the preceding item should match. [0-9]{2,5} matches any number has two digits to five digits. | This specifies the alternation-one of the items on either of sides of | should match. Oct (1st | 2nd) matches Oct 1st or Oct 2nd. \ This is the escape character for escaping any of the special characters mentioned previously. a\.b matches a.b, but not ajb. It ignores the special meaning of . because of \. For more details on the regular expression components available, you can refer to the following URL: http://www.linuxforu.com/2011/04/sed-explained-part-1/ There's more... Let's see how the special meanings of certain characters are specified in the regular expressions. Treatment of special characters Regular expressions use some characters, such as $, ^, ., *, +, {, and }, as special characters. But, what if we want to use these characters as normal text characters? Let's see an example of a regex, a.txt. This will match the character a, followed by any character (due to the '.' character), which is then followed by the string txt . However, we want '.' to match a literal '.' instead of any character. In order to achieve this, we precede the character with a backward slash \ (doing this is called escaping the character). This indicates that the regex wants to match the literal character rather than its special meaning. Hence, the final regex becomes a\.txt. Visualizing regular expressions Regular expressions can be tough to understand at times, but for people who are good at understanding things with diagrams, there are utilities available to help in visualizing regex. Here is one such tool that you can use by browsing to http://www.regexper.com; it basically lets you enter a regular expression and creates a nice graph to help understand it. Here is a screenshot showing the regular expression we saw in the previous section: Searching and mining a text inside a file with grep Searching inside a file is an important use case in text processing. We may need to search through thousands of lines in a file to find out some required data, by using certain specifications. This recipe will help you learn how to locate data items of a given specification from a pool of data. How to do it... The grep command is the magic Unix utility for searching in text. It accepts regular expressions, and can produce output in various formats. Additionally, it has numerous interesting options. 
Let's see how to use them: To search for lines of text that contain the given pattern: $ grep pattern filenamethis is the line containing pattern Or: $ grep "pattern" filenamethis is the line containing pattern We can also read from stdin as follows: $ echo -e "this is a word\nnext line" | grep wordthis is a word Perform a search in multiple files by using a single grep invocation, as follows: $ grep "match_text" file1 file2 file3 ... We can highlight the word in the line by using the --color option as follows: $ grep word filename --color=autothis is the line containing word Usually, the grep command only interprets some of the special characters in match_text. To use the full set of regular expressions as input arguments, the -E option should be added, which means an extended regular expression. Or, we can use an extended regular expression enabled grep command, egrep. For example: $ grep -E "[a-z]+" filename Or: $ egrep "[a-z]+" filename In order to output only the matching portion of a text in a file, use the -o option as follows: $ echo this is a line. | egrep -o "[a-z]+\." line. In order to print all of the lines, except the line containing match_pattern, use: $ grep -v match_pattern file The -v option added to grep inverts the match results. Count the number of lines in which a matching string or regex match appears in a file or text, as follows: $ grep -c "text" filename 10 It should be noted that -c counts only the number of matching lines, not the number of times a match is made. For example: $ echo -e "1 2 3 4\nhello\n5 6" | egrep -c "[0-9]" 2 Even though there are six matching items, it prints 2, since there are only two matching lines. Multiple matches in a single line are counted only once. To count the number of matching items in a file, use the following trick: $ echo -e "1 2 3 4\nhello\n5 6" | egrep -o "[0-9]" | wc -l 6 Print the line number of the match string as follows: $ cat sample1.txt gnu is not unix linux is fun bash is art $ cat sample2.txt planetlinux $ grep linux -n sample1.txt 2:linux is fun or $ cat sample1.txt | grep linux -n If multiple files are used, it will also print the filename with the result as follows: $ grep linux -n sample1.txt sample2.txt sample1.txt:2:linux is fun sample2.txt:2:planetlinux Print the character or byte offset at which a pattern matches, as follows: $ echo gnu is not unix | grep -b -o "not" 7:not The character offset for a string in a line is a counter from 0, starting with the first character. In the preceding example, not is at the seventh offset position (that is, not starts from the seventh character in the line; that is, gnu is not unix). The -b option is always used with -o. To search over multiple files, and list which files contain the pattern, we use the following: $ grep -l linux sample1.txt sample2.txt sample1.txt sample2.txt The inverse of the -l argument is -L. The -L argument returns a list of non-matching files. There's more... We have seen the basic usages of the grep command, but that's not it; the grep command comes with even more features. Let's go through those. Recursively search many files To recursively search for a text over many directories of descendants, use the following command: $ grep "text" . -R -n In this command, "." specifies the current directory. The options -R and -r mean the same thing when used with grep. For example: $ cd src_dir $ grep "test_function()" . -R -n ./miscutils/test.c:16:test_function(); test_function() exists in line number 16 of miscutils/test.c. 
This is one of the most frequently used commands by developers. It is used to find files in the source code where a certain text exists. Ignoring case of pattern The -i argument helps match patterns to be evaluated, without considering the uppercase or lowercase. For example: $ echo hello world | grep -i "HELLO" hello grep by matching multiple patterns Usually, we specify single patterns for matching. However, we can use an argument -e to specify multiple patterns for matching, as follows: $ grep -e "pattern1" -e "pattern" This will print the lines that contain either of the patterns and output one line for each match. For example: $ echo this is a line of text | grep -e "this" -e "line" -o this line There is also another way to specify multiple patterns. We can use a pattern file for reading patterns. Write patterns to match line-by-line, and execute grep with a -f argument as follows: $ grep -f pattern_filesource_filename For example: $ cat pat_file hello cool $ echo hello this is cool | grep -f pat_file hello this is cool Including and excluding files in a grep search grep can include or exclude files in which to search. We can specify include files or exclude files by using wild card patterns. To search only for .c and .cpp files recursively in a directory by excluding all other file types, use the following command: $ grep "main()" . -r --include *.{c,cpp} Note, that some{string1,string2,string3} expands as somestring1 somestring2 somestring3. Exclude all README files in the search, as follows: $ grep "main()" . -r --exclude "README" To exclude directories, use the --exclude-dir option. To read a list of files to exclude from a file, use --exclude-from FILE. Using grep with xargs with zero-byte suffix The xargs command is often used to provide a list of file names as a command-line argument to another command. When filenames are used as command-line arguments, it is recommended to use a zero-byte terminator for the filenames instead of a space terminator. Some of the filenames can contain a space character, and it will be misinterpreted as a terminator, and a single filename may be broken into two file names (for example, New file.txt can be interpreted as two filenames New and file.txt). This problem can be avoided by using a zero-byte suffix. We use xargs so as to accept a stdin text from commands such as grep and find. Such commands can output text to stdout with a zero-byte suffix. In order to specify that the input terminator for filenames is zero byte (\0), we should use -0 with xargs. Create some test files as follows: $ echo "test" > file1 $ echo "cool" > file2 $ echo "test" > file3 In the following command sequence, grep outputs filenames with a zero-byte terminator (\0), because of the -Z option with grep. xargs -0 reads the input and separates filenames with a zero-byte terminator: $ grep "test" file* -lZ | xargs -0 rm Usually, -Z is used along with -l. Silent output for grep Sometimes, instead of actually looking at the matched strings, we are only interested in whether there was a match or not. For this, we can use the quiet option (-q), where the grep command does not write any output to the standard output. Instead, it runs the command and returns an exit status based on success or failure. We know that a command returns 0 on success, and non-zero on failure. Let's go through a script that makes use of grep in a quiet mode, for testing whether a match text appears in a file or not. 
Let's go through a script that makes use of grep in quiet mode, to test whether a given text appears in a file or not:

#!/bin/bash
#Filename: silent_grep.sh
#Desc: Testing whether a file contains a text or not

if [ $# -ne 2 ]; then
  echo "Usage: $0 match_text filename"
  exit 1
fi

match_text=$1
filename=$2

grep -q "$match_text" $filename

if [ $? -eq 0 ]; then
  echo "The text exists in the file"
else
  echo "Text does not exist in the file"
fi

The silent_grep.sh script can be run as follows, by providing a match word (Student) and a file name (student_data.txt) as the command arguments:

$ ./silent_grep.sh Student student_data.txt
The text exists in the file

Printing lines before and after text matches

Context-based printing is one of the nice features of grep. When a matching line is found for a given match text, grep usually prints only the matching lines. But we may need "n" lines after the matching line, or "n" lines before the matching line, or both. This can be performed by using context-line control in grep. Let's see how to do it.

In order to print three lines after a match, use the -A option:

$ seq 10 | grep 5 -A 3
5
6
7
8

In order to print three lines before the match, use the -B option:

$ seq 10 | grep 5 -B 3
2
3
4
5

In order to print three lines both before and after the match, use the -C option as follows:

$ seq 10 | grep 5 -C 3
2
3
4
5
6
7
8

If there are multiple matches, then each section is delimited by a line "--":

$ echo -e "a\nb\nc\na\nb\nc" | grep a -A 1
a
b
--
a
b

Cutting a file column-wise with cut

We may need to cut the text by a column rather than a row. Let's assume that we have a text file containing student reports with columns such as Roll, Name, Mark, and Percentage. We need to extract only the names of the students to another file, or any nth column in the file, or extract two or more columns. This recipe will illustrate how to perform this task.

How to do it...

cut is a small utility that often comes to our help for cutting in column fashion. It can also specify the delimiter that separates each column. In cut terminology, each column is known as a field. To extract particular fields or columns, use the following syntax:

cut -f FIELD_LIST filename

FIELD_LIST is a list of columns that are to be displayed. The list consists of column numbers delimited by commas. For example:

$ cut -f 2,3 filename

Here, the second and the third columns are displayed. cut can also read input text from stdin. Tab is the default delimiter for fields or columns. If lines without delimiters are found, they are also printed. To avoid printing lines that do not have delimiter characters, attach the -s option to cut. An example of using the cut command for columns is as follows:

$ cat student_data.txt
No Name Mark Percent
1 Sarath 45 90
2 Alex 49 98
3 Anu 45 90

$ cut -f1 student_data.txt
No
1
2
3

Extract multiple fields as follows:

$ cut -f2,4 student_data.txt
Name Percent
Sarath 90
Alex 98
Anu 90

To print multiple columns, provide a list of column numbers separated by commas as arguments to -f. We can also complement the extracted fields by using the --complement option. Suppose you have many fields and you want to print all the columns except the third column; then use the following command:

$ cut -f3 --complement student_data.txt
No Name Percent
1 Sarath 90
2 Alex 98
3 Anu 90

To specify the delimiter character for the fields, use the -d option as follows:

$ cat delimited_data.txt
No;Name;Mark;Percent
1;Sarath;45;90
2;Alex;49;98
3;Anu;45;90

$ cut -f2 -d";" delimited_data.txt
Name
Sarath
Alex
Anu

There's more

The cut command has more options to specify the character sequences to be displayed as columns.
Let's go through the additional options available with cut.

Specifying the range of characters or bytes as fields

Suppose that we don't rely on delimiters, but we need to extract fields in such a way that we define a range of characters (counting from 0 as the start of line) as a field. Such extractions are possible with cut. Let's see what notations are possible:

N-    from the Nth byte, character, or field, to the end of line
N-M   from the Nth to Mth (included) byte, character, or field
-M    from the first to Mth (included) byte, character, or field

We use the preceding notations to specify fields as a range of bytes or characters with the following options:

-b for bytes
-c for characters
-f for defining fields

For example:

$ cat range_fields.txt
abcdefghijklmnopqrstuvwxyz
abcdefghijklmnopqrstuvwxyz
abcdefghijklmnopqrstuvwxyz
abcdefghijklmnopqrstuvwxy

You can print the first to fifth characters as follows:

$ cut -c1-5 range_fields.txt
abcde
abcde
abcde
abcde

The first two characters can be printed as follows:

$ cut range_fields.txt -c -2
ab
ab
ab
ab

Replace -c with -b to count in bytes. We can specify the output delimiter to use when printing multiple ranges with -c, -f, and -b, as follows:

--output-delimiter "delimiter string"

When multiple fields are extracted with -b or -c, --output-delimiter is a must. Otherwise, you cannot distinguish between the fields. For example:

$ cut range_fields.txt -c1-3,6-9 --output-delimiter ","
abc,fghi
abc,fghi
abc,fghi
abc,fghi
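To close this recipe, grep and cut also combine well in pipelines. The following sketch is not from the original text (the directory, pattern, and field numbers are placeholders, and it assumes GNU grep and coreutils): it lists the delimited files that mention a term, NUL-terminating the file names so that spaces are handled safely, and then prints two of their columns.

# Find semicolon-delimited files under ./reports that mention "Alex",
# then print their second and fourth fields (Name and Percent in the
# student-data layout used earlier)
grep -rlZ "Alex" ./reports | xargs -0 cut -d";" -f2,4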

Designing, Sizing, Building, and Configuring Citrix VDI-in-a-Box

Packt
13 Sep 2013
7 min read
(For more resources related to this topic, see here.)

Sizing the servers

There are a number of tools and guidelines to help you to size Citrix VIAB appliances. Essentially, the guides cover the following topics:

CPU
Memory
Disk IO
Storage

In their sizing guides, Citrix classifies users into the following two groups:

Task workers
Knowledge workers

Therefore, the first thing to determine is how many of your proposed VIAB users are task workers, and how many are knowledge workers.

Task workers

Citrix would define task workers as users who run a small set of simple applications that are not very graphical in nature or CPU- or memory-intensive, for example, Microsoft Office and a simple line-of-business application.

Knowledge workers

Citrix would define knowledge workers as users who run multimedia and CPU- and memory-intensive applications. These may include large spreadsheet files, graphics packages, video playback, and so on.

CPU

Citrix offers recommendations based on CPU cores, such as the following:

3 x desktops per core for knowledge workers
6 x desktops per core for task workers
1 x core for the hypervisor

These figures can be increased slightly if the CPUs have hyper-threading. You should also add another 15 percent if delivering personal desktops. The sizing information has been gathered from the Citrix VIAB sizing guide PDF.

Example 1

If you wanted to size a server appliance to support 50 x task-based users running pooled desktops, you would require 50 / 6 = 8.3 + 1 (for the hypervisor) = 9.3 cores, rounded up to 10 cores. Therefore, a dual CPU with six cores per socket would provide 12 x CPU cores for this requirement.

Example 2

If you wanted to size a server appliance to support 15 x task and 10 x knowledge workers, you would require (15 / 6 = 2.5) + (10 / 3 = 3.3) + 1 (for the hypervisor) = 7 cores (rounded up). Therefore, a dual CPU with four cores per socket would provide 8 x CPU cores for this requirement.

Memory

The memory required depends on the desktop OS that you are running and also on the amount of optimization that you have done to the image. Citrix recommends the following guidelines:

Task worker on Windows 7: 1.5 GB
Task worker on Windows XP: 0.5 GB
Knowledge worker on Windows 7: 2 GB
Knowledge worker on Windows XP: 1 GB

It is also important to allocate memory for the hypervisor and the VIAB virtual appliance. This can vary depending on the number of users, so we would recommend using the sizing spreadsheet calculator available in the Resources section of the VIAB website. However, as a guide, we would allocate 3 GB of memory (based on 50 users) for the hypervisor and 1 GB for VIAB. The amount of memory required by the hypervisor will grow as the number of users on the server grows. Citrix also recommends adding 10 percent more memory for server operations.

Example 1

If you wanted to size a server appliance to support 50 x task-based users, with Windows 7, you would require 50 x 1.5 GB + 4 GB (for VIAB and the hypervisor) = 79 GB + 10% = 87 GB. Therefore, you would typically round this up to 96 GB of memory, an ideal configuration for this requirement.

Example 2

If you wanted to size a server appliance to support 15 x task and 10 x knowledge workers, with Windows 7, you would require (15 x 1.5 GB) + (10 x 2 GB) + 4 GB (for VIAB and the hypervisor) = 46.5 GB + 10% = approximately 51 GB. Therefore, 64 GB of memory would be an ideal configuration for this requirement.
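The CPU and memory rules above reduce to a few lines of arithmetic. The following is a minimal sketch, not an official Citrix calculator; it simply applies the ratios quoted in this article (6 task or 3 knowledge users per core plus one core for the hypervisor; 1.5 GB or 2 GB per Windows 7 task or knowledge user, 4 GB for VIAB and the hypervisor, plus 10 percent headroom). The user counts are placeholders:

#!/bin/bash
# Rough VIAB sizing sketch based on the ratios quoted in this article.
task=15        # placeholder: number of task workers
knowledge=10   # placeholder: number of knowledge workers

# Cores: task/6 + knowledge/3 + 1 for the hypervisor, rounded up
cores=$(awk -v t="$task" -v k="$knowledge" \
  'BEGIN { c = t/6 + k/3 + 1; r = int(c); if (r < c) r = r + 1; print r }')

# Memory (Windows 7): 1.5 GB per task user, 2 GB per knowledge user,
# 4 GB for VIAB and the hypervisor, plus 10 percent for server operations
mem=$(awk -v t="$task" -v k="$knowledge" \
  'BEGIN { printf "%.1f", (t*1.5 + k*2 + 4) * 1.1 }')

echo "Estimated cores required : $cores"
echo "Estimated memory required: ${mem} GB"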
Disk IO

As multiple Windows images run on the appliances, disk IO becomes very important and can often become the first bottleneck for VIAB. Citrix calculates IOPS with a 40-60 split between read and write operations during end user desktop access. Citrix doesn't recommend using slow disks for VIAB, and publishes statistics for 10K and 15K SAS disks and for SSDs. The following table shows the IOPS delivered by the following disks:

Hard drive RPM    IOPS (RAID 0)    IOPS (RAID 1)
SSD               6000             -
15000             175              122.5
10000             125              87.7

The following table shows the IOPS required for task and knowledge workers on Windows XP and Windows 7:

Desktop IOPS      Windows XP    Windows 7
Task user         5 IOPS        10 IOPS
Knowledge user    10 IOPS       20 IOPS

Some organizations decide to implement RAID 1 or 10 on the appliances to reduce the chance of an appliance failure. This does require many more disks, however, and significantly increases the cost of the solution.

SSD

SSD is becoming an attractive proposition for organizations that want to run a larger number of users on each appliance. SSD is roughly 30 times faster than 15K SAS drives, so it will eliminate desktop IO bottlenecks completely. SSD continues to come down in price, so it can be well worth considering at the start of a VIAB project. SSDs have no moving mechanical components. Compared with electromechanical disks, SSDs are typically less susceptible to physical shock, run more quietly, have lower access times, and exhibit less latency. However, while the price of SSDs has continued to decline, SSDs are still about 7 to 8 times more expensive per unit of storage than HDDs. A further option to consider would be Fusion-io, which is based on NAND flash memory technology and can deliver an exceptional number of IOPS.

Example 1

If you wanted to size a server appliance to support 50 x task workers, with Windows 7, using 15K SAS drives, you would require 175 / 10 = 17.5 users on each disk; therefore, 50 / 17.5 = 2.9, rounded up to 3 x 15K SAS disks.

Example 2

If you wanted to size a server appliance to support 15 x task workers and 10 knowledge workers, with Windows 7, you would require the following:

175 / 10 = 17.5 task users on each disk; therefore, 15 / 17.5 = 0.9 x 15K SAS disks
175 / 20 = 8.75 knowledge users on each disk; therefore, 10 / 8.75 = 1.1 x 15K SAS disks

Therefore, 2 x 15K SAS drives would be required.

Storage

Storage capacity is determined by the number of images, the number of desktops, and the types of desktop. It is best practice to store user profile information and data elsewhere. Citrix uses the following formula to determine the storage capacity requirement:

2 x golden image x number of images (assume 20 GB for an image)
70 GB for VDI-in-a-Box
15 percent of the size of the image per desktop (achieved with linked clone technology)

Example 1

If you wanted to size a server appliance to support 50 x task-based users, with two golden Windows 7 images, you would require the following:

Space for the golden images: 2 x 20 GB x 2 = 80 GB
VIAB appliance space: 70 GB
Image space per desktop: 15% x 20 GB x 50 = 150 GB
Extra room for swap and transient activity: 100 GB
Total: 400 GB
Recommended: 500 GB to 1 TB per server

We have already specified 3 x 15K SAS drives for our IO requirements. If those were 300 GB disks, they would provide enough storage. (A short script tying the IOPS and storage formulas together follows at the end of this article.)

This section of the article provides you with a step-by-step guide to help you to build and configure a VIAB solution, starting with the hypervisor install.
It then goes on to cover adding an SSL certificate, the benefits of using the GRID IP Address feature, and how you can use the Kiosk mode to deliver a standard desktop to public access areas. It then covers adding a license file and provides details on the useful features contained within Citrix profile management. It then highlights how VIAB can integrate with other Citrix products, such as NetScaler VPX, to enable secure connections across the Internet, and GoToAssist, a support and monitoring package that is very useful if you are supporting a number of VIAB appliances across multiple sites. ShareFile can again be a very useful tool to enable data files to follow the user, whether they are connecting to a local device or a virtual desktop. This can avoid the problems of files being copied across the network, delaying users.

We then move on to a discussion of the options available for connecting to VIAB, including existing PCs, thin clients, and other devices, including mobile devices. The chapter finishes with some useful information on support for VIAB, including the support services included with subscription and the knowledge forums.

Installing the hypervisor

All the hypervisors have two elements: the bare metal hypervisor that installs on the server, and its management tools that you would typically install on the IT administrator workstations.

Bare metal hypervisor    Management tool
Citrix XenServer         XenCenter
Microsoft Hyper-V        Hyper-V Manager
VMware ESXi              vSphere Client

It is relatively straightforward to install the hypervisor. Make sure linked clone support is enabled in XenServer, because VIAB relies on linked clone technology. Give the hypervisor a static IP address and make a note of the administrator's username and password. You will need to download ISO images for the installation media; if you don't already have them, they can be found on the Internet.
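Tying together the disk IOPS and storage formulas from the sizing section above, here is a minimal sketch of a calculator. It is not a Citrix tool; the worker counts, image count, and 15K SAS figure are the ones quoted earlier and are placeholders for your own values:

#!/bin/bash
# Rough VIAB disk sizing sketch using the figures quoted in this article.
task=50          # placeholder: task workers (10 IOPS each on Windows 7)
knowledge=0      # placeholder: knowledge workers (20 IOPS each on Windows 7)
images=2         # placeholder: number of golden images, ~20 GB each
disk_iops=175    # one 15K SAS disk in RAID 0

# Disks needed to satisfy the desktop IOPS requirement (rounded up)
disks=$(awk -v t="$task" -v k="$knowledge" -v d="$disk_iops" \
  'BEGIN { n = (t*10 + k*20) / d; r = int(n); if (r < n) r = r + 1; print r }')

# Storage: 2 x 20 GB per golden image, 70 GB for VIAB,
# 15% of a 20 GB image per desktop, plus 100 GB for swap/transient space
storage=$(awk -v t="$task" -v k="$knowledge" -v i="$images" \
  'BEGIN { print 2*20*i + 70 + 0.15*20*(t + k) + 100 }')

echo "15K SAS disks required (IOPS): $disks"
echo "Approximate storage required : ${storage} GB"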

IBM Cognos Insight

Packt
12 Sep 2013
9 min read
(For more resources related to this topic, see here.)

An example case for IBM Cognos Insight

Consider an example of a situation where an organization from the retail industry heavily depends on spreadsheets as its source of data collection, analysis, and decision making. These spreadsheets contain data that is used to analyze customers' buying patterns across the various products sold by multiple channels in order to boost sales across the company. The analysis hopes to reveal customers' buying patterns demographically, streamline sales channels, improve supply chain management, give an insight into forecast spending, and redirect budgets to advertising, marketing, and human capital management, as required.

As this analysis is going to involve multiple departments and resources working with spreadsheets, one of the challenges will be to have everyone speak in similar terms and numbers. Collaboration across departments is important for a successful analysis. Typically in such situations, multiple spreadsheets are created across resource pools and segregated either by time, product, or region (due to the technical limitations of spreadsheets), and often the analysis requires the consolidation of these spreadsheets to be able to make an educated decision. After the number-crunching, a consolidated spreadsheet showing high-level summaries is sent out to executives, while the details remain on other tabs within the same spreadsheet or on altogether separate spreadsheet files. This manual procedure has a high probability of errors.

A similar data analysis process in IBM Cognos Insight would result in faster decision making by keeping the details and the summaries in a highly compressed Online Analytical Processing (OLAP) in-memory cube. Using the intuitive drag-and-drop functionality or the smart metadata import wizard, the spreadsheet data now appears instantaneously (due to the in-memory analysis) in a graphical and pivot table format. Similar categorical data values, such as customer, time, product, sales channel, and retail location, are stored as dimension structures. All the numerical values bearing factual data, such as revenue, product cost, and so on, defined as measures, are stored in the OLAP cube along with the dimensions. Two or more of these dimensions and measures together form a cube view that can be sliced and diced and viewed at a summarized or a detailed level. Within each dimension, elements such as customer name, store location, revenue amount generated, and so on, are created. These can be used in calculations and trend analysis. These dimensions can be pulled out on the analysis canvas as explorer points that can be used for data filtering and sorting. Calculations, business rules, and differentiator metrics can be added to the cube view to enhance the analysis.

After enhancements to the IBM Cognos Insight workspace have been saved, these workspaces or files can be e-mailed and distributed as offline analyses. Also, the users have the option to publish the workspace into the IBM Cognos Business Intelligence web portal, Cognos Connection, or IBM Cognos Express, both of which are targeted at larger audiences, where this information can be shared with broader workgroups. Security layers can be included to protect sensitive data, if required. The publish-and-distribute option within IBM Cognos Insight is used for advanced analytics features and write-back functionality in larger deployments,
where the users can modify plans online or offline, and sync up to the enterprise environment on an as-and-when basis. As an example, the analyst can create what-if scenarios to simulate the introduction of a new promotional price for a set of smartphones during high foot-traffic times to drive up sales, or to simulate an extension of store hours during the summer months and analyze the effect on overall store revenue.

The following diagram shows the step-by-step process of dropping a spreadsheet into IBM Cognos Insight and viewing the dashboard and scorecard style reports instantaneously, which can then be shared on the IBM Cognos BI web portal or published to an IBM TM1 environment. The preceding screenshot demonstrates the steps from raw data in spreadsheets being imported into IBM Cognos Insight to reveal a dashboard style report instantaneously. Adding calculations to this workspace creates scorecard-type graphical variances, thus giving an overall picture through rich graphics.

Using analytics successfully

Over the past few years, there have been huge improvements in the technology and processes of gathering data. Using Business Analytics and applications such as IBM Cognos Insight, we can now analyze and accurately measure anything and everything. This leads to the question: are we using analytics successfully?

The following high-level recommendations should be used as guidance for organizations that are either attempting a Business Analytics implementation for the first time or are already involved with Business Analytics, both working towards a successful implementation:

The first step is to prioritize the targets that will produce intelligent analytics from the available trustworthy data. Choosing this target wisely and thoughtfully has an impact on the success rate of the implementation. Usually, these are high-value targets that need problem solving and/or quick wins to justify the need and/or investment towards a Business Analytics solution. Avoid the areas with a potential for probable budget cuts and/or involving corporate cultural and political battles that are considered to be the major factors leading to an implementation pitfall. Improve your chances by asking the question: where will we achieve maximum business value?

Selecting the appropriate product to deliver the technology is key to success: a product that is suitable for all skill levels and that can be supported by the organization's infrastructure. IBM Cognos Insight is one such product, where the learning curve is minimal thanks to its ease of use and vast features. The analysis produced by using IBM Cognos Insight can then be shared by publishing to an enterprise-level solution such as IBM Cognos BI, IBM Cognos Express, or IBM TM1. This product reduces dependencies on IT departments in terms of personnel and IT resources due to the small learning curve, easy setup, intuitive look and feel, and vast features. The sharing and collaborating capabilities eliminate the need for multiple silos of spreadsheets, one of the reasons why organizations want to move towards a more structured and regulated enterprise analytics approach.
Lastly, organize a governing body such as an Analytics Competency Center (ACC) or Analytics Center of Excellence (ACE) that has the primary responsibility to do the following:

Provide the leadership and build the team
Plan and manage the Business Analytics vision and strategy (BA roadmap)
Act as a governing body maintaining standardization at the enterprise level
Develop, test, and deliver Business Analytics solutions
Document all the processes and procedures, both functional and technical
Train and support end users of Business Analytics
Find ways to increase the Return on Investment (ROI)
Integrate Business Analytics into newer technologies such as mobile and cloud computing

A mature, enterprise-wide analytics solution has reached its goal when any employee within the organization, from an analyst to an executive or a member of the management team, can have their business-related questions answered in real time or near real time. These answers should also help predict the unknown and prepare for unforeseen circumstances. With the success of a Business Analytics solution and a realized ROI, a question that should be asked is: are the solutions robust and flexible enough to expand regionally or globally? Also, can they sustain a merger or acquisition with minimal consolidation effort? If the Business Analytics solution provides confidence in all of the above, the final question should be: can the Business Analytics solution be provided as a service to the organization's suppliers and customers?

In 2012, a global study was conducted jointly by IBM's Institute of Business Value (IBV) and MIT Sloan Management Review. This study, which included 1700 CEOs globally, reinforced the fact that one of the top objectives within their organizations was sharing and collaboration. IBM Cognos Insight, the desktop analysis application, provides collaborative features that allow users to launch development efforts via IBM's Cognos Business Intelligence, Cognos Express, and Performance Management enterprise platforms.

Let us consider a fictitious company called PointScore. Having completed its marketing, sales, and price strategy analysis, PointScore is now ready to demonstrate its research and analysis efforts to its client. Using IBM Cognos Insight, PointScore has three available options. All of these will leverage the Cognos suite of products that its client has been using and is familiar with. Each of these options can be used to share the information with a larger audience within the organization.

Though technical, this article is written for a non-technical audience as well. IBM Cognos Insight is a product that has its roots embedded in Business Intelligence, and its foundation is built upon Performance Management solutions. This article provides the readers with Business Analytics techniques and discusses the technical aspects of the product, describing its features and benefits. The goal of writing this article was to make you feel confident about the product. This article is meant to expand your creativity so that you can build better analyses and workspaces using Cognos Insight. The article focuses on the strengths of the product, which is to share and collaborate on development efforts in an existing IBM Cognos BI, Cognos Express, or TM1 environment. This sharing is possible because of the tight integration among all the products under the IBM Business Analytics umbrella.
Summary

After reading this article, you should be able to tackle Business Analytics implementations. It will also help you to leverage the sharing capabilities to reach the end goal of spreading the value of Business Analytics throughout your organization.

Resources for Article:

Further resources on this subject:
How to Set Up IBM Lotus Domino Server [Article]
Tips and Tricks on IBM FileNet P8 Content Manager [Article]
Reporting Planning Data in IBM Cognos 8: Publish and BI Integration [Article]

Master Virtual Desktop Image Creation

Packt
11 Sep 2013
11 min read
(For more resources related to this topic, see here.)

When designing your VMware Horizon View infrastructure, creating a Virtual Desktop master image is second only to infrastructure design in terms of importance. The reason for this is simple: as ubiquitous as Microsoft Windows is, it was never designed to be a hosted Virtual Desktop. The good news is that with a careful bit of planning, and a thorough understanding of what your end users need, you can build a Windows desktop that serves all your needs while requiring the bare minimum of infrastructure resources.

A default installation of Windows contains many optional components and configuration settings that are either unsuitable for, or not needed in, a Virtual Desktop environment, and understanding their impact is critical to maintaining Virtual Desktop performance over time and during peak levels of use. Uninstalling unneeded components and disabling services or scheduled tasks that are not required will help reduce the amount of resources the Virtual Desktop requires, and ensure that the View infrastructure can properly support the planned number of desktops even as resources are oversubscribed.

Oversubscription is defined as having assigned more resources than are physically available. This is most commonly done with processor resources in Virtual Desktop environments, where a single server processor core may be shared between multiple desktops. As the average desktop does not require 100 percent of its assigned resources at all times, we can share those resources between multiple desktops without affecting performance.

Why is desktop optimization important?

To date, Microsoft has only ever released versions of Windows designed to be installed on physical hardware. This isn't to say that Microsoft is unique in this regard, as neither Linux nor Mac OS X offers an installation routine that is optimized for a virtualized hardware platform. While nothing stops you from using a default installation of any OS or software package in a virtualized environment, you may find it difficult to maintain consistent levels of performance in Virtual Desktop environments where many of the resources are shared, and in almost every case oversubscribed in some manner. In this section, we will examine a sample of the CPU and disk IO resources that can be recovered were you to optimize the Virtual Desktop master image.

Due to the technological diversity that exists from one organization to the next, optimizing your Virtual Desktop master image is not an exact science. The optimization techniques used and their end results will likely vary from one organization to the next due to factors unrelated to View or vSphere. The information contained within this article will serve as a foundation for optimizing a Virtual Desktop master image, focusing primarily on the operating system.

Optimization results – desktop IOPS

Desktop optimization benefits one infrastructure component more than any other: storage. Until all-flash storage arrays achieve price parity with the traditional spinning disk arrays many of us use today, reducing the per-desktop IOPS required will continue to be an important part of any View deployment. On a per-disk basis, a flash drive can accommodate more than 15 times the IOPS of an enterprise SAS or SCSI disk, or 30 times the IOPS of a traditional desktop SATA disk.
Organizations that choose an all-flash array may find that they have more than sufficient IOPS capacity for their Virtual Desktops, even without doing any optimization. The following graph shows the reduction in IOPS that occurred after performing the optimization techniques described later in this article. The optimized desktop generated 15 percent fewer IOPS during the user workload simulation. By itself that may not seem like a significant reduction, but when multiplied by hundreds or thousands of desktops the savings become more significant.

Optimization results – CPU utilization

View supports a maximum of 16 Virtual Desktops per physical CPU core. There is no guarantee that your View implementation will be able to attain this high consolidation ratio, though, as desktop workloads will vary from one type of user to another. The optimization techniques described in this article will help maximize the number of desktops you can run per server core. The following graph shows the reduction in vSphere host % Processor Time that occurred after performing the optimization techniques described later in this article.

% Processor Time is one of the metrics that can be used to measure server processor utilization within vSphere. The statistics in the preceding graph were captured using the vSphere ESXTOP command-line utility, which provides a number of performance statistics that the vCenter performance tabs do not offer, in a raw format that is more suited to independent analysis. The optimized desktop required between 5 to 10 percent less processor time during the user workload simulation. As was the case with the IOPS reduction, the savings are significant when multiplied by large numbers of desktops.

Virtual Desktop hardware configuration

The Virtual Desktop hardware configuration should provide only what is required based on the desktop needs and the performance analysis. This section will examine the different virtual machine configuration settings that you may wish to customize, and explain their purpose.

Disabling virtual machine logging

Every time a virtual machine is powered on, and while it is running, it logs diagnostic information within the datastore that hosts its VMDK file. For environments that have a large number of Virtual Desktops, this can generate a noticeable amount of storage I/O. The following steps outline how to disable virtual machine logging:

In the vCenter client, right-click on the desktop master image virtual machine and click on Edit Settings to open the Virtual Machine Properties window.
In the Virtual Machine Properties window, select the Options tab.
Under Settings, highlight General.
Clear Enable logging, as shown in the following screenshot, which sets the logging = "FALSE" option in the virtual machine VMX file.

While disabling logging does reduce disk IO, it also removes log files that may be used for advanced troubleshooting or auditing purposes. The implications of this change should be considered before placing the desktop into production.

Removing unneeded devices

By default, a virtual machine contains several devices that may not be required in a Virtual Desktop environment. In the event that these devices are not required, they should be removed to free up server resources. The following steps outline how to remove the unneeded devices:

In the vCenter client, right-click on the desktop master image virtual machine and click on Edit Settings to open the Virtual Machine Properties window.
In the Virtual Machine Properties window, under Hardware, highlight Floppy drive 1 as shown in the following screenshot and click on Remove.
In the Virtual Machine Properties window, select the Options tab. Under Settings, highlight Boot Options. Check the checkbox under the Force BIOS Setup section as shown in the following screenshot.
Click on OK to close the Virtual Machine Properties window.
Power on the virtual machine; it will boot into the PhoenixBIOS Setup Utility. The PhoenixBIOS Setup Utility menu defaults to the Main tab.
Use the down arrow key to move down to Legacy Diskette A, and then press the Space bar until the option changes to Disabled.
Use the right arrow key to move to the Advanced tab. Use the down arrow key to select I/O Device Configuration and press Enter to open the I/O Device Configuration window.
Disable the serial ports, parallel port, and floppy disk controller as shown in the following screenshot. Use the up and down arrow keys to move between devices, and the Space bar to disable or enable each as required.
Press the F10 key to save the configuration and exit the PhoenixBIOS Setup Utility.

Do not remove the virtual CD-ROM device, as it is used by vSphere when performing an automated installation or upgrade of the VMware Tools software.

Customizing the Windows desktop OS cluster size

Microsoft Windows uses a default cluster size, also known as allocation unit size, of 4 KB when creating the boot volume during a new installation of Windows. The cluster size is the smallest amount of disk space that will be used to hold a file, which affects how many disk writes must be made to commit a file to disk. For example, when a file is 12 KB in size, and the cluster size is 4 KB, it will take three write operations to write the file to disk.

The default 4 KB cluster size will work with any storage option that you choose to use with your environment, but that does not mean it is the best option. Storage vendors frequently do performance testing to determine which cluster size is optimal for their platforms, and it is possible that some of them will recommend that the Windows cluster size be changed to ensure optimal performance.

The following steps outline how to change the Windows cluster size during the installation process; the process is the same for both Windows 7 and Windows 8. In this example, we will be using an 8 KB cluster size, although any size can be used based on the recommendation from your storage vendor. The cluster size can only be changed during the Windows installation, not after. If your storage vendor recommends the 4 KB Windows cluster size, the default Windows settings are acceptable.

Boot from the Windows OS installer ISO image or physical CD and proceed through the install steps until the Where do you want to install Windows? dialog box appears. Press Shift + F10 to bring up a command window, and enter the following commands:

diskpart
select disk 0
create partition primary size=100
active
format fs=ntfs label="System Reserve" quick
create partition primary
format fs=ntfs label=OS_8k unit=8192 quick
assign
exit

Click on Refresh to refresh the Where do you want to install Windows? window. Select Drive 0 Partition 2: OS_8k, as shown in the following screenshot, and click on Next to begin the installation.

The System Reserve partition is used by Windows to store files critical to the boot process and will not be visible to the end user.
These files must reside on a volume that uses a 4 KB cluster size, so we created a small partition solely for that purpose. Windows will automatically detect this partition and use it when performing the Windows installation. In the event that your storage vendor recommends a different cluster size than shown in the previous example, replace the 8192 in the format command shown above with whatever value the vendor recommends, in bytes, without any punctuation.

Windows OS pre-deployment tasks

The following tasks are unrelated to the other optimization tasks described in this article, but they should be completed prior to placing the desktop into production.

Installing VMware Tools

VMware Tools should be installed prior to the installation of the View Agent software. To ensure that the master image has the latest version of the VMware Tools software, apply the latest updates to the host vSphere server prior to installing the tools package on the desktop. The same applies if you are updating your VMware Tools software. The View Agent software should be reinstalled after the VMware Tools software is updated to ensure that the appropriate View drivers are installed in place of the versions included with VMware Tools.

Cleaning up and defragmenting the desktop hard disk

To minimize the space required by the Virtual Desktop master image and ensure optimal performance, the Virtual Desktop hard disks should be cleaned of nonessential files and optimized prior to deployment into production. The following actions should be taken once the Virtual Desktop master image is ready for deployment:

Use the Windows Disk Cleanup utility to remove any unnecessary files.
Use the Windows Defragment utility to defragment the virtual hard disk.

If the desktop virtual hard disks are thinly provisioned, you may wish to shrink them after the defragmentation completes. This can be performed with utilities from your storage vendor if available, by using the vSphere vmkfstools utility, or by using the vSphere Storage vMotion feature to move the virtual machine to a different datastore. Visit your storage vendor or the VMware vSphere documentation (http://www.vmware.com/support/pubs/vsphere-esxi-vcenter-server-pubs.html) for instructions on how to shrink virtual hard disks or perform a Storage vMotion.
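As an illustration of the command-line route, the following is a minimal sketch only; the datastore path and virtual machine name are placeholders, and it assumes SSH access to the ESXi host, a powered-off desktop, and an ESXi release whose vmkfstools supports the --punchzero option. The general idea is to zero the freed space inside the guest first, then reclaim it on the datastore:

# Inside the Windows guest, before shutting it down, zero the free space
# (for example with Sysinternals sdelete:  sdelete -z C:)

# Then, from the ESXi shell, deallocate the zeroed blocks of the thin disk
vmkfstools --punchzero /vmfs/volumes/datastore1/Win7-Master/Win7-Master.vmdk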

Aperture in Action

Packt
06 Sep 2013
14 min read
Controlling clipped highlights

The problem of clipped highlights is a very common issue that a photographer will often have to deal with. Digital cameras only have limited dynamic range, so clipping becomes an issue, especially with high-contrast scenes. However, if you shoot RAW, then your camera will often record more highlight information than is visible in the image. You may already be familiar with recovering highlights by using the recovery slider in Aperture, but there are actually a couple of other ways that you can bring this information back into range. The three main methods of controlling lost highlights in Aperture are:

Using the recovery slider
Using curves
Using shadows and highlights

For many cases, using the recovery slider will be good enough, but the recovery slider has its limitations. Sometimes it still leaves your highlights looking too bright, or it doesn't give you the look you wish to achieve. The other two methods mentioned give you more control over the process of recovery. If you use a Curves adjustment, you can control the way the highlight rolls off, and you can reduce the artificial look that clipped highlights can give your image, even if technically the highlight is still clipped. A Highlights & Shadows adjustment is also useful because it has a different look compared to the one that you get when using the recovery slider. It works in a slightly different way, and includes more of the brighter tones of your image when making its calculations. The Highlights & Shadows adjustment has the added advantage of being able to be brushed in.

So, how do you know which one to use? Consider taking a three-stepped approach. If the first step doesn't work, move on to the second, and so on. Eventually, it will become second nature, and you'll know which way will be the best by just looking at the photograph.

Step 1

Use the recovery slider. Drag the slider up until any clipped areas of the image start to reappear. Only drag the slider until the clipped areas have been recovered, and then stop. You may find that if your highlights are completely clipped, you may need to drag the slider all the way to the right, as per the following screenshot:

For most clipped highlight issues, this will probably be enough. If you want to see what's going on, add a Curves adjustment and set the Range field to the Extended range. You don't have to make any adjustments at this point, but the histogram in the Curves adjustment will now show you how much image data is being clipped, and how much data you can actually recover.

Real world example

In the following screenshot, the highlights on the right-hand edge of the plant pot have been completely blown out. If we zoom in, you will be able to see the problem in more detail. As you can see, all the image information has been lost from the intricate edge of this cast iron plant pot. Luckily this image had been shot in RAW, and the highlights are easily recovered. In this case, all that was necessary was the use of the recovery slider. It was dragged upward until it reached a value of around 1.1, and this brought most of the detail back into the visible range. As you can see from the preceding image, the detail has been recovered nicely and there are no more clipped highlights. The following screenshot is the finished image after the use of the recovery slider:

Step 2

If the recovery slider brought the highlights back into range, but they are still too bright, then try the Highlights & Shadows adjustment.
This will allow you to bring the highlights down even further. If you find that it is affecting the rest of your image, you can use brushes to limit the highlight adjustment to just the area you want to recover. You may find that with the Highlights & Shadows adjustment, if you drag the sliders too far the image will start to look flat and washed out. In this case, using the mid-contrast slider can add some contrast back into the image. You should use the mid-contrast slider carefully though, as too much can create an unnatural image with too much contrast.

Step 3

If the previous steps haven't addressed the problem to your satisfaction, or if the highlight areas are still clipped, you can add a roll off to your Curves adjustment. The following is a quick refresher on what to do:

Add a Curves adjustment, if you haven't already added one.
From the pop-up range menu at the bottom of the Curves adjustment, set the range to Extended.
Drag the white point of the Curves slider till it encompasses all the image information.
Create a roll off on the right-hand side of the curve, so it looks something like the following screenshot:

If you're comfortable with curves, you can skip directly to step 3 and just use a Curves adjustment, but for better results, you should combine the preceding methods to best suit your image.

Real world example

In the following screenshot (of yours truly), the photo was taken under poor lighting conditions, and there is a badly blown out highlight on the forehead. Before we fix the highlights, however, the first thing that we need to do is to fix the overall white balance, which is quite poor. In this case, the easiest way to fix this problem is to use Aperture's clever skin tone white-balance adjustment. On the White Balance adjustment brick, set the mode to Skin Tone from the pop-up menu. Now, select the color picker and pick an area of skin tone in the image. This will set the white balance to a more acceptable color. (You can tweak it more if it's not right, but this usually gives satisfactory results.)

The next step is to try and fix the clipped highlight. Let's use the three-step approach that we discussed earlier. We will start by using the recovery slider. In this case, the slider was brought all the way up, but the result wasn't enough and left an unsightly highlight, as you can see in the following screenshot:

The next step is to try the Highlights & Shadows adjustment. The highlights slider was brought up to the mid-point, and while this helped, it still didn't fix the overall problem. The highlights are still quite ugly, as you can see in the following screenshot:

Finally, a Curves adjustment was added and a gentle roll off was applied to the highlight portion of the curve. While the burned out highlight isn't completely gone, there is no longer a harsh edge to it. The result is a much better image than the original, with a more natural-looking highlight, as shown in the following screenshot:

Finishing touches

To take this image further, the face was brightened using another Curves adjustment, and the curves adjustment was brushed in over the facial area. A vignette was also added. Finally, a skin softening brush was used over the harsh shadow on the nose, and over the edges of the halo on the forehead, just to soften it even further. The result is a much better (and now useable) image than the one we started with.

Fixing blown out skies

Another common problem one often encounters with digital images is blown out skies.
Sometimes this is a result of the image being clipped beyond the dynamic range of the camera, whereas other times the day may simply have been overcast and there is no detail there to begin with. While there are situations when the sky is too bright and you just need to bring the brightness down to better match the rest of the scene, that is easily fixed. But what if there is no detail there to recover in the first place? That scenario is what we are going to look at in this section: what to do when the sky is completely gone and there's nothing left to recover.

There are options open to you in this case. The first is pretty obvious: leave it as it is. However, you might have an image that is nicely lit otherwise, but all that's ruining it is a flat washed-out sky. What would add a nice balance to an image in such a scenario is some subtle blue in the sky, even if it's just a small amount. Luckily, this is fairly easy to achieve in Aperture. Perform the following steps:

Try the steps outlined in the previous section to bring clipped highlights back into range. Sometimes simply using the recovery slider will bring clipped skies back into the visible range, depending on the capabilities of your camera. In order for the rest of this trick to work, your highlights must be in the visible range.
If you have already made any enhancements using the Enhance brick and you want to preserve those, add another Enhance brick by choosing Add New Enhance adjustment from the cog pop-up on the side of the interface.
If the Tint controls aren't visible on the Enhance brick, click on the little arrow beside the word Tint to reveal the Tint controls.
Using the right-hand Tint control (the one with the White eyedropper under it), adjust the control until it adds some blue back to the sky.
If this is adding too much blue to other areas of your image, then brush the enhance adjustment in by choosing Brush Enhance In from the cog pop-up menu.

Real world example

In this example, the sky has been completely blown out and has lost most of its color detail. The first thing to try is to see whether any detail can be recovered by using the recovery slider. In this case, some of the sky was recovered, but a lot of it was still burned out. There is simply no more information to recover. The next step is to use the tint adjustment as outlined in the preceding instructions. This puts some color back in the sky and makes it look more natural. A small adjustment of the Highlights & Shadows also helps bring the sky back into range.

Finishing touches

While the sky has now been recovered, there is still a bit of work to be done. To brighten up the rest of the image, a Curves adjustment was added, and the upper part of the curve was brought up, while the shadows were brought down to add some contrast. The following is the Curves adjustment that was used:

Finally, to reduce the large lens flare in the center of the image, I added a color adjustment and reduced the saturation and brightness of the various colors in the flare. I then painted the color adjustment in over the flare, and this reduced its impact on the image. This is the same technique that can be used for getting rid of color fringing, which will be discussed later in this article. The following screenshot is the final result:

Removing objects from a scene

One of the myths about photo workflow applications such as Aperture is that they're not good for pixel-level manipulations.
People will generally switch over to something such as Photoshop if they need to do more complex operations, such as cloning out an object. However, Aperture's retouch tool is surprisingly powerful. If you need to remove small distracting objects from a scene, it works really well. The following is an example of a shot that was entirely corrected in Aperture:

It is not really practical to give step-by-step instructions for using the tool because every situation is different, so instead, what follows is a series of tips on how best to use the retouch function:

To remove complex objects you will have to switch back and forth between the cloning and healing modes. Don't expect to do everything entirely in one mode or the other.
To remove long lines, such as the telegraph wires in the preceding example, start with the healing tool. Use this till you get close to the edge of an object in the scene you want to keep. Then switch to the cloning tool to fix the areas close to the kept object.
The healing tool can go a bit haywire near the edges of the frame, or the edges of another object, so it's often best to use the clone tool near the edges.
Remember when using the clone tool that you need to keep changing your clone source so as to avoid leaving repetitive patterns in the cloned area. To change your source area, hold down the option key, and click on the image in the area that you want to clone from.
Sometimes doing a few smaller strokes works better than one long, big stroke.
You can only have one retouch adjustment, but each stroke is stored separately within it. You can delete individual strokes, but only in the reverse order in which they were created. You can't delete the first stroke and keep the following ones if, for example, you have 10 other strokes.

It is worth taking the time to experiment with the retouch tool. Once you get the hang of this feature, you will save yourself a lot of time by not having to jump to another piece of software to do basic (or even advanced) cloning and healing.

Fixing dust spots on multiple images

A common use for the retouch tool is removing sensor dust spots from an image. If your camera's sensor has become dirty, which is surprisingly common, you may find spots of dust creeping onto your images. These are typically found when shooting at higher f-stops (narrower apertures), such as f/11 or higher, and they manifest as round dark blobs. Dust spots are usually most visible in bright areas of solid color, such as skies. The big problem with dust spots is that once your sensor has dust on it, it will record that dust in the same place in every image. Luckily, Aperture's tools make it pretty easy to remove those dust spots, and once you've removed them from one image, it's pretty simple to remove them from all your images. To remove dust spots on multiple images, perform the following steps:

Start by locating the image in your batch where the dust spots are most visible.
Zoom in to 1:1 view (100 percent zoom), and press X on your keyboard to activate the retouch tool.
Switch the retouch tool to healing mode and decrease the size of your brush till it is just bigger than the dust spot. Make sure there is some softness on the brush.
Click once over the spot to get rid of it. You should try to click on it rather than paint when it comes to dust spots, as you want the least amount of area retouched as possible.
Scan through your image when viewing at 1:1, and repeat the preceding process until you have removed all the dust spots.
Close the retouch tool's HUD to drop the tool.
Zoom back out.
Select the lift tool from the Aperture interface (it's at the bottom of the main window).
In the Lift and Stamp HUD, delete everything except the Retouch adjustment in the Adjustments submenu. To do this, select all the items except the retouch entry, and press the delete (or backspace) key.
Select another image or group of images in your batch, and press the Stamp Selected Images button on the Lift and Stamp HUD.

Your retouched settings will be copied to all your images, and because the dust spots don't move between shots, the dust should be removed on all your images.

Playing with Max 6 Framework

Packt
06 Sep 2013
17 min read
(For more resources related to this topic, see here.)

Communicating easily with Max 6 – the [serial] object

The easiest way to exchange data between your computer running a Max 6 patch and your Arduino board is via the serial port. The USB connector of our Arduino boards includes the FTDI integrated circuit EEPROM FT-232 that converts the RS-232 plain old serial standard to USB. We are going to use our basic USB connection between Arduino and our computer again in order to exchange data here.

The [serial] object

We have to remember the [serial] object's features. It provides a way to send and receive data from a serial port. To do this, there is a basic patch including basic blocks. We are going to improve it progressively all along this article. The [serial] object is like a buffer we have to poll as often as we need. If messages are sent from Arduino to the serial port of the computer, we have to ask the [serial] object to pop them out. We are going to do this in the following pages. This article is also a pretext for me to give you some of my tips and tricks in Max 6 itself. Take them and use them; they will make your patching life easier.

Selecting the right serial port

We have used the message (print) sent to [serial] in order to list all the serial ports available on the computer. Then we checked the Max window. That was not the smartest solution. Here, we are going to design a better one.

We have to remember the [loadbang] object. It fires a bang, that is, a (print) message to the following object, as soon as the patch is loaded. It is useful to set things up and initialize some values, as we could do inside our setup() block in our Arduino board's firmware. Here, we do that in order to fill the serial port selector menu. When the [serial] object receives the (print) message, it pops out a list of all the serial ports available on the computer from its right outlet, prepended by the word port. We then process the result by using [route port], which only parses lists prepended with the word port.

The [t] object is an abbreviation of [trigger]. This object sends the incoming message to many locations, as is written in the documentation, if you assume the use of the following arguments:

b means bang
f means float number
i means integer
s means symbol
l means list (that is, at least one element)

We can also use constants as arguments, and as soon as the input is received, the constant will be sent as it is. At last, the [trigger] object outputs messages in a particular order: from the rightmost outlet to the leftmost one.

So here we take the list of serial ports received from the [route] object; we send the clear message to the [umenu] object (the list menu on the left side) in order to clear the whole list. Then the list of serial ports is sent as a list (because of the first argument) to [iter]. [iter] splits a list into its individual elements. [prepend] adds a message in front of the incoming input message. That means the global process sends messages to the [umenu] object similar to the following:

append xxxxxx
append yyyyyy

Here xxxxxx and yyyyyy are the serial ports that are available. This creates the serial port selector menu by filling the list with the names of the serial ports. This is one of the typical ways to create some helpers, in this case the menu, in our patches using UI elements. As soon as you load this patch, the menu is filled, and you only have to choose the right serial port you want to use.
As soon as you select one element in the menu, the number of the element in the list is fired to its leftmost outlet. We prepend this number with port and send that to [serial], setting it up to the right serial port.

Polling system

One of the most used objects in Max 6 to send regular bangs in order to trigger things or count time is [metro]. We have to use at least one argument; this is the time between two bangs in milliseconds. Banging the [serial] object makes it pop out the values contained in its buffer. If we want to send data continuously from Arduino and process it with Max 6, activating the [metro] object is required. We then send a regular bang and can have an update of all the inputs read by Arduino inside our Max 6 patch. Choosing a value between 15 ms and 150 ms is good, but it depends on your own needs. Let's now see how we can read, parse, and select useful data being received from Arduino.

Parsing and selecting data coming from Arduino

First, I want to introduce you to a helper firmware inspired by the Arduino2Max page on the Arduino website, but updated and optimized a bit by me. It provides a way to read all the inputs on your Arduino, to pack all the data read, and to send it to our Max 6 patch through the [serial] object.

The readAll firmware

The following code is the firmware:

int val = 0;

void setup() {
  Serial.begin(9600);
  pinMode(13,INPUT);
}

void loop() {
  // Check serial buffer for characters incoming
  if (Serial.available() > 0){
    // If an 'r' is received then read all the pins
    if (Serial.read() == 'r') {
      // Read and send analog pins 0-5 values
      for (int pin= 0; pin<=5; pin++){
        val = analogRead(pin);
        sendValue (val);
      }
      // Read and send digital pins 2-13 values
      for (int pin= 2; pin<=13; pin++){
        val = digitalRead(pin);
        sendValue (val);
      }
      Serial.println(); // Carriage return to mark end of data flow.
      delay (5); // prevent buffer overload
    }
  }
}

void sendValue (int val){
  Serial.print(val);
  Serial.write(32); // add a space character after each value sent
}

For starters, we begin the serial communication at 9600 baud in the setup() block. As usual with serial communication handling, we first check if there is something in the serial buffer of Arduino by using the Serial.available() function. If something is available, we check if it is the character r. Of course, we can use any other character. r here stands for read, which is basic. If an r is received, it triggers the read of both analog and digital ports. Each value (the val variable) is passed to the sendValue() function; this basically prints the value to the serial port and adds a space character in order to format things a bit and provide easier parsing by Max 6.

We could easily adapt this code to only read some inputs and not all of them. We could also remove the sendValue() function and find another way of packing data. At the end, we push a carriage return to the serial port by using Serial.println(). This creates a separator between each pack of data that is sent. Now, let's improve our Max 6 patch to handle this pack of data being received from Arduino.

The ReadAll Max 6 patch

The following screenshot is the ReadAll Max patch that provides a way to communicate with our Arduino:

Requesting data from Arduino

First, we will see a [t b b] object. It is also a trigger, ordering bangs provided by the [metro] object. Each bang received triggers another bang to another [trigger] object, then another one to the [serial] object itself. The [t 13 r] object can seem tricky.
Now, let's improve our Max 6 patch to handle this pack of data received from the Arduino.

The ReadAll Max 6 patch

The following screenshot shows the ReadAll Max patch that provides a way to communicate with our Arduino:

Requesting data from Arduino

First, we see a [t b b] object. It is also a trigger, ordering the bangs provided by the [metro] object. Each bang received triggers another bang to another [trigger] object, then another one to the [serial] object itself. The [t 13 r] object can seem tricky. It just triggers the character r and then the integer 13. The character r is sent to [spell], which converts it to its ASCII code and then sends the result to [serial]; 13 is the ASCII code for a carriage return. This structure provides a way to fire the character r to the [serial] object (which means to the Arduino) each time the metro bangs. As we already saw in the firmware, this triggers the Arduino to read all its inputs, pack the data, and send the pack to the serial port for the Max 6 patch. To summarize what the metro triggers at each bang, we can write this sequence:

1. Send the character r to Arduino.
2. Send a carriage return to Arduino.
3. Bang the [serial] object. This triggers Arduino to send back all its data to the Max patch.

Parsing the received data

Under the [serial] object, we can see a new structure beginning with the [sel 10 13] object; [sel] is an abbreviation for the [select] object. This object checks an incoming message and fires a bang out of the outlet whose argument matches that message. Basically, here we select 10 or 13. The last (rightmost) outlet passes the incoming message through if it doesn't match any argument. Here, we don't want to consider a line feed (ASCII code 10). This is why we put it as an argument but don't do anything when it is selected. It is a nice trick to prevent this message from triggering anything, and to keep it from coming out of the right outlet of [select]. So we send all the messages received from the Arduino, except 10 and 13, to the [zl group 78] object. The latter is a powerful object providing many list-processing features; the group argument makes it group the messages received into a list, and the last argument makes sure we don't end up with too many elements in that list. As soon as [zl group] is triggered by a bang, or the list length reaches the length argument's value, it pops the whole list out of its left outlet. Here, we "accumulate" all the messages received from the Arduino, and as soon as a carriage return arrives (remember, we send one in the last lines of the loop() block of the firmware), a bang is sent and all the data is passed to the next object.

At this point we have a big list with all the data in it, each value separated from the next by a space character (the famous ASCII code 32 we added in the last function of the firmware). This list is passed to the [itoa] object; itoa stands for integer to ASCII, and this object converts integers into ASCII characters. The [fromsymbol] object then converts a symbol into a list of messages. Finally, after this [fromsymbol] object, we have our big list of values separated by spaces, totally readable. We then have to unpack the list. [unpack] is a very useful object that provides a way to cut a list of messages into individual messages. We can notice here that we implemented exactly the opposite process in the Arduino firmware when we packed each value into one big message. [unpack] takes as many arguments as we want, but it requires knowing the exact number of elements in the list sent to it. Here we send 18 values from the Arduino (6 analog and 12 digital readings), so we put 18 i arguments. i stands for integer. If we send a float, [unpack] casts it to an integer; it is important to know this, as too many students get stuck troubleshooting this in particular. We are only dealing with integers here anyway: the Arduino's ADC provides values from 0 to 1023, and the digital inputs provide only 0 or 1.
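If you are more used to C code than to Max 6 dataflow, it may help to see the same parsing expressed as a function. The following routine, written purely as an analogy for this explanation and not part of the patches, does roughly what [zl group], [itoa], [fromsymbol], and [unpack] do with one pack of data:

#include <stdlib.h>
#include <string.h>

#define NUM_VALUES 18  /* 6 analog readings followed by 12 digital readings */

/* Split one pack of data received from the Arduino (space-separated integers
   terminated by a carriage return) into individual integer values. */
int parsePack(const char *pack, int values[NUM_VALUES]) {
  char buffer[128];
  strncpy(buffer, pack, sizeof(buffer) - 1);
  buffer[sizeof(buffer) - 1] = '\0';

  int count = 0;
  char *token = strtok(buffer, " \r\n");   /* cut on spaces and line endings */
  while (token != NULL && count < NUM_VALUES) {
    values[count++] = atoi(token);         /* ASCII text back to an integer */
    token = strtok(NULL, " \r\n");
  }
  return count;                            /* number of values actually found */
}

In the patch, the same steps simply happen graphically, one object per step.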
We attached a number box to each outlet of the [unpack] object in order to display each value. Then we used a [change] object. The latter is a handy object: when it receives a value, it passes it to its output only if it is different from the previous value received. It provides an effective way to avoid sending the same value again when it isn't required. Here, I chose the argument -1 because this is not a value the Arduino firmware can send, so I'm sure that the first element received will be passed through. We now have all our values available and can use them for different jobs. But I propose a smarter way of handling them, which will also introduce a new concept.

Distributing received data and other tricks

Let's introduce some other tricks here to improve our patching style.

Cordless trick

We often have to use the same data in several places in our patches; the same data has to feed more than one object. A good way to avoid messy patches with cords and wires everywhere is to use the [send] and [receive] objects. These objects can be abbreviated as [s] and [r]; they create communication buses and provide a wireless way to communicate inside our patches. The three structures shown are equivalent. The first one is a basic cord: as soon as we send data from the upper number box, it is transmitted to the one at the other end of the cord. The second one creates a data bus named busA: as soon as you send data into [send busA], every [receive busA] object in your patch pops that data out. The third example is the same as the second one, but it creates another bus named busB.

This is a good way to distribute data. I often use it for my master clock, for instance: I have one and only one master clock banging into [send masterClock], and wherever I need that clock, I use [receive masterClock] and it provides me with the data I need. If you check the global patch, you can see that we distribute data to the structures at the bottom of the patch, but these structures could also be located elsewhere. Indeed, one of the strengths of any visual programming framework such as Max 6 is that you can visually organize every part of your code exactly as you want in your patcher. And please, do that as much as you can; it will help you support and maintain your patch throughout your long development months. Check the previous screenshot: I could have linked the [r A1] object in the top-left corner directly to the [p process03] object, but it is more readable if I keep the process chains separate. I often work this way with Max 6; this is one of the many tricks I teach in my Max 6 course. And of course, I have just introduced the [p] object, which is the abbreviation of [patcher]. Let's check a couple of tips before we continue with some good examples involving Max 6 and Arduino.

Encapsulation and subpatching

When you open Max 6 and go to File | New Patcher, it opens a blank patcher, which, if you recall, is the place where you put all your objects. There is another good feature named subpatching: you can create new patchers inside patchers, and embed patchers inside patchers as well. A patcher contained inside another one is also called a subpatcher. Let's see how it works with the patch named ReadAllCutest.maxpat. There are four new objects replacing the whole structures we designed before. These objects are subpatchers.
If you double-click on them in patch lock mode, or hold the command key (or Ctrl on Windows) and double-click on them in patch edit mode, you'll open them. Let's see what is inside them.

The [requester] subpatcher contains the same architecture that we designed before, but you can see the brown 1 and 2 objects and another blue 1 object. These are inlets and outlets. They are required if you want your subpatcher to be able to communicate with the patcher that contains it. Of course, we could use the [send] and [receive] objects for this purpose too. The position of these inlets and outlets in your subpatcher matters: if you move the 1 object to the right of the 2 object, the numbers get swapped, and the corresponding inlets in the parent patch get swapped too. You have to be careful about that, but again, you can organize them exactly as you want and need. Check the next screenshot, and then check the root patcher containing this subpatcher: it automatically swaps the inlets accordingly, keeping things consistent.

Let's now have a look at the other subpatchers:

The [p portHandler] subpatcher
The [p dataHandler] subpatcher
The [p dataDispatcher] subpatcher

In the last figure, we can see only one inlet and no outlets. Indeed, we just encapsulated the global data dispatcher system inside the subpatcher, and the latter creates its data buses with [send] objects. This is an example where we don't need, and don't even want, to use outlets. Using outlets would be messy, because we would have to link every element requesting this or that value from the Arduino with a lot of cords.

In order to create a subpatcher, you only have to type n to create a new object, then type p, a space, and the name of your subpatcher. While designing these examples, I used something that works faster than creating a subpatcher, copying and pasting the structure inside it, removing the original structure outside, and adding inlets and outlets. This feature is named encapsulate and is part of the Edit menu of Max 6. You have to select the part of the patch you want to encapsulate inside a subpatcher, then click on Encapsulate, and voilà! You have just created a subpatcher containing your structures, connected to inlets and outlets in the correct order.

Encapsulate and de-encapsulate features

You can also de-encapsulate a subpatcher. It follows the opposite process: the subpatcher is removed and the whole structure that was inside pops out directly into the parent patcher. Subpatching helps keep things well organized and readable. Imagine we have to design a whole patch with a lot of wizardry and tricks inside it. That patch is a processing unit, and once it is finished and we know what it does, we no longer want to know how it does it; we only want to use it. This provides a nice level of abstraction, keeping some processing units closed inside boxes and not cluttering the main patch.

You can copy and paste subpatchers. This is a powerful way to quickly duplicate processing units if you need to. But each subpatcher is totally independent of the others, which means that if you need to modify one because you want to update it, you have to do that individually in each subpatcher of your patch. This can be really tedious. Let me now introduce the last pure Max 6 concept, named abstractions, before I go further with Arduino.

Abstractions and reusability

Any patch created and saved can be used as a new object in another patch.
We can do this by creating a new object (typing n in a patcher) and then typing the name of our previously created and saved patch. A patch used in this way is called an abstraction. In order to call a patch as an abstraction in a patcher, the patch has to be in the Max 6 search path so that it can be found. You can check the paths known by Max 6 by going to Options | File Preferences. Usually, if you put the main patch in a folder and the other patches you want to use as abstractions in that same folder, Max 6 finds them.

The concept of abstraction in Max 6 is very powerful because it provides reusability. Imagine you have a lot of small (or big) patch structures that you use all the time, in almost every project. You can put them in a specific folder on your disk, include that folder in your Max 6 path, and then call (we say instantiate) them in every patch you design. Since each patch using an abstraction holds only a reference to the one patch being instantiated, you just need to improve the abstraction itself; each time you load a patch that uses it, the up-to-date abstraction is loaded inside it. It is really easy to maintain throughout months or years of development. Of course, if you completely change an abstraction to fit one dedicated project or patch, you'll have problems using it with other patches. Be careful to maintain at least short documentation of your abstractions. Let's now continue by describing some good examples with Arduino.