How-To Tutorials - CMS and E-Commerce


Finishing Touches and Publishing

Packt
18 Feb 2014
7 min read
Publishing a Video Demo project

Due to its very nature, a Video Demo project can only be published as an .mp4 video file. In the following exercise, you will return to the encoderVideo.cpvc project and explore the available publishing options:

1. Open the encoderVideo.cpvc file under Chapter08.
2. Make sure the file opens in Edit mode. If you are not in Edit mode, click on the Edit button at the lower-right corner of the screen. (If the Edit button is not displayed on the screen, it simply means that you already are in Edit mode.)
3. When the file is open in Edit mode, take a look at the main toolbar at the top of the interface. Click on the Publish icon or navigate to File | Publish. The Publish Video Demo dialog opens.
4. In the Publish Video Demo dialog, make sure the Name of the project is encoderVideo.
5. Click on the … button and choose the publish folder of your exercises as the destination of the published video file.
6. Open the Preset dropdown. Take some time to inspect the available presets. When done, choose the Video - Apple iPad preset.
7. Make sure the Publish Video Demo dialog looks similar to what is shown in the following screenshot and click on the Publish button.

Publishing a Video Demo project can be quite a lengthy process, so be patient. When the process is complete, a message asks you what to do next. Notice that one of the options enables you to upload your newly created video to YouTube directly.

8. Click on the Close button to discard the message.
9. Use Windows Explorer (Windows) or the Finder (Mac) to go to the publish folder of your exercises.
10. Double-click on the encoderVideo.mp4 file to open the video in the default video player of your system.

Remember that a Video Demo project can only be published as a video file. Also remember that the published .mp4 video file can only be experienced in a linear fashion and does not support any kind of interactivity.

Publishing to Flash

In the history of Captivate, publishing to Flash has always been the primary publishing option. Even though HTML5 publishing is a game changer, publishing to Flash is still an important capability of Captivate. Remember that this publishing format is currently the only one that supports every single feature, animation, and object of Captivate. In the following exercise, you will publish the Encoder Demonstration project to Flash using the default options:

1. Return to the encoderDemo_800.cptx file under Chapter08.
2. Click on the Publish icon situated right next to the Preview icon. Alternatively, you can also navigate to File | Publish. The Publish dialog box opens as shown in the following screenshot.

Notice that the Publish dialog of a regular Captivate project contains far more options than its Publish Video Demo counterpart in .cpvc files. The Publish dialog box is divided into four main areas:

- The Publish Format area (1): This is where you choose the format in which you want to publish your project. Basically, there are three options to choose from: SWF/HTML5, Media, and Print. The other options (E-Mail, FTP, and Adobe Connect) are actually suboptions of the SWF/HTML5, Media, and Print formats.
- The Output Format Options area (2): The content of this area depends on the format chosen in the Publish Format (1) area.
- The Project Information area (3): This area is a summary of the main project preferences and metadata. Clicking on the links in this area will bring you back to the corresponding preferences dialog boxes.
- The Advanced Options area (4): This area provides some additional advanced publishing options.

You will now move on to the actual publication of the project in the Flash format:

3. In the leftmost column of the Publish dialog, make sure the chosen format is SWF/HTML5.
4. In the central area, change the Project Title to encoderDemo_800_flash.
5. Click on the Browse… button situated just below the Folder field and choose to publish your movie in the publish folder of your exercises.
6. Make sure the Publish to Folder checkbox is selected.
7. Take a quick look at the remaining options, but leave them all at their current settings.
8. Click on the Publish button at the bottom-right corner of the Publish dialog box.
9. When Captivate has finished publishing the movie, an information box appears on the screen asking whether you want to view the output. Click on No to discard the information box and return to Captivate.

You will now use the Finder (Mac) or Windows Explorer (Windows) to take a look at the files Captivate has generated:

10. Browse to the publish folder of your exercises. Because you selected the Publish to Folder checkbox in the Publish dialog, Captivate has automatically created the encoderDemo_800_flash subfolder in the publish folder.
11. Open the encoderDemo_800_flash subfolder to inspect its content. There should be five files stored in this location:
   - encoderDemo_800_flash.swf: This is the main Flash file containing the compiled version of the .cptx project
   - encoderDemo_800_flash.html: This file is an HTML page used to wrap the Flash file
   - standard.js: This is a JavaScript file used to make the Flash player work well within the HTML page
   - demo_en.flv: This is the video file used on slide 2 of the movie
   - captivate.css: This file provides the necessary style rules to ensure proper formatting of the HTML page

If you want to embed the compiled Captivate movie in an existing HTML page, only the .swf file (plus, in this case, the .flv video) is needed. The HTML editor (such as Adobe Dreamweaver) will recreate the necessary HTML, JavaScript, and CSS files. (A minimal embed sketch appears at the end of this article.)

Captivate and Dreamweaver
Adobe Dreamweaver CC is the HTML editor of the Creative Cloud and the industry-leading solution for authoring professional web pages. Inserting a Captivate file in a Dreamweaver page is dead easy! First, move or copy the main Flash file (.swf) as well as the needed support files (in this case, the .flv video file), if any, somewhere in the root folder of the Dreamweaver site. When done, use the Files panel of Dreamweaver to drag and drop the main .swf file onto the HTML page. That's it! More information on Dreamweaver can be found at http://www.adobe.com/products/dreamweaver.html.

You will now test the compiled project in a web browser. This is an important test, as it closely recreates the conditions in which the students will experience the movie once it is uploaded to a web server:

12. Double-click on the encoderDemo_800_flash.html file to open it in a web browser.
13. Enjoy the final version of the demonstration you have created!

Now that you have experienced the workflow of publishing the project to Flash with the default options, you will explore some additional publishing options.

Using the Scalable HTML content option

Thanks to the Scalable HTML content option of Captivate, the eLearning content is automatically resized to fit the screen on which it is viewed.
Let's experiment with this option hands-on using the following steps:

1. If needed, return to the encoderDemo_800.cptx file under Chapter08.
2. Click on the Publish icon situated right next to the Preview icon. Alternatively, you can also navigate to File | Publish.
3. In the leftmost column, make sure the chosen format is SWF/HTML5.
4. In the central column, change the Project Title to encoderDemo_800_flashScalable.
5. Click on the Browse… button situated just below the Folder field and ensure that the publish folder is still the publish folder of your exercises.
6. Make sure the Publish to Folder checkbox is selected.
7. In the Advanced Options section (lower-right corner of the Publish dialog), select the Scalable HTML content checkbox.
8. Leave the remaining options at their current values and click on the Publish button at the bottom-right corner of the Publish dialog box.
9. When Captivate has finished publishing the movie, an information box appears on the screen asking whether you want to view the output. Click on Yes to discard the information box and open the published movie in the default web browser.
10. During playback, use your mouse to resize your browser window and notice how the movie is resized and always fits the available space without being distorted.

The Scalable HTML content option also works when the project is published in HTML5.
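As noted in the Publishing to Flash section, only the .swf file (and, here, the .flv video) is needed to embed the published movie in an existing HTML page. The following is a minimal hand-written sketch of such a page, not taken from the article: the markup Captivate or Dreamweaver generates is considerably more elaborate, and the 800-by-600 pixel size is an assumption based on the project name.

```html
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <title>Encoder Demonstration</title>
</head>
<body>
  <!-- Minimal Flash embed; assumes the .swf (and demo_en.flv) sit next to this page -->
  <object type="application/x-shockwave-flash"
          data="encoderDemo_800_flash.swf"
          width="800" height="600">
    <param name="movie" value="encoderDemo_800_flash.swf" />
    <param name="quality" value="high" />
    <p>This content requires the Adobe Flash Player.</p>
  </object>
</body>
</html>
```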

Creating a Shipping Module

Packt
17 Feb 2014
12 min read
Shipping ordered products to customers is one of the key parts of the e-commerce flow. In most cases, a shop owner has a contract with a shipping handler, and every handler has its own business rules. A standard Magento installation supports the following shipping handlers:

- DHL
- FedEx
- UPS
- USPS

If your handler is not on the list, check whether a module is available on Magento Connect. If not, you can configure a standard shipping method, or you can create your own, which we will do in this article.

Initializing module configurations

In this recipe, we will create the necessary files for a shipping module, which we will extend with more features using the recipes of this article.

Getting ready

Open your code editor with the Magento project. Also, get access to the backend, where we will check some things.

How to do it...

The following steps describe how we can create the configuration for a shipping module:

1. Create the following folders:
   - app/code/local/Packt/
   - app/code/local/Packt/Shipme/
   - app/code/local/Packt/Shipme/etc/
   - app/code/local/Packt/Shipme/Model/
   - app/code/local/Packt/Shipme/Model/Carrier
2. Create the module file named Packt_Shipme.xml in the folder app/etc/modules with the following content:

```xml
<?xml version="1.0"?>
<config>
  <modules>
    <Packt_Shipme>
      <active>true</active>
      <codePool>local</codePool>
      <depends>
        <Mage_Shipping />
      </depends>
    </Packt_Shipme>
  </modules>
</config>
```

3. Create a config.xml file in the folder app/code/local/Packt/Shipme/etc/ with the following content:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<config>
  <modules>
    <Packt_Shipme>
      <version>0.0.1</version>
    </Packt_Shipme>
  </modules>
  <global>
    <models>
      <shipme>
        <class>Packt_Shipme_Model</class>
      </shipme>
    </models>
  </global>
  <default>
    <carriers>
      <shipme>
        <active>1</active>
        <model>shipme/carrier_shipme</model>
        <title>Shipme shipping</title>
        <express_enabled>1</express_enabled>
        <express_title>Express delivery</express_title>
        <express_price>4</express_price>
        <business_enabled>1</business_enabled>
        <business_title>Business delivery</business_title>
        <business_price>5</business_price>
      </shipme>
    </carriers>
  </default>
</config>
```

4. Clear the cache and navigate in the backend to System | Configuration | Advanced | Disable Modules Output. Observe that the Packt_Shipme module is on the list. At this point, the module is initialized and working.
5. Now, we have to create a system.xml file where we will put the configuration parameters for our shipping module. Create the file app/code/local/Packt/Shipme/etc/system.xml. When you paste the following code into the file, you will create an extra group in the shipping method's configuration.
In this group, we can set the settings for the new shipping method:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<config>
  <sections>
    <carriers>
      <groups>
        <shipme translate="label" module="shipping">
          <label>Shipme</label>
          <sort_order>15</sort_order>
          <show_in_default>1</show_in_default>
          <show_in_website>1</show_in_website>
          <show_in_store>1</show_in_store>
          <fields>
            <!-- Define configuration fields below -->
            <active translate="label">
              <label>Enabled</label>
              <frontend_type>select</frontend_type>
              <source_model>adminhtml/system_config_source_yesno</source_model>
              <sort_order>10</sort_order>
              <show_in_default>1</show_in_default>
              <show_in_website>1</show_in_website>
              <show_in_store>1</show_in_store>
            </active>
            <title translate="label">
              <label>Title</label>
              <frontend_type>text</frontend_type>
              <sort_order>20</sort_order>
              <show_in_default>1</show_in_default>
              <show_in_website>1</show_in_website>
              <show_in_store>1</show_in_store>
            </title>
          </fields>
        </shipme>
      </groups>
    </carriers>
  </sections>
</config>
```

6. Clear the cache and navigate in the backend to the shipping method configuration page. To do that, navigate to System | Configuration | Sales | Shipping Methods. You will see that an extra group has been added, as shown in the following screenshot. There is now a new shipping method called Shipme. We will extend this configuration with some values.
7. Add the following code under the <fields> tag of the module (the <active> and <title> fields shown earlier stay in place; the express and business fields are new):

```xml
<express_enabled translate="label">
  <label>Enable express</label>
  <frontend_type>select</frontend_type>
  <source_model>adminhtml/system_config_source_yesno</source_model>
  <sort_order>30</sort_order>
  <show_in_default>1</show_in_default>
  <show_in_website>1</show_in_website>
  <show_in_store>1</show_in_store>
</express_enabled>
<express_title translate="label">
  <label>Title express</label>
  <frontend_type>text</frontend_type>
  <sort_order>40</sort_order>
  <show_in_default>1</show_in_default>
  <show_in_website>1</show_in_website>
  <show_in_store>1</show_in_store>
</express_title>
<express_price translate="label">
  <label>Price express</label>
  <frontend_type>text</frontend_type>
  <sort_order>50</sort_order>
  <show_in_default>1</show_in_default>
  <show_in_website>1</show_in_website>
  <show_in_store>1</show_in_store>
</express_price>
<business_enabled translate="label">
  <label>Enable business</label>
  <frontend_type>select</frontend_type>
  <source_model>adminhtml/system_config_source_yesno</source_model>
  <sort_order>60</sort_order>
  <show_in_default>1</show_in_default>
  <show_in_website>1</show_in_website>
  <show_in_store>1</show_in_store>
</business_enabled>
<business_title translate="label">
  <label>Title business</label>
  <frontend_type>text</frontend_type>
  <sort_order>70</sort_order>
  <show_in_default>1</show_in_default>
  <show_in_website>1</show_in_website>
  <show_in_store>1</show_in_store>
</business_title>
<business_price translate="label">
  <label>Price business</label>
  <frontend_type>text</frontend_type>
  <sort_order>80</sort_order>
  <show_in_default>1</show_in_default>
  <show_in_website>1</show_in_website>
  <show_in_store>1</show_in_store>
</business_price>
```

8. Clear the cache and reload the backend. You will now see the other configuration values under the Shipme shipping method group, as shown in the following screenshot.

How it works...

The first thing we did was create the necessary files to initialize the module. The following files are required to initialize a module:

- app/etc/modules/Packt_Shipme.xml
- app/code/local/Packt/Shipme/etc/config.xml

In the first file, we activate the module with the <active> tag. The <codePool> tag states that the module is located in the local code pool, which represents the folder app/code/local/. This file also contains the <depends> tag. First, this will check whether the Mage_Shipping module is installed; if it is not, Magento will throw an exception. If the module is available, the dependency causes this module to be loaded after the Mage_Shipping module. This makes it possible to rewrite some values from the Mage_Shipping module.

In the second file, config.xml, we configured everything that we will need in this module:

- The version number (0.0.1)
- The models
- Some default values for the configuration values

The last thing we did was create a system.xml file so that we can create a custom configuration for the shipping module. The configuration in the system.xml file adds some extra values to the shipping method configuration, which is available in the backend under the menu System | Configuration | Sales | Shipping Methods.

In this module, we created a new shipping handler called Shipme. Within this handler, you can configure two shipping options: express and business. In the system.xml file, we created the fields to configure the visibility, name, and price of each option.

See also

In this recipe, we used the system.xml file of the module to create the configuration values.

Writing an adapter model

A new shipping module was initialized in the previous recipe. That recipe was a preparation for the business part we will cover now: adding a model with the business logic for the shipping method. The model is called an adapter class because Magento requires an adapter class for each shipping method. This class extends the Mage_Shipping_Model_Carrier_Abstract class and is used for the following things:

- Making the shipping method available
- Calculating the shipping costs
- Setting the title of the shipping methods in the frontend

How to do it...

Perform the following steps to create the adapter class for the shipping method:

1. Create the folder app/code/local/Packt/Shipme/Model/Carrier if it doesn't already exist.
2. In this folder, create a file named Shipme.php and add the following content to it:

```php
<?php

class Packt_Shipme_Model_Carrier_Shipme
    extends Mage_Shipping_Model_Carrier_Abstract
    implements Mage_Shipping_Model_Carrier_Interface
{
    protected $_code = 'shipme';

    public function collectRates(Mage_Shipping_Model_Rate_Request $request)
    {
        $result = Mage::getModel('shipping/rate_result');

        // Check if the express method is enabled
        if ($this->getConfigData('express_enabled')) {
            $method = Mage::getModel('shipping/rate_result_method');
            $method->setCarrier($this->_code);
            $method->setCarrierTitle($this->getConfigData('title'));
            $method->setMethod('express');
            $method->setMethodTitle($this->getConfigData('express_title'));
            $method->setCost($this->getConfigData('express_price'));
            $method->setPrice($this->getConfigData('express_price'));
            $result->append($method);
        }

        // Check if the business method is enabled
        if ($this->getConfigData('business_enabled')) {
            $method = Mage::getModel('shipping/rate_result_method');
            $method->setCarrier($this->_code);
            $method->setCarrierTitle($this->getConfigData('title'));
            $method->setMethod('business');
            $method->setMethodTitle($this->getConfigData('business_title'));
            $method->setCost($this->getConfigData('business_price'));
            $method->setPrice($this->getConfigData('business_price'));
            $result->append($method);
        }

        return $result;
    }

    public function isActive()
    {
        $active = $this->getConfigData('active');
        return $active == 1 || $active == 'true';
    }

    public function getAllowedMethods()
    {
        return array('shipme' => $this->getConfigData('name'));
    }
}
```

3. Save the file and clear the cache; your adapter model has now been created.

How it works...

The previously created class handles all the business logic that is needed for the shipping method. Because this adapter class is an extension of the Mage_Shipping_Model_Carrier_Abstract class, we can overwrite some methods to customize the standard business logic.

The first method we overwrite is the isActive() function. In this function, we have to return true or false to say whether the module is active. In our code, we activate the module based on the system configuration field active.

The second method is the collectRates() function. This function is used to set the right parameters for every shipping method. For every shipping method, we can set the title and price.

The class implements the interface Mage_Shipping_Model_Carrier_Interface. In this interface, two functions are declared: isTrackingAvailable() and getAllowedMethods(). We created the getAllowedMethods() function in the adapter class; the isTrackingAvailable() function is declared in the parent class Mage_Shipping_Model_Carrier_Abstract.

We configured two options under the Shipme shipping method, called Express delivery and Business delivery. We check whether each is enabled in the configuration and set the configured title and price for each option.

The last thing to do is return the right values. We have to return an instance of the class Mage_Shipping_Model_Rate_Result. We created an empty instance of the class, to which we append the methods when they are available. To add a method, we use the append($method) function. This function requires an instance of the class Mage_Shipping_Model_Rate_Result_Method, which we created in the two if statements.
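As a quick sanity check, the following is a minimal sketch, not part of the original recipe, that you could drop into a throwaway script in the Magento root to confirm that the carrier model resolves and returns rates. The file name test_shipme.php is hypothetical; the Mage::app() bootstrap is a common pattern, and the empty rate request works here only because our collectRates() reads configuration data rather than the request details.

```php
<?php
// Hypothetical check script, e.g. test_shipme.php in the Magento root
require_once 'app/Mage.php';
Mage::app();

// Resolve the adapter through the model alias registered in config.xml
$carrier = Mage::getModel('shipme/carrier_shipme');
var_dump(get_class($carrier));   // Packt_Shipme_Model_Carrier_Shipme
var_dump($carrier->isActive());  // true while carriers/shipme/active is 1

// collectRates() only reads config data, so an empty request suffices here
$request = Mage::getModel('shipping/rate_request');
foreach ($carrier->collectRates($request)->getAllRates() as $rate) {
    echo $rate->getMethodTitle() . ': ' . $rate->getPrice() . PHP_EOL;
}
```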

Preparing and Configuring Your Magento Website

Packt
10 Jan 2014
8 min read
Focusing on your keywords

We'll focus on three major considerations when choosing where to place our keywords within a Magento store:

- Purpose: What is the purpose of optimizing this keyword?
- Relevance: Is the keyword relevant to the page we have chosen to optimize it for?
- Structure: Does the structure of the website reinforce the nature of our keyword?

The purpose of choosing keywords to optimize on our Magento store must always be to increase our sales. It is true that (generically speaking) optimizing keywords means driving visitors to our website, but in the case of an e-commerce website, the end goal, the true justification of any SEO campaign, must be increasing the number of sales. We must then make sure that our visitors not only visit our website, but visit with the intention of buying something.

The keywords we have chosen to optimize must be relevant to the page we are optimizing them on. The page, therefore, must contain elements specifically related to our keyword, and any unrelated material must be kept to a minimum. Driving potential customers to a page where their search term is unrelated to the content not only frustrates the visitor, but also lessens their desire to purchase from our website.

The structure of our website must complement our chosen keyword. Competitive phrases, usually broader phrases with the highest search volume, are naturally the hardest to optimize. These types of keywords require a strong page to optimize them effectively. In most cases, the strength of a page is related to its level or tier within the URL. For example, the home page is normally seen as the strongest page, suitable for high-search-volume broad phrases, followed by a tiered structure of categories, subcategories, and finally, product pages, as the diagram illustrates.

With that said, we must be mindful of all three considerations when matching our keywords to our pages. As the following diagram shows, the relationship between these three elements is vital for ensuring not only that our keyword resides on a page with enough strength to enable it to perform, but also that it has enough relevance to retain our user intent while adhering to our overall purpose.

The role of the home page

You may be forgiven for thinking that optimizing our most competitive keyword on the home page would lead to the best results. However, when we take into account the relevance of our home page, does it really match our keyword? The answer is usually that it doesn't. In most cases, the home page should be used exclusively as a platform for building our brand identity. Our brand identity is the face of our business and is how customers will remember us long after they've purchased our goods and exited our website.

In rare cases, we could optimize keywords on our home page that directly match our brand; for example, if our company name is "Wooden Furniture Co.", it might be acceptable to optimize for "Wooden Furniture" on our home page. It would also be acceptable if we were selling a single item on a single-page e-commerce website.

In a typical Magento store, we would hope to see the keyword distribution pattern shown in the following diagram. The buying intention of our visitors will almost certainly differ between each of these types of pages. Typically, a user entering our website via a broad phrase will have less intention to buy our products than a visitor entering our website through a more specific, product-related search term.
Structuring our categories for better optimization

Normally, our most competitive keywords will be classified as broad keywords, meaning that their relevance could be attributed to a variety of similar terms. This is why it makes sense to use top-level or parent categories as a basis for our broad phrases. To use our example, Wooden Furniture would be an ideal top-level category to contain subcategories such as Wooden Tables, Wooden Chairs, and Wooden Wardrobes, with content on our top-level category page to highlight these subcategories.

On the Magento administration panel, go to Catalog | Manage Categories. Here, we can arrange our category structure to match our keyword relevance and broadness. In an ideal world, we would plan out our category structure before implementing it; sadly, that is not always the case. If we need to change our category structure to better match our SEO strategy, Magento provides a simple way to alter our category hierarchy.

For example, say we currently have a top-level category called Furniture, and within this category, we have Wooden Furniture, and we decide that we're only optimizing for Wooden Furniture; we can use Magento's drag-and-drop functionality to move Wooden Furniture to become a top-level category. To do this, we would have to perform the following steps:

1. Navigate to Catalog | Manage Categories.
2. Drag our Wooden Furniture category to the same level as Furniture.

We will see that our URL has now changed from http://www.mydomain.com/furniture/wooden-furniture.html to http://www.mydomain.com/wooden-furniture.html. We will also notice that our old URL now redirects to our new URL; this is due to Magento's inbuilt URL Rewrite System. When moving our categories within the hierarchy, Magento will remember the old URL path that was specified and automatically create a redirect to the new location. This is fantastic for our SEO strategy, as 301 redirects are vital for passing on authority from the old page to the new one. If we wanted to have a look at these rewrites ourselves, we could perform the following steps:

1. Navigate to Catalog | URL Rewrite Management.
2. From the table, find our old request path and see the new target path that has been assigned.

Not only does Magento keep track of our last URL; any previous URLs become rewritten as well. It is therefore not surprising that a large Magento store with numerous products and categories could have thousands upon thousands of rows within this table, especially when each URL is rewritten on a per-store basis. There are many configuration options within Magento that allow us to decide how and what Magento rewrites for us automatically.

Another important point to note is that your category URL key may change depending on whether an existing category with the same URL key at the same level had existed previously in the system. If this situation occurs, an automatic incremental integer is appended to the URL key, for example, wooden-furniture-2.html. Magento Enterprise Edition has been enhanced to only allow unique URL keys. To know more, go to goo.gl/CKprNB.

Optimizing our CMS pages

CMS pages within Magento are primarily used as information pages. Terms and conditions, privacy policy, and returns policy are all examples of CMS pages that are created and configured within the Magento administration panel under CMS | Pages. By default, the home page of a Magento store is a CMS page with the title Home Page.
The page that is served as the home page can be configured within the Magento Configuration under System | Configuration | Web | Default Pages. The most important part of a CMS page setup is that its URL key is always relative to the website's base URL. This means that when creating CMS pages, you can manually choose how deep you wish the page to exist on the site. This gives us the ability to create as many nested CMS pages as we like. Another important point to note is that, by default, CMS pages have no file extension (URL suffix), as opposed to the category and product URLs, where we can specify which extension to use (if any).

For CMS pages, the default optimization methods that are available to us are found within the Page Information tabs after selecting a CMS page:

- Under the Page Information subtab, we can choose our Page Title and URL key
- Under the Content subtab, we can enter our Content Heading (by default, this gets inserted into an <h1> tag) and enter our body content
- Under the Meta Data subtab, we can specify our keywords and description

As mentioned previously, we would focus optimization on these pages purely on the intent of our users. If we were not using custom blocks or other methods to display product information, we would not optimize these information pages for keywords relating to purchasing a product.

Summary

In this article, we have learned the basic concepts of keyword placement and the roles of the different types of pages to prepare and configure your Magento website.

Resources for Article:

Further resources on this subject:
- Magento: Exploring Themes [Article]
- Magento: Payment and shipping method [Article]
- Integrating Twitter with Magento [Article]

Creating Identity and Resource Pools in Cisco Unified Computing System

Packt
24 Dec 2013
7 min read
Computers and their various peripherals have some unique identities, such as Universally Unique Identifiers (UUIDs), Media Access Control (MAC) addresses of Network Interface Cards (NICs), World Wide Node Numbers (WWNNs) for Host Bus Adapters (HBAs), and others. These identities are used to uniquely identify a computer system in a network. For traditional computers and peripherals, these identities were burned into the hardware and, hence, couldn't be altered easily. Operating systems and some applications rely on these identities and may fail if they are changed. In the case of a full computer system failure, or the failure of a computer peripheral with a unique identity, administrators have to follow cumbersome firmware upgrade procedures to replicate the identities of the failed components on the replacement components.

The Unified Computing System (UCS) platform introduced the idea of creating identity and resource pools in the UCS Manager (UCSM) to abstract the compute node identities, instead of using the hardware burned-in identities. In this article, we'll discuss the different pools you can create during UCS deployments and server provisioning. We'll start by looking at what pools are, then discuss the different types of pools and show how to configure each of them.

Understanding identity and resource pools

The salient feature of the Cisco UCS platform is stateless computing. In the Cisco UCS platform, none of the computer peripherals consume the hardware burned-in identities. Rather, all the unique characteristics are extracted from identity and resource pools, which reside on the Fabric Interconnects (FIs) and are managed using UCSM. These resource and identity pools are defined in an XML format, which makes them extremely portable and easily modifiable.

UCS computers and peripherals extract these identities from UCSM in the form of a service profile. A service profile holds all the server identities, including UUIDs, MACs, WWNNs, firmware versions, BIOS settings, and other server settings. A service profile is associated with a physical server using a customized Linux OS that assigns all the settings in the service profile to that server. In case of server failure, the failed server is removed and the replacement server is associated with the existing service profile of the failed server. During this service profile association, the new server automatically picks up all the identities of the failed server, and any operating system or applications dependent upon these identities will not observe any change in the hardware. In case of a peripheral failure, the replacement peripheral automatically acquires the identities of the failed component. This greatly reduces the time required to recover a system after a failure.

Using service profiles with the identity and resource pools also greatly reduces the server provisioning effort. A service profile with all the settings can be prepared in advance while an administrator is waiting for the delivery of the physical server. The administrator can also create service profile templates that can be used to create hundreds of service profiles, which can then be associated with physical servers that share the same hardware specifications. Creating a server template is highly recommended, as it greatly reduces the time for server provisioning: a template is created once and used for any number of physical servers with the same hardware.
Server identity and resource pools are created using the UCSM. For better organization, it is possible to define as many pools as needed in each category. Keep in mind that each defined resource will consume space in the UCSM database; it is, therefore, a best practice to create identity and resource pool ranges based on current and near-future assessments.

For larger deployments, it is best practice to define a hierarchy of resources in the UCSM based on geographical, departmental, or other criteria; for example, a hierarchy can be defined based on different departments. This hierarchy is defined as an organization, and resource pools can be created for each organizational unit. In the UCSM, the main organization unit is root, and further suborganizations can be defined under it. The only consideration to keep in mind is that pools defined under one organizational unit can't be migrated to other organizational units unless they are deleted first and then created again where required. The following diagram shows how identity and resource pools provide unique features to a stateless blade server and components such as the mezzanine card.

Learning to create a UUID pool

A UUID is a 128-bit number, assigned to every compute node on a network to identify the compute node globally, and is denoted as 32 hexadecimal digits. In the Cisco UCSM, a server UUID can be generated using a UUID suffix pool. The UCSM software generates a unique prefix to ensure that the generated compute node UUID is unique. Operating systems, including hypervisors, and some applications may leverage UUID number binding. The UUIDs generated with a resource pool are portable: in case of a catastrophic failure of the compute node, the pooled UUID assigned through a service profile can easily be transferred to a replacement compute node without going through complex firmware upgrades.

Following are the steps to create UUIDs for the blade servers:

1. Log in to the UCSM screen.
2. Click on the Servers tab in the navigation pane.
3. Click on the Pools tab and expand root.
4. Right-click on UUID Suffix Pools and click on Create UUID Suffix Pool, as shown in the following screenshot.
5. In the pop-up window, assign the Name and Description values for the UUID pool. Leave the Prefix value as Derived to make sure that UCSM makes the prefix unique. The default Assignment Order, Default, is random; select Sequential to assign the UUIDs sequentially. Click on Next, as shown in the following screenshot.
6. Click on Add in the next screen.
7. In the pop-up window, change the value for Size to create the desired number of UUIDs. Click on OK and then on Finish in the previous screen, as shown in the following screenshot.
8. In order to verify the UUID suffix pool, click on the UUID Suffix Pools tab in the navigation pane and then on the UUID Suffixes tab in the work pane, as shown in the following screenshot.

Learning to create a MAC pool

A MAC address is a 48-bit address assigned to a network interface for communication in the physical network. MAC address pools make server provisioning easier by providing scalable NIC configurations before the actual deployment.

Following are the steps to create MAC pools:

1. Log in to the UCSM screen.
2. Click on the LAN tab in the navigation pane.
3. Click on the Pools tab and expand root.
4. Right-click on MAC Pools and click on Create MAC Pool, as shown in the following screenshot.
5. In the pop-up window, assign the Name and Description values for the MAC pool.
6. The default Assignment Order, Default, is random; select Sequential to assign the MAC addresses sequentially. Click on Next, as shown in the following screenshot.
7. Click on Add in the next screen.
8. In the pop-up window, change Size to create the desired number of MAC addresses. Click on OK and then on Finish in the previous screen, as shown in the following screenshot.
9. In order to verify the MAC pool, click on the MAC Pools tab in the navigation pane and then on the MAC Addresses tab in the work pane, as shown in the following screenshot.

Working with Tooltips

Packt
23 Dec 2013
6 min read
The jQuery team introduced their version of the tooltip as part of the changes to Version 1.9 of the library; it was designed to act as a direct replacement for the standard tooltip used in all browsers. The difference here, though, is that whilst you can't style the standard tooltip, jQuery UI's replacement is intended to be accessible, themeable, and completely customizable. It has been set to display not only when a control receives focus, but also when you hover over that control, which makes it easier to use for keyboard users.

Implementing a default tooltip

Tooltips were built to act as direct replacements for the browser's native tooltips. They will recognize the default markup of the title attribute in a tag, and use it to automatically add the additional markup required for the widget. The target selector can be customized, though, using tooltip's items and content options. Let's first have a look at the basic structure required for implementing tooltips.

In a new file in your text editor, create the following page:

```html
<!DOCTYPE HTML>
<html>
<head>
  <meta charset="utf-8">
  <title>Tooltip</title>
  <link rel="stylesheet" href="development-bundle/themes/redmond/jquery.ui.all.css">
  <style>
    p { font-family: Verdana, sans-serif; }
  </style>
  <script src="js/jquery-2.0.3.js"></script>
  <script src="development-bundle/ui/jquery.ui.core.js"></script>
  <script src="development-bundle/ui/jquery.ui.widget.js"></script>
  <script src="development-bundle/ui/jquery.ui.position.js"></script>
  <script src="development-bundle/ui/jquery.ui.tooltip.js"></script>
  <script>
    $(document).ready(function($){
      $(document).tooltip();
    });
  </script>
</head>
<body>
  <p>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nulla blandit mi quis imperdiet semper. Fusce vulputate venenatis fringilla. Donec vitae facilisis tortor. Mauris dignissim nibh ac justo ultricies, nec vehicula ipsum ultricies. Mauris molestie felis ligula, id tincidunt urna consectetur at. Praesent <a href="http://www.ipsum.com" title="This was generated from www.ipsum.com">blandit</a> faucibus ante ut semper. Pellentesque non tristique nisi. Ut hendrerit tempus nulla, sit amet venenatis felis lobortis feugiat. Nam ac facilisis magna. Praesent consequat, risus in semper imperdiet, nulla lorem aliquet nisi, a laoreet nisl leo rutrum mauris.</p>
</body>
</html>
```

Save the code as tooltip1.html in your jqueryui working folder. Let's review what was used. The following script and CSS resources are needed for the default tooltip widget configuration:

- jquery.ui.all.css
- jquery-2.0.3.js
- jquery.ui.core.js
- jquery.ui.widget.js
- jquery.ui.position.js
- jquery.ui.tooltip.js

The script required to create a tooltip, when using the title element in the underlying HTML, can be as simple as the following, placed after the last library <script> element as shown in the previous example:

```js
$(document).ready(function($){
  $(document).tooltip();
});
```

In this example, when hovering over the link, the library adds the requisite aria-describedby attribute for screen readers to the HTML link. The widget then dynamically generates the markup for the tooltip and appends it to the document, just before the closing </body> tag. This is automatically removed as soon as the target element loses focus. ARIA, or Accessible Rich Internet Applications, provides a way to make content more accessible to people with disabilities.
You can learn more about this initiative at https://developer.mozilla.org/en-US/docs/Accessibility/ARIA.

It is not necessary to use only the $(document) element when adding tooltips. Tooltips will work equally well with classes or selector IDs; using a selector ID will give a finer degree of control. (A short sketch of this appears at the end of this article.)

Overriding the default styles

When styling the Tooltip widget, we are not limited to merely using the prebuilt themes on offer; we can always elect to override existing styles with our own. In our next example, we'll see how easy this is to accomplish by making some minor changes to the example from tooltip1.html.

In a new document, add the following styles, and save it as tooltipOverride.css within the css folder:

```css
p { font-family: Verdana, sans-serif; }

.ui-tooltip {
  background: #637887;
  color: #fff;
}
```

Don't forget to link to the new style sheet from the <head> of your document:

```html
<link rel="stylesheet" href="css/tooltipOverride.css">
```

Before we continue, it is worth explaining a great trick for styling tooltips before committing the results to code. If you are using Firefox, you can download and install the Toggle JS add-on for Firefox, which is available from https://addons.mozilla.org/en-US/firefox/addon/toggle-js/. This allows us to switch off JavaScript on a per-page basis; we can then hover over the link to create the tooltip, before expanding the markup in Firebug and styling it at our leisure.

Save your HTML document as tooltip2.html. When you run the page in a browser, you should see the modified tooltip appear when hovering over the link in the text.

Using prebuilt themes

If creating completely new styles by hand is overkill for your needs, you can always elect to use one of the prebuilt themes that are available for download from the jQuery UI site. This is a really easy change to make. We first need to download a copy of the replacement theme; in our example, we're going to use one called Excite Bike. Start by browsing to http://jqueryui.com/download/, then deselect the Toggle All option. We don't need to download the whole library, just the theme: change the theme option at the bottom to display Excite Bike, then select Download.

Next, open a copy of tooltip2.html and look for this line:

```html
<link rel="stylesheet" href="development-bundle/themes/redmond/jquery.ui.all.css">
```

Notice the highlighted theme name, redmond, in the above line. Change this to excite-bike, save the document as tooltip3.html, then remove the tooltipOverride.css link, and you're all set. The following is our replacement theme in action.

With a single change of word, we can switch between any of the prebuilt themes available for use with jQuery UI (or indeed any of the custom ones that others have made available online), as long as you have downloaded and copied the theme into the appropriate folder. There may be occasions, though, where we need to tweak the settings. This gives us the best of both worlds, where we only need to concentrate on making the required changes. Let's take a look at how we can alter an existing theme, using ThemeRoller.
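As a quick illustration of the items and content options mentioned at the start of this article, the following is a minimal sketch, not from the original text. The [data-tip] attribute selector and the element IDs are hypothetical, and it assumes the same script files as tooltip1.html are loaded.

```js
$(document).ready(function ($) {
  // Attach tooltips only to elements matching the items selector,
  // and build the tooltip text from a data attribute instead of title
  $(document).tooltip({
    items: "[data-tip], [title]",
    content: function () {
      var el = $(this);
      // Prefer the custom data-tip text, falling back to the title attribute
      return el.data("tip") || el.attr("title");
    }
  });
});
```

Scoping the call to a specific element, such as $("#intro").tooltip(), instead of $(document) gives the same finer degree of control with a selector ID.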

Creating a Direct2D game window class

Packt
23 Dec 2013
12 min read
To put some graphics on the screen, the first step for us is creating a new game window class that will use Direct2D. This new game window class will derive from our original game window class, while adding the Direct2D functionality. Open Visual Studio and add a new class to the project called GameWindow2D. We need to change its declaration to:

```csharp
public class GameWindow2D : GameWindow, IDisposable
```

As you can see, it inherits from the GameWindow class, meaning that it has all of the public and protected members of the GameWindow class, as though we had implemented them again in this class. It also implements the IDisposable interface, just as the GameWindow class does. Also, don't forget to add a reference to SlimDX to this project if you haven't already.

We need to add some using statements to the top of this class file as well. They are all the same using statements that the GameWindow class has, plus one more: SlimDX.Direct2D. They are as follows:

```csharp
using System.Windows.Forms;
using System.Diagnostics;
using System.Drawing;
using System;

using SlimDX;
using SlimDX.Direct2D;
using SlimDX.Windows;
```

Next, we need to create a handful of member variables:

```csharp
WindowRenderTarget m_RenderTarget;
Factory m_Factory;
PathGeometry m_Geometry;
SolidColorBrush m_BrushRed;
SolidColorBrush m_BrushGreen;
SolidColorBrush m_BrushBlue;
```

The first variable is a WindowRenderTarget object. The term render target refers to the surface we are going to draw on. In this case, it is our game window, but this is not always the case. Games can render to other places as well. For example, rendering into a texture object is used to create various effects. One example would be a simple security camera effect: say we have a security camera in one room and a monitor in another room, and we want the monitor to display what our security camera sees. To do this, we can render the camera's view into a texture, which can then be used to texture the screen of the monitor. Of course, this has to be redone in every frame so that the monitor screen shows what the camera is currently seeing. This idea is useful in 2D too.

Back to our member variables: the second one is a Factory object that we will be using to set up our Direct2D stuff. It is used to create Direct2D resources such as render targets. The third variable is a PathGeometry object that will hold the geometry for the first thing we will draw, which will be a rectangle. The last three variables are all SolidColorBrush objects. We use these to specify the color we want to draw something with. There is a little more to them than that, but that's all we need right now.

The constructor

Let's turn our attention now to the constructor of our Direct2D game window class. It will do two things. Firstly, it will call the base class constructor (remember, the base class is the original GameWindow class), and it will then get our Direct2D stuff initialized.
The following is the initial code for our constructor:

```csharp
public GameWindow2D(string title, int width, int height, bool fullscreen)
    : base(title, width, height, fullscreen)
{
    m_Factory = new Factory();

    WindowRenderTargetProperties properties = new WindowRenderTargetProperties();
    properties.Handle = FormObject.Handle;
    properties.PixelSize = new Size(width, height);

    m_RenderTarget = new WindowRenderTarget(m_Factory, properties);
}
```

In the preceding code, the line starting with a colon calls the constructor of the base class for us. This ensures that everything inherited from the base class is initialized. In the body of the constructor, the first line creates a new Factory object and stores it in our m_Factory member variable. Next, we create a WindowRenderTargetProperties object and store the handle of our RenderForm object in it. Note that FormObject is one of the properties defined in our GameWindow base class, and remember that the RenderForm object is a SlimDX object that represents a window for us to draw on. The next line saves the size of our game window in the PixelSize property. The WindowRenderTargetProperties object is basically how we specify the initial configuration for a WindowRenderTarget object when we create it. The last line in our constructor creates our WindowRenderTarget object, storing it in our m_RenderTarget member variable. The two parameters we pass in are our Factory object and the WindowRenderTargetProperties object we just created. A WindowRenderTarget object is a render target that refers to the client area of a window; we use it to draw in a window.

Creating our rectangle

Now that our render target is set up, we are ready to draw stuff, but first we need to create something to draw! So, we will add a bit more code at the bottom of our constructor. First, we need to initialize our three SolidColorBrush objects. Add these three lines of code at the bottom of the constructor:

```csharp
m_BrushRed = new SolidColorBrush(m_RenderTarget, new Color4(1.0f, 1.0f, 0.0f, 0.0f));
m_BrushGreen = new SolidColorBrush(m_RenderTarget, new Color4(1.0f, 0.0f, 1.0f, 0.0f));
m_BrushBlue = new SolidColorBrush(m_RenderTarget, new Color4(1.0f, 0.0f, 0.0f, 1.0f));
```

This code is fairly simple. For each brush, we pass in two parameters. The first parameter is the render target we will use this brush on. The second parameter is the color of the brush, which is an ARGB (Alpha Red Green Blue) value. The first value we give for the color is 1.0f; the f character on the end indicates that this number is of the float data type. We set alpha to 1.0 because we want the brush to be completely opaque. A value of 0.0 will make it completely transparent, and a value of 0.5 will be 50 percent transparent. Next, we have the red, green, and blue parameters. These are all float values in the range 0.0 to 1.0 as well. As you can see for the red brush, we set the red channel to 1.0f and the green and blue channels are both set to 0.0f. This means we have maximum red, but no green or blue in our color.

With our SolidColorBrush objects set up, we now have three brushes we can draw with, but we still lack something to draw! So, let's fix that by adding some code to make our rectangle.
Add this code to the end of the constructor:

```csharp
m_Geometry = new PathGeometry(m_RenderTarget.Factory);

using (GeometrySink sink = m_Geometry.Open())
{
    int top = (int)(0.25f * FormObject.Height);
    int left = (int)(0.25f * FormObject.Width);
    int right = (int)(0.75f * FormObject.Width);
    int bottom = (int)(0.75f * FormObject.Height);

    PointF p0 = new Point(left, top);
    PointF p1 = new Point(right, top);
    PointF p2 = new Point(right, bottom);
    PointF p3 = new Point(left, bottom);

    sink.BeginFigure(p0, FigureBegin.Filled);
    sink.AddLine(p1);
    sink.AddLine(p2);
    sink.AddLine(p3);
    sink.EndFigure(FigureEnd.Closed);
    sink.Close();
}
```

This code is a bit longer, but it's still fairly simple. The first line creates a new PathGeometry object and stores it in our m_Geometry member variable. The next line starts the using block and creates a new GeometrySink object that we will use to build the geometry of our rectangle. The using block will automatically dispose of the GeometrySink object for us when program execution reaches the end of the block. The using blocks only work with objects that implement the IDisposable interface.

The next four lines calculate where each edge of our rectangle will be. For example, the first line calculates the vertical position of the top edge of the rectangle. In this case, we are making the rectangle's top edge sit 25 percent of the way down from the top of the screen. Then, we do the same thing for the other three sides of our rectangle. The second group of four lines of code creates four Point objects and initializes them using the values we just calculated. These four Point objects represent the corners of our rectangle. A point is also often referred to as a vertex; when we have more than one vertex, we call them vertices (pronounced vert-is-ces).

The final group of code has six lines. They use the GeometrySink and the Point objects we just created to set up the geometry of our rectangle inside the PathGeometry object. The first line uses the BeginFigure() method to begin the creation of a new geometric figure. The next three lines each add one more line segment to the figure by adding another point or vertex to it. With all four vertices added, we then call the EndFigure() method to specify that we are done adding vertices. The last line calls the Close() method to specify that we are finished adding geometric figures, since we can have more than one if we want. In this case, we are only adding one geometric figure, our rectangle.

Drawing our rectangle

Since our rectangle never changes, we don't need to add any code to our UpdateScene() method. We will override the base class's UpdateScene() method anyway, in case we need to add some code here later:

```csharp
public override void UpdateScene(double frameTime)
{
    base.UpdateScene(frameTime);
}
```

As you can see, we only have one line of code in this override of the base class's UpdateScene() method. It simply calls the base class's version of this method. This is important because the base class's UpdateScene() method contains our code that gets the latest user input data each frame. Now, we are finally ready to write the code that will draw our rectangle on the screen! We will override the RenderScene() method so we can add our custom code.
The following is the code:

```csharp
public override void RenderScene()
{
    if ((!this.IsInitialized) || this.IsDisposed)
    {
        return;
    }

    m_RenderTarget.BeginDraw();
    m_RenderTarget.Clear(ClearColor);

    m_RenderTarget.FillGeometry(m_Geometry, m_BrushBlue);
    m_RenderTarget.DrawGeometry(m_Geometry, m_BrushRed, 1.0f);

    m_RenderTarget.EndDraw();
}
```

First, we have an if statement, which happens to be identical to the one we put in the base class's RenderScene() method. This is because we are not calling the base class's RenderScene() method, since the only code in it is this if statement. Not calling the base class version of this method gives us a slight performance boost, since we don't have the overhead of that function call. We could do the same thing with the UpdateScene() method as well. In this case we didn't, because the base class version of that method has a lot more code in it. In your own projects, you may want to copy and paste that code into your override of the UpdateScene() method.

The next line of code calls the render target's BeginDraw() method to tell it that we are ready to begin drawing. Then, we clear the screen on the next line by filling it with the color stored in the ClearColor property that is defined by our GameWindow base class. The last three lines draw our geometry twice. First, we draw it using the FillGeometry() method of our render target. This draws our rectangle filled in with the specified brush (in this case, solid blue). Then, we draw the rectangle a second time, but this time with the DrawGeometry() method. This draws only the lines of our shape but doesn't fill it in, so it draws a border on our rectangle. The extra parameter on the DrawGeometry() method is optional and specifies the width of the lines we are drawing. We set it to 1.0f, which means the lines will be one pixel wide. The last line calls the EndDraw() method to tell the render target that we are finished drawing.

Cleanup

As usual, we need to clean things up after ourselves when the program closes. So, we need to add an override of the base class's Dispose(bool) method. We've already done this a few times, so it should be somewhat familiar and is not shown here (a minimal sketch appears at the end of this article).

Our blue rectangle with a red border

As you might guess, there is a lot more you can do with drawing geometry. You can draw curved line segments and draw shapes with gradient brushes too, for example. You can also draw text on the screen using the render target's DrawText() method. But since we have limited space on these pages, we're going to look at how to draw bitmap images on the screen. These images are something that make up the graphics of most 2D games.

Summary

In this article, we first made a simple demo application that drew a rectangle on the screen. Then, we got a bit more ambitious and built a 2D tile-based game world.

Resources for Article:

Further resources on this subject:
- HTML5 Games Development: Using Local Storage to Store Game Data [Article]
- Flash Game Development: Creation of a Complete Tetris Game [Article]
- Interface Designing for Games in iOS [Article]
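For reference, the following is a minimal sketch of the Dispose(bool) override mentioned in the Cleanup section. It is not from the original text: it assumes the GameWindow base class follows the standard .NET dispose pattern with a protected virtual Dispose(bool) method, and it uses the member names from this article.

```csharp
// Hypothetical sketch; the base-class dispose pattern is assumed.
protected override void Dispose(bool disposing)
{
    if (disposing)
    {
        // Release our Direct2D resources in roughly the reverse order of creation.
        if (m_BrushBlue != null) m_BrushBlue.Dispose();
        if (m_BrushGreen != null) m_BrushGreen.Dispose();
        if (m_BrushRed != null) m_BrushRed.Dispose();
        if (m_Geometry != null) m_Geometry.Dispose();
        if (m_RenderTarget != null) m_RenderTarget.Dispose();
        if (m_Factory != null) m_Factory.Dispose();
    }

    // Let the base class release its own resources.
    base.Dispose(disposing);
}
```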
Enabling your new theme in Magento

Packt
18 Dec 2013
3 min read
(For more resources related to this topic, see here.)

After your new theme is in place, you can enable it in Magento. Log in to your Magento store's administration panel. Once you have logged in, navigate to System | Configuration.

From there, select the global configuration scope (labeled Default Config) you want to apply your new theme to, from the Current Configuration Scope dropdown at the top left of your screen.

Once this has loaded, navigate to the Design tab under GENERAL in the left-hand column and expand the Themes block in the right-hand column.

From here, you can tell Magento to use your new theme. The values given here correspond to the names you gave to the directories when creating your theme; the example uses responsive as the value.

Click on the Save Config button at the top right of your screen to save the changes.

Next, check that your new theme has been activated. Remember the styles.css file you added in the skin/frontend/default/responsive/css directory? The presence of that file tells Magento to load your new theme's CSS file instead of the default styles.css file from Magento's default package, so your store now has none of the original CSS styling. As such, the frontend of your Magento store will appear unstyled when you view it.

Overwriting the default Magento templates

Noticed the name of your Magento theme appearing next to the logo in the header of your store? You can overwrite the default header.phtml that's causing it by copying the contents of app/design/frontend/base/default/template/page/html/header.phtml into app/design/frontend/default/responsive/template/page/html/header.phtml. Open the file and find the following lines:

<?php if ($this->getIsHomePage()):?>
<h1 class="logo"><strong><?php echo $this->getLogoAlt() ?></strong><a href="<?php echo $this->getUrl('') ?>" title="<?php echo $this->getLogoAlt() ?>" class="logo"><img src="<?php echo $this->getLogoSrc() ?>" alt="<?php echo $this->getLogoAlt() ?>" /></a></h1>
<?php else:?>
<a href="<?php echo $this->getUrl('') ?>" title="<?php echo $this->getLogoAlt() ?>" class="logo"><strong><?php echo $this->getLogoAlt() ?></strong><img src="<?php echo $this->getLogoSrc() ?>" alt="<?php echo $this->getLogoAlt() ?>" /></a>
<?php endif?>

Replace them with these lines:

<a href="<?php echo $this->getUrl('') ?>" title="<?php echo $this->getLogoAlt() ?>" class="logo"><img src="<?php echo $this->getLogoSrc() ?>" alt="<?php echo $this->getLogoAlt() ?>" /></a>

Now if you save that file (and upload it to your server, if needed), you will see that the logo looks tidier.

That's it! Your basic responsive Magento theme is up and running.

Summary

Hopefully, after reading this article you will have a better understanding of how to enable your new theme in Magento.

Resources for Article:

Further resources on this subject:
Magento: Payment and shipping method [Article]
Categories and Attributes in Magento: Part 2 [Article]
Magento: Exploring Themes [Article]

Clojure for Domain-specific Languages - Design Concepts with Clojure

Packt
13 Dec 2013
3 min read
(For more resources related to this topic, see here.)

Every function is a little program

When I first started getting deep into Clojure development, my friend Tom Marble taught me a very good lesson with a single sentence. I'm not sure if he's the originator of this idea, but he told me to think of writing functions as though "every function is a small program". I'm not really sure what I thought about functions before I heard this, but it all made sense the very moment he told me this.

Why write a function as if it were its own program? Because both a function and a program are created to handle a specific set of problems, and this method of thinking allows us to break down our problems into a simpler group of problems. Each set of problems might only need a very limited collection of functions to solve them, so a function that fits only a single problem isn't really any different from a small program written to get the very same result. Some might even call this the Unix philosophy, in the sense that you're trying to build small, extendable, simple, and modular code.

A pure function

What are the benefits of a program-like function? There are many benefits to this approach of development, but the two clear advantages are that the debugging process can be simplified by the decoupling of tasks, and that our code becomes more modular. This approach also allows us to better build pure functions. A pure function isn't dependent on any variable outside the function: anything other than the arguments passed to the function can't be realized by a pure function. Because our program will cause side effects as a result of execution, not all of our functions can be truly pure. This doesn't mean we should forget about trying to develop program-like functions. Our code inherently becomes more modular because pure functions can survive on their own. This is key when we need to build flexible, extendable, and reusable code components.

Floor to roof development

Also known as bottom-up development, this is the concept of building basic low-level pieces of a program and then combining them to build the whole program. This approach leads to more reusable code that can be more easily tested, because each part of the program acts as an individual building block and doesn't require a large portion of the program to be completed to run a test.

Each function only does one thing

When a function is written to perform a specific task, that function shouldn't do anything unrelated to the original problem it's needed to solve. For example, if you were to write a function named parse-xml, the function should be able to act as a program that can only parse XML data. If the example function does anything else other than parse lines of XML input, it is probably badly designed and will cause confusion when trying to debug errors in our programs. This practice will help us keep our functions to a more reasonable size and can also help simplify the debugging process.
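The following is a minimal sketch (not from the original article) contrasting a pure, single-purpose function with an impure variant; the function names are invented for illustration:

;; Pure: the result depends only on the argument, and calling it
;; produces no side effects.
(defn celsius->fahrenheit
  [c]
  (+ 32 (* 9/5 c)))

;; Impure: the println side effect reaches outside the function,
;; so the function does more than its one job.
(defn log-and-convert!
  [c]
  (println "converting" c)
  (celsius->fahrenheit c))

;; Usage:
;; (celsius->fahrenheit 100) ;=> 212

Because celsius->fahrenheit is a little program of its own, it can be tested, reused, and debugged in complete isolation.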

Joomla! Template System

Packt
12 Dec 2013
9 min read
(For more resources related to this topic, see here.)

Every website has some content, and all kinds of information is provided on websites; not just text, but pictures, animations, and video clips—anything that communicates a site's body of knowledge. Visual design, however, is the appearance of the site. A good visual design is one that is high quality, appropriate, and relevant to the audience and the message it supports. As a large number of companies feel the need to redesign their site every few years, they need someone who can stand back and figure out what all that content should communicate. This could be you.

The basic principle of Joomla! (and other content management systems) is to separate the content from its visual form. Although this separation is not absolute, it is distinct enough to facilitate quick and efficient customization and deployment of websites. Changing the appearance of web pages built on a CMS comes down to installing and configuring a new template. A template is a set of files that determine the look and feel of your Joomla!-powered website. Templates include information about the general layout of the site and other content, such as graphics, colors, background images, headers, logos, typography, and footers. Each template is different, offering many choices for site owners to almost instantly change the look of their website. You can see the result of this separation of content from presentation by changing the default template (preinstalled in Joomla!).

For web designers, learning how to develop templates for content management systems such as Joomla! opens up lots of opportunities. Joomla! gives you big opportunities to build websites. Taking into account the evolution of web browsers, you are only limited by your imagination and skill set, thanks to a powerful and flexible CMS infrastructure. The ability to change or modify the content and appearance of web pages is important in today's online landscape.

What is a Joomla! template?

As in the case of traditional HTML templates, a Joomla! template is a collection of files (PHP, CSS, and JavaScript) that define the visual appearance of the site. Each template has variations on these files, and each template's files are different, but they have a common purpose: they control the placement of the elements on the screen and impact both the presentation of the contents and the usability of the functionality. In general, a template does not have any content, but it can include logo and background images.

The Joomla! template controls the way all information is shown on each page of the website. A template contains the stylesheets, locations, and layout information for the web content being displayed. Also, each installed component can have its own template to present content, which can overwrite the default template's CSS styles. A template alone cannot be called a website. Generally, people think of the template as the appearance of their site, but a template is only a structure (usually painted and colored) with active fields. It determines the appearance of individual elements (for example, font size, color, backgrounds, style, and spacing) and the arrangement of individual elements (including modules). In Joomla!, a single page view is generated by the HTML output of one component, selected modules, and the template.
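To make this more tangible, here is a minimal sketch of a template's index.php, the file that arranges component and module output on the page. This is an illustrative skeleton, not a listing from the article, and the module position name is an assumption:

<?php defined('_JEXEC') or die; // block direct access ?>
<!DOCTYPE html>
<html lang="<?php echo $this->language; ?>">
<head>
    <!-- Joomla! injects the page title, stylesheets, and scripts here -->
    <jdoc:include type="head" />
</head>
<body>
    <!-- Modules assigned to this (assumed) position are rendered here -->
    <jdoc:include type="modules" name="position-7" style="xhtml" />
    <!-- The HTML output of the single active component -->
    <jdoc:include type="component" />
</body>
</html>

The jdoc:include elements are the "active fields" mentioned above: the template supplies structure and styling, while Joomla! fills in the content.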
Unlike typical websites, where different components of the template are duplicated throughout the website pages, in the case of Joomla! there is just one assigned template that is responsible for displaying content for the entire site. Most CMSes, Joomla! included, have a modular structure that allows easy improvement of the site's appearance and functionality by installing and publishing modules in appropriate areas.

Search engines don't care about design, but people do. How well a template is designed and implemented is, therefore, largely responsible for the first impression made by a website, which later translates into the perception that people have of the entire website.

Joomla! released version 3.0.0 on September 27, 2012 with significant updates and major developments. With the adoption of the Twitter Bootstrap framework, Joomla! became the first major CMS to be mobile ready in both the visitor and administrator areas. Bootstrap (http://twitter.github.com/bootstrap) is an open source JavaScript framework developed by the team at Twitter. It is a combination of HTML, CSS, and JavaScript code designed to help build user interface components, and it was programmed to support both HTML5 and CSS3. As a result, page layout uses a 1152 px, 1132 px, 1116 px, or 1104 px grid, whereas previous versions of Joomla! templates used a 940 px wide layout.

The default template stylesheets in Joomla! 3.x are written with LESS, which is then compiled to generate the CSS files. Because of the use of Bootstrap, Joomla! 3.x will slowly begin to migrate toward jQuery in the core (instead of MooTools); MooTools is no longer the primary JavaScript library interface. Joomla! 3.x templates are not compatible with previous versions of Joomla! and have been developed as a separate product.

Templates – download for free, buy, or build your own

I also want to show you the sites where you can download templates for free or buy them; after all, this book is supposed to teach you how to create your own sites. There are a number of reasons for this. First, you might not have the time or the ability to design a template or create it from scratch for a customer. With a ready-made template, you can set up your website within minutes, because all you have to do is install or upload the template and begin adding content. By swapping the header image and changing the background color or image, you can transform a template with very little additional work.

Second, as you read the book you will get acquainted with the basic principles of modifying templates, and thus you will learn how to adapt ready-made solutions to the specific needs of your project. In general, you don't need to know much about PHP to use or tweak prebuilt templates; they can be customized by anyone with basic HTML/CSS knowledge. You can customize template elements to suit your needs or those of your client using a simple CSS editor, and your template can be configured by template parameters.

Third, learn from other template developers. Follow every move of your competitors: when they release an interesting, functional, and popular template, follow (but do not copy) them. We can all learn from others; projects by other people are probably one of the most obvious sources of inspiration. The following screenshot presents a few commercial templates for Joomla! 3.x built in 2013 by popular developers; bear in mind, however, that the line between inspiration and plagiarism is often very thin.
Free templates

Premade free templates are a great solution for those on a limited budget. Using the work of different developers is good experience, and it is also a great way to test a new web concept without investing much apart from your time. There are some decent free templates out there that may even be suitable for a small or medium production website. If you don't like a certain template after using it for a bit, ditching it doesn't mean any loss of investment.

Unfortunately, there are also some disadvantages to using free templates. These templates are not unique: several thousand web designers from around the world may have already downloaded and used the template you have chosen, so if you don't change the colors or layout a bit, your site will look like a clone, which would be quite unprofessional. Generally, free Joomla! templates don't have any important or useful features such as color variants, Google fonts, advanced typography, CSS compression options, or even a responsive layout.

On the downside of free templates, you also have the obvious quality issues. The majority of free templates are very basic and sometimes even buggy, and the support for free templates is almost always lacking. While there are a few free templates that are supported by their creators, they are under no obligation to provide full support if you need help adjusting the layout or fixing a problem due to an error. Realize that developers often use free templates to advertise their cost structures, expanded versions, or club subscriptions; that's why some developers require you to leave a link to their website at the bottom of your page if you use their free templates.

What was surprising to me was that not all free templates for Joomla! 3.x are mobile friendly, despite the fact that the CMS itself is built with Responsive Web Design (RWD) in mind. In most cases, this was presumably intended by their creators, as with JoomlaShine or Globbersthemes.

The following is a list of resources from where you can download different kinds of free templates:

www.joomla24.com
www.joomlaos.de
www.siteground.com/joomla-templates.htm
www.bestofjoomla.com

Quite often, popular developers publish free templates on their websites; in this way they promote their brand and other products, such as modules or commercial versions of templates. Those templates usually have better quality and features than others. I suggest that you download free templates only from reliable sources. Approach templates shared on discussion forums or blogs with a great deal of care, because there is a high probability that the template code has been deliberately modified; a large proportion of templates available for free are in fact packaged with malicious code.

Summary

Hopefully, after reading this article you will have a better understanding of the features of the Joomla! Template Manager and the types of problems it is able to solve.

Resources for Article:

Further resources on this subject:
Installing and Configuring Joomla! on Local and Remote Servers [Article]
Joomla! 1.5: Installing, Creating, and Managing Modules [Article]
Tips and Tricks for Joomla! Multimedia [Article]

Creating Blog Content in WordPress

Packt
25 Nov 2013
18 min read
(For more resources related to this topic, see here.)

Posting on your blog

The central activity you'll be doing with your blog is adding posts. A post is like an article in a magazine; it's got a title, content, and an author (in this case you, though WordPress allows multiple authors to contribute to a blog). If a blog is like an online diary, every post is an entry in that diary. A blog post also has a lot of other information attached to it, such as a date, excerpt, tags, and categories. In this section, you will learn how to create a new post and what kind of information to attach to it.

Adding a simple post

Let's review the process of adding a simple post to your blog. Whenever you want to add content or carry out a maintenance process on your WordPress website, you have to start by logging in to the WP Admin (WordPress Administration panel) of your site. To get to the admin panel, just point your web browser to http://yoursite.com/wp-admin. Remember that if you have installed WordPress in a subfolder (for example, blog), your URL has to include the subfolder (that is, http://yoursite.com/blog/wp-admin).

When you first log in to the WP Admin, you'll be at the Dashboard. The Dashboard has a lot of information on it, so don't worry about that right now. The quickest way to get to the Add New Post page at any time is to click on + New and then the Post link in the bar at the top of the page.

To add a new post to your site quickly, all you have to do is:

1. Type a title into the text field under Add New Post (for example, Making Lasagne).
2. Type the text of your post in the content box. Note that the default view is Visual, but you actually have a choice of the Text view as well.
3. Click on the Publish button, which is at the far right. Note that you can choose to save a draft or preview your post as well.

Once you click on the Publish button, you have to wait while WordPress performs its magic. You'll see yourself still on the Edit Post screen, but now a message will have appeared telling you that your post was published, along with a View post link. If you view the front page of your site, you'll see that your new post has been added at the top (newest posts are always at the top).

Common post options

Now that we've reviewed the basics of adding a post, let's investigate some of the other options on the Add New Post and Edit Post pages. In this section we'll look at the most commonly used options, and in the next section we'll look at the more advanced options.

Categories and tags

Categories and tags are two types of information that you can add to a blog post. We use them to organize the information in your blog by topic and content (rather than just by, say, date), and to help visitors find what they are looking for on your blog.

Categories are primarily used for structural organizing. They can be hierarchical, meaning a category can be a parent of another category. A relatively busy blog will probably have at least 10 categories, but probably not more than 15 or 20. Each post in such a blog is likely to have from one up to maybe four categories assigned to it. For example, a blog about food and cooking might have these categories: Cooking Adventures, In The Media, Ingredients, Opinion, Recipes Found, Recipes Invented, and Restaurants. Of course, the numbers mentioned are just suggestions; you can create and assign as many categories as you like, and the way you structure your categories is entirely up to you as well. There are no true rules regarding this in the WordPress world, just guidelines like these.
Tags are primarily used as shorthand for describing the topics covered in a particular blog post. A relatively busy blog will have anywhere from 15 to even 100 tags in use, and each post is likely to have 3 to 10 tags assigned to it. For example, a post on the food blog about a recipe for butternut squash soup may have these tags: soup, vegetarian, autumn, hot, and easy. Again, you can create and assign as many tags as you like.

Let's add a new post to the blog. After you give it a title and content, let's add tags and categories. To add tags, just type your list of tags into the Tags box on the right, separated by commas, and then click on the Add button. The tags you just typed in will appear below the text field with little x buttons next to them; you can click on an x button to delete a tag. Once you've used some tags in your blog, you'll be able to click on the Choose from the most used tags link in this box so that you can easily re-use tags.

Categories work a bit differently than tags. Once you get your blog going, you'll usually just check the boxes next to existing categories in the Categories box. In this case, as we don't have any existing categories, we'll have to add one or two. In the Categories box on the right, click on the + Add New Category link, type your category into the text field, and click on the Add New Category button. Your new category will show up in the list, already checked.

If in the future you want to add a category that needs a parent category, select — Parent Category — from the pull-down menu before clicking on the Add New Category button. If you want to manage more details about your categories (move them around, rename them, assign parent categories, and assign descriptive text), you can do so on the Categories page.

Click on the Publish button, and you're done (you can instead choose to schedule a post; we'll explore that in detail in a few pages). When you look at the front page of your site, you'll see your new post on top, your new category in the sidebar, and the tags and category (that you chose for your post) listed under the post itself.

Images in your posts

Almost every good blog post needs an image! An image gives the reader an instant idea of what the post is about, and it draws people's attention as well. WordPress makes it easy to add an image to your post, control default image sizes, make minor edits to that image, and designate a featured image for your post.

Adding an image to a post

Luckily, WordPress makes adding images to your content very easy. Let's add an image to the post we just created. You can click on Edit underneath your post on the front page of your site to get there quickly. Alternatively, go back to the WP Admin, open Posts in the main menu, and then click on the post's title.

To add an image to a post, first you'll need to have that image on your computer, or know the exact URL pointing to the image if it's already online. Before you get ready to upload an image, make sure that your image is optimized for the Web: huge files will upload slowly and will slow down the viewing of your site. Just to give you a good example here, I'm using a photo of my own so I don't have to worry about any copyright issues (always make sure to use only images that you have the right to use; copyright infringement online is a serious problem, to say the least). I know it's on the desktop of my computer.
Once you have a picture on your computer and know where it is, carry out the following steps to add the photo to your blog post:

1. Click on the Add Media button, which is right above the content box and below the title box.
2. The box that appears allows you to do a number of different things regarding the media you want to include in your post. The most user-friendly feature here, however, is the drag-and-drop support: just drag the image from your desktop and drop it into the center area of the page, labeled Drop files anywhere to upload. Immediately after dropping the image, the uploader bar will show the progress of the operation, and when it's done, you'll be able to do some final tuning up.
3. The fields that are important right now are Title, Alt Text, Alignment, Link To, and Size. Title is a description of the image; Alt Text is a phrase that will appear instead of the image in case the file goes missing or any other problems present themselves; Alignment tells the image whether to have text wrap around it and whether it should be aligned right, left, or center; Link To instructs WordPress whether or not to link the image to anything (a common solution is to select None); and Size is the size of the image.
4. Once you have all of the above filled out, click on Insert into post. The box will disappear, and your image will show up in the post—right where your cursor was prior to clicking on the Add Media button—on the edit page itself (in the visual editor, that is; if you're using the text editor, the HTML code of the image will be displayed instead).
5. Now, click on the Update button, and go and look at the front page of your site again. There's your image!

Controlling default image sizes

You may be wondering about those image sizes. What if you want bigger or smaller thumbnails? Whenever you upload an image, WordPress creates three versions of that image for you. You can set the pixel dimensions of those three versions by opening Settings in the main menu and then clicking on Media. This takes you to the Media Settings page, where you can specify the sizes of uploaded images for:

Thumbnail size
Medium size
Large size

If you change the dimensions on this page and click on the Save Changes button, only images you upload in the future will be affected. Images you've already uploaded to the site will have had their thumbnail, medium, and large versions created already using the old settings. It's a good idea to decide early on what you want your three media sizes to be, so you can set them and have them applied to all images right from the start.

Another thing to consider when uploading images is the whole craze with HiDPI displays, also called Retina displays. Currently, WordPress is in a kind of transitional phase with images and modern display technology; the Retina Ready functionality was introduced quite recently, in WordPress 3.5. In short, if you want to make your images Retina-compatible (meaning that they look good on iPads and other devices with HiDPI screens), you should upload the images at twice the dimensions you plan to display them in. For example, if you want your image to be presented as 800 pixels wide and 600 pixels high, upload it as 1,600 pixels wide and 1,200 pixels high. WordPress will manage to display it properly anyway, and whoever visits your site from a modern device will see a high-definition version of the image. In future versions, WordPress will surely provide a more managed way of handling Retina-compatible images.
Editing an uploaded image

As of WordPress 2.9, you can make minor edits to images you've uploaded. In fact, every image that has been previously uploaded to WordPress can be edited. In order to do this, go to the Media Library by clicking on the Media button in the main sidebar. What you'll see is a standard WordPress listing (similar to the one we saw while working with posts) presenting all media files and allowing you to edit each one. When you click on the Edit link and then the Edit Image button on the subsequent screen, you'll enter the Edit Media section. Here, you can perform a number of operations to make your image just perfect. As it turns out, WordPress does a good enough job with simple image tuning that you don't really need expensive software such as Photoshop for this. Among the possibilities you'll find cropping, rotating, and flipping vertically and horizontally.

For example, you can use your mouse to draw a selection box on the image. On the right, in the box marked Image Crop, you'll see the pixel dimensions of your selection. Click on the Crop icon (top left), then the Thumbnail radio button (on the right), and then Save (just below your photo). You now have a new thumbnail! Of course, you can adjust any other version of your image just by making a different selection prior to hitting the Save button. Play around a little to become familiar with the details.

Designating a featured image

As of WordPress 2.9, you can designate a single image that represents your post. This is referred to as the featured image. Some themes will make use of this, and some will not. The default theme we've been using, named Twenty Thirteen, displays the featured image right above the post on the front page. Depending on the theme you're using, its behavior with featured images can vary, but in general, every modern theme supports them in one way or another.

In order to set a featured image, go to the Edit Post screen. In the sidebar you'll see a box labeled Featured Image; just click on the Set featured image link. After doing so, you'll see a pop-up window very similar to the one we used while uploading images. Here, you can either upload a completely new image or select an existing image by clicking on it. All you have to do now is click on the Set featured image button in the bottom right corner. After completing the operation, you can see what your new image looks like on the front page. Also, keep in mind that WordPress uses featured images in multiple places, not only on the front page; as mentioned above, much of this behavior depends on your current theme.

Using the visual editor versus the text editor

WordPress comes with a visual editor, otherwise known as a WYSIWYG editor (pronounced wissy-wig, which stands for What You See Is What You Get). This is the default editor for typing and editing your posts. If you're comfortable with HTML, you may prefer to write and edit your posts using the text editor—particularly useful if you want to add special content or styling. To switch from the rich text editor to the text editor, click on the Text tab next to the Visual tab at the top of the content box. You'll see your post in all its raw HTML glory, and you'll get a new set of buttons that lets you quickly bold and italicize text, as well as add link code, image code, and so on. You can make changes and swap back and forth between the tabs to see the result.

Even though the text editor allows you to use some HTML elements, it doesn't offer fully fledged HTML support. For instance, using the <p> tags is not necessary in the text editor, as they will be stripped by default; in order to create a new paragraph, all you have to do is press the Enter key twice. That being said, the text editor is currently the only way to use HTML tables in WordPress (within posts and pages). You can place your table content inside the <table><tr><td> tags and WordPress won't alter it in any way, effectively allowing you to create the exact table you want, as the sketch below shows.
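For example, a minimal table typed straight into the Text editor might look like the following (the content here is invented for illustration):

<table>
  <tr>
    <th>Recipe</th>
    <th>Prep time</th>
  </tr>
  <tr>
    <td>Butternut squash soup</td>
    <td>45 minutes</td>
  </tr>
</table>

WordPress stores and outputs this markup untouched, so what you type is exactly the table your visitors get.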
Another thing the text editor is commonly used for is introducing custom HTML parameters in the <img /> and <a> tags, and also custom CSS classes in other popular tags. Some content creators actually prefer working with the text editor rather than the visual editor because it gives them much more control and more certainty about the way their content is going to be presented on the frontend.

Lead and body

One of the many interesting publishing features WordPress has to offer is the concept of the lead and the body of the post. This may sound like a strange thing, but it's actually quite simple. When you're publishing a new post, you don't necessarily want to display its whole contents right away on the front page. A much more user-friendly approach is to display only the lead, and then display the complete post under its individual URL.

Achieving this in WordPress is very simple. All you have to do is use the Insert More Tag button available in the visual editor (or the more button in the text editor). Simply place your cursor exactly where you want to break your post (the text before the cursor will become the lead) and then click on the Insert More Tag button. An alternative way of using this tag is to switch to the text editor and input the tag manually, which is <!--more-->. Both approaches produce the same result. Clicking on the main Update button will save the changes. On the front page, most WordPress themes display such posts by presenting the lead along with a Continue reading link, and then the whole post (both the lead and the rest of the post) is displayed under the post's individual URL.

Drafts, pending articles, timestamps, and managing posts

There are four additional, simple but common items I'd like to cover in this section: drafts, pending articles, timestamps, and managing posts.

Drafts

WordPress gives you the option to save a draft of your post so that you don't have to publish it right away but can still save your work. If you've started writing a post and want to save a draft, just click on the Save Draft button at the right (in the Publish box), instead of the Publish button. Even if you don't click on the Save Draft button, WordPress will attempt to save a draft of your post for you about once a minute. You'll see this in the area just below the content box: the text will say Saving Draft... and then show the time of the last draft saved. At this point, after a manual save or an autosave, you can leave the Edit Post page and do other things. You'll be able to access all of your draft posts from the Dashboard or from the Edit Posts page. In essence, drafts are meant to hold your work in progress: all the articles that haven't been finished yet, or haven't even been started yet, and everything in between.
Pending articles

Pending articles is a functionality that's a lot more helpful to people working with multi-author blogs than with single-author blogs. In a bigger publishing structure, there are individuals responsible for different areas of the publishing process. WordPress, being a quality tool, supports such a structure by providing a way to save articles as Pending Review. In an editor-author relationship, if an editor sees a post marked as Pending Review, they know that they should have a look at it and prepare it for final publication.

That's it for the theory; now here's how to do it. While creating a new post, click on the Edit link right next to the Status: Draft label. You'll be presented with a new drop-down menu, from which you can select Pending Review, and then click on the OK button. Now just click on the Save as Pending button that will appear in place of the old Save Draft button, and you have a shiny new article that's pending review.

Timestamps

WordPress will also let you alter the timestamp of your post. This is useful if you are writing a post today that you wish you'd published yesterday, or if you're writing a post in advance and don't want it to show up until the right day. By default, the timestamp will be set to the moment you publish your post. To change it, just find the Publish box and click on the Edit link (next to the calendar icon and Publish immediately); fields will show up with the current date and time for you to change. Change the details, click on the OK button, and then click on Publish to publish your post (or save a draft).

Managing posts

If you want to see a list of your posts so that you can easily skim and manage them, just go to the WP Admin and navigate to Posts in the main menu. There are many things you can do on this page, as with every management page in the WP Admin.
Exploring streams

Packt
22 Nov 2013
15 min read
(For more resources related to this topic, see here.)

According to Bjarne Stroustrup in his book The C++ Programming Language, Third Edition:

Designing and implementing a general input/output facility for a programming language is notoriously difficult... An I/O facility should be easy, convenient, and safe to use; efficient and flexible; and, above all, complete.

It shouldn't surprise anyone that a design team focused on providing efficient and easy I/O has delivered such a facility through Node. Through a symmetrical and simple interface, which handles data buffers and stream events so that the implementer does not have to, Node's Stream module is the preferred way to manage asynchronous data streams for both internal modules and, hopefully, for the modules developers will create.

A stream in Node is simply a sequence of bytes. At any time, a stream contains a buffer of bytes, and this buffer has a zero or greater length. Because each character in a stream is well defined, and because every type of digital data can be expressed in bytes, any part of a stream can be redirected, or "piped", to any other stream, different chunks of the stream can be sent to different handlers, and so on. In this way, stream input and output interfaces are both flexible and predictable and can be easily coupled.

Digital streams are well described using the analogy of fluids, where individual bytes (drops of water) are being pushed through a pipe. In Node, streams are objects representing data flows that can be written to and read from asynchronously. The Node philosophy is a non-blocking flow, I/O is handled via streams, and so the design of the Stream API naturally duplicates this general philosophy. In fact, there is no other way of interacting with streams except in an asynchronous, evented manner—you are prevented, by design, from blocking I/O.

Five distinct base classes are exposed via the abstract Stream interface: Readable, Writable, Duplex, Transform, and PassThrough. Each base class inherits from EventEmitter, which we know of as an interface to which event listeners and emitters can be bound.

As we will learn, and here will emphasize, the Stream interface is an abstract interface. An abstract interface functions as a kind of blueprint or definition, describing the features that must be built into each constructed instance of a Stream object. For example, a readable stream implementation is required to implement a public read method which delegates to the interface's internal _read method.

In general, all stream implementations should follow these guidelines:

As long as data exists to send, write to a stream until that operation returns false, at which point the implementation should wait for a drain event, indicating that the buffered stream data has emptied.
Continue to call read until a null value is received, at which point wait for a readable event prior to resuming reads.

Several Node I/O modules are implemented as streams: network sockets, file readers and writers, stdin and stdout, zlib, and so on. Similarly, when implementing a readable data source, or data reader, one should implement that interface as a Stream interface.

It is important to note that as of Node 0.10.0 the Stream interface changed in some fundamental ways. The Node team has done its best to implement backwards-compatible interfaces, such that (most) older programs will continue to function without modification. In this article we will not spend any time discussing the specific features of this older API, focusing instead on the current (and future) design. The reader is encouraged to consult Node's online documentation for information on migrating older programs.
Implementing readable streams

Streams producing data that another process may have an interest in are normally implemented using a Readable stream. A Readable stream saves the implementer all the work of managing the read queue, handling the emitting of data events, and so on. To create a Readable stream:

var stream = require('stream');
var readable = new stream.Readable({
    encoding : "utf8",
    highWaterMark : 16000,
    objectMode : true
});

As previously mentioned, Readable is exposed as a base class, which can be initialized through three options:

encoding: Decode buffers into the specified encoding, defaulting to UTF-8.
highWaterMark: Number of bytes to keep in the internal buffer before ceasing to read from the data source. The default is 16 KB.
objectMode: Tell the stream to behave as a stream of objects instead of a stream of bytes, such as a stream of JSON objects instead of the bytes in a file. Default false.

In the following example we create a mock Feed object whose instances will inherit the Readable stream interface. Our implementation need only implement the abstract _read method of Readable, which will push data to a consumer until there is nothing more to push, at which point it triggers the Readable stream to emit an "end" event by pushing a null value:

var Feed = function(channel) {
    var readable = new stream.Readable({
        encoding : "utf8"
    });
    var news = [
        "Big Win!",
        "Stocks Down!",
        "Actor Sad!"
    ];
    readable._read = function() {
        if(news.length) {
            return readable.push(news.shift() + "\n");
        }
        readable.push(null);
    };
    return readable;
}

Now that we have an implementation, a consumer might want to instantiate the stream and listen for stream events. Two key events are readable and end. The readable event is emitted as long as data is being pushed to the stream. It alerts the consumer to check for new data via the read method of Readable. Note again how the Readable implementation must provide a private _read method, which services the public read method exposed to the consumer API. The end event will be emitted whenever a null value is passed to the push method of our Readable implementation.

Here we see a consumer using these methods to display new stream data, providing a notification when the stream has stopped sending data:

var feed = new Feed();
feed.on("readable", function() {
    var data = feed.read();
    data && process.stdout.write(data);
});
feed.on("end", function() {
    console.log("No more news");
});

Similarly, we could implement a stream of objects through the use of the objectMode option:

var readable = new stream.Readable({
    objectMode : true
});
var prices = [
    { price : 1 },
    { price : 2 }
];
...
readable.push(prices.shift());
// > { price : 1 }
// > { price : 2 }

Here we see that each read event is receiving an object, rather than a buffer or string.

Finally, the read method of a Readable stream can be passed a single argument indicating the number of bytes to be read from the stream's internal buffer. For example, if it was desired that a file should be read one byte at a time, one might implement a consumer using a routine similar to:

readable.push("Sequence of bytes");
...
feed.on("readable", function() {
    var character;
    while(character = feed.read(1)) {
        console.log(character);
    };
});
// > S
// > e
// > q
// > ...

Here it should be clear that the Readable stream's buffer was filled with a number of bytes all at once, but was read from discretely.
Pushing and pulling

We have seen how a Readable implementation uses push to populate the stream buffer for reading. When designing these implementations, it is important to consider how volume is managed at either end of the stream. Pushing more data into a stream than can be read can lead to complications around exceeding available space (memory). At the consumer end, it is important to maintain awareness of termination events and how to deal with pauses in the data stream.

One might compare the behavior of data streams running through a network with that of water running through a hose. As with water through a hose, if a greater volume of data is being pushed into the read stream than can be efficiently drained out of the stream at the consumer end through read, a great deal of back pressure builds, causing a data backlog to begin accumulating in the stream object's buffer. Because we are dealing with strict mathematical limitations, read simply cannot be compelled to release this pressure by reading more quickly—there may be a hard limit on available memory space, or some other limitation. As such, memory usage can grow dangerously high, buffers can overflow, and so forth.

A stream implementation should therefore be aware of, and respond to, the response from a push operation. If the operation returns false, this indicates that the implementation should cease reading from its source (and cease pushing) until the next _read request is made. In conjunction with the above, if there is no more data to push but more is expected in the future, the implementation should push an empty string (""), which adds no data to the queue but does ensure a future readable event. A sketch of this pattern appears at the end of this section.

While the most common treatment of a stream buffer is to push to it (queuing data in a line), there are occasions where one might want to place data on the front of the buffer (jumping the line). Node provides an unshift operation for these cases, whose behavior is identical to push, apart from the aforementioned difference in buffer placement.
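The following is a minimal sketch (not taken from the article) of a _read implementation that respects the return value of push; the source array stands in for any real data origin:

var stream = require('stream');
var readable = new stream.Readable({ encoding : "utf8" });
var source = ["a", "b", "c"];

readable._read = function() {
    while(source.length) {
        // push returns false once the internal buffer is full;
        // stop pushing and wait for the next _read call.
        if(!readable.push(source.shift())) {
            return;
        }
    }
    readable.push(null); // no more data will ever arrive
};

readable.pipe(process.stdout);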
Writable streams

A Writable stream is responsible for accepting some value (a stream of bytes, a string) and writing that data to a destination. Streaming data into a file container is a common use case. To create a Writable stream:

var stream = require('stream');
var writable = new stream.Writable({
    highWaterMark : 16000,
    decodeStrings : true
});

The Writable stream constructor can be instantiated with two options:

highWaterMark: The maximum number of bytes the stream's buffer will accept prior to returning false on writes. The default is 16 KB.
decodeStrings: Whether to convert strings into buffers before writing. Default is true.

As with Readable streams, custom Writable stream implementations must implement a _write handler, which will be passed the arguments sent to the write method of instances. One should think of a Writable stream as a data target, such as for a file you are uploading. Conceptually this is not unlike the implementation of push in a Readable stream, where one pushes data until the data source is exhausted, passing null to terminate reading. For example, here we write 100 bytes to stdout:

var stream = require('stream');
var writable = new stream.Writable({
    decodeStrings : false
});
writable._write = function(chunk, encoding, callback) {
    console.log(chunk);
    callback();
}
var w = writable.write(new Buffer(100));
writable.end();
console.log(w); // Will be `true`

There are two key things to note here. First, our _write implementation fires the callback function immediately after writing, a callback that is always present, regardless of whether the instance write method is passed a callback directly. This call is important for indicating the status of the write attempt, whether a failure (error) or a success. Second, the call to write returned true. This indicates that the internal buffer of the Writable implementation has been emptied after executing the requested write.

What if we sent a very large amount of data, enough to exceed the default size of the internal buffer? Modifying the above example, the following would return false:

var w = writable.write(new Buffer(16384));
console.log(w); // Will be 'false'

The reason this write returns false is that it has reached the highWaterMark option's default value of 16 KB (16 * 1024). If we changed this value to 16383, write would again return true (or one could simply increase its value).

What should you do when write returns false? You should certainly not continue to send data! Returning to our metaphor of water in a hose: when the stream is full, one should wait for it to drain prior to sending more data. Node's Stream implementation will emit a drain event whenever it is safe to write again. When write returns false, listen for the drain event before sending more data.

Putting together what we have learned, let's create a Writable stream with a highWaterMark value of 10 bytes. We will send a buffer containing more than 10 bytes (composed of A characters) to this stream, triggering a drain event, at which point we write a single Z character. It should be clear from this example that Node's Stream implementation is managing the buffer overflow of our original payload, warning the original write method of this overflow, performing a controlled depletion of the internal buffer, and notifying us when it is safe to write again:

var stream = require('stream');
var writable = new stream.Writable({
    highWaterMark : 10
});
writable._write = function(chunk, encoding, callback) {
    process.stdout.write(chunk);
    callback();
}
writable.on("drain", function() {
    writable.write("Z\n");
});
var buf = new Buffer(20, "utf8");
buf.fill("A");
console.log(writable.write(buf.toString())); // false

The result should be a string of 20 A characters, followed by false, then followed by the character Z.

The fluid data in a Readable stream can be easily redirected to a Writable stream. For example, the following code will take any data sent by a terminal (stdin is a Readable stream) and pass it to the destination Writable stream, stdout:

process.stdin.pipe(process.stdout);

Whenever a Writable stream is passed to a Readable stream's pipe method, a pipe event will fire. Similarly, when a Writable stream is removed as a destination for a Readable stream, the unpipe event fires. To remove a pipe, use the following:

unpipe(destination stream)
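As a small usage sketch (the file name here is an assumption), a pipe can be attached and later detached like this:

var fs = require('fs');
var reader = fs.createReadStream("./data.txt"); // hypothetical file
reader.pipe(process.stdout);   // fires the 'pipe' event on process.stdout
// Later, stop forwarding data to stdout:
reader.unpipe(process.stdout); // fires the 'unpipe' event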
Duplex streams

A duplex stream is both readable and writable. For instance, a TCP server created in Node exposes a socket that can be both read from and written to:

var stream = require("stream");
var net = require("net");
net
    .createServer(function(socket) {
        socket.write("Go ahead and type something!");
        socket.on("readable", function() {
            process.stdout.write(this.read());
        });
    })
    .listen(8080);

When executed, this code creates a TCP server that can be connected to via Telnet:

telnet 127.0.0.1 8080

Upon connection, the connecting terminal will print out Go ahead and type something! (writing to the socket). Any text entered in the connecting terminal will be echoed to the stdout of the terminal running the TCP server (reading from the socket). This implementation of a bi-directional (duplex) communication protocol demonstrates clearly how independent processes can form the nodes of a complex and responsive application, whether communicating across a network or within the scope of a single process.

The options sent when constructing a Duplex instance merge those sent to Readable and Writable streams, with no additional parameters. Indeed, this stream type simply assumes both roles, and the rules for interacting with it follow the rules for the interactive mode being used. As a Duplex stream assumes both read and write roles, any implementation is required to implement both _write and _read methods, again following the standard implementation details given for the relevant stream type.

Transforming streams

On occasion, stream data needs to be processed, often in cases where one is writing some sort of binary protocol or performing another "on the fly" data transformation. A Transform stream is designed for this purpose, functioning as a Duplex stream that sits between a Readable stream and a Writable stream.

A Transform stream is initialized using the same options used to initialize a typical Duplex stream. Where Transform differs from a normal Duplex stream is in its requirement that the custom implementation merely provide a _transform method, in place of the _write and _read method requirement. The _transform method receives three arguments: the sent buffer, an optional encoding argument, and finally a callback which _transform is expected to call when the transformation is complete:

_transform = function(buffer, encoding, cb) {
    var transformation = "...";
    this.push(transformation);
    cb();
}

Let's imagine a program that wishes to convert ASCII (American Standard Code for Information Interchange) codes into characters, receiving input from stdin. We would simply pipe our input to a Transform stream, then pipe its output to stdout:

var stream = require('stream');
var converter = new stream.Transform();
converter._transform = function(num, encoding, cb) {
    this.push(String.fromCharCode(new Number(num)) + "\n");
    cb();
}
process.stdin.pipe(converter).pipe(process.stdout);

Interacting with this program might produce an output resembling the following:

65 A
66 B
256 Ā
257 ā

Using PassThrough streams

This sort of stream is a trivial implementation of a Transform stream, which simply passes received input bytes through to an output stream. This is useful if one doesn't require any transformation of the input data, and simply wants to easily pipe a Readable stream to a Writable stream. PassThrough streams have benefits similar to JavaScript's anonymous functions, making it easy to assert minimal functionality without too much fuss. For example, it is not necessary to implement an abstract base class, as one does for the _read method of a Readable stream.
Consider the following use of a PassThrough stream as an event spy:

var fs = require('fs');
var stream = require('stream');
var spy = new stream.PassThrough();
spy.on('end', function() {
    console.log("All data has been sent");
});
fs.createReadStream("./passthrough.js").pipe(spy).pipe(process.stdout);

Summary

As we have learned, Node's designers have succeeded in creating a simple, predictable, and convenient solution to the very difficult problem of enabling efficient I/O between disparate sources and targets. Its abstract Stream interface facilitates the instantiation of consistent readable and writable interfaces, and the extension of this interface into HTTP requests and responses, the filesystem, child processes, and other data channels makes stream programming with Node a pleasant experience.

Resources for Article:

Further resources on this subject:
So, what is Node.js? [Article]
Getting Started with Zombie.js [Article]
So, what is KineticJS? [Article]

Our First Machine Learning Method – Linear Classification

Packt
21 Nov 2013
10 min read
(For more resources related to this topic, see here.)

To get a grip on the problem of machine learning in scikit-learn, we will start with a very simple machine learning problem: we will try to predict the Iris flower species using only two attributes: sepal width and sepal length. This is an instance of a classification problem, where we want to assign a label (a value taken from a discrete set) to an item according to its features.

Let's first build our training dataset—a subset of the original sample, represented by the two attributes we selected and their respective target values. After importing the dataset, we will randomly select about 75 percent of the instances and reserve the remaining ones (the evaluation dataset) for evaluation purposes (we will see later why we should always do that):

>>> from sklearn.cross_validation import train_test_split
>>> from sklearn import preprocessing
>>> # Get dataset with only the first two attributes
>>> X, y = X_iris[:, :2], y_iris
>>> # Split the dataset into a training and a testing set
>>> # Test set will be the 25% taken randomly
>>> X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=33)
>>> print X_train.shape, y_train.shape
(112, 2) (112,)
>>> # Standardize the features
>>> scaler = preprocessing.StandardScaler().fit(X_train)
>>> X_train = scaler.transform(X_train)
>>> X_test = scaler.transform(X_test)

The train_test_split function automatically builds the training and evaluation datasets, randomly selecting the samples. Why not just select the first 112 examples? Because the instance ordering within the sample could matter, and the first instances could be different from the last ones. In fact, if you look at the Iris dataset, the instances are ordered by their target class, and this implies that the proportion of 0 and 1 classes would be higher in the new training set, compared with that of the original dataset. We always want our training data to be a representative sample of the population it represents.

The last three lines of the previous code modify the training set in a process usually called feature scaling. For each feature, calculate the average, subtract the mean value from the feature value, and divide the result by the standard deviation. After scaling, each feature will have a zero average and a standard deviation of one. This standardization of values (which does not change their distribution, as you could verify by plotting the X values before and after scaling) is a common requirement of machine learning methods, to avoid features with large values weighing too much on the final results.

Now, let's take a look at how our training instances are distributed in the two-dimensional space generated by the learning features. pyplot, from the matplotlib library, will help us with this:

>>> import matplotlib.pyplot as plt
>>> colors = ['red', 'greenyellow', 'blue']
>>> for i in xrange(len(colors)):
>>>     xs = X_train[:, 0][y_train == i]
>>>     ys = X_train[:, 1][y_train == i]
>>>     plt.scatter(xs, ys, c=colors[i])
>>> plt.legend(iris.target_names)
>>> plt.xlabel('Sepal length')
>>> plt.ylabel('Sepal width')

The scatter function simply plots the first feature value (sepal length) for each instance versus its second feature value (sepal width) and uses the target class values to assign a different color to each class. This way, we can have a pretty good idea of how these attributes contribute to determining the target class.
Now, let's take a look at how our training instances are distributed in the two-dimensional space generated by the learning features. pyplot, from the matplotlib library, will help us with this:

>>> import matplotlib.pyplot as plt
>>> colors = ['red', 'greenyellow', 'blue']
>>> for i in xrange(len(colors)):
...     xs = X_train[:, 0][y_train == i]
...     ys = X_train[:, 1][y_train == i]
...     plt.scatter(xs, ys, c=colors[i])
>>> plt.legend(iris.target_names)
>>> plt.xlabel('Sepal length')
>>> plt.ylabel('Sepal width')
>>> plt.show()

The scatter function simply plots the first feature value (sepal length) of each instance against its second feature value (sepal width), and uses the target class values to assign a different color to each class. This way, we can get a pretty good idea of how these attributes contribute to determining the target class.

The following screenshot shows the resulting plot. Looking at it, we can see that the separation between the red dots (corresponding to the Iris setosa) and the green and blue dots (corresponding to the two other Iris species) is quite clear, while separating green from blue dots seems a very difficult task, given the two features available. This is a very common scenario: one of the first questions we want to answer in a machine learning task is whether the feature set we are using is actually useful for the task we are solving, or whether we need to add new attributes or change our method.

Given the available data, let's, for a moment, redefine our learning task: suppose we aim to predict whether a given Iris flower instance is a setosa or not. We have converted our problem into a binary classification task (that is, we only have two possible target classes). If we look at the picture, it seems that we could draw a straight line that correctly separates both sets (perhaps with the exception of one or two dots, which could lie on the incorrect side of the line). This is exactly what our first classification method, linear classification models, tries to do: build a line (or, more generally, a hyperplane in the feature space) that best separates both target classes, and use it as a decision boundary (that is, class membership depends on which side of the hyperplane the instance falls).

To implement linear classification, we will use the SGDClassifier from scikit-learn. SGD stands for Stochastic Gradient Descent, a very popular numerical procedure to find the local minimum of a function (in this case, the loss function, which measures how far every instance is from our boundary). The algorithm will learn the coefficients of the hyperplane by minimizing the loss function.

To use any method in scikit-learn, we must first create the corresponding classifier object, initialize its parameters, and train the model to fit the training data. You will see as you advance that this procedure is pretty much the same for what initially seem very different tasks:

>>> from sklearn.linear_model import SGDClassifier
>>> clf = SGDClassifier()
>>> clf.fit(X_train, y_train)

The SGDClassifier initialization function accepts several parameters. For the moment, we will use the default values, but keep in mind that these parameters can be very important, especially when you face more real-world tasks, where the number of instances (or even the number of attributes) could be very large.

The fit function is probably the most important one in scikit-learn. It receives the training data and the training classes, and builds the classifier. Every supervised learning method in scikit-learn implements this function.

What does the classifier look like in our linear model method? As we have already said, every future classification decision depends just on a hyperplane, so that hyperplane is our model. The coef_ attribute of the clf object (consider, for the moment, only the first row of the matrix) holds the coefficients of the linear boundary, and the intercept_ attribute holds the point of intersection of the line with the y axis.
Let's print them:

>>> print clf.coef_
[[-28.53692691  15.05517618]
 [ -8.93789454  -8.13185613]
 [ 14.02830747 -12.80739966]]
>>> print clf.intercept_
[-17.62477802  -2.35658325  -9.7570213 ]

Indeed, in the real plane, with these three values we can draw a line, represented by the following equation:

-17.62477802 - 28.53692691 * x1 + 15.05517618 * x2 = 0

Now, given x1 and x2 (our real-valued features), we just have to compute the value of the left side of the equation: if it is greater than zero, the point lies above the decision boundary (the red side); otherwise, it lies beneath the line (the green or blue side). Our prediction algorithm will simply check this and predict the corresponding class for any new Iris flower.

But why does our coefficient matrix have three rows? Because we did not tell the method that we had changed our problem definition (how could we have done this?), and it is facing a three-class problem, not a binary decision problem. In this case, the classifier does the same thing we did: it converts the problem into three binary classification problems in a one-versus-all setting (it proposes three lines, each separating one class from the rest). The following code draws the three decision boundaries and lets us see whether they worked as expected:

>>> import numpy as np
>>> x_min, x_max = X_train[:, 0].min() - .5, X_train[:, 0].max() + .5
>>> y_min, y_max = X_train[:, 1].min() - .5, X_train[:, 1].max() + .5
>>> xs = np.arange(x_min, x_max, 0.5)
>>> fig, axes = plt.subplots(1, 3)
>>> fig.set_size_inches(10, 6)
>>> for i in [0, 1, 2]:
...     axes[i].set_aspect('equal')
...     axes[i].set_title('Class ' + str(i) + ' versus the rest')
...     axes[i].set_xlabel('Sepal length')
...     axes[i].set_ylabel('Sepal width')
...     axes[i].set_xlim(x_min, x_max)
...     axes[i].set_ylim(y_min, y_max)
...     plt.sca(axes[i])
...     plt.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=plt.cm.prism)
...     ys = (-clf.intercept_[i] - xs * clf.coef_[i, 0]) / clf.coef_[i, 1]
...     plt.plot(xs, ys)
>>> plt.show()

The first plot shows the model built for our original binary problem. It looks like the line separates the Iris setosa from the rest quite well. For the other two tasks, as we expected, several points lie on the wrong side of the hyperplane.

Now, the end of the story: suppose that we have a new flower with a sepal length of 4.7 and a sepal width of 3.1, and we want to predict its class. We just have to apply our brand new classifier to it (after scaling!). The predict method takes an array of instances (in this case, with just one element) and returns a list of predicted classes:

>>> print clf.predict(scaler.transform([[4.7, 3.1]]))
[0]

If our classifier is right, this Iris flower is a setosa. You have probably noticed that we are predicting one class out of three, while linear models are essentially binary: something is missing. You are right. Our prediction procedure combines the results of the three binary classifiers and selects the class in which it is most confident. In this case, we select the boundary line whose distance to the instance is greatest. We can check that using the classifier's decision_function method:

>>> print clf.decision_function(scaler.transform([[4.7, 3.1]]))
[[ 19.73905808   8.13288449 -28.63499119]]
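Before closing, it is worth measuring how well the classifier actually performs. The following lines are our own addition, using only standard scikit-learn calls; the exact figures will depend on your library version:

>>> from sklearn import metrics
>>> print metrics.accuracy_score(y_train, clf.predict(X_train))   # accuracy on data the model has seen
>>> print metrics.accuracy_score(y_test, clf.predict(X_test))     # accuracy on held-out data

The second number is the honest one: accuracy measured on the training set is usually optimistic, which is precisely why we reserved an evaluation dataset at the beginning.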
Summary

In this article, we included a very simple example of classification, trying to show the main steps for learning.

Resources for Article:

Further resources on this subject:
Python Testing: Installing the Robot Framework [Article]
Inheritance in Python [Article]
Python 3: Object-Oriented Design [Article]


Platform as a Service

Packt
21 Nov 2013
5 min read
(For more resources related to this topic, see here.)

Platform as a Service is a very interesting take on the traditional cloud computing models. While there are many (often conflicting) definitions of PaaS, for all practical purposes, PaaS provides a complete platform and environment to build and host applications or services. The emphasis is clearly on providing an end-to-end, precreated environment to develop and deploy applications that automatically scale as required. PaaS packs together all the necessary components, such as an operating system, database, programming language, libraries, web or application container, and a storage or hosting option. PaaS offerings vary, and their chargeback depends on what the end user actually utilizes. There are excellent public offerings of PaaS such as Google App Engine, Heroku, Microsoft Azure, and Amazon Elastic Beanstalk. In a private cloud offering for an enterprise, it is possible to implement a similar PaaS environment.

Out of the various possibilities, we will focus on building a Database as a Service (DBaaS) infrastructure using Oracle Enterprise Manager. DBaaS is sometimes seen as a mix of PaaS and SaaS, depending on the kind of service it provides. A DBaaS that provides a service such as a database leans more towards its PaaS legacy, but if it provides a service such as Business Intelligence, it takes more of a SaaS form. Oracle Enterprise Manager enables self-service provisioning of virtualized database instances out of a common shared database instance or cluster. Oracle Database is built to be clustered, and this makes it an easy fit for a robust DBaaS platform.

Setting up the PaaS infrastructure

Before we go about implementing a DBaaS, we will need to make sure our common platform is up and working. We will now check how we can create a PaaS Zone.

Creating a PaaS Zone

Enterprise Manager groups hosts or Oracle VM Manager zones into PaaS Infrastructure Zones. You will need to have at least one PaaS Zone before you can add more features to the setup. To create a PaaS Zone, make sure that you have the following:

- The EM_CLOUD_ADMINISTRATOR, EM_SSA_ADMINISTRATOR, and EM_SSA_USER roles created
- A software library

To set up a PaaS Infrastructure Zone, perform the following steps:

1. Navigate to Setup | Cloud | PaaS Infrastructure Zone.
2. Click on Create in the PaaS Infrastructure Zone main page.
3. Enter the necessary details for the PaaS Infrastructure Zone, such as Name and Description.
4. Based on the type of members you want to add to this zone, select one of the following member types:
   - Host: This option will only allow host targets to be part of this zone. Make sure you provide the necessary values for the placement policy constraints, which are defined per host. These values are used to prevent overutilization of hosts that are already heavily used: you can set a percentage threshold for Maximum CPU Utilization and Maximum Memory Allocation, and any host exceeding a threshold will not be used for provisioning (see the conceptual sketch after these steps).
   - OVM Zone: This option will allow you to add Oracle Virtual Manager Zone targets.
5. If you select Host at this stage, you will see the following page: Click on the + button to add named credentials, and make sure you click on the Test Credentials button to verify them. These named credentials must be global and available on all the hosts in this zone. Click on the Add button to add target hosts to this zone.
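To make the placement policy concrete, here is a small conceptual sketch in plain Python. This is emphatically not Oracle Enterprise Manager code or its API; it only illustrates the filtering rule described above, and all names and numbers are made up:

# Conceptual illustration only -- NOT Oracle Enterprise Manager code.
# Hosts above either placement threshold are excluded from provisioning.
def eligible_hosts(hosts, max_cpu_pct=80, max_mem_pct=75):
    return [h for h in hosts
            if h['cpu_utilization'] <= max_cpu_pct and
               h['memory_allocation'] <= max_mem_pct]

hosts = [{'name': 'host01', 'cpu_utilization': 45, 'memory_allocation': 60},
         {'name': 'host02', 'cpu_utilization': 92, 'memory_allocation': 50}]
print [h['name'] for h in eligible_hosts(hosts)]   # ['host01'] -- host02 is too busy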
If you selected OVM Zone in the previous screen (step 1 of 4), you will be presented with the following screen: Click on the Add button to add roles that can access this PaaS Infrastructure Zone.

Once you have created a PaaS Infrastructure Zone, you can proceed with setting up the necessary pieces for a DBaaS. However, time and again you might want to edit or review your PaaS Infrastructure Zone. To view and manage your PaaS Infrastructure Zones, navigate to Enterprise Menu | Cloud | Middleware and Database Cloud | PaaS Infrastructure Zones. From this page you can create, edit, delete, or view more details of a PaaS Infrastructure Zone. Clicking on the PaaS Infrastructure Zone link will display a detailed drill-down page with quite a few details related to that zone. The page is shown as follows:

This page shows a lot of very useful details about the zone. Some of them are listed as follows:

- General: This section shows stats for this zone, such as the total number of software pools, Oracle VM zones, member types (hosts or Oracle VM zones), and other related details.
- CPU and Memory: This section gives an overview of CPU and memory utilization across all servers in the zone.
- Issues: This section shows incidents and problems for the target. This is a handy summary to check whether there are any issues that need attention.
- Request Summary: This section shows the status of requests currently being processed.
- Software Pool Summary: This section shows the name and type of each software pool in the zone.
- Unallocated Servers: This section shows a list of servers that are not associated with any software pool.
- Members: This section shows the members of the zone and their member types.
- Service Template Summary: This section shows the service templates associated with the zone.

Summary

In this article, we saw how PaaS plays a vital role in the structure of a DBaaS architecture.

Resources for Article:

Further resources on this subject:
What is Oracle Public Cloud? [Article]
Features of CloudFlare [Article]
Oracle Tools and Products [Article]

Zurb Foundation – an Overview

Packt
21 Nov 2013
7 min read
(For more resources related to this topic, see here.)

Most importantly, you can apply your creativity to make the design your own. Foundation gives you the tools you need for this; then it gets out of the way, and your site becomes your own. Especially when you advance to using Foundation's Sass variables, functions, and mixins, you have the ability to make your site your own unique creation.

Foundation's grid system

The foundation (pun intended) of Zurb Foundation is its grid system—rows and columns—much like a spreadsheet, a blank sheet of graph paper, or the tables we used to use for HTML layout. Think of it as the canvas upon which you design your website. Each cell is a content area that can be merged with other cells, beside or below it, to make larger content areas. A default installation of Foundation is based on twelve cells in a row. A column is comprised of one or more individual cells.

Lay out a website

Let's put Foundation's grid system to work in an example. We'll build a basic website with a two-part header, a two-part content area, a sidebar, and a three-part footer area. With the simple techniques we demonstrate here, you can craft almost any layout you want.

Here is the mobile view

Foundation works best when you design for small devices first, so here is what we want our small device (mobile) view to look like: This is the layout we want on mobile or small devices, but we've labeled the content areas with titles that describe where we want them on a regular desktop. By doing this, we are thinking ahead and creating a view that is ready for the desktop as well.

Here is the desktop view

Since a desktop display is typically wider than a mobile display, we have more horizontal space, and things that had to be presented vertically in the mobile view can be displayed horizontally in the desktop view. Here is how we want our regular desktop or laptop to display the same content areas: These are not necessarily drawn to scale; it is the layout we are interested in.

The two-part header went from being one above the other in the mobile view to being side by side in the desktop view. The header on top went left, and the bottom header went right. All this makes perfect sense. However, the sidebar shifted from being above the content area in the mobile view to being on its right in the desktop view. That's not natural when rendering HTML. Something must have happened! The content areas, left and right, stayed the same in both views, and that's exactly what we wanted. The three-part footer got rearranged: the center footer appears to have slid down between the left and right footers. That makes sense from a design perspective, but it isn't natural from an HTML rendering perspective. Foundation provides the classes to easily make all this magic happen.

Here is the code

Unlike the early days of mobile design, where a separate website was built for mobile devices, with Foundation you build your site once and use classes to specify how it should look on both mobile and regular displays.
Here is the HTML code that generates the two layouts:

<header class="row">
  <div class="large-6 column">Header Left</div>
  <div class="large-6 column">Header Right</div>
</header>
<main class="row">
  <aside class="large-3 push-9 column">Sidebar Right</aside>
  <section class="large-9 pull-3 columns">
    <article class="row">
      <div class="small-9 column">Content Left</div>
      <div class="small-3 column">Content Right</div>
    </article>
  </section>
</main>
<footer class="row">
  <div class="small-6 small-centered large-4 large-uncentered push-4 column">Footer Center</div>
  <div class="small-6 large-4 pull-4 column">Footer Left</div>
  <div class="small-6 large-4 column">Footer Right</div>
</footer>

That's all there is to it. Replace the text we used for labels with real content, and you have a design that displays on mobile and regular displays in the layouts we've shown in this article.

Toss in some widgets

What we've shown above is just the core of the Foundation framework. As a toolkit, it also includes numerous CSS components and JavaScript plugins. Foundation includes styles for labels, lists, and data tables. It has several navigation components, including Breadcrumbs, Pagination, Side Nav, and Sub Nav. You can add regular buttons, drop-down buttons, and button groups. You can make unique content areas with Block Grids, a special variation of the underlying grid. You can add images as thumbnails, put content into panels, present your video feed using the Flex Video component, easily add pricing tables, and display progress bars. All these components only require CSS and are the easiest to integrate.

By tossing in Foundation's JavaScript plugins, you get even more capabilities. Plugins include things like Alerts, Tooltips, and Dropdowns, which can be used to pop up messages in various ways. The Section plugin is very powerful when you want to organize your content into horizontal or vertical tabs, or when you want horizontal or vertical navigation. Like most components and plugins, it understands the mobile and regular desktop views and adapts accordingly.

The Top Bar plugin is a favorite for many developers. It is a multilevel fly-out menu plugin: build your menu in HTML the way Top Bar expects, set it up with the appropriate classes, and it just works. Magellan and Joyride are two plugins that you can put to work to show your viewers where they are on a page or to help them navigate to various sections of a page. Orbit is Foundation's slide presentation plugin; you often see sliders like it on the home pages of websites these days. Clearing is similar to Orbit, except that it displays thumbnails of the images in a presentation below the main display window; a viewer clicks on a thumbnail to display the full image. Reveal is a plugin that lets you put a link anywhere on your page; when the viewer clicks on it, a modal box pops up and extra content, which could even be an Orbit slider, is revealed. Interchange is one of the most recent additions to Foundation's plugin factory. With it, you can selectively load images depending on the target environment, which lets you optimize bandwidth between your web server and your viewer's browser.

Foundation also provides a great Forms plugin. On its own it is quite capable, and with the additional Abide plugin you have a great deal of control over form layout and validation.
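To use any of these JavaScript plugins, your page needs to load Foundation's assets and initialize them. The following minimal skeleton is a sketch of the conventional setup; the file paths are assumptions that depend on where you unpacked the Foundation download, and depending on your Foundation version the bundled library may be Zepto or jQuery:

<!DOCTYPE html>
<html class="no-js" lang="en">
  <head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width">
    <title>My Foundation Page</title>
    <!-- path assumed; adjust to your installation -->
    <link rel="stylesheet" href="css/foundation.css">
  </head>
  <body>
    <!-- grid markup, such as the layout shown above, goes here -->

    <!-- paths assumed; adjust to your installation -->
    <script src="js/vendor/jquery.js"></script>
    <script src="js/foundation.min.js"></script>
    <script>
      // initialize all Foundation plugins on the page
      $(document).foundation();
    </script>
  </body>
</html>

Once the foundation() call has run, plugins such as Top Bar, Reveal, and Orbit pick up their markup automatically.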
Summary

As you can see, Foundation is very capable of laying out web pages for mobile devices and regular displays: one set of code, two very different looks. And that's just the beginning. Foundation's CSS components and JavaScript plugins can be placed in almost any content area of a web page. With these widgets, you can have much more interaction with your viewers than you otherwise would. Put Foundation to work in your website today!

Resources for Article:

Further resources on this subject:
Quick start – using Foundation 4 components for your first website [Article]
Introduction to RWD frameworks [Article]
Nesting, Extend, Placeholders, and Mixins [Article]


Issues and Wikis in GitLab

Packt
20 Nov 2013
6 min read
(For more resources related to this topic, see here.)

Issues

The built-in features for issue tracking and documentation will be very beneficial to you, especially if you're working on extensive software projects: ones with many components, or those that need to be supported in multiple versions at once, for example, stable, testing, and unstable. In this article, we will have a closer look at the formats that are supported for issues and wiki pages (in particular, Markdown), the elements that can be referenced from within them, and how issues can be organized. Furthermore, we will go through the process of assigning issues to team members and keeping documentation in wiki pages, which can also be edited locally. Lastly, we will see how the RSS feeds generated by GitLab can keep your team in a closer loop around the projects they work on.

The metadata covered in this article may seem trivial, but many famous software projects have gained traction due to their extensive and well-written documentation, which initially was done by core developers. GitLab enables your users to do the same with their projects, even if only internally; it opens up a much more efficient collaboration.

GitLab-flavored Markdown

GitLab comes with a Markdown formatting parser that is fairly similar to GitHub's, which makes it very easy to adapt and migrate. Many standalone editors also support this format, such as Mou (http://mouapp.com/) for Mac or MarkdownPad (http://markdownpad.com/) for Windows. On Linux, editors with a split view, such as ReText (http://sourceforge.net/projects/retext/) or the more Zen-writing UberWriter (http://uberwriter.wolfvollprecht.de/), are available. For the popular Vim editor, multiple Markdown plugins are up for grabs on a number of GitHub repositories; one of them is Vim Markdown (https://github.com/tpope/vim-markdown) by Tim Pope. Lastly, I'd like to mention that you don't need a dedicated editor for Markdown, because Markdown documents are plain text files; the mentioned editors simply enhance the view through syntax highlighting and preview modes.

About Markdown

Markdown was originally written by John Gruber and has since evolved into various flavors. The intention of this very lightweight markup language is to have a source that is easy to edit and can be transformed into meaningful HTML to be displayed on the Web. Different variations of Markdown have made it into a majority of very successful software projects as the default language; readme files, documentation, and even blogging engines adopt it. In Markdown, text styles can be applied, links placed, and images inserted. If Markdown, by default, does not support what you are currently trying to do, you can insert plain HTML, which will not be altered by the Markdown parser.
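As a quick illustration (our own example, not from the original article), the following Markdown source covers the most common constructs GitLab renders; the link target and image file are made up:

# A first-level heading

Text can be *emphasized* or made **strong**.

* A bullet list item
* Another item, with [a link](http://example.com/) and an image: ![a logo](logo.png)

    Lines indented by four spaces are rendered as a code block.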
Referring to elements inside GitLab

When working with source code, it can be important to refer to a line of code, a file, or other things when discussing something. Because many development teams are nowadays spread throughout the world, GitLab adapts to that and makes it easy to refer to and reference many things directly from comments, wiki pages, or issues. Some things, like files or lines, can be referenced via links, because GitLab has unique links to the branches of a repository; others are more directly accessible.

The following items (basically, prefixed strings or IDs) can be referenced through shortcodes:

- commit messages
- comments
- wall posts
- issues
- merge requests
- milestones
- wiki pages

To reference items, use the following shortcodes inside any field that supports Markdown or RDoc on the web interface (a combined example follows the list):

- @foo for team members
- #123 for issues
- !123 for merge requests
- $123 for snippets
- 1234567 for commits
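Put together, a typical issue comment might read as follows (all IDs and the username are made up for illustration), and GitLab will automatically link each referenced item:

This bug was introduced in 1234567 and is tracked in #123. @foo, could you review !123? The failing code is in snippet $123.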
Issues, knowing what needs to be done

An issue is a text message of variable length describing a bug in the code, an improvement to be made, or something else that should be done or discussed. By commenting on the issue, developers or project leaders can respond to this request or statement. The meta information attached to an issue can be very valuable to the team: developers can be assigned to an issue, and it can be tagged or labeled with keywords that describe the content or the area to which it belongs. Furthermore, you can also set the milestone in which this fix or feature should be included. In the following screenshot, you can see the interface for issues:

Creating issues

By navigating to the Issues tab of a repository in the web interface, you can easily create new issues. Their title should be brief and precise, because a more elaborate description area is available. The description area supports GitLab-flavored Markdown, as mentioned previously. Upon creation, you can choose a milestone and a user to assign the issue to, but you can also leave these fields unset, possibly to let your developers themselves choose what they want to work on and when. Before they begin their work, they can assign the issues to themselves. In the following screenshot, you can see what the issue creation form looks like:

Working with labels or tags

Labels are tags used to organize issues by topic and severity. Creating labels is as easy as inserting them, separated by commas, into the respective field while creating an issue. Currently, in version 5.2, certain keywords trigger a certain background color on the label: labels like critical or bug turn red, feature turns green, and other labels are blue by default. The following screenshot shows what a list of labeled features looks like:

After the creation of a label, it will be listed under the Labels tab within the Issues page, with a link that lists all the issues that carry the same label. Filtering by label, assigned user, or milestone is also possible from the list of issues within each project's overview.

Summary

In this article, we had a look at the project management side of things. You can now make use of the built-in possibilities to distribute tasks across team members through issues, keep track of things that still need to be done, or enable observers to point out bugs.

Resources for Article:

Further resources on this subject:
Using Gerrit with GitHub [Article]
The architecture of JavaScriptMVC [Article]
Using the OSGi Bundle Repository in OSGi and Apache Felix 3.0 [Article]