
How-To Tutorials - CMS and E-Commerce


Getting into the Store

Packt
28 Oct 2013
21 min read
This all starts by visiting https://appdev.microsoft.com/StorePortals, which will get you to the store dashboard that you use to submit and manage your applications. If you already have an account you'll just log in here and proceed. If not, we'll take a look at the ways of getting one set up.

There are a couple of ways to get a store account, which you will need before you can submit any game or application to the store. There are also two different types of accounts:

- Individual accounts
- Company accounts

In most cases you will only need the first option. It's cheaper and easier to get, and for a game you won't require the enterprise features provided by the company account. For this reason we'll focus on the individual account. To register you'll need a credit card for verification, even if you gain a free account another way. Just follow the registration instructions, pay the fee, and complete verification, after which you'll be ready to go.

Free accounts

Students and developers with MSDN subscriptions can access registration codes that waive the fee for a minimum of one year. If you meet either of these requirements you can obtain a code using the respective method, and use that code during the registration process to set the fee to zero.

Students can access their free accounts using the DreamSpark service that Microsoft runs. Create an account on www.dreamspark.com, follow the steps to verify your student status, and then visit https://www.dreamspark.com/Student/Windows-Store-Access.aspx to get your registration code.

If you have access to an MSDN subscription you can use it to gain a store account for free. Just log in to your account and, in your account benefits overview, you should be able to generate your registration code.

Submitting your game

So your game is polished and ready to go. What do you need to do to get it into the store? First log in to the dashboard and select Submit an App from the menu on the left. Here you can see the steps required to submit the app. This may look like a lot to do, but don't worry: most of these steps are very simple to resolve and can be completed before you even start working on the game.

The first step is to choose a name for your game, and this can be done whenever you want. By reserving a name and creating the application entry you have a year to submit your application, giving you plenty of time to complete it. This is why it's a good idea to jump in and register your application once you have a name for it. If you change your mind later and want a different name, you can always change it.

The next step is to choose how and where you will sell your game. You also need to choose the markets you want to sell your game in. This is an area to be careful of, because the markets you choose here define the localization and content restrictions you need to watch for in your game. Certain markets are restrictive, and including content that isn't appropriate for a market you say you want to sell in can cause you to fail the certification process.

Once that is done you need to choose when you want to release your game: either as soon as certification finishes or on a specific date. Then you choose the app category, which in this case will be Games. Don't forget to specify the genre of your game as the sub-category so players can find it.
The final option on the Selling Details page that applies to us is the Hardware requirements section. Here we define the DirectX feature level required by the game, and the minimum RAM required to run it. This is important because the store can help ensure that players don't try to play your game on systems that cannot run it.

The next section allows you to define the in-app offers that will be made available to players.

The Age rating and rating certificates section allows you to define the minimum age required to play the game, as well as submit official rating certificates from ratings boards so that they may be displayed in the store to meet legal requirements. The latter part is optional in some cases, and may affect where you can submit your game depending on local laws. Aside from official ratings, all applications and games submitted to the store require a voluntary rating, chosen from one of the following age options:

- 3+
- 7+
- 12+
- 16+
- 18+

While all content is checked, the 7+ and 3+ ratings both have extra checks because of the extra requirements for those age ranges. The 3+ rating is especially restrictive, as apps submitted with that age limit may not contain features that could connect to online services, collect personal information, or use the webcam or microphone. To play it safe it's recommended you choose the 12+ rating, and if you're still uncertain, higher is safer.

GDF Certificates

The other entry required here, if you have official rating certificates, is a GDF file. This is a Game Definition File, which defines the different ratings in a single location and provides the necessary information to display the rating and inform any parental settings. To create one you use the GDFMAKER.exe utility that ships with the Windows 8 SDK to generate a GDF file, which you can submit to the store. Alongside that you need to create a DLL containing that file (as a resource) without any entry point, to include in the application package. For full details on how to create the GDF as well as the DLL, see the following MSDN article: http://msdn.microsoft.com/en-us/library/windows/apps/hh465153.aspx

The final section before you need to submit your compiled application package is the cryptography declaration. For most games you should be able to declare that you aren't using any cryptography within the game and quickly move through this step. If you are using cryptography, including encrypting game saves or data files, you will need to declare that here and follow the instructions to either complete the step or provide an Export Control Classification Number (ECCN).

Now you need to upload the compiled app package before you can continue, so we'll take a look at what it takes to do that.

App packages

To submit your game to the store, you need to package it up in a format that makes it easy to upload, and easy for the store to distribute. This is done by compiling the application as an .appx file. But before that happens we need to ensure we have defined all of the required metadata and fulfilled the certification requirements; otherwise we'll be uploading a package only to fail soon after. Part of this is done through the application manifest editor, which is accessible in Visual Studio by double-clicking on the Package.appxmanifest file in Solution Explorer. This editor is where you specify the name that will be seen in the Start menu, as well as the icons used by the application.
To pass certification, all icons have to be provided at 100 percent DPI, which is referred to as Scale 100 in the editor.

| Icon/Image | Base resolution | Required              |
| Standard   | 150 x 150 px    | Yes                   |
| Wide       | 310 x 150 px    | If Wide Tile Enabled  |
| Small      | 30 x 30 px      | Yes                   |
| Store      | 50 x 50 px      | Yes                   |
| Badge      | 24 x 24 px      | If Toasts Enabled     |
| Splash     | 620 x 300 px    | Yes                   |

If you wish to provide higher quality images for people running high DPI setups, you can do so with a simple filename change. If you add scale-XXX to your filename, just before the extension, and replace XXX with one of the following values, Windows will automatically make use of it at the appropriate DPI:

- scale-100
- scale-140
- scale-180

For example, a Logo.png icon saved for the 140 percent scale would be named Logo.scale-140.png.

In the following image you can see the options available for editing the visual assets in the application. These all apply to the start menu and the application start-up experience, including the splash screen and toast notifications.

Toast notifications in Windows 8 are pop-up notifications that slide in from the edge of the screen and show the user some information for a short period of time. Users can click on the toast to open the application if they want. Alongside Live Tiles, toast notifications allow you to give the user information when the application is not running (although they also work while the application is running).

The previous table shows the different images required for a Windows 8 application, and whether they are mandatory or only required in certain situations. Note that this does not include the imagery required for the store, which includes some screenshots of the application and optional promotional art in case you want your application to be featured.

You must replace all of the required icons with your own. Automated checks during certification will detect the use of the default "box" icon shown in the previous screenshot and automatically fail the submission.

Capabilities

Once you have the visual aspects in place, you need to declare the capabilities that the application will receive. Your game may not need any; however, you should still only specify what you need to run, as some of these capabilities come with extra implications and non-obvious requirements.

Adding a privacy policy

One of those requirements is the privacy policy. Even if you are creating a game, there may be situations where you are collecting private information, which requires you to have a privacy policy. The biggest issue here is connecting to the Internet. If your game marks any of the Internet capabilities in the manifest, you automatically trigger a check for a privacy policy, as private information (in this case an IP address) is being shared.

To avoid failing certification for this, you need to put together a privacy policy if you collect private information, or if you use any of the capabilities that would indicate you collect information. These include the Internet capabilities as well as the location, webcam, and microphone capabilities. The privacy policy just needs to describe what you will do with the information, and directly mention your game and publisher name. Once you have the policy written, it needs to be posted in two locations. The first is a publicly accessible website, which you will provide a link to when filling out the description after uploading your game. The second is within the game itself. It is recommended that you place this policy in the Windows 8 provided settings menu, which you can build using XAML or your own code.
If you're going with a completely native Windows 8 application, you may want to display the policy in your own way and link to it from options within your game.

Declarations

Once you've indicated the capabilities you want, you need to declare any operating system integration you've done. Most games won't use this; however, if you're taking advantage of Windows 8 features such as share targets (the destination for data shared using the Share charm), or you have a Game Definition File, you will need to declare it here and provide the required information for the operating system. In the case of the GDF, you need to provide the file so that the parental controls system can make use of the ratings to appropriately control access.

Certification kit

The next step is to make sure you aren't going to fail the automated tests during certification. Microsoft provides the same automated tests used when you submit your app in the Windows Application Certification Kit (WACK). The WACK is installed by default with Visual Studio 2012 or higher.

There are two ways to run the tests: after you build your application package, or by running the kit directly against an installed app. We'll look at the latter first, as you might want to run the test on your deployed test game well before you build anything for the store. This is also the only way to run the WACK on a WinRT device, if you want to cover all bases.

If you haven't already deployed or tested your app, deploy it using the Build menu in Visual Studio and then search for the Windows App Cert Kit from the start menu (just start typing). When you run this you will be given an option to choose which type of application you want to validate. In this case we want to select the Windows Store App option, which will then give you access to the list of apps installed on your machine. From there it's just a matter of selecting the app you want and starting the test.

At this point you will want to leave your machine alone until the automated tests are complete. Any interference could lead to an incorrect failure of the certification tests.

The results will indicate ways you can fix any issues; however, you should be fine for most of the tests. The biggest issues will arise from third-party libraries that haven't been developed for or ported to Windows 8. In this case the only option is to fix them yourself (if they're open source) or find an alternative.

Once you have the tests passing, or you feel confident that they won't be an issue, you need to create app packages that are compatible with the store. At this point your game will be associated with the submission you have created in the Windows Store dashboard so that it is prepared for upload.

Creating your app packages

To do this, right-click on your game project in Visual Studio and click on Create App Packages inside the Store menu. Once you do that, you'll be asked if you want to create a package for the store. The difference between the two options comes down to how the package is signed. If you choose No here, you can create a package with your test certificate, which can be distributed for testing. These packages must be manually installed and cannot be submitted to the store. You can, however, use this type of package on other machines to install your game for testers to try out. Choosing No will give you a folder with a .ps1 (PowerShell) file, which you can run to execute the install script.
Choosing Yes at this option will take you to a login screen where you can enter your Windows Store developer account details. Once you've logged in you will be presented with a list of applications that you have registered with the store. If you haven't yet reserved the name of your application, you can click on the Reserve Name link, which will take you directly to the appropriate page in the store dashboard. Otherwise, select the name of the game you're trying to build and click on Next.

The next screen will allow you to specify which architectures to build for, and the version number of the built package. As this is a C++ game, we need to provide separate packages for the ARM, x86, and x64 builds, depending on what you want to support. Simply providing an x86 and an ARM build will cover the entire market; 64-bit can be nice to have if you need a lot of memory, but ultimately it is optional, and some users may not even be able to run x64 code.

When you're ready, click on Create and Visual Studio will proceed to build your game and compile the requested packages, placing them in the directory specified. If you've built for the store, you will need the .appxupload files from this directory when you proceed to upload your game.

Once the build has completed you will be asked if you want to launch the Windows Application Certification Kit. As mentioned previously, this will test your game for certification failures, and if you're submitting to the store it's strongly recommended you run it. Doing so at this screen will automatically deploy the built package and run the test, so ensure you have a little bit of time to let it run.

Uploading and submitting

Now that you have a built app package you can return to the store dashboard to submit your game. Just edit the submission you made previously and enter the Packages section, which will take you to the page where you can upload the .appxupload file.

Once you have successfully uploaded your game you will gain access to the next section, the Description. This is where you define the details that will be displayed in the store. This is also where your marketing skills come into play as you prepare the content that will hopefully get players to buy your game. You start with the description of your game, and any big feature bullet points you want to emphasize. This is the best place to mention any reviews or praise, as well as give a quick description that will help players decide if they want to try your game. You can have a number of app features listed; however, like any "back of the box" bullet points, keep them short and exciting.

Along with the description, the store requires at least one screenshot to display to the potential player. These screenshots need to be of the entire screen, which means they need to be at least 1366x768, the minimum resolution of Windows 8. They are also one of the best ways to promote your game, so ensure you take some great screenshots that show off the fun and appeal of your game.

There are a few ways to take a screenshot of your game. If you're testing in the simulator, you can use the screenshot icon on the right toolbar of the simulator. If not, you can use Windows Key + Prt Scr to take a screenshot of your entire screen, and then use that (or edit it if you have multiple monitors). Screenshots taken with either of these tools can be found in the Screenshots folder within your Pictures library.
There are two other small pieces of information required at this stage: Copyright info and Support contact info. For the support info, an e-mail address will usually suffice. At this point you can also include your website and, if applicable to your game, a link to the privacy policy included in your game. Note that if you require a privacy policy, it must be included in two places: your game, and the privacy policy field on this form.

The last items you may want to add here are promotional images. These images are intended for use in store promotions and allow Microsoft to easily feature your game with larger promotional imagery in prominent locations within the store. If you are serious about maximizing the reach of your game, you will want to include these images. If you don't, the number of places your game can be featured will be reduced. At a minimum, the 414x180 px image should be included if you want some form of promotion.

Now you're almost done! The next section allows you to leave notes for the testing team. This is where you would leave test account details for any features in your game that require an account, so that the testers can try those features. This is also the place to leave any notes about testing, in case you need to point out features that might not be obvious. In certain situations you may have an exemption from Microsoft for a certification requirement; this is where you would include that exemption.

When every step has been completed and you have tick marks against all of the stages, the Submit for Certification button will unlock, allowing you to complete your submission and send it off for certification. At this stage a number of automated tests will run before human testers try your game on a variety of devices to ensure it fits the requirements for the store. If all goes well, you will receive an email notifying you of your successful certification and, if you set the release date to ASAP, you will find your game in the store shortly afterwards (it may take a few hours to appear once you receive the email informing you that your game or app is in the store).

Certification tips

Your first stop should be the certification requirements page, which lists all of the current requirements your game will be tested against: http://msdn.microsoft.com/en-us/library/windows/apps/hh694083.aspx. There are some requirements that you should take particular note of, and in this section we'll look at ways to help ensure you pass them.

Privacy

The first, of course, is the privacy policy. As mentioned before, if your game collects any sort of personal information, you will need that policy in two places:

- In full text within the game
- Accessible through an Internet link

The default app template generated by Visual Studio automatically enables the Internet capability, and simply having that enabled means you require a privacy policy. If you aren't connecting to the Internet at all in your game, you should always ensure that none of the Internet options are enabled before you package your game.

If you share any personal information, then you need to provide players with a method of opting in to the sharing. This could be done by gating the functionality behind a login screen. Note that this functionality can be locked away; the requirement doesn't demand that you find a way to remain fully functional if the user opts out.

Features

One requirement is that your game support both touch input and keyboard/mouse input.
You can easily support this by using an input system like the one described in this article; by supporting touch input you get mouse input for free and technically fulfill this requirement. It's all about how much effort you want to put into the experience your player will have, and that's why including gamepad input is also recommended, as some players may want to use a connected Xbox 360 gamepad as their input device.

Legacy APIs

Although your game might run while using legacy APIs, it won't pass certification. This is checked through an automated test that also runs during the WACK testing process, so you can easily check whether you have used any disallowed APIs. This often arises in third-party libraries which make use of parts of the standard IO library, such as the console, or insecure versions of functions such as strcpy or fopen. Some of these APIs don't exist in WinRT for good reason; the console, for example, just doesn't exist, so calling APIs that work directly with the console makes no sense and isn't allowed.

Debug

Another issue that may arise through the use of third-party libraries is that some of them may be compiled in debug mode. This could present issues at runtime for your app, and the packaging system will happily include these libraries when compiling your game, unless it has to compile them itself. This is detected by the WACK and can be resolved by finding a release mode version, or recompiling the library.

WACK

The final tip is: run the WACK. This kit quickly and easily finds most of the issues you may encounter during certification, and you see them immediately rather than waiting for them to fail the certification process. Your final step before submitting to the store should be to run the WACK, and even while developing it's a good idea to compile in release mode and run the tests just to make sure nothing is broken.

Summary

By now you should know how to submit your game to the store and get through certification with few to no issues. We've looked at what you require for the store, including imagery and metadata, as well as how to make use of the Windows Application Certification Kit to find problems early on and fix them without waiting hours or days for certification to fail your game. One area unique to games that we have covered in this article is game ratings. If you're developing your game for certain markets where ratings are required, or if you are developing children's games, you may need to get a rating certificate, and hopefully you now have an idea of where to look to do this.

Further resources on this subject:

- Introduction to Game Development Using Unity 3D [Article]
- HTML5 Games Development: Using Local Storage to Store Game Data [Article]
- Unity Game Development: Interactions (Part 1) [Article]


Advanced System Management

Packt
25 Oct 2013
11 min read
Beyond backups

Of course, backups are not the only issue with managing multiple, remote systems. In particular, managing multiple configurations using a centralized application is often desirable.

Configuration management

One of the issues frequently faced by administrators is that of having multiple remote systems, all with similar software for the most part, but with minor differences in what is installed or running. Debian provides several packages that can help manage such an environment in a unified manner. Two of the more popular packages, both available in Debian, are FAI and Puppet. While we don't have the space to go into details, both applications are described briefly here.

Fully Automated Installation

Fully Automated Installation (FAI) focuses on managing Linux installations, and is developed using Debian, although it works with many different distributions, not just Debian. FAI uses a class concept for categorizing similar systems, and provides a good deal of flexibility and customization via hooks. FAI provides for unattended, automatic installation, as well as tools for monitoring and updating groups of systems. FAI is frequently used for creating and maintaining clusters. More information is available at http://fai-project.org/.

Puppet

Probably the best-known application for distributed management is Puppet, developed by Puppet Labs. Unlike FAI, Puppet is not entirely free: the Open Source edition is free, but the Enterprise edition, which has many additional features, is not. Puppet does include support for environments other than Linux. The desired configuration is described in a custom, high-level definition language, and distributed to systems with installed clients. Unlike FAI, Puppet does not provide its own bare-metal remote installation method, but instead uses existing methods (such as kickstart) to provide this function. A number of companies that make heavy use of distributed and clustered systems use Puppet to manage their environments. More information is available at http://puppetlabs.com/.

Other packages

There are other packages that can be used to manage a distributed environment, such as Chef and BCFG2. While simpler than Puppet or FAI, they support similar functions and have been used in some distributed and clustered environments. The use of FAI, Puppet, and others in cluster management warrants a brief look at clustering next, and at which packages in Debian support clustering.

Clusters

A cluster is a group of systems that work together in such a way that the whole functions as a single unit. Such clusters can be loosely coupled or tightly coupled. In a loosely coupled environment, each system is complete in itself and can handle all of the tasks any of the other systems can handle. The environment provides mechanisms for redundancy, load sharing, and fail-over between systems, and is often called a High Availability (HA) cluster. In a tightly coupled environment, the systems involved are highly dependent on one another, often sharing memory and disk storage, and all work on the same task together. The environment provides mechanisms for data sharing, avoiding storage conflicts, keeping the systems in synchronization, and splitting up tasks appropriately. This design is often used in super-computing environments.

Clustering is an advanced technique that involves more than just installing and configuring software. It also involves hardware integration, and systems and network design and implementation.
Along with the URLs mentioned in this article, a good text on the subject is Building Clustered Linux Systems, by Robert W. Lucke, Prentice Hall. Here we will only touch on the very basics, along with the tools Debian provides. Let's take a brief look at each environment, and some of the tools used to create them.

High Availability clusters

Two primary functions are required to implement a high availability cluster:

- A way to handle load balancing and individual host fail-over
- A way to synchronize storage so that all servers provide the same view of the data they serve

Debian includes meta packages that bring together software from the Linux High Availability project, including cluster-agents and resource-agents, two of the higher-level meta packages. These packages install various agents that are useful in coordinating and managing load balancing and fail-over. In some cases, a master server is designated to distribute the processing load among other servers.

Data synchronization is handled by using shared storage and any of the filesystems that provide for multiple access and shared files, such as NFS or AFS.

High Availability clusters generally use standard software, along with software that is readily available to manage the dynamics of such environments.

Beowulf clusters

In addition to the considerations for High Availability clusters, more tightly coupled environments such as Beowulf clusters also require an infrastructure to manage and distribute computing tasks. There are several web pages devoted to creating a Beowulf cluster using Debian, as well as packages that aid in creating such a cluster. One such page is https://wiki.debian.org/StartaBeowulf, a Debian Wiki page on Beowulf basics. The manual for FAI also has a section on creating a Beowulf cluster. Books are available as well.

Debian provides several packages that are helpful in building such a cluster, such as the OpenMPI libraries for message passing, and various utilities that run commands on multiple systems, such as those in the kadif package. There are even projects that have released scripts and live CDs that allow you to set up a cluster quickly (one such project is the PelicanHPC project, developed for Debian Lenny, hosted at http://www.pelicanhpc.org/).

This type of cluster is not something that you can simply set up and start using. Beowulf and other tightly coupled clusters are intended for highly parallel computing, and the programs that do the actual computing must be designed specifically for such an environment. That said, some packages for specific parallel computations do exist in Debian, such as nwchem, which provides several applications for computational chemistry that take advantage of parallelism.

Common tools

Some common components of clusters have already been mentioned, such as the OpenMPI libraries. Aside from the meta packages already mentioned, the redhat-cluster suite of tools is available in Debian, as well as many useful libraries, scheduling tools, and failover tools such as booth. All of these can be found using apt-cache or Synaptic by searching for "cluster".

Webmin

Many administrators will never have to administer a cluster, and many won't be responsible for a large number of systems requiring central backup solutions. However, even administering a single system using command-line tools and text editors can be a chore. Even clusters sometimes require administrative tasks on individual systems.
Fortunately, there is an application that can ease many administrative tasks, is easy to use, and can handle many aspects of Linux administration. It is called Webmin.

Up until Debian Sarge, Webmin was part of the Debian distribution. However, the Debian developer in charge of packaging it had difficulty keeping up with the frequent releases, and it was eventually dropped from Debian. The upstream Webmin developers, however, maintain current packages that install cleanly. Some users have reported issues because Webmin does not always handle configuration files exactly as Debian intends, but it certainly attempts to handle them in a compatible manner, and while some users have experienced problems with upgrades, many administrators are quite happy with Webmin. As long as you are willing to deal with conflicts during upgrades, or restrict the use of modules that have major configuration impacts, you will find Webmin quite useful.

Installing Webmin

Webmin may be installed by adding the following lines to your apt sources file:

    deb http://download.webmin.com/download/repository sarge contrib
    deb http://webmin.mirror.somersettechsolutions.co.uk/repository sarge contrib

Usually, this is added to a separate webmin.list file in /etc/apt/sources.list.d.

The use of 'sarge' as the release name in the configuration is not a mistake. Since Webmin was dropped after the Sarge release (Debian 3.1), the developers have kept the repository as it is and haven't bothered changing the name to keep up with the Debian code names. However, the versions available in the repository are compatible with any Debian release since 3.1.

After updating your cache file, Webmin can be installed and maintained using apt-get, aptitude, or Synaptic. Also, if you request a Webmin upgrade from within Webmin itself on a Debian system, it will use the proper Debian package to upgrade.

Using Webmin

Webmin runs in the background, and provides an HTTP or HTTPS server on localhost port 10000. You can use any web browser to connect to http://localhost:10000/ to access Webmin. Upon first installation, only the root user, or users in a group allowed to use sudo to access the root account, may log in, but Webmin users can be managed separately or in conjunction with local users.

Webmin provides extensive and easy-to-understand menus and icons for various configuration tasks. It is also highly modular and extensible, and an extensive list of standard modules is included with the base package. It is not possible to cover Webmin here as fully as it deserves, but a short list of some of its capabilities includes:

- Configuration of Webmin itself (the server, users, modules, and security)
- Local system user and password management
- Filesystem management
- Bootup and service management
- CRON job management
- Software updates
- Basic filesystem backups
- Authentication and security configuration
- Apache, DNS, SSH, and FTP (if you're using ProFTP) configuration
- User mail management
- Qmail or sendmail configuration
- Network and firewall configuration and management
- Bandwidth monitoring
- Printer management

There are even modules that apply to clusters. Also, Webmin can search for and allow access to other Webmin servers on the local network, or you can define remote servers manually. This allows a central Webmin server, installed on a particular system, to be the gateway to all of the other servers in your environment, essentially providing a single point of access to manage all Webmin-enabled servers.
Webmin and Debian

Webmin understands the configuration file layout of many distributions. The main problem arises when a particular module does not handle certain types of configuration in the way the Debian developers prefer, which can make package upgrades somewhat difficult.

This can be handled in a couple of ways. Most modules provide a means to edit configuration files directly, so if you have read the Debian documentation you can modify the configuration appropriately to use Debian-specific configuration techniques. Or, you may choose to allow Webmin to modify files as it sees fit, and handle any conflicts manually when you upgrade the software involved. Finally, you can avoid those modules involved with specific software that are more likely to cause problems. One such module is Apache, which doesn't use links from sites-enabled to sites-available; rather, it configures directly in the sites-enabled directory. Some administrators create the configuration in Webmin, and then move and link the files. Others prefer to configure Apache manually, outside of Webmin.

Webmin modules are constantly changing, and some actually recognize the Debian file layouts well, so it is not possible to give a comprehensive list of modules to avoid at this time. Best practice when using Webmin is to read the documentation and check the configuration files for the specific software prior to using Webmin. Then, after configuring with Webmin, check the files again to determine whether changes may be required to work within the particular package's Debian configuration framework. Based on this, you can decide whether to continue configuring with Webmin or switch back to manual configuration of that particular software.

Webmin security

Security is always a concern when remote access to a system is involved. Webmin handles this by requiring authentication and providing detailed access restrictions that add a layer of control beyond the firewall. Webmin users can be defined separately, or certain local users can be designated. Access to the various modules in Webmin can be restricted to certain users or groups of users, and detailed logs of Webmin actions are kept.

Usermin

In addition to Webmin, there is a server called Usermin which may be installed from the same repository as Webmin. It allows individual users to perform a number of functions more easily, such as changing their password, accessing their files, reading and managing their email, and managing some aspects of their user profile. It is also modular and has the same security features as Webmin.

Summary

Several powerful and flexible central backup solutions exist that help manage backups for multiple remote servers and sites. Debian also provides packages that assist in building High Availability and Beowulf-style multiprocessing clusters. And, whether you are managing clusters or a single system, Webmin can ease an administrator's tasks.

Further resources on this subject:

- Customizing a Linux kernel [Article]
- Microsoft SharePoint 2010 Administration: Farm Governance [Article]
- Testing Workflows for Microsoft Dynamics AX 2009 Administration [Article]


Implementing OpenCart Modules

Packt
24 Oct 2013
6 min read
OpenCart is an e-commerce cart application built with its own in-house framework, which uses the Model-View-Controller-Language (MVC-L) pattern, so each module in OpenCart also follows that pattern. The controller holds the logic: it gathers data from the model and passes it to the view for display. OpenCart modules have admin/ and catalog/ folders; the files in the admin/ folder control the module's settings, while the files in the catalog/ folder handle the presentation layer (frontend).

Learning how to clone and write code for OpenCart modules

We assume that you already know PHP, have installed OpenCart, and are familiar with the OpenCart backend and frontend, along with some PHP coding knowledge. You are going to create a Hello World module which has just one input box in the admin settings, and the same content is shown on the frontend.

The first step in module creation is using a unique name, so there will be no conflict with other modules. The same unique name is used to create the file name and the class name that extends the controller and model. There are generally 6-8 files that need to be created for each module, and they follow a similar structure. If there is interaction with database tables, we have to create two extra models. The following screenshot shows the hierarchy of files and folders of an OpenCart module.

So now you know the basic directory structure of an OpenCart module. The file structure is divided into two sections: admin and catalog. The admin folders and files deal with the settings of the module and data handling, while the catalog folders and files handle the frontend.

Let's start with an easy way to make the module. You are going to make a duplicate of the default google_talk module of OpenCart and change it into the Hello World module. We are using Dreamweaver to work with the files.

Changes made at the admin folder

Go to admin/controller/module/, copy google_talk.php, paste it in the same folder, rename it to helloworld.php, and open it in your favorite text editor. Then find the following line:

    class ControllerModuleGoogleTalk extends Controller {

Change the class name as follows:

    class ControllerModuleHelloworld extends Controller {

Now look for google_talk and replace all occurrences with helloworld, as shown in the following screenshot. Then save the file.

Go to admin/language/english/module/, copy google_talk.php, paste it in the same folder, rename it to helloworld.php, and open it. Then look for the following line of code:

    $_['entry_code'] = 'Google Talk Code:<br /> <span class="help">Goto <a href="http://www.google.com/talk/service/badge/New" target="_blank"> <u>Create a Google Talk chatback badge</u> </a> and copy &amp; paste the generated code into the text box. </span>';

Replace it with the following line of code:

    $_['entry_code'] = 'Hello World Content';

Then again find Google Talk, replace all occurrences with Hello World, and save the file.

Go to admin/view/template/module/, copy the google_talk.tpl file, paste it in the same folder, rename it to helloworld.tpl, and open it. Find google_talk, replace it with helloworld, and then save it.
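After the rename-and-replace pass, the admin controller ends up looking something like the trimmed sketch below. This is not the full cloned file: the real google_talk controller you copied also builds the form data, collects errors, and renders the template, and the exact calls (load->language, model_setting_setting->editSetting, and so on) are assumptions based on typical OpenCart 1.5.x modules rather than an exact listing, so treat the copied file as the authoritative version.

    <?php
    // admin/controller/module/helloworld.php (abbreviated sketch)
    class ControllerModuleHelloworld extends Controller {

        public function index() {
            // Load this module's language file
            // (admin/language/english/module/helloworld.php)
            $this->load->language('module/helloworld');

            // Module settings are stored through the shared setting model
            $this->load->model('setting/setting');

            // When the admin form is posted, persist the "Hello World Content" value
            if (($this->request->server['REQUEST_METHOD'] == 'POST') && $this->validate()) {
                $this->model_setting_setting->editSetting('helloworld', $this->request->post);
                $this->session->data['success'] = $this->language->get('text_success');
            }

            // ... build $this->data for the form and render helloworld.tpl,
            // exactly as the copied google_talk controller already does.
        }

        protected function validate() {
            // Only users with "modify" permission on this module may save settings
            return $this->user->hasPermission('modify', 'module/helloworld');
        }
    }

On the catalog side, the flow is reversed: the cloned frontend controller typically reads the saved value back through $this->config->get() and passes it to helloworld.tpl for display.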
Changes made at the catalog folder

Go to catalog/controller/module/, copy the google_talk.php file, paste it in the same folder, rename it to helloworld.php, and open it. Look for the following line:

    class ControllerModuleGoogleTalk extends Controller {

Change the class name as follows:

    class ControllerModuleHelloworld extends Controller {

Now look for google_talk, replace all occurrences with helloworld, and save it.

Go to catalog/language/english/module/, copy the google_talk.php file, paste it in the same folder, rename it to helloworld.php, and open it. Look for Live Chat, replace it with Hello World, and then save it.

Go to catalog/view/theme/default/template/module/, copy the google_talk.tpl file, paste it in the same folder, and rename it to helloworld.tpl.

With the preceding file and code changes, our Hello World module is ready to be installed. Now log in to the admin section and go to Extensions | Module, look for Hello World, click on [Install], and then click on [Edit]. Type the content that you would like to show on the frontend in the Hello World Content field, click on the Add Module button, provide the settings as per your needs, and click on Save.

Understanding the global library methods used in OpenCart

OpenCart has many predefined methods which can be called wherever they are needed: in the controller, in the model, and in the view template files. You can find the system-level library files at system/library/. We have described all the library functions so that they are easy for programmers to use. For example:

    $this->customer->login($email, $password, $override = false)

This logs a customer in. It checks the customer's username and password if $override is passed as false; otherwise it checks only the current logged-in status and the e-mail. If it finds a matching entry, the cart and wishlist entries are retrieved, and the customer ID, first name, last name, e-mail, telephone, fax, newsletter subscription status, customer group ID, and address ID become globally accessible for the customer. It also updates the IP address from which the customer logged in.

Developing and customizing modules, pages, order totals, shipping, and payment extensions in OpenCart

We describe the featured module of OpenCart, create a feedback module and a tips module, and show how the code works and is managed. We help you learn how to make pages and modules in OpenCart, and visualize how the database structure is used: how data is saved per language and per store, which helps OpenCart programmers understand and follow the patterns of OpenCart's coding style. We describe how forms work, how to list data from the database, how editing works in a module, and how changes are saved. We also show how to code shipping modules, order total modules, and payment modules in OpenCart, and outline how templates, models, and controllers work for extensions.

Summary

In this way we have learned how to clone and write code for OpenCart modules, and the changes made in the admin and catalog folders. We also learned about the global library methods used in OpenCart, and covered the ways to code OpenCart extensions.

Further resources on this subject:

- Upgrading OpenCart [Article]
- OpenCart FAQs [Article]
- OpenCart: Layout Structure [Article]


Working with Audio

Packt
23 Oct 2013
9 min read
Planning the audio

In Camtasia Studio, we can stack multiple audio tracks on top of each other. While this is a useful and powerful way to build a soundtrack, it can lead to cluttered audio output if we do not plan ahead. Audio tracks can be used for a wide range of purposes, so it's best to storyboard the audio to avoid creating a confusing mix. If we consider how each audio track will be used before we begin to overlay files on the timeline, we can visualize the end result and resist the temptation to layer too many audio effects on top of each other.

The importance of consistency

Producing professional video in Camtasia Studio comes down to consistency and detail. The more consistent we are, and the more we pay attention to detail, the more professional the result will be. By being consistent in our use of audio effects, we can avoid creating unintentional distractions or misleading the viewer. For example, if we choose to use a ping sound to represent a mouse click, we should make sure that all mouse clicks use the same ping sound so that the viewer understands and associates the sound with the action.

A note on background music

When deciding what audio we want in our video, we should always think about our target audience and the type of message we are trying to deliver. Never use background music unless it adds to the video content. For example, background music can be a useful way of engaging our viewer, but if we are delivering an important health and safety message, or delivering a quiz, a backing track may be distracting. If our audience is the staff in customer-facing departments, we may not want to include audio tracks at all; we wouldn't want the sound from our videos to be audible to a customer.

Types of audio

There are three main types of audio we can add to our video:

- Voice-over tracks
- Background music
- Sound effects

Preparing to record a voice-over

Various factors affect the quality and consistency of voice-over recordings. In Camtasia Studio we can add effects, but it's best to get the source audio right in the first instance. The factors are as follows:

- We often don't pay attention to the qualities and tones in our own voices, but they can and do change. From day to day, your tone of voice can subtly change, and air temperature, illness, or mood can affect the way your voice sounds in a recording.
- The environment we use to record a voice-over can have a dramatic effect on the end result. Some rooms will give your voice natural reverb; others will sound very dead.
- The equipment we use will affect the recording. For example, different microphones will produce different results.

When we prepare for a voice-over recording, we must aim to keep our voice, environment, and equipment as stable and consistent as possible. That means we should aim to record the voice-over in one session so that we can control all these factors. We may choose a different person to provide the voice-over; again, we should take a consistent approach in how we use their voice.

Voice-over recording is always a long process and involves trial, error, and multiple takes. We should allow more time than we feel is strictly necessary, as many recordings inevitably overrun. If any sections of the recording are questionable, we should aim to record all of the alternatives in the same session for a seamless result.
The studio environment

Most Camtasia Studio users do not have access to a professional recording studio. This need not be a problem: we can use practically any quiet room to record our voice-over, although there are some basic pointers that will improve the result. When choosing a studio location, consider the following:

- Ambient noise: Try to record in a quiet environment. If we can use an empty room where there are no passers-by or devices making noise, our recording will be clearer. Choose a room away from potential sources of noise (busy corridors, main roads, and so on).
- Noise leakage: Ensure that any doors and windows are closed to minimize noise pollution from outside the room and outside the building.
- Equipment noise: Ensure that all unnecessary programs on the PC are closed to prevent any unwanted sounds or alerts. End any background tasks, such as email checkers or task schedulers, and ensure any instant messaging software is closed or in offline mode.
- Positioning: Experiment with placing the microphone in different places around the room. The acoustics of a room can greatly affect the quality of a recording, and taking time to find the best place for the microphone will help. For efficiency, we can test the audio quality quickly by wearing headphones while speaking into the microphone.
- Posture: Standing up opens up the diaphragm and improves the sound of our voice when we record. Avoid recording while seated, and hold any notes or papers at eye level to maintain a constant tone.

Using scripts

When it comes to voice-over recording, a well-prepared script is the most important piece of preparation we can do. Working from a script is far simpler than attempting to make up our narration as we go along. It helps to maintain a good pace in the video and greatly reduces the need for multiple takes, making recording far more efficient. Creating a script need not be time-consuming; if we have already planned out and recorded our video track, writing a script will be far simpler.

Writing an effective script

The script you write should support the action in the video and maintain a healthy pace. There are a number of tips we can bear in mind to do this:

- Sync audio with video: Plan the script to coincide with any actions we take in the video. This may mean incorporating pauses into the script to allow a certain on-screen action to complete.
- Be flexible: We may need to go back and lengthen a section of video to incorporate the voice-over and explanation. It is better to do this than rush the voice-over and attempt to force it to fit.
- Use basic copywriting techniques: We should consider the message in the video and use the appropriate style. For example, if we are describing a process, we would want to use the active voice. In an internal company update, we may want to adopt a more conversational tone.
- Be direct and concise: A short and simple statement is far easier to process than a long, drawn-out argument.

We should always test our script prior to the recording session, and be prepared to re-write and hone the content. Reading a script aloud is a useful way of estimating its length and picking out any awkward phrases that do not flow. We will save time if we perfect the script before we sit down in front of the microphone.

Recording equipment

Most laptop computers have a built-in microphone, as do some desktop computers.
While these microphones are perfectly adequate for video or audio chats and other casual uses, we should not use them to create Camtasia Studio recordings. Although the quality may be good, and the audio may be clear, these microphones often pick up a large amount of ambient noise, such as the fans inside the computer. Additionally, the audio captured using built-in microphones often requires processing and amplification, which can degrade its quality.

Camtasia Studio has a range of editing tools that can help you tweak your audio recording. However, processing should always be a last resort: the more we use a tool to process our voice-over, the more the source material is prone to being distorted. If we have better quality source material, we will not need to rely on these features, and this will make the editing process much simpler.

When working in Camtasia Studio, it is preferable to invest in a good quality external microphone. Basic microphones are inexpensive and offer considerably better audio recording than built-in microphones.

Choosing a microphone

External microphones are very affordable. Unless you have a specific need for a professional-standard microphone, we recommend a USB microphone. Many of these microphones are sold as podcasting microphones and are perfectly adequate for use with Camtasia Studio. There are two main types of external microphone:

- Consider a lapel microphone if you plan to operate the computer as you record, or present to the camera while you are speaking. Lapel microphones clip on to your clothing and leave your hands free.
- If you are more comfortable working at a desk, a microphone with a sturdy tripod stand will be a good investment.

An external microphone with built-in noise cancellation can give us a degree of control at the recording stage, rather than having to edit out noise later. A good stand will give us a greater degree of flexibility when it comes to microphone placement.

How to set up an external microphone

We can set up the external microphone before we begin recording by following these steps:

1. Navigate to Tools | Voice Narration. The Voice Narration screen is displayed.
2. Click on Audio setup wizard.... The Audio Setup Wizard screen is displayed.
3. Select the Audio device, as shown in the following screenshot.

Summary

In this article, we have looked at a range of ways to improve the quality of the audio in our Camtasia Studio projects. We have considered voice-over recording techniques, equipment, editing, sound effects, and background music.

Further resources on this subject:

- Editing attributes [Article]
- Basic Editing [Article]
- Video Editing in Blender using Video Sequence Editor: Part 1 [Article]


Improving Your Development Speed

Packt
23 Oct 2013
7 min read
What all developers want is to do their job as fast as they can without sacrificing the quality of their work. IntelliJ has a large range of features that will reduce the time spent in development. But to achieve the best performance that IntelliJ can offer, it is important that you understand the IDE and adapt some of your habits. In this article, we will navigate through the features that can help you do your job even faster. You will understand IntelliJ's main elements and how they work and, beyond this, learn how IntelliJ can help you organize your activities and the files you are working on. To further harness IntelliJ's abilities, you will also learn how to manage plugins and see a short list of plugins that can help you.

Identifying and understanding window elements

Before we start showing you techniques you can use to improve your performance with IntelliJ, you need to identify and understand the visual elements present in the main window of the IDE. Knowing these elements will help you find what you want faster. The following screenshot shows the IntelliJ main window.

The main window can be divided into seven parts, as shown in the previous screenshot:

1. The main menu contains options that you can use to do tasks such as creating projects, refactoring, managing files in version control, and more.
2. The main toolbar contains some essential options. Some buttons are shown or hidden depending on the configuration of the project; version control buttons are an example of this.
3. The Navigation Bar is sometimes a quick and good alternative for navigating easily and quickly through the project files.
4. Tool tabs are shown on both sides of the screen and at the bottom of IntelliJ. They represent the tools that are available for the project. Some tabs are available only when facets are enabled in the project (for example, the Persistence tab).
5. When the developer clicks on a tool tab, a tool window appears. These windows present the project from different perspectives, and the options available in each tool window provide the developer with a wide range of development tasks.
6. The editor is where you write your code.
7. The Status Bar indicates the current IDE state and provides some options to manipulate the environment. For example, you can hide the tool tabs by clicking on the icon at the bottom-left of the window.

In almost all elements, context menus are available. These menus provide extra options that may complement and ease your work. For example, the context menu available in the toolbar provides an option to hide itself and another to customize the menus and toolbars.

You will notice that some tool tabs have numbers. These numbers are used in conjunction with the Alt key to quickly access the tool window you want; Alt + 1, for example, will open the Project tool window.

Each tool window has different options; some present search facilities, others show specific options. They share a common structure: a title bar, a toolbar, and the content pane. Some tool windows don't have a toolbar and, in others, the options in the title bar may vary. However, all of them have at least two buttons in the rightmost part of the title bar: a gear and a small bar with an arrow. The first button is used to configure some properties of the tool and the second will just minimize the window.
The following screenshot shows some options in the Database tool:

The options available under the gear button generally differ from tool to tool. However, in the drop-down list you will find four common options: Pinned, Docked, Floating, and Split modes. As you may have already imagined, these options define how the tool window will be shown. The Pinned mode is very useful when it is unmarked; this way, when you focus on the code editor, you don't lose time minimizing the tool window.

Identifying and understanding code editor elements

The editor provides some elements that can facilitate navigation through the code and help identify problems in it. In the following screenshot, you can see how the editor is divided:

- The editor area, as you probably know, is where you edit your source code.
- The gutter area is where different types of information about the code are shown, using icons or special marks like breakpoints and ranges. The indicators used here aren't just for displaying information; depending on the indicator, you can also perform actions such as reverting changes or navigating through the code.
- The smart completion popup, as you've already seen, provides assistance to the developer in accordance with the current context.
- The document tabs area is where the tabs of each opened document are available. The type of document is identified by an icon, and the color of the file name shows its status in version control: blue stands for "modified", green for "new", red for "not in VCS", and black for "not changed". This component has a context menu that provides some other facilities as well.
- The marker bar is positioned on the right-hand side of the IDE, and its goal is to show the current status of the code. The square mark at the top can be green when your code is OK, yellow for non-critical warnings, and red for compilation errors. Below this square, the marker bar can show other colored marks that help the developer jump directly to the desired part of the code.

Sometimes, while you are coding, you may notice a small icon floating near the cursor; this icon indicates that there are some intentions available that could help you:

- One icon indicates that IntelliJ proposes a code modification that isn't strictly necessary; it covers anything from warning corrections to code improvements.
- Another indicates an intention action that can be used but doesn't provide any improvement or code correction.
- Another indicates that there is a quick fix available to correct an imminent code error.
- Another indicates that the alert for the intention is disabled, but the intention is still available.

The following figure shows the working intention:

Intention actions can be grouped into four categories, listed as follows:

- Create from usage is the kind of intention action that proposes the creation of code depending on the context. For example, if you enter a method name that doesn't exist, this intention will recognize it and propose creating the method.
- Quick fixes is the type of intention that responds to code mistakes, such as wrong type usage or missing resources.
- Micro refactoring is the kind of intention that is shown when the code is syntactically correct but could be improved (for readability, for example).
- Fragment action is the type of intention used when there are string literals of an injected language; this type of injection can be used to let you edit the corresponding sentence in another editor.
Intention actions can be enabled or disabled on the fly or in the Intentions section of the configuration dialog; by default, all intentions are enabled. Adding new intentions is possible only by installing plugins that provide them or by creating your own plugin. If you prefer, you can use the Alt + Enter shortcut to invoke the intentions popup.

Summary

As you have seen in this article, IntelliJ provides a wide range of functionalities that will improve your development speed. More important than knowing all the shortcuts IntelliJ offers is to understand what is possible to do with them and when to use each feature.

Resources for Article:

Further resources on this subject:
- NetBeans IDE 7: Building an EJB Application [Article]
- JBI Binding Components in NetBeans IDE 6 [Article]
- Smart Processes Using Rules [Article]


State of Play of BuddyPress Themes

Packt
22 Oct 2013
13 min read
(For more resources related to this topic, see here.)

What is BuddyPress?

BuddyPress had its first release in April 2009 and is a plugin that you use with WordPress to bring community features to your site. BuddyPress is capable of a great deal, from connecting and bringing together an existing community through to building new communities. A few things you can create are:

- A community for your town or village
- An intranet for a company
- A safe community for students of a school to interact with each other
- A community around a product or event
- A support network for people with the same illness

BuddyPress has a lot of different features, and you can choose which ones you want to use. These include groups, streams, messaging, and member profiles.

BuddyPress and WordPress are open source projects released under the GPL license. You can find out more about the GPL here: http://codex.wordpress.org/License. A team of developers works on the project and anyone can get involved and contribute. As you use BuddyPress, you may want to get more involved in the project itself or find out more. There are a number of ways you can do this:

- The main site http://buddypress.org/ and the development blog at http://bpdevel.wordpress.com.
- For support and information there is http://buddypress.org/support/ and http://codex.buddypress.org.
- If you use IRC, you can use the dedicated channels on irc.freenode.net: #buddypress or #buddypress-dev. The developer meeting is every Wednesday at 19:00 UTC in #buddypress-dev.

IRC (Internet Relay Chat) is a form of real-time Internet text messaging (chat). You can find out more here: http://codex.wordpress.org/IRC.

What is a theme?

Your site theme can be thought of as the site design, but it's more than colors and fonts. Themes work by defining templates, which are then used for each section of your site (for instance, a front page or a single blog post). In the case of a BuddyPress theme, it brings BuddyPress templates and functionality on top of the normal WordPress theme. At the heart of any BuddyPress theme are the same files a WordPress theme needs; here are some useful WordPress theme resources:

- WordPress.org theme handbook: http://make.wordpress.org/docs/theme-developer-handbook/
- WordPress CSS coding standards: http://make.wordpress.org/core/handbook/coding-standards/css/
- Theme check plugin (make sure you're using the WordPress standards): http://wordpress.org/extend/plugins/theme-check/
- There is a great section in the WordPress codex on design and layout: http://codex.wordpress.org/Blog_Design_and_Layout

How BuddyPress themes used to work

In the past, BuddyPress needed a theme to work. This theme had several variations; the latest of these was BP-Default, which is shown in the following screenshot:

This is the default theme before BuddyPress 1.7

This theme was a workhorse solution that gave everything a style, making sure you could be up and running with BuddyPress on your site. For a long time, the best way was to use this theme and create a child theme to then do what you wanted for your site.

A child theme inherits the functionality of the parent theme. You can add, edit, and modify in the child theme without affecting the parent. This is a great way to build on the foundation of an existing theme and still be able to apply updates without affecting the parent. For more information about child themes see: http://codex.wordpress.org/Child_Themes.

The trouble with default

The BuddyPress default theme allowed people without much knowledge to set up BuddyPress, but it wasn't ideal.
Fairly quickly, everything started to look the same, and the learning curve to create your own theme was steep. People made child themes that often just looked like clones. The fundamental issue, above all, was that a plugin needed a theme, and this wasn't the right way to do things. A plugin should work as well as it can in any theme. Work had already been done on bbPress to make the change to theme compatibility, and the time was right for BuddyPress to follow suit.

bbPress is a plugin that allows you to easily create forums in your WordPress site and also integrate into BuddyPress. You can learn more about bbPress here: http://bbpress.org.

Theme compatibility

Theme compatibility, in simple terms, means that BuddyPress works with any theme. Everything that is required to make it work is included in the plugin. You can now go and get any theme from wordpress.org or other sites and be up and running in no time.

Should you want to use your own custom template, it's really easy thanks to theme compatibility. So long as you give it the right name and place it in the right location in your theme, BuddyPress will use your file instead of its own when loading. Theme compatibility is always active, as this allows BuddyPress to add new features in future versions. You can see this in the following screenshot:

Here you see BuddyPress 1.7 in the Twenty Twelve theme

Do you still need a BuddyPress theme?

Probably by now you're asking yourself if you even need a theme for BuddyPress. While theme compatibility means you don't have to have one, there are many reasons why you'd want to. A custom theme allows you to tailor the experience for your community. Different communities have different needs; one size doesn't fit all. Theme compatibility is great, but there is still a need for themes that focus on community and the other features of BuddyPress. A default experience will always be default and not tailored to your site or to BuddyPress. When creating a community theme, there are many things that a normal WordPress theme won't have taken into account. This is what we mean nowadays when we refer to BuddyPress themes. There is still a need for these and a need to understand how to create them.

Communities

A community is a place for people to belong. Creating a community isn't something you should do lightly. The best advice when creating a community is to start small and build up functionality over time. Social networking, on the other hand, is purely about connections. Social networking is only part of a community; most communities have far more than just social networking. When you are creating a community, you want to focus on enabling your members to make strong connections.

Niche communities

BuddyPress is perfect for creating niche communities. These are pockets of people united by a cause, by a hobby, or by something else. However, don't think niche means small; niche communities can be of any size. Having a home online for a community, a place where people of the same mindset and same experiences can unite, is powerful. Niche communities can be forces for change and places of comfort; they give people a chance to be heard, or sometimes just a place to be.

Techniques

A community should encourage engagement and provide paths for users to easily accomplish tasks. You may provide areas for user promotion or user-generated content, set achievements for completing tasks, or allow users to collect points and appear on leaderboards.
The idea of points and achievements comes from something called gamification, and it is a powerful technique.

"Gamification techniques leverage people's natural desires for competition, achievement, status, self-expression, altruism and closure" —Wikipedia (http://en.wikipedia.org/wiki/Gamification)

Community is something you experience offline and online. When you are creating a community, it's great to look at both. Think about how groups of people interact, how they get tasks done together, and what hurdles they experience. Knowing a little bit about psychology will help you create a design that works for the community.

Responsive design

"Responsive web design (RWD) is a web design approach aimed at crafting sites to provide an optimal viewing experience—easy reading and navigation with a minimum of resizing, panning, and scrolling—across a wide range of devices (from desktop computer monitors to mobile phones)" —Wikipedia

When you create your theme, you need to be careful not to design it for only one device. The world is multi-device and your theme should adapt to the device it's being used on without causing any issues. There are several options open to you when looking to create a theme that works across all devices.

What about adaptive design?

Early responsive designs relied heavily on CSS media queries to create layouts that adapted to the device's screen. From this emerged the term adaptive design. There are many definitions of what responsive and adaptive mean. Adaptive loosely means progressive enhancement, and responsive web design is part of this. You could say that responsive web design is the technique and adaptive design is the movement.

A media query consists of a media type and zero or more expressions that check for the conditions of particular media features: http://www.w3.org/TR/css3-mediaqueries/.

Mobile first

Mobile first makes sure that the site works first for the smallest screen size. As you move up, styles are added using media queries to make use of the extra screen real estate. The phrase was coined by Luke Wroblewski. This is a lighter approach and often preferred, as it loads just what the device needs.

Do you need an app?

In the WordPress world, mobile themes tend to come in the form of plugins or apps. Historically, most WordPress sites had a mobile theme that was shown only to visitors on a mobile device. You were presented with a default theme that looked the same regardless of the site, and there was often no way to see the non-mobile experience (a point of frustration on larger mobile devices). When thinking about mobile apps or plugins, a good rule of thumb is to ask if it's bringing something extra to your site. This could mean focusing on one aspect, or exposing functionality so that mobile devices can integrate with it. If it is simply changing the look of your site, you can deal with that in your theme. You shouldn't need an app or plugin to get your site working on a mobile device.

In the wild – BuddyPress custom themes

There is a whole world out there of great BuddyPress examples, so let's take a little bit of time to get inspired and see what others have done. The following list shows some of the custom themes:

- Cuny Academic Commons: http://commons.gc.cuny.edu/. This is run by the City University of New York and is for supporting faculty initiatives along with building community.
- Enterprise Nation: http://enterprisenation.com. This site is a business club encouraging connections and sharing of information.
- Goed en wel.nl: http://goedenwel.nl. This is a community for 50-65 year olds, built around five areas of interest.
- Shift.ms: http://shift.ms. This is a community run by and for young people with multiple sclerosis.
- Teen Summer Challenge: http://teensummerchallenge.org. This site is part of an annual reading program for teens run by Pierce County Library.
- Trainerspace: http://www.trainerspace.com. This site is a place to go to find a trainer, and those trainers can also have their own website.

If you want to discover more great BuddyPress sites, BP Inspire is a great showcase site at http://www.bpinspire.com/.

What are the options while creating a theme?

There are several options available to you when creating a BuddyPress theme:

- Use an existing WordPress theme with theme compatibility
- Use an existing BuddyPress theme
- Create custom CSS with another theme
- Create custom templates, for use in a child theme or an existing theme
- Create everything from scratch

Later, we will take a look at each of these and how each works.

WordPress themes

You can get free WordPress themes from the WordPress theme repository http://wordpress.org/extend/themes/ and also buy from many great theme shops. You can also use a theme framework. A framework goes beyond a theme and has a lot of extra tools rolled in to help you build your theme. You can learn more about theme frameworks here: http://codex.wordpress.org/Theme_Frameworks.

BuddyPress themes

You can get both free and paid BuddyPress themes if you want something a little bit more than theme compatibility. A BuddyPress theme, as we discussed earlier, brings something extra, and it may bring that to all the features or just some. It is a theme designed to take your site beyond what theme compatibility brings.

Free themes

If you are looking for free themes, your first stop should be the theme repository on WordPress.org: http://wordpress.org/extend/themes/. The theme repository page will look like the following screenshot:

This is the WordPress.org theme repository where you can find BuddyPress themes

You can find BuddyPress themes by searching for the word buddypress. Here you can see a range of themes. A few free themes you may want to consider are:

- Custom Community by svenl77: http://wordpress.org/extend/themes/custom-community
- Fanwood by tungdo: http://wordpress.org/extend/themes/fanwood
- Frisco by David Carson: http://wordpress.org/extend/themes/frisco-for-buddypress
- Status by buddypressthemers: http://wordpress.org/extend/themes/status
- 3oneseven: http://3oneseven.com/buddypress-themes/
- Infinity Theme Engine: http://infinity.presscrew.com
- Commons in a box: http://commonsinabox.org. All the power of the CUNY site (mentioned earlier) with a theme engine.

Themes to buy

Buying a theme is a good way to get a lot of features from custom themes without the cost or learning involved in custom development. The downside, of course, is that you will not have a custom look, but if your budget and time are limited this may be a good option. When you buy a theme, be careful and, as with anything online, make sure you buy from a reputable source.
There are a number of theme shops selling BuddyPress themes, including (not an exhaustive list):

- BuddyBoss: http://www.buddyboss.com
- Mojo Themes: http://www.mojo-themes.com
- Press Crew: http://shop.presscrew.com
- Theme Loom: http://themeloom.com/themes/pure-theme/
- Theme Forest: http://themeforest.net/category/wordpress/buddypress
- Themekraft: http://themekraft.com
- WPMU DEV: http://premium.wpmudev.org

Summary

As you can see from this article, the way things are done since the release of BuddyPress 1.7 is a little different than before, thanks to theme compatibility. This is great and means there is no better time than now to start developing BuddyPress themes. In this article we skimmed over a lot of important issues that anyone making a theme has to consider. These are important, just like knowing how to build. You should keep in mind the users you are creating for and what their experience will be. We all view sites on different devices, and we all need to have a similar experience, not one where we're penalized.

Resources for Article:

Further resources on this subject:
- Customizing WordPress Settings for SEO [Article]
- The Basics of WordPress and jQuery Plugin [Article]
- Anatomy of a WordPress Plugin [Article]

The Spotfire Architecture Overview

Packt
21 Oct 2013
6 min read
(For more resources related to this topic, see here.)

The companies of today face innumerable market challenges due to the ferocious competition of a globalized economy. Hence, providing excellent service and earning customer loyalty are priorities for their survival. In order to achieve both goals and gain a competitive edge, companies can turn to the information generated by their digitalized systems, their IT. All the recorded events, from Human Resources (HR) to Customer Relationship Management (CRM), Billing, and so on, can be leveraged to better understand the health of a business. The purpose of this article is to present a tool that behaves as a digital event analysis enabler: the TIBCO Spotfire platform. In this article, we will list the main characteristics of this platform, while also presenting its architecture and describing its components.

TIBCO Spotfire

Spotfire is a visual analytics and business intelligence platform from TIBCO Software. It is part of a new breed of tools created to bridge the gap between the massive amount of data that corporations produce today and the business users who need to interpret this data in order to have the best foundation for the decisions they make. In my opinion, there is no better description of what TIBCO Spotfire delivers than the definition of visual analytics made in the publication named Illuminating the Path: The Research and Development Agenda for Visual Analytics.

"Visual analytics is the science of analytical reasoning facilitated by interactive visual interfaces". –Illuminating the Path: The Research and Development Agenda for Visual Analytics, James J. Thomas and Kristin A. Cook, IEEE Computer Society Press

The TIBCO Spotfire platform offers the possibility of creating very powerful (yet easy to interpret and interact with) data visualizations. From real-time Business Activity Monitoring (BAM) to Big Data, data-based decision making becomes easy; the what, the why, and the how become evident. Spotfire definitely allowed TIBCO to establish itself in an area where until recently it had very little experience and no sought-after products. The main features of this platform are:

- More than 30 different data sources to choose from: several databases (including Big Data Teradata), web services, files, and legacy applications.
- Big Data analysis: Spotfire delivers the power of MapReduce to regular users.
- Database analysis: data visualizations can be built on top of databases using information links. There is no need to pull the analyzed data into the platform, as a live link is maintained between Spotfire and the database.
- Visual join: the capability of merging data from several distinct sources into a single visualization.
- Rule-based visualizations: the platform enables the creation and tailoring of rules, and the filtering of data. These features make it easy to emphasize outliers and foster management by exception. It is also possible to highlight other important characteristics, such as commonalities and anomalies.
- Data drill-down: for data visualizations it is possible to create one (or many) details visualizations. This drill-down can be performed in multiple steps, as a drill-down of a drill-down, and so on.
- Real-time integration with other TIBCO tools.

Spotfire platform 5.x

The platform is composed of several intercommunicating components, each one with its own responsibilities (with clear separation of concerns), enabling a clustered deployment.
As this is an introductory article, we will not dive deep into all the components, but we will identify the main ones and the underlying architecture. A depiction of the platform's components is shown in the following diagram:

The descriptions of each of the components in the preceding diagram are as follows:

- TIBCO Spotfire Server: The Spotfire Server makes a set of services available to the analytics clients (TIBCO Spotfire Professional and TIBCO Spotfire Web Player Server): user services, responsible for authentication and authorization; deployment services, which handle the consistent upgrade of Spotfire clients; library services, which manage the repository of analysis files; and information services, which persist information links to external data sources. The Server component is available for several operating systems, such as Linux, Solaris, and Windows.
- TIBCO Spotfire Professional: This is a client application (fat client) that focuses on the creation of data visualizations, taking advantage of all of the platform's features. This is the main client application, and because of that, it has all the data manipulation functionalities enabled, such as the use of data filters, drill-down, working online and offline (working offline allows embedding data in the visualizations for use in limited connectivity environments), and exporting visualizations to MS PowerPoint, PDF, and HTML. It is only available for Windows environments.
- TIBCO Spotfire Web Player Server: This offers users the possibility of accessing and interacting with visualizations created in TIBCO Spotfire Professional. This web application enables the use of an Internet browser as a client, allowing for thin client access where no software has to be installed on the user's machine. Please be aware that visualizations cannot be created or altered this way; they can only be accessed in a read-only mode, where all rules remain enabled, as does data drill-down. Since it is developed in ASP.NET, this server must be deployed on a Microsoft IIS server, and so it is restricted to Microsoft Windows environments.
- Server Database: This database is accessed by the TIBCO Spotfire Server for the storage of server information. It should not be confused with the data stores that the platform can access to fetch data from and build visualizations. Only two vendor databases are supported for this role: Oracle Database and Microsoft SQL Server.
- TIBCO Spotfire Web Player Client: These are thin clients to the Web Player Server. Several Internet browsers can be used on various operating systems (Microsoft Internet Explorer on Windows, Mozilla Firefox on Windows and Mac OS, Google Chrome on Windows and Android, and so on). TIBCO has also made available an application for iPad, which is available in iTunes. For more details on the iPad client application, please navigate to: https://itunes.apple.com/en/app/spotfire-analytics/id417436823?mt=8

Summary

In this article, we introduced the main attributes of the Spotfire platform in the scope of visual analytics, and we detailed the platform's underlying architecture.

Resources for Article:

Further resources on this subject:
- Core Data iOS: Designing a Data Model and Building Data Objects [Article]
- Database/Data Model Round-Trip Engineering with MySQL [Article]
- Drilling Back to Source Data in Dynamics GP 2013 using Dashboards [Article]


Using OpenShift

Packt
21 Oct 2013
5 min read
(For more resources related to this topic, see here.)

Each of these utilizes the OpenShift REST API at the backend; therefore, as a user, we could potentially orchestrate OpenShift using the API with common command-line utilities such as curl to write scripts for automation. We could also use the API to write our own custom user interface, if we had the desire. In the following sections, we will explore each of the currently supported user experiences, all of which can be intermixed because they communicate with the backend in a uniform fashion using the REST API previously mentioned.

Getting started using OpenShift

As discussed previously, we will be using the OpenShift Online free hosted service for the example portions. OpenShift Online has the lowest barrier of entry from a user's perspective because we will not have to deploy our own OpenShift PaaS before being able to utilize it. Since we will be using the OpenShift Online service, the very first step is to visit the website and sign up for a free account via https://openshift.redhat.com/app/account/new.

New Account Form

Once this step is complete, we will find an e-mail in the inbox of the address we provided during sign up, with a subject line similar to Confirm your Red Hat OpenShift account; inside that e-mail will be a URL that needs to be followed to complete the setup and verification step. Now that we've successfully completed the sign-up phase, let's move on to exploring the different ways in which we can use and interact with OpenShift.

Command-line utilities

Due to the advancements in modern computing and the advent of mobile devices such as tablets and smart phones, we are often accustomed to a Graphical User Interface (GUI) rather than a Command-Line Interface (CLI) for most of our computing needs. This trend is stronger in the realm of web applications because of the rich visual experiences that can be delivered using next generation web technologies. However, those of us in the development and system administration circles of the world are no strangers to the CLI, and we know that it is often the most powerful way to accomplish an array of tasks pertaining to development and administration. Much of this is a credit to powerful shell environments that have their roots in traditional UNIX environments; popular examples of these are bash and zsh. Also, in more recent years, PowerShell for the Microsoft Windows platform has aimed to provide relatively similar CLI power. The shell, as it is referenced here, is a UNIX shell: a command interpreter that supports features such as variables, functions, pipes, I/O redirection, variable substitution, flow control, conditionals, the ability to be scripted, and more. There is also a POSIX standard for a shell that defines a standard set of features and behaviors that must be complied with, allowing for portability of complex scripts. With this inherent power at the fingertips of the person who wields the command line, the development team of the OpenShift PaaS has written a command-line utility, much in the spirit of offering powerful utilities to its users and developers.
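This utility is named rhc and, as we will see shortly, it is distributed as a RubyGem, so getting it onto a workstation is usually a short exercise. The following is only a minimal sketch, assuming Ruby and RubyGems are already installed and that you answer the prompts of the interactive setup wizard with your own OpenShift Online credentials:

$ gem install rhc
$ rhc setup

The rhc setup command walks through authenticating against OpenShift Online, writing a local configuration file, and uploading an SSH key, after which the commands shown next can be used as-is.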
Before we get too deep into the details, let's quickly look at what normal application creation and deployment requires in OpenShift, using the following commands:

$ rhc app create myawesomewebapp ruby-1.9
$ cd myawesomewebapp
(Write, create, and implement code changes)
$ git commit -a -m "wrote awesome code"
$ git push

It will be discussed at length shortly, but for a quick rundown, the rhc app create myawesomewebapp ruby-1.9 command creates an application, which runs on OpenShift using ruby-1.9 as the programming platform. Behind the scenes, it provisions space and resources and configures services for us. It also creates a git repository that is then cloned locally, in our example named myawesomewebapp, and in order to access this, we need to change directories into the git repository. That is precisely what the next command, cd myawesomewebapp, does.

And you're live, running your web application in the cloud. While this is an extremely high-level overview and there are some prerequisites necessary, normal use of OpenShift is that easy. In the following section, we will discuss at length all the steps necessary to launch a live application in OpenShift Online using the rhc command-line utility and git.

This command-line utility, rhc, is written in the Ruby programming language and is distributed as a RubyGem (https://rubygems.org/). This is the recommended method of installation for Ruby modules, libraries, and utilities due to the platform-independent nature of Ruby and the ease of distribution of gems. The rhc command-line utility is also available using the native package management for both Fedora and Red Hat Enterprise Linux (via the EPEL repository, available at https://fedoraproject.org/wiki/EPEL) by running the yum install rubygem-rhc command.

Another noteworthy benefit of RubyGems is that they can be installed to a user's home directory, allowing them to be utilized even in environments where systems are centrally managed by an IT department. RubyGems are installed using the gem package manager, so users of GNU/Linux package managers such as yum, apt-get, and pacman, or of Mac OS X's community Homebrew (brew) package manager, will be familiar with the concept. For those unfamiliar with these concepts, a package manager tracks a piece of software, called a package, along with its dependencies, and handles installation, updates, and removal. We will take a short moment to tangent into the topic of RubyGems before moving on to the command-line utility for OpenShift, to ensure that we don't leave out any background information.

Summary

Hopefully, we can select our preferred method of deploying on OpenShift, and developers of all backgrounds, preferences, and development platforms will feel at home working with OpenShift as a development and deployment platform.

Resources for Article:

Further resources on this subject:
- What is Oracle Public Cloud? [Article]
- Features of CloudFlare [Article]
- vCloud Networks [Article]


Getting Started with Twitter Flight

Packt
16 Oct 2013
6 min read
(For more resources related to this topic, see here.)

Simplicity

First and foremost, Flight is simple. Most frameworks provide layouts, data models, and utilities that tend to produce big and confusing APIs, and learning curves are steep. In contrast, you'll probably only ever use about 10 Flight methods, and three of those are almost identical. All components and mixins follow the same simple format; once you've learned one, you've learned them all. Simplicity means fast ramp-up times for new developers, who should be able to understand individual components quickly.

Efficient complexity management

In most frameworks, the complexity of the code increases almost exponentially with the number of features. Dependency diagrams often look like a set of trees, each with branches and roots intermingling to create a dense thicket of connections. A simple change in one place could easily create an unforeseen error in another, or a chain of failures that could take down the whole application.

Flight applications are instead built up from reliable, reusable artifacts known as components. Each component knows nothing of the rest of the application; it simply listens for and triggers events. Components behave like cells in an organism. They have well-defined input and output, are exhaustively testable, and are loosely coupled. A component's cellular nature means that introducing more components has almost no effect on the overall complexity of the code, unlike traditional architectures. The structure remains flat, without any spaghetti code. This is particularly important in large applications.

Complexity management is important in any application, but when you're dealing with hundreds or thousands of components, you need to know that they aren't going to have unforeseen knock-on effects. This flat, cellular structure also makes Flight well-suited to large projects with large or remote development teams. Each developer can work on an independent component, without first having to understand the architecture of the entire application.

Reusability

Flight components have well-defined interfaces and are loosely coupled, making it easy to reuse them within an application, and even across different applications. This separates Flight from other frameworks such as Backbone or AngularJS, where functionality is buried inside layers of complexity and is usually impossible to extract.

Not only does this make it easier and faster to build complex applications in Flight, but it also offers developers the opportunity to give back to the community. There are already a lot of useful Flight components and mixins being open sourced. Try searching for "flight-" on Bower or GitHub, or check out the list at http://flight-components.jit.su/ (a short Bower sketch follows at the end of this section). Twitter has already been taking advantage of this reusability factor within the company, sharing components such as Typeahead (Twitter's search autocomplete) between Twitter.com and TweetDeck, something which would have been unimaginable a year ago.

Agnostic architecture

Flight has an agnostic architecture. For example, it doesn't matter which templating system is used, or even if one is used at all. Server-side, client-side, or plain old static HTML are all the same to Flight. Flight does not impose a data model, so any method of data storage and processing can be used behind the scenes. This gives the developer the freedom to change all aspects of the stack without affecting the UI, and the ability to introduce Flight to an existing application without conflict.
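As promised above, here is what exploring and pulling in community components with Bower might look like from the terminal. This is only a rough sketch: it assumes you already have Node.js and Bower installed, and that the packages you find are registered under the names their authors chose (Flight itself is assumed here to be registered simply as flight):

$ bower search flight-
$ bower install flight

The search lists registered packages whose names contain "flight-", and bower install pulls a package and its dependencies into your project's component directory.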
Improved performance

Performance is about a lot more than how fast the code executes or how efficient it is with memory. Time to first load is a very important factor. When a user loads a web page, the request must be processed, data must be gathered, and a response is sent. The response is then rendered by the client. Server-side processing and data gathering are fast; latency and interpretation make rendering slow.

One of the largest factors in response and rendering speed is the sheer amount of code being sent over the wire: the more code required to render a page, the slower the rendering will be. Most modern JavaScript frameworks use deferred loading (for example, via RequireJS) to reduce the amount of code sent in the first response. However, all this code is still needed to be able to render a page, because layout and templating systems only exist on the client. Flight's architecture allows templates to be compiled and rendered on the server, so the first response is a fully-formed web page. Flight components can then be attached to existing DOM nodes and determine their state from the HTML, rather than having to request data over XMLHttpRequest (XHR) and generate the HTML themselves.

Well-organized freedom

Back in the good old days of JavaScript development, it was all a bit of a free-for-all. Everyone had their own way of doing things. Code was idiosyncratic rather than idiomatic. In a lot of ways this was a pain: it was hard to onboard new developers and still harder to keep a codebase consistent and well-organized. On the other hand, there was very little boilerplate code, and it was possible to get things done without having to first read lengthy documentation on a big API.

jQuery built on this ideal and reduced the amount of boilerplate code required. It made JavaScript code easier to read and write, while not imposing any particular requirements in terms of architecture. What jQuery failed to do (and was never intended to do) was provide an application-level structure. It remained all too easy for code to become a spaghetti mess of callbacks and anonymous functions.

Flight solves this problem by providing much-needed structure while maintaining a simple, architecture-agnostic approach. Its API empowers developers, helping them to create elegant and well-ordered code while retaining a sense of freedom. Put simply, it's a joy to use.

Summary

The Flight API is small and should be familiar to any jQuery user, producing a shallow learning curve. The atomic nature of components makes them reliable and reusable and creates truly scalable applications, while Flight's agnostic architecture allows developers to use any technology stack and even introduce Flight into existing applications.

Resources for Article:

Further resources on this subject:
- Integrating Twitter and YouTube with MediaWiki [Article]
- Integrating Twitter with Magento [Article]
- Drupal Web Services: Twitter and Drupal [Article]


Configuring Manage Out to DirectAccess Clients

Packt
14 Oct 2013
14 min read
In this article by Jordan Krause, the author of the book Microsoft DirectAccess Best Practices and Troubleshooting, we will have a look at how Manage Out is configured for DirectAccess clients.

DirectAccess is obviously a wonderful technology from the user's perspective. There is literally nothing that they have to do to connect to company resources; it just happens automatically whenever they have Internet access. What isn't talked about nearly as often is the fact that DirectAccess is possibly of even greater benefit to the IT department. Because DirectAccess is so seamless and automatic, your Group Policy settings, patches, scripts, and everything else that you want to use to manage and manipulate those client machines are always able to run. You no longer have to wait for the user to launch a VPN or come into the office for their computer to be secured with the latest policies. You no longer have to worry about laptops being off the network for weeks at a time, coming back into the network after having been connected to dozens of public hotspots while someone was on vacation with them.

While many of these management functions work right out of the box with a standard DirectAccess configuration, there are some functions that will need a couple of extra steps to get them working properly. That is our topic of discussion for this article. We are going to cover the following topics:

- Pulls versus pushes
- What does Manage Out have to do with IPv6?
- Creating a selective ISATAP environment
- Setting up client-side firewall rules
- RDP to a DirectAccess client
- No ISATAP with multisite DirectAccess

(For more resources related to this topic, see here.)

Pulls versus pushes

Often, when thinking about management functions, we think of them as software or settings that are being pushed out to the client computers. This is actually not true in many cases. A lot of management tools are initiated on the client side, and so their method of distributing settings and patches is actually a client pull. A pull is a request that has been initiated by the client; in this case, the server is simply responding to that request. In the DirectAccess world, this kind of request is handled very differently from an actual push, which would be any case where the internal server or resource creates the initial outbound communication with the client, a true outbound initiation of packets.

Pulls typically work just fine over DirectAccess right away. For example, Group Policy processing is initiated by the client. When a laptop decides that it's time for a Group Policy refresh, it reaches out to Active Directory and says "Hey AD, give me my latest stuff". The Domain Controllers then simply reply to that request, and the settings are pulled down successfully. This works all day, every day over DirectAccess. Pushes, on the other hand, require some special considerations. This scenario is what we commonly refer to as DirectAccess Manage Out, and it does not work by default in a stock DirectAccess implementation.

What does Manage Out have to do with IPv6?

IPv6 is essentially the reason why Manage Out does not work until we make some additions to your network. "But DirectAccess in Server 2012 handles all of my IPv6 to IPv4 translations so that my internal network can be all IPv4, right?" The answer to that question is yes, but those translations only work in one direction.
For example, in our previous Group Policy processing scenario, the client computer calls out for Active Directory, and those packets traverse the DirectAccess tunnels using IPv6. When the packets get to the DirectAccess server, it sees that the Domain Controller is IPv4, so it translates those IPv6 packets into IPv4 and sends them on their merry way. The Domain Controller receives those IPv4 packets and responds in kind back to the DirectAccess server. Since there is now an active thread and translation running on the DirectAccess server for that communication, it knows that it has to take those IPv4 packets coming back (as a response) from the Domain Controller and spin them back into IPv6 before sending them across the IPsec tunnels back to the client. This all works wonderfully, and there is absolutely no configuration that you need to do to accomplish this behavior.

However, what if you wanted to send packets out to a DirectAccess client computer from inside the network? One of the best examples of this is a Helpdesk computer trying to RDP into a DirectAccess-connected computer. To accomplish this, the Helpdesk computer needs to have IPv6 routability to the DirectAccess client computer. Let's walk through the flow of packets to see why this is necessary.

First of all, if you have been using DirectAccess for any duration of time, you might have realized by now that the client computers register themselves in DNS with their DirectAccess IP addresses when they connect. This is normal behavior, but what may not look "normal" to you is that those records they are registering are AAAA (IPv6) records. Remember, all DirectAccess traffic across the internet is IPv6, using one of these three transition technologies to carry the packets: 6to4, Teredo, or IP-HTTPS. Therefore, when the clients connect, whatever transition tunnel is established has an IPv6 address on the adapter (you can see it inside ipconfig /all on the client), and those addresses will register themselves in DNS, assuming your DNS is configured to allow it.

When the Helpdesk personnel type CLIENT1 in their RDP client software and click on connect, the software is going to reach out to DNS and ask, "What is the IP address of CLIENT1?" One of two things is going to happen. If that Helpdesk computer is connected to an IPv4-only network, it is obviously only capable of transmitting IPv4 packets, and DNS will hand it the DirectAccess client computer's A (IPv4) record, from the last time the client was connected inside the office. Routing will fail, of course, because CLIENT1 is no longer sitting on the physical network. The following screenshot is an example of pinging a DirectAccess-connected client computer from an IPv4 network:

How do we resolve this behavior? We need to give that Helpdesk computer some form of IPv6 connectivity on the network. If you have a real, native IPv6 network running already, you can simply tap into it. Each of your internal machines that needs this outbound routing, as well as the DirectAccess server or servers, all need to be connected to this network for it to work. However, what I find in the field is that almost nobody is running any kind of IPv6 on their internal networks, and they really aren't interested in starting now. This is where the Intra-Site Automatic Tunnel Addressing Protocol, more commonly referred to as ISATAP, comes into play. You can think of ISATAP as a virtual IPv6 cloud that runs on top of your existing IPv4 network.
It enables computers inside your network, like that Helpdesk machine, to establish an IPv6 connection with an ISATAP router. When this happens, the Helpdesk machine will get a new network adapter, visible via ipconfig /all, named ISATAP, and it will have an IPv6 address. Yes, this does mean that the Helpdesk computer, or any machine that needs outbound communications to the DirectAccess clients, has to be capable of talking IPv6, so those machines must typically be Windows 7, Windows 8, Server 2008, or Server 2012.

What if your switches and routers are not capable of IPv6? No problem. Similar to the way that 6to4, Teredo, and IP-HTTPS take IPv6 packets and wrap them inside IPv4 so they can make their way across the IPv4 internet, ISATAP also takes IPv6 packets and encapsulates them inside IPv4 before sending them across the physical network. This means that you can establish this ISATAP IPv6 network on top of your IPv4 network without needing to make any infrastructure changes at all.

So, do I now need to go purchase an ISATAP router to make this happen? No, and this is the best part: your DirectAccess server is already an ISATAP router; you simply need to point those internal machines at it.

Creating a selective ISATAP environment

All of the Windows operating systems of the past few years have ISATAP client functionality built right in. This has been the case since Vista, I believe, but I have yet to encounter anyone using Vista in a corporate environment, so for the sake of our discussion we are generally talking about Windows 7, Windows 8, Server 2008, and Server 2012. For any of these operating systems, out of the box, all you have to do is give it somewhere to resolve the name ISATAP, and it will go ahead and set itself up with a connection to that ISATAP router. So, if you wanted to immediately enable all of your ISATAP-capable internal machines to be ISATAP-connected, all you would have to do is create a single host record in DNS named ISATAP and point it at the internal IP address of your DirectAccess server. To get that to work properly, you would also have to tweak DNS so that ISATAP was no longer part of the global query block list, but I'm not even going to detail that process, because my emphasis here is that you should not set up your environment this way. Unfortunately, some of the step-by-step guides available on the web for setting up DirectAccess include this step. Even more unfortunately, if you have ever worked with UAG DirectAccess, you'll remember that on the IP address configuration screen the GUI actually told you to go ahead and set ISATAP up this way. Please do not create a DNS host record named ISATAP! If you have already done so, please consider this article to be a guide on your way out of danger.

The primary reason why you should stay away from doing this is that Windows prefers IPv6 over IPv4. Once a computer is set up with a connection to an ISATAP router, it receives an IPv6 address which registers itself in DNS, and from that point onward, whenever two ISATAP machines communicate with each other, they are using IPv6 over the ISATAP tunnel. This is potentially problematic for a couple of reasons. First, all ISATAP traffic routes by default through the ISATAP router for which it is configured, so your DirectAccess server is now essentially the default gateway for all of these internal computers. This can cause performance problems and even network flooding.
The second reason is that because these packets are now IPv6, even though they are wrapped up inside IPv4, the tools you use to monitor internal traffic are not going to be able to see this traffic, at least not in the same capacity as they would for normal IPv4 traffic. It is in your best interest not to follow this global approach for implementing ISATAP, and instead take the slightly longer road and create what I call a "selective ISATAP environment", where you have complete control over which machines are connected to the ISATAP network and which ones are not.

Many DirectAccess installs don't require ISATAP at all. Remember, it is only used for those instances where you need true outbound reach to the DirectAccess clients. I recommend installing DirectAccess without ISATAP first and testing all of your management tools. If they work without ISATAP, great! If they don't, then you can create your selective ISATAP environment.

To set ourselves up for success, we need to create a simple Active Directory security group and a GPO. The combination of these things is going to allow us to decisively grant or deny access to the ISATAP routing features on the DirectAccess server.

Creating a security group and DNS record

First, let's create a new security group in Active Directory. This is just a normal group like any other, typically global or universal, whichever you prefer. This group is going to contain the computer accounts of the internal computers to which we want to give that outbound reaching capability. So, typically, the computer accounts that we will eventually add into this group are Helpdesk computers, SCCM servers, maybe a management "jump box" terminal server, that kind of thing. To keep things intuitive, let's name the group DirectAccess – ISATAP computers or something similar.

We will also need a DNS host record created. For obvious reasons, we don't want to call it ISATAP, but perhaps something such as Contoso_ISATAP, swapping your name in, of course. This is just a regular DNS A record, and you want to point it at the internal IPv4 address of the DirectAccess server. If you are running a clustered array of DirectAccess servers that are configured for load balancing, then you will need multiple DNS records. All of the records have the same name, Contoso_ISATAP, and you point them at each internal IP address being used by the cluster. So, one gets pointed at the internal Virtual IP (VIP), and one gets pointed at each of the internal Dedicated IPs (DIP). In a two-node cluster, you will have three DNS records for Contoso_ISATAP.

Creating the GPO

Now go ahead and follow these steps to create a new GPO that is going to contain the ISATAP connection settings:

1. Create a new GPO, and name it something such as DirectAccess – ISATAP settings.
2. Place the GPO wherever it makes sense to you, keeping in mind that these settings should only apply to the ISATAP group that we created. One way to manage the distribution of these settings is a strategically placed link. Probably the better way to handle distribution of these settings is to reflect the same approach that the DirectAccess GPOs themselves take. This would mean linking this new GPO at a high level, like the top of the domain, and then using security filtering inside the GPO settings so that it only applies to the group that we just created. This way, you ensure that in the end the only computers to receive these settings are the ones that you add to your ISATAP group.
I typically take the security filtering approach, because it closely reflects what DirectAccess itself does with GPO filtering. So, create and link the GPO at a high level, and then inside the GPO properties, go ahead and add the group (and only the group; remove everything else) to the Security Filtering section, like what is shown in the following screenshot:

Then move over to the Details tab and set the GPO Status to User configuration settings disabled.

Configuring the GPO

Now that we have a GPO which is being applied only to the special ISATAP group that we created, let's give it some settings to apply. What we are doing with this GPO is configuring the computers which we want to be ISATAP-connected with the ISATAP server address with which they need to communicate, which is the DNS name that we created for ISATAP.

First, edit the GPO and set the ISATAP Router Name by configuring the following setting: Computer Configuration | Policies | Administrative Templates | Network | TCPIP Settings | IPv6 Transition Technologies | ISATAP Router Name = Enabled (and populate your DNS record).

Second, in the same location within the GPO, we want to enable the ISATAP state with the following configuration: Computer Configuration | Policies | Administrative Templates | Network | TCPIP Settings | IPv6 Transition Technologies | ISATAP State = Enabled State.

Adding machines to the group

All of our settings are now squared away; time to test! Start by taking the computer account of a single machine inside your network from which you want to be able to reach out to DirectAccess client computers, and add that computer account to the group that we created. Perhaps pause for a few minutes to give Active Directory a chance to replicate, and then simply restart that computer. The next time it boots, it will grab those settings from the GPO, reach out to the DirectAccess server acting as the ISATAP router, and have an ISATAP IPv6 connection. If you run ipconfig /all on that internal machine, you will see the ISATAP adapter and address now listed, as shown in the following screenshot:

And now, if you try to ping the name of a client computer that is connected via DirectAccess, you will see that it now resolves to the IPv6 record. Perfect! But wait, why are my pings timing out? Let's explore that next.
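Before we do, one quick aside: if you prefer to script the prerequisites from this section rather than clicking through the consoles, the security group and the DNS host record can also be created from an elevated command prompt on a domain controller. The following is only a rough sketch; the OU path, DNS server name, zone, and IP address are placeholders that you would swap for your own values:

dsadd group "CN=DirectAccess - ISATAP computers,OU=Groups,DC=contoso,DC=com" -secgrp yes -scope g
dnscmd DC01 /RecordAdd contoso.com Contoso_ISATAP A 192.168.1.10

For a load-balanced array, repeat the dnscmd line once for the internal VIP and once for each internal DIP, as described earlier.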

Navigation Stack - Robot Setups

Packt
10 Oct 2013
7 min read
The navigation stack in ROS

In order to understand the navigation stack, you should think of it as a set of algorithms that use the sensors of the robot and the odometry so that you can control the robot using a standard message. It can move your robot to another position without problems (for example, without crashing, getting stuck in some location, or getting lost). You would assume that this stack can be easily used with any robot. This is almost true, but it is necessary to tune some configuration files and write some nodes to use the stack.

The robot must satisfy some requirements before it can use the navigation stack:

- The navigation stack can only handle differential drive and holonomic-wheeled robots, and the shape of the robot must be either a square or a rectangle. However, it can also do certain things with biped robots, such as robot localization, as long as the robot does not move sideways.
- It requires that the robot publishes information about the relationships between the positions of all the joints and sensors.
- The robot must send messages with linear and angular velocities.
- A planar laser must be on the robot to create the map and perform localization. Alternatively, you can generate something equivalent to several lasers or a sonar, or you can project the values to the ground if they are mounted in another place on the robot.

The following diagram shows you how the navigation stack is organized. You can see three groups of boxes with colors (gray and white) and dotted lines. The plain white boxes indicate the stacks that are provided by ROS, and they have all the nodes to make your robot really autonomous:

In the following sections, we will see how to create the parts marked in gray in the diagram. These parts depend on the platform used; this means that it is necessary to write code to adapt the platform to be used in ROS and by the navigation stack.

Creating transforms

The navigation stack needs to know the position of the sensors, wheels, and joints. To do that, we use the TF (Transform Frames) software library, which manages a transform tree. You could do this with mathematics, but if you have a lot of frames to calculate, it would get complicated and messy. Thanks to TF, we can add more sensors and parts to the robot, and TF will handle all the relations for us.

If we put the laser 10 cm backwards and 20 cm above the origin of the coordinates of base_link, we would need to add a new frame to the transformation tree with these offsets. Once inserted and created, we could easily know the position of the laser with regard to the base_link frame or the wheels. The only thing we need to do is call the TF library and get the transformation.

Creating a broadcaster

Let's test it with some simple code.
Create a new file in chapter7_tutorials/src with the name tf_broadcaster.cpp, and put the following code inside it: #include <ros/ros.h> #include <tf/transform_broadcaster.h> int main(int argc, char** argv){ ros::init(argc, argv, "robot_tf_publisher"); ros::NodeHandle n; ros::Rate r(100); tf::TransformBroadcaster broadcaster; while(n.ok()){ broadcaster.sendTransform( tf::StampedTransform( tf::Transform(tf::Quaternion(0, 0, 0, 1), tf::Vector3(0.1, 0.0, 0.2)), ros::Time::now(),"base_link", "base_laser")); r.sleep(); } } Remember to add the following line in your CMakelist.txt file to create the new executable: rosbuild_add_executable(tf_broadcaster src/tf_broadcaster.cpp) And we also create another node that will use the transform, and it will give us the position of a point of a sensor with regard to the center of base_link (our robot). Creating a listener Create a new file in chapter7_tutorials/src with the name tf_listener.cpp and input the following code: #include <ros/ros.h> #include <geometry_msgs/PointStamped.h> #include <tf/transform_listener.h> void transformPoint(const tf::TransformListener& listener){ //we'll create a point in the base_laser frame that we'd like to transform to the base_link frame geometry_msgs::PointStamped laser_point; laser_point.header.frame_id = "base_laser"; //we'll just use the most recent transform available for our simple example laser_point.header.stamp = ros::Time(); //just an arbitrary point in space laser_point.point.x = 1.0; laser_point.point.y = 2.0; laser_point.point.z = 0.0; geometry_msgs::PointStamped base_point; listener.transformPoint("base_link", laser_point, base_point); ROS_INFO("base_laser: (%.2f, %.2f. %.2f) -----> base_link: (%.2f, %.2f, %.2f) at time %.2f", laser_point.point.x, laser_point.point.y, laser_point.point.z, base_point.point.x, base_point.point.y, base_point.point.z, base_point.header.stamp.toSec()); ROS_ERROR("Received an exception trying to transform a point from "base_laser" to "base_link": %s", ex.what()); } int main(int argc, char** argv){ ros::init(argc, argv, "robot_tf_listener"); ros::NodeHandle n; tf::TransformListener listener(ros::Duration(10)); //we'll transform a point once every second ros::Timer timer = n.createTimer(ros::Duration(1.0), boost::bind(&transformPoint, boost::ref(listener))); ros::spin(); } Remember to add the line in the CMakeList.txt file to create the executable. Compile the package and run both the nodes using the following commands: $ rosmake chapter7_tutorials $ rosrun chapter7_tutorials tf_broadcaster $ rosrun chapter7_tutorials tf_listener Then you will see the following message: [ INFO] [1368521854.336910465]: base_laser: (1.00, 2.00. 0.00) -----> base_link: (1.10, 2.00, 0.20) at time 1368521854.33 [ INFO] [1368521855.336347545]: base_laser: (1.00, 2.00. 0.00) -----> base_link: (1.10, 2.00, 0.20) at time 1368521855.33 This means that the point that you published on the node, with the position (1.00, 2.00, 0.00) relative to base_laser, has the position (1.10, 2.00, 0.20) relative to base_link. As you can see, the tf library performs all the mathematics for you to get the coordinates of a point or the position of a joint relative to another point. A transform tree defines offsets in terms of both translation and rotation between different coordinate frames. Let us see an example to help you understand this. 
We are going to add another laser, say, on the back of the robot (base_link): The system had to know the position of the new laser to detect collisions, such as the one between wheels and walls. With the TF tree, this is very simple to do and maintain and is also scalable. Thanks to tf, we can add more sensors and parts, and the tf library will handle all the relations for us. All the sensors and joints must be correctly configured on tf to permit the navigation stack to move the robot without problems, and to exactly know where each one of their components is. Before starting to write the code to configure each component, keep in mind that you have the geometry of the robot specified in the URDF file. So, for this reason, it is not necessary to configure the robot again. Perhaps you do not know it, but you have been using the robot_state_publisher package to publish the transform tree of your robot. We used it for the first time; therefore, you do have the robot configured to be used with the navigation stack. Watching the transformation tree If you want to see the transformation tree of your robot, use the following command: $ roslaunch chapter7_tutorials gazebo_map_robot.launch model:= "`rospack find chapter7 _tutorials`/urdf/robot1_base_04.xacro" $ rosrun tf view_frames The resultant frame is depicted as follows: And now, if you run tf_broadcaster and run the rosrun tf view_frames command again, you will see the frame that you have created by code: $ rosrun chapter7_tutorials tf_broadcaster $ rosrun tf view_frames The resultant frame is depicted as follows:
Read more
  • 0
  • 0
  • 2549


Introducing Magento extension development

Packt
10 Oct 2013
13 min read
Creating Magento extensions can be an extremely challenging and time-consuming task depending on several factors, such as your knowledge of Magento internals, your overall development skills, and the complexity of the extension functionality itself. Having a deep insight into Magento internals, its structure, and the accompanying tips and tricks will provide you with a strong foundation for clean and unobtrusive Magento extension development. The word unobtrusive should be a constant thought throughout your entire development process. The reason is simple: given the massiveness of the Magento platform, it is way too easy to build extensions that clash with other third-party extensions. This is usually a beginner's flaw, which we will hopefully avoid once we have finished reading this article.

The examples listed in this article are targeted towards Magento Community Edition 1.7.0.2, the last stable release at the time of this writing. Throughout this article we will be referencing our URL examples as if they are executing on the magento.loc domain. You are free to set your local Apache virtual host and hosts file to any domain you prefer, as long as you keep this in mind. If you're hearing about virtual host terminology for the first time, please refer to the Apache Virtual Host documentation.

Here is a quick summary of each of the files and folders found in the Magento root directory:

- .htaccess: This file is a directory-level configuration file supported by several web servers, most notably the Apache web server. It controls mod_rewrite for fancy URLs and sets configuration server variables (such as memory limit) and the PHP maximum execution time.
- .htaccess.sample: This is basically a .htaccess template file used for creating new stores within subfolders.
- api.php: This is primarily used for the Magento REST API, but can be used for SOAP and XML-RPC API server functionality as well.
- app: This is where you will find the Magento core code files for the backend and the frontend. This folder is basically the heart of the Magento platform. Later on, we will dive into this folder for more details, given that this is the folder that you as an extension developer will spend most of your time on.
- cron.php: This file, when triggered via URL or via console PHP, will trigger certain Magento cron job logic.
- cron.sh: This file is a Unix shell script version of cron.php.
- downloader: This folder is used by the Magento Connect Manager, which is the functionality you access from the Magento administration area by navigating to System | Magento Connect | Magento Connect Manager.
- errors: This folder hosts a slightly separate piece of Magento functionality, the one that jumps in with error handling when your Magento store gets an exception during code execution.
- favicon.ico: This is your standard 16 x 16 px website icon.
- get.php: This file hosts a feature that allows core media files to be stored in and served from the database. With the Database File Storage system in place, Magento redirects requests for media files to get.php.
- includes: This folder is used by the Mage_Compiler extension, whose functionality can be accessed via Magento administration under System | Tools | Compilation. The idea behind the Magento compiler feature is that you end up with a PHP system that pulls all of its classes from one folder, thus giving it a massive performance boost.
- index.php: This is the main entry point to your application, the main loader file for Magento, and the file that initializes everything. Every request for every Magento page goes through this file.
- index.php.sample: This file is just a backup copy of the index.php file.
- js: This folder holds the core Magento JavaScript libraries, such as Prototype, scriptaculous.js, ExtJS, and a few others, some of which are from Magento itself.
- lib: This folder holds the core Magento PHP libraries, such as 3DSecure, Google Checkout, phpseclib, Zend, and a few others, some of which are from Magento itself.
- LICENSE*: These are the Magento licence files in various formats (LICENSE_AFL.txt, LICENSE.html, and LICENSE.txt).
- mage: This is the Magento Connect command-line tool. It allows you to add/remove channels, install and uninstall packages (extensions), and perform various other package-related tasks.
- media: This folder contains all of the media files, mostly just images from various products, categories, and CMS pages.
- php.ini.sample: This file is a sample php.ini file for PHP CGI/FastCGI installations. Sample files are not actually used by the Magento application.
- pkginfo: This folder contains text files that largely operate as debug files to inform us about changes when extensions are upgraded in any way.
- RELEASE_NOTES.txt: This file contains the release notes and changes for various Magento versions, starting from version 1.4.0.0 and later.
- shell: This folder contains several PHP-based shell tools, such as the compiler, indexer, and logger.
- skin: This folder contains various CSS and JavaScript files specific to individual Magento themes. Files in this folder and its subfolders go hand in hand with files in the app/design folder, as these two locations together result in one fully featured Magento theme or package.
- var: This folder contains sessions, logs, reports, configuration cache, lock files for application processes, and possibly various other files distributed among individual subfolders. During development, you can freely select all the subfolders and delete them, as Magento will recreate all of them on the next page request. From the standpoint of a Magento extension developer, you might find yourself looking into the var/log and var/report folders every now and then.

Code pools

The app/code folder is a placeholder for what is called a codePool in Magento. Usually, there are three code pools in Magento, that is, three subfolders: community, core, and local. The formula for your extension code location should be something like app/code/community/YourNamespace/YourModuleName/ or app/code/local/YourNamespace/YourModuleName/. There is a simple rule as to whether to choose the community or local codePool:

- Choose the community codePool for extensions that you plan to share across projects, or possibly upload to Magento Connect
- Choose the local codePool for extensions that are specific to the project you are working on and won't be shared with the public

For example, let's imagine that our company name is Foggyline and the extension we are building is called Happy Hour. As we wish to share our extension with the community, we can put it into a folder such as app/code/community/Foggyline/HappyHour/.

The theme system

In order to successfully build extensions that visually manifest themselves to the user either on the backend or the frontend, we need to get familiar with the theme system. The theme system is comprised of two distributed parts: one found under the app/design folder and the other under the root skin folder. Files found under the app/design folder are PHP template files and XML layout configuration files.
Within the PHP template files you can find a mix of HTML, PHP, and some JavaScript. There is one important thing to know about Magento themes: they have a fallback mechanism. For example, if someone in the administration interface sets the configuration to use a theme called hello from the default package, and that theme is missing a file such as app/design/frontend/default/hello/template/catalog/product/view.phtml, Magento will use app/design/frontend/default/default/template/catalog/product/view.phtml from the default theme; and if that file is missing as well, Magento will fall back to the base package for the app/design/frontend/base/default/template/catalog/product/view.phtml file.

First, all your layout and view files should go under the /app/design/frontend/default/default directory. Secondly, you should never overwrite an existing .xml layout or .phtml template file within the /app/design/frontend/default/default directory; rather, create your own. For example, imagine you are building a product image switcher extension and you conclude that you need to make some modifications to the app/design/frontend/default/default/template/catalog/product/view/media.phtml file. A more valid approach would be to create a proper XML layout update file with handles that rewrite the media.phtml usage to, let's say, media_product_image_switcher.phtml.

The model, resource, and collection

A model, for the better part, represents the data, and to a certain extent the business logic, of your application. Models in Magento take the Object Relational Mapping (ORM) approach, thus having the developer strictly deal with objects while their data is automatically persisted to the database. If you are hearing about ORM for the first time, please take some time to familiarize yourself with the concept. Theoretically, you could write and execute raw SQL queries in Magento. However, doing so is not advised, especially if you plan on distributing your extensions.

There are two types of models in Magento:

- Basic Data Model: This is the simpler model type, somewhat like an Active Record pattern based model. If you're hearing about Active Record for the first time, please take some time to familiarize yourself with the concept.
- EAV (Entity-Attribute-Value) Data Model: This is a complex model type, which enables you to dynamically create new attributes on an entity.

As the EAV Data Model is significantly more complex than the Basic Data Model, and the Basic Data Model will suffice most of the time, we will focus on the Basic Data Model and everything important surrounding it.

Each data model you plan to persist to the database, that is, each model that represents an entity, needs to have four files in order for it to work fully (a minimal sketch of these four files follows this list):

- The model file: This extends the Mage_Core_Model_Abstract class. It represents a single entity, its properties (fields), and possibly the business logic within it.
- The model resource file: This extends the Mage_Core_Model_Resource_Db_Abstract class. This is your connection to the database; think of it as the thing that saves your entity's properties (fields) to the database.
- The model collection file: This extends the Mage_Core_Model_Resource_Db_Collection_Abstract class. This is your collection of several entities, a collection that can be filtered, sorted, and manipulated.
- The installation script file: In its simplest definition, this is the PHP file through which you, in an object-oriented way, create your database table(s).
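To make those four files more concrete, here is a minimal, hypothetical sketch for an imaginary Foggyline_HappyHour extension. The class names, the foggyline_happyhour/happyhour alias, the setup resource name, and the table columns are all placeholder assumptions for illustration, not code from the article, and the matching config.xml declarations (model, resource model, entity, and setup resource) are omitted for brevity.

```php
<?php
// app/code/community/Foggyline/HappyHour/Model/Happyhour.php
class Foggyline_HappyHour_Model_Happyhour extends Mage_Core_Model_Abstract
{
    protected function _construct()
    {
        // "foggyline_happyhour/happyhour" is the model alias declared in config.xml
        $this->_init('foggyline_happyhour/happyhour');
    }
}

// app/code/community/Foggyline/HappyHour/Model/Resource/Happyhour.php
class Foggyline_HappyHour_Model_Resource_Happyhour extends Mage_Core_Model_Resource_Db_Abstract
{
    protected function _construct()
    {
        // Table alias and primary key column
        $this->_init('foggyline_happyhour/happyhour', 'entity_id');
    }
}

// app/code/community/Foggyline/HappyHour/Model/Resource/Happyhour/Collection.php
class Foggyline_HappyHour_Model_Resource_Happyhour_Collection
    extends Mage_Core_Model_Resource_Db_Collection_Abstract
{
    protected function _construct()
    {
        $this->_init('foggyline_happyhour/happyhour');
    }
}

// app/code/community/Foggyline/HappyHour/sql/foggyline_happyhour_setup/install-0.1.0.php
$installer = $this;
$installer->startSetup();
$table = $installer->getConnection()
    ->newTable($installer->getTable('foggyline_happyhour/happyhour'))
    ->addColumn('entity_id', Varien_Db_Ddl_Table::TYPE_INTEGER, null,
        array('identity' => true, 'unsigned' => true, 'nullable' => false, 'primary' => true))
    ->addColumn('title', Varien_Db_Ddl_Table::TYPE_TEXT, 255, array('nullable' => false));
$installer->getConnection()->createTable($table);
$installer->endSetup();
```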
The default Magento installation comes with several built-in shipping methods available: Flat Rate, Table Rates, Free Shipping, UPS, USPS, FedEx, and DHL. For some merchants this is more than enough; for others, you are free to build an additional custom shipping extension with support for one or more shipping methods. Be careful about the terminology here: a shipping method resides within a shipping extension, and a single extension can define one or more shipping methods. In this article we will learn how to create our own shipping method.

Shipping methods

There are two, unofficially divided, types of shipping methods:

- Static, where shipping cost rates are based on a predefined set of rules. For example, you can create a shipping method called 5+ and make it available to the customer for selection during checkout only if he added more than five products to the cart.
- Dynamic, where shipping cost rates are retrieved from various shipping providers. For example, you have a web service called ABC Shipping that exposes a SOAP web service API which accepts the products' weight, length, height, width, and shipping address, and returns the calculated shipping cost, which you can then show to your customer.

Experienced developers would probably expect one or more PHP interfaces to handle the implementation of new shipping methods. The same goes for Magento: implementing a new shipping method is done via an interface and via proper configuration (a minimal carrier sketch appears at the end of this article).

The default Magento installation also comes with several built-in payment methods available: PayPal, Saved CC, Check/Money Order, Zero Subtotal Checkout, Bank Transfer Payment, Cash On Delivery payment, Purchase Order, and Authorize.Net. For some merchants this is more than enough. Various additional payment extensions can be found on Magento Connect. For those that do not yet exist, you are free to build an additional custom payment extension with support for one or more payment methods. Building a payment extension is usually a non-trivial task that requires a lot of focus.

Payment methods

There are several unofficially divided types of payment method implementations, such as redirect payment, hosted (on-site) payment, and an embedded iframe. Two of them stand out as the most commonly used ones:

- Redirect payment: During the checkout, once the customer reaches the final ORDER REVIEW step, he/she clicks on the Place Order button. Magento then redirects the customer to the specific payment provider's website, where the customer is supposed to provide the credit card information and execute the actual payment. What's specific about this is that, prior to redirection, Magento needs to create the order in the system, and it does so by assigning this new order a Pending status. Later, if the customer provides valid credit card information on the payment provider's website, the customer gets redirected back to the Magento success page. The main concept to grasp here is that the customer might just close the payment provider's website and never return to your store, leaving your order indefinitely in Pending status. The great thing about this redirect type of payment method providers (gateways) is that they are relatively easy to implement in Magento.
- Hosted (on-site) payment: Unlike redirect payment, there is no redirection here; everything is handled on the Magento store. During the checkout, once the customer reaches the Payment Information step, he/she is presented with a form for providing the credit card information. Then, when he/she clicks on the Place Order button in the last ORDER REVIEW checkout step, Magento internally calls the appropriate payment provider web service, passing it the billing information. Depending on the web service response, Magento then internally sets the order status to Processing or some other status. For example, this payment provider web service can be a standard SOAP service with a few methods, such as orderSubmit. Additionally, we don't even have to use a real payment provider; we can just make a "dummy" payment implementation like the built-in Check/Money Order payment. You will often find that most merchants prefer this type of payment method, as they believe that redirecting the customer to a third-party site might negatively affect their sale. Obviously, with this payment method there is more overhead for you as a developer to handle the implementation. On top of that, there are security concerns about handling credit card data on the Magento side, in which case PCI compliance is obligatory. If this is your first time hearing about PCI compliance, please take some time to familiarize yourself with it. This type of payment method is slightly more challenging to implement than the redirect payment method.

Magento Connect

Magento Connect is one of the world's largest eCommerce application marketplaces, where you can find various extensions to customize and enhance your Magento store. It allows Magento community members and partners to share their open source or commercial contributions for Magento with the community. Publishing your extension to Magento Connect is a three-step process made up of:

- Packaging your extension
- Creating an extension profile
- Uploading the extension package

We will talk more about each of these later in the article. Only community members and partners have the ability to publish their contributions. Becoming a community member is simple: just register as a user on the official Magento website, https://www.magentocommerce.com. A member account is a requirement for packaging and publishing your extension.

Read more: Categories and Attributes in Magento: Part 2 | Integrating Twitter with Magento | Magento Fundamentals for Developers
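As promised in the shipping methods discussion above, here is a hedged sketch of what the "interface plus configuration" point boils down to for a custom carrier. The Foggyline_HappyHour module name, the happyhour carrier code, and the flat 5.00 rate are made-up placeholders, and the corresponding <carriers> entries in config.xml and system.xml are omitted; treat this as an illustration of the pattern rather than a complete, production-ready shipping extension.

```php
<?php
// app/code/community/Foggyline/HappyHour/Model/Carrier/Happyhour.php
// Minimal sketch of a static-rate carrier; names and rate are placeholders.
class Foggyline_HappyHour_Model_Carrier_Happyhour
    extends Mage_Shipping_Model_Carrier_Abstract
    implements Mage_Shipping_Model_Carrier_Interface
{
    protected $_code = 'happyhour';

    public function collectRates(Mage_Shipping_Model_Rate_Request $request)
    {
        if (!$this->getConfigFlag('active')) {
            return false;
        }
        $result = Mage::getModel('shipping/rate_result');
        $method = Mage::getModel('shipping/rate_result_method');
        $method->setCarrier($this->_code)
            ->setCarrierTitle($this->getConfigData('title'))
            ->setMethod('standard')
            ->setMethodTitle('Standard delivery')
            ->setPrice(5.00)   // a static, made-up rate for illustration
            ->setCost(5.00);
        $result->append($method);
        return $result;
    }

    public function getAllowedMethods()
    {
        return array('standard' => 'Standard delivery');
    }
}
```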

Top Features You Need to Know About – Responsive Web Design

Packt
10 Oct 2013
3 min read
Responsive web design

Nowadays, almost everyone has a smartphone or tablet in hand; this article prepares these individuals to adapt their portfolio to this new reality. Acknowledging that, today, there are tablets that are also phones and some laptops that are also tablets, we use an approach known as device agnostic, where instead of giving devices names such as mobile, tablet, or desktop, we refer to them as small, medium, or large. With this approach, we can cover a vast array of gadgets, from smartphones, tablets, laptops, and desktops to the displays on refrigerators, cars, watches, and so on.

Photoshop

Within the pages of this article, you will find two Photoshop templates that I prepared for you. The first is small.psd, which you may use to prepare your layouts for smartphones, small tablets, and even, to a certain extent, displays on a refrigerator. The second is medium.psd, which can be used for tablets, netbooks, or even displays in cars. I used these templates to lay out all the sizes of our website (portfolio) that we will work on in this article, as you can see in the following screenshot:

One of the principal elements of responsive web design is the flexible grid, and what I did with the Photoshop layout was to mimic those grids, which we will use later. With time, this will become easier and it won't be necessary to lay out every version of every page, but, for now, it is good to understand how things happen.

Code

Now that we have a preview of how the small version will look, it's time to code it. The first thing we will need is the fluid version of 960.gs, which you can download from https://raw.github.com/bauhouse/fluid960gs/master/css/grid.css and save as 960_fluid.css in the css folder. After that, let's create two more files in this folder, small.css and medium.css. We will use these files to maintain the organized versions of our portfolio (see the media query sketch at the end of this article for one way of scoping them). Lastly, let's link the files to our HTML document as follows:

    <head>
        <meta charset="utf-8">
        <meta name="viewport" content="width=device-width">
        <title>Portfolio</title>
        <link href="css/reset.css" rel="stylesheet" type="text/css">
        <link href="css/960_fluid.css" rel="stylesheet" type="text/css">
        <link href="css/main.css" rel="stylesheet" type="text/css">
        <link href="css/medium.css" rel="stylesheet" type="text/css">
        <link href="css/small.css" rel="stylesheet" type="text/css">
    </head>

If you reload your browser now, you should see that the portfolio stretches all over the browser. This occurs because the grid is now fluid. To fix the width to, at most, 960 pixels, we need to insert the following lines at the beginning of the main.css file:

    /* grid
    ================================ */
    .container_12 {
        max-width: 960px;
        margin: 0 auto;
    }

Once you reload the browser and resize the window, you will see that the display is overly stretched and broken. In order to fix this, keeping in mind the layouts we did in Photoshop, we can use the small and medium versions of the stylesheets.

Summary

In this article we saw how to prepare our desktop-only portfolio using Photoshop, and the method used to fix the broken and overly stretched display.

Resources for Article: Further resources on this subject: Web Design Principles in Inkscape | Building HTML5 Pages from Scratch | HTML5 Presentations - creating our initial presentation
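As a follow-up to the Code section above, here is a hedged sketch of how small.css and medium.css might scope their rules with media queries. The 767 px and 1023 px breakpoints and the specific grid overrides are illustrative assumptions, not values from the article; only the .container_12 and .grid_* class names come from the 960.gs grid used above.

```css
/* small.css - narrow viewports only (assumed breakpoint) */
@media only screen and (max-width: 767px) {
    .container_12 {
        padding: 0 10px;
    }
    .grid_4,
    .grid_8 {
        width: 100%;   /* stack the columns on small screens */
    }
}

/* medium.css - tablets and similar widths (assumed breakpoints) */
@media only screen and (min-width: 768px) and (max-width: 1023px) {
    .grid_4 {
        width: 50%;    /* two columns instead of three */
    }
}
```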

Navigation Stack – Robot Setups

Packt
09 Oct 2013
11 min read
This article is an introduction to the navigation stack and its powerful capabilities, clearly one of the greatest pieces of software that comes with ROS. TF is explained in order to show how to transform from the frame of one physical element to another; for example, the data received using a sensor or the command for the desired position of an actuator. We will see how to create a laser driver or simulate one. We will learn how the odometry is computed and published, and how Gazebo provides it. A base controller will be presented, including a detailed description of how to create one for your robot. We will see how to execute SLAM with ROS; that is, we will show you how you can build a map of the environment with your robot as it moves through it. Finally, you will be able to localize your robot in the map using the localization algorithms of the navigation stack.

The navigation stack in ROS

In order to understand the navigation stack, you should think of it as a set of algorithms that use the robot's sensors and odometry so that you can control the robot using a standard message. It can move your robot without problems (for example, without crashing, getting stuck in some location, or getting lost) to another position. You would assume that this stack can be easily used with any robot. This is almost true, but it is necessary to tune some configuration files and write some nodes to use the stack. The robot must satisfy some requirements before it uses the navigation stack:

- The navigation stack can only handle differential drive and holonomic-wheeled robots. The shape of the robot must be either a square or a rectangle. However, it can also do certain things with biped robots, such as robot localization, as long as the robot does not move sideways.
- It requires that the robot publishes information about the relationships between the positions of all the joints and sensors.
- The robot must send messages with linear and angular velocities.
- A planar laser must be on the robot to create the map and perform localization. Alternatively, you can generate something equivalent to several lasers or a sonar, or you can project the values to the ground if they are mounted in another place on the robot.

The following diagram shows you how the navigation stack is organized. You can see three groups of boxes with colors (gray and white) and dotted lines. The plain white boxes indicate those stacks that are provided by ROS, and they have all the nodes to make your robot really autonomous:

In the following sections, we will see how to create the parts marked in gray in the diagram. These parts depend on the platform used; this means that it is necessary to write code to adapt the platform to be used in ROS and by the navigation stack.

Creating transforms

The navigation stack needs to know the position of the sensors, wheels, and joints. To do that, we use the TF (Transform Frames) software library. It manages a transform tree. You could do this with mathematics, but if you have a lot of frames to calculate, it would be a bit complicated and messy. Thanks to TF, we can add more sensors and parts to the robot, and TF will handle all the relations for us. If we put the laser 10 cm backwards and 20 cm above the origin of the coordinates of base_link, we would need to add a new frame to the transformation tree with these offsets. Once it is inserted and created, we can easily know the position of the laser with regard to the base_link frame or the wheels.
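Before writing the broadcaster node in the next section, it is worth noting that for a fixed mounting offset like this one, tf also ships with a ready-made command-line broadcaster. A hedged, roughly equivalent one-liner, using the same frame names and a 100 ms publishing period as the code that follows, would be:

```
$ rosrun tf static_transform_publisher 0.1 0.0 0.2 0 0 0 base_link base_laser 100
```

The six numbers are the x, y, z offsets in meters followed by yaw, pitch, and roll in radians. Writing your own broadcaster, as shown next, becomes necessary once the transform is no longer static.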
The only thing we need to do is call the TF library and get the transformation.

Creating a broadcaster

Let's test it with a simple piece of code. Create a new file in chapter7_tutorials/src with the name tf_broadcaster.cpp, and put the following code inside it:

    #include <ros/ros.h>
    #include <tf/transform_broadcaster.h>

    int main(int argc, char** argv){
      ros::init(argc, argv, "robot_tf_publisher");
      ros::NodeHandle n;
      ros::Rate r(100);
      tf::TransformBroadcaster broadcaster;
      while(n.ok()){
        broadcaster.sendTransform(
          tf::StampedTransform(
            tf::Transform(tf::Quaternion(0, 0, 0, 1), tf::Vector3(0.1, 0.0, 0.2)),
            ros::Time::now(), "base_link", "base_laser"));
        r.sleep();
      }
    }

Remember to add the following line in your CMakeLists.txt file to create the new executable:

    rosbuild_add_executable(tf_broadcaster src/tf_broadcaster.cpp)

We will also create another node that uses the transform; it will give us the position of a point of a sensor with regard to the center of base_link (our robot).

Creating a listener

Create a new file in chapter7_tutorials/src with the name tf_listener.cpp and input the following code (note that the transform lookup is wrapped in a try/catch block so that the ROS_ERROR line has an exception to report):

    #include <ros/ros.h>
    #include <geometry_msgs/PointStamped.h>
    #include <tf/transform_listener.h>

    void transformPoint(const tf::TransformListener& listener){
      // we'll create a point in the base_laser frame that we'd like to
      // transform to the base_link frame
      geometry_msgs::PointStamped laser_point;
      laser_point.header.frame_id = "base_laser";
      // we'll just use the most recent transform available for our simple example
      laser_point.header.stamp = ros::Time();
      // just an arbitrary point in space
      laser_point.point.x = 1.0;
      laser_point.point.y = 2.0;
      laser_point.point.z = 0.0;
      try{
        geometry_msgs::PointStamped base_point;
        listener.transformPoint("base_link", laser_point, base_point);
        ROS_INFO("base_laser: (%.2f, %.2f, %.2f) -----> base_link: (%.2f, %.2f, %.2f) at time %.2f",
            laser_point.point.x, laser_point.point.y, laser_point.point.z,
            base_point.point.x, base_point.point.y, base_point.point.z,
            base_point.header.stamp.toSec());
      }
      catch(tf::TransformException& ex){
        ROS_ERROR("Received an exception trying to transform a point from \"base_laser\" to \"base_link\": %s", ex.what());
      }
    }

    int main(int argc, char** argv){
      ros::init(argc, argv, "robot_tf_listener");
      ros::NodeHandle n;
      tf::TransformListener listener(ros::Duration(10));
      // we'll transform a point once every second
      ros::Timer timer = n.createTimer(ros::Duration(1.0),
          boost::bind(&transformPoint, boost::ref(listener)));
      ros::spin();
    }

Remember to add the corresponding line in the CMakeLists.txt file to create the executable. Compile the package and run both nodes using the following commands:

    $ rosmake chapter7_tutorials
    $ rosrun chapter7_tutorials tf_broadcaster
    $ rosrun chapter7_tutorials tf_listener

Then you will see the following messages:

    [ INFO] [1368521854.336910465]: base_laser: (1.00, 2.00, 0.00) -----> base_link: (1.10, 2.00, 0.20) at time 1368521854.33
    [ INFO] [1368521855.336347545]: base_laser: (1.00, 2.00, 0.00) -----> base_link: (1.10, 2.00, 0.20) at time 1368521855.33

This means that the point that you published on the node, with the position (1.00, 2.00, 0.00) relative to base_laser, has the position (1.10, 2.00, 0.20) relative to base_link. As you can see, the tf library performs all the mathematics for you to get the coordinates of a point or the position of a joint relative to another point. A transform tree defines offsets in terms of both translation and rotation between different coordinate frames. Let us see an example to help you understand this.
We are going to add another laser, say, on the back of the robot (base_link): The system has to know the position of the new laser to detect collisions, such as the one between the wheels and the walls. With the TF tree, this is very simple to do and maintain, and it is also scalable. Thanks to tf, we can add more sensors and parts, and the tf library will handle all the relations for us.

All the sensors and joints must be correctly configured on tf to permit the navigation stack to move the robot without problems, and to know exactly where each of its components is. Before starting to write the code to configure each component, keep in mind that you already have the geometry of the robot specified in the URDF file. For this reason, it is not necessary to configure the robot again. Perhaps you do not know it, but you have been using the robot_state_publisher package to publish the transform tree of your robot. We have used it before; therefore, the robot is already configured to be used with the navigation stack.

Watching the transformation tree

If you want to see the transformation tree of your robot, use the following commands:

    $ roslaunch chapter7_tutorials gazebo_map_robot.launch model:="`rospack find chapter7_tutorials`/urdf/robot1_base_04.xacro"
    $ rosrun tf view_frames

The resultant frame is depicted as follows:

And now, if you run tf_broadcaster and run the rosrun tf view_frames command again, you will see the frame that you have created by code:

    $ rosrun chapter7_tutorials tf_broadcaster
    $ rosrun tf view_frames

The resultant frame is depicted as follows:

Publishing sensor information

Your robot can have a lot of sensors to see the world; you can program a lot of nodes to take this data and do something with it, but the navigation stack is prepared to use only the planar laser's data. So, your sensor must publish the data with one of these types: sensor_msgs/LaserScan or sensor_msgs/PointCloud.

We are going to use the laser located in front of the robot to navigate in Gazebo. Remember that this laser is simulated in Gazebo, and it publishes data on the base_scan/scan frame. In our case, we do not need to configure anything for our laser to use it with the navigation stack. This is because we have tf configured in the .urdf file, and the laser publishes data with the correct type. If you use a real laser, ROS might have a driver for it. If you are using a laser that has no driver in ROS and want to write a node to publish the data as sensor_msgs/LaserScan messages, you have an example template to do it, which is shown in the following section. But first, remember the structure of the sensor_msgs/LaserScan message.
Use the following command:

    $ rosmsg show sensor_msgs/LaserScan
    std_msgs/Header header
      uint32 seq
      time stamp
      string frame_id
    float32 angle_min
    float32 angle_max
    float32 angle_increment
    float32 time_increment
    float32 scan_time
    float32 range_min
    float32 range_max
    float32[] ranges
    float32[] intensities

Creating the laser node

Now we will create a new file in chapter7_tutorials/src with the name laser.cpp and put the following code in it:

    #include <ros/ros.h>
    #include <sensor_msgs/LaserScan.h>

    int main(int argc, char** argv){
      ros::init(argc, argv, "laser_scan_publisher");
      ros::NodeHandle n;
      ros::Publisher scan_pub = n.advertise<sensor_msgs::LaserScan>("scan", 50);
      unsigned int num_readings = 100;
      double laser_frequency = 40;
      double ranges[num_readings];
      double intensities[num_readings];
      int count = 0;
      ros::Rate r(1.0);
      while(n.ok()){
        // generate some fake data for our laser scan
        for(unsigned int i = 0; i < num_readings; ++i){
          ranges[i] = count;
          intensities[i] = 100 + count;
        }
        ros::Time scan_time = ros::Time::now();

        // populate the LaserScan message
        sensor_msgs::LaserScan scan;
        scan.header.stamp = scan_time;
        scan.header.frame_id = "base_link";
        scan.angle_min = -1.57;
        scan.angle_max = 1.57;
        scan.angle_increment = 3.14 / num_readings;
        scan.time_increment = (1 / laser_frequency) / (num_readings);
        scan.range_min = 0.0;
        scan.range_max = 100.0;
        scan.ranges.resize(num_readings);
        scan.intensities.resize(num_readings);
        for(unsigned int i = 0; i < num_readings; ++i){
          scan.ranges[i] = ranges[i];
          scan.intensities[i] = intensities[i];
        }
        scan_pub.publish(scan);
        ++count;
        r.sleep();
      }
    }

As you can see, we create a new topic with the name scan and the message type sensor_msgs/LaserScan; you should already be familiar with this message type. The name of the topic must be unique. When you configure the navigation stack, you will select this topic to be used for navigation. The following line shows how to create the topic with the correct name:

    ros::Publisher scan_pub = n.advertise<sensor_msgs::LaserScan>("scan", 50);

It is important to publish the data with the header, stamp, frame_id, and the other fields filled in because, if you do not, the navigation stack could fail with such data:

    scan.header.stamp = scan_time;
    scan.header.frame_id = "base_link";

Another important piece of data in the header is frame_id. It must be one of the frames created in the .urdf file and must have a frame published on the tf transform tree. The navigation stack will use this information to know the real position of the sensor and make transforms, such as the one between the sensor data and the obstacles. With this template, you can use any laser even if it has no driver for ROS. You only have to replace the fake data with the real data from your laser. This template can also be used to create something that looks like a laser but is not. For example, you could simulate a laser using stereoscopy or using a sensor such as a sonar.
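To sanity-check the fake laser before wiring it into the navigation stack, you can build it and inspect the topic from a second terminal. A small, hedged example follows; it assumes the executable was registered in CMakeLists.txt under the name laser:

```
$ rosmake chapter7_tutorials
$ rosrun chapter7_tutorials laser
# in a second terminal
$ rostopic hz /scan
$ rostopic echo /scan -n 1
```

rostopic hz should report roughly 1 Hz, matching the ros::Rate used in the node, and rostopic echo lets you confirm that frame_id, the angle limits, and the ranges array are populated as expected.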

Creating New Characters with Morphs

Packt
09 Oct 2013
7 min read
(For more resources related to this topic, see here.)

Understanding morphs

The word morph comes from metamorphosis, which means a change of the form or nature of a thing or person into a completely different one. Good old Franz Kafka had a field day with metamorphosis when he imagined poor Gregor Samsa waking up and finding himself changed into a giant cockroach. This concept applies to 3D modeling very well. As we are dealing with polygons, which are defined by groups of vertices, it's very easy to morph one shape into something different. All that we need to do is move those vertices around, and the polygons will stretch and squeeze accordingly.

To get a better visualization of this process, let's bring the Basic Female figure into the scene and show it with the wireframe turned on. To do so, after you have added the Basic Female figure, click on the DrawStyle widget on the top-right portion of the 3D Viewport. From that menu, select Wire Texture Shaded. This operation changes how Studio draws the objects in the scene during preview. It doesn't change anything else about the scene. In fact, if you try to render the image at this point, the wireframe will not show up in the render.

The wireframe is a great help in working with objects because it gives us a visual representation of the structure of a model. The type of wireframe that I selected in this case is superimposed on the normal texture used with the figure. This is not the only visualization mode available. Feel free to experiment with all the options in the DrawStyle menu; most of them have their use. The most useful, in my opinion, are the Hidden Line, Lit Wireframe, Wire Shaded, and Wire Texture Shaded options. Try the Wire Shaded option as well. It shows the wireframe with a solid gray color. This is, again, just for display purposes. It doesn't remove the texture from the figure. In fact, you can switch back to Texture Shaded to see Genesis fully textured.

Switching the view to use the simple wireframe or the shaded wireframe is a great way of speeding up your workflow. When Studio doesn't have to render the textures, the Viewport becomes more responsive and all operations take less time. If you have a slow computer, using the wireframe mode is a good way of getting a faster response time. Here are the Wire Texture Shaded and Wire Shaded styles side by side:

Now that we have the wireframe visible, the concept of morphing should be simpler to understand. If we pick any vertex in the geometry and move it somewhere, the geometry is still the same, with the same number of polygons and the same number of vertices, but the shape has shifted. Here is a practical example that shows Genesis loaded in Blender. Blender is a free, fully featured 3D modeling program. It has extremely advanced features that compete with commercial programs sold for thousands of dollars per license. You can find more information about Blender at http://www.blender.org. Be aware that Blender is a very advanced program with a rather difficult UI. In this image, I have selected a single polygon and pulled it away from the face:

In a similar way we can use programs such as modo or ZBrush to modify the basic geometry and come up with all kinds of different shapes. For example, there are people who specialize in reproducing the faces of celebrities as morphs for DAZ V4 or Genesis. What is important to understand about morphs is that they cannot add or remove any portion of the geometry. A morph only moves things around, sometimes to extreme degrees.
Morphs for Genesis or Gen4 figures can be purchased from several websites specialized in selling content for Poser and DAZ Studio. In particular, Genesis makes it very easy to apply morphs and even to mix them together.

Combining premade morphs to create new faces

The standard installation of Genesis provides some interesting ways of changing its shape. Let's start a new Studio scene and add our old friend, the Basic Female figure. Once Genesis is in the scene, double-click on it to select it. Now let's take a look at a new tool, the Shaping tab. It should be visible in the right-hand side pane. Click on the Shaping tab; it should show a list of the shapes available. The list should be something like this:

As we can see, the Basic Female shape is unsurprisingly dialed all the way to the top. The value of each slider goes from zero, no influence, to one, full influence of the morph. Morphs are not exclusive, so, for example, you can add a bit of Body Builder (scroll the list to the bottom if you don't see it) to be used in conjunction with the Basic Female morph. This will give us a muscular woman. This exercise also gives us an insight into the Basic Female figure that we have used up to this point. The figure is basically the raw Genesis figure with the Basic Female morph applied as a preset.

If we continue exploring the Shaping Editor, we can see that the various shapes are grouped by major body section. We have morphs for the shape of the head, the components of the face, the nose, eyes, mouth, and so on. Let's click on the head of Genesis and use the Camera: Frame tool to frame the head in the view. Move the camera a bit so that the face is visible frontally. We will apply a few morphs to the head to see how it can be transformed. Here is the starting point:

Now let's click on the Head category in the Shaping tab. In there we can see a slider labeled Alien Humanoid. Move the slider until it gets to 0.83. The difference is dramatic. Now let's click on the Eyes category. In there we find two values: Eyes Height and Eyes Width. To create an out-of-this-world creature, we need to break the rules of proportions a little bit, and that means removing the limits for a couple of parameters. Click on the gear button for the Eyes Height parameter and uncheck the Use Limits checkbox. Confirm by clicking on the Accept button. Once this is done, dial a value of 1.78 for the eyes height. The eyes should move dramatically up, toward the eyebrow. Lastly, let's change the neck; it's much too thick for an alien. Also in this case, we will need to disable the use of limits. Click on the Neck category and disable the limits for the Neck Size parameter. Once that is done, set the neck size to -1.74. Here is the result, side by side, of the transformation. This is quite a dramatic change for something that is done with just dials, without using a 3D modeling program. It gets even better, as we will see shortly.

Saving your morphs

If you want to save a morph to re-use it later, you can navigate to File | Save As | Shaping Preset…. To re-use a saved morph, simply select the target figure and navigate to File | Merge… to load the previously saved preset/morph. Why does Studio use the rather confusing term Merge for loading its own files? Nobody knows for sure; it's one of those weird decisions that DAZ's developers made long ago and never changed. You can merge two different Studio scenes, but it is rather confusing to think of loading a morph or a pose preset as a scene merge.
Try to mentally replace File | Merge with File | Load. This is the meaning of that menu option.