
How-To Tutorials - CMS and E-Commerce


Magento 2 – the New E-commerce Era

Packt
08 Mar 2016
17 min read
In this article by Ray Bogman and Vladimir Kerkhoff, the authors of the book, Magento 2 Cookbook, we will cover the basic tasks related to creating a catalog and products in Magento 2. You will learn the following recipes: Creating a root catalog Creating subcategories Managing an attribute set (For more resources related to this topic, see here.) Introduction This article explains how to set up a vanilla Magento 2 store. If Magento 2 is totally new for you, then lots of new basic whereabouts are pointed out. If you are currently working with Magento 1, then not a lot has changed since. The new backend of Magento 2 is the biggest improvement of them all. The design is built responsively and has a great user experience. Compared to Magento 1, this is a great improvement. The menu is located vertically on the left of the screen and works great on desktop and mobile environments: In this article, we will see how to set up a website with multiple domains using different catalogs. Depending on the website, store, and store view setup, we can create different subcategories, URLs, and product per domain name. There are a number of different ways customers can browse your store, but one of the most effective one is layered navigation. Layered navigation is located in your catalog and holds product features to sort or filter. Every website benefits from great Search Engine Optimization (SEO). You will learn how to define catalog URLs per catalog. Throughout this article, we will cover the basics on how to set up a multidomain setup. Additional tasks required to complete a production-like setup are out of the scope of this article. Creating a root catalog The first thing that we need to start with when setting up a vanilla Magento 2 website is defining our website, store, and store view structure. So what is the difference between website, store, and store view, and why is it important: A website is the top-level container and most important of the three. It is the parent level of the entire store and used, for example, to define domain names, different shipping methods, payment options, customers, orders, and so on. Stores can be used to define, for example, different store views with the same information. A store is always connected to a root catalog that holds all the categories and subcategories. One website can manage multiple stores, and every store has a different root catalog. When using multiple stores, it is not possible to share one basket. The main reason for this has to do with the configuration setup where shipping, catalog, customer, inventory, taxes, and payment settings are not sharable between different sites. Store views is the lowest level and mostly used to handle different localizations. Every store view can be set with a different language. Besides using store views just for localizations, it can also be used for Business to Business (B2B), hidden private sales pages (with noindex and nofollow), and so on. The option where we use the base link URL, for example, (yourdomain.com/myhiddenpage) is easy to set up. The website, store, and store view structure is shown in the following image: Getting ready For this recipe, we will use a Droplet created at DigitalOcean, https://www.digitalocean.com/. We will be using NGINX, PHP-FPM, and a Composer-based setup including Magento 2 preinstalled. No other prerequisites are required. How to do it... 
For the purpose of this recipe, let's assume that we need to create a multi-website setup including three domains (yourdomain.com, yourdomain.de, and yourdomain.fr) and separate root catalogs. The following steps will guide you through this: First, we need to update our NGINX. We need to configure the additional domains before we can connect them to Magento. Make sure that all domain names are connected to your server and DNS is configured correctly. Go to /etc/nginx/conf.d, open the default.conf file, and include the following content at the top of your file: map $http_host $magecode { hostnames; default base; yourdomain.de de; yourdomain.fr fr; } Your configuration should look like this now: map $http_host $magecode { hostnames; default base; yourdomain.de de; yourdomain.fr fr; } upstream fastcgi_backend { server 127.0.0.1:9000; } server { listen 80; listen 443 ssl http2; server_name yourdomain.com; set $MAGE_ROOT /var/www/html; set $MAGE_MODE developer; ssl_certificate /etc/ssl/yourdomain-com.cert; ssl_certificate_key /etc/ssl/yourdomain-com.key; include /var/www/html/nginx.conf.sample; access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; location ~ /\.ht { deny all; } } Now let's go to the Magento 2 configuration file in /var/www/html/ and open the nginx.conf.sample file. Go to the bottom and look for the following: location ~ (index|get|static|report|404|503)\.php$ Now we add the following lines to the file under fastcgi_pass   fastcgi_backend;: fastcgi_param MAGE_RUN_TYPE website; fastcgi_param MAGE_RUN_CODE $magecode; Your configuration should look like this now (this is only a small section of the bottom section): location ~ (index|get|static|report|404|503)\.php$ { try_files $uri =404; fastcgi_pass fastcgi_backend; fastcgi_param MAGE_RUN_TYPE website; fastcgi_param MAGE_RUN_CODE $magecode; fastcgi_param PHP_FLAG "session.auto_start=off \n suhosin.session.cryptua=off"; fastcgi_param PHP_VALUE "memory_limit=256M \n max_execution_time=600"; fastcgi_read_timeout 600s; fastcgi_connect_timeout 600s; fastcgi_param MAGE_MODE $MAGE_MODE; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } The current setup is using the MAGE_RUN_TYPE website variable. You may change website to store depending on your setup preferences. When changing the variable, you need your default.conf mapping codes as well. Now, all you have to do is restart NGINX and PHP-FPM to use your new settings. Run the following command: service nginx restart && service php-fpm restart Before we continue we need to check whether our web server is serving the correct codes. Run the following command in the Magento 2 web directory: var/www/html/pub echo "<?php header("Content-type: text/plain"); print_r($_SERVER); ?>" > magecode.php Don't forget to update your nginx.conf.sample file with the new magecode code. It's located at the bottom of your file and should look like this: location ~ (index|get|static|report|404|503|magecode)\.php$ { Restart NGINX and open the file in your browser. The output should look as follows. As you can see, the created MAGE_RUN variables are available. Congratulations, you just finished configuring NGINX including additional domains. Now let's continue connecting them in Magento 2. Now log in to the backend and navigate to Stores | All Stores. By default, Magento 2 has one Website, Store, and Store View setup. 
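A quick aside on the verification step above: the echo one-liner nests double quotes inside double quotes, so most shells will mangle it. It is easier to create pub/magecode.php by hand. The header() and print_r() lines are exactly what the recipe describes; the short foreach summary is an extra convenience added here, not part of the book's script:

```php
<?php
// pub/magecode.php - temporary diagnostic file; delete it once the check is done.
// It prints the server variables so you can confirm that NGINX passes the
// MAGE_RUN_TYPE and MAGE_RUN_CODE values mapped in default.conf.
header('Content-Type: text/plain');

// Convenience summary of the values we actually care about.
foreach (['MAGE_RUN_TYPE', 'MAGE_RUN_CODE', 'HTTP_HOST'] as $key) {
    echo $key . ': ' . (isset($_SERVER[$key]) ? $_SERVER[$key] : '(not set)') . PHP_EOL;
}

// Full dump, as in the recipe.
print_r($_SERVER);
```

Request the file once per domain (for example, http://yourdomain.de/magecode.php); when each host reports the expected run code, remove the file and continue with the store configuration that follows.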
Now click on Create Website and commit the following details: Name My German Website Code de Next, click on Create Store and commit the following details: Web site My German Website Name My German Website Root Category Default Category (we will change this later)  Next, click on Create Store View and commit the following details: Store My German Website Name German Code de Status Enabled  Continue the same step for the French domain. Make sure that the Code in Website and Store View is fr. The next important step is connecting the websites with the domain name. Navigate to Stores | Configuration | Web | Base URLs. Change the Store View scope at the top to My German Website. You will be prompted when switching; press ok to continue. Now, unset the checkbox called Use Default from Base URL and Base Link URL and commit your domain name. Save and continue the same procedure for the other website. The output should look like this: Save your entire configuration and clear your cache. Now go to Products | Categories and click on Add Root Category with the following data: Name Root German Is Active Yes Page Title My German Website Continue the same step for the French domain. You may add additional information here but it is not needed. Changing the current Root Category called Default Category to Root English is also optional but advised. Save your configuration, go to Stores | All Stores, and change all of the stores to the appropriate Root Catalog that we just created. Every Root Category should now have a dedicated Root Catalog. Congratulations, you just finished configuring Magento 2 including additional domains and dedicated Root Categories. Now let's open a browser and surf to your created domain names: yourdomain.com, yourdomain.de, and yourdomain.fr. How it works… Let's recap and find out what we did throughout this recipe. In steps 1 through 11, we created a multistore setup for .com, .de, and .fr domains using a separate Root Catalog. In steps 1 through 4, we configured the domain mapping in the NGINX default.conf file. Then, we added the fastcgi_param MAGE_RUN code to the nginx.conf.sample file, which will manage what website or store view to request within Magento. In step 6, we used an easy test method to check whether all domains run the correct MAGE_RUN code. In steps 7 through 9, we configured the website, store, and store view name and code for the given domain names. In step 10, we created additional Root Catalogs for the remaining German and French stores. They are then connected to the previously created store configuration. All stores have their own Root Catalog now. There's more… Are you able to buy additional domain names but like to try setting up a multistore? Here are some tips to create one. Depending on whether you are using Windows, Mac OS, or Linux, the following options apply: Windows: Go to C:\Windows\System32\drivers\etc, open up the hosts file as an administrator, and add the following: (Change the IP and domain name accordingly.) 123.456.789.0 yourdomain.de 123.456.789.0 yourdomain.fr 123.456.789.0 www.yourdomain.de 123.456.789.0 www.yourdomain.fr Save the file and click on the Start button; then search for cmd.exe and commit the following: ipconfig /flushdns Mac OS: Go to the /etc/ directory, open the hosts file as a super user, and add the following: (Change the IP and domain name accordingly.) 
123.456.789.0 yourdomain.de 123.456.789.0 yourdomain.fr 123.456.789.0 www.yourdomain.de 123.456.789.0 www.yourdomain.fr Save the file and run the following command on the shell: dscacheutil -flushcache Depending on your Mac version, check out the different commands here: http://www.hongkiat.com/blog/how-to-clear-flush-dns-cache-in-os-x-yosemite/ Linux: Go to the /etc/ directory, open the hosts file as a root user, and add the following: (Change the IP and domain name accordingly.) 123.456.789.0 yourdomain.de 123.456.789.0 yourdomain.fr 123.456.789.0 www.yourdomain.de 123.456.789.0 www.yourdomain.fr Save the file and run the following command on the shell: service nscd restart Depending on your Linux version, check out the different commands here: http://www.cyberciti.biz/faq/rhel-debian-ubuntu-flush-clear-dns-cache/ Open your browser and surf to the custom-made domains. These domains work only on your PC. You can copy these IP and domain names on as many PCs as you prefer. This method also works great when you are developing or testing and your production domain is not available on your development environment. Creating subcategories After creating the foundation of the website, we need to set up a catalog structure. Setting up a catalog structure is not difficult, but needs to be thought out well. Some websites have an easy setup using two levels, while others sometimes use five or more subcategories. Always keep in mind the user experience; your customer needs to crawl the pages easily. Keep it simple! Getting ready For this recipe, we will use a Droplet created at DigitalOcean, https://www.digitalocean.com/. We will be using NGINX, PHP-FPM, and a Composer-based setup including Magento 2 preinstalled. No other prerequisites are required. How to do it... For the purpose of this recipe, let's assume that we need to set up a catalog including subcategories. The following steps will guide you through this: First, log in to the backend of Magento 2 and go to Products | Categories. As we have already created Root Catalogs, we start with using the Root English catalog first. Click on the Root English catalog on the left and then select the Add Subcategory button above the menu. Now commit the following and repeat all steps again for the other Root Catalogs: Name Shoes (Schuhe) (Chaussures) Is Active Yes Page Title Shoes (Schuhe) (Chaussures) Name Clothes (Kleider) (Vêtements) Is Active Yes Page Title Clothes (Kleider) (Vêtements) As we have created the first level of our catalog, we can continue with the second level. Now click on the first level that you need to extend with a subcategory and select the Add Subcategory button. Now commit the following and repeat all steps again for the other Root Catalogs: Name Men (Männer) (Hommes) Is Active Yes Page Title Men (Männer) (Hommes) Name Women (Frau) (Femmes) Is Active Yes Page Title Women (Frau) (Femmes) Congratulations, you just finished configuring subcategories in Magento 2. Now let's open a browser and surf to your created domain names: yourdomain.com, yourdomain.de, and yourdomain.fr. Your categories should now look as follows: How it works… Let's recap and find out what we did throughout this recipe. In steps 1 through 4, we created subcategories for the English, German, and French stores. In this recipe, we created a dedicated Root Catalog for every website. This way, every store can be configured using their own tax and shipping rules. There's more… In our example, we only submitted Name, Is Active, and Page Title. 
You may continue to commit the Description, Image, Meta Keywords, and Meta Description fields. By default, the URL key is the same as the Name field; you can change this depending on your SEO needs. Every category or subcategory has a default page layout defined by the theme. You may need to override this. Go to the Custom Design tab and click the drop-down menu of Page Layout. We can choose from the following options: 1 column, 2 columns with left bar, 2 columns with right bar, 3 columns, or Empty. Managing an attribute set Every product has a unique DNA; some products such as shoes could have different colors, brands, and sizes, while a snowboard could have weight, length, torsion, manufacture, and style. Setting up a website with all the attributes does not make sense. Depending on the products that you sell, you should create attributes that apply per website. When creating products for your website, attributes are the key elements and need to be thought through. What and how many attributes do I need? How many values does one need? All types of questions that could have a great impact on your website and, not to forget, the performance of it. Creating an attribute such as color and having 100 K of different key values stored is not improving your overall speed and user experience. Always think things through. After creating the attributes, we combine them in attribute sets that can be picked when starting to create a product. Some attributes can be used more than once, while others are unique to one product of an attribute set. Getting ready For this recipe, we will use a Droplet created at DigitalOcean, https://www.digitalocean.com/. We will be using NGINX, PHP-FPM, and a Composer-based setup including Magento 2 preinstalled. No other prerequisites are required. How to do it... For the purpose of this recipe, let's assume that we need to create product attributes and sets. The following steps will guide you through this: First, log in to the backend of Magento 2 and go to Stores | Products. As we are using a vanilla setup, only system attributes and one attribute set is installed. Now click on Add New Attribute and commit the following data in the Properties tab: Attribute Properties Default label shoe_size Catalog Input Type for Store Owners Dropdown Values Required No Manage Options (values of your attribute) English Admin French German 4 4 35 35 4.5 4.5 35 35 5 5 35-36 35-36 5.5 5.5 36 36 6 6 36-37 36-37 6.5 6.5 37 37 7 7 37-38 37-38 7.5 7.5 38 38 8 8 38-39 38-39 8.5 8.5 39 39 Advanced Attribute Properties Scope Global Unique Value No Add to Column Options Yes Use in Filer Options Yes As we have already set up a multi-website that sells shoes and clothes, we stick with this. The attributes that we need to sell shoes are: shoe_size, shoe_type, width, color, gender, and occasion. Continue with the rest of the chart accordingly (http://www.shoesizingcharts.com). Click on Save and Continue Edit now and continue on the Manage Labels tab with the following information: Manage Titles (Size, Color, etc.) 
English French German Size Taille Größe Click on Save and Continue Edit now and continue on the Storefront Properties tab with the following information: Storefront Properties Use in Search No Comparable in Storefront No Use in Layered Navigation Filterable (with result) Use in Search Result Layered Navigation No Position 0 Use for Promo Rule Conditions No Allow HTML Tags on Storefront Yes Visible on Catalog Pages on Storefront Yes Used in Product Listing No Used for Sorting in Product Listing No Click on Save Attribute now and clear the cache. Depending on whether you have set up the index management accordingly through the Magento 2 cronjob, it will automatically update the newly created attribute. The additional shoe_type, width, color, gender, and occasion attributes configuration can be downloaded at https://github.com/mage2cookbook/chapter4. After creating all of the attributes, we combine them in an attribute set called Shoes. Go to Stores | Attribute Set, click on Add Attribute Set, and commit the following data: Edit Attribute Set Name Name Shoes Based On Default Now click on the Add New button in the Groups section and commit the group name called Shoes. The newly created group is now located at the bottom of the list. You may need to scroll down before you see it. It is possible to drag and drop the group higher up in the list. Now drag and drop the created attributes, shoe_size, shoe_type, width, color, gender, and occasion to the group and save the configuration. The notice of the cron job is automatically updated depending on your settings. Congratulations, you just finished creating attributes and attribute sets in Magento 2. This can be seen in the following screenshot: How it works… Let's recap and find out what we did throughout this recipe. In steps 1 through 10, we created attributes that will be used in an attribute set. The attributes and sets are the fundamentals for every website. In steps 1 through 5, we created multiple attributes to define all details about the shoes and clothes that we would like to sell. Some attributes are later used as configurable values on the frontend while others only indicate the gender or occasion. In steps 6 through 9, we connected the attributes to the related attribute set so that when creating a product, all correct elements are available. There's more… After creating the attribute set for Shoes, we continue to create an attribute set for Clothes. Use the following attributes to create the set: color, occasion, apparel_type, sleeve_length, fit, size, length, and gender. Follow the same steps as we did before to create a new attribute set. You may reuse the attributes, color, occasion, and gender. All detailed attributes can be found at https://github.com/mage2cookbook/chapter4#clothes-set. The following is the screenshot of the Clothes attribute set: Summary In this article, you learned how to create a Root Catalog, subcategories, and manage attribute sets. For more information on Magento 2, Refer the following books by Packt Publishing: Magento 2 Development Cookbook (https://www.packtpub.com/web-development/magento-2-development-cookbook) Magento 2 Developer's Guide (https://www.packtpub.com/web-development/magento-2-developers-guide) Resources for Article: Further resources on this subject: Social Media in Magento [article] Upgrading from Magneto 1 [article] Social Media and Magento [article]
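The attribute recipe above works entirely in the admin. Purely as an illustrative companion (not the book's method), the same shoe_size dropdown can be created in code through Magento 2's EAV setup API in a module's data install script. The Packt\Shoes module name and the trimmed option list are hypothetical; the class names and option keys follow the core \Magento\Eav\Setup\EavSetup API.

```php
<?php
namespace Packt\Shoes\Setup;

use Magento\Catalog\Model\Product;
use Magento\Eav\Model\Entity\Attribute\ScopedAttributeInterface;
use Magento\Eav\Setup\EavSetupFactory;
use Magento\Framework\Setup\InstallDataInterface;
use Magento\Framework\Setup\ModuleContextInterface;
use Magento\Framework\Setup\ModuleDataSetupInterface;

class InstallData implements InstallDataInterface
{
    /** @var EavSetupFactory */
    private $eavSetupFactory;

    public function __construct(EavSetupFactory $eavSetupFactory)
    {
        $this->eavSetupFactory = $eavSetupFactory;
    }

    public function install(ModuleDataSetupInterface $setup, ModuleContextInterface $context)
    {
        $eavSetup = $this->eavSetupFactory->create(['setup' => $setup]);

        // A dropdown attribute comparable to the shoe_size example in the recipe.
        $eavSetup->addAttribute(Product::ENTITY, 'shoe_size', [
            'type'         => 'int',
            'label'        => 'Shoe Size',
            'input'        => 'select',
            'required'     => false,
            'global'       => ScopedAttributeInterface::SCOPE_GLOBAL, // "Scope: Global"
            'filterable'   => true,                                   // "Use in Filter Options: Yes"
            'user_defined' => true,
            'option'       => ['values' => ['4', '4.5', '5', '5.5', '6']],
        ]);
    }
}
```

After the attribute exists, it still has to be assigned to the Shoes attribute set, exactly as described in the recipe.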


Magento Theme Development

Packt
24 Feb 2016
7 min read
In this article by Fernando J. Miguel, author of the book Magento 2 Development Essentials, we will learn the basics of theme development. Magento can be customized as per your needs because it is based on the Zend framework, adopting the Model-View-Controller (MVC) architecture as a software design pattern. When planning to create your own theme, the Magento theme process flow becomes a subject that needs to be carefully studied. Let's focus on the concepts that help you create your own theme. (For more resources related to this topic, see here.) The Magento base theme The Magento Community Edition (CE) version 1.9 comes with a new theme named rwd that implements the Responsive Web Design (RWD) practices. Magento CE's responsive theme uses a number of new technologies as follows: Sass/Compass: This is a CSS precompiler that provides a reusable CSS that can even be organized well. jQuery: This is used for customization of JavaScript in the responsive theme. jQuery operates in the noConflict() mode, so it doesn't conflict with Magento's existing JavaScript library. Basically, the folders that contain this theme are as follows: app/design/frontend/rwd skin/frontend/rwd The following image represents the folder structure: As you can see, all the files of the rwd theme are included in the app/design/frontend and skin/frontend folders: app/design/frontend: This folder stores all the .phtml visual files and .xml configurations files of all the themes. skin/frontend: This folder stores all JavaScript, CSS, and image files from their respective app/design/frontend themes folders. Inside these folders, you can see another important folder called base. The rwd theme uses some base theme features to be functional. How is it possible? Logically, Magento has distinct folders for every theme, but Magento is very smart to reuse code. Magento takes advantage of fall-back system. Let's check how it works. The fall-back system The frontend of Magento allows the designers to create new themes based on the basic theme, reusing the main code without changing its structure. The fall-back system allows us to create only the files that are necessary for the customization. To create the customization files, we have the following options: Create a new theme directory and write the entire new code Copy the files from the base theme and edit them as you wish The second option could be more productive for study purposes. You will learn basic structure by exercising the code edit. For example, let's say you want to change the header.phtml file. You can copy the header.html file from the app/design/frontend/base/default/template/page/html path to the app/design/frontend/custom_package/custom_theme/template/page/html path. In this example, if you activate your custom_theme on Magento admin panel, your custom_theme inherits all the structure from base theme, and applies your custom header.phtml on the theme. Magento packages and design themes Magento has the option to create design packages and themes as you saw on the previous example of custom_theme. This is a smart functionality because on same packages you can create more than one theme. Now, let's take a deep look at the main folders that manage the theme structure in Magento. The app/design structure In the app/design structure, we have the following folders: The folder details are as follows: adminhtml: In this folder, Magento keeps all the layout configuration files and .phtml structure of admin area. 
frontend: In this folder, Magento keeps all the theme's folders and their respective .phtml structure of site frontend. install: This folder stores all the files of installation Magento screen. The layout folder Let's take a look at the rwd theme folder: As you can see, the rwd is a theme folder and has a template folder called default. In Magento, you can create as many template folders as you wish. The layout folders allow you to define the structure of the Magento pages through the XML files. The layout XML files has the power to manage the behavior of your .phtml file: you can incorporate CSS or JavaScript to be loaded on specific pages. Every page on Magento is defined by a handle. A handle is a reference name that Magento uses to refer to a particular page. For example, the <cms_page> handle is used to control the layout of the pages in your Magento. In Magento, we have two main type of handles: Default handles: These manage the whole site Non-default handles: These manage specific parts of the site In the rwd theme, the .xml files are located in app/design/frontend/rwd/default/layout. Let's take a look at an .xml layout file example: This piece of code belongs to the page.xml layout file. We can see the <default> handle defining the .css and .js files that will be loaded on the page. The page.xml file has the same name as its respective folder in app/design/frontend/rwd/default/template/page. This is an internal Magento control. Please keep this in mind: Magento works with a predefined naming file pattern. Keeping this in your mind can avoid unnecessary errors. The template folder The template folder, taking rwd as a reference, is located at app/design/frontend/rwd/default/template. Every subdirectory of template controls a specific page of Magento. The template files are the .phtml files, a mix of HTML and PHP, and they are the layout structure files. Let's take a look at a page/1column.phtml example: The locale folder The locale folder has all the specific translation of the theme. Let's imagine that you want to create a specific translation file for the rwd theme. You can create a locale file at app/design/frontend/rwd/locale/en_US/translate.csv. The locale folder structure basically has a folder of the language (en_US), and always has the translate.csv filename. The app/locale folder in Magento is the main translation folder. You can take a look at it to better understand. But the locale folder inside the theme folder has priority in Magento loading. For example, if you want to create a Brazilian version of the theme, you have to duplicate the translate.csv file from app/design/frontend/rwd/locale/en_US/ to app/design/frontend/rwd/locale/pt_BR/. This will be very useful to those who use the theme and will have to translate it in the future. Creating new entries in translate If you want to create a new entry in your translate.csv, first of all put this code in your PHTML file: <?php echo $this->__('Translate test'); ?> In CSV file, you can put the translation in this format: 'Translate test', 'Translate test'. The SKIN structure The skin folder basically has the css and js files and images of the theme, and is located in skin/frontend/rwd/default. Remember that Magento has a filename/folder naming pattern. The skin folder named rwd will work with rwd theme folder. If Magento has rwd as a main theme and is looking for an image that is not in the skin folder, Magento will search this image in skin/base folder. Remember also that Magento has a fall-back system. 
It is keeping its search in the main themes folder to find the correct file. Take advantage of this! CMS blocks and pages Magento has a flexible theme system. Beyond Magento code customization, the admin can create blocks and content on Magento admin panel. CMS (Content Management System) pages and blocks on Magento give you the power to embed HTML code in your page. Summary In this article, we covered the basic concepts of Magento theme. These may be used to change the display of the website or its functionality. These themes are interchangeable with Magento installations. Resources for Article: Further resources on this subject: Preparing and Configuring Your Magento Website [article] Introducing Magento Extension Development [article] Installing Magento [article]
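To tie the fall-back and translation ideas from the article above together, here is a hedged sketch of a template override: a trimmed, hypothetical header.phtml placed in app/design/frontend/custom_package/custom_theme/template/page/html/. The block calls ($this->__(), getChildHtml(), getSkinUrl()) are standard Magento 1 template methods; the markup itself is illustrative rather than the base theme's actual header.

```php
<?php /* app/design/frontend/custom_package/custom_theme/template/page/html/header.phtml */ ?>
<header class="page-header">
    <a class="logo" href="<?php echo $this->getUrl('') ?>">
        <!-- Served from skin/frontend/..., falling back to base if the file is missing -->
        <img src="<?php echo $this->getSkinUrl('images/logo.png') ?>"
             alt="<?php echo $this->__('Store logo') ?>" />
    </a>

    <!-- Strings wrapped in $this->__() can be overridden in
         app/design/frontend/<package>/locale/<lang>/translate.csv -->
    <p class="welcome"><?php echo $this->__('Translate test') ?></p>

    <!-- Child blocks defined in the layout XML keep rendering thanks to the fall-back system -->
    <?php echo $this->getChildHtml('topSearch') ?>
    <?php echo $this->getChildHtml('topLinks') ?>
</header>
```

Because only this one file is copied, everything else continues to fall back to the base theme, which is exactly the behavior the article describes.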


Running Simpletest and PHPUnit

Packt
09 Feb 2016
4 min read
In this article by Matt Glaman, the author of the book Drupal 8 Development Cookbook, we will kick off with an introduction to getting a Drupal 8 site installed and running. (For more resources related to this topic, see here.)

Running Simpletest and PHPUnit

Drupal 8 ships with two testing suites. Previously, Drupal only supported Simpletest; now there are PHPUnit tests as well. In the official change record, PHPUnit was added to provide testing without requiring a full Drupal bootstrap, which occurs with each Simpletest test. Read the change record here: https://www.drupal.org/node/2012184.

Getting ready

Currently, core comes with its Composer dependencies prepackaged, and no extra steps need to be taken to run PHPUnit. This article will demonstrate how to run tests the same way that the QA testbot on Drupal.org does. The process of managing Composer dependencies may change, but is currently postponed due to Drupal.org's testing and packaging infrastructure. Read more here: https://www.drupal.org/node/1475510.

How to do it…

First, enable the Simpletest module. Even though you might only want to run PHPUnit, this is a soft dependency for running the test runner script. Open a command-line terminal, navigate to your Drupal installation directory, and run the following to execute all available PHPUnit tests:

php core/scripts/run-tests.sh PHPUnit

Running Simpletest tests requires executing the same script; however, instead of passing PHPUnit as the argument, you must define the URL option and the tests option:

php core/scripts/run-tests.sh --url http://localhost --all

Review the test output!

How it works…

The run-tests.sh script has shipped with Drupal since 2008, when it was named run-functional-tests.php. The command interacts with the test suites in Drupal to run all or specific tests and sets up other configuration items. We will highlight some of the useful options below:
--help: This displays the items covered in the following bullets
--list: This displays the available test groups that can be run
--url: This is required unless the Drupal site is accessible through http://localhost:80
--sqlite: This allows you to run Simpletest without having to have Drupal installed
--concurrency: This allows you to define how many tests run in parallel

There's more…

Is run-tests a shell script? The run-tests.sh script isn't actually a shell script. It is a PHP script, which is why you must execute it with PHP. In fact, within core/scripts each file is a PHP script meant to be executed from the command line. These scripts are not intended to be run through a web server, which is one of the reasons for the .sh extension. There are issues with discovering PHP across platforms that prevent providing a shebang line to allow executing the file as a normal bash or bat script. For more info, view this Drupal.org issue: https://www.drupal.org/node/655178.

Running Simpletest without Drupal installed: with Drupal 8, Simpletest can be run off SQLite and no longer requires an installed database. This can be accomplished by passing the sqlite and dburl options to the run-tests.sh script. This requires the PHP SQLite extension to be installed. Here is an example adapted from the DrupalCI test runner for Drupal core:

php core/scripts/run-tests.sh --sqlite /tmp/.ht.sqlite --die-on-fail --dburl sqlite://tmp/.ht.sqlite --all

Combined with the built-in PHP web server for debugging, you can run Simpletest without a full-fledged environment. A minimal example of a test that this runner can pick up is sketched below.
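Here is that minimal sketch, assuming a hypothetical custom module named mymodule. PHPUnit unit tests live in the module's tests/src/Unit directory and extend Drupal\Tests\UnitTestCase, which is why they run without a database or even an installed site:

```php
<?php

namespace Drupal\Tests\mymodule\Unit;

use Drupal\Tests\UnitTestCase;

/**
 * A trivial unit test; no Drupal bootstrap is required to run it.
 *
 * @group mymodule
 */
class ExampleTest extends UnitTestCase {

  /**
   * Demonstrates a plain assertion picked up by the PHPUnit run.
   */
  public function testTrimmedMachineName() {
    $this->assertSame('drupal', trim('  drupal  '));
  }

}
```

Saved as modules/custom/mymodule/tests/src/Unit/ExampleTest.php, it should be picked up when you execute the PHPUnit command shown earlier.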
Running specific tests

Each example thus far has used the all option to run every Simpletest available. There are various ways to run specific tests:
--module: This allows you to run all the tests of a specific module
--class: This runs a specific test class, identified by its fully namespaced path
--file: This runs tests from a specified file
--directory: This runs tests within a specified directory

Previously in Drupal, tests were grouped inside module.test files, which is where the file option derives from. Drupal 8 utilizes the PSR-4 autoloading method and requires one class per file.

DrupalCI

With Drupal 8 came a new initiative to upgrade the testing infrastructure on Drupal.org. The outcome was DrupalCI. DrupalCI is open source and can be downloaded and run locally. The project page for DrupalCI is https://www.drupal.org/project/drupalci. The testbot utilizes Docker and can be downloaded locally to run tests. The project ships with a Vagrantfile to allow it to be run within a virtual machine or locally. Learn more on the testbot's project page: https://www.drupal.org/project/drupalci_testbot.

See also

PHPUnit manual: https://phpunit.de/manual/4.8/en/writing-tests-for-phpunit.html
Drupal PHPUnit handbook: https://drupal.org/phpunit
Simpletest from the command line: https://www.drupal.org/node/645286

Resources for Article: Further resources on this subject: Installing Drupal 8 [article] What is Drupal? [article] Drupal 7 Social Networking: Managing Users and Profiles [article]
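For contrast with the PHPUnit example above, a Simpletest-style functional test (the kind that needs the --url option and a full bootstrap) extends Drupal\simpletest\WebTestBase and lives in the module's src/Tests directory. The mymodule namespace is again hypothetical:

```php
<?php

namespace Drupal\mymodule\Tests;

use Drupal\simpletest\WebTestBase;

/**
 * Verifies that the front page responds for anonymous users.
 *
 * @group mymodule
 */
class FrontPageTest extends WebTestBase {

  /**
   * Modules to enable for this test.
   *
   * @var array
   */
  public static $modules = ['node'];

  public function testFrontPageLoads() {
    // WebTestBase installs a throwaway site, so a full bootstrap happens here.
    $this->drupalGet('<front>');
    $this->assertResponse(200);
  }

}
```

Because each such test installs its own temporary site, functional tests are much slower than unit tests, which is precisely the trade-off mentioned in the change record at the start of the article.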


E-commerce with MEAN

Packt
05 Nov 2015
8 min read
These days e-commerce platforms are widely available. However, as common as they might be, there are instances that after investing a significant amount of time learning how to use a specific tool you might realize that it can not fit your unique e-commerce needs as it promised. Hence, a great advantage of building your own application with an agile framework is that you can quickly meet your immediate and future needs with a system that you fully understand. Adrian Mejia Rosario, the author of the book, Building an E-Commerce Application with MEAN, shows us how MEAN stack (MongoDB, ExpressJS, AngularJS and NodeJS) is a killer JavaScript and full-stack combination. It provides agile development without compromising on performance and scalability. It is ideal for the purpose of building responsive applications with a large user base such as e-commerce applications. Let's have a look at a project using MEAN. (For more resources related to this topic, see here.) Understanding the project structure The applications built with the angular-fullstack generator have many files and directories. Some code goes in the client, other executes in the backend and another portion is just needed for development cases such as the tests suites. It’s important to understand the layout to keep the code organized. The Yeoman generators are time savers! They are created and maintained by the community following the current best practices. It creates many directories and a lot of boilerplate code to get you started. The numbers of unknown files in there might be overwhelming at first. On reviewing the directory structure created, we see that there are three main directories: client, e2e and server: The client folder will contain the AngularJS files and assets. The server directory will contain the NodeJS files, which handles ExpressJS and MongoDB. Finally, the e2e files will contain the AngularJS end-to-end tests. File Structure This is the overview of the file structure of this project: meanshop ├── client │ ├── app - App specific components │ ├── assets - Custom assets: fonts, images, etc… │ └── components - Non-app specific/reusable components │ ├── e2e - Protractor end to end tests │ └── server ├── api - Apps server API ├── auth - Authentication handlers ├── components - App-wide/reusable components ├── config - App configuration │ └── local.env.js - Environment variables │ └── environment - Node environment configuration └── views - Server rendered views Components You might be already familiar with a number of tools used in this project. If that’s not the case, you can read the brief description here. Testing AngularJS comes with a default test runner called Karma and we are going going to leverage its default choices: Karma: JavaScript unit test runner. Jasmine: It's a BDD framework to test JavaScript code. It is executed with Karma. Protractor: They are end-to-end tests for AngularJS. These are the highest levels of testing that run in the browser and simulate user interactions with the app. Tools The following are some of the tools/libraries that we are going to use in order to increase our productivity: GruntJS: It's a tool that serves to automate repetitive tasks, such as a CSS/JS minification, compilation, unit testing, and JS linting. Yeoman (yo): It's a CLI tool to scaffold web projects., It automates directory creation and file creation through generators and also provides command lines for common tasks. 
Travis CI: Travis CI is a continuous integration tool that runs your test suites every time you commit to the repository. EditorConfig: EditorConfig is an IDE plugin that loads the configuration from a file .editorconfig. For example, you can set indent_size = 2 indent with spaces, tabs, and so on. It’s a time saver and helps maintain consistency across multiple IDEs/teams. SocketIO: It's a library that enables real-time bidirectional communication between the server and the client. Bootstrap: It's a frontend framework for web development. We are going to use it to build the theme thought-out for this project. AngularJS full-stack: It's a generator for Yeoman that will provide useful command lines to quickly generate server/client code and deploy it to Heroku or OpenShift. BabelJS: It's a js-tojs compiler that allows to use features from the next generation JavaScript (ECMAScript 6), currently without waiting for browser support. Git: It's a distributed code versioning control system. Package managers We have package managers for our third-party backend and frontend modules. They are as follows: NPM: It is the default package manager for NodeJS. Bower: It is the frontend package manager that can be used to handle versions and dependencies of libraries and assets used in a web project. The file bower.json contains the packages and versions to install and the file .bowerrc contains the path where those packages are to be installed. The default directory is ./bower_components. Bower packages If you have followed the exact steps to scaffold our app you will have the following frontend components installed: angular angular-cookies angular-mocks angular-resource angular-sanitize angular-scenario angular-ui-router angular-socket-io angular-bootstrap bootstrap es5-shim font-awesome json3 jquery lodash Previewing the final e-commerce app Let’s take a pause from the terminal. In any project, before starting coding, we need to spend some time planning and visualizing what we are aiming for. That’s exactly what we are going to do, draw some wireframes that walk us through the app. Our e-commerce app, MEANshop, will have three main sections: Homepage Marketplace Back-office Homepage The home page will contain featured products, navigation, menus, and basic information, as you can see in the following image: Figure 2 - Wireframe of the homepage Marketplace This section will show all the products, categories, and search results. Figure 3 - Wireframe of the products page Back-office You need to be a registered user to access the back office section, as shown in the following figure:   Figure 4 - Wireframe of the login page After you login, it will present you with different options depending on the role. If you are the seller, you can create new products, such as the following: Figure 5 - Wireframe of the Product creation page If you are an admin, you can do everything that a seller does (create products) plus you can manage all the users and delete/edit products. Understanding requirements for e-commerce applications There’s no better way than to learn new concepts and technologies while developing something useful with it. This is why we are building a real-time e-commerce application from scratch. However, there are many kinds of e-commerce apps. In the following sections we will delimit what we are going to do. Minimum viable product for an e-commerce site Even the largest applications that we see today started small and grew their way up. 
The minimum viable product (MVP) is strictly the minimum that an application needs to work on. In the e-commerce example, it will be: Add products with title, price, description, photo, and quantity. Guest checkout page for products. One payment integration (for example, Paypal). This is strictly the minimum requirement to get an e-commerce site working. We are going to start with these but by no means will we stop there. We will keep adding features as we go and build a framework that will allow us to extend the functionality with high quality. Defining the requirements We are going to capture our requirements for the e-commerce application with user stories. A user story is a brief description of a feature told from the perspective of a user where he expresses his desire and benefit in the following format: As a <role>, I want <desire> [so that <benefit>] User stories and many other concepts were introduced with the Agile Manifesto. Learn more at https://en.wikipedia.org/wiki/Agile_software_development Here are the features that we are planning to develop through this book that have been captured as user stories: As a seller, I want to create products. As a user, I want to see all published products and its details when I click on them. As a user, I want to search for a product so that I can find what I’m looking for quickly. As a user, I want to have a category navigation menu so that I can narrow down the search results. As a user, I want to have real-time information so that I can know immediately if a product just got sold-out or became available. As a user, I want to check out products as a guest user so that I can quickly purchase an item without registering. As a user, I want to create an account so that I can save my shipping addresses, see my purchase history, and sell products. As an admin, I want to manage user roles so that I can create new admins, sellers, and remove seller permission. As an admin, I want to manage all the products so that I can ban them if they are not appropriate. As an admin, I want to see a summary of the activities and order status. All these stories might seem verbose but they are useful in capturing requirements in a consistent way. They are also handy to develop test cases against it. Summary Now that we have a gist of an e-commerce app with MEAN, lets build a full-fledged e-commerce project with Building an E-Commerce Application with MEAN. Resources for Article:   Further resources on this subject: Introduction to Couchbase [article] Protecting Your Bitcoins [article] DynamoDB Best Practices [article]


Upgrading from Magento 1

Packt
03 Nov 2015
4 min read
In Magento 2 Development Cookbook by Bart Delvaux, the overarching goal is to provide you with a wide range of techniques to modify and extend the functionality of your online store. It contains easy-to-understand recipes starting with the basics and moving on to cover advanced topics. Many recipes work with code examples that can be downloaded from the book's website. (For more resources related to this topic, see here.)

Why Magento 2

Solve common problems encountered while extending your Magento 2 store to fit your business needs. Explore exciting and enhanced features of Magento 2 such as customizable security permissions, intelligent filtered search options, and easy third-party integration, among others. Learn to build and maintain a Magento 2 shop via a visual-based page editor and customize the look and feel using Magento 2 offerings on the go.

What this article covers

This article covers preparing an upgrade from Magento 1.

Preparing an upgrade from Magento 1

The differences between Magento 1 and Magento 2 are big. The code has a whole new structure with a lot of improvements, but there is one big disadvantage: what do you do if you want to upgrade your Magento 1 shop to a Magento 2 shop? Magento created an upgrade tool that migrates the data of the Magento 1 database to the right structure for a Magento 2 database. The custom modules in your Magento 1 shop will not work in Magento 2. It is possible that some of your modules will have a Magento 2 version and, depending on the module, the module author will have a migration tool to migrate the data that is in the module.

Getting ready

Before we get started, make sure you have an empty (without sample data) Magento 2 installation with the same version as the migration tool that is available at https://github.com/magento/data-migration-tool-ce.

How to do it

In your Magento 2 installation (with the same version as the migration tool), run the following commands:

composer config repositories.data-migration-tool git https://github.com/magento/data-migration-tool-ce
composer require magento/data-migration-tool:dev-master

Install Magento 2 with an empty database by running the installer. Make sure you configure it with the right time zone and currencies. When these steps are done, you can test the tool by running the following command:

php vendor/magento/data-migration-tool/bin/migrate

This command will print the usage of the command. The next thing is creating the configuration files. The examples of the configuration files are in the following folder: vendor/magento/data-migration-tool/etc/<version>. We can create a copy of this folder where we can set our custom configuration values. For a Magento 1.9 installation, we have to run the following cp command:

cp -R vendor/magento/data-migration-tool/etc/ce-to-ce/1.9.1.0/ vendor/magento/data-migration-tool/etc/ce-to-ce/packt-migration

Open the vendor/magento/data-migration-tool/etc/ce-to-ce/packt-migration/config.xml.dist file and search for the source/database and destination/database tags.
Change the values of these database settings to your own database settings, like in the following code:

<source>
    <database host="localhost" name="magento1" user="root"/>
</source>
<destination>
    <database host="localhost" name="magento2_migration" user="root"/>
</destination>

Rename that file to config.xml with the following command:

mv vendor/magento/data-migration-tool/etc/ce-to-ce/packt-migration/config.xml.dist vendor/magento/data-migration-tool/etc/ce-to-ce/packt-migration/config.xml

How it works

By adding a Composer dependency, we installed the data migration tool for Magento 2 in the codebase. This migration tool is a PHP command-line script that will handle the migration steps from a Magento 1 shop. In the etc folder of the migration module, there is an example configuration of an empty Magento 1.9 shop. If you want to migrate an existing Magento 1 shop, you have to customize these configuration files so that they match your preferred state. In the next recipe, we will learn how we can use the script to start the migration.

Who this book is written for

This book is packed with a wide range of techniques to modify and extend the functionality of your online store. It contains easy-to-understand recipes starting with the basics and moving on to cover advanced topics. Many recipes work with code examples that can be downloaded from the book's website.

Summary

In this article, we learned how to prepare an upgrade from Magento 1. Read Magento 2 Development Cookbook to gain detailed knowledge of Magento 2 workflows, explore use cases for advanced features, craft well-thought-out orchestrations, troubleshoot unexpected behavior, and extend Magento 2 through customizations. Other related titles are: Magento: Beginner's Guide - Second Edition, Mastering Magento, Magento: Beginner's Guide, and Mastering Magento Theme Design.

Resources for Article: Further resources on this subject: Creating a Responsive Magento Theme with Bootstrap 3 [article] Social Media and Magento [article] Optimizing Magento Performance — Using HHVM [article]
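Before kicking off a long migration run, it can help to confirm that the credentials you placed in config.xml actually connect. This is not part of the data migration tool; it is just a hedged convenience sketch using plain PDO, with the same hosts, database names, and users as the example configuration above:

```php
<?php
// check-db.php - sanity-check the <source> and <destination> settings from config.xml.
$connections = [
    'source (Magento 1)'      => ['dsn' => 'mysql:host=localhost;dbname=magento1', 'user' => 'root', 'pass' => ''],
    'destination (Magento 2)' => ['dsn' => 'mysql:host=localhost;dbname=magento2_migration', 'user' => 'root', 'pass' => ''],
];

foreach ($connections as $label => $db) {
    try {
        $pdo = new PDO($db['dsn'], $db['user'], $db['pass']);
        $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
        echo $label . ': OK, MySQL ' . $pdo->query('SELECT VERSION()')->fetchColumn() . PHP_EOL;
    } catch (PDOException $e) {
        echo $label . ': FAILED - ' . $e->getMessage() . PHP_EOL;
    }
}
```

If both connections report OK, you are ready to run the migrate script against the packt-migration configuration.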


Creating a Video Streaming Site

Packt
16 Sep 2015
16 min read
 In this article by Rachel McCollin, the author of WordPress 4.0 Site Blueprints Second Edition, you'll learn how to stream video from YouTube to your own video sharing site, meaning that you can add more than just the videos to your site and have complete control over how your videos are shown. We'll create a channel on YouTube and then set up a WordPress site with a theme and plugin to help us stream video from that channel WordPress is the world's most popular Content Management System (CMS) and you can use it to create any kind of site you or your clients need. Using free plugins and themes for WordPress, you can create a store, a social media site, a review site, a video site, a network of sites or a community site, and more. WordPress makes it easy for you to create a site that you can update and add to over time, letting you add posts, pages, and more without having to write code. WordPress makes your job of creating your own website simple and hassle-free! (For more resources related to this topic, see here.) Planning your video streaming site The first step is to plan how you want to use your video site. Ask yourself a few questions: Will I be streaming all my video from YouTube? Will I be uploading any video manually? Will I be streaming from multiple sources? What kind of design do I want? Will I include any other types of content on my site? How will I record and upload my videos? Who is my target audience and how will I reach them? Do I want to make money from my videos? How often will I create videos and what will my recording and editing process be? What software and hardware will I need for recording and editing videos? It's beyond the scope of this article to answer all of these questions, but it's worth taking some time before you start to consider how you're going to be using your video site, what you'll be adding to it, and what your objectives are. Streaming from YouTube or uploading videos direct? WordPress lets you upload your videos directly to your site using the Add Media button, the same button you use to insert images. This can seem like the simplest way of doing things as you only need to work in one place. However, I would strongly recommend using a third-party video service instead, for the following reasons: It saves on storage space in your site. It ensures your videos will play on any device people choose to view your site from. It keeps the formats your video is played in up to date so that you don't have to re-upload them when things change. It can have massive SEO benefits socially if you use YouTube. YouTube is owned by Google and has excellent search engine rankings. You'll find that videos streamed via YouTube get better Google rankings than any videos you upload directly to your site. In this article, the focus will be on creating a YouTube channel and streaming video from it to your website. We'll set things up so that when you add new videos to your channel, they'll be automatically streamed to your site. To do that, we'll use a plugin. Understanding copyright considerations Before you start uploading video to YouTube, you need to understand what you're allowed to add, and how copyright affects your videos. You can find plenty of information on YouTube's copyright rules and processes at https://www.youtube.com/yt/copyright/, but it can quite easily be summarized as this: if you created the video, or it was created by someone who has given you explicit permission to use it and publish it online, then you can upload it. 
If you've recorded a video from the TV or the Web that you didn't make and don't have permission to reproduce (or if you've added copyrighted music to your own videos without permission), then you can't upload it. It may seem tempting to ignore copyright and upload anything you're able to find and record (and you'll find plenty of examples of people who've done just that), but you are running a risk of being prosecuted for copyright infringement and being forced to pay a huge fine. I'd also suggest that if you can create and publish original video content rather than copying someone else's, you'll find an audience of fans for that content, and it will be a much more enjoyable process. If your videos involve screen capture of you using software or playing games, you'll need to check the license for that software or game to be sure that you're entitled to publish video of you interacting with it. Most software and games developers have no problem with this as it provides free advertising for them, but you should check with the software provider and the YouTube copyright advice. Movies and music have stricter rules than games generally do however. If you upload videos containing someone else's video or music content that's copyrighted and you haven't got permission to reproduce, then you will find yourself in violation of YouTube's rules and possibly in legal trouble too. Creating a YouTube channel and uploading videos So, you've planned your channel and you have some videos you want to share with the world. You'll need a YouTube channel so you can upload your videos. Creating your YouTube channel You'll need a YouTube channel in order to do this. Let's create a YouTube channel by following these steps: If you don't already have one, create a Google account for yourself at https://accounts.google.com/SignUp. Head over to YouTube at https://www.youtube.com and sign in. You'll have an account with YouTube because it's part of Google, but you won't have a channel yet. Go to https://www.youtube.com/channel_switcher. Click on the Create a new channel button. Follow the instructions onscreen to create your channel. Customize your channel, uploading images to your profile photo or channel art and adding a description using the About tab. Here's my channel: It can take a while for artwork from Google+ to show up on your channel, so don't worry if you don't see it straight away. Uploading videos The next step is to upload some videos. YouTube accepts videos in the following formats: .MOV .MPEG4 .AVI .WMV .MPEGPS .FLV 3GPP WebM Depending on the video software you've used to record, your video may already be in one of these formats or you may need to export it to one of these and save it before you can upload it. If you're not sure how to convert your file to one of the supported formats, you'll find advice at https://support.google.com/youtube/troubleshooter/2888402 to help you do it. You can also upload videos to YouTube directly from your phone or tablet. On an Android device, you'll need to use the YouTube app, while on an iOS device you can log in to YouTube on the device and upload from the camera app. For detailed instructions and advice for other devices, refer to https://support.google.com/youtube/answer/57407. If you're uploading directly to the YouTube website, simply click on the Upload a video button when viewing your channel and follow the onscreen instructions. 
Make sure you add your video to a playlist by clicking on the +Add to playlist button on the right-hand side while you're setting up the video as this will help you categorize the videos in your site later. Now when you open your channel page and click on the Videos tab, you'll see all the videos you uploaded: When you click on the Playlists tab, you'll see your new playlist: So you now have some videos and a playlist set up in YouTube. It's time to set up your WordPress site for streaming those videos. Installing and configuring the YouTube plugin Now that you have your videos and playlists set up, it's time to add a plugin to your site that will automatically add new videos to your site when you upload them to YouTube. Because I've created a playlist, I'm going to use a category in my site for the playlist and automatically add new videos to that category as posts. If you prefer you can use different channels for each category or you can just use one video category and link your channel to that. The latter is useful if your site will contain other content as well, such as photos or blog posts. Note that you don't need a plugin to stream YouTube videos to your site. You can simply paste the URL for a video into the editing pane when you're creating a post or page in your site, and WordPress will automatically stream the video. You don't even need to add an embed code, just add the YRL. But if you don't want to automate the process of streaming all of the videos in your channel to your site, this plugin will make that process easy. Installing the Automatic YouTube Video Posts plugin The Automatic YouTube Video Posts plugin lets you link your site to any YouTube channel or playlist and automatically adds each new video to your site as a post. Let's start by installing it. I'm working with a fresh WordPress installation but you can also do this on your existing site if that's what you're working with. Follow these steps: In the WordPress admin, go to Plugins | Add New. In the Search box, type Automatic Youtube. The plugins that meet the search criteria will be displayed. Select the Automatic YouTube Video Posts plugin and then install and activate it. For the plugin to work, you'll need to configure its settings and add one or more channels or playlists. Configuring the plugin settings Let's start with the plugin settings screen. You do this via the Youtube Posts menu, which the plugin has added to your admin menu: Go to Youtube Posts | Settings. Edit the settings as follows:     Automatically publish posts: Set this to Yes     Display YouTube video meta: Set this to Yes     Number of words and Video dimensions: Leave these at the default values     Display related videos: Set this to No     Display videos in post lists: Set this to Yes    Import the latest videos every: Set this to 1 hours (note that the updates will happen every hour if someone visits the site, but not if the site isn't visited) Click on the Save changes button. The settings screen will look similar to the following screenshot: Adding a YouTube channel or playlist The next step is to add a YouTube channel and/or playlist so that the plugin will create posts from your videos. I'm going to add the "Dizzy" playlist I created earlier on. But first, I'll create a category for all my videos from that playlist. Creating a category for a playlist Create a category for your playlist in the normal way: In the WordPress admin, go to Posts | Categories. 
Add the category name and slug or description if you want to (if you don't, WordPress will automatically create a slug). Click on the Add New Category button. Adding your channel or playlist to the plugin Now you need to configure the plugin so that it creates posts in the category you've just created. In the WordPress admin, go to Youtube Posts | Channels/Playlists. Click on the Add New button. Add the details of your channel or playlist, as shown in the next screenshot. In my case, the details are as follows:     Name: Dizzy     Channel/playlist: This is the ID of my playlist. To find this, open the playlist in YouTube and then copy the last part of its URL from your browser. The URL for my playlist is   https://www.youtube.com/watch?v=vd128vVQc6Y&list=PLG9W2ELAaa-Wh6sVbQAIB9RtN_1UV49Uv and the playlist ID is after the &list= text, so it's PLG9W2ELAaa-Wh6sVbQAIB9RtN_1UV49Uv. If you want to add a channel, add its unique name.      Type: Select Channel or Playlist; I'm selecting Playlist.      Add videos from this channel/playlist to the following categories: Select the category you just created.      Attribute videos from this channel to what author: Select the author you want to attribute videos to, if your site has more than one author. Finally, click on the Add Channel button. Adding a YouTube playlist Once you click on the Add Channel button, you'll be taken back to the Channels/Playlists screen, where you'll see your playlist or channel added: The newly added playlist If you like, you can add more channels or playlists and more categories. Now go to the Posts listing screen in your WordPress admin, and you'll see that the plugin has created posts for each of the videos in your playlist: Automatically added posts Installing and configuring a suitable theme You'll need a suitable theme in your site to make your videos stand out. I'm going to use the Keratin theme which is grid-based with a right-hand sidebar. A grid-based theme works well as people can see your videos on your home page and category pages. Installing the theme Let's install the theme: Go to Appearance | Themes. Click on the Add New button. In the search box, type Keratin. The theme will be listed. Click on the Install button. When prompted, click on the Activate button. The theme will now be displayed in your admin screen as active: The installed and activated theme Creating a navigation menu Now that you've activated a new theme, you'll need to make sure your navigation menu is configured so that it's in the theme's primary menu slot, or if you haven't created a menu yet, you'll need to create one. Follow these steps: Go to Appearance | Menus. If you don't already have a menu, click on the Create Menu button and name your new menu. Add your home page to the menu along with any category pages you've created by clicking on the Categories metabox on the left-hand side. Once everything is in the right place in your menu, click on the Save Menu button. Your Menus screen will look something similar to this: Now that you have a menu, let's take a look at the site: The live site That's looking good, but I'd like to add some text in the sidebar instead of the default content. Adding a text widget to the sidebar Let's add a text widget with some information about the site: In the WordPress admin, go to Appearance | Widgets. Find the text widget on the left-hand side and drag it into the widget area for the main sidebar. Give the widget a title. Type the following text into the widget's contents: Welcome to this video site. 
To see my videos on YouTube, visit <a href="https://www.youtube.com/channel/UC5NPnKZOjCxhPBLZn_DHOMw">my channel</a>.

Replace the link I've added here with a link to your own channel: The Widgets screen with a text widget added

Text widgets accept text and HTML. Here we've used HTML to create a link. For more on HTML links, visit http://www.w3schools.com/html/html_links.asp. Alternatively, if you'd rather create a widget that gives you an editing pane like the one you use for creating posts, you can install the TinyMCE Widget plugin from https://wordpress.org/plugins/black-studio-tinymce-widget/screenshots/. This gives you a widget that lets you create links and format your text just as you would when creating a post.

Now go back to your live site to see how things are looking: The live site with a text widget added

It's looking much better! If you click on one of these videos, you're taken to the post for that video: A single post with a video automatically added

Your site is now ready.

Managing and updating your videos

The great thing about using this plugin is that once you've set it up, you'll never have to do anything in your website to add new videos. All you need to do is upload them to YouTube and add them to the playlist you've linked to, and they'll automatically be added to your site.

If you want to add extra content to the posts holding your videos, you can do so. Just edit the posts in the normal way, adding text, images, and anything you want. These will be displayed as well as the videos. If you want to create new playlists in future, you just do this in YouTube, then create a new category on your site and add the playlist in the settings for the plugin, assigning the new playlist to the relevant category.

You can upload your videos to YouTube in a variety of ways: via the YouTube website or directly from the device or software you use to record and/or edit them. Most phones allow you to sign in to your YouTube account via the video or YouTube app and directly upload videos, and video editing software will often let you do the same.

Good luck with your video site; I hope it gets you lots of views!

Summary

In this article, you learned how to create a WordPress site for streaming video from YouTube. You created a YouTube channel and added videos and playlists to it, and then you set up your site to automatically create a new post each time you add a new video, using a plugin. Finally, you installed a suitable theme and configured it, creating categories for your channels and adding these to your navigation menu.

Resources for Article:

Further resources on this subject:

Adding Geographic Capabilities via the GeoPlaces Theme [article]
Adding Flash to your WordPress Theme [article]

EAV model

Packt
10 Aug 2015
11 min read
In this article by Allan MacGregor, author of the book Magento PHP Developer's Guide - Second Edition, we cover details about EAV models, their usefulness in retrieving data, and the advantages they provide to merchants and developers. EAV stands for entity, attribute, and value and is probably the most difficult concept for new Magento developers to grasp. While the EAV concept is not unique to Magento, it is rarely implemented on modern systems. Additionally, a Magento implementation is not a simple one. (For more resources related to this topic, see here.)

What is EAV?

In order to understand what EAV is and what its role within Magento is, we need to break down the parts of the EAV model:

Entity: This represents the data items (objects) inside Magento: products, customers, categories, and orders. Each entity is stored in the database with a unique ID.
Attribute: These are our object properties. Instead of having one column per attribute on the product table, attributes are stored on separate sets of tables.
Value: As the name implies, it is simply the value linked to a particular attribute.

This data model is the secret behind Magento's flexibility and power, allowing entities to add and remove new properties without having to make any changes to the code, templates, or the database schema. This model can be seen as a vertical way of growing our database (new attributes and more rows), while the traditional model involves a horizontal growth pattern (new attributes and more columns), which would result in a schema redesign every time new attributes are added. The EAV model not only allows for the fast evolution of our database, but is also more effective because it only works with non-empty attributes, avoiding the need to reserve additional space in the database for null values.

If you are interested in exploring and learning more about the Magento database structure, I highly recommend visiting www.magereverse.com.

Adding a new product attribute is as simple as going to the Magento backend and specifying the new attribute type, be it color, size, brand, or anything else. The opposite is true as well: we can get rid of unused attributes on our product or customer models. For more information on managing attributes, visit http://www.magentocommerce.com/knowledge-base/entry/how-do-attributes-work-in-magento.

The Magento Community Edition currently has eight different types of EAV objects:

Customer
Customer Address
Products
Product Categories
Orders
Invoices
Credit Memos
Shipments

The Magento Enterprise Edition has one additional type called RMA item, which is part of the Return Merchandise Authorization (RMA) system.

All this flexibility and power is not free; there is a price to pay. Implementing the EAV model results in having our entity data distributed across a large number of tables. For example, just the Product Model is distributed across around 40 different tables. The following diagram only shows a few of the tables involved in saving the information of Magento products:

Other major downsides of EAV are the loss of performance while retrieving large collections of EAV objects and an increase in database query complexity. As the data is more fragmented (stored in more tables), selecting a single record involves several joins. One way Magento works around this downside of EAV is by making use of indexes and flat tables. For example, Magento can save all the product information into the flat_catalog table for easier and faster access.
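Because the book targets Magento 1, the quickest way to see EAV in action from PHP is to load a product through the ORM and read its attributes via the magic getters generated from the attribute codes; in the rest of this article, we dissect what happens under the hood with raw SQL. A minimal sketch, assuming a standalone script in the Magento root and that a product with ID 1 exists in your test store:

<?php
// Minimal sketch (Magento 1): load one product and read its EAV attributes.
// The product ID used here is an assumption for illustration only.
require_once 'app/Mage.php';
Mage::app();

$product = Mage::getModel('catalog/product')->load(1);

// Attribute codes become magic getters...
echo $product->getName() . PHP_EOL;
echo $product->getSku() . PHP_EOL;

// ...and are equally available through getData()
echo $product->getData('name') . PHP_EOL;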
Let's continue using Magento products as our example and manually build the query to retrieve a single product. If you have phpMyAdmin or MySQL Workbench installed on your development environment, you can experiment with the following queries. The tools can be downloaded from the phpMyAdmin website at http://www.phpmyadmin.net/ and the MySQL Workbench website at http://www.mysql.com/products/workbench/.

The first table that we need to use is the catalog_product_entity table. We can consider this our main product EAV table since it contains the main entity records for our products. Let's query the table by running the following SQL query:

SELECT * FROM `catalog_product_entity`;

The table contains the following fields:

entity_id: This is our product's unique identifier that is used internally by Magento.
entity_type_id: Magento has several different types of EAV models. Products, customers, and orders are just some of them. Identifying each of these by type allows Magento to retrieve the attributes and values from the appropriate tables.
attribute_set_id: Product attributes can be grouped locally into attribute sets. Attribute sets allow even further flexibility on the product structure as products are not forced to use all available attributes.
type_id: There are several different types of products in Magento: simple, configurable, bundled, downloadable, and grouped products; each with unique settings and functionality.
sku: This stands for Stock Keeping Unit and is a number or code used to identify each unique product or item for sale in a store. This is a user-defined value.
has_options: This is used to identify if a product has custom options.
required_options: This is used to identify if any of the custom options are required.
created_at: This is the row creation date.
updated_at: This is the last time the row was modified.

Now we have a basic understanding of the product entity table. Each record represents a single product in our Magento store, but we don't have much information about that product beyond the SKU and the product type. So, where are the attributes stored? And how does Magento know the difference between a product attribute and a customer attribute?

For this, we need to take a look into the eav_attribute table by running the following SQL query:

SELECT * FROM `eav_attribute`;

As a result, we will not only see the product attributes, but also the attributes corresponding to the customer model, order model, and so on. Fortunately, we already have a key to filter the attributes from this table. Let's run the following query:

SELECT * FROM `eav_attribute` WHERE entity_type_id = 4;

This query tells the database to only retrieve the attributes where the entity_type_id column is equal to the product entity_type_id (4). Before moving on, let's analyze the most important fields inside the eav_attribute table:

attribute_id: This is the unique identifier for each attribute and the primary key of the table.
entity_type_id: This relates each attribute to a specific EAV model type.
attribute_code: This is the name or key of our attribute and is used to generate the getters and setters for our magic methods.
backend_model: This manages loading and storing data into the database.
backend_type: This specifies the type of value stored in the backend (database).
backend_table: This is used to specify if the attribute should be stored on a special table instead of the default EAV table.
frontend_model: This handles the rendering of the attribute element into a web browser.
frontend_input: Similar to the frontend model, the frontend input specifies the type of input field the web browser should render.
frontend_label: This is the label/name of the attribute as it should be rendered by the browser.
source_model: This is used to populate an attribute with possible values. Magento comes with several predefined source models for countries, yes or no values, regions, and so on.

Retrieving the data

At this point, we have successfully retrieved a product entity and the specific attributes that apply to that entity. Now it's time to start retrieving the actual values. In order to simplify the example (and the query) a little, we will only try to retrieve the name attribute of our products.

How do we know which table our attribute values are stored in? Well, thankfully, Magento follows a naming convention to name the tables. If we inspect our database structure, we will notice that there are several tables using the catalog_product_entity prefix:

catalog_product_entity
catalog_product_entity_datetime
catalog_product_entity_decimal
catalog_product_entity_int
catalog_product_entity_text
catalog_product_entity_varchar
catalog_product_entity_gallery
catalog_product_entity_media_gallery
catalog_product_entity_tier_price

Wait! How do we know which is the right table to query for our name attribute values? If you were paying attention, I already gave you the answer. Remember that the eav_attribute table had a column called backend_type? Magento EAV stores each attribute on a different table based on the backend type of that attribute. If we want to confirm the backend type of our name attribute, we can do so by running the following query:

SELECT * FROM `eav_attribute` WHERE `entity_type_id` = 4 AND `attribute_code` = 'name';

As a result, we should see that the backend type is varchar and that the values for this attribute are stored in the catalog_product_entity_varchar table. Let's inspect this table:

The catalog_product_entity_varchar table is formed by only 6 columns:

value_id: This is the attribute value's unique identifier and primary key
entity_type_id: This is the entity type ID to which this value belongs
attribute_id: This is the foreign key that relates the value to our eav_attribute table
store_id: This is the foreign key matching an attribute value with a store view
entity_id: This is the foreign key relating to the corresponding entity table, in this case, catalog_product_entity
value: This is the actual value that we want to retrieve

Depending on the attribute configuration, we can have it as a global value, meaning it applies across all store views, or as a value per store view.

Now that we finally have all the tables that we need to retrieve the product information, we can build our query:

SELECT p.entity_id AS product_id, var.value AS product_name, p.sku AS product_sku
FROM catalog_product_entity p, eav_attribute eav, catalog_product_entity_varchar var
WHERE p.entity_type_id = eav.entity_type_id
    AND var.entity_id = p.entity_id
    AND eav.attribute_code = 'name'
    AND eav.attribute_id = var.attribute_id

From our query, we should see a result set with three columns: product_id, product_name, and product_sku. Let's step back for a second: in order to get product names and SKUs with raw SQL, we had to write a five-line SQL query, and we only retrieved two values from our products, from one single EAV value table; if we wanted to retrieve a numeric field such as price or a text value such as the product description, we would need even more joins.
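If you prefer to experiment with this query from PHP rather than phpMyAdmin or MySQL Workbench, a minimal sketch using Magento's core read connection could look like the following; it assumes a standard Magento 1 installation with no database table prefix:

<?php
// Minimal sketch (Magento 1): run the raw EAV query through the core read adapter.
require_once 'app/Mage.php';
Mage::app();

$read = Mage::getSingleton('core/resource')->getConnection('core_read');

$sql = "SELECT p.entity_id AS product_id, var.value AS product_name, p.sku AS product_sku
        FROM catalog_product_entity p, eav_attribute eav, catalog_product_entity_varchar var
        WHERE p.entity_type_id = eav.entity_type_id
          AND var.entity_id = p.entity_id
          AND eav.attribute_code = 'name'
          AND eav.attribute_id = var.attribute_id";

foreach ($read->fetchAll($sql) as $row) {
    echo $row['product_id'] . ' - ' . $row['product_name'] . ' - ' . $row['product_sku'] . PHP_EOL;
}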
If we didn't have an ORM in place, maintaining Magento would be almost impossible. Fortunately, we do have an ORM in place, and most likely, you will never need to deal with raw SQL to work with Magento. That said, let's see how we can retrieve the same product information by using the Magento ORM:

Our first step is going to be to instantiate a product collection:
$collection = Mage::getModel('catalog/product')->getCollection();
Then we will specifically tell Magento to select the name attribute:
$collection->addAttributeToSelect('name');
Then, we will ask it to sort the collection by name:
$collection->setOrder('name', 'asc');
Finally, we will tell Magento to load the collection:
$collection->load();

The end result is a collection of all products in the store sorted by name. We can inspect the actual SQL query by running the following code:

echo $collection->getSelect()->__toString();

In just three lines of code, we are telling Magento to grab all the products in the store, to specifically select the name, and finally to order the products by name. The last line, $collection->getSelect()->__toString();, allows us to see the actual query that Magento is executing on our behalf. The actual query being generated by Magento is as follows:

SELECT `e`.*, IF(at_name.value_id > 0, at_name.value, at_name_default.value) AS `name`
FROM `catalog_product_entity` AS `e`
LEFT JOIN `catalog_product_entity_varchar` AS `at_name_default`
    ON (`at_name_default`.`entity_id` = `e`.`entity_id`)
    AND (`at_name_default`.`attribute_id` = '65')
    AND `at_name_default`.`store_id` = 0
LEFT JOIN `catalog_product_entity_varchar` AS `at_name`
    ON (`at_name`.`entity_id` = `e`.`entity_id`)
    AND (`at_name`.`attribute_id` = '65')
    AND (`at_name`.`store_id` = 1)
ORDER BY `name` ASC

As we can see, the ORM and the EAV models are wonderful tools that not only put a lot of power and flexibility in the hands of developers, but also do it in a way that is comprehensive and easy to use.

Summary

In this article, we learned about EAV models and how they are structured to provide Magento with data flexibility and extensibility that both merchants and developers can take advantage of.

Resources for Article:

Further resources on this subject:

Creating a Shipping Module [article]
Preparing and Configuring Your Magento Website [article]
Optimizing Magento Performance — Using HHVM [article]


Responsive Web Design with WordPress

Packt
09 Jul 2015
13 min read
Welcome to the world of Responsive Web Design! This article is written by Dejan Markovic, author of the book WordPress Responsive Theme Design, and it will introduce you to Responsive Web Design and its concepts and techniques. It will also present crisp notes from WordPress Responsive Theme Design. (For more resources related to this topic, see here.)

Responsive web design (RWD) is a web design approach aimed at crafting sites to provide an optimal viewing experience—easy reading and navigation with a minimum of resizing, panning, and scrolling—across a wide range of devices (from mobile phones to desktop computer monitors). Reference: http://en.wikipedia.org/wiki/Responsive_web_design.

To say it simply, responsive web design (RWD) means that a responsive website should adapt to the screen size of the device it is being viewed on.

When I began my web development journey in 2002, we didn't have to consider as many factors as we do today. We just had to create the website for a 17-inch screen (which was the standard at that time), and that was it. Yes, we also had to consider 15, 19, and 21-inch monitors, but since the 17-inch screen was the standard, that was the target screen size for us. In pixels, these sizes were usually 800 or 1024. We also had to consider a smaller number of browsers (Internet Explorer, Netscape, and Opera) and the styling for print, and that was it.

Since then, a lot of things have changed, and today, in 2015, for a website design, we have to consider multiple factors, such as:

A lot of different web browsers (Internet Explorer, Firefox, Opera, Chrome, and Safari)
A number of different operating systems (Windows (XP, 7, and 8), Mac OS X, Linux, Unix, iOS, Android, and Windows phones)
Device screen sizes (desktop, mobile, and tablet)
Is the content accessible and readable with screen readers?
How will the content look when it's printed?

Today, creating a different design for each of these factors and devices would take years. This is where responsive web design comes to the rescue.

The concepts of RWD

I have to point out that the mobile environment is becoming a more important factor than the desktop environment. Mobile browsing is becoming bigger than desktop-based access, which makes the mobile environment a very important factor to consider when developing a website. Simply put, the main point of RWD is that the layout changes based on the size and capabilities of the device it's being viewed on. The concepts of RWD that we will learn next are: Viewport, scaling, and screen density.

Controlling Viewport

On the desktop, the Viewport is the screen size of the window in a browser. For example, when we resize the browser window, we are actually changing the Viewport size. On mobile devices, the Viewport size is also independent of the device screen size. For example, the Viewport is 850 px for mobile Opera and 980 px for mobile Safari, while the screen size for the iPhone is 320 px. If we compare the Viewport size of 980 px and the screen size of an iPhone of 320 px, we can see that the Viewport is bigger than the screen size. This is because mobile browsers function differently. They first load the page into the Viewport, and then they resize it to the device's screen size. This is why we are able to see the whole page on the mobile device. If mobile browsers had a Viewport the same as the screen size (320 px), we would be able to see only a part of the page on the mobile device.
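Since this book applies these concepts to WordPress theme design, it's worth noting that the Viewport is usually defined in the theme's header. The meta tag itself is explained in the next section; as a quick, hypothetical sketch of the PHP side, a theme could print it from functions.php like this (the function name is an assumption):

<?php
// Hypothetical sketch: output the viewport meta tag from a WordPress theme.
function mytheme_viewport_meta() {
    echo '<meta name="viewport" content="width=device-width, initial-scale=1">' . "\n";
}
add_action( 'wp_head', 'mytheme_viewport_meta', 1 );

Many themes simply hardcode the tag in header.php instead; either approach produces the same markup.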
In the following screenshot, we can see the table with the list of Viewport sizes for some iPhone models:

We can control the Viewport with CSS:

@viewport { width: device-width; }

Or, we can control it with the meta tag:

<meta name="viewport" content="width=device-width">

In the preceding code, we are matching the Viewport width with the device width. Because the Viewport meta tag approach is more widely adopted, as it was first used on iOS and the @viewport approach was not supported by some browsers, we will use the meta tag approach. We are setting the Viewport width in order to match our web content with our mobile content, as we want to make sure that our web content looks good on a mobile device as well. We can set Viewports in the code for each device separately, for example, 320 px for the iPhone. The better approach will be to use content="width=device-width".

Scaling

Scaling is extremely important, as the initial scale controls the zoom aspect of the content for the initial look of the page. For example, if the initial scale is set to 3, the content will be loaded at 3 times the Viewport size, which means 3 times zoom. Here is the look of the screenshot for initial-scale=1 and initial-scale=3:

As we can see from the preceding screenshots, at the initial scale of 3 (three times zoom), the logo image takes up the bigger part of the screen. It is important to note that this is just the initial scale, which means that the user can zoom in and zoom out later, if they want to. Here is an example of the code with the initial scale:

<meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1">

In this example, we have used the maximum-scale=1 option, which means that the user will not be able to use zoom here. We should avoid using the maximum-scale property because of accessibility issues. If we forbid zooming on our pages, users with visual problems will not be able to see the content properly.

The screen density

As screen technology moves forward every year, or even faster than that, we have to consider the screen density aspect as well. Screen density is the number of pixels that are contained within a screen area. This means that if the screen density is higher, we can have more details, in this case, pixels, in the same area. There are two measurements that are usually used for this: dots per inch (DPI) and pixels per inch (PPI). DPI means how many dots a printer can place in an inch of space. PPI is the number of pixels we can have in one inch of the screen.

If we go back to the preceding screenshot with the table where we are showing Viewports and densities and compare the values of the iPhone 3G and iPhone 4S, we will see that the screen size stayed the same at 3.5 inches, the Viewport stayed the same at 320 px, but the screen density has doubled, from 163 dpi to 326 dpi, which means that the screen resolution also has doubled, from 320x480 to 640x960. Screen density is very relevant to RWD, as newer devices have bigger densities and we should do our best to cover as many densities as we can in order to provide a better experience for end users. Pixel density matters more than the resolution or screen size, because more pixels equals a sharper display:

There are topics that need to be taken into consideration, such as hardware, reference pixels, and the device-pixel-ratio, too.

Problems and solutions with the screen density

Scalable vector graphics and CSS graphics will scale to the resolution.
This is why I recommend using Font Awesome icons in your project. Font Awesome icons are available for download at http://fortawesome.github.io/Font-Awesome/icons/.

An icon font is a font that is made up of symbols, icons, or pictograms (whatever you prefer to call them) that you can use in a web page just like a font. They can be instantly customized with properties such as size and drop shadow; anything you want can be done with the power of CSS.

The real problem triggered by the change in screen density is images, as for high-density screens, we should provide higher-resolution images. There are several ways through which we can approach this problem:

By targeting high-density screens (providing high-resolution images to all screens)
By providing high-resolution images where appropriate (loading high-resolution images only on devices with high-resolution screens)
By not using high-resolution images

For beginner developers, I recommend using the second approach: providing high-resolution images where appropriate.

Techniques in RWD

RWD consists of three coding techniques:

Media queries (adapt content to specific screen sizes)
Fluid grids (for flexible layouts)
Flexible images and media (that respond to changes in screen sizes)

More detailed information about RWD techniques by Ethan Marcotte, who coined the term Responsive Web Design, is available at http://alistapart.com/article/responsive-web-design.

Media queries

Media queries are CSS modules, or, as some people like to say, just conditional statements, which tell the browsers to use a specific type of style depending on the size of the screen and other factors, such as print (specific styles for print). They have been around for a long time; I was already using different styles for print in 2002. If you wish to know more about media queries, refer to W3C Candidate Recommendation 8 July 2002 at http://www.w3.org/TR/2002/CR-css3-mediaqueries-20020708/.

Here is an example of a media query declaration:

@media only screen and (min-width:500px) { font-family: sans-serif; }

Let's explain the preceding code:

The @media code means that it is a media type declaration.
The screen and part of the query is an expression or condition (in this case, it means only screen and no print).
The following conditional statement means that everything above 500 px will have the font family of sans serif:

(min-width:500px) { font-family: sans-serif; }

Here is another example of a media query declaration:

@media only screen and (min-width: 500px), screen and (orientation: portrait) { font-family: sans-serif; }

In this case, we have two statements, and if one of the statements is true, the entire declaration is applied (either everything above 500 px or the portrait orientation will be applied to the screen). The only keyword hides the styles from older browsers. As some older browsers don't support media queries, I recommend using the respond.js script, which will "patch" support for them.

Polyfill (or polyfiller) is code that provides features that are not built into or supported by some web browsers. For example, a number of HTML5 features are not supported by older versions of IE (older than 8 or 9), but these features can be used if a polyfill is installed on the web page. This means that if the developer wants to use these features, he/she can just include that polyfill library and these features will work in older browsers.
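In a WordPress theme, one way to load the respond.js polyfill only for old versions of Internet Explorer is to enqueue it with an IE conditional comment. The following is a minimal sketch, assuming you have copied respond.min.js into a js folder inside your theme; the file path, version number, and function name are assumptions:

<?php
// Hypothetical sketch: load respond.js only for IE versions older than 9.
function mytheme_enqueue_respond_polyfill() {
    wp_enqueue_script(
        'respond',
        get_template_directory_uri() . '/js/respond.min.js',
        array(),
        '1.4.2',
        false // load in the <head>, before the layout is rendered
    );
    // Wrap the script tag in an IE conditional comment (WordPress 4.2+).
    wp_script_add_data( 'respond', 'conditional', 'lt IE 9' );
}
add_action( 'wp_enqueue_scripts', 'mytheme_enqueue_respond_polyfill' );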
Breakpoints Breakpoint is a moment when layout switches, from one layout to another, when some condition is fulfilled, for example, the screen has been resized. Almost all responsive designs cover the changes of the screen between the desktop, tablets, and smart phones. Here is an example with comments inside: @media only screen and (max-width: 480px) { //mobile styles // up to 480px size } Media query in the preceding code will only be used if the width of the screen is 480 px or less. @media only screen and (min-width:481px) and (max-width: 768px) { //tablet styles //between 481 and 768px } Media query in the preceding code will only be used if the width of the screen is between the 481 px and 768 px. @media only screen and (min-width:769px) { //desktop styles //from 769px and up } Media query in the preceding code will only be used when the width of the screen is 769 px and more. The minimum width value in desktop styles is 1 pixel over the maximum width value in tablet styles, and the same difference is there between values from tablet and mobile styles. We are doing this in order to avoid overlapping, as that could cause problem with our styles. There is also an approach to set the maximum width and minimum width with em values. Setting em of the screen for maximum will mean that the width of the screen is set relative to the device's font size. If the font size for the device is 16 px (which is the usual size), the maximum width for mobile styles would be 480/16=30. Why do we use em values? With pixel sizes, everything is fixed; for example, h1 is 19 px (or 1.5 em of the default size of 16 px), and that's it. With em sizes, everything is relative, so if we change the default value in the browser from, for example, 16 px to 18 px, everything relative to that will change. Therefore, all h1 values will change from 19 px to 22 px and make our layout "zoomable". Here is the example with sizes changed to em: @media only screen and (max-width: 30em) { //mobile styles // up to 480px size }   @media only screen and (min-width:30em) and (max-width: 48em) { //tablet styles //between 481 and 768px }   @media only screen and (min-width:48em) { //desktop styles //from 769px and up } Fluid grids The major point in RWD is that the content should adapt to any screen it's viewed on. One of the best solutions to do this is to use fluid layouts where our content can be resized on each breakpoint. In fluid grids, we define a maximum layout size for the design. The grid is divided into a specific number of columns to keep the layout clean and easy to handle. Then we design each element with proportional widths and heights instead of pixel based dimensions. So whenever the device or screen size is changed, elements will adjust their widths and heights by the specified proportions to its parent container. Reference: http://www.1stwebdesigner.com/tutorials/fluid-grids-in-responsive-design/. To make the grid flexible (or elastic), we can use the % points, or we can use the em values, whichever suits us better. We can make our own fluid grids, or we can use grid frameworks. As there are so many frameworks available, I would recommend that you use the existing framework rather than building your own. Grid frameworks could use a single grid that covers various screen sizes, or we can have multiple grids for each of the break points or screen size categories, such as mobiles, tablets, and desktops. Some of the notable frameworks are Twitter's Bootstrap, Foundation, and SemanticUI. 
I prefer Twitter's Bootstrap, as it really helps me speed up the process and it is currently the most used framework.

Flexible images and media

Last but not least are images and media (videos). The problem with them is that they are elements that come with fixed sizes. There are several approaches to fix this:

Replacing dimensions with percentage values
Using maximum widths
Using background images only for some cases, as these are not good for accessibility
Using some libraries, such as Scott Jehl's picturefill (https://github.com/scottjehl/picturefill)
Taking the width and height parameters out of the image tag and dealing with dimensions in CSS

Summary

In this article, you learned about the RWD concepts: Viewport, scaling, and screen density. We also covered the RWD techniques: media queries, fluid grids, and flexible media.

Resources for Article:

Further resources on this subject:

Deployment Preparations [article]
Why Meteor Rocks! [article]
Clustering and Other Unsupervised Learning Methods [article]


Implementing Membership Roles, Permissions, and Features

Packt
02 Jun 2015
34 min read
In this article by Rakhitha Nimesh Ratnayake, author of the book WordPress Web Application Development - Second Edition, we will see how to implement frontend registration and how to create a login form in the frontend. (For more resources related to this topic, see here.)

Implementing frontend registration

Fortunately, we can make use of the existing functionalities to implement registration from the frontend. We can use a regular HTTP request or an AJAX-based technique to implement this feature. In this article, I will focus on a normal process instead of using AJAX. Our first task is to create the registration form in the frontend. There are various ways to implement such forms in the frontend. Let's look at some of the possibilities as described in the following section:

Shortcode implementation
Page template implementation
Custom template implementation

Now, let's look at the implementation of each of these techniques.

Shortcode implementation

Shortcodes are the quickest way to add dynamic content to your pages. In this situation, we need to create a page for registration. Therefore, we need to create a shortcode that generates the registration form, as shown in the following code:

add_shortcode( "register_form", "display_register_form" );

function display_register_form(){
    $html = "HTML for registration form";
    return $html;
}

Then, you can add the shortcode inside the created page using the following code snippet to display the registration form:

[register_form]

Pros and cons of using shortcodes

Following are the pros and cons of using shortcodes:

Shortcodes are easy to implement in any part of your application
It's hard to manage the template code assigned using the PHP variables
There is a possibility of the shortcode getting deleted from the page by mistake

Page template implementation

Page templates are a widely used technique in modern WordPress themes. We can create a page template to embed the registration form. Consider the following code for a sample page template:

/**
* Template Name : Registration
*/

HTML code for registration form

Next, we have to copy the template inside the theme folder. Finally, we can create a page and assign the page template to display the registration form. Now, let's look at the pros and cons of this technique.

Pros and cons of page templates

Following are the pros and cons of page templates:

A page template is more stable than shortcode.
Generally, page templates are associated with the look of the website rather than providing dynamic forms. The full width page, two-column page, and left sidebar page are some common implementations of page templates.
A template is managed separately from logic, without using PHP variables.
The page templates depend on the theme and need to be updated on theme switching.

Custom template implementation

Experienced web application developers will always look to separate business logic from view templates. This will be the perfect technique for such people. In this technique, we will create our own independent templates by intercepting the WordPress default routing process. An implementation of this technique starts from the next section on routing.

Building a simple router for a user module

Routing is one of the important aspects in advanced application development. We need to figure out ways of building custom routes for specific functionalities. In this scenario, we will create a custom router to handle all the user-related functionalities of our application.
Let's list the requirements for building a router: All the user-related functionalities should go through a custom URL, such as http://www.example.com/user Registration should be implemented at http://www.example.com/user/register Login should be implemented at http://www.example.com/user/login Activation should be implemented at http://www.example.com/user/activate Make sure to set up your permalinks structure to post name for the examples in this article. If you prefer a different permalinks structure, you will have to update the URLs and routing rules accordingly. As you can see, the user section is common for all the functionalities. The second URL segment changes dynamically based on the functionality. In MVC terms, user acts as the controller and the next URL segment (register, login, and activate) acts as the action. Now, let's see how we can implement a custom router for the given requirements. Creating the routing rules There are various ways and action hooks used to create custom rewrite rules. We will choose the init action to define our custom routes for the user section, as shown in the following code: public function manage_user_routes() {add_rewrite_rule( '^user/([^/]+)/?','index.php?control_action=$matches[1]', 'top' );} Based on the discussed requirements, all the URLs for the user section will follow the /user/custom action pattern. Therefore, we will define the regular expression for matching all the routes in the user section. Redirection is made to the index.php file with a query variable called control_action. This variable will contain the URL segment after the /user segment. The third parameter of the add_rewrite_rule function will decide whether to check this rewrite rule before the existing rules or after them. The value of top will give a higher precedence, while the value of bottom will give a lower precedence. We need to complete two other tasks to get these rewriting rules to take effect: Add query variables to the WordPress query_vars Flush the rewriting rules Adding query variables WordPress doesn't allow you to use any type of variable in the query string. It will check for query variables within the existing list and all other variables will be ignored. Whenever we want to use a new query variable, make sure to add it to the existing list. First, we need to update our constructor with the following filter to customize query variables: add_filter( 'query_vars', array( $this, 'manage_user_routes_query_vars' ) ); This filter on query_vars will allow us to customize the list of existing variables by adding or removing entries from an array. Now, consider the implementation to add a new query variable: public function manage_user_routes_query_vars( $query_vars ) {$query_vars[] = 'control_action';return $query_vars;} As this is a filter, the existing query_vars variable will be passed as an array. We will modify the array by adding a new query variable called control_action and return the list. Now, we have the ability to access this variable from the URL. Flush the rewriting rules Once rewrite rules are modified, it's a must to flush the rules in order to prevent 404 page generation. Flushing existing rules is a time consuming task, which impacts the performance of the application and hence should be avoided in repetitive actions such as init. It's recommended that you perform such tasks in plugin activation or installation as we did earlier in user roles and capabilities. 
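The article refers to "the constructor" several times without showing it in one place. Pulling together the two hooks covered so far, a minimal sketch of how the router class could wire them up might look like the following; the class name is an assumption, and the two methods are the ones shown above, reproduced here only for context:

<?php
// Hypothetical sketch: wiring the routing hooks inside the plugin class constructor.
class WPWA_User_Router {

    public function __construct() {
        // Register the custom /user/... rewrite rule.
        add_action( 'init', array( $this, 'manage_user_routes' ) );

        // Whitelist the control_action query variable.
        add_filter( 'query_vars', array( $this, 'manage_user_routes_query_vars' ) );
    }

    public function manage_user_routes() {
        add_rewrite_rule( '^user/([^/]+)/?', 'index.php?control_action=$matches[1]', 'top' );
    }

    public function manage_user_routes_query_vars( $query_vars ) {
        $query_vars[] = 'control_action';
        return $query_vars;
    }
}

new WPWA_User_Router();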
So, let's implement the function for flushing rewrite rules on plugin activation: public function flush_application_rewrite_rules() {flush_rewrite_rules();} As usual, we need to update the constructor to include the following action to call the flush_application_rewrite_rules function: register_activation_hook( __FILE__, array( $this,'flush_application_rewrite_rules' ) ); Now, go to the admin panel, deactivate the plugin, and activate the plugin again. Then, go to the URL http://www.example.com/user/login and check whether it works. Unfortunately, you will still get the 404 error for the request. You might be wondering what went wrong. Let's go back and think about the process in order to understand the issue. We flushed the rules on plugin activation. So, the new rules should persist successfully. However, we will define the rules on the init action, which is only executed after the plugin is activated. Therefore, new rules will not be available at the time of flushing. Consider the updated version of the flush_application_rewrite_rules function for a quick fix to our problem: public function flush_application_rewrite_rules() {$this->manage_user_routes();flush_rewrite_rules();} We call the manage_user_routes function on plugin activation, followed by the call to flush_rewrite_rules. So, the new rules are generated before flushing is executed. Now, follow the previous process once again; you won't get a 404 page since all the rules have taken effect. You can get 404 errors due to the modification in rewriting rules and not flushing it properly. In such situations, go to the Permalinks section on the Settings page and click on the Save Changes button to flush the rewrite rules manually. Now, we are ready with our routing rules for user functionalities. It's important to know the existing routing rules of your application. Even though we can have a look at the routing rules from the database, it's difficult to decode the serialized array, as we encountered in the previous section. So, I recommend that you use the free plugin called Rewrite Rules Inspector. You can grab a copy at http://wordpress.org/plugins/rewrite-rules-inspector/. Once installed, this plugin allows you to view all the existing routing rules as well as offers a button to flush the rules, as shown in the following screen: Controlling access to your functions We have a custom router, which handles the URLs of the user section of our application. Next, we need a controller to handle the requests and generate the template for the user. This works similar to the controllers in the MVC pattern. Even though we have changed the default routing, WordPress will look for an existing template to be sent back to the user. Therefore, we need to intercept this process and create our own templates. WordPress offers an action hook called template_redirect for intercepting requests. So, let's implement our frontend controller based on template_redirect. First, we need to update the constructor with the template_redirect action, as shown in the following code: add_action( 'template_redirect', array( $this, 'front_controller' ) ); Now, let's take a look at the implementation of the front_controller function using the following code: public function front_controller() {global $wp_query;$control_action = isset ( $wp_query->query_vars['control_action'] ) ? 
$wp_query->query_vars['control_action'] : ''; ;switch ( $control_action ) {case 'register':do_action( 'wpwa_register_user' );break;}} We will be handling custom routes based on the value of the control_action query variable assigned in the previous section. The value of this variable can be grabbed through the global query_vars array of the $wp_query object. Then, we can use a simple switch statement to handle the controlling based on the action. The first action to consider will be to register as we are in the registration process. Once the control_action query variable is matched with registration, we will call a handler function using do_action. You might be confused why we use do_action in this scenario. So, let's consider the same implementation in a normal PHP application, where we don't have the do_action hook: switch ( $control_action ) {case 'register':$this->register_user();break;} This is the typical scenario where we call a function within the class or in an external class to implement the registration. In the previous code, we called a function within the class, but with the do_action hook instead of the usual function call. The advantages of using the do_action function WordPress action hooks define specific points in the execution process, where we can develop custom functions to modify existing behavior. In this scenario, we are calling the wpwa_register_user function within the class using do_action. Unlike websites or blogs, web applications need to be extendable with future requirements. Think of a situation where we only allow Gmail addresses for user registration. This Gmail validation is not implemented in the original code. Therefore, we need to change the existing code to implement the necessary validations. Changing a working component is considered bad practice in application development. Let's see why it's considered as a bad practice by looking at the definition of the open/closed principle on Wikipedia. "Open/closed principle states "software entities (classes, modules, functions, and so on) should be open for extension, but closed for modification"; that is, such an entity can allow its behavior to be modified without altering its source code. This is especially valuable in a production environment, where changes to the source code may necessitate code reviews, unit tests, and other such procedures to qualify it for use in a product: the code obeying the principle doesn't change when it is extended, and therefore, needs no such effort." WordPress action hooks come to our rescue in this scenario. We can define an action for registration using the add_action function, as shown in the following code: add_action( 'wpwa_register_user', array( $this, 'register_user' ) ); Now, you can implement this action multiple times using different functions. In this scenario, register_user will be our primary registration handler. For Gmail validation, we can define another function using the following code: add_action( 'wpwa_register_user', array( $this, 'validate_gmail_registration') ); Inside this function, we can make the necessary validations, as shown in the following code: public function validate_user(){// Code to validate user// remove registration function if validation failsremove_action( 'wpwa_register_user', array( $this,'register_user' ) );} Now, the validate_user function is executed before the primary function. So, we can remove the primary registration function if something goes wrong in validation. 
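To make the Gmail-only scenario concrete, here is a minimal sketch of what the validation callback registered on wpwa_register_user could look like; the article only outlines its body, so the actual check below is an assumption for illustration:

// Hypothetical sketch: cancel the primary registration handler for non-Gmail addresses.
public function validate_gmail_registration() {
    $user_email = isset( $_POST['wpwa_email'] ) ? trim( $_POST['wpwa_email'] ) : '';

    if ( ! preg_match( '/@gmail\.com$/i', $user_email ) ) {
        // Validation failed, so remove the primary registration function.
        remove_action( 'wpwa_register_user', array( $this, 'register_user' ) );
        // Optionally, store an error message here to display back on the form.
    }
}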
With this technique, we have the capability of adding new functionalities as well as changing existing functionalities without affecting the already written code. We have implemented a simple controller, which can be quite effective in developing web application functionalities. In the following sections, we will continue the process of implementing registration on the frontend with custom templates. Creating custom templates Themes provide a default set of templates to cater to the existing behavior of WordPress. Here, we are trying to implement a custom template system to suit web applications. So, our first option is to include the template files directly inside the theme. Personally, I don't like this option due to two possible reasons: Whenever we switch the theme, we have to move the custom template files to a new theme. So, our templates become theme dependent. In general, all existing templates are related to CMS functionality. Mixing custom templates with the existing ones becomes hard to manage. As a solution to these concerns, we will implement the custom templates inside the plugin. First, create a folder inside the current plugin folder and name it as templates to get things started. Designing the registration form We need to design a custom form for frontend registration containing the default header and footer. The whole content area will be used for the registration and the default sidebar will be omitted for this screen. Create a PHP file called register-template.php inside the templates folder with the following code: <?php get_header(); ?><div id="wpwa_custom_panel"><?phpif( isset($errors) && count( $errors ) > 0) {foreach( $errors as $error ){echo '<p class="wpwa_frm_error">'. $error .'</p>';}}?>HTML Code for Form</div><?php get_footer(); ?> We can include the default header and footer using the get_header and get_footer functions, respectively. After the header, we will include a display area for the error messages generated in registration. Then, we have the HTML form, as shown in the following code: <form id='registration-form' method='post' action='<?php echoget_site_url() . '/user/register'; ?>'><ul><li><label class='wpwa_frm_label'><?php echo__('Username','wpwa'); ?></label><input class='wpwa_frm_field' type='text'id='wpwa_user' name='wpwa_user' value='' /></li><li><label class='wpwa_frm_label'><?php echo __('Email','wpwa'); ?></label><input class='wpwa_frm_field' type='text'id='wpwa_email' name='wpwa_email' value='' /></li><li><label class='wpwa_frm_label'><?php echo __('UserType','wpwa'); ?></label><select class='wpwa_frm_field' name='wpwa_user_type'><option <?php echo __('Follower','wpwa');?></option><option <?php echo __('Developer','wpwa');?></option><option <?php echo __('Member','wpwa');?></option></select></li><li><label class='wpwa_frm_label' for=''>&nbsp;</label><input type='submit' value='<?php echo__('Register','wpwa'); ?>' /></li></ul></form> As you can see, the form action is set to a custom route called user/register to be handled through the front controller. Also, we have added an extra field called user type to choose the preferred user type on registration. You might have noticed that we used wpwa as the prefix for form element names, element IDs, as well as CSS classes. Even though it's not a must to use a prefix, it can be highly effective when working with multiple third-party plugins. A unique plugin-specific prefix avoids or limits conflicts with other plugins and themes. 
We will get a screen similar to the following one, once we access the /user/register link in the browser: Once the form is submitted, we have to create the user based on the application requirements. Planning the registration process In this application, we have opted to build a complex registration process in order to understand the typical requirements of web applications. So, it's better to plan it upfront before moving into the implementation. Let's build a list of requirements for registration: The user should be able to register as any of the given user roles The activation code needs to be generated and sent to the user The default notification on successful registration needs to be customized to include the activation link Users should activate their account by clicking the link So, let's begin the task of registering users by displaying the registration form as given in the following code: public function register_user() {if ( !is_user_logged_in() ) {include dirname(__FILE__) . '/templates/registertemplate.php';exit;}} Once user requests /user/register, our controller will call the register_user function using the do_action call. In the initial request, we need to check whether a user is already logged in using the is_user_logged_in function. If not, we can directly include the registration template located inside the templates folder to display the registration form. WordPress templates can be included using the get_template_part function. However, it doesn't work like a typical template library, as we cannot pass data to the template. In this technique, we are including the template directly inside the function. Therefore, we have access to the data inside this function. Handling registration form submission Once the user fills the data and clicks the submit button, we have to execute quite a few tasks in order to register a user in WordPress database. Let's figure out the main tasks for registering a user: Validating form data Registering the user details Creating and saving activation code Sending e-mail notifications with an activate link In the registration form, we specified the action as /user/register, and hence the same register_user function will be used to handle form submission. Validating user data is one of the main tasks in form submission handling. So, let's take a look at the register_user function with the updated code: public function register_user() {if ( $_POST ) {$errors = array();$user_login = ( isset ( $_POST['wpwa_user'] ) ?$_POST['wpwa_user'] : '' );$user_email = ( isset ( $_POST['wpwa_email'] ) ?$_POST['wpwa_email'] : '' );$user_type = ( isset ( $_POST['wpwa_user_type'] ) ?$_POST['wpwa_user_type'] : '' );// Validating user dataif ( empty( $user_login ) )array_push($errors, __('Please enter a username.','wpwa') );if ( empty( $user_email ) )array_push( $errors, __('Please enter e-mail.','wpwa') );if ( empty( $user_type ) )array_push( $errors, __('Please enter user type.','wpwa') );}// Including the template} The following steps are to be performed: First, we will check whether the request is made as POST. Then, we get the form data from the POST array. Finally, we will check the passed values for empty conditions and push the error messages to the $errors variable created at the beginning of this function. 
Now, we can move into more advanced validations inside the register_user function, as shown in the following code:

$sanitized_user_login = sanitize_user( $user_login );

if ( !empty( $user_email ) && !is_email( $user_email ) )
    array_push( $errors, __( 'Please enter valid email.', 'wpwa' ) );
elseif ( email_exists( $user_email ) )
    array_push( $errors, __( 'User with this email already registered.', 'wpwa' ) );

if ( empty( $sanitized_user_login ) || !validate_username( $user_login ) )
    array_push( $errors, __( 'Invalid username.', 'wpwa' ) );
elseif ( username_exists( $sanitized_user_login ) )
    array_push( $errors, __( 'Username already exists.', 'wpwa' ) );

The steps to perform are as follows:

1. First, we will use the existing sanitize_user function to remove unsafe characters from the username.
2. Then, we will validate the e-mail to check whether it's valid and whether it already exists in the system. The email_exists and username_exists functions check for the existence of an e-mail and username in the database.

Once all the validations are completed, the errors array will be either empty or filled with error messages. In this scenario, we chose to go with the most essential validations for the registration form. You can add more advanced validation in your implementations in order to minimize potential security threats.

In case we get validation errors in the form, we can directly print the contents of the errors array on top of the form, as it's visible to the registration template. Here is a preview of our registration screen with generated error messages:

Also, it's important to repopulate the form values once errors are generated. We are using the same function for loading the registration form and handling form submission. Therefore, we can directly access the POST variables inside the template to echo the values, as shown in the updated registration form:

<form id='registration-form' method='post' action='<?php echo get_site_url() . '/user/register'; ?>'>
  <ul>
    <li>
      <label class='wpwa_frm_label'><?php echo __( 'Username', 'wpwa' ); ?></label>
      <input class='wpwa_frm_field' type='text' id='wpwa_user' name='wpwa_user' value='<?php echo isset( $user_login ) ? $user_login : ''; ?>' />
    </li>
    <li>
      <label class='wpwa_frm_label'><?php echo __( 'Email', 'wpwa' ); ?></label>
      <input class='wpwa_frm_field' type='text' id='wpwa_email' name='wpwa_email' value='<?php echo isset( $user_email ) ? $user_email : ''; ?>' />
    </li>
    <li>
      <label class='wpwa_frm_label'><?php echo __( 'User Type', 'wpwa' ); ?></label>
      <select class='wpwa_frm_field' name='wpwa_user_type'>
        <option <?php echo ( isset( $user_type ) && $user_type == 'follower' ) ? 'selected' : ''; ?> value='follower'><?php echo __( 'Follower', 'wpwa' ); ?></option>
        <option <?php echo ( isset( $user_type ) && $user_type == 'developer' ) ? 'selected' : ''; ?> value='developer'><?php echo __( 'Developer', 'wpwa' ); ?></option>
        <option <?php echo ( isset( $user_type ) && $user_type == 'member' ) ? 'selected' : ''; ?> value='member'><?php echo __( 'Member', 'wpwa' ); ?></option>
      </select>
    </li>
    <li>
      <label class='wpwa_frm_label' for=''>&nbsp;</label>
      <input type='submit' value='<?php echo __( 'Register', 'wpwa' ); ?>' />
    </li>
  </ul>
</form>

Exploring the registration success path

Now, let's look at the success path, where we don't have any errors, by looking at the remaining sections of the register_user function:

if ( empty( $errors ) ) {
    $user_pass = wp_generate_password();
    $user_id   = wp_insert_user( array(
        'user_login' => $sanitized_user_login,
        'user_email' => $user_email,
        'role'       => $user_type,
        'user_pass'  => $user_pass
    ) );

    if ( !$user_id ) {
        array_push( $errors, __( 'Registration failed.', 'wpwa' ) );
    } else {
        $activation_code = $this->random_string();
        update_user_meta( $user_id, 'wpwa_activation_code', $activation_code );
        update_user_meta( $user_id, 'wpwa_activation_status', 'inactive' );
        wp_new_user_notification( $user_id, $user_pass, $activation_code );
        $success_message = __( 'Registration completed successfully. Please check your email for activation link.', 'wpwa' );
    }

    if ( !is_user_logged_in() ) {
        include dirname( __FILE__ ) . '/templates/login-template.php';
        exit;
    }
}

We can generate the default password using the wp_generate_password function. Then, we can use the wp_insert_user function with the respective parameters generated from the form to save the user in the database.

The wp_insert_user function will be used to update the current user or add new users to the application. Make sure you are not logged in while executing this function; otherwise, your admin will suddenly change into another user type after using this function.

If the system fails to save the user, we can create a registration failed message and assign it to the $errors variable as we did earlier. Once the registration is successful, we will generate a random string as the activation code. You can use any function here to generate a random string. Then, we update the user with the activation code and set the activation status as inactive for the moment.

Finally, we will use the wp_new_user_notification function to send an e-mail containing the registration details. By default, this function takes the user ID and password and sends the login details. In this scenario, we have a problem, as we need to send an activation link with the e-mail. This is a pluggable function and hence we can create our own implementation of this function to override the default behavior. Since this is a built-in WordPress function, we cannot declare it inside our plugin class. So, we will implement it as a standalone function inside our main plugin file. The full source code for this function will not be included here as it is quite extensive. I'll explain the modified code from the original function and you can have a look at the source code for the complete code:

$activate_link = site_url() . "/user/activate/?wpwa_activation_code=$activate_code";

$message  = __( 'Hi there,' ) . "\r\n\r\n";
$message .= sprintf( __( 'Welcome to %s! Please activate your account using the link:', 'wpwa' ), get_option( 'blogname' ) ) . "\r\n\r\n";
$message .= sprintf( __( '<a href="%s">%s</a>', 'wpwa' ), $activate_link, $activate_link ) . "\r\n";
$message .= sprintf( __( 'Username: %s', 'wpwa' ), $user_login ) . "\r\n";
$message .= sprintf( __( 'Password: %s', 'wpwa' ), $plaintext_pass ) . "\r\n\r\n";

We create a custom activation link using the third parameter passed to this function. Then, we modify the existing message to include the activation link. That's about all we need to change from the original function.
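Before moving on, a quick note on the random_string helper used in the success path above: its implementation isn't listed in this excerpt, and, as mentioned, any function that returns a random string will do. A minimal sketch of one possible implementation is shown below; it assumes the helper lives in the same plugin class and simply wraps WordPress' built-in wp_generate_password function, so the book's own version may differ:

/**
 * Generate a URL-safe activation code.
 * Any sufficiently random string works here; this is just one option.
 */
private function random_string( $length = 16 ) {
    // Passing false for special characters keeps the code alphanumeric,
    // which is convenient for use inside a query string.
    return wp_generate_password( $length, false, false );
}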
Finally, we set the success message to be passed into the login screen. Now, let's move back to the register_user function. Once the notification is sent, the registration process is completed and the user will be redirected to the login screen. Once the user has the e-mail in their inbox, they can use the activation link to activate the account.

Automatically log in the user after registration

In general, most web applications use e-mail confirmation before allowing users to log in to the system. However, there can be certain scenarios where we need to automatically authenticate the user into the application. A social network sign-in is a great example of such a scenario. When using social network logins, the system checks whether the user is already registered. If not, the application automatically registers the user and authenticates them. We can easily modify our code to implement an automatic login after registration. Consider the following code:

if ( !is_user_logged_in() ) {
    wp_set_auth_cookie( $user_id, false, is_ssl() );
    include dirname( __FILE__ ) . '/templates/login-template.php';
    exit;
}

The registration code is updated to use the wp_set_auth_cookie function. Once it's used, the user authentication cookie will be created and hence the user will be considered as automatically signed in. Then, we will redirect to the login page as usual. Since the user is already logged in using the authentication cookie, they will be redirected back to the home page with access to the backend. This is an easy way of automatically authenticating users into WordPress.

Activating system users

Once the user clicks on the activation link, redirection will be made to the /user/activate URL of the application. So, we need to modify our controller with a new case for activation, as shown in the following code:

case 'activate':
    do_action( 'wpwa_activate_user' );

As usual, the definition of add_action goes in the constructor, as shown in the following code:

add_action( 'wpwa_activate_user', array( $this, 'activate_user' ) );

Next, we can have a look at the actual implementation of the activate_user function:

public function activate_user() {
    $activation_code = isset( $_GET['wpwa_activation_code'] ) ? $_GET['wpwa_activation_code'] : '';
    $message = '';

    // Get activation record for the user
    $user_query = new WP_User_Query( array(
        'meta_key'   => 'wpwa_activation_code',
        'meta_value' => $activation_code
    ) );
    $users = $user_query->get_results();

    // Check and update activation status
    if ( !empty( $users ) ) {
        $user_id = $users[0]->ID;
        update_user_meta( $user_id, 'wpwa_activation_status', 'active' );
        $message = __( 'Account activated successfully.', 'wpwa' );
    } else {
        $message = __( 'Invalid Activation Code', 'wpwa' );
    }

    include dirname( __FILE__ ) . '/templates/info-template.php';
    exit;
}

We will get the activation code from the link and query the database to find a matching entry. If no record is found, we set the message as activation failed; otherwise, we update the activation status of the matching user to activate the account. Upon activation, the user will be given a message using the info-template.php template, which consists of a very basic template like the following one:

<?php get_header(); ?>
<div id='wpwa_info_message'><?php echo $message; ?></div>
<?php get_footer(); ?>

Once the user visits the activation page on the /user/activate URL, information will be given to the user, as illustrated in the following screen:

We successfully created and activated a new user.
The final task of this process is to authenticate and log the user into the system. Let's see how we can create the login functionality.

Creating a login form in the frontend

The frontend login can be found on many WordPress websites, including small blogs. Usually, we place the login form in the sidebar of the website. In web applications, user interfaces are complex and different compared to normal websites. Hence, we will implement a full-page login screen as we did with registration. First, we need to update our controller with another case for login, as shown in the following code:

switch ( $control_action ) {
    // Other cases
    case 'login':
        do_action( 'wpwa_login_user' );
        break;
}

This action will be executed once the user enters /user/login in the browser URL to display the login form. The login form design will be located in the templates directory as a separate template called login-template.php. Here is the implementation of the login form design with the necessary error messages:

<?php get_header(); ?>
<div id='wpwa_custom_panel'>
    <?php
    if ( isset( $errors ) && count( $errors ) > 0 ) {
        foreach ( $errors as $error ) {
            echo '<p class="wpwa_frm_error">' . $error . '</p>';
        }
    }

    if ( isset( $success_message ) && $success_message != "" ) {
        echo '<p class="wpwa_frm_success">' . $success_message . '</p>';
    }
    ?>
    <form method='post' action='<?php echo site_url(); ?>/user/login' id='wpwa_login_form' name='wpwa_login_form'>
        <ul>
            <li>
                <label class='wpwa_frm_label' for='username'><?php echo __( 'Username', 'wpwa' ); ?></label>
                <input class='wpwa_frm_field' type='text' name='wpwa_username' value='<?php echo isset( $username ) ? $username : ''; ?>' />
            </li>
            <li>
                <label class='wpwa_frm_label' for='password'><?php echo __( 'Password', 'wpwa' ); ?></label>
                <input class='wpwa_frm_field' type='password' name='wpwa_password' value="" />
            </li>
            <li>
                <label class='wpwa_frm_label'>&nbsp;</label>
                <input type='submit' name='submit' value='<?php echo __( 'Login', 'wpwa' ); ?>' />
            </li>
        </ul>
    </form>
</div>
<?php get_footer(); ?>

Similar to the registration template, we have a header, error messages, the HTML form, and the footer in this template. We have to point the action of this form to /user/login. The remaining code is self-explanatory, and hence I am not going to explain it in detail. You can take a look at the preview of our login screen in the following screenshot:

Next, we need to implement the form submission handler for the login functionality. Before this, we need to update our plugin constructor with the following code to define another custom action for login:

add_action( 'wpwa_login_user', array( $this, 'login_user' ) );

Once the user requests /user/login from the browser, the controller will execute the do_action( 'wpwa_login_user' ) call to load the login form in the frontend.

Displaying the login form

We will use the same function to handle both template inclusion and form submission for login, as we did earlier with registration. So, let's look at the initial code of the login_user function for including the template:

public function login_user() {
    if ( !is_user_logged_in() ) {
        include dirname( __FILE__ ) . '/templates/login-template.php';
    } else {
        wp_redirect( home_url() );
    }
    exit;
}

First, we need to check whether the user has already logged in to the system. Based on the result, we will redirect the user to the login template or the home page for the moment. Once the whole system is implemented, we will be redirecting logged-in users to their own admin area. Now, we can take a look at the implementation of the login to finalize our process.
Let's take a look at the form submission handling part of the login_user function:

if ( $_POST ) {
    $errors = array();

    $username = isset( $_POST['wpwa_username'] ) ? $_POST['wpwa_username'] : '';
    $password = isset( $_POST['wpwa_password'] ) ? $_POST['wpwa_password'] : '';

    if ( empty( $username ) )
        array_push( $errors, __( 'Please enter a username.', 'wpwa' ) );

    if ( empty( $password ) )
        array_push( $errors, __( 'Please enter password.', 'wpwa' ) );

    if ( count( $errors ) > 0 ) {
        include dirname( __FILE__ ) . '/templates/login-template.php';
        exit;
    }

    $credentials = array();
    $credentials['user_login']    = $username;
    $credentials['user_login']    = sanitize_user( $credentials['user_login'] );
    $credentials['user_password'] = $password;
    $credentials['remember']      = false;

    // Rest of the code
}

As usual, we need to validate the post data and generate the necessary errors to be shown in the frontend. Once the validations are successfully completed, we assign all the form data to an array after sanitizing the values. The username and password are contained in the credentials array with the user_login and user_password keys. The remember key defines whether to remember the password or not. Since we don't have a remember checkbox in our form, it will be set to false.

Next, we need to execute the WordPress login function in order to log the user into the system, as shown in the following code:

$user = wp_signon( $credentials, false );

if ( is_wp_error( $user ) )
    array_push( $errors, $user->get_error_message() );
else
    wp_redirect( home_url() );

WordPress handles user authentication through the wp_signon function. We have to pass all the credentials generated in the previous code with an additional second parameter of true or false to define whether to use a secure cookie. We can set it to false for this example. The wp_signon function will return an object of the WP_User or the WP_Error class based on the result.

Internally, this function sets an authentication cookie. Users will not be logged in if it is not set. If you are using any other process for authenticating users, you have to set this authentication cookie manually.

Once a user is successfully authenticated, a redirection will be made to the home page of the site. Now, we should have the ability to authenticate users from the login form in the frontend.

Checking whether we implemented the process properly

Take a moment to think carefully about our requirements and try to figure out what we have missed. Actually, we didn't check the activation status on login. Therefore, any user will be able to log in to the system without activating their account. Now, let's fix this issue by intercepting the authentication process with another built-in filter called authenticate, as shown in the following code:

public function authenticate_user( $user, $username, $password ) {
    if ( !empty( $username ) && !is_wp_error( $user ) ) {
        $user = get_user_by( 'login', $username );

        if ( !in_array( 'administrator', (array) $user->roles ) ) {
            $active_status = '';
            $active_status = get_user_meta( $user->data->ID, 'wpwa_activation_status', true );

            if ( 'inactive' == $active_status ) {
                $user = new WP_Error( 'denied', __( '<strong>ERROR</strong>: Please activate your account.', 'wpwa' ) );
            }
        }
    }

    return $user;
}

This function will be called on the authenticate filter, with the user, username, and password variables passed as default parameters. All the user types of our application need to be activated, except for the administrator accounts. Therefore, we check the roles of the authenticated user to figure out whether they are admin.
Then, we can check the activation status of other user types before authenticating. If an authenticated user is in inactive status, we can return the WP_Error object and prevent authentication from being successful.

Last but not least, we have to register the authenticate filter in the controller to make it work, as shown in the following code:

add_filter( 'authenticate', array( $this, 'authenticate_user' ), 30, 3 );

This filter is also executed when the user logs out of the application. Therefore, we need the following validation to prevent any errors in the logout process:

if ( !empty( $username ) && !is_wp_error( $user ) )

Now, we have a simple and useful user registration and login system, ready to be implemented in the frontend of web applications. Make sure to check login- and registration-related plugins from the official repository to gain knowledge of complex requirements in real-world scenarios.

Time to practice

In this article, we implemented simple registration and login functionality from the frontend. Before we have a complete user creation and authentication system, there are plenty of other tasks to be completed. So, I would recommend you try out the following tasks in order to be comfortable with implementing such functionalities for web applications:

- Create frontend functionality for the lost password
- Block the default WordPress login page and redirect it to our custom page
- Include extra fields in the registration form

Make sure to try out these exercises and validate your answers against the implementations provided on the website for this book.

Summary

In this article, we looked at how we can customize the built-in registration and login process in the frontend to cater to advanced requirements in web application development. By now, you should be capable of creating custom routers for common modules, implementing custom controllers with custom template systems, and customizing the existing user registration and authentication process.

Resources for Article: Further resources on this subject: Web Application Testing [Article] Creating Blog Content in WordPress [Article] WordPress 3: Designing your Blog [Article]

Getting Started with NW.js

Packt
21 May 2015
19 min read
In this article by Alessandro Benoit, author of the book NW.js Essentials, we will learn that until a while ago, developing a desktop application that was compatible with the most common operating systems required an enormous amount of expertise, different programming languages, and different logic for each platform.

(For more resources related to this topic, see here.)

Yet, for a while now, the evolution of web technologies has brought to our browsers many web applications that have nothing to envy in their desktop alternatives. Just think of Google apps such as Gmail and Calendar, which, for many, have definitely replaced the need for a local mail client. All of this has been made possible thanks to the amazing potential of the latest implementations of the Browser Web API combined with the incredible flexibility and speed of the latest server technologies.

Although we live in a world increasingly interconnected and dependent on the Internet, there is still a need for developing desktop applications, for a number of reasons:

- To overcome the lack of vertical applications based on web technologies
- To implement software solutions where data security is essential and cannot be compromised by exposing data on the Internet
- To make up for any lack of connectivity, even temporary
- Simply because operating systems are still locally installed

Once it's established that we cannot completely get rid of desktop applications and that their implementation on different platforms requires an often prohibitive learning curve, it comes naturally to ask: why not make desktop applications out of the very same technologies used in web development? The answer, or at least one of the answers, is NW.js!

NW.js doesn't need any introduction. With more than 20,000 stars on GitHub (in the top four hottest C++ projects of the repository-hosting service), NW.js is definitely one of the most promising projects to create desktop applications with web technologies. Paraphrasing the description on GitHub, NW.js is a web app runtime that allows the browser DOM to access Node.js modules directly.

Node.js is responsible for hardware and operating system interaction, while the browser serves the graphic interface and implements all the functionalities typical of web applications. Clearly, the use of the two technologies may overlap; for example, if we were to make an asynchronous call to the API of an online service, we could use either a Node.js HTTP client or an XMLHttpRequest Ajax call inside the browser.

Without going into technical details, in order to create desktop applications with NW.js, all you need is a decent understanding of Node.js and some expertise in developing HTML5 web apps. In this article, we are going to dissect the topic, dwelling on these points:

- A brief technical digression on how NW.js works
- An analysis of the pros and cons in order to determine use scenarios
- Downloading and installing NW.js
- Development tools
- Making your first, simple "Hello World" application

Important notes about NW.js (also known as Node-Webkit) and io.js

From its birth until January 2015, NW.js was known as Node-Webkit. Moreover, with Node.js development slowing down and lagging behind V8 JavaScript engine updates, from version 0.12.0, NW.js is not based on Node.js but on io.js, an npm-compatible platform originally based on Node.js. For the sake of simplicity, we will keep referring to Node.js even when talking about io.js, as long as this does not affect a proper comprehension of the subject.
NW.js under the hood

As we stated in the introduction, NW.js, made by Roger Wang of Intel's Open Source Technology Center (Shanghai office) in 2011, is a web app runtime based on Node.js and the Chromium open source browser project. To understand how it works, we must first analyze its two components:

- Node.js is an efficient JavaScript runtime written in C++ and based on the V8 JavaScript engine developed by Google. Residing in the operating system's application layer, Node.js can access hardware, filesystems, and networking functionalities, enabling its use in a wide range of fields, from the implementation of web servers to the creation of control software for robots. (As we stated in the introduction, NW.js has replaced Node.js with io.js from version 0.12.0.)
- WebKit is a layout engine that allows the rendering of web pages starting from the DOM, a tree of objects representing the web page. NW.js is actually not directly based on WebKit but on Blink, a fork of WebKit developed specifically for the Chromium open source browser project and based on the V8 JavaScript engine, as is the case with Node.js.

Since the browser, for security reasons, cannot access the application layer, and since Node.js lacks a graphical interface, Roger Wang had the insight of combining the two technologies by creating NW.js. The following is a simple diagram that shows how Node.js has been combined with WebKit in order to give NW.js applications access to both the GUI and the operating system:

In order to integrate the two systems, which, despite speaking the same language, are very different, a couple of tricks have been adopted. In the first place, since they are both event-driven (following a logic of action/reaction rather than a stream of operations), the event processing has been unified. Secondly, the Node context was injected into WebKit so that it can access it. The amazing thing about it is that you'll be able to program all of your applications' logic in JavaScript with no concerns about where Node.js ends and WebKit begins.

Today, NW.js has reached version 0.12.0 and, although still young, is one of the most promising web app runtimes to develop desktop applications adopting web technologies.

Features and drawbacks of NW.js

Let's check some of the features that characterize NW.js:

- NW.js allows us to realize modern desktop applications using HTML5, CSS3, JS, WebGL, and the full potential of Node.js, including the use of third-party modules
- The Native UI API allows you to implement native lookalike applications with the support of menus, clipboards, tray icons, and file binding
- Since Node.js and WebKit run within the same thread, NW.js has excellent performance
- With NW.js, it is incredibly easy to port existing web applications to desktop applications
- Thanks to the CLI and the presence of third-party tools, it's really easy to debug, package, and deploy applications on Microsoft Windows, Mac OS, and Linux

However, all that glitters is not gold. There are some cons to consider when developing an application with NW.js:

- Size of the application: Since a copy of NW.js (70-90 MB) must be distributed along with each application, the size of the application makes it quite expensive compared to native applications. Anyway, if you're concerned about download times, compressing NW.js for distribution will save you about half the size.
- Difficulties in distributing your application through the Mac App Store: This will not be discussed in this article (just do a search on Google), but even if the procedure is rather complex, you can distribute your NW.js application through the Mac App Store. At the moment, it is not possible to deploy an NW.js application on the Windows Store due to the different architecture of .appx applications.
- Missing support for iOS or Android: Unlike other SDKs and libraries, at the moment, it is not possible to deploy an NW.js application on iOS or Android, and it does not seem to be possible to do so in the near future. However, the portability of the HTML, JavaScript, and CSS code, which can be distributed on other platforms with tools such as PhoneGap or TideSDK, should be considered. Unfortunately, this is not true for all of the features implemented using Node.js.
- Stability: Finally, the platform is still quite young and not bug-free.

NW.js – usage scenarios

The flexibility and good performance of NW.js allow its use in countless scenarios, but, for convenience, I'm going to report only a few notable ones:

- Development tools
- Implementation of a GUI around existing CLI tools
- Multimedia applications
- Web service clients
- Video games

The choice of development platform for a new project clearly depends only on the developer; for the overall aim of confronting facts, it may be useful to consider some specific scenarios where the use of NW.js might not be recommended:

- When developing for a specific platform, where graphic coherence is essential and, perhaps, it is necessary to distribute the application through a store
- If the performance factor limits the use of the preceding technologies
- If the application makes massive use of the features provided by the application layer via Node.js and it has to be distributed to mobile devices

Popular NW.js applications

After summarizing the pros and cons of NW.js, let's not forget the real strength of the platform: the many applications built on top of NW.js that have already been distributed. We list a few that are worth noting:

- Wunderlist for Windows 7: This is a to-do list / schedule management app used by millions
- Cellist: This is an HTTP debugging proxy available on the Mac App Store
- Game Dev Tycoon: This is one of the first NW.js games; it puts you in the shoes of a 1980s game developer
- Intel® XDK: This is an HTML5 cross-platform solution that enables developers to write web and hybrid apps

Downloading and installing NW.js

Installing NW.js is pretty simple, but there are many ways to do it. One of the easiest ways is probably to run npm install nw from your terminal, but for educational purposes, we're going to manually download and install it in order to properly understand how it works.

You can find all the download links on the project website at http://nwjs.io/ or in the Downloads section on the GitHub project page at https://github.com/nwjs/nw.js/; from here, download the package that fits your operating system. For example, as I'm writing this article, Node-Webkit is at version 0.12.0, and my operating system is Mac OS X Yosemite 10.10 running on a 64-bit MacBook Pro; so, I'm going to download the nwjs-v0.12.0-osx-x64.zip file.

Packages for Mac and Windows are zipped, while those for Linux are in the tar.gz format. Decompress the files and proceed, depending on your operating system, as follows.
Installing NW.js on Mac OS X

Inside the archive, we're going to find three files:

- Credits.html: This contains credits and licenses of all the dependencies of NW.js
- nwjs.app: This is the actual NW.js executable
- nwjc: This is a CLI tool used to compile your source code in order to protect it

Before v0.12.0, the filename of nwjc was nwsnapshot.

Currently, the only file that interests us is nwjs.app (the extension might not be displayed depending on the OS configuration). All we have to do is copy this file into the /Applications folder, your main applications folder.

If you'd rather install NW.js using Homebrew Cask, you can simply enter the following command in your terminal:

$ brew cask install nw

If you are using Homebrew Cask to install NW.js, keep in mind that the Cask repository might not be updated and that the nwjs.app file will be copied into ~/Applications, while a symlink will be created in the /Applications folder.

Installing NW.js on Microsoft Windows

Inside the Microsoft Windows NW.js package, we will find the following files:

- credits.html: This contains the credits and licenses of all NW.js dependencies
- d3dcompiler_47.dll: This is the Direct3D library
- ffmpegsumo.dll: This is a media library to be included in order to use the <video> and <audio> tags
- icudtl.dat: This is an important network library
- libEGL.dll: This is for WebGL and GPU acceleration
- libGLESv2.dll: This is for WebGL and GPU acceleration
- locales/: This is the languages folder
- nw.exe: This is the actual NW.js executable
- nw.pak: This is an important JS library
- pdf.dll: This library is used by the web engine for printing
- nwjc.exe: This is a CLI tool to compile your source code in order to protect it

Some of the files in the folder will be omitted during the final distribution of our application, but for development purposes, we are simply going to copy the whole content of the folder to C:/Tools/nwjs.

Installing NW.js on Linux

On Linux, the procedure can be more complex depending on the distribution you use. First, copy the downloaded archive into your home folder if you have not already done so, and then open the terminal and type the following command to unpack the archive (change the version according to the one downloaded):

$ gzip -dc nwjs-v0.12.0-linux-x64.tar.gz | tar xf -

Now, rename the newly created folder to nwjs with the following command:

$ mv ~/nwjs-v0.12.0-linux-x64 ~/nwjs

Inside the nwjs folder, we will find the following files:

- credits.html: This contains the credits and licenses of all the dependencies of NW.js
- icudtl.dat: This is an important network library
- libffmpegsumo.so: This is a media library to be included in order to use the <video> and <audio> tags
- locales/: This is a languages folder
- nw: This is the actual NW.js executable
- nw.pak: This is an important JS library
- nwjc: This is a CLI tool to compile your source code in order to protect it

Open the folder inside the terminal and try to run NW.js by typing the following:

$ cd nwjs
$ ./nw

If you get the following error, you are probably using a version of Ubuntu later than 13.04, Fedora later than 18, or another Linux distribution that uses libudev.so.1 instead of libudev.so.0; otherwise, you're good to go to the next step:

error while loading shared libraries: libudev.so.0: cannot open shared object file: No such file or directory

Until NW.js is updated to support libudev.so.1, there are several solutions to solve the problem.
For me, the easiest solution is to type the following terminal command inside the directory containing nw:

$ sed -i 's/udev.so.0/udev.so.1/g' nw

This will replace the string related to libudev, within the application code, with the new version. The process may take a while, so wait for the terminal to return the cursor before attempting to enter the following:

$ ./nw

Eventually, the NW.js window should open properly.

Development tools

As you'll make use of third-party Node.js modules, you're going to need npm in order to download and install all the dependencies; so, Node.js (http://nodejs.org/) or io.js (https://iojs.org/) must obviously be installed in your development environment.

I know you cannot wait to write your first application, but before you start, I would like to introduce you to Sublime Text 2. It is a simple but sophisticated IDE, which, thanks to its support for custom build scripts, allows you to run (and debug) NW.js applications from inside the editor itself. If I wasn't convincing and you'd rather keep using your favorite IDE, you can skip to the next section; otherwise, follow these steps to install and configure Sublime Text 2:

1. Download and install Sublime Text 2 for your platform from http://www.sublimetext.com/.
2. Open it and, from the top menu, navigate to Tools | Build System | New Build System.
3. A new edit screen will open; paste the following code depending on your platform:

On Mac OS X:

{
    "cmd": ["nwjs", "--enable-logging", "${project_path:${file_path}}"],
    "working_dir": "${project_path:${file_path}}",
    "path": "/Applications/nwjs.app/Contents/MacOS/"
}

On Microsoft Windows:

{
    "cmd": ["nw.exe", "--enable-logging", "${project_path:${file_path}}"],
    "working_dir": "${project_path:${file_path}}",
    "path": "C:/Tools/nwjs/",
    "shell": true
}

On Linux:

{
    "cmd": ["nw", "--enable-logging", "${project_path:${file_path}}"],
    "working_dir": "${project_path:${file_path}}",
    "path": "/home/userName/nwjs/"
}

4. Type Ctrl + S (Cmd + S on Mac) and save the file as nw-js.sublime-build.

Perfect! Now you are ready to run your applications directly from the IDE.

There are a lot of packages, such as SublimeLinter, LiveReload, and Node.js code completion, available for Sublime Text 2. In order to install them, you have to install Package Control first. Just open https://sublime.wbond.net/installation and follow the instructions.

Writing and running your first "Hello World" app

Finally, we are ready to write our first simple application. We're going to revisit the usual "Hello World" application by making use of a Node.js module for markdown parsing.

"Markdown is a plain text formatting syntax designed so that it can be converted to HTML and many other formats using a tool by the same name."
– Wikipedia

Let's create a Hello World folder and open it in Sublime Text 2 or in your favorite IDE. Now open a new package.json file and type in the following JSON code:

{
    "name": "nw-hello-world",
    "main": "index.html",
    "dependencies": {
        "markdown": "0.5.x"
    }
}

The package.json manifest file is essential for distribution, as it determines many of the window properties and primary information about the application. Moreover, during the development process, you'll be able to declare all of the dependencies. In this specific case, we are going to assign the application name, the main file, and obviously our dependency, the markdown module, written by Dominic Baggott.
If you so wish, you can create the package.json manifest file using the npm init command from the terminal, as you're probably used to already when creating npm packages.

Once you've saved the package.json file, create an index.html file that will be used as the main application file and type in the following code:

<!DOCTYPE html>
<html>
<head>
    <title>Hello World!</title>
</head>
<body>
    <script>
    <!--Here goes your code-->
    </script>
</body>
</html>

As you can see, it's a very common HTML5 boilerplate. Inside the script tag, let's add the following:

var markdown = require("markdown").markdown,
    div = document.createElement("div"),
    content = "#Hello World!\n" +
        "We are using **io.js** " +
        "version *" + process.version + "*";

div.innerHTML = markdown.toHTML(content);
document.body.appendChild(div);

What we do here is require the markdown module and then parse the content variable through it. To keep it as simple as possible, I've been using vanilla JavaScript to output the parsed HTML to the screen. In the highlighted line of code, you may have noticed that we are using process.version, a property that is part of the Node.js context. If you try to open index.html in a browser, you'd get the Reference Error: require is not defined error, as Node.js has not been injected into the WebKit process.

Once you have saved the index.html file, all that is left is to install the dependencies by running the following command from the terminal inside the project folder:

$ npm install

And we are ready to run our first application!

Running NW.js applications on Sublime Text 2

If you opted for Sublime Text 2 and followed the procedure in the development tools section, simply navigate to Project | Save Project As and save the hello-world.sublime-project file inside the project folder. Now, in the top menu, navigate to Tools | Build System and select nw-js. Finally, press Ctrl + B (or Cmd + B on Mac) to run the program. If you have opted for a different IDE, just follow the upcoming steps depending on your operating system.

Running NW.js applications on Microsoft Windows

Open the command prompt and type:

C:\> c:\Tools\nwjs\nw.exe c:\path\to\the\project

On Microsoft Windows, you can also drag the folder containing package.json to nw.exe in order to open it.

Running NW.js applications on Mac OS

Open the terminal and type:

$ /Applications/nwjs.app/Contents/MacOS/nwjs /path/to/the/project/

Or, if running NW.js applications inside the directory containing package.json, type:

$ /Applications/nwjs.app/Contents/MacOS/nwjs .

As you can see, in Mac OS X the NW.js kit's executable binary is in a hidden directory within the .app file.

Running NW.js applications on Linux

Open the terminal and type:

$ ~/nwjs/nw /path/to/the/project/

Or, if running NW.js applications inside the directory containing package.json, type:

$ ~/nwjs/nw .

Running the application, you may notice that a few errors are thrown depending on your platform. As I stated in the pros and cons section, NW.js is still young, so that's quite normal, and probably we're talking about minor issues. However, you can search the NW.js GitHub issues page to check whether they've already been reported; otherwise, open a new issue, as your help would be much appreciated.

Now, regardless of the operating system, a window similar to the following one should appear:

As illustrated, the process.version object variable has been printed properly, as Node.js has correctly been injected and can be accessed from the DOM.
Perhaps, the result is a little different than what you expected since the top navigation bar of Chromium is visible. Do not worry! You can get rid of the navigation bar at any time simply by adding the window.toolbar = false parameter to the manifest file, but for now, it's important that the bar is visible in order to debug the application. Summary In this article, you discovered how NW.js works under the hood, the recommended tools for development, a few usage scenarios of the library, and eventually, how to run your first, simple application using third-party modules of Node.js. I really hope I haven't bored you too much with the theoretical concepts underlying the functioning of NW.js; I really did my best to keep it short.

How to integrate social media with your WordPress website

Packt
29 Apr 2015
6 min read
In this article by Karol Krol, the author of WordPress 4.x Complete, we will look at how we can integrate our website with social media. We will list some more ways in which you can make your site social media friendly, and also see why you'd want to do that in the first place.

Let's start with the why. In this day and age, social media is one of the main drivers of traffic for many sites. Even if you just want to share your content with friends and family, or you have some serious business plans regarding your site, you need to have at least some level of social media integration. Even if you install just simple social media share buttons, you will effectively encourage your visitors to pass on your content to their followers, thus expanding your reach and making your content more popular.

(For more resources related to this topic, see here.)

Making your blog social media friendly

There are a handful of ways to make your site social media friendly. The most common approaches are as follows:

- Social media share buttons, which allow your visitors to share your content with their friends and followers
- Social media API integration, which makes your content look better on social media (design-wise)
- Automatic content distribution to social media
- Social media metrics tracking

Let's discuss these one by one.

Setting up social media share buttons

There are hundreds of social media plugins available out there that allow you to display a basic set of social media buttons on your site. The one I advise you to use is called Social Share Starter (http://bit.ly/sss-plugin). Its main advantage is that it's optimized to work on new and low-traffic sites, and doesn't show any negative social proof when displaying the buttons and their share numbers.

Setting up social media APIs' integration

The next step worth taking to make your content appear more attractive on social media is to integrate it with some social media APIs, particularly that of Twitter. What exactly their API is and how it works isn't very relevant for the WordPress discussion we're having here. So instead, let's just focus on what the outcome of integrating your site with this API is.

Here's what a standard tweet mentioning a website usually looks like (please notice the overall design, not the text contents):

Here's a different tweet, mentioning an article from a site that has Twitter's (Twitter Cards) API enabled:

This looks much better. Luckily, having this level of Twitter integration is quite easy. All you need is a plugin called JM Twitter Cards (available at https://wordpress.org/plugins/jm-twitter-cards/). After installing and activating it, you will be guided through the process of setting everything up and approving your site with Twitter (a mandatory step).

Setting up automatic content distribution to social media

The idea behind automatic social media distribution of your content is that you don't have to remember to do so manually whenever you publish a new post. Instead of copying and pasting the URL address of your new post by hand to each individual social media platform, you can have this done automatically. This can be done in many ways, but let's discuss the two most usable ones, the Jetpack and Revive Old Post plugins.

The Jetpack plugin

The Jetpack plugin is available at https://wordpress.org/plugins/jetpack/. One of Jetpack's modules is called Publicize. You can activate it by navigating to the Jetpack | Settings section of the wp-admin.
After doing so, you will be able to go to Settings | Sharing and integrate your site with one of the six available social media platforms:

After going through the process of authorizing the plugin with each service, your site will be fully capable of posting each of your new posts to social media automatically.

The Revive Old Post plugin

The Revive Old Post plugin is available at https://revive.social/plugins/revive-old-post. While the Jetpack plugin takes the newest posts on your site and distributes them to your various social media accounts, the Revive Old Post plugin does the same with your archived posts, ultimately giving them a new life. Hence the name Revive Old Post.

After downloading and activating this plugin, go to its section in the wp-admin, Revive Old Post. Then, switch to the Accounts tab. There, you can enable the plugin to work with your social media accounts by clicking on the authorization buttons:

Then, go to the General settings tab and handle the time intervals and other details of how you want the plugin to work with your social media accounts. When you're done, just click on the SAVE button. At this point, the plugin will start operating automatically and distribute your random archived posts to your social media accounts.

Note that it's probably a good idea not to share things too often if you don't want to anger your followers and make them unfollow you. For that reason, I wouldn't advise posting more than once a day.

Setting up social media metrics tracking

The final element in our social media integration puzzle is setting up some kind of tracking mechanism that will tell us how popular our content is on social media (in terms of shares). Granted, you can do this manually by going to each of your posts and checking their share numbers individually (provided you have the Social Share Starter plugin installed). However, there's a quicker method, and it involves another plugin. This one is called Social Metrics Tracker and you can get it at https://wordpress.org/plugins/social-metrics-tracker/. In short, this plugin collects social share data from a number of platforms and then displays it to you in a single readable dashboard view.

After you install and activate the plugin, you'll need to give it a couple of minutes to crawl through your social media accounts and get the data. Soon after that, you will be able to visit the plugin's dashboard by going to the Social Metrics section in the wp-admin:

For some web hosts and setups, this plugin might end up consuming too much of the server's resources. If this happens, consider activating it only occasionally to check your results and then deactivating it again. Doing this even once a week will still give you a great overview of how well your content is performing on social media.

This closes our short guide on how to integrate your WordPress site with social media. I'll admit that we're just scratching the surface here and that there's a lot more that can be done. There are new social media plugins being released literally every week. That being said, the methods described here are more than enough to make your WordPress site social media friendly and enable you to share your content effectively with your friends, family, and audience.

Summary

Here, we talked about social media integration, tools, and plugins that can make your life a lot easier as an online content publisher.
Resources for Article: Further resources on this subject: FAQs on WordPress 3 [article] Creating Blog Content in WordPress [article] Customizing WordPress Settings for SEO [article]

Testing our application on an iOS device

Packt
01 Apr 2015
10 min read
In this article by Michelle M. Fernandez, author of the book Corona SDK Mobile Game Development Beginner's Guide, we will see that before we can upload our first Hello World application to an iOS device, we need to log in to our Apple developer account so that we can create and install our signing certificates on our development machine. If you haven't created a developer account yet, do so by going to http://developer.apple.com/programs/ios/. Remember that there is a fee of $99 a year to become an Apple developer.

(For more resources related to this topic, see here.)

The Apple developer account only applies to users developing on Mac OS X. Make sure that your version of Xcode is the same as or newer than the version of the OS on your phone. For example, if you have version 5.0 of the iPhone OS installed, you will need the Xcode that is bundled with the iOS SDK version 5.0 or later.

Time for action – obtaining the iOS developer certificate

Make sure that you're signed up for the developer program; you will need to use the Keychain Access tool located in /Applications/Utilities so that you can create a certificate request. A valid certificate must sign all iOS applications before they can be run on an Apple device in order to do any kind of testing. The following steps will show you how to create an iOS developer certificate:

1. Go to Keychain Access | Certificate Assistant | Request a Certificate From a Certificate Authority:
2. In the User Email Address field, type in the e-mail address you used when you registered as an iOS developer. For Common Name, enter your name or team name. Make sure that the name entered matches the information that was submitted when you registered as an iOS developer. The CA Email Address field does not need to be filled in, so you can leave it blank. We are not e-mailing the certificate to a Certificate Authority (CA). Check Saved to disk and Let me specify key pair information. When you click on Continue, you will be asked to choose a save location. Save your file at a destination where you can locate it easily, such as your desktop.
3. In the following window, make sure that 2048 bits is selected for the Key Size and RSA for the Algorithm, and then click on Continue. This will generate the key and save it to the location you specified. Click on Done in the next window.
4. Next, go to the Apple developer website at http://developer.apple.com/, click on iOS Dev Center, and log in to your developer account. Select Certificates, Identifiers & Profiles under iOS Developer Program on the right-hand side of the screen and navigate to Certificates under iOS Apps.
5. Select the + icon on the right-hand side of the page. Under Development, click on the iOS App Development radio button. Click on the Continue button till you reach the screen to generate your certificate:
6. Click on the Choose File button and locate the certificate file that you saved to your desktop, and then click on the Generate button.
7. Upon hitting Generate, you will get the e-mail notification you specified in the CA request form from Keychain Access, or you can download it directly from the developer portal. The person who created the certificate will get this e-mail and can approve the request by hitting the Approve button.
8. Click on the Download button and save the certificate to a location that is easy to find. Once this is completed, double-click on the file, and the certificate will be added automatically to Keychain Access.

What just happened?

We now have a valid certificate for iOS devices.
The iOS Development Certificate is used for development purposes only and is valid for about a year. The key pair is made up of your public and private keys. The private key is what allows Xcode to sign iOS applications. Private keys are available only to the key pair creator and are stored in the system keychain of the creator's machine.

Adding iOS devices

You are allowed to assign up to 100 devices for development and testing purposes in the iPhone Developer Program. To register a device, you will need the Unique Device Identifier (UDID) number. You can find this in iTunes and Xcode.

Xcode

To find out your device's UDID, connect your device to your Mac and open Xcode. In Xcode, navigate to the menu bar, select Window, and then click on Organizer. The 40-hex-character string in the Identifier field is your device's UDID. Once the Organizer window is open, you should see the name of your device in the Devices list on the left-hand side. Click on it and select the identifier with your mouse, copying it to the clipboard.

Usually, when you connect a device to Organizer for the first time, you'll receive a button notification that says Use for Development. Select it, and Xcode will do most of the provisioning work for your device in the iOS Provisioning Portal.

iTunes

With your device connected, open iTunes and click on your device in the device list. Select the Summary tab. Click on the Serial Number label to show the Identifier field and the 40-character UDID. Press Command + C to copy the UDID to your clipboard.

Time for action – adding/registering your iOS device

To add a device to use for development/testing, perform the following steps:

1. Select Devices in the Developer Portal and click on the + icon to register a new device. Select the Register Device radio button to register one device.
2. Create a name for your device in the Name field and put your device's UDID in the UDID field by pressing Command + V to paste the number you have saved on the clipboard.
3. Click on Continue when you are done and click on Register once you have verified the device information.

Time for action – creating an App ID

Now that you have added a device to the portal, you will need to create an App ID. An App ID has a unique 10-character Apple ID Prefix generated by Apple and an Apple ID Suffix that is created by the Team Admin in the Provisioning Portal. An App ID could look like this: 7R456G1254.com.companyname.YourApplication. To create a new App ID, use these steps:

1. Click on App IDs in the Identifiers section of the portal and select the + icon.
2. Fill out the App ID Description field with the name of your application.
3. You are already assigned an Apple ID Prefix (also known as a Team ID). In the App ID Suffix field, specify a unique identifier for your app. It is up to you how you want to identify your app, but it is recommended that you use a reverse-domain style string, that is, com.domainname.appname.
4. Click on Continue and then on Submit to create your App ID.

You can create a wildcard character in the bundle identifier that you can share among a suite of applications using the same Keychain access. To do this, simply create a single App ID with an asterisk (*) at the end. You would place this in the field for the bundle identifier either by itself or at the end of your string, for example, com.domainname.*. More information on this topic can be found in the App IDs section of the iOS Provisioning Portal at https://developer.apple.com/ios/manage/bundles/howto.action.

What just happened?
All UDIDs are unique on every device, and we can locate them in Xcode and iTunes. When we added a device in the iOS Provisioning Portal, we took the UDID, which consists of 40 hex characters, and made sure we created a device name so that we could identify what we're using for development.

We now have an App ID for the applications we want to install on a device. An App ID is a unique identifier that iOS uses to allow your application to connect to the Apple Push Notification service, share keychain data between applications, and communicate with external hardware accessories you wish to pair your iOS application with.

Provisioning profiles

A provisioning profile is a collection of digital entities that uniquely ties apps and devices to an authorized iOS Development Team and enables a device to be used to test a particular app. Provisioning profiles define the relationship between apps, devices, and development teams. They need to be defined for both the development and distribution aspects of an app.

Time for action – creating a provisioning profile

To create a provisioning profile, go to the Provisioning Profiles section of the Developer Portal and click on the + icon. Perform the following steps:

1. Select the iOS App Development radio button under the Development section and then select Continue.
2. Select the App ID you created for your application in the pull-down menu and click on Continue.
3. Select the certificate you wish to include in the provisioning profile and then click on Continue.
4. Select the devices you wish to authorize for this profile and click on Continue.
5. Create a Profile Name and click on the Generate button when you are done:
6. Click on the Download button. While the file is downloading, launch Xcode if it's not already open and press Shift + Command + 2 on the keyboard to open Organizer.
7. Under Library, select the Provisioning Profiles section. Drag your downloaded .mobileprovision file to the Organizer window. This will automatically copy your .mobileprovision file to the proper directory.

What just happened?

Devices that have permission within the provisioning profile can be used for testing as long as the certificates are included in the profile. One device can have multiple provisioning profiles installed.

Application icon

Currently, our app has no icon image to display on the device. By default, if there is no icon image set for the application, you will see a light gray box displayed along with your application name below it once the build has been loaded to your device. So, launch your preferred creative development tool and let's create a simple image.

The application icon image file for a standard-resolution iPad 2 or iPad mini is a 76 x 76 px PNG. The image should always be saved as Icon.png and must be located in your current project folder. iPhone/iPod touch devices that support Retina display need an additional high-resolution 120 x 120 px icon, and the iPad or iPad mini needs a 152 x 152 px icon, named Icon@2x.png.

The contents of your current project folder should look like this:

Hello World/    name of your project folder
Icon.png        required for iPhone/iPod/iPad
Icon@2x.png     required for iPhone/iPod with Retina display
main.lua

In order to distribute your app, the App Store requires a 1024 x 1024 pixel version of the icon. It is best to create your icon at a higher resolution first.
Refer to the Apple iOS Human Interface Guidelines for the latest official App Store requirements at http://developer.apple.com/library/ios/#documentation/userexperience/conceptual/mobilehig/Introduction/Introduction.html.

The application icon is a visual representation of your application name. You will be able to view the icon on your device once you compile a build. The icon is also the image that launches your application.

Summary

In this article, we covered how to test your app on an iOS device and register your iOS device.

Resources for Article: Further resources on this subject: Linking OpenCV to an iOS project [article] Creating a New iOS Social Project [article] Sparrow iOS Game Framework - The Basics of Our Game [article]

WooCommerce Basics

Packt
01 Apr 2015
16 min read
In this article by Patrick Rauland, author of the book WooCommerce Cookbook, we will focus on the following topics:

- Installing WooCommerce
- Installing official WooThemes plugins
- Manually creating WooCommerce pages
- Creating a WooCommerce plugin

(For more resources related to this topic, see here.)

A few years ago, building an online store used to be an incredibly complex task. You had to install bulky software onto your own website and pay expensive developers a significant sum of money to customize even the simplest elements of your store. Luckily, nowadays, adding e-commerce functionality to your existing WordPress-powered website can be done by installing a single plugin.

In this article, we'll go over the settings that you'll need to configure before launching your online store with WooCommerce. Most of the recipes in this article are simple to execute. We do, however, add a relatively complex recipe near the end of the article to show you how to create a plugin specifically for WooCommerce. If you're going to be customizing WooCommerce with code, it's definitely worth looking at that recipe to know the best way to customize WooCommerce without affecting other parts of your site.

The recipes in this article form the very basics of setting up a store, installing plugins that enhance WooCommerce, and managing those plugins. There are recipes for official WooCommerce plugins written by WooThemes as well as a recipe for unofficial plugins. Feel free to select either one. In general, the official plugins are better supported, more up to date, and have more functionality than unofficial plugins. You could always try an unofficial plugin to see whether it meets your needs, and if it doesn't, then use an official plugin that is much more likely to meet your needs. At the end of this article, your store will be fully functional and ready to display products.

Installing WooCommerce

WooCommerce is a WordPress plugin, which means that you need to have WordPress running on your own server to add WooCommerce. The first step is to install WooCommerce. You could do this on an established website or a brand new website; it doesn't matter. Since e-commerce is more complex than your average plugin, there's more to the installation process than just installing the plugin.

Getting ready

Make sure you have the permissions necessary to install plugins on your WordPress site. The easiest way to have the correct permissions is to make sure your account on your WordPress site has the admin role.

How to do it…

There are two parts to this recipe. The first part is installing the plugin and the second is adding the required pages to the site. Let's have a look at the following steps for further clarity:

1. Log in to your WordPress site.
2. Click on the Plugins menu.
3. Click on the Add New menu item. These steps have been demonstrated visually in the following screenshot:
4. Search for WooCommerce.
5. Click on the Install Now button, as shown in the following screenshot:
6. Once the plugin has been installed, click on the Activate Plugin button.

You now have WooCommerce activated on your site, which means we're halfway there. E-commerce platforms need to have certain pages (such as a cart page, a checkout page, an account page, and so on) to function. We need to add those to your site.

7. Click on the Install WooCommerce Pages button, which appears after you've activated WooCommerce.
This is demonstrated in the following screenshot: How it works… WordPress has an infrastructure that allows any WordPress site to install a plugin hosted on WordPress.org. This is a secure process that is managed by WordPress.org. Installing the WooCommerce pages allows all of the e-commerce functionality to run. Without installing the pages, WooCommerce won't know which page is the cart page or the checkout page. Once these pages are set up, we're ready to have a basic store up and running. If WordPress prompts you for FTP credentials when installing the plugin, that's likely to be a permissions issue with your web host. It is a huge pain if you have to enter FTP credentials every time you want to install or update a plugin, and it's something you should take care of. You can send this link to your web host provider so they know how to change their permissions. You can refer to http://www.chrisabernethy.com/why-wordpress-asks-connection-info/ for more information to resolve this WordPress issue. Installing official WooThemes plugins WooThemes doesn't just create the WooCommerce plugin. They also create standalone plugins and hundreds of extensions that add extra functionality to WooCommerce. The beauty of this system is that WooCommerce is very easy to use because users only add extra complexity when they need it. If you only need simple shipping options, you don't ever have to see the complex shipping settings. On the WooThemes website, you may browse for WooCommerce extensions, purchase them, and download and install them on your site. WooThemes has made the whole process very easy to maintain. They have built an updater similar to the one in WordPress, which, once configured, will allow a user to update a plugin with one click instead of having to go through the whole plugin upload process again. Getting ready Make sure you have the necessary permissions to install plugins on your WordPress site. You also need to have a WooThemes product. There are several free WooThemes products, including Pay with Amazon, which you can find at http://www.woothemes.com/products/pay-with-amazon/. How to do it… There are two parts to this recipe. The first part is installing the plugin and the second is adding your license for future updates. Follow these steps: Log in to http://www.woothemes.com. Click on the Downloads menu: Find the product you wish to download and click on the Download link for the product. You will see that you get a ZIP file. On your WordPress site, go to the Plugins menu and click on Add New. Click on Upload Plugin. Select the file you just downloaded and click on the Install Now button. After the plugin has finished installing, click on the Activate Plugin link. You now have WooCommerce as well as a WooCommerce extension activated on your site. They're both functioning and will continue to function. You will, however, want to perform a few more steps to make sure it's easy to update your extensions: Once you have an extension activated on your site, you'll see a link in the WordPress admin: Install the WooThemes Updater plugin. Click on that link: The updater will be installed automatically. Once it is installed, you need to activate the updater. After activation, you'll see a new link in the WordPress admin: activate your product licenses. Click that link to go straight to the page where you can enter your licenses. You could also navigate to that page manually by going to Dashboard | WooThemes Helper from the menu. 
Keep your WordPress site open in one tab and log back in to your WooThemes account in another browser tab. On the WooThemes browser tab, go to My Licenses and you'll see a list of your products with a license key under the heading KEY: Copy the key, go back to your WordPress site, and enter it in the Licenses field. Click on the Activate Products button at the bottom of the page. The activation process can take a few seconds to complete. If you've successfully put in your key, you should see a message at the top of the screen saying so. How it works… A plugin that's not hosted on WordPress.org can't update without someone manually reuploading it. The WooThemes updater was built to make this process easier so you can press the update button and have your website do all the heavy lifting. Some websites sell official WooCommerce plugins without a license key. These sales aren't licensed and you won't be getting updates, bug fixes, or access to the support desk. With a regular website, it's important to stay up to date. However, with e-commerce, it's even more important since you'll be handling very sensitive payment information. That's why I wouldn't ever recommend using a plugin that can't update. Manually creating WooCommerce pages Every e-commerce platform will need to have some way of creating extra pages for e-commerce functionality, such as a cart page, a checkout page, an account page, and so on. WooCommerce prompts you to create these pages when you first install the plugin. So if you installed it correctly, you shouldn't have to do this. But if you were trying multiple e-commerce systems and for some reason deleted some pages, you may have to recreate those pages. How to do it… There's a very useful Tools menu in WooCommerce. It's a bit hard to find since you won't be needing it every day, but it has some pretty useful tools if you ever need to do some troubleshooting. One of these tools is the one that allows you to recreate your WooCommerce pages. Let's have a look at how to use that tool: Log in to the WordPress admin. Click on WooCommerce | System Status: Click on Tools: Click on the Install Pages button: How it works… WooCommerce keeps track of which pages run e-commerce functionality. When you click on the Install Pages button, it checks which pages exist, and if they don't, it will automatically create them for you. You could create them yourself by creating new WordPress pages and then manually assigning each page with specific e-commerce functionality. You may want to do this if you already have a cart page and don't want to recreate a new cart page but just copy the content from the old page to the new page. All you want to do is tell WooCommerce which page should have the cart functionality. Let's have a look at the following manual settings: The Cart, Checkout, and Terms & Conditions pages can all be set by going to WooCommerce | Settings | Checkout. The My Account page can be set by going to WooCommerce | Settings | Accounts. There's more... You can manually set some pages, such as the Cart and Checkout pages, but you can't set subpages. WooCommerce uses a WordPress functionality called endpoints to create these subpages. Pages such as the Order Received page, which is displayed right after payment, can't be manually created. These endpoints are created on the fly based on the parent page. The Order Received page is part of the checkout process, so it's based on the Checkout page. 
Any content on the Checkout page will appear on both the Checkout page and on the Order Received page. You can't add content to the parent page without it affecting the subpage, but you can change the subpage URLs. The checkout endpoints can be configured by going to WooCommerce | Settings | Checkout | Checkout Endpoints. Creating a WooCommerce plugin Unlike a lot of hosted e-commerce solutions, WooCommerce is entirely customizable. That's one of the huge advantages for anyone who builds on open source software. If you don't like it, you can change it. At some point, you'll probably want to change something that's not on a settings page, and that's when you may want to dig into the code. Even if you don't know how to code, you may want to look this over so that when you work with a developer, you'll know they're doing it the right way. Getting ready In addition to having admin access to a WordPress site, you'll also need FTP credentials so you can upload a plugin. You'll also need a text editor. Popular code editors include Sublime Text, Coda, Dreamweaver, and Atom. I personally use Atom. You could also use Notepad on a Windows machine or TextEdit on a Mac in a pinch. How to do it… We're going to be creating a plugin that interacts with WooCommerce. It will take the existing WooCommerce functionality and change it. These are the WooCommerce basics. If you build a plugin like this correctly, when WooCommerce isn't active, it won't do anything at all and won't slow down your website. Let's create a plugin by performing the following steps: Open your text editor and create a new file. Save the file as woocommerce-demo-plugin.php. In that file, add the opening PHP tag, which looks like this: <?php. On the next line, add a plugin header. This allows WordPress to recognize the file as a plugin so that it can be activated. It looks something like the following: /** * Plugin Name: WooCommerce Demo Plugin * Plugin URI: https://gist.github.com/BFTrick/3ab411e7cec43eff9769 * Description: A WooCommerce demo plugin * Author: Patrick Rauland * Author URI: http://speakinginbytes.com/ * Version: 1.0 * * This program is free software: you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation, either version 3 of the License, or * (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program. If not, see <http://www.gnu.org/licenses/>. * */ Now that WordPress knows that your file is a plugin, it's time to add some functionality to it. The first thing a good developer does is make sure their plugin won't conflict with another plugin. To do that, we make sure an existing class doesn't have the same name as our class. I'll be using the WC_Demo_Plugin class, but you can use any class name you want. Add the following code beneath the plugin header: if ( class_exists( 'WC_Demo_Plugin' ) ) {    return; }   class WC_Demo_Plugin {   } Our class doesn't do anything yet, but at least we've written it in such a way that it won't break another plugin. There's another good practice we should add to our plugin before we add the functionality, and that's some logic to make sure another plugin won't misuse our plugin. 
In the vast majority of use cases, you want to make sure there can't be two instances of your code running. In computer science, this is called the Singleton pattern. This can be controlled by tracking the instances of the plugin with a variable. Right after the WC_Demo_Plugin { line, add the following: protected static $instance = null;     /** * Return an instance of this class. * * @return object A single instance of this class. * @since 1.0 */ public static function get_instance() {    // If the single instance hasn't been set, set it now.    if ( null == self::$instance ) {        self::$instance = new self;    }      return self::$instance; } And get the plugin started by adding this at the bottom of the file, after the closing brace of the class: add_action( 'plugins_loaded', array( 'WC_Demo_Plugin', 'get_instance' ), 0 ); At this point, we've made sure our plugin doesn't break other plugins and we've also dummy-proofed our own plugin so that we or other developers don't misuse it. Let's add just a bit more logic so that we don't run our logic unless WooCommerce is already loaded. This will make sure that we don't accidentally break something if we turn WooCommerce off temporarily. Right after the protected static $instance = null; line, add the following: /** * Initialize the plugin. * * @since 1.0 */ private function __construct() {    if ( class_exists( 'WooCommerce' ) ) {      } } And now our plugin only runs when WooCommerce is loaded. I'm guessing that at this point, you finally want it to do something, right? After we make sure WooCommerce is running, let's add some functionality. Right after the if ( class_exists( 'WooCommerce' ) ) { line, add the following code so that we add an admin notice: // print an admin notice to the screen. add_action( 'admin_notices', array( $this, 'display_admin_notice' ) ); This code will call a method called display_admin_notice, but we haven't written that yet, so it's not doing anything. Let's write that method. Have a look at the __construct method, which should now look like this: /** * Initialize the plugin. * * @since 1.0 */ private function __construct() {    if ( class_exists( 'WooCommerce' ) ) {          // print an admin notice to the screen.        add_action( 'admin_notices', array( $this, 'display_admin_notice' ) );      } } Add the following after the preceding __construct method: /** * Print an admin notice * * @since 1.0 */ public function display_admin_notice() {    ?>    <div class="updated">        <p><?php _e( 'The WooCommerce dummy plugin notice.', 'woocommerce-demo-plugin' ); ?></p>    </div>    <?php } This will print an admin notice on every single admin page. This notice includes all the messages you typically see in the WordPress admin. You could replace this admin notice method with just about any other hook in WooCommerce to provide additional customizations in other areas of WooCommerce, whether it be for shipping, the product page, the checkout process, or any other area. This plugin is the easiest way to get started with WooCommerce customizations. If you'd like to see the full code sample, you can see it at https://gist.github.com/BFTrick/3ab411e7cec43eff9769. Now that the plugin is complete, you need to upload it to your plugins folder. You can do this via the WordPress admin or more commonly via FTP. Once the plugin has been uploaded to your site, you'll need to activate the plugin just like any other WordPress plugin. The end result is a notice in the WordPress admin letting us know we did everything successfully. 
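The admin notice above is only a placeholder; the same skeleton can carry any WooCommerce hook. As a purely illustrative sketch (not part of the original recipe; the class name WC_Checkout_Tweak and the chosen behavior are assumptions for this example), the following standalone plugin reuses the identical pattern (class_exists guard, Singleton accessor, and a constructor that only attaches hooks when WooCommerce is active) to hook the woocommerce_checkout_fields filter and hide the optional company field on the checkout form:

<?php
/**
 * Plugin Name: WooCommerce Checkout Tweak (example)
 * Description: Illustrative only. Removes the optional company field from the checkout form.
 * Version: 1.0
 */

// Bail out if another plugin already defines this class name.
if ( class_exists( 'WC_Checkout_Tweak' ) ) {
    return;
}

class WC_Checkout_Tweak {

    /** @var WC_Checkout_Tweak|null Single instance of this class. */
    protected static $instance = null;

    /**
     * Return the single instance of this class (Singleton pattern).
     *
     * @return WC_Checkout_Tweak
     */
    public static function get_instance() {
        if ( null === self::$instance ) {
            self::$instance = new self;
        }
        return self::$instance;
    }

    /**
     * Attach hooks, but only when WooCommerce is loaded.
     */
    private function __construct() {
        if ( class_exists( 'WooCommerce' ) ) {
            add_filter( 'woocommerce_checkout_fields', array( $this, 'remove_company_field' ) );
        }
    }

    /**
     * Remove the optional billing company field from the checkout form.
     *
     * @param array $fields Checkout field definitions grouped by section.
     * @return array Filtered field definitions.
     */
    public function remove_company_field( $fields ) {
        unset( $fields['billing']['billing_company'] );
        return $fields;
    }
}

add_action( 'plugins_loaded', array( 'WC_Checkout_Tweak', 'get_instance' ), 0 );

Dropping a file like this into wp-content/plugins and activating it should be enough to see the change on the checkout page; if the field is still visible, the usual first check is whether WooCommerce itself is active.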
Whenever possible, use object-oriented code. That means using objects (like the WC_Demo_Plugin class) to encapsulate your code. It will prevent a lot of naming conflicts down the road. If you see some procedural code online, you can usually convert it to object-oriented code pretty easily. Summary In this article, you have learned the basic steps in installing WooCommerce, installing WooThemes plugins, manually creating WooCommerce pages, and creating a WooCommerce plugin. Resources for Article: Further resources on this subject: Creating Blog Content in WordPress [article] Tips and Tricks [article] Setting Up WooCommerce [article]

Drupal 8 and Configuration Management

Packt
18 Mar 2015
15 min read
In this article by Stefan Borchert and Anja Schirwinski, the authors of the book Drupal 8 Configuration Management, we will learn the inner workings of the Configuration Management system in Drupal 8. You will learn about config and schema files and read about the difference between simple configuration and configuration entities. (For more resources related to this topic, see here.) The config directory During installation, Drupal adds a directory within sites/default/files called config_HASH, where HASH is a long random string of letters and numbers, as shown in the following screenshot: This sequence is a random hash generated during the installation of your Drupal site. It is used to add some protection to your configuration files, in addition to the default restriction enforced by the .htaccess file within the subdirectories of the config directory, which prevents unauthorized users from seeing the contents of the directories. As a result, it would be really hard for someone to guess the folder's name. Within the config directory, you will see two additional directories that are empty by default (leaving the .htaccess and README.txt files aside). One of the directories is called active. If you change the configuration system to use file storage instead of the database for the active Drupal site configuration, this directory will contain the active configuration. If you did not customize the storage mechanism of the active configuration (we will learn later how to do this), Drupal 8 uses the database to store the active configuration. The other directory is called staging. This directory is empty by default, but can host the configuration you want to be imported into your Drupal site from another installation. You will learn how to use this later on in this article. A simple configuration example First, we want to become familiar with configuration itself. If you look into the database of your Drupal installation and open up the config table, you will find the entire active configuration of your site, as shown in the following screenshot: Depending on your site's configuration, table names may be prefixed with a custom string, so you'll have to look for a table name that ends with config. Don't worry about the strange-looking text in the data column; this is the serialized content of the corresponding configuration. It expands to single configuration values—that is, system.site.name, which holds the value of the name of your site. Changing the site's name in the user interface on admin/config/system/site-information will immediately update the record in the database; thus, put simply, the records in the table are the current state of your site's configuration, as shown in the following screenshot: But where does the initial configuration of your site come from? Drupal itself and the modules you install must use some kind of default configuration that gets added to the active storage during installation. Config and schema files – what are they and what are they used for? In order to provide a default configuration during the installation process, Drupal (modules and profiles) comes with a bunch of files that hold the configuration needed to run your site. To make parsing of these files simple and to enhance their readability, the configuration is stored using the YAML format. YAML (http://yaml.org/) is a data-oriented serialization standard that aims for simplicity. With YAML, it is easy to map common data types such as lists, arrays, or scalar values. 
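Before we dig into these files, it helps to see how the active configuration is reached from code. The following is a minimal sketch only, not taken from the book; it assumes it runs inside a custom module or a one-off script on a bootstrapped Drupal 8 site. It reads and then updates the system.site name value discussed above through the standard configuration API, which is essentially what the site-information form does behind the scenes:

<?php

// Read a value from the active configuration (immutable access).
// 'system.site' is the configuration object; 'name' is a key inside it.
$site_name = \Drupal::config('system.site')->get('name');

// Write a value: request an editable copy, change it, and save it back
// to the active storage (the config table by default).
\Drupal::configFactory()
  ->getEditable('system.site')
  ->set('name', 'My renamed site')
  ->save();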
Config files Directly beneath the root directory of each module and profile defining or overriding configuration (either core or contrib), you will find a directory named config. Within this directory, there may be two more directories (although both are optional): install and schema. Check the image module inside core/modules and take a look at its config directory, as shown in the following screenshot: The install directory shown in the following screenshot contains all configuration values that the specific module defines or overrides and that are stored in files with the extension .yml (one of the default extensions for files in the YAML format): During installation, the values stored in these files are copied to the active configuration of your site. In the case of default configuration storage, the values are added to the config table; in file-based configuration storage mechanisms, on the other hand, the files are copied to the appropriate directories. Looking at the filenames, you will see that they follow a simple convention: <module name>.<type of configuration>[.<machine name of configuration object>].yml (setting aside <module name>.settings.yml for now). The explanation is as follows: <module name>: This is the name of the module that defines the settings included in the file. For instance, the image.style.large.yml file contains settings defined by the image module. <type of configuration>: This can be seen as a type of group for configuration objects. The image module, for example, defines several image styles. These styles are a set of different configuration objects, so the group is defined as style. Hence, all configuration files that contain image styles defined by the image module itself are named image.style.<something>.yml. The same structure applies to blocks (<block.block.*.yml>), filter formats (<filter.format.*.yml>), menus (<system.menu.*.yml>), content types (<node.type.*.yml>), and so on. <machine name of configuration object>: The last part of the filename is the unique machine-readable name of the configuration object itself. In our examples from the image module, you see three different items: large, medium, and thumbnail. These are exactly the three image styles you will find on admin/config/media/image-styles after installing a fresh copy of Drupal 8. The image styles are shown in the following screenshot: Schema files The primary reason schema files were introduced into Drupal 8 is multilingual support. A tool was needed to identify all translatable strings within the shipped configuration. The secondary reason is to provide actual translation forms for configuration based on your data and to expose translatable configuration pieces to external tools. Each module can have as many configuration .yml files as needed. All of these are explained in one or more schema files that are shipped with the module. As a simple example of how schema files work, let's look at the system's maintenance settings in the system.maintenance.yml file at core/modules/system/config/install. The file's contents are as follows: message: '@site is currently under maintenance. We should be back shortly. Thank you for your patience.' langcode: en The system module's schema files live in core/modules/system/config/schema. These define the basic types but, for our example, the most important aspect is that they define the schema for the maintenance settings. 
The corresponding schema section from the system.schema.yml file is as follows: system.maintenance: type: mapping label: 'Maintenance mode' mapping:    message:      type: text      label: 'Message to display when in maintenance mode'    langcode:      type: string      label: 'Default language' The first line corresponds to the filename for the .yml file, and the nested lines underneath the first line describe the file's contents. Mapping is a basic type for key-value pairs (always the top-level type in .yml). The system.maintenance.yml file is labeled as label: 'Maintenance mode'. Then, the actual elements in the mapping are listed under the mapping key. As shown in the code, the file has two items, so the message and langcode keys are described. These are a text and a string value, respectively. Both values are given a label as well in order to identify them in configuration forms. Learning the difference between active and staging By now, you know that Drupal works with the two directories active and staging. But what is the intention behind those directories? And how do we use them? The configuration used by your site is called the active configuration since it's the configuration that is affecting the site's behavior right now. The current (active) configuration is stored in the database and direct changes to your site's configuration go into the specific tables. The reason Drupal 8 stores the active configuration in the database is that it enhances performance and security. Source: https://www.drupal.org/node/2241059. However, sometimes you might not want to store the active configuration in the database and might need to use a different storage mechanism. For example, using the filesystem as configuration storage will enable you to track changes in the site's configuration using a versioning system such as Git or SVN. Changing the active configuration storage If you do want to switch your active configuration storage to files, here's how: Note that changing the configuration storage is only possible before installing Drupal. After installing it, there is no way to switch to another configuration storage! To use a different configuration storage mechanism, you have to make some modifications to your settings.php file. First, you'll need to find the section named Active configuration settings. Now you will have to uncomment the line that starts with $settings['bootstrap_config_storage'] to enable file-based configuration storage. Additionally, you need to copy the existing default.services.yml (next to your settings.php file) to a file named services.yml and enable the new configuration storage: services: # Override configuration storage. config.storage:    class: Drupal\Core\Config\CachedStorage    arguments: ['@config.storage.active', '@cache.config'] config.storage.active:    # Use file storage for active configuration.    alias: config.storage.file This tells Drupal to override the default service used for configuration storage and use config.storage.file as the active configuration storage mechanism instead of the default database storage. After installing the site with these settings, we will take another look at the config directory in sites/default/files (assuming you didn't change the location of the active and staging directories): As you can see, the active directory contains the entire site's configuration. The files in this directory get copied here during the website's installation process. 
Whenever you make a change to your website, the change is reflected in these files. Exporting a configuration always exports a snapshot of the active configuration, regardless of the storage method. The staging directory contains the changes you want to add to your site. Drupal compares the staging directory to the active directory and checks for changes between them. When you upload your compressed export file, it actually gets placed inside the staging directory. This means you can save yourself the trouble of using the interface to export and import the compressed file if you're comfortable enough with copy-and-pasting files to another directory. Just make sure you copy all of the files to the staging directory even if only one of the files was changed. Any missing files are interpreted as deleted configuration, and will mess up your site. In order to get the contents of staging into active, we simply have to use the synchronize option at admin/config/development/configuration again. This page will show us what was changed and allows us to import the changes. On importing, your active configuration will get overridden with the configuration in your staging directory. Note that the files inside the staging directory will not be removed after the synchronization is finished. The next time you want to copy-and-paste from your active directory, make sure you empty staging first. Note that you cannot override files directly in the active directory. The changes have to be made inside staging and then synchronized. Changing the storage location of the active and staging directories In case you do not want Drupal to store your configuration in sites/default/files, you can set the path according to your wishes. Actually, this is recommended for security reasons, as these directories should never be accessible over the Web or by unauthorized users on your server. Additionally, it makes your life easier if you work with version control. By default, the whole files directory is usually ignored in version-controlled environments because Drupal writes to it, and having the active and staging directory located within sites/default/files would result in them being ignored too. So how do we change the location of the configuration directories? Before installing Drupal, you will need to create and modify the settings.php file that Drupal uses to load its basic configuration data from (that is, the database connection settings). If you haven't done it yet, copy the default.settings.php file and rename the copy to settings.php. Afterwards, open the new file with the editor of your choice and search for the following line: $config_directories = array(); Change the preceding line to the following (or simply insert your addition at the bottom of the file). $config_directories = array( CONFIG_ACTIVE_DIRECTORY => './../config/active', // folder outside the webroot CONFIG_STAGING_DIRECTORY => './../config/staging', // folder outside the webroot ); The directory names can be chosen freely, but it is recommended that you at least use similar names to the default ones so that you or other developers don't get confused when looking at them later. Remember to put these directories outside your webroot, or at least protect the directories using an .htaccess file (if using Apache as the server). Directly after adding the paths to your settings.php file, make sure you remove write permissions from the file as it would be a security risk if someone could change it. 
Drupal will now use your custom location for its configuration files on installation. You can also change the location of the configuration directories after installing Drupal. Open up your settings.php file and find the two lines near the end of the file that start with $config_directories. Change their paths to something like this: $config_directories['active'] = './../config/active'; $config_directories['staging'] = './../config/staging'; These paths place the directories above your Drupal root. Now that you know about active and staging, let's learn more about the different types of configuration you can create on your own. Simple configuration versus configuration entities As soon as you want to start storing your own configuration, you need to understand the differences between simple configuration and configuration entities. Here's a short definition of the two types of configuration used in Drupal. Simple configuration This configuration type is easier to implement and therefore ideal for basic configuration settings that result in Boolean values, integers, or simple strings of text being stored, as well as global variables that are used throughout your site. A good example would be the value of an on/off toggle for a specific feature in your module, or our previously used example of the site name configured by the system module: name: 'Configuration Management in Drupal 8' Simple configuration also includes any settings that your module requires in order to operate correctly. For example, JavaScript aggregation has to be either on or off. If it doesn't exist, the system module won't be able to determine the appropriate course of action. Configuration entities Configuration entities are much more complicated to implement but far more flexible. They are used to store information about objects that users can create and destroy without breaking the code. A good example of configuration entities is an image style provided by the image module. Take a look at the image.style.thumbnail.yml file: uuid: fe1fba86-862c-49c2-bf00-c5e1f78a0f6c langcode: en status: true dependencies: { } name: thumbnail label: 'Thumbnail (100×100)' effects: 1cfec298-8620-4749-b100-ccb6c4500779:    uuid: 1cfec298-8620-4749-b100-ccb6c4500779    id: image_scale    weight: 0    data:      width: 100      height: 100      upscale: false third_party_settings: { } This defines a specific style for images, so the system is able to create derivatives of images that a user uploads to the site. Configuration entities also come with a complete set of create, read, update, and delete (CRUD) hooks that are fired just like any other entity in Drupal, making them an ideal candidate for configuration that might need to be manipulated or responded to by other modules. As an example, the Views module uses configuration entities that allow for a scenario where, at runtime, hooks are fired that allow any other module to provide configuration (in this case, custom views) to the Views module. Summary In this article, you learned about how to store configuration and briefly got to know the two different types of configuration. Resources for Article: Further resources on this subject: Tabula Rasa: Nurturing your Site for Tablets [article] Components - Reusing Rules, Conditions, and Actions [article] Introduction to Drupal Web Services [article]
