How-To Tutorials - CMS and E-Commerce


Part 3: Migrating a WordPress Blog to Middleman and Deploying to Amazon S3

Mike Ball
09 Feb 2015
9 min read
Part 3: Migrating WordPress blog content and deploying to production

In parts 1 and 2 of this series, we created middleman-demo, a basic Middleman-based blog, imported content from WordPress, and deployed middleman-demo to Amazon S3. Now that middleman-demo has been deployed to production, let's design a continuous integration workflow that automates builds and deployments. In part 3, we'll cover the following:

- Testing middleman-demo with RSpec and Capybara
- Integrating with GitHub and Travis CI
- Configuring automated builds and deployments from Travis CI

If you didn't follow parts 1 and 2, or you no longer have your original middleman-demo code, you can clone mine and check out the part3 branch:

```
$ git clone http://github.com/mdb/middleman-demo && cd middleman-demo && git checkout part3
```

Create some automated tests

In software development, the practice of continuous delivery serves to frequently deploy iterative software bug fixes and enhancements, such that users enjoy an ever-improving product. Automated processes, such as tests, assist in rapidly validating quality with each change. middleman-demo is a relatively simple codebase, though much of its build and release workflow can still be automated via continuous delivery.

Let's write some automated tests for middleman-demo using RSpec and Capybara. These tests can assert that the site continues to work as expected with each change. Add the gems to the middleman-demo Gemfile:

```ruby
gem 'rspec'
gem 'capybara'
```

Install the gems:

```
$ bundle install
```

Create a spec directory to house tests:

```
$ mkdir spec
```

As is the convention in RSpec, create a spec/spec_helper.rb file to house the RSpec configuration:

```
$ touch spec/spec_helper.rb
```

Add the following configuration to spec/spec_helper.rb to run middleman-demo during test execution:

```ruby
require "middleman"
require "middleman-blog"
require 'rspec'
require 'capybara/rspec'

Capybara.app = Middleman::Application.server.inst do
  set :root, File.expand_path(File.join(File.dirname(__FILE__), '..'))
  set :environment, :development
end
```

Create a spec/features directory to house the middleman-demo RSpec test files:

```
$ mkdir spec/features
```

Create an RSpec spec file for the homepage:

```
$ touch spec/features/index_spec.rb
```

Let's create a basic test confirming that the Middleman Demo heading is present on the homepage. Add the following to spec/features/index_spec.rb:

```ruby
require "spec_helper"

describe 'index', type: :feature do
  before do
    visit '/'
  end

  it 'displays the correct heading' do
    expect(page).to have_selector('h1', text: 'Middleman Demo')
  end
end
```

Run the test and confirm that it passes:

```
$ rspec
```

You should see output like the following:

```
Finished in 0.03857 seconds (files took 6 seconds to load)
1 example, 0 failures
```

Next, add a test asserting that the first blog post is listed on the homepage; confirm it passes by running the rspec command:

```ruby
it 'displays the "New Blog" blog post' do
  expect(page).to have_selector('ul li a[href="/blog/2014/08/20/new-blog/"]', text: 'New Blog')
end
```

As an example, let's add one more basic test, this time asserting that the New Blog text properly links to the corresponding blog post. Add the following to spec/features/index_spec.rb and confirm that the test passes:

```ruby
it 'properly links to the "New Blog" blog post' do
  click_link 'New Blog'
  expect(page).to have_selector('h2', text: 'New Blog')
end
```
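One more hypothetical spec, to show the same pattern applied to another page type. This example is not part of the original tutorial: it assumes your blog has a post tagged "news" and uses middleman-blog's default tag URL scheme, so adjust the path and expectations to your own content:

```ruby
# spec/features/tag_spec.rb -- illustrative only; adjust tag name and URL to your blog
require "spec_helper"

describe 'tag page', type: :feature do
  before do
    visit '/blog/tags/news/'
  end

  it 'lists the posts tagged "news"' do
    expect(page).to have_selector('ul li a', text: 'New Blog')
  end
end
```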
middleman-demo can be further tested in this fashion. The extent to which the specs test every element of the site's functionality is up to you. At what point can it be confidently asserted that the site looks good, works as expected, and can be publicly deployed to users?

Push to GitHub

Next, push your middleman-demo code to GitHub. If you forked my original github.com/mdb/middleman-demo repository, skip this section.

1. Create a GitHub repository. If you don't already have a GitHub account, create one. Then create a repository through GitHub's web UI called middleman-demo.

2. If your version of middleman-demo is not yet a git repository, initialize one now (if it is already a git repository, skip to step 3). I'm assuming you have git installed and have some basic familiarity with it. Make a middleman-demo git repository:

```
$ git init && git add . && git commit -m 'initial commit'
```

Declare your git origin, where <your_git_url_from_step_1> is your GitHub middleman-demo repository URL:

```
$ git remote add origin <your_git_url_from_step_1>
```

Push to your GitHub repository:

```
$ git push origin master
```

You're done; skip step 3 and move on to Integrate with Travis CI.

3. If you cloned my mdb/middleman-demo repository, you'll need to add your newly created middleman-demo GitHub repository as an additional remote:

```
$ git remote add my_origin <your_git_url_from_step_1>
```

If you are working in a branch, merge all your changes to master. Then push to your GitHub repository:

```
$ git push -u my_origin master
```

Integrate with Travis CI

Travis CI is a distributed continuous integration service that integrates with GitHub. It's free for open source projects. Let's configure Travis CI to run the middleman-demo tests when we push to the GitHub repository.

Log in to Travis CI

First, sign in to Travis CI using your GitHub credentials. Visit your profile and find your middleman-demo repository in the "Repositories" list. Activate Travis CI for middleman-demo by clicking the toggle button "ON."

Create a .travis.yml file

Travis CI looks for a .travis.yml YAML file in the root of a repository. YAML is a simple, human-readable markup language; it's a popular option for authoring configuration files. The .travis.yml file informs Travis how to execute the project's build. Create a .travis.yml file in the root of middleman-demo:

```
$ touch .travis.yml
```

Configure Travis CI to use Ruby 2.1 when building middleman-demo. Add the following YAML to the .travis.yml file:

```yaml
language: ruby
rvm: 2.1
```

Next, declare how Travis CI can install the necessary gem dependencies to build middleman-demo; add the following:

```yaml
install: bundle install
```

Let's also add before_script, which runs the middleman-demo tests to ensure all tests pass in advance of a build:

```yaml
before_script: bundle exec rspec
```

Finally, add a script that instructs Travis CI how to build middleman-demo:

```yaml
script: bundle exec middleman build
```

At this point, the .travis.yml file should look like the following:

```yaml
language: ruby
rvm: 2.1
install: bundle install
before_script: bundle exec rspec
script: bundle exec middleman build
```

Commit the .travis.yml file:

```
$ git add .travis.yml && git commit -m "added basic .travis.yml file"
```

Now, after pushing to GitHub, Travis CI will attempt to install middleman-demo dependencies using Ruby 2.1, run its tests, and build the site.
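As an optional tweak, not part of the original tutorial: Travis CI can cache installed gems between builds with its built-in bundler cache, which shortens the install step on subsequent builds:

```yaml
language: ruby
rvm: 2.1
cache: bundler
install: bundle install
before_script: bundle exec rspec
script: bundle exec middleman build
```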
Travis CI's build output can be seen at https://travis-ci.org/<your_github_username>/middleman-demo.

Add a build status badge

Assuming the build passes, you should see a green "build passing" badge near the top right corner of the Travis CI UI on your Travis CI middleman-demo page. Let's add this badge to the README.md file in middleman-demo, such that a build status badge reflecting the status of the most recent Travis CI build displays on the GitHub repository's README. If one does not already exist, create a README.md file:

```
$ touch README.md
```

Add the following markdown, which renders the Travis CI build status badge:

```
[![Build Status](https://travis-ci.org/<your_github_username>/middleman-demo.svg?branch=master)](https://travis-ci.org/<your_github_username>/middleman-demo)
```

Configure continuous deployments

Through continuous deployments, code is shipped to users as soon as a quality-validated change is committed. Travis CI can be configured to deploy a middleman-demo build with each successful build. Let's configure Travis CI to continuously deploy middleman-demo to the S3 bucket created in part 2 of this tutorial series. First, install the travis command-line tools:

```
$ gem install travis
```

Use the travis command-line tools to set up S3 deployments. Enter the following; you'll be prompted for your S3 details (see the example below if you're unsure how to answer):

```
$ travis setup s3
```

An example session looks like this:

```
Access key ID: <your_aws_access_key_id>
Secret access key: <your_aws_secret_access_key_id>
Bucket: <your_aws_bucket>
Local project directory to upload (Optional): build
S3 upload directory (Optional):
S3 ACL Settings (private, public_read, public_read_write, authenticated_read, bucket_owner_read, bucket_owner_full_control): |private| public_read
Encrypt secret access key? |yes| yes
Push only from <your_github_username>/middleman-demo? |yes| yes
```

This automatically edits the .travis.yml file to include the following deploy information:

```yaml
deploy:
  provider: s3
  access_key_id: <your_aws_access_key_id>
  secret_access_key:
    secure: <your_encrypted_aws_secret_access_key_id>
  bucket: <your_s3_bucket>
  local-dir: build
  acl: !ruby/string:HighLine::String public_read
  on:
    repo: <your_github_username>/middleman-demo
```

Add one additional option, informing Travis to preserve the build directory for use during the deploy process:

```yaml
  skip_cleanup: true
```

The final .travis.yml file should look like the following:

```yaml
language: ruby
rvm: 2.1
install: bundle install
before_script: bundle exec rspec
script: bundle exec middleman build
deploy:
  provider: s3
  access_key_id: <your_aws_access_key>
  secret_access_key:
    secure: <your_encrypted_aws_secret_access_key>
  bucket: <your_aws_bucket>
  local-dir: build
  skip_cleanup: true
  acl: !ruby/string:HighLine::String public_read
  on:
    repo: <your_github_username>/middleman-demo
```

Confirm that your continuous integration works

Commit your changes:

```
$ git add .travis.yml && git commit -m "added travis deploy configuration"
```

Push to GitHub and watch the build output on Travis CI at https://travis-ci.org/<your_github_username>/middleman-demo. If all works as expected, Travis CI will run the middleman-demo tests, build the site, and deploy to the proper S3 bucket.
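One optional safeguard worth knowing about, beyond what the tutorial configures: the deploy section's on: condition also accepts a branch key, which makes the master-only deployment restriction explicit (Travis CI deploys only from master by default):

```yaml
deploy:
  provider: s3
  # ... as above ...
  on:
    repo: <your_github_username>/middleman-demo
    branch: master
```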
Recap

Throughout this series, we've examined the benefits of static site generators and covered some basics regarding Middleman blogging. We've learned how to use the wp2middleman gem to migrate content from a WordPress blog, and we've learned how to deploy Middleman to Amazon's cloud-based Simple Storage Service (S3). We've configured Travis CI to run automated tests, produce a build, and automate deployments.

Beyond what's been covered within this series, there's an extensive Middleman ecosystem worth exploring, as well as numerous additional features. Middleman's custom extensions seek to extend basic Middleman functionality through third-party gems. Read more about Middleman at Middlemanapp.com.

About this author

Mike Ball is a Philadelphia-based software developer specializing in Ruby on Rails and JavaScript. He works for Comcast Interactive Media, where he helps build web-based TV and video consumption applications.


Managing local environments

Packt
09 Feb 2015
15 min read
In this article by Juampy Novillo Requena, author of Drush for Developers, Second Edition, we will learn that Drush site aliases offer a useful way to manage local environments without having to be within Drupal's root directory.

A site alias consists of an array of settings for Drush to access a Drupal project. Aliases can be defined in different locations, using various file structures; you can find all of the variations by running drush topic docs-aliases. In this article, we will use the following variations:

- We will define local site aliases at $HOME/.drush/aliases.drushrc.php, which are accessible anywhere for our command-line user.
- We will define a group of site aliases to manage the development and production environments of our sample Drupal project. These will be defined at sites/all/drush/example.aliases.drushrc.php.

In the following example, we will use the site-alias command to generate a site alias definition for our sample Drupal project:

```
$ cd /home/juampy/projects/example
$ drush --uri=example.local site-alias --alias-name=example.local @self
$aliases["example.local"] = array (
  'root' => '/home/juampy/projects/example',
  'uri' => 'example.local',
  '#name' => 'self',
);
```

The preceding command printed an array structure for the $aliases variable. You can see the root and uri options. There is also an internal property called #name that we can ignore. Now, we will place the preceding output at $HOME/.drush/aliases.drushrc.php so that we can invoke Drush commands against our local Drupal project from anywhere on the command line:

```php
<?php

/**
 * @file
 * User-wide site alias definitions.
 *
 * Site aliases defined here are available everywhere for the current user.
 */

// Sample Drupal project.
$aliases["example.local"] = array (
  'root' => '/home/juampy/projects/example',
  'uri' => 'example.local',
);
```

Here is how we use this site alias in a command. The following example runs the core-status command for our sample Drupal project:

```
$ cd /home/juampy
$ drush @example.local core-status
 Drupal version        : 7.29-dev
 Site URI              : example.local
 Database driver       : mysql
 Database username     : root
 Database name         : drupal7x
 Database              : Connected
 ...
 Drush alias files     : /home/juampy/.drush/aliases.drushrc.php
 Drupal root           : /home/juampy/projects/example
 Site path             : sites/default
 File directory path   : sites/default/files
```

Drush loaded our site alias file and used the root and uri options defined in it to find and bootstrap Drupal. The preceding command is equivalent to the following one:

```
$ drush --root=/home/juampy/projects/example --uri=example.local core-status
```
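Any Drush command can now target the site from anywhere on the filesystem by prefixing it with the alias. As a quick illustration (this invocation is not among the article's examples; it uses the Drush 6/7 cache-clear syntax), the following clears all caches of the sample project without changing into its directory:

```
$ cd /tmp
$ drush @example.local cc all
```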
While $HOME/.drush/aliases.drushrc.php is a good place to define site aliases for your local environment, /etc/drush is a first-class directory for site aliases on servers. Let's now discover how we can connect to remote environments via Drush.

Managing remote environments

Site aliases that reference remote websites can be accessed by Drush through a password-less SSH connection (http://en.wikipedia.org/wiki/Secure_Shell). Before we start with these, let's make sure that we meet the requirements.

Verifying requirements

First, it is recommended to install the same version of Drush on all the servers that host your website. Drush will fail to run a command if it is not installed on the remote machine, except for core-rsync, which runs rsync, a non-Drush command that is available on Unix-like systems. If you can already access the server that hosts your Drupal project through a public key, skip to the next section. If not, you can either use the pushkey command from Drush extras (https://www.drupal.org/project/drush_extras), or continue reading to set it up manually.

Accessing a remote server through a public key

The first thing we need to do is generate a public key for our command-line user on our local machine. Open the command-line interface and execute the following command. We will explain the output step by step:

```
$ cd $HOME
$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/juampy/.ssh/id_rsa):
```

By default, SSH keys are created at $HOME/.ssh/. It is fine to go ahead with the suggested path in the preceding prompt, so let's hit Enter and continue:

```
Created directory '/home/juampy/.ssh'.
Enter passphrase (empty for no passphrase): *********
Enter same passphrase again: *********
```

If the .ssh directory does not exist for the current user, the ssh-keygen command will create it with the correct permissions. We are next prompted to enter a passphrase. It is highly recommended to set one, as it makes our private key safer. Here is the rest of the output once we have entered a passphrase:

```
Your identification has been saved in /home/juampy/.ssh/id_rsa.
Your public key has been saved in /home/juampy/.ssh/id_rsa.pub.
The key fingerprint is:
6g:bf:3j:a2:00:03:a6:00:e1:43:56:7a:a0:c7:e9:f3 juampy@juampy-box
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|                 |
|..               |
|o..*             |
|o + . . S        |
| + * = . .       |
|  = O o . .      |
|   *.o * . .     |
|    .oE oo.      |
+-----------------+
```

The result is a new hidden directory under our $HOME path named .ssh. This directory contains a private key file (id_rsa) and a public key file (id_rsa.pub). The former is to be kept secret by us, while the latter is the one we will copy onto remote servers where we want to gain access. Now that we have a public key, we will announce it to the SSH agent so that it can be used without having to enter the passphrase every time:

```
$ ssh-add ~/.ssh/id_rsa
Identity added: /home/juampy/.ssh/id_rsa (/home/juampy/.ssh/id_rsa)
```

Our key is ready to be used. Assuming that we know an SSH username and password to access the server that hosts the development environment of our website, we will now copy our public key onto it. In the following command, replace exampledev and dev.example.com with your username and your server's URL:

```
$ ssh-copy-id exampledev@dev.example.com
exampledev@dev.example.com's password:
Now try logging into the machine, with "ssh 'exampledev@dev.example.com'", and check in:

  ~/.ssh/authorized_keys

to make sure we haven't added extra keys that you weren't expecting.
```

Our public key has been copied to the server, and we no longer need to enter a password to identify ourselves when we log in to it. We could have logged on to the server ourselves and manually copied the key, but the benefit of using the ssh-copy-id command is that it takes care of setting the right permissions on the ~/.ssh/authorized_keys file.
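If ssh-copy-id is not available on your system (for example, on older Mac OS X releases), a rough manual equivalent is the following one-liner; note that it also sets the permissions that SSH requires:

```
$ cat ~/.ssh/id_rsa.pub | ssh exampledev@dev.example.com \
    'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys'
```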
Let's test it by logging in to the server:

```
$ ssh exampledev@dev.example.com
Welcome!
```

We are ready to set up remote site aliases and run commands using the credentials that we have just configured. We will do this in the next section. If you have any trouble setting up SSH authentication, you can find plenty of debugging tips at https://help.github.com/articles/generating-ssh-keys and http://git-scm.com/book/en/Git-on-the-Server-Generating-Your-SSH-Public-Key.

Defining a group of remote site aliases for our project

Before diving into the specifics of how to define a Drush site alias, let's assume the following scenario: you are part of a development team working on a project that has two environments, each one located on its own server:

- Development, which holds the bleeding-edge version of the project's codebase. It can be reached at http://dev.example.com.
- Production, which holds the latest stable release and real data. It can be reached at http://www.example.com.

Additionally, there might be any number of local environments on each developer's working machine, although these do not need a site alias. Given the preceding scenario, and assuming that we have SSH access to the development and production servers, we will create a group of site aliases that identify them. We will define this group at sites/all/drush/example.aliases.drushrc.php within our Drupal project:

```php
<?php
/**
 * @file
 *
 * Site alias definitions for Example project.
 */

// Development environment.
$aliases['dev'] = array(
  'root' => '/var/www/exampledev/docroot',
  'uri' => 'dev.example.com',
  'remote-host' => 'dev.example.com',
  'remote-user' => 'exampledev',
);

// Production environment.
$aliases['prod'] = array(
  'root' => '/var/www/exampleprod/docroot',
  'uri' => 'www.example.com',
  'remote-host' => 'prod.example.com',
  'remote-user' => 'exampleprod',
);
```

The preceding file defines two arrays for the $aliases variable, keyed by environment name. Drush will find this group of site aliases when invoked from the root of our Drupal project. There are many more settings available, which you can find by reading the contents of the drush topic docs-aliases command. These site aliases contain options known to us: root and uri refer to the remote root path and the hostname of the remote Drupal project. There are also two new settings: remote-host and remote-user. The former defines the hostname of the server hosting the website, while the latter is the user Drush authenticates as when connecting via SSH. Now that we have a group of Drush site aliases to work with, the following examples put them to use.

Using site aliases in commands

Site aliases prepend a command name for Drush to bootstrap the site and then run the command there. Our site aliases are @example.dev and @example.prod. The word example comes from the filename example.aliases.drushrc.php, while dev and prod are the two keys that we added to the $aliases array. Let's see them in action with a few command examples.

Check the status of the development environment:

```
$ cd /home/juampy/projects/example
$ drush @example.dev status
 Drupal version        : 7.26
 Site URI              : http://dev.example.com
 Database driver       : mysql
 Database username     : exampledev
 Drush temp directory  : /tmp
 ...
 Drush alias files     : /home/juampy/projects/example/sites/all/drush/example.aliases.drushrc.php
 Drupal root           : /var/www/exampledev/docroot
 ...
```

The preceding output shows the current status of our development environment. Drush sent the command via SSH to our development environment and rendered back the resulting output. Most Drush commands support site aliases.
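Another common task -- shown here only as an illustrative aside, since it is not among this article's examples -- is pulling the production database down to development with sql-sync; Drush prompts for confirmation before overwriting the target database:

```
$ drush sql-sync @example.prod @example.dev
```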
Let's see the next example. Log in to the development environment and copy all the files from the files directory located in the production environment:

```
$ drush @example.dev site-ssh
Welcome to example.dev server!
$ cd `drush @example.dev drupal-directory`
$ drush core-rsync @example.prod:%files @self:%files
You will destroy data from /var/www/exampledev/docroot/sites/default/files and replace with data from
exampleprod@prod.example.com:/var/www/exampleprod/docroot/sites/default/files/
Do you really want to continue? (y/n): y
```

Note the use of @self in the preceding command, which is a special Drush site alias that represents the current Drupal project where we are located. We are using @self instead of @example.dev because we are already logged in to the development environment. Now, we will move on to the next example. Open a connection with the development environment's database:

```
$ drush @example.dev sql-cli
Welcome to the MySQL monitor. Commands end with ; or \g.
mysql> select database();
+------------+
| database() |
+------------+
| exampledev |
+------------+
1 row in set (0.02 sec)
```

The preceding command is identical to the following set of commands:

```
$ drush @example.dev site-ssh
$ cd /var/www/exampledev
$ drush sql-cli
```

However, Drush is so clever that it opens the connection for us. Isn't this neat? This is one of the commands I use most frequently. Let's finish by looking at our last example. Log in as the administrator user in production:

```
$ drush @example.prod user-login
http://www.example.com/user/reset/1/some-long-token/login
Created new window in existing browser session.
```

The preceding command creates a login URL and attempts to open your default browser with it. I love Drush!

Summary

In this article, we covered practical examples with site aliases. We started by defining a site alias for our local Drupal project, and then went on to write a group of site aliases to manage remote environments for a hypothetical Drupal project with a development and a production site. Before using site aliases for our remote environments, we covered the basics of setting up SSH so that Drush can connect to these servers and run commands there.

Resources for Article:

Further resources on this subject:
- Installing and Configuring Drupal [article]
- Installing and Configuring Drupal Commerce [article]
- 25 Useful Extensions for Drupal 7 Themers [article]


Part 2: Migrating a WordPress Blog to Middleman and Deploying to Amazon S3

Mike Ball
28 Nov 2014
9 min read
Part 2: Migrating WordPress blog content and deploying to production

In part 1 of this series, we created middleman-demo, a basic Middleman-based blog. Part 1 addressed the benefits of a static site, setting up a Middleman development environment, Middleman's templating system, and how to configure a Middleman project to support basic blogging functionality. Now that middleman-demo is configured for blogging, let's export old content from an existing WordPress blog, compile the application for production, and deploy it to a web server. In this part, we'll cover the following:

- Using the wp2middleman gem to migrate content from an existing WordPress blog
- Creating a Rake task to establish an Amazon Web Services S3 bucket
- Deploying a Middleman blog to Amazon S3
- Setting up a custom domain for an S3-hosted site

If you didn't follow part 1, or you no longer have your original middleman-demo code, you can clone mine and check out the part2 branch:

```
$ git clone http://github.com/mdb/middleman-demo && cd middleman-demo && git checkout part2
```

Export your content from WordPress

Now that middleman-demo is configured for blogging, let's export old content from an existing WordPress blog. WordPress provides a tool through which blog content can be exported as an XML file, also called a WordPress "eXtended RSS" or "WXR" file. A WXR file can be generated and downloaded via the WordPress admin's Tools > Export screen, as is explained in WordPress's WXR documentation. In the absence of a real WordPress blog, download the middleman_demo.wordpress.xml file, a sample WXR file:

```
$ wget www.mikeball.info/downloads/middleman_demo.wordpress.xml
```

Migrating the WordPress posts to markdown

To migrate the posts contained in the WordPress WXR file, I created wp2middleman, a command-line tool that generates Middleman-style markdown files from the posts in a WXR. Install wp2middleman via Rubygems:

```
$ gem install wp2middleman
```

wp2middleman provides a wp2mm command. Pass the middleman_demo.wordpress.xml file to the wp2mm command:

```
$ wp2mm middleman_demo.wordpress.xml
```

If all goes well, the following output is printed to the terminal:

```
Successfully migrated middleman_demo.wordpress.xml
```

wp2middleman also produced an export directory. The export directory houses the blog posts from the middleman_demo.wordpress.xml WXR file, now represented as Middleman-style markdown files:

```
$ ls export/
2007-02-14-Fusce-mauris-ligula-rutrum-at-tristique-at-pellentesque-quis-nisl.html.markdown
2007-07-21-Suspendisse-feugiat-enim-vel-lorem.html.markdown
2008-02-20-Suspendisse-rutrum-Suspendisse-nisi-turpis-congue-ac.html.markdown
2008-03-17-Duis-euismod-purus-ac-quam-Mauris-tortor.html.markdown
2008-04-02-Donec-cursus-tincidunt-libero-Nam-blandit.html.markdown
2008-04-28-Etiam-nulla-nisl-cursus-vel-auctor-at-mollis-a-quam.html.markdown
2008-06-08-Praesent-faucibus-ligula-luctus-dolor.html.markdown
2008-07-08-Proin-lobortis-sapien-non-venenatis-luctus.html.markdown
2008-08-08-Etiam-eu-urna-eget-dolor-imperdiet-vehicula-Phasellus-dictum-ipsum-vel-neque-mauris-interdum-iaculis-risus.html.markdown
2008-09-08-Lorem-ipsum-dolor-sit-amet-consectetuer-adipiscing-elit.html.markdown
2013-12-30-Hello-world.html.markdown
```

Note that wp2mm supports additional options, though these are beyond the scope of this tutorial; read more on wp2middleman's GitHub page. Also note that the markdown posts in export are named *.html.markdown, and some contain HTML embedded in the original WordPress post.
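For orientation, a migrated post consists of YAML front matter followed by the post body, roughly like the following (illustrative values; your exported posts will differ):

```
---
title: Hello world
date: 2013-12-30
tags: news
---

Welcome! This is the body of the post as migrated from WordPress; it may
contain markdown as well as HTML carried over from the original post.
```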
Middleman supports the ability to embed multiple languages within a single post file. For example, Middleman will evaluate a file named .html.erb.markdown first as markdown and then as ERb; the final result would be HTML. Move the contents of export to source/blog and remove the export directory:

```
$ mv export/* source/blog && rm -rf export
```

Now, assuming the Middleman server is running, visiting http://localhost:4567 lists all the blog posts migrated from WordPress. Each post links to its permalink. In the case of posts with tags, each tag links to a tag page.

Compiling for production

Thus far, we've been viewing middleman-demo in local development, where the Middleman server dynamically generates the HTML, CSS, and JavaScript with each request. However, Middleman's value lies in its ability to generate a static website -- simple HTML, CSS, JavaScript, and image files -- served directly by a web server such as Nginx or Apache, and thus requiring no application server or internal backend. Compile middleman-demo to a static build directory:

```
$ middleman build
```

The resulting build directory houses every HTML file that can be served by middleman-demo, as well as all necessary CSS, JavaScript, and images. Its directory layout maps to the URL patterns defined in config.rb. The build directory is typically ignored from source control.

Deploying the build to Amazon S3

Amazon Web Services is Amazon's cloud computing platform. Amazon S3, or Simple Storage Service, is a simple data storage service. Because S3 "buckets" can be accessible over HTTP, S3 offers a great cloud-based hosting solution for static websites such as middleman-demo. While S3 is not free, it is generally extremely affordable. Amazon charges on a per-usage basis according to how many requests your bucket serves, including PUT requests, i.e. uploads. Read more about S3 pricing in AWS's pricing guide.

Let's deploy the middleman-demo build to Amazon S3. First, sign up for AWS. Through AWS's web-based admin, create an IAM user and locate the corresponding "access key id" and "secret access key":

1. Visit the AWS IAM console.
2. From the navigation menu, click Users.
3. Select your IAM user name.
4. Click User Actions; then click Manage Access Keys.
5. Click Create Access Key.
6. Click Download Credentials; store the keys in a secure location.

Store your access key id in an environment variable named AWS_ACCESS_KEY_ID:

```
$ export AWS_ACCESS_KEY_ID=your_access_key_id
```

Store your secret access key in an environment variable named AWS_SECRET_ACCESS_KEY:

```
$ export AWS_SECRET_ACCESS_KEY=your_secret_access_key
```

Note that, to persist these environment variables beyond the current shell session, you may want to set them automatically in each shell session. Setting them in a file such as your ~/.bashrc ensures this:

```
export AWS_ACCESS_KEY_ID=your_access_key_id
export AWS_SECRET_ACCESS_KEY=your_secret_access_key
```
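With the keys exported, you can sanity-check them before going further. The short standalone script below is a hypothetical helper -- it is not part of the tutorial, and it relies on the aws-sdk v1 gem installed in the next section -- that fails fast if the credentials are wrong:

```ruby
# check_aws.rb -- hypothetical credentials check
require 'aws-sdk'

s3 = AWS::S3.new(
  access_key_id:     ENV['AWS_ACCESS_KEY_ID'],
  secret_access_key: ENV['AWS_SECRET_ACCESS_KEY'],
  region:            'us-east-1'
)

# Prints your bucket names, or raises if the credentials are rejected.
puts s3.buckets.map(&:name)
```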
Creating an S3 bucket with Ruby

To deploy to S3, we'll need to create a "bucket," or an S3 endpoint to which the middleman-demo build directory can be deployed. This can be done via AWS's management console, but we can also automate its creation with Ruby. We'll use the aws-sdk Ruby gem and a Rake task to create an S3 bucket for middleman-demo. Add the aws-sdk gem to middleman-demo's Gemfile:

```ruby
gem 'aws-sdk'
```

Install the new gem:

```
$ bundle install
```

Create a Rakefile:

```
$ touch Rakefile
```

Add the following Ruby to the Rakefile; this code establishes a Rake task -- a quick command-line utility -- to automate the creation of an S3 bucket:

```ruby
require 'aws-sdk'

desc "Create an AWS S3 bucket"
task :s3_bucket, :bucket_name do |task, args|
  s3 = AWS::S3.new(region: 'us-east-1')
  bucket = s3.buckets.create(args[:bucket_name])
  bucket.configure_website do |config|
    config.index_document_suffix = 'index.html'
    config.error_document_key = 'error/index.html'
  end
end
```

From the command line, use the newly established :s3_bucket Rake task to create a unique S3 bucket for your middleman-demo. Note that, if you have an existing domain you'd like to use, your bucket should be named www.yourdomain.com:

```
$ rake s3_bucket[some_unique_bucket_name]
```

For example, I named my S3 bucket www.middlemandemo.com by entering the following:

```
$ rake s3_bucket[www.middlemandemo.com]
```

After running rake s3_bucket[YOUR_BUCKET], you should see YOUR_BUCKET amongst the buckets listed in your AWS web console.

Creating an error template

Our Rake task specifies a config.error_document_key whose value is error/index.html. This configures your S3 bucket to serve an error page for erroring responses, such as 404s. Create a source/error.html.erb template:

```
$ touch source/error.html.erb
```

And add the following:

```
---
title: Oops - something went wrong
---

<h2><%= current_page.data.title %></h2>
```

Deploying to your S3 bucket

With an S3 bucket established, the middleman-sync Ruby gem can be used to automate uploading middleman-demo builds to S3. Add the middleman-sync gem to the Gemfile:

```ruby
gem 'middleman-sync'
```

Install the middleman-sync gem:

```
$ bundle install
```

Add the necessary middleman-sync configuration to config.rb:

```ruby
activate :sync do |sync|
  sync.fog_provider = 'AWS'
  sync.fog_region = 'us-east-1'
  sync.fog_directory = '<YOUR_BUCKET>'
  sync.aws_access_key_id = ENV['AWS_ACCESS_KEY_ID']
  sync.aws_secret_access_key = ENV['AWS_SECRET_ACCESS_KEY']
end
```

Build and deploy middleman-demo:

```
$ middleman build && middleman sync
```

Note: if your deployment fails with a `'post_connection_check': hostname "YOUR_BUCKET" does not match the server certificate (OpenSSL::SSL::SSLError) (Excon::Errors::SocketError)` error, it's likely due to an open issue with middleman-sync. To work around this issue, add the following to the top of config.rb:

```ruby
require 'fog'
Fog.credentials = { path_style: true }
```

Now, middleman-demo is browseable online at http://YOUR_BUCKET.s3-website-us-east-1.amazonaws.com/

Using a custom domain

With middleman-demo deployed to an S3 bucket whose name matches a domain name, a custom domain can be configured easily. To use a custom domain, log in to your domain management provider and add a CNAME mapping your domain to www.yourdomain.com.s3-website-us-east-1.amazonaws.com. While the exact process for managing a CNAME varies between domain name providers, the process is generally fairly simple. Note that your S3 bucket name must perfectly match your domain name.
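For illustration, in zone-file notation such a CNAME record might look like this (hypothetical values matching the bucket created above):

```
www.middlemandemo.com.  3600  IN  CNAME  www.middlemandemo.com.s3-website-us-east-1.amazonaws.com.
```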
Recap

We've examined the benefits of static site generators and covered some basics regarding Middleman blogging. We've learned how to use the wp2middleman gem to migrate content from a WordPress blog, and we've learned how to deploy Middleman to Amazon's cloud-based Simple Storage Service (S3).

About this author

Mike Ball is a Philadelphia-based software developer specializing in Ruby on Rails and JavaScript. He works for Comcast Interactive Media, where he helps build web-based TV and video consumption applications.


Using front controllers to create a new page

Packt
28 Nov 2014
22 min read
In this article, by Fabien Serny, author of PrestaShop Module Development, you will learn about controllers and object models. Controllers handle display on the front office and permit us to create new page types. Object models handle all required database requests. We will also see that, sometimes, hooks are not enough and can't change the way PrestaShop works. In these cases, we will use overrides, which permit us to alter the default process of PrestaShop without making changes in the core code.

If you need to create a complex module, you will need to use front controllers. First of all, using front controllers will permit you to split the code into several classes (and files) instead of coding all your module actions in the same class. Also, unlike hooks (which handle some of the display in the existing PrestaShop pages), front controllers allow you to create new pages.

Creating the front controller

To make this section easier to understand, we will make an improvement on our current module. Instead of displaying all of the comments (there can be many), we will only display the last three comments and a link that redirects to a page containing all the comments of the product. First of all, we will add a limit to the Db request in the assignProductTabContent method of your module class that retrieves the comments on the product page:

```php
$comments = Db::getInstance()->executeS('
    SELECT * FROM `'._DB_PREFIX_.'mymod_comment`
    WHERE `id_product` = '.(int)$id_product.'
    ORDER BY `date_add` DESC
    LIMIT 3');
```

Now, if you go to a product, you should only see the last three comments. We will now create a controller that will display all comments concerning a specific product. Go to your module's root directory and create the following directory path:

```
/controllers/front/
```

Create the file that will contain the controller. You have to choose a simple and explicit name, since the filename will be used in the URL; let's name it comments.php. In this file, create a class and name it following the [ModuleName][ControllerFilename]ModuleFrontController convention, extending the ModuleFrontController class. So in our case, the file will be as follows:

```php
<?php
class MyModCommentsCommentsModuleFrontController extends ModuleFrontController
{
}
```

The naming convention has been defined by PrestaShop and must be respected. The class names are a bit long, but they enable us to avoid having two identical class names in different modules. Now you just have to set the template file you want to display with the following lines:

```php
class MyModCommentsCommentsModuleFrontController extends ModuleFrontController
{
    public function initContent()
    {
        parent::initContent();
        $this->setTemplate('list.tpl');
    }
}
```

Next, create a template named list.tpl and place it in views/templates/front/ of your module's directory:

```
<h1>{l s='Comments' mod='mymodcomments'}</h1>
```

Now, you can check the result by loading this link on your shop:

```
/index.php?fc=module&module=mymodcomments&controller=comments
```

You should see the Comments title displayed. The fc parameter defines the front controller type, the module parameter defines in which module directory the front controller is, and, at last, the controller parameter defines which controller file to load.
Maintaining compatibility with the Friendly URL option

In order to let visitors access the controller page we created in the preceding section, we will add a link between the last three comments displayed and the comment form in the displayProductTabContent.tpl template. To maintain compatibility with the Friendly URL option of PrestaShop, we will use the getModuleLink method. This will generate a URL according to the URL settings (defined in Preferences | SEO & URLs). If the Friendly URL option is enabled, it will generate a friendly URL (for example, /en/5-tshirts-doctor-who); if not, it will generate a classic URL (for example, /index.php?id_category=5&controller=category&id_lang=1).

This function takes three parameters: the name of the module, the controller filename you want to call, and an array of parameters. The array of parameters must contain all of the data that will be used by the controller. In our case, we will need at least the product identifier, id_product, to display only the comments related to the product. We can also add a module_action parameter, just in case our controller contains several possible actions. Here is an example. As you will notice, I created the parameters array directly in the template using the assign Smarty method. From my point of view, it is easier to have the content of the parameters close to the link. However, if you want, you can create this array in your module class and assign it to your template in order to have cleaner code:

```
<div class="rte">
  {assign var=params value=[
    'module_action' => 'list',
    'id_product'    => $smarty.get.id_product
  ]}
  <a href="{$link->getModuleLink('mymodcomments', 'comments', $params)}">
    {l s='See all comments' mod='mymodcomments'}
  </a>
</div>
```

Now, go to your product page and click on the link; the URL displayed should look something like this:

```
/index.php?module_action=list&id_product=1&fc=module&module=mymodcomments&controller=comments&id_lang=1
```

Creating a small action dispatcher

In our case, we won't need several possible actions in the comments controller. However, it would be great to create a small dispatcher in our front controller, just in case we want to add other actions later. To do so, in controllers/front/comments.php, we will create new methods corresponding to each action. I propose to use the init[Action] naming convention (but this is not mandatory). So in our case, it will be a method named initList:

```php
protected function initList()
{
    $this->setTemplate('list.tpl');
}
```

Now in the initContent method, we will create an $actions_list array containing all possible actions and associated callbacks:

```php
$actions_list = array('list' => 'initList');
```

Now, we will retrieve the id_product and module_action parameters in variables. Once complete, we will check whether the id_product parameter is valid and whether the action exists by checking the $actions_list array. If the method exists, we will dynamically call it:

```php
if ($id_product > 0 && isset($actions_list[$module_action]))
    $this->$actions_list[$module_action]();
```

Here's what your code should look like:

```php
public function initContent()
{
    parent::initContent();
    $id_product = (int)Tools::getValue('id_product');
    $module_action = Tools::getValue('module_action');
    $actions_list = array('list' => 'initList');
    if ($id_product > 0 && isset($actions_list[$module_action]))
        $this->$actions_list[$module_action]();
}
```

If you did this correctly, nothing should have changed when you refresh the page in your browser, and the Comments title should still be displayed.
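To see how the dispatcher pays off later, here is a sketch of the same controller with a hypothetical second action wired in; initAdd and add.tpl do not exist in this article and are shown purely as an illustration:

```php
<?php
class MyModCommentsCommentsModuleFrontController extends ModuleFrontController
{
    public function initContent()
    {
        parent::initContent();
        $id_product = (int)Tools::getValue('id_product');
        $module_action = Tools::getValue('module_action');

        // Each new action only needs an entry here plus an init method.
        $actions_list = array(
            'list' => 'initList',
            'add'  => 'initAdd',
        );
        if ($id_product > 0 && isset($actions_list[$module_action]))
            $this->{$actions_list[$module_action]}();
    }

    protected function initList()
    {
        $this->setTemplate('list.tpl');
    }

    protected function initAdd()
    {
        // Hypothetical: validate and store the submitted comment here,
        // then render a dedicated template.
        $this->setTemplate('add.tpl');
    }
}
```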
Displaying the product name and comments

We will now display the product name (to let the visitor know he or she is on the right page) and the associated comments. First of all, create a public variable, $product, in your controller class, and set it in the initContent method with an instance of the selected product. This way, the product object will be available in every action method:

```php
$this->product = new Product((int)$id_product, false, $this->context->cookie->id_lang);
```

In the initList method, just before setTemplate, we will make a DB request to get all comments associated with the product and then assign the product object and the comments list to Smarty:

```php
// Get comments
$comments = Db::getInstance()->executeS('
    SELECT * FROM `'._DB_PREFIX_.'mymod_comment`
    WHERE `id_product` = '.(int)$this->product->id.'
    ORDER BY `date_add` DESC');

// Assign comments and product object
$this->context->smarty->assign('comments', $comments);
$this->context->smarty->assign('product', $this->product);
```

Once complete, we will display the product name by changing the h1 title:

```
<h1>{l s='Comments on product' mod='mymodcomments'} "{$product->name}"</h1>
```

If you refresh your page, you should now see the product name displayed. I won't explain the comments listing itself, since it's exactly the same HTML code we used in the displayProductTabContent.tpl template. At this point, the comments should appear without the CSS style; do not panic, just go to the next section of this article.

Including CSS and JS media in the controller

As you can see, the comments are now displayed. However, you are probably asking yourself why the CSS style hasn't been applied properly. If you look back at your module class, you will see that it is the hookDisplayProductTab hook on the product page that includes the CSS and JS files. The problem is that we are not on a product page here, so we have to include them on this page. To do so, we will create a method named setMedia in our controller and add the CSS and JS files (as we did in the hookDisplayProductTab hook). It will override the default setMedia method contained in the FrontController class. Since this method includes general CSS and JS files used by PrestaShop, it is very important to call the setMedia parent method in our override:

```php
public function setMedia()
{
    // We call the parent method
    parent::setMedia();

    // Save the module path in a variable
    $this->path = __PS_BASE_URI__.'modules/mymodcomments/';

    // Include the module CSS and JS files needed
    $this->context->controller->addCSS($this->path.'views/css/starrating.css', 'all');
    $this->context->controller->addJS($this->path.'views/js/starrating.js');
    $this->context->controller->addCSS($this->path.'views/css/mymodcomments.css', 'all');
    $this->context->controller->addJS($this->path.'views/js/mymodcomments.js');
}
```

If you refresh your browser, the comments should now appear well formatted. In an attempt to improve the display, we will just add the date of the comment beside the author's name. Replace `<p>{$comment.firstname} {$comment.lastname|substr:0:1}.</p>` in your list.tpl template with this line:

```
<div>{$comment.firstname} {$comment.lastname|substr:0:1}. <small>{$comment.date_add|substr:0:10}</small></div>
```

You can also replace the same line in the displayProductTabContent.tpl template if you want. If you want more information on how Smarty modifiers such as substr (which I used for the date) work, you can check the official Smarty documentation.

Adding a pagination system

Your controller page is now fully working. However, if one of your products has thousands of comments, the display won't be quick. We will add a pagination system to handle this case.
First of all, in the initList method, we need to set a number of comments per page and find out how many comments are associated with the product:

```php
// Get number of comments
$nb_comments = Db::getInstance()->getValue('
    SELECT COUNT(`id_product`)
    FROM `'._DB_PREFIX_.'mymod_comment`
    WHERE `id_product` = '.(int)$this->product->id);

// Init
$nb_per_page = 10;
```

By default, I have set the number per page to 10, but you can set any number you want. The value is stored in a variable to make it easy to change if needed. Now we just have to calculate how many pages there will be:

```php
$nb_pages = ceil($nb_comments / $nb_per_page);
```

Also, set the page the visitor is on:

```php
$page = 1;
if (Tools::getValue('page') != '')
    $page = (int)$_GET['page'];
```

Now that we have this data, we can generate the SQL limit and use it in the comments DB request so as to display the 10 comments corresponding to the page the visitor is on:

```php
$limit_start = ($page - 1) * $nb_per_page;
$limit_end = $nb_per_page;
$comments = Db::getInstance()->executeS('
    SELECT * FROM `'._DB_PREFIX_.'mymod_comment`
    WHERE `id_product` = '.(int)$this->product->id.'
    ORDER BY `date_add` DESC
    LIMIT '.(int)$limit_start.','.(int)$limit_end);
```

If you refresh your browser, you should only see the last 10 comments displayed. To conclude, we just need to add links to the different pages for navigation. First, assign the page the visitor is on and the total number of pages to Smarty:

```php
$this->context->smarty->assign('page', $page);
$this->context->smarty->assign('nb_pages', $nb_pages);
```

Then, in the list.tpl template, we will display numbers in a list from 1 to the total number of pages. On each number, we will add a link with the getModuleLink method we saw earlier, with an additional page parameter:

```
<ul class="pagination">
  {for $count=1 to $nb_pages}
    {assign var=params value=[
      'module_action' => 'list',
      'id_product'    => $smarty.get.id_product,
      'page'          => $count
    ]}
    <li>
      <a href="{$link->getModuleLink('mymodcomments', 'comments', $params)}">
        <span>{$count}</span>
      </a>
    </li>
  {/for}
</ul>
```

To make the pagination clearer for the visitor, we can use the native CSS classes to indicate the page the visitor is on:

```
{if $page ne $count}
  <li>
    <a href="{$link->getModuleLink('mymodcomments', 'comments', $params)}">
      <span>{$count}</span>
    </a>
  </li>
{else}
  <li class="active current">
    <span><span>{$count}</span></span>
  </li>
{/if}
```

Your pagination should now be fully working.
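As a quick sanity check of the paging math -- a standalone sketch, independent of PrestaShop -- 25 comments at 10 per page yield three pages and the following LIMIT clauses:

```php
<?php
$nb_comments = 25;
$nb_per_page = 10;
$nb_pages = (int)ceil($nb_comments / $nb_per_page); // 3

foreach (range(1, $nb_pages) as $page) {
    $limit_start = ($page - 1) * $nb_per_page;
    echo "page $page => LIMIT $limit_start,$nb_per_page\n";
}
// page 1 => LIMIT 0,10
// page 2 => LIMIT 10,10
// page 3 => LIMIT 20,10
```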
Creating routes for a module's controller

At the beginning of this article, we chose to use the getModuleLink method to keep compatibility with the Friendly URL option of PrestaShop. Let's enable this option in the SEO & URLs section under Preferences. Now go to your product page and look at the target of the See all comments link; it should have changed from /index.php?module_action=list&id_product=1&fc=module&module=mymodcomments&controller=comments&id_lang=1 to /en/module/mymodcomments/comments?module_action=list&id_product=1.

The result is nice, but it is not really a friendly URL yet. The ISO code at the beginning of URLs appears only if you have enabled several languages; if you have only one language enabled, the ISO code will not appear in the URL. Since PrestaShop 1.5.3, you can create specific routes for your module's controllers. To do so, you have to attach your module to the ModuleRoutes hook. In your module's install method in mymodcomments.php, add the registerHook method for ModuleRoutes:

```php
// Register hooks
if (!$this->registerHook('displayProductTabContent') ||
    !$this->registerHook('displayBackOfficeHeader') ||
    !$this->registerHook('ModuleRoutes'))
    return false;
```

Don't forget: you will have to uninstall/install your module if you want it to be attached to this hook. If you don't want to uninstall your module (because you don't want to lose all the comments you filled in), you can go to the Positions section under the Modules section of your back office and hook it manually.

Now we have to create the corresponding hook method in the module's class. This method will return an array with all the routes we want to add. The array is a bit complex to explain, so let me write an example first:

```php
public function hookModuleRoutes()
{
    return array(
        'module-mymodcomments-comments' => array(
            'controller' => 'comments',
            'rule' => 'product-comments{/:module_action}{/:id_product}/page{/:page}',
            'keywords' => array(
                'id_product' => array('regexp' => '[\d]+', 'param' => 'id_product'),
                'page' => array('regexp' => '[\d]+', 'param' => 'page'),
                'module_action' => array('regexp' => '[\w]+', 'param' => 'module_action'),
            ),
            'params' => array(
                'fc' => 'module',
                'module' => 'mymodcomments',
                'controller' => 'comments'
            )
        )
    );
}
```

The array can contain several routes. The naming convention for the array key of a route is module-[ModuleName]-[ModuleControllerName]; so in our case, the key is module-mymodcomments-comments. In the array, you have to set the following:

- The controller; in our case, it is comments.
- The construction of the route (the rule parameter). You can use all the parameters you passed in the getModuleLink method by using the {/:YourParameter} syntax. PrestaShop will automatically add / before each dynamic parameter. In our case, I chose to construct the route this way (but you can change it if you want): product-comments{/:module_action}{/:id_product}/page{/:page}
- The keywords array corresponding to the dynamic parameters. For each dynamic parameter, you have to set the regexp that will retrieve it from the URL (basically, [\d]+ for integer values and [\w]+ for string values) and the parameter name.
- The parameters associated with the route. In the case of a module's front controller, these will always be the same three parameters: the fc parameter set to the fixed value module, the module parameter set to the module name, and the controller parameter set to the filename of the module's controller.

Very important: PrestaShop now expects a page parameter to build the link. To avoid fatal errors, you will have to set the page parameter to 1 in your getModuleLink parameters in the displayProductTabContent.tpl template:

```
{assign var=params value=[
  'module_action' => 'list',
  'id_product'    => $smarty.get.id_product,
  'page'          => 1
]}
```

Once complete, if you go to a product page, the target of the See all comments link should now be:

```
/en/product-comments/list/1/page/1
```

It's really better, but we can improve it a little more by setting the name of the product in the URL.
In the assignProductTabContent method of your module, we will load the product object and assign it to Smarty:

```php
$product = new Product((int)$id_product, false, $this->context->cookie->id_lang);
$this->context->smarty->assign('product', $product);
```

This way, in the displayProductTabContent.tpl template, we will be able to add the product's rewritten link to the parameters of the getModuleLink method (do not forget to add it in the list.tpl template too!):

```
{assign var=params value=[
  'module_action'   => 'list',
  'product_rewrite' => $product->link_rewrite,
  'id_product'      => $smarty.get.id_product,
  'page'            => 1
]}
```

We can now update the rule of the route with the product's link_rewrite variable:

```php
'rule' => 'product-comments{/:module_action}{/:product_rewrite}{/:id_product}/page{/:page}',
```

Do not forget to add the product_rewrite string to the keywords array of the route:

```php
'product_rewrite' => array('regexp' => '[\w-_]+', 'param' => 'product_rewrite'),
```

If you refresh your browser, the link should now look like this:

```
/en/product-comments/list/tshirt-doctor-who/1/page/1
```

Nice, isn't it?

Installing overrides with modules

As we saw in the introduction of this article, sometimes hooks are not sufficient to meet the needs of developers; hooks can't alter the default process of PrestaShop. We could add code to core classes; however, this is not recommended, as all those core changes would be erased when PrestaShop is updated using the autoupgrade module (even a manual upgrade would be difficult). That's where overrides take the stage.

Creating the override class

Installing new object models and controller overrides in PrestaShop is very easy. To do so, you have to create an override directory in the root of your module's directory. Then you just have to place your override files so that they mirror the path of the original files you want to override. When you install the module, PrestaShop will automatically move the overrides to the overrides directory of PrestaShop.

In our case, we will override the find method of the /classes/Search.php class to display the grade and the number of comments in the product list. So we just have to create the Search.php file in /modules/mymodcomments/override/classes/Search.php, and fill it with:

```php
<?php
class Search extends SearchCore
{
    public static function find($id_lang, $expr, $page_number = 1,
        $page_size = 1, $order_by = 'position', $order_way = 'desc',
        $ajax = false, $use_cookie = true, Context $context = null)
    {
    }
}
```

In this method, first of all, we will call the parent method to get the products list and return it:

```php
// Call parent method
$find = parent::find($id_lang, $expr, $page_number, $page_size,
    $order_by, $order_way, $ajax, $use_cookie, $context);

// Return products
return $find;
```

We want to add the information (grade and number of comments) to the products list. So, between the find method call and the return statement, we will add some lines of code. First, we will check whether $find contains products. The find method can return an empty array when no products match the search. In this case, we don't have to change the way this method works.
We also have to check whether the mymodcomments module is installed (if the override is being used, the module is most likely installed, but as I said, it's just for security):

```php
if (isset($find['result']) && !empty($find['result']) &&
    Module::isInstalled('mymodcomments'))
{
}
```

If we enter these conditions, we will list the product identifiers returned by the find parent method:

```php
// List id product
$products = $find['result'];
$id_product_list = array();
foreach ($products as $p)
    $id_product_list[] = (int)$p['id_product'];
```

Next, we will retrieve the grade average and number of comments for the products in the list:

```php
// Get grade average and nb comments for products in list
$grades_comments = Db::getInstance()->executeS('
    SELECT `id_product`, AVG(`grade`) as grade_avg,
        count(`id_mymod_comment`) as nb_comments
    FROM `'._DB_PREFIX_.'mymod_comment`
    WHERE `id_product` IN ('.implode(',', $id_product_list).')
    GROUP BY `id_product`');
```

Finally, fill in the $products array with the data (grades and comments) corresponding to each product:

```php
// Associate grade and nb comments with product
foreach ($products as $kp => $p)
    foreach ($grades_comments as $gc)
        if ($gc['id_product'] == $p['id_product'])
        {
            $products[$kp]['mymodcomments']['grade_avg'] = round($gc['grade_avg']);
            $products[$kp]['mymodcomments']['nb_comments'] = $gc['nb_comments'];
        }
$find['result'] = $products;
```

Now, as we saw at the beginning of this section, the module's overrides are installed when you install the module, so you will have to uninstall/install your module. Once this is done, you can check the override contained in your module; the content of /modules/mymodcomments/override/classes/Search.php should be copied to /override/classes/Search.php. If an override of the class already exists, PrestaShop will try to merge it by adding the methods you want to override to the existing override class.

Once the override is added by your module, PrestaShop should have regenerated the cache/class_index.php file (which contains the path of every core class and controller), and the path of the Search class should have changed. Open the cache/class_index.php file and search for 'Search'; the content of this array entry should now be:

```php
'Search' =>
array (
  'path' => 'override/classes/Search.php',
  'type' => 'class',
),
```

If that's not the case, it probably means the permissions on this file are wrong and PrestaShop could not regenerate it. To fix this, just delete the file manually and refresh any page of your PrestaShop. The file will be regenerated and the new path will appear.

Since you uninstalled/installed the module, all your comments will have been deleted. So take two minutes to fill in one or two comments on a product. Then search for this product. As you will have noticed, nothing has changed: data is assigned to Smarty, but not yet used by the template. To avoid deletion of comments each time you uninstall the module, you should comment out the loadSQLFile call in the uninstall method of mymodcomments.php. We will uncomment it once we have finished working with the module.

Editing the template file to display grades on the product list

In a perfect world, you should avoid using overrides. In this case, we could have used the displayProductListReviews hook, but I just wanted to show you a simple example with an override. Moreover, this hook has existed only since PrestaShop 1.6, so it would not work on PrestaShop 1.5.
Now, we will have to edit the product-list.tpl template of the active theme (by default, it is /themes/default-bootstrap/), so the module won't be a turnkey module anymore. A merchant who installs this module will have to manually edit this template if he wants to have this feature.

In the product-list.tpl template, just after the short description, check if the $product.mymodcomments variable exists (to test if there are comments on the product), and then display the grade average and the number of comments:

{if isset($product.mymodcomments)}
<p>
    <b>{l s='Grade:'}</b> {$product.mymodcomments.grade_avg}/5<br/>
    <b>{l s='Number of comments:'}</b> {$product.mymodcomments.nb_comments}
</p>
{/if}

Here is what the products list should look like now:

Creating a new method in a native class

In our case, we have overridden an existing method of a PrestaShop class. But we could have added a method to an existing class. For example, we could have added a method named getComments to the Product class:

<?php
class Product extends ProductCore
{
    public function getComments($limit_start, $limit_end = false)
    {
        $limit = (int)$limit_start;
        if ($limit_end)
            $limit = (int)$limit_start.','.(int)$limit_end;
        $comments = Db::getInstance()->executeS('
            SELECT * FROM `'._DB_PREFIX_.'mymod_comment`
            WHERE `id_product` = '.(int)$this->id.'
            ORDER BY `date_add` DESC
            LIMIT '.$limit);
        return $comments;
    }
}

This way, you could easily access the product comments everywhere in the code just with an instance of the Product class.

Summary

This article taught us about the main design patterns of PrestaShop and explained how to use them to construct a well-organized application.

Resources for Article:

Further resources on this subject:
Django 1.2 E-commerce: Generating PDF Reports from Python using ReportLab [Article]
Customizing PrestaShop Theme Part 2 [Article]
Django 1.2 E-commerce: Data Integration [Article]
How to Deploy a Blog with Ghost and Docker

Felix Rabe
07 Nov 2014
6 min read
2013 gave birth to two wonderful Open Source projects: Ghost and Docker. This post will show you what the buzz is all about, and how you can use them together. So what are Ghost and Docker, exactly?

Ghost is an exciting new blogging platform, written in JavaScript running on Node.js. It features a simple and modern user experience, as well as very transparent and accessible developer communications. This blog post covers Ghost 0.4.2.

Docker is a very useful new development tool to package applications together with their dependencies for automated and portable deployment. It is based on Linux Containers (lxc) for lightweight virtualization, and AUFS for filesystem layering. This blog post covers Docker 1.1.2.

Install Docker

If you are on Windows or Mac OS X, the easiest way to get started using Docker is Boot2Docker. For Linux and more in-depth instructions, consult one of the Docker installation guides. Go ahead and install Docker via one of the above links, then come back and run the following in your terminal to verify your installation:

docker version

If you get about eight lines of detailed version information, the installation was successful. Just running docker will provide you with a list of commands, and docker help <command> will show a command's usage. If you use Boot2Docker, remember to export DOCKER_HOST=tcp://192.168.59.103:2375.

Now, to get the Ubuntu 14.04 base image downloaded (which we'll use in the next sections), run the following command:

docker run --rm ubuntu:14.04 /bin/true

This will take a while, but only for the first time. There are many more Docker images available at the Docker Hub Registry.

Hello Docker

To give you a quick glimpse into what Docker can do for you, run the following command:

docker run --rm ubuntu:14.04 /bin/echo Hello Docker

This runs /bin/echo Hello Docker in its own virtual Ubuntu 14.04 environment, but since it uses Linux Containers instead of booting a complete operating system in a virtual machine, this only takes less than a second to complete. Pretty sweet, huh?

To run Bash, provide the -ti flags for interactivity:

docker run --rm -ti ubuntu:14.04 /bin/bash

The --rm flag makes sure that the container gets removed after use, so any files you create in that Bash session get removed after logging out. For more details, see the Docker Run Reference.

Build the Ghost image

In the previous section, you've run the ubuntu:14.04 image. In this section, we'll build an image for Ghost that we can then use to quickly launch a new Ghost container. While you could get a pre-made Ghost Docker image, for the sake of learning, we'll build our own.

About the terminology: a Docker image is analogous to a program stored on disk, while a Docker container is analogous to a process running in memory.

Now create a new directory, such as docker-ghost, with the following files (you can also find them in this Gist on GitHub):

package.json:

{}

This is the bare minimum actually required, and will be expanded with the current Ghost dependency by the Dockerfile command npm install --save ghost when building the Docker image.

server.js:

#!/usr/bin/env node
var ghost = require('ghost');
ghost({
    config: __dirname + '/config.js'
});

This is all that is required to use Ghost as an NPM module.

config.js:

config = require('./node_modules/ghost/config.example.js');
config.development.server.host = '0.0.0.0';
config.production.server.host = '0.0.0.0';
module.exports = config;

This will make the Ghost server accessible from outside of the Docker container.
Dockerfile:

# DOCKER-VERSION 1.1.2
FROM ubuntu:14.04

# Speed up apt-get according to https://gist.github.com/jpetazzo/6127116
RUN echo "force-unsafe-io" > /etc/dpkg/dpkg.cfg.d/02apt-speedup
RUN echo "Acquire::http {No-Cache=True;};" > /etc/apt/apt.conf.d/no-cache

# Update the distribution
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update
RUN apt-get upgrade -y

# https://github.com/joyent/node/wiki/Installing-Node.js-via-package-manager
RUN apt-get install -y software-properties-common
RUN add-apt-repository -y ppa:chris-lea/node.js
RUN apt-get update
RUN apt-get install -y python-software-properties python g++ make nodejs git
# git needed by 'npm install'

ADD . /src
RUN cd /src; npm install --save ghost

ENTRYPOINT ["node", "/src/server.js"]
# Override ubuntu:14.04 CMD directive:
CMD []
EXPOSE 2368

This Dockerfile will create a Docker image with Node.js and the dependencies needed to build the Ghost NPM module, and prepare Ghost to be run via Docker. See the Dockerfile documentation for details on the syntax.

Now build the Ghost image using:

cd docker-ghost
docker build -t ghost-image .

This will take a while, but you might have to Ctrl-C and re-run the command if, for more than a couple of minutes, you are stuck at the following step:

> node-pre-gyp install --fallback-to-build

Run Ghost

Now start the Ghost container:

docker run --name ghost-container -d -p 2368:2368 ghost-image

If you run Boot2Docker, you'll have to figure out its IP address:

boot2docker ip

Usually, that's 192.168.59.103, so by going to http://192.168.59.103:2368, you will see your fresh new Ghost blog. Yay! For the admin interface, go to http://192.168.59.103:2368/ghost.

Manage the Ghost container

The following commands will come in handy to manage the Ghost container:

# Show all running containers:
docker ps -a

# Show the container logs:
docker logs [-f] ghost-container

# Stop Ghost via a simulated Ctrl-C:
docker kill -s INT ghost-container

# After killing Ghost, this will restart it:
docker start ghost-container

# Remove the container AND THE DATA (!):
docker rm ghost-container

What you'll want to do next

Some steps that are outside the scope of this post, but that you might want to pursue next, are:

Copy and change the Ghost configuration that currently resides in node_modules/ghost/config.js.
Move the Ghost content directory into a separate Docker volume to allow for upgrades and data backups (see the sketch after the resource lists below).
Deploy the Ghost image to production on your public server at your hosting provider. Also, you might want to change the Ghost configuration to match your domain and change the port to 80.

How I use Ghost with Docker

I run Ghost in Docker successfully over at Named Data Education, a new blog about Named Data Networking. I like the fact that I can replicate an isolated setup identically on that server as well as on my own laptop.

Ghost resources

Official docs: The Ghost Guide, and the FAQ- / How-To-like User Guide.
How To Install Ghost, Ghost for Beginners and All About Ghost are a collection of sites that provide more in-depth material on operating a Ghost blog.
By the same guys: All Ghost Themes.
Ghost themes on ThemeForest is also a great collection of themes.

Docker resources

The official documentation provides many guides and references.
Docker volumes are explained here and in this post by Michael Crosby.
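As a starting point for the content-volume step listed under "What you'll want to do next", here is a minimal sketch. It assumes the image built above, where Ghost is installed under /src/node_modules/ghost; the host path /var/ghost-content is an example, and the Docker version covered here (1.1.2) supports host-directory bind mounts via -v:

# Seed the host directory with the image's default content first
# (Ghost expects its themes under content/themes, so copy the
# image's content directory out before mounting over it):
docker run --rm --entrypoint /bin/sh \
  -v /var/ghost-content:/data ghost-image \
  -c 'cp -r /src/node_modules/ghost/content/. /data/'

# Then run Ghost with the content directory mounted from the host:
docker run --name ghost-container -d -p 2368:2368 \
  -v /var/ghost-content:/src/node_modules/ghost/content \
  ghost-image

With posts and images kept on the host, you can rebuild or upgrade the image without losing data, and back up the blog by backing up that one directory.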
About the Author

Felix Rabe has been programming and working with different technologies and companies at different levels since 1993. Currently he is researching and promoting Named Data Networking (http://named-data.net/), an evolution of the Internet architecture that currently relies on the host-bound Internet Protocol.
Migrating a WordPress Blog to Middleman and Deploying to Amazon S3

Mike Ball
07 Nov 2014
11 min read
Part 1: Getting up and running with Middleman

Many of today's most prominent web frameworks, such as Ruby on Rails, Django, WordPress, Drupal, Express, and Spring MVC, rely on a server-side language to process HTTP requests, query data at runtime, and serve back dynamically constructed HTML. These platforms are great, yet developers of dynamic web applications often face complex performance challenges under heavy user traffic, independent of the underlying technology. High traffic, and frequent requests, may exploit processing-intensive code or network latency, in effect yielding a poor user experience or production outage.

Static site generators such as Middleman, Jekyll, and Wintersmith offer developers an elegant, highly scalable alternative to complex, dynamic web applications. Such tools perform dynamic processing and HTML construction during build time rather than runtime. These tools produce a directory of static HTML, CSS, and JavaScript files that can be deployed directly to a web server such as Nginx or Apache. This architecture reduces complexity and encourages a sensible separation of concerns; if necessary, user-specific customization can be handled via client-side interaction with third-party satellite services.

In this three-part series, we'll walk through how to get started in developing a Middleman site, some basics of Middleman blogging, how to migrate content from an existing WordPress blog, and how to deploy a Middleman blog to production. We will also learn how to create automated tests, continuous integration, and automated deployments.

In this part, we'll cover the following:

Creating a basic Middleman project
Middleman configuration basics
A quick overview of the Middleman template system
Creating a basic Middleman blog

Why should you use Middleman?

Middleman is a mature, full-featured static site generator. It supports a strong templating system, numerous Ruby-based HTML templating tools such as ERb and HAML, as well as a Sprockets-based asset pipeline used to manage CSS, JavaScript, and third-party client-side code. Middleman also integrates well with CoffeeScript, SASS, and Compass.

Environment

For this tutorial, I'm using an RVM-installed Ruby 2.1.2. I'm on Mac OS X 10.9.4.

Installing Middleman

Install middleman via RubyGems:

$ gem install middleman

Create a basic Middleman project called middleman-demo:

$ middleman init middleman-demo

This results in a middleman-demo directory with the following layout:

├── Gemfile
├── Gemfile.lock
├── config.rb
└── source
    ├── images
    │   ├── background.png
    │   └── middleman.png
    ├── index.html.erb
    ├── javascripts
    │   └── all.js
    ├── layouts
    │   └── layout.erb
    └── stylesheets
        ├── all.css
        └── normalize.css

There are 5 directories and 10 files.

A quick tour

Here are a few notes on the middleman-demo layout:

The Ruby Gemfile cites Ruby gem dependencies; Gemfile.lock cites the full dependency chain, including middleman-demo's dependencies' dependencies
The config.rb houses middleman-demo's configuration
The source directory houses middleman-demo's source code: the templates, style sheets, images, JavaScript, and other source files required by the middleman-demo site

While a Middleman production build is simply a directory of static HTML, CSS, JavaScript, and image files, Middleman sites can be run via a simple web server in development. Run the middleman-demo development server:

$ middleman

Now, the middleman-demo site can be viewed in your web browser at http://localhost:4567.
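Although this series deploys to production in part 2, you can try the static build mentioned above at any time. middleman build is the standard Middleman CLI command; the exact file listing below will vary with your project:

$ middleman build

The compiled site lands in a build directory of plain files, for example:

build/index.html
build/stylesheets/all.css
build/javascripts/all.js
build/images/middleman.png

Everything under build/ can be served as-is by a web server such as Nginx or Apache, or, as we'll see in part 2, by an Amazon S3 bucket.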
Set up live-reloading

Middleman comes with the middleman-livereload gem. The gem detects source code changes and automatically reloads the Middleman app. Activate middleman-livereload by uncommenting the following code in config.rb:

# Reload the browser automatically whenever files change
configure :development do
  activate :livereload
end

Restart the middleman server to allow the configuration change to take effect. Now, middleman-demo should automatically reload on changes to config.rb, and your web browser should automatically refresh when you edit the source/* code.

Customize the site's appearance

Middleman offers a mature HTML templating system. The source/layouts directory contains layouts, the common HTML surrounding individual pages and shared across your site. middleman-demo uses ERb as its template language, though Middleman supports other options such as HAML and Slim.

Also note that Middleman supports the ability to embed metadata within templates via frontmatter. Frontmatter allows page-specific variables to be embedded via YAML or JSON. These variables are available in a current_page.data namespace. For example, source/index.html.erb contains the following frontmatter specifying a title; it's available to ERb templates as current_page.data.title:

---
title: Welcome to Middleman
---

Currently, middleman-demo is a default Middleman installation. Let's customize things a bit. First, remove all the contents of source/stylesheets/all.css to remove the default Middleman styles. Next, edit source/index.html.erb to be the following:

---
title: Welcome to Middleman Demo
---

<h1>Middleman Demo</h1>

When viewing middleman-demo at http://localhost:4567, you'll now see a largely unstyled HTML document with a single Middleman Demo heading.

Install the middleman-blog plugin

The middleman-blog plugin offers blog functionality to Middleman applications. We'll use middleman-blog in middleman-demo. Add the middleman-blog version 3.5.3 gem dependency to middleman-demo by adding the following to the Gemfile:

gem "middleman-blog", "3.5.3"

Re-install the middleman-demo gem dependencies, which now include middleman-blog:

$ bundle install

Activate middleman-blog and specify a URL pattern at which to serve blog posts by adding the following to config.rb:

activate :blog do |blog|
  blog.prefix = "blog"
  blog.permalink = "{year}/{month}/{day}/{title}.html"
end

Write a quick blog post

Now that all has been configured, let's write a quick blog post to confirm that middleman-blog works. First, create a directory to house the blog posts:

$ mkdir source/blog

The source/blog directory will house markdown files containing blog post content and any necessary metadata. These markdown files highlight a key feature of Middleman: rather than query a relational database within which content is stored, a Middleman application typically reads data from flat files, simple text files, usually markdown, stored within the site's source code repository.

Create a markdown file for middleman-demo's first post:

$ touch source/blog/2014-08-20-new-blog.markdown

Next, add the required frontmatter and content to source/blog/2014-08-20-new-blog.markdown:

---
title: New Blog
date: 2014/08/20
tags: middleman, blog
---

Hello world from Middleman!

Features

* Rich templating system
* Built-in helpers
* Easy configuration
* Asset pipeline
* Lots more

Note that the content is authored in markdown, a plain text syntax, which is evaluated by Middleman as HTML. You can also embed HTML directly in the markdown post files.
GitHub's documentation provides a good overview of markdown.

Next, add the following ERb template code to source/index.html.erb to display a list of blog posts on middleman-demo's home page:

<ul>
  <% blog.articles.each do |article| %>
    <li>
      <%= link_to article.title, article.path %>
    </li>
  <% end %>
</ul>

Now, when running middleman-demo and visiting http://localhost:4567, a link to the new blog post is listed on middleman-demo's home page. Clicking the link renders the permalink for the New Blog blog post at blog/2014/08/20/new-blog.html, as is specified in the blog configuration in config.rb.

A few notes on the template code

Note the use of a link_to method. This is a built-in Middleman template helper. Middleman provides template helpers to simplify many common template tasks, such as rendering an anchor tag. In this case, we pass the link_to method two arguments, the intended anchor tag text and the intended href value. In turn, link_to generates the necessary HTML.

Also note the use of a blog variable, whose articles method houses an array of all blog posts. Where did this come from? middleman-demo is an instance of Middleman::Application, and blog is a method on this instance. To explore other Middleman::Application methods, open middleman-demo via the built-in Middleman console by entering the following in your terminal:

$ middleman console

To view all the methods on blog, including the aforementioned articles method, enter the following within the console:

2.1.2 :001 > blog.methods

To view all the additional methods, beyond blog, available to the Middleman::Application instance, enter the following within the console:

2.1.2 :001 > self.methods

More can be read about all these methods on Middleman::Application's rdoc.info class documentation.

Cleaner URLs

Note that the current new blog URL ends in .html. Let's customize middleman-demo to omit .html from URLs. Add the following to config.rb:

activate :directory_indexes

Now, rather than generating files such as /blog/2014/08/20/new-blog.html, middleman-demo generates files such as /blog/2014/08/20/new-blog/index.html, thus enabling the page to be served by most web servers at a /blog/2014/08/20/new-blog/ path.

Adjusting the templates

Let's adjust the middleman-demo ERb templates a bit. First, note that <h1>Middleman Demo</h1> only displays on the home page; let's make it render on all of the site's pages. Move <h1>Middleman Demo</h1> from source/index.html.erb to source/layouts/layout.erb. Put it just inside the <body> tag:

<body class="<%= page_classes %>">
  <h1>Middleman Demo</h1>
  <%= yield %>
</body>

Next, let's create a custom blog post template. Create the template file:

$ touch source/layouts/post.erb

Add the following to source/layouts/post.erb to extend the site-wide functionality of source/layouts/layout.erb:

<% wrap_layout :layout do %>
  <h2><%= current_article.title %></h2>
  <p>Posted <%= current_article.date.strftime('%B %e, %Y') %></p>
  <%= yield %>
  <ul>
    <% current_article.tags.each do |tag| %>
      <li><a href="/blog/tags/<%= tag %>/"><%= tag %></a></li>
    <% end %>
  </ul>
<% end %>

Note the use of the wrap_layout ERb helper. The wrap_layout ERb helper takes two arguments. The first is the name of the layout to wrap, in this case :layout. The second argument is a Ruby block; the contents of the block are evaluated within the <%= yield %> call of source/layouts/layout.erb.
Next, instruct middleman-demo to use source/layouts/post.erb in serving blog posts by adding the necessary configuration to config.rb:

page "blog/*", :layout => :post

Now, when restarting the Middleman server and visiting http://localhost:4567/blog/2014/08/20/new-blog/, middleman-demo renders a more comprehensive blog template that includes the post's title, date published, and tags.

Let's add a simple template to render a tags page that lists relevant tagged content. First, create the template:

$ touch source/tag.html.erb

And add the necessary ERb to list the relevant posts assigned a given tag:

<h2>Posts tagged <%= tagname %></h2>
<ul>
  <% page_articles.each do |post| %>
    <li>
      <a href="<%= post.url %>"><%= post.title %></a>
    </li>
  <% end %>
</ul>

Specify the blog's tag template by editing the blog configuration in config.rb:

activate :blog do |blog|
  blog.prefix = 'blog'
  blog.permalink = "{year}/{month}/{day}/{title}.html"
  # tag template:
  blog.tag_template = "tag.html"
end

Edit config.rb to configure middleman-demo's tag pages to use source/layouts/layout.erb rather than source/layouts/post.erb:

page "blog/tags/*", :layout => :layout

Now, when visiting http://localhost:4567/blog/2014/08/20/new-blog/, you should see a linked list of New Blog's tags. Clicking a tag should correctly render the tags page.

Part 1 recap

Thus far, middleman-demo serves as a basic Middleman-based blog example. It demonstrates Middleman templating, how to set up the middleman-blog plugin, and how to author markdown-based blog posts in Middleman.

In part 2, we'll cover migrating content from an existing WordPress blog. We'll also step through establishing an Amazon S3 bucket, building middleman-demo, and deploying to production.

In part 3, we'll cover how to create automated tests, continuous integration, and automated deployments.

About this author

Mike Ball is a Philadelphia-based software developer specializing in Ruby on Rails and JavaScript. He works for Comcast Interactive Media where he helps build web-based TV and video consumption applications.

Alfresco Web Scripts

Packt
06 Nov 2014
15 min read
In this article by Ramesh Chauhan, the author of Learning Alfresco Web Scripts, we will cover the following topics:

Reasons to use web scripts
Executing a web script from a standalone Java program
Invoking a web script from Alfresco Share
DeclarativeWebScript versus AbstractWebScript

(For more resources related to this topic, see here.)

Reasons to use web scripts

It's now time to discover the answer to the next question: why web scripts? There are various alternate approaches available to interact with the Alfresco repository, such as CMIS, SOAP-based web services, and web scripts. Generally, web scripts are always chosen as a preferred option among developers and architects when it comes to interacting with the Alfresco repository from an external application. Let's take a look at the various reasons behind choosing a web script as an option instead of CMIS and SOAP-based web services.

In comparison with CMIS, web scripts are explained as follows:

In general, CMIS is a generic implementation, and it basically provides a common set of services to interact with any content repository. It does not attempt to incorporate the services that expose all features of each and every content repository. It basically tries to cover a basic common set of functionalities for interacting with any content repository and provide the services to access such functionalities.

Alfresco provides an implementation of CMIS for interacting with the Alfresco repository. Having a common set of repository functionalities exposed using the CMIS implementation, it may be possible that sometimes CMIS will not do everything that you are aiming to do when working with the Alfresco repository. With web scripts, on the other hand, it will be possible to do the things you are planning to implement and access the Alfresco repository as required. Hence, one of the best alternatives in this case is to use Alfresco web scripts and develop custom APIs as required.

Another important thing to note is that, with the transaction support of web scripts, it is possible to perform a set of operations together in a web script, whereas in CMIS, there is a limitation on transaction usage. It is possible to execute each operation individually, but it is not possible to execute a set of operations together in a single transaction, as is possible in web scripts.
SOAP-based web services are not preferable for the following reasons:

They take a long time to develop
They depend on SOAP
They impose heavier client-side requirements
They need to maintain the resource directory
Scalability is a challenge
They only support XML

In comparison, web scripts have the following properties:

There are no complex specifications
There is no dependency on SOAP
There is no need to maintain the resource directory
They are more scalable, as there is no need to maintain session state
They are a lightweight implementation
They are simple and easy to develop
They support multiple formats

In a developer's opinion:

They can be easily developed using any text editor
No compilation is required when using a scripting language
No server restarts are needed when using a scripting language
No complex installations are required

In essence:

Web scripts are a REST-based and powerful option to interact with the Alfresco repository in comparison to the traditional SOAP-based web services and CMIS alternatives
They provide RESTful access to the content residing in the Alfresco repository and provide uniform access to a wide range of client applications
They are easy to develop and provide some of the most useful features such as no server restarts, no compilations, no complex installations, and no need of a specific tool to develop them
All these points make web scripts the most preferred choice among developers and architects when it comes to interacting with the Alfresco repository

Executing a web script from standalone Java program

There are different options to invoke a web script from a Java program. Here, we will take a detailed walkthrough of the Apache commons HttpClient API with code snippets to understand how a web script can be executed from a Java program, and will briefly mention some other alternatives that can also be used to invoke web scripts from Java programs.

HttpClient

One way of executing a web script is to invoke it using the org.apache.commons.httpclient.HttpClient API. This class is available in commons-httpclient-3.1.jar. Executing a web script with the HttpClient API also requires commons-logging-*.jar and commons-codec-*.jar as supporting JARs. These JARs are available at the tomcat/webapps/alfresco/WEB-INF/lib location inside your Alfresco installation directory. You will need to include them in the build path for your project.

We will try to execute the hello world web script using HttpClient from a standalone Java program. While using HttpClient, here are the steps in general you need to follow:

1. Create a new instance of HttpClient.
2. Create an instance of the method (we will use GetMethod). The URL needs to be passed in the constructor of the method.
3. Set any arguments if required.
4. Provide the authentication details if required.
5. Ask HttpClient to execute the method.
6. Read the response status code and response.
7. Finally, release the connection.

Understanding how to invoke a web script using HttpClient

Let's take a look at the following code snippet considering the previously mentioned steps. In order to test this, you can create a standalone Java program with a main method, put the following code snippet in the Java program, and then modify the web script URLs/credentials as required. Comments are provided in the following code snippet for you to easily correlate the previous steps with the code:

// Create a new instance of HttpClient
HttpClient objHttpClient = new HttpClient();

// Create a new method instance as required. Here it is GetMethod.
GetMethod objGetMethod = new GetMethod("http://localhost:8080/alfresco/service/helloworld");

// Set querystring parameters if required.
objGetMethod.setQueryString(new NameValuePair[] {
    new NameValuePair("name", "Ramesh")
});

// Set the credentials if authentication is required.
Credentials defaultcreds = new UsernamePasswordCredentials("admin", "admin");
objHttpClient.getState().setCredentials(
    new AuthScope("localhost", 8080, AuthScope.ANY_REALM), defaultcreds);

try {
    // Now, execute the method using HttpClient.
    int statusCode = objHttpClient.executeMethod(objGetMethod);
    if (statusCode != HttpStatus.SC_OK) {
        System.err.println("Method invocation failed: "
            + objGetMethod.getStatusLine());
    }

    // Read the response body.
    byte[] responseBody = objGetMethod.getResponseBody();

    // Print the response body.
    System.out.println(new String(responseBody));
} catch (HttpException e) {
    System.err.println("Http exception: " + e.getMessage());
    e.printStackTrace();
} catch (IOException e) {
    System.err.println("IO exception transport error: " + e.getMessage());
    e.printStackTrace();
} finally {
    // Release the method connection.
    objGetMethod.releaseConnection();
}

Note that the Apache commons client is a legacy project now and is not being developed anymore. This project has been replaced by the Apache HttpComponents project in its HttpClient and HttpCore modules. We have used HttpClient from the Apache commons client here to get an overall understanding. Some of the other options that you can use to invoke web scripts from a Java program are mentioned in the subsequent sections.

URLConnection

One option to execute a web script from a Java program is by using java.net.URLConnection. For more details, you can refer to http://docs.oracle.com/javase/tutorial/networking/urls/readingWriting.html.

Apache HTTP components

Another option to execute a web script from a Java program is to use Apache HTTP components, which are the latest available APIs for HTTP communication. These components offer better performance and more flexibility and are available in httpclient-*.jar and httpcore-*.jar. These JARs are available at the tomcat/webapps/alfresco/WEB-INF/lib location inside your Alfresco installation directory. For more details, refer to https://hc.apache.org/httpcomponents-client-4.3.x/quickstart.html to get an understanding of how to execute HTTP calls from a Java program.

RestTemplate

Another alternative would be to use org.springframework.web.client.RestTemplate, available in org.springframework.web-*.jar located at tomcat/webapps/alfresco/WEB-INF/lib inside your Alfresco installation directory. If you are using Alfresco Community 5, the RestTemplate class is available in spring-web-*.jar. Generally, RestTemplate is used in Spring-based services to invoke an HTTP communication.

Calling web scripts from Spring-based services

If you need to invoke an Alfresco web script from Spring-based services, then you need to use RestTemplate to invoke HTTP calls. This is the most commonly used technique to execute HTTP calls from Spring-based classes. In order to do this, the following are the steps to be performed.
The code snippets are also provided:

1. Define RestTemplate in your Spring context file:

<bean id="restTemplate" class="org.springframework.web.client.RestTemplate" />

2. In the Spring context file, inject restTemplate into your Spring class as shown in the following example (note the use of ref rather than value, since restTemplate is a reference to another bean, not a literal):

<bean id="httpCommService" class="com.test.HTTPCallService">
    <property name="restTemplate" ref="restTemplate" />
</bean>

3. In the Java class, define the setter method for restTemplate as follows:

private RestTemplate restTemplate;

public void setRestTemplate(RestTemplate restTemplate) {
    this.restTemplate = restTemplate;
}

4. In order to invoke a web script that has an authentication level set as user authentication, you can use RestTemplate in your Java class as shown in the following code snippet. The following code snippet is an example to invoke the hello world web script using RestTemplate from a Spring-based service:

// setup authentication
String plainCredentials = "admin:admin";
byte[] plainCredBytes = plainCredentials.getBytes();
byte[] base64CredBytes = Base64.encodeBase64(plainCredBytes);
String base64Credentials = new String(base64CredBytes);

// setup request headers
HttpHeaders reqHeaders = new HttpHeaders();
reqHeaders.add("Authorization", "Basic " + base64Credentials);
HttpEntity<String> requestEntity = new HttpEntity<String>(reqHeaders);

// Execute method
ResponseEntity<String> responseEntity = restTemplate.exchange(
    "http://localhost:8080/alfresco/service/helloworld?name=Ramesh",
    HttpMethod.GET, requestEntity, String.class);
System.out.println("Response:" + responseEntity.getBody());

Invoking a web script from Alfresco Share

When working on customizing Alfresco Share, you will need to make calls to Alfresco repository web scripts. In Alfresco Share, you can invoke repository web scripts from two places. One is the component-level presentation web scripts, and the other is client-side JavaScript.

Calling a web script from a presentation web script JavaScript controller

Alfresco Share renders the user interface using presentation web scripts. These presentation web scripts make calls to repository web scripts to render the repository data. The repository web script is called before the component rendering file (for example, get.html.ftl) loads. In an out-of-the-box Alfresco installation, you should be able to see the components' presentation web scripts available under tomcat/webapps/share/WEB-INF/classes/alfresco/site-webscripts.

When developing a custom component, you will be required to write a presentation web script. A presentation web script will make a call to the repository web script. You can make a call to the repository web script as follows:

var response = remote.call("url of web script as defined in description document");
var obj = eval('(' + response + ')');

In the preceding code snippet, we have used the out-of-the-box available remote object to make a repository web script call. The important thing to notice is that we have to provide the URL of the web script as defined in the description document. There is no need to provide the initial part, such as the host or port name, application name, and service path, the way we do while calling a web script from a web browser. Once the response is received, the web script response can be parsed with the use of the eval function. In the out-of-the-box code of Alfresco Share, you can find the presentation web scripts invoking the repository web scripts, as we have seen in the previous code snippet.
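To make the pattern concrete, here is a minimal sketch of a custom presentation web script controller. The filename, URL, and model key are hypothetical; remote and model are the standard objects available to Share controllers, and the status check mirrors what the out-of-the-box controllers do:

// mycomponent.get.js -- hypothetical JavaScript controller for a
// custom Share component.
function main()
{
   // Only the service-relative URL is needed; host, port, and service
   // path are resolved by the remote connector.
   var response = remote.call("/helloworld?name=Ramesh");

   if (response.status == 200)
   {
      // Assuming the repository web script returns JSON.
      var data = eval('(' + response + ')');
      model.greeting = data;
   }
}

main();

Alfresco's own components follow this same pattern.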
For example, take a look at the main() method in the site-members.get.js file, which is available at the tomcat/webapps/share/components/site-members location inside your Alfresco installation directory. You can take a look at the other JavaScript controller implementations for the out-of-the-box presentation web scripts available at tomcat/webapps/share/WEB-INF/classes/alfresco/site-webscripts, making repository web script calls using the previously mentioned technique.

When specifying the path to provide references to the out-of-the-box web scripts, it is mentioned starting with tomcat/webapps. This location is available in your Alfresco installation directory.

Invoking a web script from client-side JavaScript

Client-side JavaScript control files can be associated with components in Alfresco Share. If you need to make a repository web script call, you can do this from the client-side JavaScript control files generally located at tomcat/webapps/share/components. There are different ways you can make a repository web script call using a YUI-based client-side JavaScript file. The following are some of the ways to invoke web scripts from client-side JavaScript files. References are also provided along with each of the ways so you can look at the Alfresco out-of-the-box implementation to understand its usage practically:

Alfresco.util.Ajax.request: Take a look at tomcat/webapps/share/components/console/groups.js and refer to the _removeUser function.
Alfresco.util.Ajax.jsonRequest: Take a look at tomcat/webapps/share/components/documentlibrary/documentlist.js and refer to the onOptionSelect function.
Alfresco.util.Ajax.jsonGet: To directly make a call to a GET web script, take a look at tomcat/webapps/share/components/console/groups.js and refer to the getParentGroups function.
YAHOO.util.Connect.asyncRequest: Take a look at tomcat/webapps/share/components/documentlibrary/tree.js and refer to the _sortNodeChildren function.

In alfresco.js, located at tomcat/webapps/share/js, the wrapper implementation of YAHOO.util.Connect.asyncRequest is provided, and the various available methods such as the ones we saw in the preceding list (Alfresco.util.Ajax.request, Alfresco.util.Ajax.jsonRequest, and Alfresco.util.Ajax.jsonGet) can be found in alfresco.js. Hence, the first three options in the previous list internally make a call using YAHOO.util.Connect.asyncRequest (the last option in the previous list) only.

Calling a web script from the command line

Sometimes while working on your project, it might be required that you invoke a web script from a Linux machine, or create a shell script to invoke a web script. It is possible to invoke a web script from the command line using cURL, which is a valuable tool to use while working on web scripts. You can install cURL on Linux, Mac, or Windows and execute a web script from the command line. Refer to http://curl.haxx.se/ for more details on cURL. You will be required to install cURL first. On Linux, you can install cURL using apt-get. On Mac, you should be able to install cURL through MacPorts, and on Windows, you can install it using Cygwin. Once cURL is installed, you can invoke a web script from the command line as follows:

curl -u admin:admin "http://localhost:8080/alfresco/service/helloworld?name=Ramesh"

This will display the web script response.

DeclarativeWebScript versus AbstractWebScript

The web script framework in Alfresco provides two different helper classes from which the Java-backed controller can be derived. It's important to understand the difference between them.
The first helper class is the one we used while developing the web script in this article, org.springframework.extensions.webscripts.DeclarativeWebScript. The second one is org.springframework.extensions.webscripts.AbstractWebScript. DeclarativeWebScript in turn extends the AbstractWebScript class.

If the Java-backed controller is derived from DeclarativeWebScript, then execution assistance is provided by the DeclarativeWebScript class. This helper class basically encapsulates the execution of the web script and checks whether any controller written in JavaScript is associated with the web script or not. If any JavaScript controller is found for the web script, then this helper class will execute it. This class will locate the associated response template of the web script for the requested format and will pass the populated model object to the response template. For a controller extending DeclarativeWebScript, the controller logic for a web script should be provided in the Map<String, Object> executeImpl(WebScriptRequest req, Status status, Cache cache) method. Most of the time while developing a Java-backed web script, the controller will extend DeclarativeWebScript only.

AbstractWebScript does not provide execution assistance in the way DeclarativeWebScript does. It gives full control over the entire execution process to the derived class and allows the extending class to decide how the output is to be rendered. One good example of this is the DeclarativeWebScript class itself. It extends the AbstractWebScript class and provides a mechanism to render the response using FTL templates. In a scenario like streaming content, there won't be any need for a response template; instead, the content itself needs to be rendered directly. In this case, the Java-backed controller class can extend AbstractWebScript.

If a web script has both a JavaScript-based controller and a Java-backed controller, then:

If the Java-backed controller is derived from DeclarativeWebScript, the Java-backed controller will get executed first, and then control will be passed to the JavaScript controller prior to returning the model object to the response template.
If the Java-backed controller is derived from AbstractWebScript, only the Java-backed controller will be executed. The JavaScript controller will not get executed.

Summary

In this article, we took a look at the reasons for using web scripts. Then we executed a web script from a standalone Java program and moved on to invoking a web script from Alfresco Share. Lastly, we saw the difference between DeclarativeWebScript and AbstractWebScript.

Resources for Article:

Further resources on this subject:
Alfresco 3 Business Solutions: Types of E-mail Integration [article]
Alfresco 3: Writing and Executing Scripts [article]
Overview of REST Concepts and Developing your First Web Script using Alfresco [article]

Quizzes and Interactions in Camtasia Studio

Packt
21 Aug 2014
12 min read
This article by David B. Demyan, the author of the book eLearning with Camtasia Studio, covers the different types of interactions, a description of how interactions are created and how they function, and the quiz feature. In this article, we will cover the following specific topics:

The types of interactions available in Camtasia Studio
Video player requirements
Creating simple action hotspots
Using the quiz feature

(For more resources related to this topic, see here.)

Why include learner interactions?

Interactions in e-learning support cognitive learning, the application of behavioral psychology to teaching. Students learn a lot when they perform an action based on the information they are presented. Without exhausting the volumes written about this subject, your own background has probably prepared you for creating effective materials that support cognitive learning. To boil it down for our purposes, you present information in chunks and ask learners to demonstrate whether they have received the signal. In the classroom, this is immortalized as a teacher presenting a lecture and asking questions, a basic educational model. In another scenario, it might be an instructor showing a student how to perform a mechanical task and then asking the student to repeat the same task.

We know from experience that learners struggle with concepts if you present too much information too rapidly without checking to see if they understand it. In e-learning, the most effective ways to prevent confusion involve chunking information into small, digestible bites and mapping them into an overall program that allows the learner to progress in a logical fashion, all the while interacting and demonstrating comprehension. Interaction is vital to keep your students awake and aware. Interaction, or two-way communication, can take your e-learning video to the next level: a true cognitive learning experience.

Interaction types

While Camtasia Studio does not pretend to be a full-featured interactive authoring tool, it does contain some features that allow you to build interactions and quizzes. This section defines those features that support learners in taking action while viewing an e-learning video when you request an interaction. There are three types of interactions available in Camtasia Studio:

Simple action hotspots
Branching hotspots
Quizzes

You are probably thinking of ways these techniques can help support cognitive learning.

Simple action hotspots

Hotspots are click areas. You indicate where the hotspot is using a visual cue, such as a callout. Camtasia allows you to designate the area covered by the callout as a hotspot and define the action to take when it is clicked. An example is to take the learner to another time in the video when the hotspot is clicked. Another click could take the learner back to the original place in the video.

Quizzes

Quizzes are simple questions you can insert in the video, created and implemented to conform to your testing strategy. The question types available are as follows:

Multiple choice
Fill in the blanks
Short answers
True/false

Video player requirements

Before we learn how to create interactions in Camtasia Studio, you should know some special video player requirements. A simple video file playing on a computer cannot be interactive by itself. A video created and produced in Camtasia Studio without including some additional program elements cannot react when you click on it except for what the video player tells it to do.
For example, the default player for YouTube videos stops and starts the video when you click anywhere in the video space. Click interactions in videos created with Camtasia are able to recognize where clicks occur and the actions to take. You provide the click instructions when you set up the interaction. These instructions are required, for example, to intercept the clicking action, determine where exactly the click occurred, and link that spot with a command and destination.

These click instructions may be any combination of HyperText Markup Language (HTML), HTML5, JavaScript, and Flash ActionScript. Camtasia takes care of creating the coding behind the scenes, associated with the video player being used. In the case of videos produced with Camtasia Studio, to implement any form of interactivity, you need to select the default Smart Player output options when producing the video.

Creating simple hotspots

The most basic interaction is clicking a hotspot layered over the video. You can create an interactive hotspot for many purposes, including the following:

Taking learners to a specific marker or frame within the video, as determined on the timeline
Allowing learners to replay a section of the video
Directing learners to a website or document to view reference material
Showing a pop up with additional information, such as a phone number or web link

Try it – creating a hotspot

If you are building the exercise project featured in this book, let's use it to create an interactive hotspot. The task in this exercise is to pause the video and add a Replay button to allow viewers to review a task. After the replay, a prompt will be added to resume the video from where it was paused.

Inserting the Replay/Continue buttons

The first step is to insert a Replay button to allow viewers to review what they just saw or continue without reviewing. This involves adding two hotspot buttons on the timeline, which can be done by performing the following steps:

1. Open your exercise project in Camtasia Studio or one of your own projects where you can practice.
2. Position the play head right after the part where text is shown being pasted into the CuePrompter window.
3. From the Properties area, select Callouts from the task tabs above the timeline.
4. In the Shape area, select Filled Rounded Rectangle (at the upper-right corner of the drop-down selection). A shape is added to the timeline.
5. Set the Fade in and Fade out durations to about half a second.
6. Select the Effects dropdown and choose Style. Choose the 3D Edge style. It looks like a raised button. Set any other formatting so the button looks the way you want in the preview window.
7. In the Text area, type your button text. For the sample project, enter Replay Copy & Paste.
8. Select the button in the preview window and make a copy of the button. You can use Ctrl + C to copy and Ctrl + V to paste the button.
9. In the second copy of the button, select the text and retype it as Continue. It should be stacked on the timeline as shown in the following screenshot:
10. Select the Continue button in the preview window and drag it to the right-hand side, at the same height and distance from the edge. The final placement of the buttons is shown in the sample project.
11. Save the project.

Adding a hotspot to the Continue button

The buttons are currently inactive images on the timeline. Viewers could click them in the produced video, but nothing would happen. To make them active, enable the Hotspot properties for each button.
To add a hotspot to the Continue button, perform the following steps:

1. With the Continue button selected, select the Make hotspot checkbox in the Callouts panel.
2. Click on the Hotspot Properties... button to set properties for the callout button.
3. Under Actions, make sure to select Click to continue.
4. Click on OK.

The Continue button now has an active hotspot assigned to it. When published, the video will pause when the button appears. When the viewer clicks on Continue, the video will resume playing. You can test the video and the operation of the interactive buttons as described later in this article.

Adding a hotspot to the Replay button

Now, let's move on to create an action for the Replay copy & paste button:

1. Select the Replay copy & paste button in the preview window.
2. Select the Make hotspot checkbox in the Callouts panel.
3. Click on the Hotspot properties... button.
4. Under Actions, select Go to frame at time.
5. Enter the time code for the spot on the timeline where you want to start the replay. In the sample video, this is around 0:01:43;00, just before text is copied in the script.
6. Click on OK.
7. Save the project.

The Replay copy & paste button now has an active hotspot assigned to it. Later, when published, the video will pause when the button appears. When viewers click on Replay copy & paste, the video will be repositioned at the time you entered and begin playing from there.

Using the quiz feature

A quiz added to a video sets it apart. The addition of knowledge checks and quizzes to assess your learners' understanding of the material presented puts the video into the true e-learning category. By definition, a knowledge check is a way for the student to check their understanding without worrying about scoring. Typically, feedback is given to the student for them to better understand the material, the question, and their answer. The feedback can be terse, such as correct and incorrect, or it can be verbose, informing whether the answer is correct or not and perhaps giving additional information, a hint, or even the correct answers, depending on your strategy in creating the knowledge check.

A quiz can be in the same form as a knowledge check, but a record of the student's answer is created and reported to an LMS or via an e-mail report. Feedback to the student is optional, again depending on your testing strategy.

In Camtasia Studio, you can insert a quiz question or set of questions anywhere on the timeline you deem appropriate. This is done with the Quizzing task tab.

Try it – inserting a quiz

In this exercise, you will select a spot on the timeline to insert a quiz, enable the Quizzing feature, and write some appropriate questions following the sample project, Using CuePrompter.

Creating a quiz

Place your quiz after you have covered a block of information. The sample project, Using CuePrompter, is a very short task-based tutorial, showing some basic steps. Assume for now that you are teaching a course on CuePrompter and need to assess students' knowledge. I believe a good place for a quiz is after the commands to scroll forward, speed up, slow down, and scroll reverse. Let's give it a try with multiple choice and true/false questions:

1. Position the play head at the appropriate part of the timeline. In the sample video, the end of the scrolling command description is at about 3 minutes 12 seconds.
2. Select Quizzing in the task tabs. If you do not see the Quizzing tab above the timeline, select the More tab to reveal it.
3. Click on the Add quiz button to begin adding questions.
A marker appears on the timeline where your quiz will appear during the video, as illustrated in the following screenshot:

4. In the Quiz panel, add a quiz name. In the sample project, the quiz is entitled CuePrompter Commands.
5. Scroll down to Question type. Make sure Multiple Choice is selected from the dropdown.
6. In the Question box, type the question text. In the sample project, the first question is With text in the prompter ready to go, the keyboard control to start scrolling forward is _________________.
7. In the Answers box, double-click on the checkbox text that says Default Answer Text. Retype the answer Control-F.
8. In the next checkbox text that says <Type an answer choice here>, double-click on it and add the second possible answer, Spacebar. Check the box next to it to indicate that it is the correct answer.
9. Add two more choices: Alt-Insert and Tab. Your Quiz panel should look like the following screenshot:
10. Click on Add question.
11. From the Question type dropdown, select True/False.
12. In the Question box, type You can stop CuePrompter with the End key.
13. In Answers, select False.
14. For the final question, click on Add question again.
15. From the Question type dropdown, select Multiple Choice.
16. In the Question box, type Which keyboard command tells CuePrompter to reverse?
17. Enter the four possible answers: Left arrow, Right arrow, Down arrow, and Up arrow.
18. Select Down arrow as the correct answer.
19. Save the project.

Now you have entered three questions and answer choices, while indicating the choice that will be scored correct if selected. Next, preview the quiz to check format and function.

Previewing the quiz

Camtasia Studio allows you to preview quizzes for correct formatting, wording, and scoring. Continue to follow along in the exercise project and perform the following steps:

1. Leave checkmarks in the Score quiz and Viewer can see answers after submitting boxes.
2. Click on the Preview button. A web page opens in your Internet browser showing the questions, as shown in the following screenshot:
3. Select an answer and click on Next. The second quiz question is displayed.
4. Select an answer and click on Next. The third quiz question is displayed.
5. Select an answer and click on Submit Answers. As this is the final question, there is no Next.
6. Since we left the Score quiz and Viewer can see answers after submitting options selected, the learner receives a prompt, as shown in the following screenshot:
7. Click on View Answers to review the answers you gave. Correct responses are shown with a green checkmark and incorrect ones are shown with a red X mark.
8. If you do not want your learners to see the answers, remove the checkmark from Viewer can see answers after submitting.
9. Exit the browser to discontinue previewing the quiz.
10. Save the project.

This completes the Try it exercise for inserting and previewing a quiz in your video e-learning project.

Summary

In this article, we learned about the different types of interactions, video player requirements, creating simple action hotspots, and inserting and previewing a quiz.

Resources for Article:

Further resources on this subject:
Introduction to Moodle [article]
Installing Drupal [article]
Make Spacecraft Fly and Shoot with Special Effects using Blender 3D 2.49 [article]

Now You're Ready!

Packt
20 Aug 2014
14 min read
In this article by Ryan John, author of the book Canvas LMS Course Design, we will have a look at the key points encountered during the course-building process, along with connections to educational philosophies and practices that support technology as a powerful way to enhance teaching and learning.

(For more resources related to this topic, see here.)

As you finish teaching your course, you will be well served to export your course to keep as a backup, to upload and reteach later within Canvas to a new group of students, or to import into another LMS. After covering how to export your course, we will tie everything we've learned together through a discussion of how Canvas can help you and your students achieve educational goals while acquiring important 21st century skills. Overall, we will cover the following topics:

Exporting your course from Canvas to your computer
Connecting Canvas to education in the 21st century

Exporting your course

Now that your course is complete, you will want to export the course from Canvas to your computer. When you export your course, Canvas compiles all the information from your course and allows you to download a single file to your computer. This file will contain all of the information for your course, and you can use this file as a master template for each time you or your colleagues teach the course. Exporting your course is helpful for two main reasons:

It is wise to save a back-up version of your course on a computer. After all the hard work you have put into building and teaching your course, it is always a good decision to export your course and save it to a computer. If you are using a Free for Teachers account, your course will remain intact and accessible online until you choose to delete it. However, if you use Canvas through your institution, each institution has different procedures and policies in place regarding what happens to courses when they are complete. Exporting and saving your course will preserve your hard work and protect it from any accidental or unintended deletion.

Once you have exported your course, you will be able to import your course into Canvas at any point in the future. You are also able to import your course into other LMSs such as Moodle or BlackBoard. You might wish to import your course back into Canvas if your course is removed from your institution-specific Canvas account upon completion. You will have a copy of the course to import for the next time you are scheduled to teach the same course. You might build and teach a course using a Free for Teachers account, and then later wish to import that version of the course into an institution-specific Canvas account or another LMS.

Exporting your course does not remove the course from Canvas; your course will still be accessible on the Canvas site unless it is automatically deleted by your institution or you choose to delete it.

To export your entire course, complete the following steps:

1. Click on the Settings tab at the bottom of the left-hand side menu, as pictured in the following screenshot:
2. On the Settings page, look to the right-hand side menu. Click on the Export Course Content button, which is highlighted in the following screenshot:
3. A screen will appear asking you whether you would like to export the Course or export a Quiz. To export your entire course, select the Course option and then click on Create Export, as shown in the following screenshot:
4. Once you click on Create Export, a progress bar will appear.
As indicated in the message below the progress bar, the export might take a while to complete, and you can leave the page while Canvas exports the content. The following screenshot displays this progress bar and message:

5. When the export is complete, you will receive an e-mail from notifications@instructure.com that resembles the following screenshot. Click on the Click to view exports link in the e-mail:
6. A new window or tab will appear in your browser that shows your Content Exports. Below the heading of the page, you will see your course export listed with a link that reads Click here to download, as pictured in the following screenshot. Go ahead and click on the link, and the course export file will be downloaded to your computer.

Your course export file will be downloaded to your computer as a single .imscc file. You can then move the downloaded file to a folder on your computer's hard drive for later access. Your course export is complete, and you can save the exported file for later use. To access the content stored in the exported .imscc file, you will need to import the file back into Canvas or another LMS.

You might notice an option to Conclude this Course on the course Settings page if your institution has not hidden or disabled this option. In most cases, it is not necessary to conclude your course if you have set the correct course start and end dates in your Course Details. Concluding your course prevents you from altering grades or accessing course content, and you cannot unconclude your course on your own. Some institutions conclude courses automatically, which is why it is always best to export your course to preserve your work.
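If your institution grants you API access, the same export can be scripted rather than clicked through. The following is a minimal sketch using the Canvas REST API with curl; the domain, course ID, export ID, and access token are placeholders, and whether this endpoint is available to you depends on your institution's settings:

# Kick off a course export in Common Cartridge (.imscc) format
curl -X POST "https://yourschool.instructure.com/api/v1/courses/123/content_exports" \
  -H "Authorization: Bearer YOUR_ACCESS_TOKEN" \
  -d "export_type=common_cartridge"

# Poll the export; when its workflow_state is "exported", the JSON
# response includes an attachment URL from which to download the file
curl "https://yourschool.instructure.com/api/v1/courses/123/content_exports/456" \
  -H "Authorization: Bearer YOUR_ACCESS_TOKEN"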
Now that we have covered the last how-to aspects of Canvas, let's close with some ways to apply the skills we have learned in this book to contemporary educational practices, philosophies, and requirements that you might encounter in your teaching.

Connecting Canvas to education in the 21st century

While learning how to use the features of Canvas, it is easy to forget the main purpose of Canvas' existence: to better serve your students and you in the process of education. In the midst of rapidly evolving technology, students and teachers alike require skills that are as adaptable and fluid as the technologies and new ideas swirling around them. While the development of various technologies might seem daunting, those involved in education in the 21st century have access to new and exciting tools that have never before existed. As an educator seeking to refine your craft, utilizing tools such as Canvas can help you and your students develop the skills that are becoming increasingly necessary to live and thrive in the 21st century. As attainment of these skills is indeed proving more and more valuable in recent years, many educational systems have begun to require evidence that instructors are cognizant of these skills and actively working to ensure that students are reaching valuable goals.

Enacting the Framework for 21st Century Learning

As education across the world continues to evolve through time, the development of frameworks, methods, and philosophies of teaching has shaped the world of formal education. In recent years, one such approach that has gained prominence in the United States' education systems is the Framework for 21st Century Learning, which was developed over the last decade through the work of the Partnership for 21st Century Skills (P21). This partnership between education, business, community, and government leaders was founded to help educators provide children in Kindergarten through 12th Grade (K-12) with the skills they would need going forward into the 21st century. Though the focus of P21 is on children in grades K-12, the concepts and knowledge articulated in the Framework for 21st Century Learning are valuable for learners at all levels, including those in higher education. In the following sections, we will apply our knowledge of Canvas to the desired 21st century student outcomes, as articulated in the P21 Framework for 21st Century Learning, to brainstorm the ways in which Canvas can help prepare your students for the future.

Core subjects and 21st century themes

The Framework for 21st Century Learning describes the importance of learning certain core subjects, including English, reading or language arts, world languages, the arts, mathematics, economics, science, geography, history, government, and civics. In connecting these core subjects to the use of Canvas, the features of Canvas and the tips throughout this book should enable you to successfully teach courses in any of these subjects. In tandem with teaching and learning within the core subjects, P21 also advocates for schools to "promote understanding of academic content at much higher levels by weaving 21st century interdisciplinary themes into core subjects." The following examples offer insight and ideas for ways in which Canvas can help you integrate these interdisciplinary themes into your course. As you read through the following suggestions and ideas, think about strategies that you might be able to implement into your existing curriculum to enhance its effectiveness and help your students engage with the P21 skills:

• Global awareness: Since it is accessible from anywhere with an Internet connection, Canvas opens the opportunity for a myriad of interactions across the globe. Utilizing Canvas as the platform for a purely online course enables students from around the world to enroll in your course. As a distance-learning tool in colleges, universities, or continuing education departments, Canvas has the capacity to unite students from anywhere in the world to directly interact with one another. You might utilize the graded discussion feature for students to post a reflection about a class reading that considers their personal cultural background and how that affects their perception of the content. Taking it a step further, you might require students to post a reply comment on other students' reflections to further spark discussion, collaboration, and cross-cultural connections. As a reminder, it is always best to include an overview of online discussion etiquette somewhere within your course; you might consider adding a "Netiquette" section to your syllabus to maintain focus and a professional tone within these discussions. You might also set up a conference through Canvas with an international colleague as a guest lecturer for a course in any subject. As a prerequisite assignment, you might ask students to prepare three questions to ask the guest lecturer to facilitate a real-time international discussion within your class.
• Financial, economic, business, and entrepreneurial literacy: As the world becomes increasingly digitized, accessing and incorporating current content from the Web is a great way to incorporate financial, economic, business, and entrepreneurial literacy into your course. In a math course, you might consider creating a course module centered around the stock market. Within the module, you could build custom content pages offering direct instruction and introductions to specific topics. You could upload course readings and embed videos of interviews with experts with the YouTube app. You could link to live stream websites showing the movement of the markets and create quizzes to assess students' understanding.
• Civic literacy: In fostering students' understanding of their role within their communities, Canvas can serve as a conduit of information regarding civic responsibilities, procedures, and actions. You might create a discussion assignment in which students search the Internet for a news article about a current event and post a reflection with connections to other content covered in the course. Offering guidance in your instructions to address how local and national citizenship impacts students' engagement with the event or incident could deepen the nature of the responses you receive. Since discussion posts are visible to all participants in your course, a follow-up assignment might be for students to read one of the articles posted by another student and critique or respond to their reflection.
• Health literacy: Canvas can allow you to facilitate the exploration of health and wellness through the wide array of submission options for assignments. By utilizing the variety of assignment types you can create within Canvas, students are able to explore course content in new and meaningful ways. In a studio art class, you can create an out-of-class assignment to be submitted to Canvas in which students research the history, nature, and benefits of art therapy online and then create and upload a video sharing their personal relationship with art and connecting it to what they have found in the art therapy stories of others.
• Environmental literacy: As a cloud-based LMS, Canvas allows you to share files and course content with your students while maintaining and fostering an awareness of environmental sustainability. In any course you teach that involves readings uploaded to Canvas, encourage your students to download the readings to their computers or mobile devices rather than printing the content onto paper. Downloading documents to read on a device instead of printing them saves paper, reduces waste, and helps foster sustainable environmental habits. For PDF files embedded into content pages on Canvas, students can click on the preview icon that appears next to the document link and read the file directly on the content page without downloading or printing anything. Make a conscious effort to mention or address the environmental impacts of online learning versus traditional classroom settings, perhaps during a synchronous class conference or on a discussion board.

Learning and innovation skills

A number of specific elements combined can enable students to develop learning and innovation skills to prepare them for the increasingly "complex life and work environments in the 21st century."
The communication setup of Canvas allows for quick and direct interactions while offering students the opportunity to contemplate and revise their contributions before posting to the course, submitting an assignment, or interacting with other students. This flexibility, combined with the ways in which you design your assignments, can help incorporate the following elements into your course to ensure the development of learning and innovation skills:

• Creativity and innovation: There are many ways in which the features of Canvas can help your students develop their creativity and innovation. As you build your course, finding ways for students to think creatively, work creatively with others, and implement innovations can guide the creation of your course assignments. You might consider assigning groups of students to assemble a content page within Canvas dedicated to a chosen or assigned topic. Do so by creating a content page, and then enable any user within the course to edit the page. Allowing students to experiment with the capabilities of the Rich Content Editor, embedding outside content and synthesizing ideas within Canvas, allows each group's creativity to shine. As a follow-up assignment, you might choose to have students transfer the content of their content page to a public website or blog using sites such as Wikispaces, Wix, or Weebly. Once the sites are created, students can post their group site to a Canvas discussion page, where other students can view and interact with the work of their peers. Asking students to disseminate the class sites to friends or family around the globe could create international connections stemming from the creativity and innovation of your students' web content.
• Critical thinking and problem solving: As your students learn to overcome obstacles and find multiple solutions to complex problems, Canvas offers a place for students to work together to develop their critical thinking and problem-solving skills. Assign pairs of students to debate and posit solutions to a global issue that connects to topics within your course. Ask students to use the Conversations feature of Canvas to debate the issue privately, finding supporting evidence in various forms from around the Internet. Using the Collaborations feature, ask each pair of students to assemble and submit a final e-report on the topic, presenting the various solutions they came up with as well as supporting evidence in various electronic forms such as articles, videos, news clips, and websites.
• Communication and collaboration: With the seemingly impersonal nature of electronic communication, communication skills are incredibly important to maintain intended meanings across multiple means of communication. As the nature of online collaboration and communication poses challenges for understanding, connotation, and meaning, honing communication skills becomes increasingly important. As a follow-up assignment to the preceding debate suggestion, use the conferences tool in Canvas to set up a full class debate. During the debate, ask each pair of students to present their final e-report to the class, followed by a group discussion of each pair's findings, solutions, and conclusions. You might find it useful for each pair to explain their process and describe the challenges and/or benefits of collaborating and communicating via the Internet in contrast to collaborating and communicating in person.

Social Media and Magento

Packt
19 Aug 2014
17 min read
Social networks such as Twitter and Facebook are ever popular and can be a great source of new customers if used correctly on your store. This article by Richard Carter, the author of Learning Magento Theme Development, covers the following topics:

• Integrating a Twitter feed into your Magento store
• Integrating a Facebook Like Box into your Magento store
• Including social share buttons in your product pages
• Integrating product videos from YouTube into the product page

(For more resources related to this topic, see here.)

Integrating a Twitter feed into your Magento store

If you're active on Twitter, it can be worthwhile to let your customers know. While you can't (yet, anyway!) accept payment for your goods through Twitter, it can be a great way to develop a long term relationship with your store's customers and increase repeat orders. One way you can tell customers you're active on Twitter is to place a Twitter feed that contains some of your recent tweets on your store's home page. While you need to be careful not to get in the way of your store's true content, such as your most recent products and offers, you could add the Twitter feed in the footer of your website.

Creating your Twitter widget

To embed your tweets, you will need to create a Twitter widget. Log in to your Twitter account, navigate to https://twitter.com/settings/widgets, and follow the instructions given there to create a widget that contains your most recent tweets. This will create a block of code for you that looks similar to the following code:

<a class="twitter-timeline" href="https://twitter.com/RichardCarter" data-widget-id="123456789999999999">Tweets by @RichardCarter</a>
<script>!function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0],p=/^http:/.test(d.location)?'http':'https';if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src=p+"://platform.twitter.com/widgets.js";fjs.parentNode.insertBefore(js,fjs);}}(document,"script","twitter-wjs");</script>

Embedding your Twitter feed into a Magento template

Once you have the Twitter widget code to embed, you're ready to embed it into one of Magento's template files. This Twitter feed will be embedded in your store's footer area, so open your theme's /app/design/frontend/default/m18/template/page/html/footer.phtml file and add the highlighted section of the following code:

<div class="footer-about footer-col">
  <?php echo $this->getLayout()->createBlock('cms/block')->setBlockId('footer_about')->toHtml(); ?>
  <?php
  $_helper = Mage::helper('catalog/category');
  $_categories = $_helper->getStoreCategories();
  if (count($_categories) > 0): ?>
  <ul>
    <?php foreach($_categories as $_category): ?>
      <li>
        <a href="<?php echo $_helper->getCategoryUrl($_category) ?>">
          <?php echo $_category->getName() ?>
        </a>
      </li>
    <?php endforeach; ?>
  </ul>
  <?php endif; ?>
  <a class="twitter-timeline" href="https://twitter.com/RichardCarter" data-widget-id="123456789999999999">Tweets by @RichardCarter</a>
  <script>!function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0],p=/^http:/.test(d.location)?'http':'https';if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src=p+"://platform.twitter.com/widgets.js";fjs.parentNode.insertBefore(js,fjs);}}(document,"script","twitter-wjs");</script>
</div>

The result of the preceding code is a Twitter feed similar to the following one embedded on your store:

As you can see, the Twitter widget is quite cumbersome, so it's wise to be sparing when adding this to your website. Sometimes, a simple Twitter icon that links to your account is all you need!
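If you decide the full timeline is overkill, a plain link in the same footer template will do the job. The following is a minimal sketch; the icon filename is a hypothetical asset you would add to your theme's skin directory:

<a href="https://twitter.com/RichardCarter" title="Follow us on Twitter">
  <img src="<?php echo $this->getSkinUrl('images/twitter-icon.png') ?>" alt="Follow us on Twitter" />
</a>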
Integrating a Facebook Like Box into your Magento store

Facebook is one of the world's most popular social networks; with careful integration, you can help drive your customers to your Facebook page and increase long term interaction. This will drive repeat sales and new potential customers to your store. One way to integrate your store's Facebook page into your Magento site is to embed your Facebook page's news feed into it.

Getting the embedding code from Facebook

Getting the necessary code for embedding from Facebook is relatively easy; navigate to the Facebook Developers website at https://developers.facebook.com/docs/plugins/like-box-for-pages. Here, you are presented with a form. Complete the form to generate your embedding code; enter your Facebook page's URL in the Facebook Page URL field (the following example uses Magento's Facebook page). Click on the Get Code button on the screen to tell Facebook to generate the code you will need, and you will see a pop up with the code appear as shown in the following screenshot:

Adding the embed code into your Magento templates

Now that you have the embedding code from Facebook, you can alter your templates to include the code snippets. The first block of code, for the JavaScript SDK, is required in the header.phtml file in your theme's directory at /app/design/frontend/default/m18/template/page/html/. Add it at the top of the file:

<div id="fb-root"></div>
<script>(function(d, s, id) {
  var js, fjs = d.getElementsByTagName(s)[0];
  if (d.getElementById(id)) return;
  js = d.createElement(s); js.id = id;
  js.src = "//connect.facebook.net/en_GB/all.js#xfbml=1";
  fjs.parentNode.insertBefore(js, fjs);
}(document, 'script', 'facebook-jssdk'));</script>

Next, you can add the second code snippet provided by the Facebook Developers site where you want the Facebook Like Box to appear in your page. For flexibility, you can create a static block in Magento's CMS tool to contain this code and then use the Magento XML layout to assign the static block to a template's sidebar.

Navigate to CMS | Static Blocks in Magento's administration panel and add a new static block by clicking on the Add New Block button at the top-right corner of the screen. Enter a suitable name for the new static block in the Block Title field and give it the value facebook in the Identifier field. Disable Magento's rich text editor tool by clicking on the Show / Hide Editor button above the Content field. Enter in the Content field the second snippet of code the Facebook Developers website provided, which will be similar to the following code:

<div class="fb-like-box" data-href="https://www.facebook.com/Magento" data-width="195" data-colorscheme="light" data-show-faces="true" data-header="true" data-stream="false" data-show-border="true"></div>

Once complete, your new block should look like the following screenshot:

Click on the Save Block button to create a new block for your Facebook widget. Now that you have created the block, you can alter your Magento theme's layout files to include the block in the right-hand column of your store. Next, open your theme's local.xml file located at /app/design/frontend/default/m18/layout/ and add the following highlighted block of XML to it.
This will add the static block that contains the Facebook widget:

<reference name="right">
  <block type="cms/block" name="cms_facebook">
    <action method="setBlockId"><block_id>facebook</block_id></action>
  </block>
  <!-- other layout instructions -->
</reference>

If you save this change and refresh your Magento store on a page that uses the right-hand column page layout, you will see your new Facebook widget appear in the right-hand column. This is shown in the following screenshot:

Including social share buttons in your product pages

Particularly if you are selling to consumers rather than other businesses, you can make use of social share buttons in your product pages to help customers share the products they love with their friends on social networks such as Facebook and Twitter. One of the most convenient ways to do this is to use a third-party service such as AddThis, which also allows you to track your most shared content. This is useful to learn which products are the most-shared products within your store!

Styling the product page a little further

Before you begin to integrate the share buttons, you can style your product page to provide a little more layout and distinction between the blocks of content. Open your theme's styles.css file (located at /skin/frontend/default/m18/css/) and append the following CSS to provide a column for the product image and a column for the introductory content of the product:

.product-img-box, .product-shop {
  float: left;
  margin: 1%;
  padding: 1%;
  width: 46%;
}

You can also add some additional CSS to style some of the elements that appear on the product view page in your Magento store:

.product-name {
  margin-bottom: 10px;
}

.or {
  color: #888;
  display: block;
  margin-top: 10px;
}

.add-to-box {
  background: #f2f2f2;
  border-radius: 10px;
  margin-bottom: 10px;
  padding: 10px;
}

.more-views ul {
  list-style-type: none;
}

If you refresh a product page on your store, you will see the new layout take effect.

Integrating AddThis

Now that you have styled the product page a little, you can integrate AddThis with your Magento store. You will need to get a code snippet from the AddThis website at http://www.addthis.com/get/sharing. Your snippet will look something similar to the following code:

<div class="addthis_toolbox addthis_default_style">
  <a class="addthis_button_facebook_like" fb:like:layout="button_count"></a>
  <a class="addthis_button_tweet"></a>
  <a class="addthis_button_pinterest_pinit" pi:pinit:layout="horizontal"></a>
  <a class="addthis_counter addthis_pill_style"></a>
</div>
<script type="text/javascript">var addthis_config = {"data_track_addressbar":true};</script>
<script type="text/javascript" src="//s7.addthis.com/js/300/addthis_widget.js#pubid=youraddthisusername"></script>

Once the preceding code is included in a page, this produces a social share tool that will look similar to the following screenshot:

Copy the product view template view.phtml from /app/design/frontend/base/default/template/catalog/product/ to /app/design/frontend/default/m18/template/catalog/product/ and open your theme's view.phtml file for editing. You probably don't want the share buttons to obstruct the page name, add-to-cart area, or the brief description field, so positioning the social share tool underneath those items is usually a good idea.
Locate the snippet in your view.phtml file that has the following code:

<?php if ($_product->getShortDescription()):?>
  <div class="short-description">
    <h2><?php echo $this->__('Quick Overview') ?></h2>
    <div class="std"><?php echo $_helper->productAttribute($_product, nl2br($_product->getShortDescription()), 'short_description') ?></div>
  </div>
<?php endif;?>

Below this block, you can insert your AddThis social share tool, highlighted in the following code, so that the code is similar to the following block of code (the youraddthisusername value on the last line becomes your AddThis account's username):

<?php if ($_product->getShortDescription()):?>
  <div class="short-description">
    <h2><?php echo $this->__('Quick Overview') ?></h2>
    <div class="std"><?php echo $_helper->productAttribute($_product, nl2br($_product->getShortDescription()), 'short_description') ?></div>
  </div>
<?php endif;?>
<div class="addthis_toolbox addthis_default_style">
  <a class="addthis_button_facebook_like" fb:like:layout="button_count"></a>
  <a class="addthis_button_tweet"></a>
  <a class="addthis_button_pinterest_pinit" pi:pinit:layout="horizontal"></a>
  <a class="addthis_counter addthis_pill_style"></a>
</div>
<script type="text/javascript">var addthis_config = {"data_track_addressbar":true};</script>
<script type="text/javascript" src="//s7.addthis.com/js/300/addthis_widget.js#pubid=youraddthisusername"></script>

Once again, refresh the product page on your Magento store and you will see the AddThis toolbar appear as shown in the following screenshot. It allows your customers to begin sharing their favorite products on their preferred social networking sites. If you can't see your changes, don't forget to clear your caches by navigating to System | Cache Management. If you want to provide some space between other elements and the AddThis toolbar, add the following CSS to your theme's styles.css file:

.addthis_toolbox {
  margin: 10px 0;
}

The resulting product page will now look similar to the following screenshot. You have successfully integrated social sharing tools on your Magento store's product page. If you want to reuse this block in multiple places throughout your store, consider adding it to a static block in Magento (as we did for the Facebook Like Box) and using Magento's XML layout to add the block as required; a sketch of this follows.
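A minimal sketch of that layout approach in your theme's local.xml, mirroring the Facebook example earlier in this article; the block name and the addthis identifier are assumptions, and the reference you target depends on where in the page you want the toolbar to render:

<catalog_product_view>
  <reference name="content">
    <block type="cms/block" name="cms_addthis">
      <action method="setBlockId"><block_id>addthis</block_id></action>
    </block>
  </reference>
</catalog_product_view>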
Integrating product videos from YouTube into the product page

An increasingly common occurrence on ecommerce stores is the use of video in addition to product photography. The use of videos in product pages can help customers overcome any fears they're not buying the right item and give them a better chance to see the quality of the product they're buying. You can, of course, simply add the HTML provided by YouTube's embedding tool to your product description. However, if you want to insert your video on a specific page within your product template, you can follow the steps described in this section.

Product attributes in Magento

Magento products are constructed from a number of attributes (different fields), such as product name, description, and price. Magento allows you to customize the attributes assigned to products, so you can add new fields to contain more information on your product. Using this method, you can add a new Video attribute that will contain the video embedding HTML from YouTube and then insert it into your store's product page template. An attribute value is text or other content that relates to the attribute; for example, the attribute value for the Product Name attribute might be Blue Tshirt. Magento allows you to create different types of attribute:

• Text Field: This is used for short lines of text.
• Text Area: This is used for longer blocks of text.
• Date: This is used to allow a date to be specified.
• Yes/No: This is used to allow a Boolean true or false value to be assigned to the attribute.
• Dropdown: This is used to allow just one selection from a list of options to be selected.
• Multiple Select: This is used for a combination box type to allow one or more selections to be made from a list of options provided.
• Price: This is used to allow a value other than the product's price, special price, tier price, and cost. These fields inherit your store's currency settings.
• Fixed Product Tax: This is required in some jurisdictions for certain types of products (for example, those that require an environmental tax to be added).

Creating a new attribute for your video field

Navigate to Catalog | Attributes | Manage Attributes in your Magento store's control panel. From there, click on the Add New Attribute button located near the top-right corner of your screen.

In the Attribute Properties panel, enter a value in the Attribute Code field that will be used internally in Magento to refer to this attribute. Remember the value you enter here, as you will require it in the next step! We will use video as the Attribute Code value in this example (this is shown in the following screenshot). You can leave the remaining settings in this panel as they are to allow this newly created attribute to be used with all types of products within your store.

In the Frontend Properties panel, ensure that Allow HTML Tags on Frontend is set to Yes (you'll need this enabled to allow you to paste the YouTube embedding HTML into your store and for it to work in the template). This is shown in the following screenshot:

Now select the Manage Labels / Options tab in the left-hand column of your screen and enter a value in the Admin and Default Store View fields in the Manage Titles panel. Then, click on the Save Attribute button located near the top-right corner of the screen. Finally, navigate to Catalog | Attributes | Manage Attribute Sets and select the attribute set you wish to add your new video attribute to (we will use the Default attribute set for this example). In the right-hand column of this screen, you will see the list of Unassigned Attributes with the newly created video attribute in this list. Drag-and-drop this attribute into the Groups column under the General group, as shown in the following screenshot. Click on the Save Attribute Set button at the top-right corner of the screen to add the new video attribute to the attribute set.
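If you maintain several stores or prefer migrations over admin clicks, the same attribute can also be created from a module's setup script. This is a rough sketch rather than a drop-in file; it assumes Magento 1.x conventions and a setup class that extends Mage_Catalog_Model_Resource_Setup:

<?php
// Sketch of a setup script that creates the same "video" attribute in code
$installer = $this;
$installer->startSetup();

$installer->addAttribute('catalog_product', 'video', array(
    'type'         => 'text',      // backend column type; "text" suits embed HTML
    'label'        => 'Video',
    'input'        => 'textarea',
    'required'     => false,
    'user_defined' => true,
    'group'        => 'General',   // assigns the attribute to the General group
));

$installer->endSetup();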
Adding a YouTube video to a product using the new attribute

Once you have added the new attribute to your Magento store, you can add a video to a product. Navigate to Catalog | Manage Products and select a product to edit (ensure that it uses one of the attribute sets you added the new video attribute to). The new Video field will be visible under the General tab. Insert the embedding code from the YouTube video you wish to use on your product page into this field. The embed code will look like the following:

<iframe width="320" height="240" src="https://www.youtube.com/embed/dQw4w9WgXcQ?rel=0" frameborder="0" allowfullscreen></iframe>

Once you have done that, click on the Save button to save the changes to the product.

Inserting the video attribute into your product view template

Your final task is to allow the content of the video attribute to be displayed in your product page templates in Magento. Open your theme's view.phtml file from /app/design/frontend/default/m18/template/catalog/product/ and locate the following snippet of code:

<div class="product-img-box">
  <?php echo $this->getChildHtml('media') ?>
</div>

Add the following highlighted code to the preceding code to check whether a video for the product exists and show it if it does exist:

<div class="product-img-box">
  <?php
  $_videoHtml = $_product->getResource()->getAttribute('video')->getFrontend()->getValue($_product);
  if ($_videoHtml) echo $_videoHtml;
  ?>
  <?php echo $this->getChildHtml('media') ?>
</div>

If you now refresh the product page that you have added a video to, you will see that the video appears in the same column as the product image. This is shown in the following screenshot:

Summary

In this article, we looked at expanding the customization of your Magento theme to include elements from social networking sites. You learned about integrating a Twitter feed and a Facebook Like Box into your Magento store, including social share buttons in your product pages, and integrating product videos from YouTube.

Resources for Article:

Further resources on this subject:
Optimizing Magento Performance — Using HHVM [article]
Installing Magento [article]
Magento Fundamentals for Developers [article]

Keeping the Site Secure

Packt
17 Jul 2014
9 min read
(For more resources related to this topic, see here.)

Choosing a web host that meets your security requirements

In this article, you'll learn what you, as the site administrator, can do to keep your site safe. However, there are also some basic but critical security measures that your web hosting company should take. You'll probably have a shared hosting account, where multiple sites are hosted on one server computer and each site has access to its part of the available disk space and resources. Although this is much cheaper than hiring a dedicated server to host your site, it does involve some security risks. Good web hosting companies take precautions to minimize these risks. When selecting your web host, it's worth checking whether they have a good reputation for keeping their security up to standard. The official Joomla resources site (http://resources.joomla.org/directory/support-services/hosting.html) features hosting companies that fully meet the security requirements of a typical Joomla-powered site.

Tip 1 – Download from reliable sources

To avoid installing corrupted versions, it's a good idea to download the Joomla software itself only from the official website (www.joomla.org) or from reliable local Joomla community sites. This is also true when downloading third-party extensions. Use only extensions that have a good reputation; you can check the reviews at www.extensions.joomla.org. Preferably, download extensions only from the original developer's website or from Joomla community websites that have a good reputation.

Tip 2 – Update regularly

The Joomla development team regularly releases updates to fix bugs and security issues. Fortunately, Joomla makes keeping your site up to date effortless. In the backend control panel, you'll find a MAINTENANCE section in the quick icons column displaying the current status of both the Joomla software itself and of installed extensions. This is shown in the following screenshot:

If updates are found, the quick icon text will prompt you to update by adding an Update now! link, as shown in the following screenshot:

Clicking on this link takes you to the Joomla! Update component (which can also be found by navigating to Components | Joomla! Update). In this window, you'll see the details of the update. Just click on Install the update; the process is fully automated. Before you upgrade Joomla to an updated version, it's a good idea to create a backup of your current site. If anything goes wrong, you can quickly have it up and running again. See the Tip 6 – Protect files and directories section in this article for more information on creating backups.

If updates for installed extensions are available, you'll also see a notice stating this in the MAINTENANCE section of the control panel. However, you can also check for updates manually; in the backend, navigate to Extensions | Extension Manager and click on Update in the menu on the left-hand side. Click on Find Updates to search for updates. After you've clicked on the Find Updates button, you'll see a notice informing you whether updates are available or not. Select the update you want to install and click on the Update button. Be patient; you may not see much happening for a while. After completion, a message is displayed stating that the available updates have been installed successfully. The update functionality only works for extensions that support it.
It's to be expected that this feature will be widely supported by extension developers, but for other extensions, you'll still have to manually check for updates by visiting the extension developer's website. The Joomla update packages are stored in your website's tmp directory before the update is installed on your site. After installation, you can remove these files from the tmp directory to avoid running into disk space problems.

Tip 3 – Choose a safe administrator username

When you install Joomla, you also choose and enter a username for your login account (the critical account of the almighty Super User). Although you can enter any administrator username when installing Joomla, many people enter a generic administrator username, for example, admin. However, this username is far too common, and therefore poses a security risk; hackers only have to guess your password to gain access to your site. If you haven't come up with something different during the installation process, you can change the administrator username later on. You can do this using the following steps:

1. In the backend of the site, navigate to Users | User Manager.
2. Select the Super User user record.
3. In the Edit Profile screen, enter a new Login Name. Be creative!
4. Click on Save & Close to apply changes.
5. Log out and log in to the backend with the new username.

Tip 4 – Pick a strong password

Pick an administrator password that isn't easy to guess. It's best to have a password that's not too short; 8 or more characters is fine. Use a combination of uppercase letters, lowercase letters, numbers, and special characters. This should guarantee a strong password. Don't use the same username and password you use for other online accounts, and regularly change your password. You can create a new password anytime in the backend User Manager section in the same way as you enter or change a username (see Tip 3 – Choose a safe administrator username).

Tip 5 – Use Two-Factor authentication

By default, logging in to the administrative interface of your site just requires the correct combination of username and password. A much more secure way to log in is using the Two-Factor authentication system, a recent addition to Joomla. It requires you to log in with not just your username and password, but also with a six-digit security code. To get this code, you need to use the Google Authenticator app, which is available for most Android, iOS, Blackberry, Windows 8, and Windows Mobile devices. The app doesn't store the six-digit code; it changes the code every 30 seconds. Two-Factor authentication is a great solution, but it does require a little extra effort every time you log in to your site. You need to have the app ready every time you log in to generate a new access code. However, you can selectively choose which users will require this system. You can decide, for example, that only the site administrator has to log in using Two-Factor authentication.

Enabling the Two-Factor authentication system of Joomla!

To enable Joomla's Two-Factor authentication, where a user has to enter an additional secret key when logging in to the site, follow these steps:

1. To use this system on your site, first enable the Two-Factor authentication plugin that comes with Joomla. In the Joomla backend, navigate to Extensions | Plugin Manager, select Two Factor Authentication – Google Authenticator, and enable it.
2. Next, download and install the Google Authenticator app for your device.
3. Once this is set up, you can set the authentication procedure for any user in the Joomla backend.
4. In the User Manager section, click on the account name. Then, click on the Two Factor Authentication tab and select Google Authenticator from the Authentication method dropdown. This is shown in the following screenshot:
5. Joomla will display the account name and the key for this account. Enter these account details in the Google Authenticator app on your device.
6. The app will generate a security code. In the Joomla backend, enter this in the Security Code field, as shown in the following screenshot:
7. Save your changes.

From now on, the Joomla login screen will ask you for your username, password, and a secret key. This is shown in the following screenshot:

There are other secure login systems available besides the Google Authenticator app. Joomla also supports Yubico's YubiKey, a USB stick that generates an additional password every time it is used. After entering their usual password, the user inserts the YubiKey into the USB port of the computer and presses the YubiKey button. Automatically, the extra one-time password will be entered in Joomla's Secret Key field. For more information on YubiKey, visit http://www.yubico.com.

Tip 6 – Protect files and directories

Obviously, you don't want everybody to be able to access the Joomla files and folders on the web server. You can protect files and folders by setting access permissions using the Change Mode (CHMOD) commands. Basically, the CHMOD settings tell the web server who has access to a file or folder and who is allowed to read it, write to it, or execute it (run it as a program). Once your Joomla site is set up and everything works OK, you can use CHMOD to change permissions. You don't use Joomla to change the CHMOD settings; these are set with FTP software. You can make this work using the following steps:

1. In your FTP program, right-click on the name of the file or directory you want to protect. In this example, we'll use the open source FTP program FileZilla.
2. In the right-click menu, select File Permissions. You'll be presented with a pop-up screen. Here, you can check permissions and change them by selecting the appropriate options, as shown in the following screenshot:

As you can see, it's possible to set permissions for the file owner (that's you), for group members (that's likely to be only you too), and for the public (everyone else). The public permissions are the tricky part; you should restrict public permissions as much as possible. When changing the permission settings, the file permissions number (the value in the Numeric value: field in the previous screenshot) will change accordingly. Every combination of settings has its particular number. In the previous example, this number is 644.

3. Click on OK to execute the CHMOD command and set file permissions.

Setting file permissions

What files should you protect and what CHMOD settings should you choose? Here are a few pointers; a command-line sketch for applying these defaults in bulk follows this list:

• By default, permissions for files are set to 644. Don't change this; it's a safe value.
• For directories, a safe setting is 750 (which doesn't allow any public access). However, some extensions may need access to certain directories. In such cases, the 750 setting may result in error messages. In this case, set permissions to 755.
• Never leave permissions for a file or directory set to 777. This allows everybody to write data to it.
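If you have shell (SSH) access to your web server, you can apply these defaults in bulk instead of changing one file at a time over FTP. A minimal sketch, run from your Joomla site's root directory; adjust the directory value to 750 or 755 depending on what your host and extensions require:

# Set every directory to 755 and every file to 644
find . -type d -exec chmod 755 {} \;
find . -type f -exec chmod 644 {} \;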
You can also block direct access to critical directories using a .htaccess file. This is a special file containing instructions for the Apache web server. Among other things, it tells the web server who's allowed access to the directory contents. You can add a .htaccess file to any folder on the server by using specific instructions. This is another way to instruct the web server to restrict access. Check the Joomla security documentation at www.joomla.org for instructions.
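As an illustration of the idea (check the Joomla documentation for the officially recommended rules), a one-line .htaccess file placed in a directory tells Apache to refuse all direct HTTP requests to it; the first form is for Apache 2.2, the second for Apache 2.4 and later:

# Apache 2.2
deny from all

# Apache 2.4 and later
Require all denied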

Building a Private App

Packt
23 May 2014
14 min read
(For more resources related to this topic, see here.)

Even though the app will be simple and only take a few hours to build, we'll still use good development practices to ensure we create a solid foundation. There are many different approaches to software development, and discussing even a fraction of them is beyond the scope of this book. Instead, we'll use a few common concepts, such as requirements gathering, milestones, Test-Driven Development (TDD), frequent code check-ins, and appropriate commenting/documentation. Personal discipline in following development procedures is one of the best things a developer can bring to a project; it is even more important than writing code. This article will cover the following topics:

• The structure of the app we'll be building
• The development process
• Working with the Shopify API
• Using source control
• Deploying to production

Signing up for Shopify

Before we dive back into code, it would be helpful to get the task of setting up a Shopify store out of the way. Sign up as a Shopify partner by going to http://partners.shopify.com. The benefit of this is that partners can provision stores that can be used for testing. Go ahead and make one now before reading further. Keep your login information close at hand; we'll need it in just a moment.

Understanding our workflow

The general workflow for developing our application is as follows:

1. Pull down the latest version of the master branch.
2. Pick a feature to implement from our requirements list.
3. Create a topic branch to keep our changes isolated.
4. Write tests that describe the behavior desired by our feature.
5. Develop the code until it passes all the tests.
6. Commit and push the code into the remote repository.
7. Pull down the latest version of the master branch and merge it with our topic branch.
8. Run the test suite to ensure that everything still works.
9. Merge the code back with the master branch.
10. Commit and push the code to the remote repository.

The previous list should give you a rough idea of what is involved in a typical software project involving multiple developers. The use of topic branches ensures that our work in progress won't affect other developers (called breaking the build) until we've confirmed that our code has passed all the tests and resolved any conflicts by merging in the latest stable code from the master branch. The practical upside of this methodology is that it allows bug fixes or work from another developer to be added to the project at any time without us having to worry about incomplete code polluting the build. This also gives us the ability to deploy to production from a stable code base. In practice, a lot of projects will also have a production branch (or tagged release) that contains a copy of the code currently running in production. This is primarily in case of a server failure, so that the application can be restored without having to worry about new features being released ahead of schedule, and secondly so that if a new deploy introduces bugs, it can easily be rolled back.

Building the application

We'll be building an application that allows Shopify storeowners to organize contests for their shoppers and randomly select a winner. Contests can be configured based on purchase history and timeframe. For example, a contest could be organized for all the customers who bought the newest widget within the last three days, or anyone who has made an order for any product in the month of August.
To accomplish this, we'll need to be able to pull down order information from the Shopify store, generate a random winner, and show the storeowner the results. Let's start out by creating a list of requirements for our application. We'll use this list to break our development into discrete pieces so we can easily measure our progress and also keep our focus on the important features. Of course, it's difficult to make a complete list of all the requirements and have it stick throughout the development process, which is why a common strategy is to develop in iterations (or sprints). The result of an iteration is a working app that can be reviewed by the client so that the remaining features can be reprioritized if necessary.

High-level requirements

The requirements list comprises all the tasks we're going to accomplish in this article. The end result will be an application that we can use to run a contest for a single Shopify store. Included in the following list are any related database, business logic, and user interface coding necessary:

1. Install a few necessary gems.
2. Store Shopify API credentials.
3. Connect to Shopify.
4. Retrieve order information from Shopify.
5. Retrieve product information from Shopify.
6. Clean up the UI.
7. Pick a winner from a list.
8. Create contests.

Now that we have a list of requirements, we can treat each one as a sprint. We will work in a topic branch and merge our code to the master branch at the end of the sprint.

Installing a few necessary gems

The first item on our list is to add a few code libraries (gems) to our application. Let's create a topic branch and do just that. To avoid confusion over which branch contains code for which feature, we can start the branch name with the requirement number. We'll additionally prepend the chapter number for clarity, so our format will be <chapter #>_<requirement #>_<branch name>. Execute the following command line in the root folder of the app:

git checkout -b ch03_01_gem_updates

This command will create a local branch called ch03_01_gem_updates that we will use to isolate our code for this feature. Once we've installed all the gems and verified that the application runs correctly, we'll merge our code back with the master branch. At a minimum, we need to install the gems we want to use for testing. For this app we'll use RSpec. We'll need to use the development and test group to make sure the testing gems aren't loaded in production. Add the following code in bold to the block present in the Gemfile:

group :development, :test do
  gem "sqlite3"

  # Helpful gems
  gem "better_errors"       # improves error handling
  gem "binding_of_caller"   # used by better_errors

  # Testing frameworks
  gem 'rspec-rails'         # testing framework
  gem "factory_girl_rails"  # use factories, not fixtures
  gem "capybara"            # simulate browser activity
  gem "fakeweb"

  # Automated testing
  gem 'guard'        # automated execution of test suite upon change
  gem "guard-rspec"  # guard integration with rspec

  # Only install the rb-fsevent gem if on Mac OS X
  gem 'rb-fsevent'   # used for Growl notifications
end

Now we need to head over to the terminal and install the gems via Bundler with the following command:

bundle install

The next step is to install RSpec:

rails generate rspec:install

The final step is to initialize Guard:

guard init rspec

This will create a Guardfile and fill it with the default code needed to detect file changes. We can now restart our Rails server and verify that everything works properly.
We have to do a full restart to ensure that any initialization files are properly picked up. Once we've ensured that our page loads without issue, we can commit our code and merge it back with the master branch:

git add --all
git commit -am "Added gems for testing"
git checkout master
git merge ch03_01_gem_updates
git push

Great! We've completed our first requirement.

Storing Shopify API credentials

In order to access our test store's API, we'll need to create a Private App and store the provided credentials there for future use. Fortunately, Shopify makes this easy for us via the Admin UI:

1. Go to the Apps page.
2. At the bottom of the page, click on the Create a private API key… link.
3. Click on the Generate new Private App button.

We'll now be provided with three important pieces of information: the API Key, password, and shared secret. In addition, we can see from the example URL field that we need to track our Shopify URL as well. Now that we have credentials to programmatically access our Shopify store, we can save them in our application. Let's create a topic branch and get to work:

git checkout -b ch03_02_shopify_credentials

Rails offers a generator called a scaffold that will create the database migration, model, controller, view files, and test stubs for us. Run the following from the command line to create the scaffold for the Account vertical (make sure it is all on one line):

rails g scaffold Account shopify_account_url:string shopify_api_key:string shopify_password:string shopify_shared_secret:string

We'll need to run the database migration to create the database table using the following commands:

bundle exec rake db:migrate
bundle exec rake db:migrate RAILS_ENV=test

Use the following command to update the generated view files to make them Bootstrap compatible:

rails g bootstrap:themed Accounts -f

Head over to http://localhost:3000/accounts and create a new account in our app that uses the Shopify information from the Private App page. It's worth getting Guard to run our test suite every time we make a change so we can ensure that we don't break anything. Open up a new terminal in the root folder of the app and start up Guard:

bundle exec guard

After booting up, Guard will automatically run all our tests. They should all pass because we haven't made any changes to the generated code. If they don't, you'll need to spend time sorting out any failures before continuing.

The next step is to make the app more user friendly. We'll make a few changes now and leave the rest for you to do later. First, update the layout file so it has accurate navigation. Bootstrap created several dummy links in the header navigation and sidebar. Update the navigation list in /app/views/layouts/application.html.erb to include the following code:

<a class="brand" href="/">Contestapp</a>
<div class="container-fluid nav-collapse">
  <ul class="nav">
    <li><%= link_to "Accounts", accounts_path %></li>
  </ul>
</div><!--/.nav-collapse -->

Next, add validations to the account model to ensure that all fields are required when creating/updating an account. Add the following lines to /app/models/account.rb:

validates_presence_of :shopify_account_url
validates_presence_of :shopify_api_key
validates_presence_of :shopify_password
validates_presence_of :shopify_shared_secret
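These validations are also easy to cover with a quick model spec. The following is a minimal sketch rather than part of the book's code files, so treat the file location and examples as assumptions; it could live at /spec/models/account_spec.rb:

require 'spec_helper'

describe Account do
  it "is invalid without the Shopify credentials" do
    expect(Account.new).not_to be_valid
  end

  it "is valid when all Shopify fields are present" do
    account = Account.new(
      shopify_account_url:   "example.myshopify.com",
      shopify_api_key:       "key",
      shopify_password:      "password",
      shopify_shared_secret: "secret"
    )
    expect(account).to be_valid
  end
end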
These validations will immediately cause the controller tests to fail, because the specs are not passing in all the required fields when attempting to submit the create form. If you look at the top of the controller spec file, you'll see some code that creates the :valid_attributes hash. If you read the comment above it, you'll see that we need to update the hash to contain the following minimally required fields:

# This should return the minimal set of attributes required
# to create a valid Account. As you add validations to
# Account, be sure to adjust the attributes here as well.
let(:valid_attributes) { { "shopify_account_url" => "MyString", "shopify_password" => "MyString", "shopify_api_key" => "MyString", "shopify_shared_secret" => "MyString" } }

This is a prime example of why having a testing suite is important. It keeps us from writing code that breaks other parts of the application, or in this case, helps us discover a weakness we might not have known we had: the ability to create a new account record without filling in any fields! Now that we have satisfied this requirement and all our tests pass, we can commit the code and merge it with the master branch:

git add --all
git commit -am "Account model and related files"
git checkout master
git merge ch03_02_shopify_credentials
git push

Excellent! We've now completed another critical piece!

Connecting to Shopify

Now that we have a test store to work with, we're ready to implement the code necessary to connect our app to Shopify. First, we need to create a topic branch:

git checkout -b ch03_03_shopify_connection

We are going to use the official Shopify gem to connect our app to our test store, as well as interact with the API. Add this to the Gemfile under the gem 'bootstrap-sass' line:

gem 'shopify_api'

Update the bundle from the command line:

bundle install

We'll also need to restart Guard in order for it to pick up the new gem. This is typically done by using a key combination like Ctrl + Z (Windows) or Cmd + C (Mac OS X), or by typing the word exit and pressing the Enter key.

I've written a class that encapsulates the Shopify connection logic and initializes the global ShopifyAPI class that we can then use to interact with the API. You can find the code for this class in ch03_shopify_integration.rb. You'll need to copy the contents of this file to your app in a new file located at /app/services/shopify_integration.rb. The contents of the spec file ch03_shopify_integration_spec.rb need to be pasted in a new file located at /spec/services/shopify_integration_spec.rb. Using this class will allow us to execute something like ShopifyAPI::Order.find(:all) to get a list of orders, or ShopifyAPI::Product.find(1234) to retrieve the product with the ID 1234. The spec file contains tests for functionality that we haven't built yet and will initially fail. We'll fix this soon!
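The actual implementation ships with the book's code files, but to give a feel for the shape of such a wrapper, here is a rough sketch; the method names and error handling are illustrative assumptions, not the book's exact code:

# /app/services/shopify_integration.rb (illustrative sketch only)
class ShopifyIntegration
  attr_reader :account

  def initialize(account)
    @account = account
  end

  # Point the global ShopifyAPI classes at this account's store using
  # the private app credentials (API key and password) in the URL
  def connect
    ShopifyAPI::Base.site = "https://#{account.shopify_api_key}:" \
      "#{account.shopify_password}@#{account.shopify_account_url}/admin"
  end

  # Returns true if the credentials can fetch the shop record
  def test_connection
    connect
    !ShopifyAPI::Shop.current.nil?
  rescue ActiveResource::UnauthorizedAccess, SocketError
    false
  end
end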
This new code adds the test_connection method, as well as ensuring that the account is loaded properly. Finally, we need to add a button to /app/views/accounts/show.html.erb that will call this action, inside div.form-actions:

<%= link_to "Test Connection", test_connection_account_path(@account), :class => 'btn' %>

If we view the account page in our browser, we can now test our Shopify integration. Assuming that everything was copied correctly, we should see a success message after clicking on the Test Connection button. If everything was not copied correctly, we'll see the message that the Shopify API returned to us as a clue to what isn't working. Once all the tests pass, we can commit the code and merge it with the master branch:

git add --all
git commit -am "Shopify connection and related UI"
git checkout master
git merge ch03_03_shopify_connection
git push

Having fun? Good, because things are about to get heavy.

Summary

This article briefly covered integrating with Shopify's API in order to retrieve product and order information from the shop, and streamlining the UI before the logic to create a contest is built.

Resources for Article:

Further resources on this subject:

Integrating typeahead.js into WordPress and Ruby on Rails [Article]
Xen Virtualization: Work with MySQL Server, Ruby on Rails, and Subversion [Article]
Designing and Creating Database Tables in Ruby on Rails [Article]

Optimizing Magento Performance — Using HHVM

Packt
16 May 2014
5 min read
HipHop Virtual Machine

As we could write a whole book (or two) about HHVM, we will just give the key ideas here. HHVM is a virtual machine that translates any called PHP file into HHVM byte code, in the same spirit as the Java or .NET virtual machines. HHVM transforms your PHP code into a lower-level language that is much faster to execute. Of course, the transformation time (compiling) does cost a lot of resources, so HHVM ships with a cache mechanism similar to APC. This way, the compiled PHP files are stored and reused when the original file is requested. With HHVM, you keep the flexibility and ease of writing of PHP, but you now have performance like that of C++. Hear the words of the HHVM team at Facebook:

"HHVM (aka the HipHop Virtual Machine) is a new open-source virtual machine designed for executing programs written in PHP. HHVM uses a just-in-time compilation approach to achieve superior performance while maintaining the flexibility that PHP developers are accustomed to. To date, HHVM (and its predecessor HPHPc) has realized over a 9x increase in web request throughput and over a 5x reduction in memory consumption for Facebook compared with the Zend PHP 5.2 engine + APC. HHVM can be run as a standalone webserver (in other words, without the Apache webserver and the "modphp" extension). HHVM can also be used together with a FastCGI-based webserver, and work is in progress to make HHVM work smoothly with Apache."

If you think this is too good to be true, you're right! Indeed, HHVM has a major drawback. HHVM was, and still is, focused on the needs of Facebook. Therefore, you might have a bad time trying to run your custom-made PHP applications inside it. Nevertheless, this opportunity to speed up large PHP applications has been seized by talented developers who improve it, day after day, in order to support more and more frameworks. As our interest is in Magento, I will introduce you to Daniel Sloof, a developer from the Netherlands. More interestingly, Daniel has done (and still does) an amazing job of adapting HHVM for Magento.

Here are the commands to install Daniel Sloof's version of HHVM for Magento:

$ sudo apt-get install git
$ git clone https://github.com/danslo/hhvm.git
$ sudo chmod +x configure_ubuntu_12.04.sh
$ sudo ./configure_ubuntu_12.04.sh
$ sudo CMAKE_PREFIX_PATH=`pwd`/.. make

If you thought that the first step was long, you will be astonished by the time required to actually build HHVM; expect your terminal to be busy for the next hour or so. Nevertheless, the wait is definitely worth it.

Create a file named hhvm.hdf under /etc/hhvm and write the following code inside:

Server {
  Port = 80
  SourceRoot = /var/www/_MAGENTO_HOME_
}

Eval {
  Jit = true
}

Log {
  Level = Error
  UseLogFile = true
  File = /var/log/hhvm/error.log
  Access {
    * {
      File = /var/log/hhvm/access.log
      Format = %h %l %u %t \"%r\" %>s %b
    }
  }
}

VirtualHost {
  * {
    Pattern = .*
    RewriteRules {
      dirindex {
        pattern = ^/(.*)/$
        to = $1/index.php
        qsa = true
      }
    }
  }
}

StaticFile {
  FilesMatch {
    * {
      pattern = .*\.(dll|exe)
      headers {
        * = Content-Disposition: attachment
      }
    }
  }
  Extensions {
    css = text/css
    gif = image/gif
    html = text/html
    jpe = image/jpeg
    jpeg = image/jpeg
    jpg = image/jpeg
    png = image/png
    tif = image/tiff
    tiff = image/tiff
    txt = text/plain
  }
}

Now, run the following command (the hhvm executable is under hhvm/hphp/hhvm):

$ sudo ./hhvm --mode daemon --config /etc/hhvm.hdf

Is all of this worth it? Here's the response:

$ ab -n 100 -c 5 http://192.168.0.105/index.php/furniture/living-room.html

Server Software:
Server Hostname:        192.168.0.105
Server Port:            80
Document Path:          /index.php/furniture/living-room.html
Document Length:        35552 bytes
Concurrency Level:      5
Time taken for tests:   4.970 seconds
Requests per second:    20.12 [#/sec] (mean)
Time per request:       248.498 [ms] (mean)
Time per request:       49.700 [ms] (mean, across all concurrent requests)
Transfer rate:          707.26 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    2   12.1      0      89
Processing:   107  243   55.9    243     428
Waiting:      107  242   55.9    242     427
Total:        110  245   56.7    243     428

We literally reach a whole new world here. Indeed, our Magento instance is six times faster than after all our previous optimizations, and about 20 times faster than the default Magento served by Apache.
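As a quick aside, if ab isn't available, a rough equivalent of this measurement can be scripted with any HTTP client. The following Ruby sketch is purely illustrative; it issues sequential rather than concurrent requests, so expect numbers closer to ab's per-request mean, and the URL is the assumed test address from the run above:

require 'benchmark'
require 'net/http'
require 'uri'

# Assumed local test URL; replace with your own store's address.
uri = URI('http://192.168.0.105/index.php/furniture/living-room.html')

# Time 100 sequential requests, in the same spirit as ab -n 100.
elapsed = Benchmark.realtime do
  100.times { Net::HTTP.get_response(uri) }
end

puts format('%.2f requests per second (sequential)', 100 / elapsed)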
The following graph shows the performance:

Our Magento instance is now flying at lightning speed, but what are the drawbacks? Is it still as stable as before? Are all the optimizations we did so far still effective? Can we go even further? In what follows, we present a non-exhaustive list of answers:

Fancy extensions and modules may (and will) trigger HHVM incompatibilities.
Magento is a relatively old piece of software, and combining it with a cutting-edge technology such as HHVM can have some unpredictable (and undesirable) effects.
HHVM is so complex that fixing a Magento-related bug requires a lot of skill and dedication.
HHVM takes care of PHP, not of the cache mechanisms or accelerators we installed before. Therefore, APC, memcached, and Varnish are still running and helping to improve our performance.
If you become addicted to performance, HHVM now supports FastCGI through Nginx and Apache. You can find out more about that at http://www.hhvm.com/blog/1817/fastercgi-with-hhvm.

Summary

In this article, we successfully used the HipHop Virtual Machine (HHVM) from Facebook to serve Magento. This improvement boosts our Magento performance incredibly (about 20 times faster): the benchmark that initially took 110 seconds now completes in less than 5 seconds.

Resources for Article:

Further resources on this subject:

Magento: Exploring Themes [article]
Getting Started with Magento Development [article]
Enabling your new theme in Magento [article]
Creating a Responsive Magento Theme with Bootstrap 3

Packt
21 Apr 2014
13 min read
In this article, by Andrea Saccà, the author of Mastering Magento Theme Design, we will learn how to integrate the Bootstrap 3 framework and how to develop the main theme blocks. The following topics will be covered in this article:

An introduction to Bootstrap 3
Downloading Bootstrap (the current Version 3.1.1)
Downloading and including jQuery
Integrating the files into the theme
Defining the main layout design template

An introduction to Bootstrap 3

Bootstrap is a sleek, intuitive, powerful, mobile-first frontend framework that enables faster and easier web development, as shown in the following screenshot:

Bootstrap 3 is the most popular frontend framework used to create mobile-first websites. It includes a free collection of buttons, CSS components, and JavaScript to create websites or web applications; it was created by the Twitter team.

Downloading Bootstrap (the current Version 3.1.1)

First, you need to download the latest version of Bootstrap. The current version is 3.1.1. You can download the framework from http://getbootstrap.com/.

The fastest way to download Bootstrap 3 is to download the precompiled and minified versions of CSS, JavaScript, and fonts. So, click on the Download Bootstrap button and unzip the file you downloaded. Once the archive is unzipped, you will see the following files:

We need to take only the minified versions of the files, that is, bootstrap.min.css from css, bootstrap.min.js from js, and all the files from fonts. For development, you can use bootstrap.css so that you can inspect the code and learn, and then switch to bootstrap.min.css when you go live.

Copy all the selected files (CSS files inside the css folder, the .js files inside the js folder, and the font files inside the fonts folder) into the theme skin folder at skin/frontend/bookstore/default.

Downloading and including jQuery

Bootstrap is dependent on jQuery, so we have to download and include it before including bootstrap.min.js. So, download jQuery from http://jquery.com/download/. The preceding URL takes us to the following screenshot:

We will use the compressed production Version 1.10.2. Once you download jQuery, rename the file as jquery.min.js and copy it into the js skin folder at skin/frontend/bookstore/default/js/.

In the same folder, also create the jquery.scripts.js file, where we will insert our custom scripts.

Magento uses Prototype as the main JavaScript library. To make jQuery work correctly without conflicts, you need to insert the noConflict code in the jquery.scripts.js file, as shown in the following code:

// This is important!
jQuery.noConflict();

jQuery(document).ready(function() {
  // Insert your scripts here
});

The following is a quick recap of the CSS and JS files:

Integrating the files into the theme

Now that we have all the files, we will see how to integrate them into the theme. To declare the new JavaScript and CSS files, we have to insert the action in the local.xml file located at app/design/frontend/bookstore/default/layout. In particular, the file declaration needs to be done in the default handle to make it accessible by the whole theme. The default handle is defined by the following tags:

<default>
. . .
</default>

The action to insert the JavaScript and CSS files must be placed inside the reference head block.
So, open the local.xml file and first create the following block that will define the reference:

<reference name="head">
  …
</reference>

Declaring the .js files in local.xml

The action tag used to declare a new .js file located in the skin folder is as follows:

<action method="addItem">
  <type>skin_js</type>
  <name>js/myjavascript.js</name>
</action>

In our skin folder, we copied the following three .js files:

jquery.min.js
jquery.scripts.js
bootstrap.min.js

Let's declare them as follows:

<action method="addItem">
  <type>skin_js</type>
  <name>js/jquery.min.js</name>
</action>
<action method="addItem">
  <type>skin_js</type>
  <name>js/bootstrap.min.js</name>
</action>
<action method="addItem">
  <type>skin_js</type>
  <name>js/jquery.scripts.js</name>
</action>

Declaring the CSS files in local.xml

The action tag used to declare a new CSS file located in the skin folder is as follows:

<action method="addItem">
  <type>skin_css</type>
  <name>css/mycss.css</name>
</action>

In our skin folder, we have copied the following three .css files:

bootstrap.min.css
styles.css
print.css

So let's declare these files as follows:

<action method="addItem">
  <type>skin_css</type>
  <name>css/bootstrap.min.css</name>
</action>
<action method="addItem">
  <type>skin_css</type>
  <name>css/styles.css</name>
</action>
<action method="addItem">
  <type>skin_css</type>
  <name>css/print.css</name>
</action>

Repeat this action for all the additional CSS files. All the JavaScript and CSS files that you insert into the local.xml file will go after the files declared in the base theme.

Removing and adding the style.css file

By default, the base theme includes a CSS file called styles.css, which is hierarchically placed before bootstrap.min.css. One of the best practices to overwrite the Bootstrap CSS classes in Magento is to remove the default CSS file declared by the base theme of Magento, and declare it again after Bootstrap's CSS files. Thus, the styles.css file loads after Bootstrap, and all the classes defined in it will overwrite the bootstrap.min.css file.
To do this, we need to remove the styles.css file by adding the following action tag in the XML, just before all the CSS declarations we have already made:

<action method="removeItem">
  <type>skin_css</type>
  <name>css/styles.css</name>
</action>

Hence, we removed the styles.css file and added it again just after adding Bootstrap's CSS file (bootstrap.min.css):

<action method="addItem">
  <type>skin_css</type>
  <stylesheet>css/styles.css</stylesheet>
</action>

If it seems a little confusing, the following is a quick view of the CSS declaration:

<!-- Removing the styles.css declared in the base theme -->
<action method="removeItem">
  <type>skin_css</type>
  <name>css/styles.css</name>
</action>

<!-- Adding Bootstrap Css -->
<action method="addItem">
  <type>skin_css</type>
  <stylesheet>css/bootstrap.min.css</stylesheet>
</action>

<!-- Adding the styles.css again -->
<action method="addItem">
  <type>skin_css</type>
  <stylesheet>css/styles.css</stylesheet>
</action>

Adding conditional JavaScript code

If you check the Bootstrap documentation, you can see that in the HTML5 boilerplate template, the following conditional JavaScript code is added to make Internet Explorer (IE) HTML5 compliant:

<!--[if lt IE 9]>
  <script src="https://oss.maxcdn.com/libs/html5shiv/3.7.0/html5shiv.js"></script>
  <script src="https://oss.maxcdn.com/libs/respond.js/1.3.0/respond.min.js"></script>
<![endif]-->

To integrate them into the theme, we can declare them in the same way as the other script tags, but with conditional parameters. To do this, we need to perform the following steps:

1. Download the files at https://oss.maxcdn.com/libs/html5shiv/3.7.0/html5shiv.js and https://oss.maxcdn.com/libs/respond.js/1.3.0/respond.min.js.
2. Move the downloaded files into the js folder of the theme.
3. As always, integrate the JavaScript through the .xml file, but with the conditional parameters as follows:

<action method="addItem">
  <type>skin_js</type>
  <name>js/html5shiv.js</name>
  <params/><if>lt IE 9</if>
</action>
<action method="addItem">
  <type>skin_js</type>
  <name>js/respond.min.js</name>
  <params/><if>lt IE 9</if>
</action>

A quick recap of our local.xml file

Now, after we insert all the JavaScript and CSS files in the .xml file, the final local.xml file should look as follows:

<?xml version="1.0" encoding="UTF-8"?>
<layout version="0.1.0">
  <default translate="label" module="page">
    <reference name="head">

      <!-- Adding Javascripts -->
      <action method="addItem">
        <type>skin_js</type>
        <name>js/jquery.min.js</name>
      </action>
      <action method="addItem">
        <type>skin_js</type>
        <name>js/bootstrap.min.js</name>
      </action>
      <action method="addItem">
        <type>skin_js</type>
        <name>js/jquery.scripts.js</name>
      </action>
      <action method="addItem">
        <type>skin_js</type>
        <name>js/html5shiv.js</name>
        <params/><if>lt IE 9</if>
      </action>
      <action method="addItem">
        <type>skin_js</type>
        <name>js/respond.min.js</name>
        <params/><if>lt IE 9</if>
      </action>

      <!-- Removing the styles.css -->
      <action method="removeItem">
        <type>skin_css</type>
        <name>css/styles.css</name>
      </action>

      <!-- Adding Bootstrap Css -->
      <action method="addItem">
        <type>skin_css</type>
        <stylesheet>css/bootstrap.min.css</stylesheet>
      </action>

      <!-- Adding the styles.css -->
      <action method="addItem">
        <type>skin_css</type>
        <stylesheet>css/styles.css</stylesheet>
      </action>

    </reference>
  </default>
</layout>

Defining the main layout design template

A quick tip for our theme is to define the main template for the site in the default handle. To do this, we have to define the template in the most important reference, root.
In a few words, the root reference is the block that defines the structure of a page. Let's suppose that we want to use a main structure having two columns with the left sidebar for the theme. To change it, we should add the setTemplate action in the root reference as follows:

<reference name="root">
  <action method="setTemplate">
    <template>page/2columns-left.phtml</template>
  </action>
</reference>

You have to insert the reference name="root" tag with the action inside the default handle, usually before every other reference.

Defining the HTML5 boilerplate for main templates

After integrating Bootstrap and jQuery, we have to create our HTML5 page structure for the entire base template. The following are the structure files, located at app/design/frontend/bookstore/template/page/:

1column.phtml
2columns-left.phtml
2columns-right.phtml
3columns.phtml

Twitter Bootstrap uses scaffolding with containers, a row, and 12 columns. So, its page layout would be as follows:

<div class="container">
  <div class="row">
    <div class="col-md-3"></div>
    <div class="col-md-9"></div>
  </div>
</div>

This structure is very important to create responsive sections of the store. Now we will need to edit the templates to change to HTML5 and add the Bootstrap scaffolding. Let's look at the following 2columns-left.phtml main template file:

<!DOCTYPE HTML>
<html>
<head>
  <?php echo $this->getChildHtml('head') ?>
</head>
<body <?php echo $this->getBodyClass()?' class="'.$this->getBodyClass().'"':'' ?>>
  <?php echo $this->getChildHtml('after_body_start') ?>
  <?php echo $this->getChildHtml('global_notices') ?>
  <header>
    <?php echo $this->getChildHtml('header') ?>
  </header>
  <section id="after-header">
    <div class="container">
      <?php echo $this->getChildHtml('slider') ?>
    </div>
  </section>
  <section id="maincontent">
    <div class="container">
      <div class="row">
        <?php echo $this->getChildHtml('breadcrumbs') ?>
        <aside class="col-left sidebar col-md-3">
          <?php echo $this->getChildHtml('left') ?>
        </aside>
        <div class="col-main col-md-9">
          <?php echo $this->getChildHtml('global_messages') ?>
          <?php echo $this->getChildHtml('content') ?>
        </div>
      </div>
    </div>
  </section>
  <footer id="footer">
    <div class="container">
      <?php echo $this->getChildHtml('footer') ?>
    </div>
  </footer>
  <?php echo $this->getChildHtml('before_body_end') ?>
  <?php echo $this->getAbsoluteFooter() ?>
</body>
</html>

You will notice that I removed the Magento layout classes col-main, col-left, main, and so on, as these are being replaced by the Bootstrap classes. I also added a new section, after-header, because we will need it after we develop the home page slider. Don't forget to replicate this structure in the other template files, 1column.phtml, 2columns-right.phtml, and 3columns.phtml, changing the columns as you need.

Summary

We've seen how to integrate Bootstrap and start the development of a Magento theme with the most famous framework in the world. Bootstrap is very neat, flexible, and modular, and you can use it as you prefer to create your custom theme. However, please keep in mind that it can be a big drawback on the loading time of the page. By adding the JavaScript and CSS files via XML as shown here, you allow Magento to minify them and speed up the loading time of the site.

Resources for Article:

Further resources on this subject:

Integrating Twitter with Magento [article]
Magento: Payment and shipping method [article]
Magento: Exploring Themes [article]

Getting Started with CMIS

Packt
19 Mar 2014
9 min read
What is CMIS?

The goal of CMIS is to provide a standard method for accessing content from different content repositories. Using CMIS service calls, it is possible to navigate through and create content in a repository. CMIS also includes a query language for searching both the metadata and the full-text content stored in a repository.

The CMIS standard defines the protocols and formats for the requests and responses of API service calls made to a repository. CMIS acts as a standard interface and protocol for accessing content repositories, similar to how ANSI-SQL acts as a common-denominator language for interacting with different databases.

The use of the CMIS API for accessing repositories brings with it a number of benefits. Perhaps chief among these is the fact that access to CMIS is language neutral. Any language that supports HTTP services can be used to access a CMIS-enabled repository. Client software can be written to use a single API and be deployed to run against multiple CMIS-compliant repositories.
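To give a flavor of the query language mentioned above, here are two illustrative statements (hypothetical examples, not drawn from any specific repository): the first filters on a metadata property, and the second performs a full-text search:

SELECT * FROM cmis:document WHERE cmis:name LIKE 'contract%'

SELECT cmis:objectId, cmis:name FROM cmis:document WHERE CONTAINS('budget')

Both read much like ANSI-SQL, which is the point: the same statement should run against any repository that implements the query capability.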
Alfresco and CMIS

The original draft for CMIS 0.5 was written by EMC, IBM, and Microsoft. Shortly after that draft, Alfresco and other vendors joined the CMIS standards group. Alfresco was an early CMIS adopter and offered an implementation of CMIS version 0.5 in 2008. In 2009, Alfresco began hosting an online preview of the CMIS standard. The server, accessible via the http://cmis.alfresco.com URL, still exists and implements the latest CMIS standard. As of this writing, that URL hosts a preview of CMIS 1.1 features.

In mid-2010, just after the CMIS 1.0 standard was approved, Alfresco released CMIS in both the Alfresco Community and Enterprise editions. In 2012, with Alfresco version 4.0, Alfresco moved from a home-grown CMIS runtime implementation to one that uses the Apache Chemistry OpenCMIS Server Framework. From that release, developers have been able to customize Alfresco using the OpenCMIS Java API.

Overview of the CMIS standard

Next, we discuss the details of the CMIS specification, particularly the domain model, the different services that it provides, and the supported protocol bindings.

Domain model (Object model)

Every content repository vendor has their own definition of a content or object model. Alfresco, for example, has rich content modeling capabilities, such as types and aspects that can inherit from other types and aspects, and properties that can be assigned attributes like data type, multi-valued, and required. But there are wide differences in the ways in which different vendors have implemented content modeling. In the Documentum ECM system, for example, the generic content type is called dm_document, while in Alfresco it is called cm:content. Another example is the concept of an aspect as used in Alfresco; many repositories do not support that idea.

The CMIS Domain Model is an attempt by the CMIS standardization group to define a framework generic enough to describe content models and map to concepts used by many different repository vendors. The CMIS Domain Model defines a Repository as a container and an entry point to all content items, from now on called objects. All objects are classified by an Object Type, which describes a common set of Properties (like Type ID, Parent, and Display Name). There are five base types of objects: Document, Folder, Relationship, Policy, and Item (available from CMIS 1.1), and these all inherit from Object Type.

In addition to the five base object types, there are also a number of property types that can be used when defining new properties for an object type. These are shown in the figure: String, Boolean, Decimal, Integer, and DateTime. Besides these property types, there are also the URI, Id, and HTML property types, not shown in the figure.

Taking a closer look at each one of the base types, we can see that:

Document almost always corresponds to a file, although it need not have any content (when you upload a file via, for example, the AtomPub binding, the metadata is created with the first request and the content for the file is posted with the second request).

Folder is a container for file-able objects such as folders and documents. Immediately after filing a folder or document into a folder, an implicit parent-child relationship is automatically created. The fileable property of the object type definition specifies whether an object is file-able or not.

Relationship objects define a relationship between a target and a source object. Objects can have multiple relationships with other objects. The support for relationship objects is optional.

Policy is a way of defining administrative policies to manage objects. An object to which a policy may be applied is called a controllable object (its controllablePolicy property has to be set to true). For example, a CMIS policy could be used to define a retention policy. A policy is opaque and has no meaning to the repository. It must be implemented and enforced in a repository-specific way; for example, rules might be used in Alfresco to enforce a policy. The support for policy objects is optional.

Item (CMIS 1.1) objects represent a generic type of CMIS information asset. For example, this could be a user or group object. Item objects are not versionable and do not have content streams like documents, but they do have properties like all other CMIS objects. The support for item objects is optional.

Additional object types can be defined in a repository as custom subtypes of the base types. The Legal Case type shown in the figure above is an example. CMIS services are provided for the discovery of object types that are defined in a repository. However, object type management services, such as the creation, modification, and deletion of an object type, are not covered by the CMIS standard.

An object has one primary base object type, such as Document or Folder, which cannot be changed. An object can also have secondary object types applied to it (CMIS 1.1). A secondary type is a named class that may add extra properties to an object in addition to the properties defined by the object's primary base object type (this is similar to the concept of aspects in Alfresco).

Every CMIS object has an opaque and immutable Object Identity (ID), which is assigned by the repository when the object is created. In the case of Alfresco, a Node Reference is created, which becomes the Object ID. The ID uniquely identifies an object within a repository regardless of the type of the object.

All CMIS objects have a set of named, but not explicitly ordered, properties. Within an object, each property is uniquely identified by its Property ID. In addition, a document object can have a Content Stream, which is then used to hold the actual byte content from a file. A document can also have one or more Renditions, like a thumbnail, a different-sized image, or an alternate representation of the content stream.
Document or folder objects can have one Access Control List (ACL), which controls access to the document or folder. An ACL is made up of a list of Access Control Entries (ACEs). An ACE in turn represents one or more permissions being granted to a principal, such as a user, group, role, or something similar.

All objects and properties are defined in the cmis namespace. From now on, we will refer to the different objects and properties via their fully qualified names, for example, cmis:document or cmis:name.

Services

The following CMIS services can access and manage CMIS objects in the repository:

Repository Services: These are used to discover information about the repository, including repository IDs (more than one repository could be managed by the endpoint). Since many features are optional, this provides a way to find out which are supported. CMIS 1.1 compliant repositories also support the creation of new types dynamically. Methods: getRepositories, getRepositoryInfo, getTypeChildren, getTypeDescendants, getTypeDefinition, createType (CMIS 1.1), updateType (CMIS 1.1), deleteType (CMIS 1.1).

Navigation Services: These are used to navigate the folder hierarchy. Methods: getChildren, getDescendants, getFolderTree, getFolderParent, getObjectParents, getCheckedOutDocs.

Object Services: These services provide ID-based CRUD (Create, Read, Update, Delete) operations. Methods: createDocument, createDocumentFromSource, createFolder, createRelationship, createPolicy, createItem (CMIS 1.1), getAllowableActions, getObject, getProperties, getObjectByPath, getContentStream, getRenditions, updateProperties, bulkUpdateProperties (CMIS 1.1), moveObject, deleteObject, deleteTree, setContentStream, appendContentStream (CMIS 1.1), deleteContentStream.

Multi-filing Services: These services (optional) make it possible to file an object into several folders (multi-filing) or outside the folder hierarchy (un-filing). This service is not used to create or delete objects. Methods: addObjectToFolder, removeObjectFromFolder.

Discovery Services: These are used to search for queryable objects within the repository (objects with the property queryable set to true). Methods: query, getContentChanges.

Versioning Services: These are used to manage versioning of document objects; other objects are not versionable. Whether or not a document can be versioned is controlled by the versionable property in the object type. Methods: checkOut, cancelCheckOut, checkIn, getObjectOfLatestVersion, getPropertiesOfLatestVersion, getAllVersions.

Relationship Services: These (optional) are used to retrieve the relationships in which an object is participating. Methods: getObjectRelationships.

Policy Services: These (optional) are used to apply a policy object to, or remove it from, an object which has the property controllablePolicy set to true. Methods: applyPolicy, removePolicy, getAppliedPolicies.

ACL Services: This service is used to discover and manage the Access Control List (ACL) for an object, if the object has one. Methods: applyACL and getACL.
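These services are exposed through the protocol bindings, such as AtomPub. As a minimal sketch of what that looks like on the wire, the following Ruby snippet fetches a repository's AtomPub service document, which lists the repository's workspaces and collections. The endpoint URL and the admin/admin credentials are assumptions based on a default local Alfresco install, not part of the CMIS specification:

require 'net/http'
require 'uri'

# Assumed CMIS 1.1 AtomPub endpoint for a default local Alfresco install.
uri = URI('http://localhost:8080/alfresco/api/-default-/public/cmis/versions/1.1/atom')

request = Net::HTTP::Get.new(uri)
request.basic_auth('admin', 'admin') # assumed demo credentials

response = Net::HTTP.start(uri.hostname, uri.port) do |http|
  http.request(request)
end

# The service document is AtomPub XML describing the repository:
# its ID, capabilities, and collections (root folder, query, ...).
puts response.code
puts response.body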
Summary

In this article, we introduced the CMIS standard and how it came about. We then covered the CMIS domain model with its five base object types: document, folder, relationship, policy, and item (CMIS 1.1). We also learned that the CMIS standard defines a number of services, such as navigation and discovery, which make it possible to manipulate objects in a content management system repository.

Resources for Article:

Further resources on this subject:

Content Delivery in Alfresco 3 [Article]
Getting Started with the Alfresco Records Management Module [Article]
Managing Content in Alfresco [Article]