How-To Tutorials - CMS and E-Commerce

830 Articles

Using a LINQ query in LINQPad

Packt
05 Sep 2013
3 min read
The standard version

We are going to implement a simple scenario: given a deck of 52 cards, we want to pick a random number of cards and then take out all of the hearts. From this stack of hearts, we will discard the first two, take the next five cards (if possible), and order them by their face value for display. You can try it in a C# Program query in LINQPad:

```csharp
public static Random random = new Random();

void Main()
{
    var deck = CreateDeck();
    var randomCount = random.Next(52);
    var hearts = new Card[randomCount];
    var j = 0;
    // take all hearts out
    for (var i = 0; i < randomCount; i++)
    {
        if (deck[i].Suit == "Hearts")
        {
            hearts[j++] = deck[i];
        }
    }
    // resize the array to avoid null references
    Array.Resize(ref hearts, j);
    // check that we have at least 2 cards; if not, stop
    if (hearts.Length <= 2) return;
    // check how many cards we can take
    var count = hearts.Length - 2;
    // the most we need to take is 5
    if (count > 5)
    {
        count = 5;
    }
    // take the cards
    var finalDeck = new Card[count];
    Array.Copy(hearts, 2, finalDeck, 0, count);
    // now order the cards
    Array.Sort(finalDeck, new CardComparer());
    // display the result
    finalDeck.Dump();
}

public class Card
{
    public string Suit { get; set; }
    public int FaceValue { get; set; }
}

// Create the deck of cards
public Card[] CreateDeck()
{
    var suits = new[] { "Spades", "Clubs", "Hearts", "Diamonds" };
    var deck = new Card[52];
    for (var i = 0; i < 52; i++)
    {
        deck[i] = new Card
        {
            Suit = suits[i / 13],
            FaceValue = i - (13 * (i / 13)) + 1
        };
    }
    // randomly shuffle the deck
    for (var i = deck.Length - 1; i > 0; i--)
    {
        var j = random.Next(i + 1);
        var tmp = deck[j];
        deck[j] = deck[i];
        deck[i] = tmp;
    }
    return deck;
}

// CardComparer compares two cards by their face value
public class CardComparer : Comparer<Card>
{
    public override int Compare(Card x, Card y)
    {
        return x.FaceValue.CompareTo(y.FaceValue);
    }
}
```

Even without counting the CreateDeck() method, we had to write quite a few operations to produce the expected result (your values might differ, as we are using random cards).

Depending on the data, LINQPad will add contextual information to the output. For example, in this sample it adds a bottom row with the sum of all the numeric values (here, only FaceValue). Also, if you click on the horizontal graph button, you will get a visual representation of your data. This information is not always relevant, but it can help you explore your data.

Summary

In this article we saw how LINQ queries can be used in LINQPad, which puts the powerful query capabilities of LINQ to full use.
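
For comparison, the same scenario collapses into a single chained query over the helpers defined above. This is a sketch rather than the article's own listing (the excerpt stops at the imperative version):

```csharp
void Main()
{
    var deck = CreateDeck();
    var randomCount = random.Next(52);

    var finalDeck = deck
        .Take(randomCount)                     // pick a random number of cards
        .Where(card => card.Suit == "Hearts")  // take out all of the hearts
        .Skip(2)                               // discard the first two
        .Take(5)                               // take the next five (if possible)
        .OrderBy(card => card.FaceValue);      // order them by face value

    finalDeck.Dump();
}
```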

Chef Infrastructure

Packt
05 Sep 2013
10 min read
First, let's talk about the terminology used in the Chef universe. A cookbook is a collection of recipes – codifying the actual resources which should be installed and configured on your node – together with the files and configuration templates needed. Once you've written your cookbooks, you need a way to deploy them to the nodes you want to provision. Chef offers multiple ways to do this. The most widely used approach is a central Chef Server; you can either run your own or sign up for Opscode's Hosted Chef. The Chef Server is the central registry with which each node must be registered, and it distributes the cookbooks to the nodes based on their configuration settings. Knife is Chef's command-line tool for interacting with the Chef Server; you use it for uploading cookbooks and managing other aspects of Chef. On your nodes, you need to install the Chef Client – the part that retrieves the cookbooks from the Chef Server and executes them on the node.

In this article, we'll see the basic infrastructure components of a Chef setup at work and learn how to use the basic tools. Let's get started by looking at how to use Git as a version control system for your cookbooks.

Using version control

Do you manually back up every file before you change it? And do you invent creative filename extensions like _me and _you when you try to collaborate on a file? If you answered yes to either of these questions, it's time to rethink your process. A version control system (VCS) helps you stay sane when dealing with important files and collaborating on them. Using version control is a fundamental part of any infrastructure automation. There are multiple solutions (some free, some paid) for managing source version control, including Git, SVN, Mercurial, and Perforce. Due to its popularity in the Chef community, we will be using Git. However, you could easily use any other version control system with Chef.

Getting ready

You'll need Git installed on your box. Either use your operating system's package manager (such as Apt on Ubuntu or Homebrew on OS X), or simply download the installer from www.git-scm.org.

Git is a distributed version control system. This means that you don't necessarily need a central host for storing your repositories, but in practice, using GitHub as your central repository has proven to be very helpful. In this article, I'll assume that you're using GitHub. Therefore, you need to go to github.com and create a (free) account to follow the instructions given in this article. Make sure that you upload your SSH key following the instructions at https://help.github.com/articles/generating-ssh-keys, so that you're able to use the SSH protocol to interact with your GitHub account.

As soon as you've created your GitHub account, you should create your repository by visiting https://github.com/new and using chef-repo as the repository name.

How to do it...

Before you can write any cookbooks, you need to set up your initial Git repository on your development box. Opscode provides an empty Chef repository to get you started. Let's see how you can set up your own Chef repository with Git using Opscode's skeleton:

1. Download Opscode's skeleton Chef repository as a tarball:

```
mma@laptop $ wget http://github.com/opscode/chef-repo/tarball/master
...TRUNCATED OUTPUT...
2013-07-05 20:54:24 (125 MB/s) - 'master' saved [9302/9302]
```

2. Extract the downloaded tarball:

```
mma@laptop $ tar xzvf master
```

3. Rename the directory. Replace 2c42c6a with whatever your downloaded tarball contained in its name:

```
mma@laptop $ mv opscode-chef-repo-2c42c6a/ chef-repo
```

4. Change into your newly created Chef repository:

```
mma@laptop $ cd chef-repo/
```

5. Initialize a fresh Git repository:

```
mma@laptop:~/chef-repo $ git init .
Initialized empty Git repository in /Users/mma/work/chef-repo/.git/
```

6. Connect your local repository to your remote repository on github.com. Make sure to replace mmarschall with your own GitHub username:

```
mma@laptop:~/chef-repo $ git remote add origin git@github.com:mmarschall/chef-repo.git
```

7. Add and commit Opscode's default directory structure:

```
mma@laptop:~/chef-repo $ git add .
mma@laptop:~/chef-repo $ git commit -m "initial commit"
[master (root-commit) 6148b20] initial commit
10 files changed, 339 insertions(+), 0 deletions(-)
create mode 100644 .gitignore
...TRUNCATED OUTPUT...
create mode 100644 roles/README.md
```

8. Push your initialized repository to GitHub. This makes it available to all your co-workers to collaborate on it:

```
mma@laptop:~/chef-repo $ git push -u origin master
...TRUNCATED OUTPUT...
To git@github.com:mmarschall/chef-repo.git
* [new branch] master -> master
```

How it works...

You downloaded a tarball containing Opscode's skeleton repository. Then, you initialized your chef-repo and connected it to your own repository on GitHub. After that, you added all the files from the tarball to your repository and committed them. This makes Git track your files and the changes you make later. As a last step, you pushed your repository to GitHub, so that your co-workers can use your code too.

There's more...

Let's assume you're working on the same chef-repo repository together with your co-workers. They cloned your repository, added a new cookbook called other_cookbook, committed their changes locally, and pushed their changes back to GitHub. Now it's time for you to get the new cookbook down to your own laptop. Pull your co-workers' changes from GitHub. This will merge their changes into your local copy of the repository:

```
mma@laptop:~/chef-repo $ git pull
From github.com:mmarschall/chef-repo
 * branch master -> FETCH_HEAD
...TRUNCATED OUTPUT...
create mode 100644 cookbooks/other_cookbook/recipes/default.rb
```

In the case of any conflicting changes, Git will help you merge and resolve them.

Installing Chef on your workstation

If you want to use Chef, you'll need to install it on your local workstation first. You'll develop your configurations locally and use Chef to distribute them to your Chef Server. Opscode provides a fully packaged version which does not have any external prerequisites. This fully packaged Chef is called the Omnibus Installer. We'll see how to use it in this section.

Getting ready

Make sure you have curl installed on your box by following the instructions available at http://curl.haxx.se/download.html.

How to do it...

Let's see how to install Chef on your local workstation using Opscode's Omnibus Chef installer:

1. In your local shell, run the following command:

```
mma@laptop:~/chef-repo $ curl -L https://www.opscode.com/chef/install.sh | sudo bash
Downloading Chef...
...TRUNCATED OUTPUT...
Thank you for installing Chef!
```

2. Add the newly installed Ruby to your path:

```
mma@laptop:~ $ echo 'export PATH="/opt/chef/embedded/bin:$PATH"' >> ~/.bash_profile && source ~/.bash_profile
```

How it works...

The Omnibus Installer downloads Ruby and all the required Ruby gems into /opt/chef/embedded. By adding the /opt/chef/embedded/bin directory to your .bash_profile, the Chef command-line tools become available in your shell.

There's more...

If you already have Ruby installed on your box, you can simply install the Chef Ruby gem by running:

```
mma@laptop:~ $ gem install chef
```

Using the Hosted Chef platform

If you want to get started with Chef right away (without the need to install your own Chef Server), or want a third party to give you a Service Level Agreement (SLA) for your Chef Server, you can sign up for Hosted Chef by Opscode. Opscode operates Chef as a cloud service. It's quick to set up and gives you full control, using users and groups to control the access permissions to your Chef setup. We'll configure Knife, Chef's command-line tool, to interact with Hosted Chef, so that you can start managing your nodes.

Getting ready

Before being able to use Hosted Chef, you need to sign up for the service. There is a free account for up to five nodes. Visit http://www.opscode.com/hosted-chef and register for a free trial or the free account. I registered as the user webops with an organization short-name of awo. After registering your account, it is time to prepare your organization to be used with your chef-repo repository.

How to do it...

Carry out the following steps to interact with Hosted Chef:

1. Navigate to http://manage.opscode.com/organizations. After logging in, you can start downloading your validation keys and configuration file.
2. Select your organization to be able to see its contents using the web UI.
3. Regenerate the validation key for your organization and save it as <your-organization-short-name>.pem in the .chef directory inside your chef-repo repository.
4. Generate the Knife config and put the downloaded knife.rb into the .chef directory inside your chef-repo directory as well. Make sure you replace webops with the username you chose for Hosted Chef and awo with the short-name you chose for your organization:

```ruby
current_dir = File.dirname(__FILE__)
log_level :info
log_location STDOUT
node_name "webops"
client_key "#{current_dir}/webops.pem"
validation_client_name "awo-validator"
validation_key "#{current_dir}/awo-validator.pem"
chef_server_url "https://api.opscode.com/organizations/awo"
cache_type 'BasicFile'
cache_options( :path => "#{ENV['HOME']}/.chef/checksums" )
cookbook_path ["#{current_dir}/../cookbooks"]
```

5. Use Knife to verify that you can connect to your Hosted Chef organization. It should only have your validator client so far. Instead of awo, you'll see your organization's short-name:

```
mma@laptop:~/chef-repo $ knife client list
awo-validator
```

How it works...

Hosted Chef uses two private keys: the organization's key (called the validator) and one for each user. You need to tell Knife where it can find these two keys in your knife.rb file.

The following two lines in your knife.rb file tell Knife which validation client to use and where to find the organization's private key:

```ruby
validation_client_name "awo-validator"
validation_key "#{current_dir}/awo-validator.pem"
```

The following line tells Knife where to find your user's private key:

```ruby
client_key "#{current_dir}/webops.pem"
```

And the following line tells Knife that you're using Hosted Chef. You will find your organization name as the last part of the URL:

```ruby
chef_server_url "https://api.opscode.com/organizations/awo"
```

Using the knife.rb file and the two keys, Knife can now connect to your organization hosted by Opscode. You do not need your own self-hosted Chef Server, nor do you need to use Chef Solo, in this setup.

There's more...

This setup is good for you if you do not want to worry about running, scaling, and updating your own Chef Server, and if you're happy with saving all your configuration data in the cloud (under Opscode's control). If you need to keep all your configuration data within your own network boundaries, you can sign up for Private Chef, a fully supported and enterprise-ready version of Chef Server. If you don't need advanced enterprise features like role-based access control or multi-tenancy, then the open source version of Chef Server might be just right for you.

Summary

In this article, we learned about key concepts such as cookbooks, roles, and environments, and how to use some basic tools such as Git, Knife, Chef Shell, Vagrant, and Berkshelf.

Managing Adobe Connect Meeting Room

Packt
04 Sep 2013
6 min read
The Meeting Information page

In order to get to the Meeting Information page, you first need to navigate to the Meeting List page by following these steps:

1. Log in to the Connect application.
2. Click on the Meetings tab in the Home Page main menu.

When you access the Meetings page, the My Meetings link is opened by default and the view is set to the Meeting List tab. You will find the meeting listed on this page. By clicking on the Cookbook Meeting option in the Name column, you will be presented with the Meeting Information page.

In the section titled Meeting Information, you can examine various pieces of information about the selected meeting: Name, Summary, Start Time, Duration, Number of users in room (those currently present in the meeting room), URL, Language (selected), and the Access rights of the meeting. The two most important fields are the link to the meeting URL and the Enter Meeting Room button; you can join the selected meeting room by clicking on either of these two options.

In the upper portion of this page, you will notice the navigation bar with the following links:

- Meeting Information
- Edit Information
- Edit Participants
- Invitations
- Uploaded Content
- Recordings
- Reports

Selecting any of these links opens the page associated with it. The main focus of this article is the functionality of these pages. Since we have explained the Meeting Information page, we can proceed to the Edit Information page.

The Edit Information page

The Edit Information page is very similar to the Enter Meeting Information page. We will briefly cover the meeting settings you can edit on this page:

- Name
- Summary
- Start time
- Duration
- Language
- Access
- Audio conference settings

Any changes made on this page are preserved by clicking on the Save button at the very bottom of the page. Changes will not affect participants who are already logged in to the room, except for changes to the Audio Conference settings. Next to the Save button you will find the Cancel button; any changes made on the Edit Information page that have not been saved are reverted by clicking on it.

The Edit Participants page

After the Edit Information page, it's time to access the next page by clicking on the Edit Participants link in the navigation bar. This link takes you to the Select Participants page. In addition to the already described features, we will introduce a couple more functionalities that help you add participants, change their roles, or remove them from the meeting.

Example 1 – changing roles

In this example, we will change the role of the Administrators group from participant to presenter by using the Search button. This feature is of great help when a large number of Connect users have already been added as meeting participants. Follow these steps:

1. In the Current Participants For Cookbook Meeting table on the right-hand side, click on the Search button located in the lower-left corner of the table. A text field for instant search is displayed.
2. In the text field, enter the name of the Administrators group or part of the group name (the auto-complete function should recognize the name of the group).
3. When the group is present in the table, select it.
4. Click on the Set User Role button.
5. Select a new role for this group in the menu. For the purpose of this example, we will select the Presenter role.

By completing this action, you grant Presenter privileges for the Cookbook Meeting to all the administrators.

Example 2 – removing a user

In this example, we will show you how to remove a specific user from the selected meeting. For the purpose of this exercise, we will remove the Administrators group from the Participants list. In order to complete this action, follow these steps:

1. Select Administrators in the Current Participants For Cookbook Meeting table.
2. Click on the Remove button.

Now all the members of this group are excluded from the meeting, and Administrators should no longer be present in the list.

Example 3 – adding a specific user

This example demonstrates how to add a specific user from any group. For example, we will add a user from the Authors group to the Current Participants list:

1. In the Available Users and Groups table, double-click on the Authors group. This changes the user interface of the table to list all the users that belong to the Authors group. Note that the table header is now changed to Authors.
2. Select a specific user and click on the Add button. This adds the selected user from the Authors group to the Current Participants For Cookbook Meeting table.

One thing worth mentioning here is the ability to perform multiple selections in both the Available Users and Groups and Current Participants For Cookbook Meeting tables. To select multiple users and groups, click on them while holding the Ctrl or Shift key.

By demonstrating these examples, we have reviewed the functionality behind the Edit Participants link.

Summary

In this article, we learned how to edit the different settings of already existing meetings. We covered the following topics:

- The Meeting Information page
- The Edit Information page
- The Edit Participants page

Getting on the IBus

Packt
03 Sep 2013
10 min read
Why NServiceBus?

Before diving in, we should take a moment to consider why NServiceBus might be a tool worth adding to your repertoire. If you're eager to get started, feel free to skip this section and come back later.

So what is NServiceBus? It's a powerful, extensible framework that will help you to leverage the principles of service-oriented architecture (SOA) to create distributed systems that are more reliable, more extensible, more scalable, and easier to update. That's all well and good, but if you're just picking up this book for the first time, why should you care? What problems does it solve? How will it make your life better?

Ask yourself whether any of the following situations describe you:

- My code updates values in several tables in a transaction, which acquires locks on those tables, so it frequently runs into deadlocks under load. I've optimized all the queries I can. The transaction keeps the database consistent, but the user gets an ugly exception and has to retry what they were doing, which doesn't make them very happy.
- Our order processing system sometimes fails on the third of three database calls. The transaction rolls back and we log the error, but we're losing money because the end user doesn't know if their order went through or not, and they're not willing to retry for fear of being double charged, so we're losing business to our competitor.
- We built a system to process images for our clients. It worked fine for a while, but now we've become a victim of our own success. We designed it to be multithreaded (which was no small feat!), but we already maxed out the original server it was running on, and at the rate we're adding clients it's only a matter of time until we max out this one too. We need to scale it out to run on multiple servers but have no idea how to do it.
- We have a solution that integrates with a third-party web service, but when we call the web service we also need to update data in a local database. Sometimes the web service times out, so our database transaction rolls back, but sometimes the web service call does actually complete at the remote end, so now our local data and our third-party provider's data are out of sync.
- We're sending emails as part of a complex business process. It is designed to be retried in the event of a failure, but now customers are complaining that they're receiving duplicate emails, sometimes dozens of them. A failure occurs after the email is sent, the process is retried, and the email is sent over and over until the failure no longer occurs.
- I have a long-running process that gets kicked off from a web application. The website sits on an interstitial page while the backend process runs, similar to what you would see on a travel site when you search for plane tickets. This process is difficult to set up and fairly brittle. Sometimes the backend process fails to start and the web page just spins forever.
- We added latitude and longitude to our customer database, but now it is a nightmare to keep that information up to date. When a customer's address changes, there is nothing to make sure the location information is also recalculated. There are dozens of procedures that update the customer address, and not all of them are under our department's control.

If any of these situations has you nodding your head in agreement, I invite you to read on. NServiceBus will help you to make multiple transactional updates utilizing the principle of eventual consistency so that you do not encounter deadlocks. It will ensure that valuable customer order data is not lost in the deep dark depths of a multi-megabyte log file. By the end of the book, you'll be able to build systems that can easily scale out, as well as up. You'll be able to reliably perform non-transactional tasks such as calling web services and sending emails. You will be able to easily start long-running processes in an application server layer, leaving your web application free to process incoming requests, and you'll be able to unravel your spaghetti codebases into a logical system of commands, events, and handlers that will enable you to more easily add new features and version the existing ones.

You could try to do all this on your own by rolling your own messaging infrastructure and carefully applying the principles of service-oriented architecture, but that would be really time consuming. NServiceBus is the easiest way to solve the aforementioned problems without having to expend too much effort to get it right, allowing you to put your focus on your business concerns, where it belongs. So if you're ready, let's get started creating an NServiceBus solution.

Getting the code

We will be covering a lot of information very quickly in this article, so if you see something that doesn't immediately make sense, don't panic! Once we have the basic example in place, we will loop back and explain some of the finer points more completely.

There are two main ways to get the NServiceBus code integrated with your project: by downloading the Windows Installer package, and via NuGet. I recommend you use Windows Installer the first time to ensure that your machine is set up properly to run NServiceBus, and then use NuGet to actually include the assemblies in your project. Windows Installer automates quite a bit of setup for you, all of which can be controlled through the advanced installation options:

- NServiceBus binaries, tools, and sample code are installed.
- The NServiceBus Management Service is installed to enable integration with ServiceInsight.
- Microsoft Message Queuing (MSMQ) is installed on your system if it isn't already. MSMQ provides the durable, transactional messaging that is at the core of NServiceBus.
- The Distributed Transaction Coordinator (DTC) is configured on your system. This will allow you to receive MSMQ messages and coordinate data access within a transactional context.
- RavenDB is installed, which provides the default persistence mechanism for NServiceBus subscriptions, timeouts, and saga data.
- NServiceBus performance counters are added to help you monitor NServiceBus performance.

Download the installer from http://particular.net/downloads and install it on your machine. After the install is complete, everything will be accessible from your Start Menu by navigating to All Programs | Particular Software | NServiceBus.

The install package includes several samples that cover all the basics as well as several advanced features. The Video Store sample is a good starting point. Multiple versions of it are available for the different message transports supported by NServiceBus; if you don't know which one to use, take a look at VideoStore.Msmq. I encourage you to work through all of the samples, but for now we are going to roll our own solution by pulling in the NServiceBus NuGet packages.

NServiceBus NuGet packages

Once your computer has been prepared for the first time, the most direct way to include NServiceBus within an application is to use the NuGet packages. There are four core NServiceBus NuGet packages:

- NServiceBus.Interfaces: This package contains only interfaces and abstractions, not actual code or logic. This is the package we will use for message assemblies.
- NServiceBus: This package contains the core assembly with most of the code that drives NServiceBus, except for the hosting capability. This is the package we will reference when we host NServiceBus within our own process, such as in a web application.
- NServiceBus.Host: This package contains the service host executable. With the host we can run an NServiceBus service endpoint from the command line during development, and then install it as a Windows service for production use.
- NServiceBus.Testing: This package contains a framework for unit testing NServiceBus endpoints and sagas.

The NuGet packages will also attempt to verify that your system is properly prepared through PowerShell cmdlets that ship as part of the package. However, if you are not running Visual Studio as an Administrator, this can be problematic, as the tasks they perform sometimes require elevated privileges. For this reason, it's best to run Windows Installer before getting started.

Creating a message assembly

The first step in creating an NServiceBus system is to create a messages assembly. Messages in NServiceBus are simply plain old C# classes. Like the WSDL document of a web service, your message classes form a contract by which services communicate with each other.

For this example, let's pretend we're creating a website like many on the Internet, where users can join and become members. We will construct our project so that the user is created in a backend service and not in the main code of the website. Follow these steps to create your solution:

1. In Visual Studio, create a new class library project. Name the project UserService.Messages and the solution simply Example. This first project will be your messages assembly.
2. Delete the Class1.cs file that came with the class project.
3. From the NuGet Package Manager Console, run this command to install the NServiceBus.Interfaces package, which will add the reference to NServiceBus.dll:

```
PM> Install-Package NServiceBus.Interfaces -ProjectName UserService.Messages
```

4. Add a new folder to the project called Commands.
5. Add a new class to the Commands folder called CreateNewUserCmd.cs.
6. Add using NServiceBus; to the using block of the class file. It is very helpful to do this first so that you can see all of the options available with IntelliSense.
7. Mark the class as public and implement ICommand. This is a marker interface, so there is nothing you need to implement.
8. Add public properties for EmailAddress and Name.

When you're done, your class should look like this:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using NServiceBus;

namespace UserService.Messages.Commands
{
    public class CreateNewUserCmd : ICommand
    {
        public string EmailAddress { get; set; }
        public string Name { get; set; }
    }
}
```

Congratulations! You've created a message! This will form the communication contract between the message sender and receiver. Unfortunately, we don't have enough to run yet, so let's keep moving.

Creating a service endpoint

Now we're going to create a service endpoint that will handle our command message:

1. Add a new class library project to your solution. Name the project UserService.
2. Delete the Class1.cs file that came with the class project.
3. From the NuGet Package Manager Console window, run this command to install the NServiceBus.Host package:

```
PM> Install-Package NServiceBus.Host -ProjectName UserService
```

4. Take a look at what the host package has added to your class library. Don't worry; we'll cover this in more detail later:
   - References to NServiceBus.Host.exe, NServiceBus.Core.dll, and NServiceBus.dll
   - An App.config file
   - A class named EndpointConfig.cs
5. In the service project, add a reference to the UserService.Messages project you created before.
6. Right-click on the project file and click on Properties. In the property pages, navigate to the Debug tab and enter NServiceBus.Lite under Command line arguments. This tells NServiceBus not to run the service in production mode while we're just testing. This may seem obvious, but it is part of the NServiceBus promise to be safe by default, meaning you won't be able to mess up when you go to install your service in production.
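
The natural next step, not shown in this excerpt, is the handler for the command. As a sketch of where this is heading (the class name and the logging are illustrative, not the book's own listing), a message handler in the UserService endpoint is a plain class implementing IHandleMessages<T>:

```csharp
using System;
using NServiceBus;
using UserService.Messages.Commands;

namespace UserService
{
    public class CreateNewUserHandler : IHandleMessages<CreateNewUserCmd>
    {
        // The host scans the assembly, finds this class, and calls Handle()
        // whenever a CreateNewUserCmd arrives on the endpoint's queue.
        public void Handle(CreateNewUserCmd message)
        {
            Console.WriteLine("Creating user {0} <{1}>",
                message.Name, message.EmailAddress);
            // Real user-creation logic (for example, a database insert)
            // would go here.
        }
    }
}
```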

Configuring payment models (Intermediate)

Packt
03 Sep 2013
4 min read
How to do it...

Let's learn how to integrate PayPal Website Payments Standard into our store in test mode.

Integrating PayPal Website Payments Standard into our store (test mode)

1. We start by activating PayPal Website Payments Standard from the Payments section under the Extensions menu, and then we edit its settings.
2. The next step is to fill in the information needed for testing the PayPal system. For test purposes, we set the Sandbox Mode option to Yes.
3. We now open a developer account to create test accounts on PayPal. Let's browse to http://developer.paypal.com and sign up for a developer account.
4. After we sign up and log in to the account, click on the Create a preconfigured account link.
5. The next screen proposes an account name with which your account will be created. Now we only need to add funds to the Account Balance field to create the account. Remember that it is a test account, so we can give any virtual amount of funds we want. We now have a test PayPal account that can be used for our test purchases.
6. Let's go to our shop's user interface, add a product to the shopping cart, and proceed to the Checkout page.
7. Let's log in with the test account we have just created and complete a successful test order.

Integrating PayPal Website Payments Pro into our store (live mode)

1. We need to get the API information first. Let's log in to our PayPal account.
2. Click on the User Profile link, and then click on the Update link next to API access in the My selling tools section.
3. The next step is to click on the Request API Credentials link.
4. Choose the option that says Request API signature. This will give us the API information.
5. The next step is to activate Payments Pro in the OpenCart administration interface using the Payments section under the Extensions menu.
6. We need to edit the details and enter the API information. Let's not forget to select No for Test Mode, which means that this will be a live system. Choose Enabled for the Status field.

How it works...

Now let's learn how the PayPal Standard and Pro models work and how they differ from each other.

PayPal Website Payments Standard

PayPal Standard is the easiest payment model to integrate into our store. All we need is a PayPal account and a bank account to withdraw the money to. PayPal Standard has no monthly costs or setup fee; however, the company charges a small percentage on each transaction. Please go to https://www.paypal.com/webapps/mpp/merchant for merchant service details.

The activation of the Standard method is very straightforward. We only need to provide our e-mail address and then set Transaction Method to Sale, Sandbox Mode to No, and Status to Enabled on the administration panel. Unlike the test payments we made earlier, customers can also pay with their credit cards instantly, even without a PayPal account. This makes PayPal a very powerful and popular solution. If you are wary of charging your real PayPal account, a good way to test your live payment environment is to create a dummy product with a price of $0.01 and complete the purchase with this tiny amount.

PayPal Website Payments Pro

This service can be used to charge credit cards using PayPal services in the background. Customers do not need to leave the store at all; the transaction is completed in the shop itself. Many big e-commerce websites operate this way. Currently, the Pro service is only available for merchant accounts located in the US, UK, and Canada.

Summary

This article explained different PayPal integrations, discussing how to integrate the PayPal Website Payments Standard and Pro methods into a simple store.

Oracle ADF Essentials – Adding Business Logic

Packt
03 Sep 2013
19 min read
Adding logic to business components

By default, a business component does not have an explicit Java class. When you want to add Java logic, however, you generate the relevant Java class from the Java tab of the business component. On the Java tab, you also decide which of your methods are to be made available to other objects by choosing to implement a Client Interface. Methods that implement a client interface show up in the Data Control palette and can be called from outside the object.

Logic in entity objects

Remember that entity objects are closest to your database tables – most often, you will have one entity object for every table in the database. This makes the entity object a good place to put data logic that must always be executed. If you place, for example, validation logic in an entity object, it will be applied no matter which view object attempts to change data.

In the database or in an entity object?

Much of the business logic you can place in an entity object can also be placed in the database using database triggers. If other systems are accessing your database tables, business logic should go into the database as much as possible.

Overriding accessors

To use Java in entity objects, you open an entity object and select the Java tab. When you click on the pencil icon, the Select Java Options dialog opens. In this dialog, you can select to generate Accessors (the setXxx() and getXxx() methods for all the attributes) as well as Data Manipulation Methods (the doDML() method; more on this later). When you click on OK, the entity object class is generated for you. You can open it by clicking on the hyperlink, or you can find it in the Application Navigator panel as a new node under the entity object.

If you look inside this file, you will find that your class starts with an import section containing a statement that imports your EntityImpl class. If you have set up your framework extension classes correctly, this could be import com.adfessentials.adf.framework.EntityImpl. You will have to click on the plus sign in the left margin to expand the import section. The Structure panel in the bottom-left shows an overview of the class, including all the methods it contains: you will see a lot of setter and getter methods like getFirstName() and setFirstName(), as well as a doDML() method (described later).

If you were to decide, for example, that last names should always be stored in upper case, you could change the setLastName() method to:

```java
public void setLastName(String value) {
    setAttributeInternal(LASTNAME, value.toUpperCase());
}
```

Working with database triggers

If you decide to keep some of your business logic in database triggers, your triggers might change the values that get passed from the entity object. Because the entity object caches values to save database work, you need to make sure that the entity object stays in sync with the database even if a trigger changes a value. You do this by using the Refresh on Update property. To find this property, select the Attributes subtab on the left and then select the attribute that might get changed. At the bottom of the screen, you see various settings for the attribute, with the Refresh settings in the top-right of the Details tab. Check the Refresh on Update property checkbox if a database trigger might change the attribute value. This makes the ADF framework requery the database after an update has been issued.

Refresh on Insert doesn't work if you are using MySQL and your primary key is generated with AUTO_INCREMENT or set by a trigger: ADF doesn't know the primary key and therefore cannot find the newly inserted row after inserting it. It does work if you are running against an Oracle database, because Oracle SQL syntax has a special RETURNING construct that allows the entity object to get the newly created primary key back.

Overriding doDML()

After the setters and getters, the doDML() method is the one that most often gets overridden. This method is called whenever an entity object wants to execute a Data Manipulation Language (DML) statement like INSERT, UPDATE, or DELETE. It offers you a way to add additional processing; for example, checking that the account balance is zero before allowing a customer to be deleted. In this case, you would add logic to check the account balance and, if the deletion is allowed, call super.doDML() to invoke normal processing.

Another example would be to implement logical delete (records only change state and are not actually deleted from the table). In this case, you would override doDML() as follows:

```java
@Override
protected void doDML(int operation, TransactionEvent e) {
    if (operation == DML_DELETE) {
        operation = DML_UPDATE;
    }
    super.doDML(operation, e);
}
```

As is probably obvious from the code, this simply replaces a DELETE operation with an UPDATE before calling the doDML() method of the superclass (your framework extension EntityImpl, which passes the task on to the Oracle-supplied EntityImpl class). Of course, you also need to change the state of the entity object row, for example in the remove() method. You can find fully functional examples of this approach on various blogs, for example at http://myadfnotebook.blogspot.dk/2012/02/updating-flag-when-deleting-entity-in.html.

You also have the option of completely replacing normal doDML() processing by simply not calling super.doDML(). This could be the case if you want all your data modifications to go via a database procedure – for example, to insert an actor, you would have to call insertActor with first name and last name. In this case, you would write something like:

```java
@Override
protected void doDML(int operation, TransactionEvent e) {
    CallableStatement cstmt = null;
    if (operation == DML_INSERT) {
        String insStmt = "{call insertActor (?,?)}";
        cstmt = getDBTransaction().createCallableStatement(insStmt, 0);
        try {
            cstmt.setString(1, getFirstName());
            cstmt.setString(2, getLastName());
            cstmt.execute();
        } catch (Exception ex) {
            // handle SQL errors
        } finally {
            // close open objects
        }
    }
}
```

If the operation is insert, the above code uses the current transaction (via the getDBTransaction() method) to create a CallableStatement with the string insertActor(?,?). Next, it binds the two parameters (indicated by the question marks in the statement string) to the values for first name and last name (by calling the getter methods for these two attributes). Finally, the code block finishes with a normal catch clause to handle SQL errors and a finally clause to close open objects. Again, fully working examples are available in the documentation and in various blog posts. Normally, you would implement this kind of override in the framework extension EntityImpl class, with additional logic to allow the framework extension class to recognize which specific entity object the operation applies to and which database procedure to call.

Data validation

With the techniques you have just seen, you can implement every kind of business logic your requirements call for. One requirement, however, is so common that it has been built right into the ADF framework: data validation.

Declarative validation

The simplest kind of validation is where you compare one individual attribute to a limit, a range, or a number of fixed values. For this kind of validation, no code is necessary at all. You simply select the Business Rules subtab in the entity object, select an attribute, and click on the green plus sign to add a validation rule; the Add Validation Rule dialog appears. You have a number of options for Rule Type – depending on your choice here, the Rule Definition tab changes to allow you to define the parameters for the rule. On the Failure Handling tab, you can define whether the validation is an error (that must be corrected) or a warning (that the user can override), and you define a message text.

You can even define variable message tokens by using curly brackets { } in your message text. If you do so, a token will automatically be added to the Token Message Expressions section of the dialog, where you can assign it any value using Expression Language. Click on the Help button in the dialog for more information on this. If your application might ever conceivably be needed in a different language, use the looking glass icon to define a resource string stored in a separate resource bundle. This allows your application to have multiple resource bundles, one for each user interface language.

There is also a Validation Execution tab that allows you to specify under which condition your rule should be applied. This can be useful if your logic is complex and resource intensive. If you do not enter anything here, your rule is always executed.

Regular expression validation

One of the especially powerful declarative validations is the Regular Expression validation. A regular expression is a very compact notation that can define the format of a string – very useful for checking e-mail addresses, phone numbers, and so on. To use this, set Rule Type to Regular Expression. JDeveloper offers you a few predefined regular expressions, for example a validation for e-mail addresses. Even though you can find lots of predefined regular expressions on the Internet, someone on your team should understand the basics of regular expression syntax so you can create the exact expression you need.

Groovy scripts

You can also set Rule Type to Script to get a free-format box where you can write a Groovy expression. Groovy is a scripting language for the Java platform that works well together with Java – see http://groovy.codehaus.org/ for more information on Groovy. Oracle has published a white paper on Groovy in ADF (http://www.oracle.com/technetwork/developer-tools/jdev/introduction-to-groovy-128837.pdf), and there is also information on Groovy in the JDeveloper help.

Method validation

If none of these methods for data validation fit your need, you can of course always revert to writing code. To do this, set Rule Type to Method and provide an error message. If you leave the Create a Select Method checkbox checked when you click on OK, JDeveloper will automatically create a method with the right signature and add it to the Java class for the entity object. The autogenerated validation method for Length (in the Film entity object) would look as follows:

```java
/**
 * Validation method for Length.
 */
public boolean validateLength(Integer length) {
    return true;
}
```

It is your task to fill in the logic and return either true (if validation is OK) or false (if the data value does not meet the requirements). If validation fails, ADF will automatically display the message you defined for this validation rule.
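
To make this concrete, a filled-in version could look like the following sketch; the bounds are invented for illustration:

```java
/**
 * Validation method for Length.
 */
public boolean validateLength(Integer length) {
    // Hypothetical rule: a film must be between 1 and 600 minutes long.
    // A null value is left for a separate mandatory check to catch.
    return length == null || (length >= 1 && length <= 600);
}
```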
Logic in view objects

View objects represent the dataset you need for a specific part of the application – typically a specific screen or part of a screen. You can create Java objects for either an entire view object (an XxxImpl.java class, where Xxx is the name of your view object) or for a specific row (an XxxRowImpl.java class). A view object class contains methods to work with the entire dataset that the view object represents – for example, methods to apply view criteria or re-execute the underlying database query. The view row class contains methods to work with an individual record of data – mainly methods to set and get attribute values for one specific record.

Overriding accessors

As with entity objects, you can override the accessors (setters and getters) for view objects. To do this, you use the Java subtab in the view object and click on the pencil icon next to Java Classes to generate Java, selecting the option to generate a view row class including accessors. This creates an XxxRowImpl class (for example, RentalVORowImpl) with setter and getter methods for all attributes. The code will look something like the following:

```java
public class RentalVORowImpl extends ViewRowImpl {
    /**
     * This is the default constructor (do not remove).
     */
    public RentalVORowImpl() {
    }

    /**
     * Gets the attribute value for title using the alias name Title.
     * @return the title
     */
    public String getTitle() {
        return (String) getAttributeInternal(TITLE);
    }

    /**
     * Sets <code>value</code> as attribute value for title using
     * the alias name Title.
     * @param value value to set the title
     */
    public void setTitle(String value) {
        setAttributeInternal(TITLE, value);
    }
}
```

You can change all of these to manipulate data before it is delivered to the entity object, or to return a processed version of an attribute value. To support such attributes, you can write code in the implementation class to determine which value to return. You can also use Groovy expressions to determine values for transient attributes. This is done on the Value subtab for the attribute by setting Value Type to Expression and filling in the Value field with a Groovy expression. See the Oracle white paper on Groovy in ADF (http://www.oracle.com/technetwork/developer-tools/jdev/introduction-to-groovy-128837.pdf) or the JDeveloper help.
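
For instance, a transient FullName attribute could be computed with a one-line Groovy expression such as the following (the attribute names are illustrative):

```groovy
FirstName + ' ' + LastName
```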
Change view criteria

Another example of coding in a view object is to dynamically change which view criteria are applied to the view object. It is possible to define many view criteria on a view object – when you add a view object instance to an application module, you decide which of the available view criteria to apply to that specific view object instance. However, you can also programmatically change which view criteria are applied. This can be useful if you want buttons to control which subset of data to display – in the example application, you could imagine a button to "show only overdue rentals" that would apply an extra view criterion to a rental view object.

Because view criteria apply to the whole dataset, view criteria methods go into the view object, not the view row object. You generate a Java class for the view object from the Java Options dialog in the same way as you generate Java for the view row object, this time selecting the option to generate the view object class.

A simple example of programmatically applying a view criterion would be a method that applies an already defined view criterion called OverdueCriterion to a view object. In the view object class, this would look like:

```java
public void showOnlyOverdue() {
    ViewCriteria vc = getViewCriteria("OverdueCriterion");
    applyViewCriteria(vc);
    executeQuery();
}
```

View criteria often have bind variables – for example, you could have a view criterion called OverdueByDaysCriterion that uses a bind variable OverdueDayLimit. When you generate Java for the view object, the default option of Include bind variable accessors will create a setOverdueDayLimit() method if you have an OverdueDayLimit bind variable. A method in the view object applying this criterion might look like:

```java
public void showOnlyOverdueByDays(int days) {
    ViewCriteria vc = getViewCriteria("OverdueByDaysCriterion");
    setOverdueDayLimit(days);
    applyViewCriteria(vc);
    executeQuery();
}
```

If you want to call these methods from the user interface, you must create a client interface for them (on the Java subtab in the view object). This makes your methods available in the Data Control palette, ready to be dragged onto a page and dropped as buttons.

When you change the view criteria and execute the query, only the content of the view object changes – the screen does not automatically repaint itself. In order to ensure that the screen refreshes, you need to set the PartialTriggers property of the data table to point to the ID of the button that changes the view criteria. For more on partial page rendering, see the Oracle Fusion Middleware Web User Interface Developer's Guide for Oracle Application Development Framework (http://docs.oracle.com/cd/E37975_01/web.111240/e16181/af_ppr.htm).

Logic in application modules

You've now seen how to add logic to both entity objects and view objects. However, you can also add custom logic to application modules. An application module is the place for logic that does not belong to a specific view object – for example, calls to stored procedures that involve data from multiple view objects. To generate a Java class for an application module, you navigate to the Java subtab in the application module and select the pencil icon next to the Java Classes heading. Typically, you create Java only for the application module class and not for the application module definition. You can add your own logic here that gets called from the user interface, or you can override the existing methods in the application module. A typical method to override is prepareSession(), which gets called before the application module establishes a connection to the database – if you need to call stored procedures or do other kinds of initialization before accessing the database, an application module method is a good place to do so.
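
For illustration, a minimal override along these lines could call a stored procedure before the first query runs. This is a sketch, and the procedure name is invented:

```java
// In the application module class. Imports needed:
// java.sql.CallableStatement, java.sql.SQLException,
// oracle.jbo.JboException, oracle.jbo.Session.
@Override
protected void prepareSession(Session session) {
    super.prepareSession(session);
    CallableStatement cstmt = null;
    try {
        // Hypothetical procedure that sets up database context.
        cstmt = getDBTransaction().createCallableStatement(
            "{call set_application_context}", 0);
        cstmt.execute();
    } catch (SQLException ex) {
        throw new JboException(ex);
    } finally {
        if (cstmt != null) {
            try {
                cstmt.close();
            } catch (SQLException ex) {
                // ignore failures while closing
            }
        }
    }
}
```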
Remember that you need to define your own methods as client methods on the Java tab of the application module for them to be callable from elsewhere in the application. Because the application module handles the transaction, it also contains methods such as beforeCommit(), beforeRollback(), afterCommit(), afterRollback(), and so on. The doDML() method on any entity object that is part of the transaction is executed before any of the application module's methods.

Adding logic to the user interface

Logic in the user interface is implemented in the form of managed beans. These are Java classes that are registered with the task flow and automatically instantiated by the ADF framework. ADF operates with various memory scopes – you have to decide on a scope when you define a managed bean.

Adding a bean method to a button

The simplest way to add logic to the user interface is to drop a button (af:commandButton) onto a page or page fragment and then double-click on it. This brings up the Bind Action Property dialog. If you leave Method Binding selected and click on New, the Create Managed Bean dialog appears. In this dialog, you can give your bean a name, provide a class name (typically the same as the bean name), and select a scope. The backingBean scope is a good scope for logic that is only used for one action when the user clicks on the button and that does not need to store any state for later. Leaving the Generate Class If It Does Not Exist checkbox checked asks JDeveloper to create the class for you.

When you click on OK, JDeveloper will automatically suggest a method for you in the Method dropdown (based on the ID of the button you double-clicked on). In the Method field, provide a more useful name and click on OK to add the new class and open it in the editor. You will see a method with your chosen name:

```java
public String rentDvd() {
    // Add event code here...
    return null;
}
```

Obviously, you place your code inside this method. If you accidentally left the default method name and ended up with something like cb5_action(), you can right-click on the method name and navigate to Refactor | Rename to give it a more descriptive name. Note that JDeveloper automatically sets the Action property for your button to match the scope, bean name, and method name. This might be something like #{backingBeanScope.RentalBean.rentDvd}.

Adding a bean to a task flow

Your beans should always be part of a task flow. If you're not adding logic to a button, or you just want more control over the process, you can also create a backing bean class first and then add it to the task flow. A bean class is a regular Java class created by navigating to File | New | Java Class. When you have created the class, you open the task flow where you want to use it and select the Overview tab. On the Managed Beans subtab, you can use the green plus to add your bean. Simply give it a name, point to the class you created, and select a memory scope.

Accessing UI components from beans

In a managed bean, you often want to refer to various user interface elements. This is done by mapping each element to a property in the bean. For example, if you have an af:inputText component that you want to refer to in a bean, you create a private variable of type RichInputText in the bean (with setter and getter methods) and set the component's Binding property (under the Advanced heading) to point to that bean variable using Expression Language.
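
A bean wired up this way might look like the following sketch; the bean, property, and component names are illustrative:

```java
import oracle.adf.view.rich.component.rich.input.RichInputText;

public class RentalBean {
    // Bound from the page via the component's Binding property,
    // for example: #{backingBeanScope.RentalBean.titleInput}
    private RichInputText titleInput;

    public void setTitleInput(RichInputText titleInput) {
        this.titleInput = titleInput;
    }

    public RichInputText getTitleInput() {
        return titleInput;
    }

    public String rentDvd() {
        // With the binding in place, the component and its current
        // value are reachable from the action method.
        Object enteredTitle = titleInput.getValue();
        // ... business logic would go here ...
        return null;
    }
}
```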
When creating a page or page fragment, you have the option (on the Managed Bean tab) to automatically have JDeveloper create corresponding attributes for you. The Managed Bean tab is shown in the following screenshot:

Leave it on the default setting of Do Not Automatically Expose UI Components in a Managed Bean. If you select one of the options to automatically expose UI elements, your bean will acquire a lot of attributes that you don't need, which will make your code unnecessarily complex and slow. However, while learning ADF, you might want to try this out to see how the bean attributes and the Binding property work together. If you do activate this setting, it applies to every page and fragment you create until you explicitly deselect this option.

Summary

In this article, you have seen some examples of how to add Java code to your application to implement the specific business logic your application needs. There are many, many more places and ways to add logic; as you work with ADF, you will continually come across new business requirements that force you to figure out how to add code to your application in new ways. Fortunately, there are other books, websites, online tutorials, and training that you can use to add to your ADF skill set; refer to http://www.adfessentials.com for a starting point.

Resources for Article:

Further resources on this subject:

Oracle Tools and Products [Article]
Managing Oracle Business Intelligence [Article]
Oracle Integration and Consolidation Products [Article]
Communicating with Servers

Packt
02 Sep 2013
24 min read
(For more resources related to this topic, see here.)

Creating an HTTP GET request to fetch JSON

One of the basic means of retrieving information from the server is using HTTP GET. In a RESTful manner, this type of method should only be used for reading data, so GET calls should never change server state. Now, this may not be true for every possible case; for example, if we have a view counter on a certain resource, is that a real change? Well, if we follow the definition literally then yes, this is a change, but it's far from significant enough to be taken into account.

Opening a web page in a browser does a GET request, but often we want a scripted way of retrieving data. This is usually done to achieve Asynchronous JavaScript and XML (AJAX), allowing reloading of data without doing a complete page reload. Despite the name, the use of XML is not required, and these days, JSON is the format of choice. A combination of JavaScript and the XMLHttpRequest object provides a method for exchanging data asynchronously, and in this recipe, we are going to see how to read JSON from the server using plain JavaScript and jQuery.

Why use plain JavaScript rather than using jQuery directly? We strongly believe that jQuery simplifies the DOM API, but it is not always available to us, and additionally, we need to know the underlying code behind asynchronous data transfer in order to fully grasp how applications work.

Getting ready

The server will be implemented using Node.js. In this example, for simplicity, we will use restify (http://mcavage.github.io/node-restify/), a Node.js module for the creation of correct REST web services.

How to do it...

Let's perform the following steps.

1. In order to include restify in our project, in the root directory of our server-side scripts, use the following command:

```
npm install restify
```

2. After adding the dependency, we can proceed to creating the server code. We create a server.js file that will be run by Node.js, and at the beginning of it we add restify:

```javascript
var restify = require('restify');
```

3. With this restify object, we can now create a server object and add handlers for GET methods:

```javascript
var server = restify.createServer();
server.get('hi', respond);
server.get('hi/:index', respond);
```

4. The get handlers do a callback to a function called respond, so we can now define this function that will return the JSON data. We create a sample JavaScript object called hello; in case the function was called with a parameter index as part of the request, it was called from the "hi/:index" handler:

```javascript
function respond(req, res, next) {
  console.log("Got HTTP " + req.method + " on " + req.url + " responding");
  addHeaders(req, res);
  var hello = [{
    'id': '0',
    'hello': 'world'
  }, {
    'id': '1',
    'say': 'what'
  }];
  if (req.params.index) {
    var found = hello[req.params.index];
    if (found) {
      res.send(found);
    } else {
      res.status(404);
      res.send();
    }
    return next();
  }
  res.send(hello);
  return next();
}
```

5. The following addHeaders function, which we call at the beginning, adds headers to enable access to the resources served from a different domain or a different server port:

```javascript
function addHeaders(req, res) {
  res.header("Access-Control-Allow-Origin", "*");
  res.header("Access-Control-Allow-Headers", "X-Requested-With");
}
```

The definition of headers and what they mean will be discussed later on in this article. For now, let's just say they enable access to the resources from a browser using AJAX.
6. At the end, we add a block of code that will set the server to listen on port 8080:

```javascript
server.listen(8080, function() {
  console.log('%s listening at %s', server.name, server.url);
});
```

7. To start the server from the command line, we type the following command:

```
node server.js
```

If everything went as it should, we will get a message in the log:

```
restify listening at http://0.0.0.0:8080
```

We can then test it by accessing the URL we defined, http://localhost:8080/hi, directly from the browser.

8. Now we can proceed with the client-side HTML and JavaScript. We will implement two ways of reading data from the server, one using the standard XMLHttpRequest and the other using jQuery.get(). Note that not all features are fully compatible with all browsers. We create a simple page with two div elements, one with the ID data and another with the ID say. These elements will be used as placeholders to load data from the server into them:

```html
Hello <div id="data">loading</div>
<hr/>
Say <div id="say">No</div>
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.8.2/jquery.min.js"></script>
<script src="example.js"></script>
<script src="exampleJQuery.js"></script>
```

9. In the example.js file, we define a function called getData that will create an AJAX call to a given url and do a callback if the request went successfully:

```javascript
function getData(url, onSuccess) {
  var request = new XMLHttpRequest();
  request.open("GET", url);
  request.onload = function() {
    if (request.status === 200) {
      console.log(request);
      onSuccess(request.response);
    }
  };
  request.send(null);
}
```

10. After that, we can call the function directly, but in order to demonstrate that the call happens after the page is loaded, we will call it after a timeout of three seconds:

```javascript
setTimeout(
  function() {
    getData(
      'http://localhost:8080/hi',
      function(response) {
        console.log('finished getting data');
        var div = document.getElementById('data');
        var data = JSON.parse(response);
        div.innerHTML = data[0].hello;
      });
  }, 3000);
```

11. The jQuery version is a lot cleaner, as the complexity that comes with the standard DOM API and the event handling is reduced substantially:

```javascript
(function(){
  $.getJSON('http://localhost:8080/hi/1', function(data) {
    $('#say').text(data.say);
  });
}())
```

How it works...

At the beginning, we installed the dependency using npm install restify; this is sufficient to have it working, but in order to define dependencies in a more expressive way, npm has a way of specifying them. We can add a file called package.json, a packaging format that is mainly used for publishing details of Node.js applications. In our case, we can define package.json with the following code:

```json
{
  "name": "ch8-tip1-http-get-example",
  "description": "example on http get",
  "dependencies": ["restify"],
  "author": "Mite Mitreski",
  "main": "html5dasc",
  "version": "0.0.1"
}
```

If we have a file like this, npm will automatically handle the installation of dependencies after calling npm install from the command line in the directory where the package.json file is placed.

Restify has a simple routing where functions are mapped to appropriate methods for a given URL. The HTTP GET request for '/hi' is mapped with server.get('hi', theCallback), where theCallback is executed, and a response should be returned. When we have a parameterized resource, for example in 'hi/:index', the value associated with :index will be available under req.params. For example, in a request to '/hi/john', to access the john value we simply use req.params.index.
Additionally, the value for index will automatically get URL-decoded before it is passed to our handler. One other notable part of the request handlers in restify is the next() function that we called at the end. In our case, it mostly does not make much difference, but in general, we are responsible for calling it if we want the next handler function in the chain to be called. For exceptional circumstances, there is also an option to call next() with an error object, triggering custom responses.

When it comes to the client-side code, XMLHttpRequest is the mechanism behind the async calls, and on calling request.open("GET", url, true) with the last parameter value as true, we get a truly asynchronous execution. Now you might be wondering why this parameter is here; isn't the call already done after loading the page? That is true, the call is done after loading the page, but if, for example, the parameter were set to false, the execution of the request would be a blocking method, or, to put it in layman's terms, the script would pause until we get a response. This might look like a small detail, but it can have a huge impact on performance.

The jQuery part is pretty straightforward; there is a function that accepts a URL value of the resource, a data handler function, and a success function that gets called after successfully getting a response:

```
jQuery.getJSON( url [, data ] [, success(data, textStatus, jqXHR) ] )
```

When we open index.htm, the server should log something like the following:

```
Got HTTP GET on /hi/1 responding
Got HTTP GET on /hi responding
```

Here, one is from the jQuery request and the other from the plain JavaScript one.

There's more...

XMLHttpRequest Level 2 is one of the newer improvements being added to browsers; although not part of HTML5, it is still a significant change. There are several features in the Level 2 changes, mostly to enable working with files and data streams, but there is one simplification we already used. Earlier, we would have to use onreadystatechange and go through all of the states, and if the readyState was 4, which is equal to DONE, we could read the data:

```javascript
var xhr = new XMLHttpRequest();
xhr.open('GET', 'someurl', true);
xhr.onreadystatechange = function(e) {
  if (this.readyState == 4 && this.status == 200) {
    // response is loaded
  }
}
```

In a Level 2 request, however, we can use request.onload = function() {} directly without checking states. The possible states are shown in the following table:

| Value | State            | Description                                                   |
|-------|------------------|---------------------------------------------------------------|
| 0     | UNSENT           | open() has not been called yet                                |
| 1     | OPENED           | open() has been called                                        |
| 2     | HEADERS_RECEIVED | send() has been called and the response headers are available |
| 3     | LOADING          | The response body is being received                           |
| 4     | DONE             | The operation is complete                                     |

One other thing to note is that XMLHttpRequest Level 2 is supported in all major browsers and IE 10; the older XMLHttpRequest has a different way of instantiation on older versions of IE (older than IE 7), where we can access it through an ActiveX object via new ActiveXObject("Msxml2.XMLHTTP.6.0");.
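To round off the Level 2 discussion, here is a minimal sketch of a request written in the event-handler style, including an error handler; the URL is just our example endpoint:

```javascript
// Level 2 style: no readyState bookkeeping needed
var xhr = new XMLHttpRequest();
xhr.open('GET', 'http://localhost:8080/hi');
xhr.onload = function () {
  if (xhr.status === 200) {
    console.log('Response:', xhr.response);
  }
};
xhr.onerror = function () {
  // Fired on network-level failures, for example a refused connection
  console.log('Request failed');
};
xhr.send(null);
```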
Creating a request with custom headers

The HTTP headers are a part of the request object being sent to the server. Many of them give information about the client's user agent setup and configuration, as that is sometimes the basis for decisions about the resources being fetched from the server. Several of them, such as Etag, Expires, and If-Modified-Since, are closely related to caching, while others, such as DNT, which stands for "Do Not Track" (http://www.w3.org/2011/tracking-protection/drafts/tracking-dnt.html), can be quite controversial. In this recipe, we will take a look at a way of using the custom X-Myapp header in our server and client-side code.

Getting ready

The server will be implemented using Node.js. In this example, again for simplicity, we will use restify (http://mcavage.github.io/node-restify/). Also, monitoring the console in your browser and server is crucial in order to understand what happens in the background.

How to do it...

1. We can start by defining the dependencies for the server side in the package.json file:

```json
{
  "name": "ch8-tip2-custom-headers",
  "dependencies": ["restify"],
  "main": "html5dasc",
  "version": "0.0.1"
}
```

2. After that, we can call npm install from the command line, which will automatically retrieve restify and place it in a node_modules folder created in the root directory of the project. After this part, we can proceed to creating the server-side code in a server.js file, where we set the server to listen on port 8080 and add a route handler for 'hi' and for every other path when the request method is HTTP OPTIONS:

```javascript
var restify = require('restify');
var server = restify.createServer();

server.get('hi', addHeaders, respond);
server.opts(/.*/, addHeaders, function (req, res, next) {
  console.log("Got HTTP " + req.method + " on " + req.url + " with headers\n");
  res.send(200);
  return next();
});

server.listen(8080, function() {
  console.log('%s listening at %s', server.name, server.url);
});
```

In most cases, the documentation should be enough when we write applications built on restify, but sometimes it is a good idea to take a look at the source code as well. It can be found at https://github.com/mcavage/node-restify/.

3. One thing to notice is that we can have multiple chained handlers; in this case, we have addHeaders before the others. In order for every handler to be propagated, next() should be called:

```javascript
function addHeaders(req, res, next) {
  res.setHeader("Access-Control-Allow-Origin", "*");
  res.setHeader('Access-Control-Allow-Headers', 'X-Requested-With, X-Myapp');
  res.setHeader('Access-Control-Allow-Methods', 'GET, OPTIONS');
  res.setHeader('Access-Control-Expose-Headers', 'X-Myapp, X-Requested-With');
  return next();
}
```

The addHeaders function adds access control options in order to enable cross-origin resource sharing. Cross-origin resource sharing (CORS) defines a way in which the browser and server can interact to determine whether the request should be allowed. It is more secure than allowing all cross-origin requests, but more powerful than simply denying all of them.

4. After this, we can create the handler function that will return a JSON response with the headers the server received and a "hello world" kind of object:

```javascript
function respond(req, res, next) {
  console.log("Got HTTP " + req.method + " on " + req.url + " with headers\n");
  console.log("Request: ", req.headers);
  var hello = [{
    'id': '0',
    'hello': 'world',
    'headers': req.headers
  }];
  res.send(hello);
  console.log('Response:\n ', res.headers());
  return next();
}
```

We additionally log the request and response headers to the server console log in order to see what happens in the background.
5. For the client-side code, we need a plain "vanilla" JavaScript approach and a jQuery method, so in order to do that, we include example.js and exampleJquery.js as well as a few div elements that we will use for displaying data retrieved from the server:

```html
Hi <div id="data">loading</div>
<hr/>
Headers list from the request: <div id="headers"></div>
<hr/>
Data from jQuery: <div id="dataRecieved">loading</div>
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.8.2/jquery.min.js"></script>
<script src="example.js"></script>
<script src="exampleJQuery.js"></script>
```

6. A simple way to add the headers is to call setRequestHeader on an XMLHttpRequest object after the call to open():

```javascript
function getData(url, onSuccess) {
  var request = new XMLHttpRequest();
  request.open("GET", url, true);
  request.setRequestHeader("X-Myapp", "super");
  request.setRequestHeader("X-Myapp", "awesome");
  request.onload = function() {
    if (request.status === 200) {
      onSuccess(request.response);
    }
  };
  request.send(null);
}
```

The XMLHttpRequest automatically sets headers such as "Content-Length", "Referer", and "User-Agent", and does not allow you to change them using JavaScript. A more complete list of headers and the reasoning behind this can be found in the W3C documentation at http://www.w3.org/TR/XMLHttpRequest/#the-setrequestheader%28%29-method.

7. To print out the results, we add a function that will add each of the header keys and values to an unordered list:

```javascript
getData(
  'http://localhost:8080/hi',
  function(response) {
    console.log('finished getting data');
    var data = JSON.parse(response);
    document.getElementById('data').innerHTML = data[0].hello;
    var headers = data[0].headers,
        headersList = "<ul>";
    for (var key in headers) {
      headersList += '<li><b>' + key + '</b>: ' + headers[key] + '</li>';
    }
    headersList += "</ul>";
    document.getElementById('headers').innerHTML = headersList;
  });
```

When this gets executed, a list of all the request headers should be displayed on the page, and our custom x-myapp should be shown:

```
host: localhost:8080
connection: keep-alive
origin: http://localhost:8000
x-myapp: super, awesome
user-agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.27 (KHTML, like Gecko) Chrome/26.0.1386.0 Safari/537.27
```

8. The jQuery approach is far simpler; we can use the beforeSend hook to call a function that will set the 'x-myapp' header. When we receive the response, we write it to the element with the ID dataRecieved (the url and dataType options, which were cut off in the original listing, are filled in here to match the rest of the recipe):

```javascript
$.ajax({
  url: 'http://localhost:8080/hi',
  dataType: 'json',
  beforeSend: function (xhr) {
    xhr.setRequestHeader('x-myapp', 'this was easy');
  },
  success: function (data) {
    $('#dataRecieved').text(data[0].headers['x-myapp']);
  }
});
```

The output from the jQuery example will be the data contained in the x-myapp header:

```
Data from jQuery: this was easy
```

How it works...

You may have noticed that on the server side we added a route with a handler for the HTTP OPTIONS method, but we never explicitly made a call there. If we take a look at the server log, there should be something like the following output:

```
Got HTTP OPTIONS on /hi with headers
Got HTTP GET on /hi with headers
```

This happens because the browser first issues a preflight request, which in a way is the browser's question whether or not there is permission to make the "real" request. Once the permission has been received, the original GET request happens. If the OPTIONS response is cached, the browser will not issue any extra preflight calls for subsequent requests.
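For illustration, the preflight exchange for our request might look roughly like the following; this is a hedged sketch, as the exact headers vary by browser, and the response headers mirror what our addHeaders function sets:

```
OPTIONS /hi HTTP/1.1
Origin: http://localhost:8000
Access-Control-Request-Method: GET
Access-Control-Request-Headers: x-myapp

HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Access-Control-Allow-Headers: X-Requested-With, X-Myapp
Access-Control-Allow-Methods: GET, OPTIONS
```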
The setRequestHeader function of XMLHttpRequest actually appends each value to a comma-separated list of values. As we called the function two times, the value of the header is as follows:

```
'x-myapp': 'super, awesome'
```

There's more...

For most use cases, we do not need custom headers to be part of our logic, but there are plenty of APIs that make good use of them. For example, many server-side technologies add the X-Powered-By header, which contains some meta information, such as JBoss 6 or PHP/5.3.0. Another example is Google Cloud Storage, where among other headers there are x-goog-meta-prefixed headers such as x-goog-meta-project-name and x-goog-meta-project-manager.

Versioning your API

We do not always have the best solution while doing the first implementation. The API can be extended up to a certain point, but afterwards needs to undergo some structural changes. But we might already have users that depend on the current version, so we need a way to have different representation versions of the same resource. Once a module has users, the API cannot be changed at our own will.

One way to resolve this issue is to use so-called URL versioning, where we simply add a prefix. For example, if the old URL was http://example.com/rest/employees, the new one could be http://example.com/rest/v1/employees, or under a subdomain it could be http://v1.example.com/rest/employees. This approach only works if you have direct control over all the servers and clients. Otherwise, you need a way of handling fallback to older versions.

In this recipe, we are going to implement so-called "semantic versioning" (http://semver.org/), using HTTP headers to specify the accepted versions.

Getting ready

The server will be implemented using Node.js. In this example, we will use restify (http://mcavage.github.io/node-restify/) for the server-side logic, and we will monitor the requests to understand what is sent.

How to do it...

Let's perform the following steps.

1. We need to define the dependencies first, and after installing restify, we can proceed to the creation of the server code. The main difference from the previous examples is the definition of the "Accept-Version" header. restify has built-in handling for this header using versioned routes. After creating the server object, we can set which methods will get called for which version:

```javascript
server.get({ path: "hi", version: '2.1.1'}, addHeaders, helloV2, logReqRes);
server.get({ path: "hi", version: '1.1.1'}, addHeaders, helloV1, logReqRes);
```

2. We also need the handler for HTTP OPTIONS, as we are using cross-origin resource sharing and the browser needs to do the additional request in order to get permissions:

```javascript
server.opts(/.*/, addHeaders, logReqRes, function (req, res, next) {
  res.send(200);
  return next();
});
```

3. The handlers for Version 1 and Version 2 will return different objects in order for us to easily notice the difference between the API calls. In the general case, the resource should be the same, but it can have structural changes.
For Version 1, we can have the following:

```javascript
function helloV1(req, res, next) {
  var hello = [{
    'id': '0',
    'hello': 'grumpy old data',
    'headers': req.headers
  }];
  res.send(hello);
  return next();
}
```

As for Version 2, we have the following:

```javascript
function helloV2(req, res, next) {
  var hello = [{
    'id': '0',
    'awesome-new-feature': {
      'hello': 'awesomeness'
    },
    'headers': req.headers
  }];
  res.send(hello);
  return next();
}
```

4. One other thing we must do is add the CORS headers in order to enable the accept-version header, so the addHeaders function that we included in the routes should look something like the following:

```javascript
function addHeaders(req, res, next) {
  res.setHeader("Access-Control-Allow-Origin", "*");
  res.setHeader('Access-Control-Allow-Headers', 'X-Requested-With, accept-version');
  res.setHeader('Access-Control-Allow-Methods', 'GET, OPTIONS');
  res.setHeader('Access-Control-Expose-Headers', 'X-Requested-With, accept-version');
  return next();
}
```

Note that you should not forget the call to next() in order to call the next function in the route chain.

5. For simplicity, we will only implement the client side in jQuery, so we create a simple HTML document where we include the necessary JavaScript dependencies:

```html
Old api: <div id="data">loading</div>
<hr/>
New one: <div id="dataNew"> </div>
<hr/>
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.8.2/jquery.min.js"></script>
<script src="exampleJQuery.js"></script>
```

6. In the exampleJQuery.js file, we make two AJAX calls to our REST API, one set to use Version 1 and the other to use Version 2:

```javascript
$.ajax({
  url: 'http://localhost:8080/hi',
  type: 'GET',
  dataType: 'json',
  success: function (data) {
    $('#data').text(data[0].hello);
  },
  beforeSend: function (xhr) {
    xhr.setRequestHeader('accept-version', '~1');
  }
});

$.ajax({
  url: 'http://localhost:8080/hi',
  type: 'GET',
  dataType: 'json',
  success: function (data) {
    $('#dataNew').text(data[0]['awesome-new-feature'].hello);
  },
  beforeSend: function (xhr) {
    xhr.setRequestHeader('accept-version', '~2');
  }
});
```

Notice that the accept-version header contains the values ~1 and ~2. These designate that all the semantic versions such as 1.1.0, 1.1.1, and 1.2.1 will get matched by ~1, and similarly for ~2.

At the end, we should get an output like the following text:

```
Old api:grumpy old data
New one:awesomeness
```

How it works...

Versioned routes are a built-in feature of restify that work through the use of accept-version. In our example, we used Versions ~1 and ~2, but what happens if we don't specify a version? restify will make the choice for us, as the request will be treated in the same manner as if the client had sent a * version. The first defined matching route in our code will be used.

There is also an option to set up the routes to match multiple versions by adding a list of versions for a certain handler:

```javascript
server.get({path: 'hi', version: ['1.1.0', '1.1.1', '1.2.1']}, sendOld);
```

The reason why this type of versioning is very suitable for constantly growing applications is that, as the API changes, clients can stick with their version of the API without any additional effort or changes needed in client-side development, meaning that we don't have to do updates on the application. On the other hand, if a client is sure that their application will work on newer API versions, they can simply change the request headers.

There's more...

Versioning can also be implemented by using custom content types prefixed with vnd, for example, application/vnd.mycompany.user-v1.
An example of this is Google Earth's content type KML, which is defined as application/vnd.google-earth.kml+xml. Notice that the content type can be in two parts; we could have application/vnd.mycompany-v1+json, where the second part is the format of the response.

Fetching JSON data with JSONP

JSONP, or JSON with padding, is a mechanism for making cross-domain requests by taking advantage of the <script> tag. AJAX transport is done by simply setting the src attribute on a script element, or adding the element itself if not present. The browser will do an HTTP request to download the URL specified, and that is not subject to the same origin policy, meaning that we can use it to get data from servers that are not under our control. In this recipe, we will create a simple JSONP request, and a simple server to back that up.

Getting ready

We will make a simplified implementation of the server we used in previous examples, so we need Node.js and restify (http://mcavage.github.io/node-restify/) installed, either via a package.json definition or a simple npm install, as in the previous recipes.

How to do it...

1. First, we create a simple route handler that will return a JSON object:

```javascript
function respond(req, res, next) {
  console.log("Got HTTP " + req.method + " on " + req.url + " responding");
  var hello = [{
    'id': '0',
    'what': 'hi there stranger'
  }];
  res.send(hello);
  return next();
}
```

2. We could roll our own version that wraps the response into a JavaScript function with a given name, but in order to enable JSONP when using restify, we can simply enable the bundled plugin. This is done by specifying which plugin is to be used:

```javascript
var server = restify.createServer();
server.use(restify.jsonp());
server.get('hi', respond);
```

3. After this, we just set the server to listen on port 8080:

```javascript
server.listen(8080, function() {
  console.log('%s listening at %s', server.name, server.url);
});
```

The built-in plugin checks the request string for parameters called callback or jsonp, and if one of those is found, the result will be JSONP with the function name taken from the value passed in that parameter. For example, in our case, if we open the browser at http://localhost:8080/hi, we get the following:

```
[{"id":"0","what":"hi there stranger"}]
```

If we access the same URL with the callback or jsonp parameter set, such as http://localhost:8080/hi?callback=great, we should receive the same data wrapped with that function name:

```
great([{"id":"0","what":"hi there stranger"}]);
```

This is where the P in JSONP, which stands for padded, comes into the picture.
4. What we need to do next is create an HTML file where we will show the data from the server, and include two scripts, one for the pure JavaScript approach and another for the jQuery way:

```html
<b>Hello far away server: </b>
<div id="data">loading</div>
<hr/>
<div id="oneMoreTime">...</div>
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.8.2/jquery.min.js"></script>
<script src="example.js"></script>
<script src="exampleJQuery.js"></script>
```

5. We can proceed with the creation of example.js, where we create two functions; one will create a script element and set the value of src to http://localhost:8080/hi?callback=cool.run, and the other will serve as a callback upon receiving the data:

```javascript
var cool = (function(){
  var module = {};

  module.run = function(data) {
    document.getElementById('data').innerHTML = data[0].what;
  }

  module.addElement = function () {
    var script = document.createElement('script');
    script.src = 'http://localhost:8080/hi?callback=cool.run'
    document.getElementById('data').appendChild(script);
    return true;
  }

  return module;
}());
```

Afterwards, we only need to call the function that adds the element:

```javascript
cool.addElement();
```

This should read the data from the server and show a result similar to the following:

```
Hello far away server: hi there stranger
```

We can call the addElement function directly on the cool object because we defined the surrounding function as self-executing.

6. The jQuery example is a lot simpler; we can set the dataType to jsonp and everything else is the same as any other AJAX call, at least from the API point of view:

```javascript
$.ajax({
  type: "GET",
  dataType: "jsonp",
  url: 'http://localhost:8080/hi',
  success: function(obj) {
    $('#oneMoreTime').text(obj[0].what);
  }
});
```

We can now use the standard success callback to handle the data received from the server, and we don't have to specify the parameter in the request. jQuery will automatically append a callback parameter to the URL and delegate the call to the success callback.

How it works...

The first large leap we are taking here is trusting the source of the data: the result from the server is evaluated after it is downloaded. There have been some efforts to define a safer JSONP at http://json-p.org/, but they are far from widespread. The download itself is an HTTP GET method, adding another major limitation to usability. Hypermedia as the Engine of Application State (HATEOAS), among other things, defines the use of HTTP methods for the create, update, and delete operations, making JSONP very unsuitable for those use cases.

Another interesting point is how jQuery delegates the call to the success callback. In order to achieve this, a unique function name is created and sent in the callback parameter, for example:

```
/hi?callback=jQuery182031846177391707897_1359599143721&_=1359599143727
```

This function later does a callback to the appropriate handler of jQuery.ajax.

There's more...

With jQuery, we can also use a custom function if the server parameter that should handle jsonp is not called callback. This is done using the following configuration:

```javascript
jsonp: false,
jsonpCallback: "myCallback"
```

With JSONP, we are not doing an XMLHttpRequest, so we should not expect all of the functions and parameters of a regular AJAX call to be executed or filled in as with such a call. It is a very common mistake to expect just that. More on this can be found in the jQuery documentation at http://api.jquery.com/category/ajax/.
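As a sketch of that configuration in context, the following assumes a hypothetical server that reads the callback name from a cb parameter; our restify plugin only looks for callback or jsonp, so this is purely illustrative:

```javascript
// The server is assumed to expect the function name in a "cb" parameter.
$.ajax({
  url: 'http://localhost:8080/hi?cb=myHandler',
  dataType: 'jsonp',
  jsonp: false,              // don't let jQuery append its own callback parameter
  jsonpCallback: 'myHandler', // the fixed function name jQuery will define
  success: function (data) {
    console.log(data[0].what);
  }
});
```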
Using third-party plugins (non-native plugins)

Packt
30 Aug 2013
4 min read
(For more resources related to this topic, see here.)

We want to focus on a particular case here, because we have already seen how to add a new property, and for some components we can easily add the plugins or features property and then add the plugin configuration. But the components that have native plugins supported by the API do not allow us to do so, like, for instance, the grid panel from Ext JS: we can only use the plugins and features that are available within Sencha Architect.

What if we want to use a third-party plugin or feature, such as the Filter Plugin? It is possible, but we need to use an advanced feature of Sencha Architect, which is creating overrides.

A disclaimer about overrides: they are to be avoided. Whenever you can use a set method to change a property, use it. Overrides should be your last resort, and they should be used very carefully, because if you do not use them carefully, you can change the behavior of a component and something may stop working. But we will demonstrate how to do it in a safe way!

We will use the BooksGrid as an example in this topic. Let's say we need to use the Filter Plugin on it, so we need to create an override first. To do it, select the BooksGrid from the project inspector, open the code editor, and click on the Create Override button (Step 1). Sencha Architect will display a warning (Step 2). We can click on Yes to continue:

The code editor will open (Step 3) the override class so we can enter our code. In this case, we have complete freedom to do whatever we need to in this file. So let's add the features() function with the declaration of the plugin and also the initComponent() function, as shown in the following screenshot (Step 4):

One thing that is very important is that we must call the callParent() function (callOverridden() is already deprecated in Ext JS 4.1 and later versions) to make sure we will continue to have all the original behavior of the component (in this case, the BooksGrid class). The only thing we want to do is add a new feature to it. And we are done with the override! To go back to the original class, we can use the navigator, as shown in the following screenshot:

Notice that requires was added to the class Packt.view.override.BooksGrid, which is the class we just wrote. The next step is to add the plugin to the class requires. To do so, we need to select the BooksGrid, go to the config panel, and add the requires with the name of the plugin (Ext.ux.grid.FiltersFeature):

Some developers like to add the plugin file directly as a JavaScript file in app.html/index.html. Sencha provides the dynamic loading feature, so let's take advantage of it and use it! First, we cannot forget to add the ux folder with the plugin to the project root folder, as shown in the following screenshot:

Next, we need to set the application loader. Select the Application from the project inspector (Step 5), then go to the config panel, locate the Loader Config property, click on the + icon (Step 6), then click on the arrow icon (Step 7). The details of the loader will be available on the config panel. Locate the paths property and click on it (Step 8). The code editor will be opened with the loader path's default value, which is {"Ext": "."} (Step 9). Do not remove it; simply add the path of the Ext.ux namespace, which is the ux folder (Step 10):

And we are almost done!
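For reference, in hand-written code the loader configuration from Step 10 would correspond to something like the following sketch; the exact code Sencha Architect generates may differ slightly:

```javascript
// Tell Ext.Loader where to find classes in the Ext.ux namespace
// (the ux folder in the project root), while keeping the default.
Ext.Loader.setConfig({
    enabled: true,
    paths: {
        'Ext': '.',
        'Ext.ux': 'ux'
    }
});
```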
We need to add the filterable option to each column whose values we want to allow the user to filter (Step 11). We can use the config panel to add a new property (we need to select the desired column from the project inspector first; always remember to do this). Then, we can choose what type of property we want to add (Step 12 and Step 14). For example, we can add filterable: true (Step 13) for the id column and filterable: {type: 'string'} (Step 15 and Step 16) for the Name column, as shown in the following screenshot:

And the plugin is ready to be used!

Summary

In this article, we learned some useful tricks that can help in our everyday tasks while working with Sencha projects using Sencha Architect. We also covered advanced topics such as creating overrides to use third-party plugins and features, and implementing multilingual apps.

Resources for Article:

Further resources on this subject:

Sencha Touch: Layouts Revisited [Article]
Sencha Touch: Catering Form Related Needs [Article]
Creating a Simple Application in Sencha Touch [Article]
Customization

Packt
29 Aug 2013
18 min read
(For more resources related to this topic, see here.)

Now that you've got a working multisite installation, we can start to add some customizations. Customizations can come in a few different forms. You're probably aware of the customizations that can be made via WordPress plugins and custom WordPress themes. Another way we can customize a multisite installation is by creating a landing page that displays information about each blog in the multisite network, as well as displaying information about the author of each individual blog.

I wrote a blog post shortly after WordPress 3.0 came out detailing how to set this landing page up. At the time, I was working for a local newspaper and we were setting up a blog network for some of our reporters to blog about politics (being in Iowa, politics are a pretty big deal here, especially around Caucus time). You can find the post at http://www.longren.org/how-to-wordpress-3-0-multi-site-blog-directory/ if you'd like to read it. There's also a blog-directory.zip file attached to the post that you can download and use as a starting point.

Before we get into creating the landing page, let's get the really simple stuff out of the way and briefly go over how themes and plugins are managed in WordPress multisite installations. We'll start with themes. Themes can be activated network-wide, which is really nice if you have a theme that you want every site in your blog network to use. You can also activate a theme for an individual blog instead of activating it for the entire network. This is helpful if one or two individual blogs need a totally unique theme that you don't want to be available to the other blogs.

Theme management

You can install themes on a multisite installation the same way you would on a regular WordPress install. Just upload the theme folder to your wp-content/themes folder to install the theme. Installing a theme is only part of the process for individual blogs to use the themes; you'll need to activate them for the entire blog network or for specific blogs.

To activate a theme for an entire network, click on Themes and then click on Installed Themes in the Network Admin dashboard. Check the themes that you want to enable, select Network Enable in the Bulk Actions drop-down menu, and then click on the Apply button. That's all there is to activating a theme (or multiple themes) for an entire multisite network. The individual blog owners can apply the theme just as they would in a regular, non-multisite WordPress installation.

To activate a theme for just one specific blog and not the entire network, locate the target blog using the Sites menu option in the Network Admin dashboard. After you've found it, put your mouse cursor over the blog URL or domain. You should see the action menu appear immediately under the blog URL or domain. The action menu includes options such as Edit, Dashboard, and Deactivate. Click on the Edit action menu item and then navigate to the Themes tab. To activate an individual theme, just click on Enable below the theme that you want to activate. Or, if you want to activate multiple themes for the blog, check all the themes you want using the checkboxes on the left-hand side of the list, select Enable in the Bulk Actions drop-down menu, and then click on the Apply button. An important thing to keep in mind is that themes that have been activated for the entire network won't be shown here. Now the blog administrator can apply the theme to their blog just as they normally would.
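If you manage many sites, the same operations can be scripted; for example, with WP-CLI, assuming it is installed on the server, commands along these lines should do the job (the theme name and site URL are just examples):

```
# Enable a theme for the whole network
wp theme enable twentythirteen --network

# Enable a theme for a single site only
wp theme enable twentythirteen --url=http://multisite.longren.org/tyler/
```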
Plugin management

To install a plugin for network use, upload the plugin folder to wp-content/plugins/ as you normally would. Unlike themes, plugins cannot be activated on a per-site basis by the network administrator. As network administrator, you can add a plugin to the Plugins page for all sites, but you can't make a plugin available to one specific site. It's all or nothing.

You'll also want to make sure that you've enabled the Plugins page for the sites that need it. You can enable the Plugins page by visiting the Network Admin dashboard and then navigating to the Network Settings page. At the bottom of that page you should see a Menu Settings section where you can check a box next to Plugins to enable the Plugins page. Make sure to click on the Save Changes button at the bottom or nothing will change. You can see the Menu Settings section in the following screenshot. That's where you'll want to enable the Plugins page.

Enabling the Plugins page

After you've ensured that the Plugins page is enabled, individual site administrators will be able to enable or disable plugins as they normally would. To enable a plugin for the entire network, go to the Network Admin dashboard, mouse over the Plugins menu item, and then click on Installed Plugins. This will look pretty familiar to you; it looks pretty much like the Installed Plugins page does on a typical WordPress single-site installation. The following screenshot shows the installed Plugins page:

Enable plugins for the entire network

You'll notice below each plugin there's some text that reads Network Activate. I bet you can guess what clicking that will do. Yes, clicking on the Network Activate link will activate that plugin for the entire network. That's all there is to the basic plugin setup in WordPress multisite.

There's another plugin feature that is often overlooked in WordPress multisite, and that's must-use plugins. These are plugins that are required for every blog or site on the network. Must-use plugins can be installed in the wp-content/mu-plugins/ folder, but they must be single-file plugins; files within folders won't be read. You can't deactivate or activate must-use plugins: if they exist in the mu-plugins folder, they're used. They're entirely hidden from the Plugins pages, so individual site administrators won't even see them or know they're there. Must-use plugins aren't commonly used, but it's good information to have just in case. Some plugins, especially domain mapping plugins, need to be installed in mu-plugins and need to be activated before the normal plugins.
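As an illustration, a must-use plugin is just a single PHP file dropped into wp-content/mu-plugins/; a minimal hypothetical example might look like this:

```php
<?php
/*
Plugin Name: Network Footer Notice
Description: Example must-use plugin; loaded automatically on every site.
*/

// Append a short notice to the footer of every site in the network.
add_action( 'wp_footer', function () {
    echo '<p>This network is powered by WordPress multisite.</p>';
} );
```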
You can find this plugin in the WordPress Plugin Directory at http://wordpress.org/plugins/wordpress-mu-domain-mapping/. There's one other plugin I want to mention; the only drawback is that it's not a free plugin. It's called WP Multisite Replicator, and you can probably guess what it does. This plugin will allow you to set up a "template" blog or site and then replicate that site when adding new sites or blogs. The idea is that you'd create a blog or site that has all the features that other sites in your network will need. Then, you can easily replicate that site when creating a new site or blog. It will copy widgets, themes, and plugin settings to the new site or blog, which makes deploying new, identical sites extremely easy. It's not an expensive plugin, costing about $36 at the moment of writing, which is well worth it in my opinion if you're going to be creating lots of sites that have the same basic feature set. WP Multisite Replicator can be found at http://wpebooks.com/replicator/. Creating a blog directory / landing page Now that we've got the basic theme and plugin stuff taken care of, I think it's time to move onto creating a blog directory or a landing page, whichever you prefer to call it. From this point on I'll be referring to it as a blog directory. You can see a basic version of what we're going to make in the following screenshot. The users on my example multisite installation, at http://multisite.longren.org/, are Kayla and Sydney, my wife and daughter. Blog directory example As I mentioned earlier in this article, I wrote a post about creating this blog directory back when WordPress 3.0 was first released in 2010. I'll be using that post as the basis for most of what we'll do to create the blog directory with some things changed around, so this will integrate more nicely into whatever theme you're using on the main network site. The first thing we need to do is to create a basic WordPress page template that we can apply to a newly created WordPress page. This template will contain the HTML structure for the blog directory and will dictate where the blog names will be shown and where the recent posts and blog description will be displayed. There's no reason that you need to stick with the following blog directory template specifically. You can take the code and add or remove various elements, such as the recent post if you don't want to show them. You'll want to implement this blog directory template as a child theme in WordPress. To do that, just make a new folder in wp-content/themes/. I typically name my child theme folders after their parent themes. So, the child theme folder I made was wp-content/themes/twentythirteen-tyler/. 
Once you've got the child theme folder created, make a new file called style.css and make sure it has the following code at the top: /*Theme Name: Twenty Thirteen Child ThemeTheme URI: http://yourdomain.comDescription: Child theme for the Twenty Thirteen themeAuthor: Your name hereAuthor URI: http://example.com/about/Template: twentythirteenVersion: 0.1.0*//* ================ *//* = The 1Kb Grid = */ /* 12 columns, 60 pixels each, with 20pixel gutter *//* ================ */.grid_1 { width:60px; }.grid_2 { width:140px; }.grid_3 { width:220px; }.grid_4 { width:300px; }.grid_5 { width:380px; }.grid_6 { width:460px; }.grid_7 { width:540px; }.grid_8 { width:620px; }.grid_9 { width:700px; }.grid_10 { width:780px; }.grid_11 { width:860px; }.grid_12 { width:940px; }.column {margin: 0 10px;overflow: hidden;float: left;display: inline;}.row {width: 960px;margin: 0 auto;overflow: hidden;}.row .row {margin: 0 -10px;width: auto;display: inline-block;}.author_bio {border: 1px solid #e7e7e7;margin-top: 10px;padding-top: 10px;background:#ffffff url('images/sign.png') no-repeat right bottom;z-index: -99999;}small { font-size: 12px; }.post_count {text-align: center;font-size: 10px;font-weight: bold;line-height: 15px;text-transform: uppercase;float: right;margin-top: -65px;margin-right: 20px;}.post_count a {color: #000;}#content a {text-decoration: none;-webkit-transition: text-shadow .1s linear;outline: none;}#content a:hover {color: #2DADDA;text-shadow: 0 0 6px #278EB3;} The preceding code adds the styling to your child theme, and also tells WordPress the name of your child theme. You can set a custom theme name if you want by changing the Theme Name line to whatever you like. The only fields in that big comment block that are required are the Theme Name and Template. Template, which should be set to whatever the parent theme's folder name is. Now create another file in your child theme folder and name it blog-directory.php. 
The remaining blocks of code need to go into that blog-directory.php file:

```php
<?php
/**
 * Template Name: Blog Directory
 *
 * A custom page template with a sidebar.
 * Selectable from a dropdown menu on the add/edit page screen.
 *
 * @package WordPress
 * @subpackage Twenty Thirteen
 */
?>
<?php get_header(); ?>
<div id="container" class="onecolumn">
<div id="content" role="main">
<?php the_post(); ?>
<div id="post-<?php the_ID(); ?>" <?php post_class(); ?>>
<?php if ( is_front_page() ) { ?>
  <h2 class="entry-title"><?php the_title(); ?></h2>
<?php } else { ?>
  <h1 class="entry-title"><?php the_title(); ?></h1>
<?php } ?>
<div class="entry-content">
<!-- start blog directory -->
<?php
// Get the authors from the database ordered randomly
global $wpdb;
$query = "SELECT ID, user_nicename from $wpdb->users WHERE ID != '1' ORDER BY 1 LIMIT 50";
$author_ids = $wpdb->get_results($query);

// Loop through each author
foreach($author_ids as $author) {
    // Get user data
    $curauth = get_userdata($author->ID);
    // Get link to author page
    $user_link = get_author_posts_url($curauth->ID);
    // Get blog details for the author's primary blog ID
    $blog_details = get_blog_details($curauth->primary_blog);

    $postText = "posts";
    if ($blog_details->post_count == "1") {
        $postText = "post";
    }
    $updatedOn = strftime("%m/%d/%Y at %l:%M %p", strtotime($blog_details->last_updated));
    if ($blog_details->post_count == "") {
        $blog_details->post_count = "0";
    }

    $posts = $wpdb->get_col( "SELECT ID FROM wp_" . $curauth->primary_blog . "_posts
        WHERE post_status='publish' AND post_type='post' AND post_author='$author->ID'
        ORDER BY ID DESC LIMIT 5");
    $postHTML = "";
    $i = 0;
    foreach($posts as $p) {
        $postdetail = get_blog_post($curauth->primary_blog, $p);
        if ($i == 0) {
            $updatedOn = strftime("%m/%d/%Y at %l:%M %p", strtotime($postdetail->post_date));
        }
        $postHTML .= "&#149; <a href=\"$postdetail->guid\">$postdetail->post_title</a><br />";
        $i++;
    }
?>
```

The preceding code sets up the template and queries the WordPress database for authors. In WordPress multisite, users who have the Author permission type have a blog on the network.
There's also code for grabbing posts from each of the network sites, for displaying the recent posts from them:

```php
<div class="author_bio">
<div class="row">
<div class="column grid_2">
  <a href="<?php echo $blog_details->siteurl; ?>"><?php echo get_avatar($curauth->user_email, '96',
    'http://www.gravatar.com/avatar/ad516503a11cd5ca435acc9bb6523536'); ?></a>
</div>
<div class="column grid_6">
  <a href="<?php echo $blog_details->siteurl; ?>" title="<?php echo $curauth->display_name; ?> - <?=$blog_details->blogname?>">
    <?php //echo $curauth->display_name; ?> <?=$curauth->display_name;?></a><br />
  <small><strong>Updated <?=$updatedOn?></strong></small><br />
  <?php echo $curauth->description; ?>
</div>
<div class="column grid_3">
  <h3>Recent Posts</h3>
  <?=$postHTML?>
</div>
</div>
<span class="post_count">
  <a href="<?php echo $blog_details->siteurl; ?>" title="<?php echo $curauth->display_name; ?>">
    <?=$blog_details->post_count?><br /><?=$postText?></a>
</span>
</div>
<?php } ?>
<!-- end blog directory -->
<?php wp_link_pages( array( 'before' => '<div class="page-link">' . __( 'Pages:', 'twentythirteen' ), 'after' => '</div>' ) ); ?>
<?php edit_post_link( __( 'Edit', 'twentythirteen' ), '<span class="edit-link">', '</span>' ); ?>
</div><!-- .entry-content -->
</div><!-- #post-<?php the_ID(); ?> -->
<?php comments_template( '', true ); ?>
</div><!-- #content -->
</div><!-- #container -->
<?php //get_sidebar(); ?>
<?php get_footer(); ?>
```

Once you've got your blog-directory.php template file created, we can get started setting up the page that will serve as our blog directory. You'll need to set the root site's theme to your child theme; do it just as you would on a non-multisite WordPress installation.

Before we go further, let's create a couple of network sites so we have something to see on our blog directory. Go to the Network Admin dashboard, mouse over the Sites menu option in the left-hand side menu, and then click on Add New. If you're using a directory network type, as I am, the value you enter in the Site Address field will be the path to the directory that the site sits in. So, if you enter tyler as the Site Address value, the site can be reached at http://multisite.longren.org/tyler/. The settings that I used to set up multisite.longren.org/tyler/ can be seen in the following screenshot. You'll probably want to add a couple of sites just so you get a good idea of what your blog directory page will look like.

Example individual site setup

Now we can set up the actual blog directory page. On the main dashboard (that is, /wp-admin/index.php), mouse over the Pages menu item on the left-hand side of the page and then click on Add New to create a new page. I usually name this page Home, as I use the blog directory as the first page that visitors see when visiting the site. From there, visitors can choose which blog they want to visit and are also shown a list of the most recent posts from each blog. There's no need to enter any content on the page, unless you want to. The important part is selecting the Blog Directory template. Before you publish your new Home / blog directory page, make sure that you select Blog Directory as the Template value in the Page Attributes section. An example Home / blog directory page can be seen in the following screenshot:

Example Home / blog directory page setup

Once you've got your page looking like the example shown in the previous screenshot, you can go ahead and publish that page. The Update button in the previous screenshot will say Publish if you've not yet published the page.
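As a side note, the next step (assigning the front page, covered below via the Settings screen) can also be scripted with two core option calls; this is just a hedged sketch, and the page ID 2 is a placeholder for your actual Home page ID:

```php
<?php
// Show a static page on the front instead of the latest posts,
// and point it at the blog directory page.
update_option( 'show_on_front', 'page' );
update_option( 'page_on_front', 2 );
```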
Next, you'll want to set the newly created Home / blog directory page as the front page for the site. To do this, mouse over the Settings menu option on the left-hand side of the page and then click on Reading. For the Front page displays value, check A static page (select below); previously, Your latest posts was checked. Then, in the Front Page drop-down menu, select the Home page that we just created and click on the Save Changes button at the bottom of the page. I usually don't set anything for the Posts page drop-down menu because I never post to the "parent" site. If you do intend to make posts on the parent site, I'd suggest that you create a new blank page titled Posts and then select that page as your Posts page. The reading settings I use at multisite.longren.org can be seen in the following screenshot:

Reading settings setup

After you've saved your reading settings, open up your parent site in your browser and you should see something similar to what I showed in the blog directory example screenshot. Again, there's no need for you to keep the exact setup that I've used in the example blog-directory.php file. You can give it any style or design that you want, and you can rearrange the various pieces on the page as you prefer. You should have a decent working knowledge of HTML and CSS to accomplish this, however.

You should have a basic blog directory at this point. If you have any experience with PHP, HTML, and CSS, you can probably extend this basic code and do a whole lot more with it. The number of plugins is astounding and they are generally of very good quality, and I think Automattic has done great things for WordPress in general. No other CMS can claim to have anything like the number of plugins that WordPress does.

Summary

You should be able to effectively manage themes and plugins in a multisite installation now. If you set the code up, you've got a directory showcasing network member content and, more importantly, you now know how to set up and customize a WordPress child theme.

Resources for Article:

Further resources on this subject:

Customization using ADF Meta Data Services [Article]
Overview of Microsoft Dynamics CRM 2011 [Article]
Customizing an Avatar in Flash Multiplayer Virtual Worlds [Article]
Packing Everything Together

Packt
22 Aug 2013
13 min read
(For more resources related to this topic, see here.)

Creating a package

When you are distributing your extensions, often the problem you are helping your customer solve cannot be achieved with a single extension; it actually requires multiple components, modules, and plugins that work together. Rather than making the user install all of these extensions manually one by one, you can package them all together to create a single install package. Our click-to-call plugin and folio component go together nicely, so let's package them together.

1. Create a folder named pkg_folio_v1.0.0 on your desktop, and within it, create a folder named packages.

2. Copy into the packages folder the latest version of com_folio and plg_content_clicktocall, for example, com_folio_v2.7.0.zip and plg_content_clicktocall_v1.2.0.zip.

3. Now create a file named pkg_folio.xml in the root of the pkg_folio_v1.0.0 folder, and add the following code to it:

```xml
<?xml version="1.0" encoding="UTF-8" ?>
<extension type="package" version="3.0">
    <name>Folio Package</name>
    <author>Tim Plummer</author>
    <creationDate>May 2013</creationDate>
    <packagename>folio</packagename>
    <license>GNU GPL</license>
    <version>1.0.0</version>
    <url>www.packtpub.com</url>
    <packager>Tim Plummer</packager>
    <packagerurl>www.packtpub.com</packagerurl>
    <description>Single Install Package combining Click To Call plugin with Folio component</description>
    <files folder="packages">
        <file type="component" id="folio">com_folio_v2.7.0.zip</file>
        <file type="plugin" id="clicktocall" group="content">plg_content_clicktocall_v1.2.0.zip</file>
    </files>
</extension>
```

This looks pretty similar to the installation XML file that we created for each component; however, there are a few differences. Firstly, the extension type is package:

```xml
<extension type="package" version="3.0">
```

We have some new tags that help us describe what this package is and who made it. The person creating the package may be different from the original author of the extensions:

```xml
<packagename>folio</packagename>
<packager>Tim Plummer</packager>
<packagerurl>www.packtpub.com</packagerurl>
```

You will notice that we are looking for our extensions in the packages folder; however, this could potentially have any name you like:

```xml
<files folder="packages">
```

For each extension, we need to say what type of extension it is, what its name is, and the file containing it:

```xml
<file type="component" id="folio">com_folio_v2.7.0.zip</file>
```

You can package together as many components, modules, and plugins as you like, but be aware that some servers have a quite low maximum size for uploaded files, so if you try to package too much together, you may run into problems. Also, you might get timeout issues if the file is too big. You'll avoid most of these problems if you keep the package file under a couple of megabytes.

You can install packages via the Extension Manager in the same way you install any other Joomla! extension:

However, you will notice that the package is listed in addition to all of the individual extensions within it:

Setting up an update server

Joomla! has built-in update software that allows you to easily update your core Joomla! version, often referred to as one-click updates (even though they usually take a few clicks to launch). This update mechanism is also available to third-party Joomla! extensions; however, it involves you setting up an update server. You can try this out in your local development environment. To do so, you will need two Joomla! sites:
Setting up an update server

Joomla! has built-in update software that allows you to easily update your core Joomla! version, often referred to as one-click updates (even though it usually takes a few clicks to launch them). This update mechanism is also available to third-party Joomla! extensions; however, it involves you setting up an update server. You can try this out on your local development environment. To do so, you will need two Joomla! sites: http://localhost/joomla3, which will be our update server, and http://localhost/joomlatest, which will be the site on which we try to update the extensions. Note that the update server does not need to be a Joomla! site; it could be any folder on a web server.

Install our click-to-call plugin on the http://localhost/joomlatest site, and make sure it's enabled and working. To enable the update manager to check for updates, we need to add some code to the clicktocall.xml installation XML file under /plugins/content/clicktocall/:

<?xml version="1.0" encoding="UTF-8"?>
<extension version="3.0" type="plugin" group="content" method="upgrade">
    <name>Content - Click To Call</name>
    <author>Tim Plummer</author>
    <creationDate>April 2013</creationDate>
    <copyright>Copyright (C) 2013 Packt Publishing. All rights reserved.</copyright>
    <license>http://www.gnu.org/licenses/gpl-3.0.html</license>
    <authorEmail>example@packtpub.com</authorEmail>
    <authorUrl>http://packtpub.com</authorUrl>
    <version>1.2.0</version>
    <description>This plugin will replace phone numbers with click to call links. Requires Joomla 3.0 or greater. Don't forget to publish this plugin!</description>
    <files>
        <filename plugin="clicktocall">clicktocall.php</filename>
        <filename plugin="clicktocall">index.html</filename>
    </files>
    <languages>
        <language tag="en-GB">language/en-GB/en-GB.plg_content_clicktocall.ini</language>
    </languages>
    <config>
        <fields name="params">
            <fieldset name="basic">
                <field name="phoneDigits1" type="text" default="4"
                    label="PLG_CONTENT_CLICKTOCALL_FIELD_PHONEDIGITS1_LABEL"
                    description="PLG_CONTENT_CLICKTOCALL_FIELD_PHONEDIGITS1_DESC" />
                <field name="phoneDigits2" type="text" default="4"
                    label="PLG_CONTENT_CLICKTOCALL_FIELD_PHONEDIGITS2_LABEL"
                    description="PLG_CONTENT_CLICKTOCALL_FIELD_PHONEDIGITS2_DESC" />
            </fieldset>
        </fields>
    </config>
    <updateservers>
        <server type="extension" priority="1" name="Click To Call Plugin Updates">http://localhost/joomla3/updates/clicktocall.xml</server>
    </updateservers>
</extension>

The type can either be extension or collection; in most cases you'll be using extension, which allows you to update a single extension, as opposed to collection, which allows you to update multiple extensions via a single file:

type="extension"

When you have multiple update servers, you can set a different priority for each, so you can control the order in which the update servers are checked. If the first one is available, it won't bother checking the rest:

priority="1"

The name attribute describes the update server; you can put whatever value you like in here:

name="Click To Call Plugin Updates"

We have told the extension where it is going to check for updates, in this case http://localhost/joomla3/updates/clicktocall.xml. Generally, this should be a publicly accessible site so that users of your extension can check for updates. Note that you can specify multiple update servers for redundancy.

Now on your http://localhost/joomla3 site, create a folder named updates and put the usual index.html file in it. Copy in the latest version of your plugin, for example, plg_content_clicktocall_v1.2.1.zip. You may wish to make a minor visual change so you can see whether the update actually worked. For example, you could edit the en-GB.plg_content_clicktocall.ini language file under /language/en-GB/, and then zip it all back up again.
PLG_CONTENT_CLICKTOCALL_FIELD_PHONEDIGITS1_LABEL="Digits first part"
PLG_CONTENT_CLICKTOCALL_FIELD_PHONEDIGITS1_DESC="How many digits in the first part of the phone number?"
PLG_CONTENT_CLICKTOCALL_FIELD_PHONEDIGITS2_LABEL="Digits last part"
PLG_CONTENT_CLICKTOCALL_FIELD_PHONEDIGITS2_DESC="How many digits in the second part of the phone number?"

Now create the clicktocall.xml file with the following code in your updates folder:

<?xml version="1.0" encoding="utf-8"?>
<updates>
    <update>
        <name>Content - Click To Call</name>
        <description>This plugin will replace phone numbers with click to call links. Requires Joomla 3.0 or greater. Don't forget to publish this plugin!</description>
        <element>clicktocall</element>
        <type>plugin</type>
        <folder>content</folder>
        <client>0</client>
        <version>1.2.1</version>
        <infourl title="Click To Call Plugin 1.2.1">http://packtpub.com</infourl>
        <downloads>
            <downloadurl type="full" format="zip">http://localhost/joomla3/updates/plg_content_clicktocall_v1.2.1.zip</downloadurl>
        </downloads>
        <targetplatform name="joomla" version="3.1" />
    </update>
</updates>

This file could be called anything you like; it does not need to be extensionname.xml, as long as its name matches the update server URL you set in the installation XML for the extension. The updates tag surrounds all the update elements. Each time you release a new version, you will need to create another update section. Also, if your extension supports both Joomla! 2.5 and Joomla! 3, you will need separate <update> definitions for each version. And if you want to support updates for both Joomla! 3.0 and Joomla! 3.1, you will need separate tags for each of them.
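To make that concrete, here is a sketch of how two such definitions could sit side by side in the same file; the structure follows the tags described in this section, but the Joomla! 2.5 download file name is a hypothetical example of our own:

<updates>
    <update>
        <name>Content - Click To Call</name>
        <element>clicktocall</element>
        <type>plugin</type>
        <folder>content</folder>
        <client>0</client>
        <version>1.2.1</version>
        <downloads>
            <downloadurl type="full" format="zip">http://localhost/joomla3/updates/plg_content_clicktocall_v1.2.1_j25.zip</downloadurl>
        </downloads>
        <targetplatform name="joomla" version="2.5" />
    </update>
    <update>
        <name>Content - Click To Call</name>
        <element>clicktocall</element>
        <type>plugin</type>
        <folder>content</folder>
        <client>0</client>
        <version>1.2.1</version>
        <downloads>
            <downloadurl type="full" format="zip">http://localhost/joomla3/updates/plg_content_clicktocall_v1.2.1.zip</downloadurl>
        </downloads>
        <targetplatform name="joomla" version="3.1" />
    </update>
</updates>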
The value of the name tag is shown in the Extension Manager Update view, so using the same name as your extension should avoid confusion:

<name>Content - Click To Call</name>

The value of the description tag is shown when you hover over the name in the update view. The value of the element tag is the installed name of the extension. This should match the value in the element column in the jos_extensions table in your database:

<element>clicktocall</element>

The value of the type tag describes whether this is a component, module, or plugin:

<type>plugin</type>

The value of the folder tag is only required for plugins, and describes the type of plugin this is, in our case a content plugin. Depending on your plugin type, this may be system, search, editor, user, and so on:

<folder>content</folder>

The value of the client tag describes the client_id in the jos_extensions table, which tells Joomla! whether this is a site (0) or an administrator (1) extension type. Plugins will always be 0, components will always be 1; however, modules vary depending on whether it's a frontend or a backend module:

<client>0</client>

Plugins must have <folder> and <client> elements, otherwise the update check won't work. The value of the version tag is the version number for this release. This version number needs to be higher than the currently installed version of the extension for available updates to be shown:

<version>1.2.1</version>

The infourl tag is optional, and allows you to show a link to information about the update, such as release notes:

<infourl title="Click To Call Plugin 1.2.1">http://packtpub.com</infourl>

The downloads tag shows all of the available download locations for the update. The value of the downloadurl tag is the URL to download the extension from. This file could be located anywhere you like; it does not need to be in the updates folder on the same site. The type attribute describes whether this is a full package or an update, and the format attribute defines the package type, such as zip or tar:

<downloadurl type="full" format="zip">http://localhost/joomla3/updates/plg_content_clicktocall_v1.2.1.zip</downloadurl>

The targetplatform tag describes the Joomla! version this update is meant for. The value of the name attribute should always be set to joomla. If you want to target your update to a specific Joomla! version, you can use min_dev_level and max_dev_level in here, but in most cases you'd want your update to be available for all Joomla! versions in that Joomla! release. Note that min_dev_level and max_dev_level are only available in Joomla! 3.1 or higher.

<targetplatform name="joomla" version="3.1" />

So, now you should have the following files in your http://localhost/joomla3/updates folder:

clicktocall.xml
index.html
plg_content_clicktocall_v1.2.1.zip

You can make sure the XML file works by typing the full URL http://localhost/joomla3/updates/clicktocall.xml:

As the update server was not defined in our extension when we installed it, we need to manually add an entry to the jos_update_sites table in our database before the updates will work. So, now go to your http://localhost/joomlatest site and log in to the backend. From the menu navigate to Extensions | Extension Manager, and then click on the Update menu on the left-hand side. Click on the Find Updates button, and you should now see the update, which you can install:

Select the Content – Click To Call update and press the Update button, and you should see the successful update message:

And if all went well, you should now see the visual changes that you made to your plugin.

These built-in updates are pretty good, so why doesn't every extension developer use them? They work great for free extensions, but there is a flaw that prevents many extension developers from using them: there is no way to authenticate the user when they are updating. Essentially, this means that anyone who gets hold of your extension, or knows the details of your update server, can get ongoing free updates forever, regardless of whether they have purchased your extension or are an active subscriber. Many commercial developers have either implemented their own update solutions or don't bother using the update manager, as their customers can install new versions via Extension Manager over the top of previous versions. Although this approach is slightly inconvenient for the end user, it makes it easier for the developer to control distribution. One developer who has come up with his own solution to this is Nicholas K. Dionysopoulos from Akeeba, and he has kindly shared his solution, the Akeeba Release System, which you can get for free from his website and easily integrate into your own extensions. As usual, Nicholas has excellent documentation that you can read if you are interested, but it's beyond the scope of this book to go into detail about this alternative solution (https://www.akeebabackup.com/products/akeeba-release-system.html).

Summary

Now you know how to package up your extensions and get them ready for distribution. You learnt how to set up an update server, so now you can easily provide your users with the latest version of your extensions.

Resources for Article:

Further resources on this subject: Tips and Tricks for Joomla! Multimedia [Article] Adding a Random Background Image to your Joomla! Template [Article] Showing your Google calendar on your Joomla! site using GCalendar [Article]

Catering to Your Form-related Needs

Packt
22 Aug 2013
16 min read
(For more resources related to this topic, see here.)

Getting your form ready with form panels

This recipe shows how to create a basic form using Sencha Touch and implement some related behaviors, such as how to submit the form data and how to handle errors during the submission.

Getting ready

Make sure that you have set up your development environment.

How to do it...

Carry out the following steps to create a form panel:

Create a ch02 folder in the same folder where we had created the ch01 folder.

Create and open a new file ch02_01.js and paste the following code into it:

Ext.application({
    name: 'MyApp',
    requires: ['Ext.MessageBox'],
    launch: function() {
        var form;
        //form and related fields config
        var formBase = {
            //enable vertical scrolling in case the form exceeds the page height
            scrollable: 'vertical',
            standardSubmit: false,
            submitOnAction: true,
            url: 'http://localhost/test.php',
            items: [{
                //add a fieldset
                xtype: 'fieldset',
                title: 'Personal Info',
                instructions: 'Please enter the information above.',
                //apply the common settings to all the child items of the fieldset
                defaults: {
                    required: true, //required field
                    labelAlign: 'left',
                    labelWidth: '40%'
                },
                items: [{
                    //add a text field
                    xtype: 'textfield',
                    name: 'name',
                    label: 'Name',
                    clearIcon: true, //shows the clear icon in the field when user types
                    autoCapitalize: true
                }, {
                    //add a password field
                    xtype: 'passwordfield',
                    name: 'password',
                    label: 'Password',
                    clearIcon: false
                }, {
                    xtype: 'passwordfield',
                    name: 'reenter',
                    label: 'Re-enter Password',
                    clearIcon: true
                }, {
                    //add an email field
                    xtype: 'emailfield',
                    name: 'email',
                    label: 'Email',
                    placeHolder: 'you@sencha.com',
                    clearIcon: true
                }]
            }, {
                //items docked to the bottom of the form
                xtype: 'toolbar',
                docked: 'bottom',
                items: [{
                    text: 'Reset',
                    handler: function() {
                        form.reset(); //reset the fields
                    }
                }, {
                    text: 'Save',
                    ui: 'confirm',
                    handler: function() {
                        //submit the form data to the url
                        form.submit({
                            success: function(form, result) {
                                Ext.Msg.alert("INFO", "Form submitted!");
                            },
                            failure: function(form, result) {
                                Ext.Msg.alert("INFO", "Form submission failed!");
                            }
                        });
                    }
                }]
            }]
        };
        if (Ext.os.is.Phone) {
            formBase.fullscreen = true;
        } else { //if desktop
            Ext.apply(formBase, {
                modal: true,
                centered: true,
                hideOnMaskTap: false,
                height: 385,
                width: 480
            });
        }
        //create form panel
        form = Ext.create('Ext.form.Panel', formBase);
        Ext.Viewport.add(form);
    }
});

Include the following line of code in the index.html file:

<script type="text/javascript" charset="utf-8" src="ch02/ch02_01.js"></script>

Deploy and access it from the browser. You will see a screen as shown in the following screenshot:

How it works...

The code creates a form panel with a fieldset inside it. The fieldset has four fields specified as part of its child items. The xtype config mentioned for each field tells the Sencha Touch component manager which class to use to instantiate them. form = Ext.create('Ext.form.Panel', formBase); creates the form and the other field components using the config defined as part of formBase. Ext.Viewport.add(form); adds the form to the viewport, and that's how it appears on the screen. url contains the URL where the form data will be posted upon submission. The form can be submitted in two ways:

By hitting Go on the virtual keyboard, or Enter on a field, which ends up generating the action event
By clicking on the Save button, which internally calls the submit() method on the form object

form.reset() resets the status of the form and its fields to the original state. So, if you had entered values in the fields and clicked on the Reset button, all the fields would be cleared.
form.submit() posts the form data to the specified URL. The data is posted as an Ajax request using the POST method. The clearIcon config on a field tells Sencha Touch whether it should show the clear icon in the field when the user starts entering values in it. On clicking this icon, the value in the field is cleared.

There's more...

In the preceding code, we saw how to construct a form panel, add fields to it, and handle events. Let us see what other non-trivial things we may have to do in a project and how we can achieve these using Sencha Touch.

Standard submit

This is the old, traditional way of posting form data to the server URL. If your application needs to use the standard form submit rather than Ajax, you will have to set the standardSubmit property to true on the form panel. This is set to false by default. The following code snippet shows the usage of this property:

var formBase = {
    scrollable: 'vertical',
    standardSubmit: true,
    ...

After this property is set to true on the form panel, form.submit() will load the complete page specified in the url property.

Submitting on field action

As we saw earlier, the form data automatically gets posted to the URL if the action event occurs (when the Go button or the Enter key is hit). In many applications, this default behavior may not be desirable. To disable this feature, you will have to set submitOnAction to false on the form panel.

Post-submission handling

Say we posted our data to the URL. Now, either the call may fail or it may succeed. To handle these specific conditions and act accordingly, we will have to pass additional config options to the form's submit() method. The following code shows the enhanced version of the submit call:

form.submit({
    success: function(form, result) {
        Ext.Msg.alert("INFO", "Form submitted!");
    },
    failure: function(form, result) {
        Ext.Msg.alert("INFO", "Form submission failed!");
    }
});

In case the Ajax call (to post form data) fails, the failure() callback function is called, and if it succeeds, the success() callback function is called. This works only if the standardSubmit property is set to false.

Reading form data

To read the values entered into a form field, the form panel provides the getValues() method, which returns an object with field names and their values. It is important that you set the name property on your form field; otherwise, that field value will not appear in the object returned by the getValues() method:

handler: function() {
    console.log('INFO', form.getValues());
    //submit the form data to the url
    form.submit({
    ...
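As a quick illustration of getValues() in practice, here is a sketch of a Save handler that performs a simple client-side check before submitting; the validation rule is our own addition and not part of the original recipe:

handler: function() {
    var values = form.getValues();
    //block the submit when the two password fields differ
    if (values.password !== values.reenter) {
        Ext.Msg.alert("Error", "The passwords do not match!");
        return;
    }
    form.submit({
        success: function(form, result) {
            Ext.Msg.alert("INFO", "Form submitted!");
        },
        failure: function(form, result) {
            Ext.Msg.alert("INFO", "Form submission failed!");
        }
    });
}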
Loading data in the form fields

To set the form field values, the form panel provides the record config and two methods, setValues() and setRecord(). The setValues() method expects a config object with name-value pairs for the fields. The following code shows how to use the setValues() method:

{
    text: 'Set Data',
    handler: function() {
        form.setValues({
            name: 'Ajit Kumar',
            email: 'ajit@wtc.com'
        });
    }
}, {
    text: 'Reset',
    ...

The preceding code adds a new button named Set Data; by clicking on it, the form field data is populated as shown in the following screenshot. As we had passed values for the Name and Email fields, they are set:

The other method, setRecord(), expects an instance of the Ext.data.Model class. The following code shows how we can create a model and use it to populate the form fields:

, {
    text: 'Load Data',
    handler: function() {
        Ext.define('MyApp.model.User', {
            extend: 'Ext.data.Model',
            config: {
                fields: ['name', 'email']
            }
        });
        var ajit = Ext.create('MyApp.model.User', {
            name: 'Ajit Kumar',
            email: 'ajit@wtc.com'
        });
        form.setRecord(ajit);
    }
}, {
    text: 'Reset',
    ...

We shall use setRecord() when our data is stored as a model, or when we construct it as a model to use the benefits of the model (for example, loading from a remote data source, data conversion, data validation, and so on) that are not available with the JSON presentation of the data. While these methods help us set the field values at runtime, the record config allows us to populate the form field values when the form panel is constructed. The following code snippet shows how we can pass a model at the time of instantiation of the form panel:

var ajit = Ext.create('MyApp.model.User', {
    name: 'Ajit Kumar',
    email: 'ajit@wtc.com'
});
var formBase = {
    scrollable: 'vertical',
    standardSubmit: true,
    record: ajit,
    ...

Working with search

We will go over each of the form fields and understand how to work with them. This recipe describes the steps required to create and use a search form field.

Getting ready

Make sure that you have set up your development environment.

How to do it...

Carry out the following steps:

Copy ch02_01.js to ch02_02.js.

Open a new file ch02_02.js and replace the definition of formBase with the following code:

var formBase = {
    items: [{
        xtype: 'searchfield',
        name: 'search',
        label: 'Search'
    }]
};

Include ch02_02.js in place of ch02_01.js in index.html.

Deploy and access the application in the browser. You will see a form panel with a search field.

How it works...

A search field can be constructed using the Ext.field.Search class instance or using the xtype: 'searchfield' approach. A search form field implements the HTML5 <input> element with type="search". However, the implementation is very limited. For example, the search field in HTML5 allows us to associate a data list that it can use during the search, whereas this feature is not present in Sencha Touch. Similarly, the W3 search field defines a pattern attribute to allow us to specify a regular expression against which a user agent is meant to check the value, which is not yet supported in Sencha Touch. For more detail, you may refer to the W3 search field (http://www.w3.org/TR/html-markup/input.search.html) and the source code of the Ext.field.Search class.

There's more...

In an application, we often do not use a label for search fields. Rather, we would like to show text such as Search… inside the field, which disappears when the focus is on the field. Let us see how we can achieve this.

Using a placeholder

Placeholders are supported by most of the form fields in Sencha Touch using the placeHolder property. Placeholder text appears in the field as long as there is no value entered in it and the field does not have the focus. The following code snippet shows the typical usage of it:

{
    xtype: 'searchfield',
    name: 'search',
    label: 'Search',
    placeHolder: 'Search...'
}

Applying custom validation in the e-mail field

This recipe describes how to make use of the e-mail form field provided by Sencha Touch, and how to validate the value entered into it to find out whether it passes the validation rule.

Getting ready

Make sure that you have set up your development environment.

How to do it...

Carry out the following steps:

Copy ch02_01.js to ch02_03.js.
Open a new file ch02_03.js and replace the definition of formBase with the following code: var formBase = {items: [{xtype: 'emailfield',name : 'email',label: 'Email',placeHolder: 'you@sencha.com',clearIcon: true,listeners: {blur: function(thisTxt, eventObj) {var val = thisTxt.getValue();//validate using the patternif (val.search("[a-c]+@[a-z]+[.][a-z]+") == -1)Ext.Msg.alert("Error", "Invalid e-mail address!!");elseExt.Msg.alert("Info", "Valid e-mail address!!");}}}]}; Include ch02_03.js in place of ch02_02.js in index.html. Deploy and access the application in the browser. How it works... The Email field can be constructed using the Ext.field.Email class instance or using the xtype value as emailfield. The e-mail form field implements the HTML5 <input> element with type="email". However, similar to the search field, the implementation is very limited. For example, the e-mail field in HTML5 allows us to specify a regular expression pattern, which can be used to validate the value entered in the field. Working with dates using the date picker This recipe describes how to make use of the date picker form field provided by Sencha Touch, which allows the user to select a date. Getting ready Make sure that you have set up your development environment. How to do it... Carry out the following steps: Copy ch02_01.js to ch02_04.js. Open a new file ch02_04.js and replace the definition of formBase with the following code: var formBase = {items: [{xtype: 'datepickerfield',name: 'date',label: 'Date'}]}; Include ch02_04.js in place of ch02_03.js in index.html. Deploy and access the application in the browser. How it works... The date picker field can be constructed using the Ext.field.DatePicker class instance or using the xtype: datepickerfield approach. The date picker form field implements the HTML <select> element. When the user tries to select an entry, it shows the date picker component with the slots for the month, day, and year for selection. After selection, when the user clicks on the Done button, the field is set with the selected value. There's more... Additionally, there are other things that can be done, such as setting a date to the current date or a particular date, or changing the order of appearance of month, day, and year. Let us see what it takes to accomplish this. Setting the default date to the current date To set the default value to the current date, the value property must be set to the current date. The following code shows how to do it: var formBase = {items: [{xtype: 'datepickerfield',name: 'date',label: 'Date',value: new Date(),… Setting the default date to a particular date The default date is January 01, 1970. Let's suppose that you need to set the date to a different date but not the current date. To do so, you will have to set the value property using the year, month, and day properties, as follows: var formBase = {items: [{xtype: 'datepickerfield',name: 'date',label: 'Date',value: {year: 2011, month: 6, day: 11},… Changing the slot order By default, the slot order is month, day, and year. You can change it by setting the slotOrder property of the picker property of date picker, as shown in the following code: var formBase = {items: [{xtype: 'datepickerfield',name: 'date',label: 'Date',picker: {slotOrder: ['day', 'month', 'year']}}]}; Setting the picker date range By default, the date range shown by the picker is from 1970 till the current year. 
If our application needs a different year range, we can set the yearFrom and yearTo properties of the picker config of the date picker, as follows:

var formBase = {
    items: [{
        xtype: 'datepickerfield',
        name: 'date',
        label: 'Date',
        picker: {
            yearFrom: 2000,
            yearTo: 2013
        }
    }]
};

Making a field hidden

Often in an application, there is a need to hide fields that are not relevant in a particular context, and to show them again later when they are needed. In this recipe, we will see how to make a field hidden and show it conditionally.

Getting ready

Make sure that you have set up your development environment.

How to do it...

Carry out the following steps:

Edit ch02_04.js and modify the code, as follows, by adding the hidden property:

var formBase = {
    items: [{
        xtype: 'datepickerfield',
        id: 'datefield-id',
        name: 'date',
        hidden: true,
        label: 'Date'
    }]
};

Deploy and access the application in the browser.

How it works...

When a field is marked as hidden, Sencha Touch uses the DOM's hide() method on the element to hide that particular field.

There's more...

Let's see how we can programmatically show/hide a field.

Showing/hiding a field at runtime

Each component in Sencha Touch supports two methods, show() and hide(). The show() method shows the element and the hide() method hides it. To call these methods, we first have to find a reference to the component, which can be achieved either by using the object reference or by using the Ext.getCmp() method. Given a component ID, the getCmp() method returns the component. The following code snippet demonstrates showing an element:

var cmp = Ext.getCmp('datefield-id');
cmp.show();

To hide an element, we will have to call cmp.hide().

Working with the select field

This recipe describes the use of the select form field, which allows the user to select a value from a list of choices, such as a combobox.

Getting ready

Make sure that you have set up your development environment.

How to do it...

Carry out the following steps:

Copy ch02_01.js to ch02_05.js.

Open a new file ch02_05.js and replace the definition of formBase with the following code:

var formBase = {
    items: [{
        xtype: 'selectfield',
        name: 'select',
        label: 'Select',
        placeHolder: 'Select...',
        options: [
            {text: 'First Option', value: 'first'},
            {text: 'Second Option', value: 'second'},
            {text: 'Third Option', value: 'third'}
        ]
    }]
};

Include ch02_05.js in place of ch02_04.js in index.html.

Deploy and access the application in the browser.

How it works...

The preceding code creates a select form field with three options for selection. The select field can be constructed using the Ext.field.Select class instance or using the xtype: 'selectfield' approach. The select form field implements the HTML <select> element. By default, it uses the text property to show the text for selection.

There's more...

It may not always be possible or desirable to use the text and value properties in the data to populate the selection list. In case we have a different property in place of text, how do we make sure that the selection list is populated correctly without any further conversion? Let's see how we can do this.
Using a custom display value We shall use displayField to specify the field that will be used as text, as shown in the following code: {xtype: 'selectfield',name: 'select',label: 'Second Select',placeHolder: 'Select...',displayField: 'desc',options: [ {desc: 'First Option', value: 'first'}, {desc: 'Second Option', value: 'second'}, {desc: 'Third Option', value: 'third'}]} Changing a value using slider This recipe describes the use of the slider form field, which allows the user to change the value by mere sliding. Getting ready Make sure that you have set up your development environment. How to do it... Carry out the following steps: Copy ch02_01.js to ch02_06.js. Open a new file ch02_06.js and replace the definition of formBase with the following code: var formBase = {items: [{xtype: 'sliderfield',name : 'height',label: 'Height',minValue: 0,maxValue: 100,increment: 10}]}; Include ch02_06.js in place of ch02_05.js in index.html. Deploy and access the application in the browser. How it works... The preceding code creates a slider field with 0 to 100 as the range of values, with 10 as the increment value; this means that, when a user clicks on the slider, the value will change by 10 on every click. The increment value must be a whole number.
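To read the slider's value outside the form, remember that it participates in getValues() like any other field. The following sketch (the docked button and alert are our own additions, not part of the original recipe) assumes the slider sits in a form panel referenced by the form variable, as in the first recipe:

{
    xtype: 'toolbar',
    docked: 'bottom',
    items: [{
        text: 'Show Height',
        handler: function() {
            //read all field values, including the slider's height field
            Ext.Msg.alert('INFO', 'Height: ' + form.getValues().height);
        }
    }]
}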

Mailbox Database Management

Packt
19 Aug 2013
10 min read
(For more resources related to this topic, see here.)

Determining the average mailbox size per database

PowerShell is very flexible and gives you the ability to generate very detailed reports. When generating mailbox database statistics, we can utilize data returned from multiple cmdlets provided by the Exchange Management Shell. This section will show you an example of this, and you will learn how to calculate the average mailbox size per database using PowerShell.

How to do it...

To determine the average mailbox size for a given database, use the following one-liner:

Get-MailboxStatistics -Database DB1 |
    ForEach-Object {$_.TotalItemSize.value.ToMB()} |
    Measure-Object -Average |
    Select-Object -ExpandProperty Average

How it works...

Calculating an average is as simple as performing some basic math, but PowerShell gives us the ability to do this quickly with the Measure-Object cmdlet. The example uses the Get-MailboxStatistics cmdlet to retrieve all the mailboxes in the DB1 database. We then loop through each one, retrieving only the TotalItemSize property, and inside the ForEach-Object script block we convert the total item size to megabytes. The result from each mailbox can then be averaged using the Measure-Object cmdlet. At the end of the command, you can see that the Select-Object cmdlet is used to retrieve only the value of the Average property. The number returned here averages every mailbox in the database: regular mailboxes, archive mailboxes, and any mailboxes that have been disconnected. If you want to be more specific, you can filter out these mailboxes after running the Get-MailboxStatistics cmdlet:

Get-MailboxStatistics -Database DB1 |
    Where-Object {!$_.DisconnectDate -and !$_.IsArchive} |
    ForEach-Object {$_.TotalItemSize.value.ToMB()} |
    Measure-Object -Average |
    Select-Object -ExpandProperty Average

Notice that, in the preceding example, we have added the Where-Object cmdlet to filter out any mailboxes that have a DisconnectDate defined or where the IsArchive property is $true. Another thing that you may want to do is round the average. Let's say the DB1 database contained 42 mailboxes and the total size of the database was around 392 megabytes. The value returned from the preceding command would look roughly like 2.39393939393939. Rarely are all those extra decimal places of any use. Here are a couple of ways to make the output a little cleaner:

$MBAvg = Get-MailboxStatistics -Database DB1 |
    ForEach-Object {$_.TotalItemSize.value.ToMB()} |
    Measure-Object -Average |
    Select-Object -ExpandProperty Average
[Math]::Round($MBAvg,2)

You can see that this time we stored the result of the one-liner in the $MBAvg variable. We then use the Round method of the Math class in the .NET Framework to round the value, specifying that the result should only contain two decimal places. Based on the previous information, the result of the preceding command would be 2.39. We can also use string formatting to specify the number of decimal places to be used:

[PS] "{0:n2}" -f $MBAvg
2.39

Keep in mind that this command will return a string, so if you need to be able to sort on this value, cast it to a double:

[PS] [double]("{0:n2}" -f $MBAvg)
2.39

The -f format operator is documented in PowerShell's help system in about_operators.

There's more...

The previous examples have only shown how to determine the average mailbox size for a single database.
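If you check single databases often, it can be convenient to wrap the one-liner in a small helper function first. This is only a sketch; the function name Get-AverageMailboxSize is our own and not part of the Exchange Management Shell:

function Get-AverageMailboxSize {
    param([string]$Database)
    # Average the TotalItemSize (in MB) of every mailbox in the database
    $avg = Get-MailboxStatistics -Database $Database |
        ForEach-Object {$_.TotalItemSize.value.ToMB()} |
        Measure-Object -Average |
        Select-Object -ExpandProperty Average
    # Round to two decimal places, as shown earlier
    [Math]::Round($avg, 2)
}

You would call it as Get-AverageMailboxSize -Database DB1.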
To determine this information for all mailbox databases, we can use the following code (save it to a file called size.ps1):

foreach($DB in Get-MailboxDatabase) {
    Get-MailboxStatistics -Database $DB |
        ForEach-Object {$_.TotalItemSize.value.ToMB()} |
        Measure-Object -Average |
        Select-Object @{n="Name";e={$DB.Name}},
            @{n="AvgMailboxSize";e={[Math]::Round($_.Average,2)}} |
        Sort-Object AvgMailboxSize -Desc
}

The result of this command would look something like this:

This example is very similar to the one we looked at previously. The difference is that, this time, we are running our one-liner inside a foreach loop for every mailbox database in the organization. When each mailbox database has been processed, we sort the output based on the AvgMailboxSize property.

Restoring data from a recovery database

When it comes to recovering data from a failed database, you have several options, depending on what kind of backup product you are using or how you have deployed Exchange 2013. The ideal method for enabling redundancy is to use a DAG, which will replicate your mailbox databases to one or more servers and provide automatic failover in the event of a disaster. However, you may need to pull old data out of a database restored from a backup. In this section, we will take a look at how you can create a recovery database and restore data from it using the Exchange Management Shell.

How to do it...

First, restore the failed database using the steps required by your current backup solution. For this example, let's say that we have restored the DB1 database file to E:\Recovery\DB1 and the database has been brought to a clean shutdown state. We can use the following steps to create a recovery database and restore mailbox data:

Create a recovery database using the New-MailboxDatabase cmdlet:

New-MailboxDatabase -Name RecoveryDB `
    -EdbFilePath E:\Recovery\DB1\DB1.edb `
    -LogFolderPath E:\Recovery\DB01 `
    -Recovery `
    -Server MBX1

When you run the preceding command, you will see a warning that the recovery database was created using the existing database file. The next step is to check the state of the database, followed by mounting the database:

Eseutil /mh .\DB1.edb
Eseutil /R E02 /D
Mount-Database -Identity RecoveryDB

Next, query the recovery database for all mailboxes that reside in the database RecoveryDB:

Get-MailboxStatistics -Database RecoveryDB | fl DisplayName,MailboxGUID,LegacyDN

Lastly, we will use the New-MailboxRestoreRequest cmdlet to restore the data from the recovery database for a single mailbox:

New-MailboxRestoreRequest -SourceDatabase RecoveryDB `
    -SourceStoreMailbox "Joe Smith" `
    -TargetMailbox joe.smith

When running the eseutil commands, make sure you are in the folder where the restored mailbox database and logs are placed.

How it works...

When you restore the database file from your backup application, you may need to ensure that the database is in a clean shutdown state. For example, if you are using Windows Server Backup for your backup solution, you will need to use the Eseutil.exe database utility to play any uncommitted logs into the database to get it into a clean shutdown state. Once the data is restored, we can create a recovery database using the New-MailboxDatabase cmdlet, as shown in the first example. Notice that when we ran the command we used several parameters. First, we specified the path to the EDB file and the logfiles, both of which are in the same location where we restored the files.
We have also used the -Recovery switch parameter to specify that this is a special type of database that will only be used for restoring data and should not be used for production mailboxes. Finally, we specified which mailbox server the database should be hosted on using the -Server parameter. Make sure to run the New-MailboxDatabase cmdlet from the mailbox server that you are specifying in the -Server parameter, and then mount the database using the Mount-Database cmdlet. The last step is to restore data from one or more mailboxes. As we saw in the previous example, New-MailboxRestoreRequest is the tool to use for this task. This cmdlet was introduced in Exchange 2010 SP1, so if you have used this process in the past, the procedure is the same with Exchange 2013. There's more… When you run the New-MailboxRestoreRequest cmdlet, you need to specify the identity of the mailbox you wish to restore using the -SourceStoreMailbox parameter. There are three possible values you can use to provide this information: DisplayName, MailboxGuid, and LegacyDN . To retrieve these values, you can use the Get-MailboxStatistics cmdlet once the recovery database is online and mounted: Get-MailboxStatistics -Database RecoveryDB | fl DisplayName,MailboxGUID,LegacyDN Here we have specified that we want to retrieve all three of these values for each mailbox in the RecoveryDB database. Understanding target mailbox identity When restoring data with the New-MailboxRestoreRequest cmdlet, you also need to provide a value for the -TargetMailbox parameter. The mailbox needs to already exist before running this command. If you are restoring data from a backup for an existing mailbox that has not changed since the backup was done, you can simply provide the typical identity values for a mailbox for this parameter. If you want to restore data to a mailbox that was not the original source of the data, you need to use the -AllowLegacyDNMismatch switch parameter. This will be useful if you are restoring data to another user's mailbox, or if you've recreated the mailbox since the backup was taken. Learning about other useful parameters The New-MailboxRestoreRequest cmdlet can be used to granularly control how data is restored out of a mailbox. The following parameters may be useful to customize the behavior of your restores: ConflictResolutionOption: This parameter specifies the action to take if multiple matching messages exist in the target mailbox. The possible values are KeepSourceItem, KeepLatestItem, or KeepAll. If no value is specified, KeepSourceItem will be used by default. ExcludeDumpster: Use this switch parameter to indicate that the dumpster should not be included in the restore. SourceRootFolder: Use this parameter to restore data only from a root folder of a mailbox. TargetIsArchive: You can use this switch parameter to perform a mailbox restore to a mailbox archive. TargetRootFolder: This parameter can be used to restore data to a specific folder in the root of the target mailbox. If no value is provided, the data is restored and merged into the existing folders, and, if they do not exist, they will be created in the target mailbox. These are just a few of the useful parameters that can be used with this cmdlet, but there are more. For a complete list of all the available parameters and full details on each one, run Get-Help New-MailboxRestoreRequest -Detailed. 
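To illustrate how a few of these parameters combine, here is a hedged variation on the earlier restore; the target folder name and the chosen options are our own example values:

New-MailboxRestoreRequest -SourceDatabase RecoveryDB `
    -SourceStoreMailbox "Joe Smith" `
    -TargetMailbox joe.smith `
    -TargetRootFolder "Recovered Items" `
    -ConflictResolutionOption KeepLatestItem `
    -ExcludeDumpster

This would restore Joe's data into a Recovered Items folder at the root of his live mailbox, keep only the latest copy when matching messages already exist, and skip the dumpster.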
Understanding mailbox restore request cmdlets

There is an entire cmdlet set for mailbox restore requests in addition to the New-MailboxRestoreRequest cmdlet. The remaining available cmdlets are outlined as follows:

Get-MailboxRestoreRequest: Provides a detailed status of mailbox restore requests
Remove-MailboxRestoreRequest: Removes fully or partially completed restore requests
Resume-MailboxRestoreRequest: Resumes a restore request that was suspended or failed
Set-MailboxRestoreRequest: Can be used to change the restore request options after the request has been created
Suspend-MailboxRestoreRequest: Suspends a restore request any time after the request was created but before the request reaches the status of Completed

For complete details and examples for each of these cmdlets, use the Get-Help cmdlet with the appropriate cmdlet using the -Full switch parameter.

Taking it a step further

Let's say that you have restored your database from backup, you have created a recovery database, and now you need to restore each mailbox in the backup to the corresponding target mailboxes that are currently online. We can use the following script to accomplish this:

$mailboxes = Get-MailboxStatistics -Database RecoveryDB
foreach($mailbox in $mailboxes) {
    New-MailboxRestoreRequest -SourceDatabase RecoveryDB `
        -SourceStoreMailbox $mailbox.DisplayName `
        -TargetMailbox $mailbox.DisplayName
}

Here you can see that first we use the Get-MailboxStatistics cmdlet to retrieve all the mailboxes in the recovery database and store the results in the $mailboxes variable. We then loop through each mailbox and restore the data to the original mailbox. You can track the status of these restores using the Get-MailboxRestoreRequest cmdlet and the Get-MailboxRestoreRequestStatistics cmdlet.

Summary

Thus in this article, we covered a small but appetizing part of mailbox database management: determining the average mailbox size per database and restoring data from a recovery database.

Resources for Article:

Further resources on this subject: Connecting to Microsoft SQL Server Compact 3.5 with Visual Studio [Article] Microsoft SQL Azure Tools [Article] SQL Server 2008 R2: Multiserver Management Using Utility Explorer [Article]

Installing Magento

Packt
19 Aug 2013
22 min read
(For more resources related to this topic, see here.)

Installing Magento locally

Whether you're working on a Windows computer, Mac, or Linux machine, you will notice very soon that it comes in handy to have a local Magento test environment available. Magento is a complex system, and besides doing regular tasks, such as adding products and other content, you should never apply changes directly to your live store. When you're working on your own, a local test system is easy to set up and gives you the possibility to test changes without any risk. When you're working in a team, it makes sense to have a test environment running on your own server or hosting provider. Here, we'll start by explaining how to set up your local test system.

Requirements

Before we jump into action, it's good to have a closer look at Magento's requirements. What do you need to run it? Simply put, all up-to-date requirements for Magento can be found here: http://www.magentocommerce.com/system-requirements. But maybe that's a bit overwhelming if you are just a beginner, so let's break it up into the most essential items:

Operating system: Linux. Magento runs best on Linux, as offered by most hosting companies. Don't worry about your local test environment, as that will run on Windows or Mac as well. But for your live store you should go for a Linux solution, because running a live store on anything other than Linux is not supported.

Web server: Apache. Magento runs on Versions 1.3.x, 2.0.x, and 2.2.x of this very popular web server. As of Version 1.7 of Magento Community and Version 1.12 of Magento Enterprise, the Nginx web server is compatible as well.

Programming language: PHP. Magento has been developed using PHP, a very popular programming language. Many major open source solutions, such as WordPress and Joomla, have been built using PHP. Use Versions 5.2.13 - 5.3.15. Do not use PHP4 anymore, nor use PHP 5.4 yet!

PHP extensions: Magento requires a number of extensions, which should be available on top of PHP itself. You will need: PDO_MySQL, mcrypt, hash, simplexml, GD, DOM, Iconv, and Curl. Besides that, you also need the possibility to switch off ''safe mode''. You do not have a clue about all of this? Don't worry: a host offering Magento services already takes care of this, and for your local environment there are only a few additional steps to take. We'll get there in a minute.

Database: MySQL. MySQL is the database where Magento will store all the data for your store. Use Version 4.1.20 or (preferably) newer.

As you can see, even in a simplified format, there are quite a few things that need to be taken care of. Magento hosting is not as simple as hosting for a small WordPress or Joomla! website, currently the most popular open source solutions for creating a regular site. The requirements are higher, and you just cannot expect to host your store for only a couple of dollars per month. If you do, your online store may still work, but it is likely that you'll run into performance issues. Be careful with the cheapest hosting solutions: although Magento may work, you'll soon be consuming more server resources than such plans provide. Go for a dedicated server or a managed VPS (Virtual Private Server), and definitely for a host that advertises support of Magento.
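Later in this article we'll use a phpinfo.php file to inspect a server's configuration. As a quicker, hedged alternative, a small script along the following lines can flag missing extensions at a glance; the file name check.php and the output format are our own:

<?php
// check.php - rough pre-flight check for Magento's PHP requirements
$required = array('pdo_mysql', 'mcrypt', 'hash', 'simplexml', 'gd', 'dom', 'iconv', 'curl');
echo 'PHP version: ' . phpversion() . PHP_EOL;
foreach ($required as $ext) {
    // extension_loaded() reports whether the PHP module is available
    echo str_pad($ext, 12) . (extension_loaded($ext) ? 'OK' : 'MISSING') . PHP_EOL;
}

Upload it to the web server you want to test (or run php check.php on the command line), and remove it again afterwards.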
Time for action – installing Magento on a Windows machine

We'll speak more deeply about Magento hosting later on. Let's first download and install the package on a local Windows machine. Are you a Mac user? Don't worry, we'll give instructions for Mac users as well later on. Note that the following instructions are written for Windows users, but they contain valuable information for Mac users as well. Perform the following steps to install Magento on your Windows computer:

Download the Magento installation package. Head over to http://www.magentocommerce.com/download and download the package you need. For a Windows user, the full ZIP package is almost always the most convenient one. In our situation Version 1.7.0.2 is the latest one, but please be aware that this will certainly change over time as newer versions are released. You will need to create a (free) account to download the software. This account will also be helpful later on: it will give you access to the Magento support forums, so make sure to store your login details somewhere. The download screen should look something like this:

If you're a beginner, it is handy to have some sample data in your store. Magento offers a download package containing sample data on the same page, so download that as well. Note that for a production environment you would never install the sample data, but for a test system like the local installation we're doing here, it is a good idea to use it. The sample data will create a few items and customers in your store, which will make the learning process easier. Did you notice the links to Magento Go at every download link? Magento Go is Magento's online platform, which you can use out of the box, without doing any installation at all. However, in the remaining part of this article, we assume that you are going to set up your own environment and want to have full control over your store.

Next, you need a web server, so that you can run your website locally on your own machine. On Windows machines, XAMPP is an easy-to-use all-in-one solution. Download the installer version via: http://www.apachefriends.org/en/xampp-windows.html. XAMPP is also available for Mac and Linux. The download screen is as follows:

Once downloaded, run the executable to start the installation process. You might receive some security warnings that you have to accept, especially when you're using Windows Vista, 7, or 8, like in the following example:

Because of this, it's best to install XAMPP directly in the root of your hard drive, c:\xampp in most cases. Once you click on OK, you will see the following screen, which shows the progress of the installation:

Once the installation has finished, the software asks if you'd like to start the Control Panel. If you do so, you'll see a number of services that have not been started yet. The minimum that you should start, by clicking the Start button, are Apache, the web server, and MySQL, the database server. Now you're running your own web server on your local computer. Be aware that generally this web server will not be accessible to the outside world; it's running on your local machine, just for testing purposes. Before doing the next step, please verify that your web server is actually running. You can do so by using your browser and going to http://localhost or http://127.0.0.1. If all went well, you should see something similar to the following:

No result? If you're on a Windows computer, please first reboot your machine. Next, check using the XAMPP control panel whether the Apache service is running. If it isn't, try to start it and pay attention to the error messages that appear. Need more help?
Start with the help available on XAMPP's website at: http://www.apachefriends.org/en/faq-xampp-windows.html. Can't start the Apache service? Check if there are any other applications using ports 80 and 443; the XAMPP control panel will give you more information. One application that you should, for instance, stop before starting XAMPP is Skype. It's also possible to change this setting in Skype by navigating to Tools | Options | Advanced | Connections. Change the port number to something else, for instance port 8080, then close and restart Skype. This prevents the two from interfering with each other in the future.

So, the next thing that needs to be done is installing Magento on top of it. But before we do so, we first have to change a few settings. Change the following Windows file: C:\Windows\System32\drivers\etc\hosts. Make sure to open your editor using administrator rights, otherwise you will not be able to save your changes. Add the following line to the hosts file:

127.0.0.1 www.localhost.com

This is needed because Magento will not work correctly on a localhost without this setting. You may use a different name, but the general rule is that at least one dot must be used in the local domain name. The following screenshot gives an example of a possible hosts file. Please note that every hosts file will look a bit different. Also, your security software or Windows security settings may prevent you from making changes to this file, so please make sure you have the appropriate rights to change and save its contents:

Do you need a text editor? There are really lots of possibilities when it comes to editing text for the web, as long as you use a ''plain text'' editor. Something like Microsoft Word isn't suitable because it will add a lot of unwanted code to your files! For very simple things like the one above, even Notepad would work. But soon you'll notice that it is much more convenient to use an editor that helps you in structuring and formatting your files. Personally, I can recommend the free Notepad++ for Windows users, which is even available in lots of different languages: http://notepad-plus-plus.org. Mac users can have a look at Coda: http://panic.com/coda/ or TextWrangler: http://www.barebones.com/products/textwrangler/.

Unzip the downloaded Magento package and put all files in a subfolder of your XAMPP installation. This could for instance be c:\xampp\htdocs\magento. Now, go to www.localhost.com/magento to check if the installation screen of Magento is visible, as shown in the following screenshot. But do not yet start the installation process!

Before you start the installation, first create a MySQL database. To do this, use a second browser tab and navigate to localhost | phpMyAdmin. By default the user is root, and so without a password you should be able to continue without logging in. Click on Databases and create a database with a name of your choice. Write it down, as you will need it during the Magento installation. After creating the database you may close the browser tab.
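If you prefer the command line over phpMyAdmin, the same database can be created with a few MySQL statements. This is just a sketch; the database name magento and the user credentials are example values, and for a local XAMPP test the root user works fine too:

CREATE DATABASE magento CHARACTER SET utf8;
-- Optional: a dedicated user instead of root (recommended on a live server)
CREATE USER 'magento_user'@'localhost' IDENTIFIED BY 'choose-a-strong-password';
GRANT ALL PRIVILEGES ON magento.* TO 'magento_user'@'localhost';
FLUSH PRIVILEGES;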
It's finally time to start the installation process now. Go back to the installation screen of Magento, accept the license agreement, and click on Continue. Next, set your country, Time Zone, and Default Currency. If you're working with multiple currencies, that will be addressed later on:

The next screen is actually the most important one of the installation process, and this is where most beginners go wrong because they do not know what values to use. Using XAMPP this is an easy task: fill in your Database Name and User Name (root), and do not forget to check the Skip Base URL Validation Before the Next Step box, otherwise your installation might fail:

In this same form there are some fields that you can use to immediately improve the security level of your Magento setup. On a local test environment that isn't necessary, so we'll pay attention to those settings later on, when we discuss installing Magento at a hosting provider. Please note that the Use Secure URLs option should remain unchecked for a local installation like the one we're doing here.

In the last step (yes, really!), just fill out your personal data and choose a username and password. Also in here, since you're working locally, you do not have to create a complicated, unique password now. But you know what we mean, right? Doing a live installation at a hosting provider requires a good, strong password! You do not have to fill the Encryption Key field; Magento will do that for you:

In the final screen, please just make a note of the Encryption Key value that was generated. You might need it in the future whenever upgrading your Magento store to a newer software version:

What just happened?

Congratulations! You just installed Magento for the very first time! Summarizing it, you just:

Downloaded and installed XAMPP
Changed your Windows hosts file
Created a MySQL database using PhpMyAdmin
Installed Magento

I'm on Mac; what should I do?

Basically, the steps using XAMPP are a bit different if you're using Mac. We shall be using Mac OS X 10.8 as our example version. In our experience, MAMP is a bit easier to work with than XAMPP if you are on a Mac. You can find the MAMP software here: http://www.mamp.info/en/downloads/index.html. And the documentation for MAMP is available here: http://documentation.mamp.info/en/mamp/installation.

The good thing about MAMP is that it is easy to install, with very few configuration changes. It will not conflict with any already running Apache installation on your Mac, in case you have one. And it's easy to delete as well; just removing the Mamp folder from your Applications folder is sufficient to delete MAMP and all local websites running on it.

Once you've downloaded the package, it will be in the Downloads folder of your Mac. If you are running Mac OS X 10.8, you first need to set the correct security settings to install MAMP. You can find out which version of Mac OS X you have using the menu option in the top-left corner of your screen:

You can find the security settings menu by again going to the Apple menu and then selecting System Preferences:

In System Preferences, select the Security & Privacy icon that can be found in the first row, as seen in the following screenshot:

In here, press the padlock and enter your admin password. Next, select the Anywhere radio button in the Allow applications downloaded from: section. This is necessary because it will not be possible to run the MAMP installation you downloaded without it:

Open the image you've downloaded and simply move the Mamp folder to your Applications folder. That's all. Now that you have MAMP installed on your system, you may launch MAMP.app (located at Applications | Mamp | Mamp.app). While you're editing your MAMP settings, MAMP might prompt you for an administrator password. This is required because it needs to run two processes: httpd (Apache) and mysqld (MySQL).
Depending on the settings you set for those processes, you may or may not need to enter your password. Once you open MAMP, click on Preferences button. Next, click on Ports. The default MAMP ports are 8888 for Apache, and 8889 for MySQL. If you use this configuration, you will not be asked for your password, but you will need to include the port number in the URL when using it (http://localhost:8888). You may change this by setting the Apache port to 80, for which you'll probably have to enter your administrator password. If you have placed your Magento installation in the Shop folder, it is advised to call your Magento installation through the following URL: http://127.0.0.1:8888/shop/, instead of http://localhost:8888/shop/. The reason for this is that Magento may require dots in the URL. The last thing you need to do is visit the Apache tab, where you'll need to set a document root. This is where all of your files are going to be stored for your local web server. An example of a document root is Users | Username | Sites. To start the Apache and MySQL servers, simply click on Start Servers from the main MAMP screen. After the MAMP servers start, the MAMP start page should open in your web browser. If it doesn't, click on Open start page in the MAMP window. From there please select phpMyAdmin. In PhpMyAdmin, you can create a database and start the Magento installation procedure, just like we did when installing Magento on a Windows machine. See the Time for action – installing Magento on a Windows machine section, point 8 to continue the installation of Magento. Of course you need to put the Magento files in your Mamp folder now, instead of the Windows path mentioned in that procedure. In some cases, it is necessary to change the Read & Write permissions of your Magento folder before you can use Magento on Mac. To do that, right-click on the Magento folder, and select the Get Info option. In the bottom of the resulting screen, you will see the folder permissions. Set all of these to Read & Write, if you have trouble in running Magento. Installing Magento at a hosting service There are thousands of hosting providers with as many different hosting setups. The difficulty of explaining the installation of Magento at a commonly used hosting service is that the procedure differs from hosting provider to hosting provider, depending on the tools they use for their services. There are providers, for instance, who use Plesk, DirectAdmin, or cPanel. Although these user environments differ from each other, the basic steps always remain the same: Check the requirements of Magento (there's more information on this topic at the beginning of this article). Upload the Magento installation files using an ftp tool, for instance, Filezilla (download this free at: http://filezilla-project.org). Create a database. This step differs slightly per hosting provider, but often a tool, such as PhpMyAdmin is used. Ask your hosting provider if you're in doubt about this step. You will need: the database name, database user, password, and the name of the database server. Browse to your domain and run the Magento installation process, which is the same as we saw earlier in this article. How to choose a Magento hosting provider One important thing we didn't discuss yet during this article is selecting a hosting provider that is capable of running your online store. We already mentioned that you should not expect performance for a couple of dollars per month. 
How to choose a Magento hosting provider
One important thing we haven't discussed yet in this article is selecting a hosting provider that is capable of running your online store. We already mentioned that you should not expect great performance for a couple of dollars per month. Magento will often still run at a cheap hosting service, but the performance is frequently very poor. So, pay attention to your choices here and make sure you make the right decision. Of course, everything depends on the expectations for your online store. You do not need to aim for top performance if all you expect to do during your first few years is 10,000 dollars of revenue per year. Admittedly, that's sometimes difficult to judge; it's not always possible to create a detailed estimate of the revenue you may expect. So, let's see what you should pay attention to:
- Does the hosting provider mention Magento on its website? Maybe they even offer special Magento hosting packages? If so, you can be sure that Magento will technically run. There are even hosting providers whose speciality is Magento hosting.
- Are you serious about your future Magento store? Then ask for references! Clients already running Magento at this hosting provider can tell you more about the performance and customer support levels. Sometimes a hosting provider also offers an optimized demo store, which you can check out to see how it performs.
- Ask if the hosting provider has Magento experts working for them, and if so, how many. Especially in the case of large, high-traffic stores, it is important to be able to hire the knowledge you need.
- Do not forget to check online forums and do some research on the provider. Bear in mind, though, that you will find negative customer experiences for almost every hosting provider.
- Are you just searching for a hosting provider to play around with Magento? In that case, any cheap hosting provider would do, although your Magento store could be very slow. Take, for instance, Hostgator (http://hostgator.com), which offers small hosting plans for a couple of U.S. dollars per month. Many hosts also offer a free trial period, which you can use to test the performance.

Installatron
Can't this all be done a bit more easily? Yes, it can. If your host offers a service named Installatron, and it includes Magento, your installation process becomes a lot easier; we could almost call it a "one-click" installation procedure. Check whether your hosting provider offers the latest Magento version; this may not always be the case! Of course, you can ask your (future) hosting provider whether they offer Installatron with their hosting packages. The example shown here is from Simple Helix (http://simplehelix.com), a well-known provider specializing in Magento hosting solutions.

Time for action – installing Magento using Installatron
The following short procedure shows the steps you need to take to install Magento using Installatron:
1. First, locate the Installatron Applications Installer icon in the administration panel of your hosting provider. Normally this is very easy to find, right after logging in.
2. Next, within Installatron Applications Installer, click on the Applications Browser option.
3. Inside Applications Browser, you'll see a list of CMS solutions and webshop software that you can install. Generally, Magento can be found in the e-Commerce and Business group.
4. Click on Magento, and after that click on the Install this application button. The next screen is the setup wizard for installing Magento. It lists a bunch of default settings, such as the admin username and database settings. We recommend changing as little as possible for your first installation. You should pick the right location to install to, though!
In our example, we will choose the test directory on www.boostingecommerce.com.

Note that for this installation, we've chosen to install the Magento sample data, which will help us explore the Magento software. That's fine if you're installing for learning purposes, but for a store that is meant to be your live shop, it's better to start off completely empty.

In the second part of the installation form, there are a few fields that you have to pay attention to:
- Switch off automatic updates
- Set database management to automatic
- Choose a secure administrator password

Click on the Install button when you are done reviewing the form. Installatron will now begin installing Magento. You will receive an e-mail when Installatron is ready; it contains the URL of the site you just installed and the login credentials for your brand-new Magento shop. That's all! Our freshly installed test environment is available at http://www.boostingecommerce.com/test. If all is well, yours should look similar to the following screenshot:

How to test the minimum requirements
If your host isn't offering Installatron and you would like to install Magento there, how will you know whether it's possible? In other words, will Magento run? You could simply try to install and run Magento, but it's better to check the minimum requirements before going that route. You can use the following method to test whether your hosting provider meets all the requirements needed to run Magento. First, create a text file using your favorite editor and name it phpinfo.php. The contents of the file should be:

<?php phpinfo(); ?>

Save and upload this file to the root folder of your hosting environment, using an FTP tool such as Filezilla. Next, open your browser at this address: http://yourdomain.com/phpinfo.php (using your own domain name, of course). You will see a screen similar to the following:

Note that in the preceding screenshot, our XAMPP installation is using PHP 5.4.7, and as we mentioned earlier, Magento isn't compatible with this PHP version yet. So what about that? Well, XAMPP simply ships with a recent stable release of PHP. Although it is officially not supported, in most cases your Magento test environment will run fine on it. Something similar to the previous screenshot will be shown, depending on your PHP (and XAMPP) version. Using this result, we can check for any PHP module that is missing. Just go through the list at the beginning of this article and verify that everything that is needed is available and enabled.
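Rather than scanning the phpinfo() output by eye, you can also script the check. The sketch below tests a typical set of PHP extensions that Magento relies on; treat the exact list as an assumption and verify it against the requirements mentioned at the beginning of this article:

<?php
// Assumed requirements for illustration; check them against the
// official Magento list before relying on this script.
$requiredExtensions = array('curl', 'dom', 'gd', 'hash',
    'iconv', 'mcrypt', 'pdo_mysql', 'simplexml');

foreach ($requiredExtensions as $extension) {
    echo $extension . ': '
        . (extension_loaded($extension) ? 'OK' : 'MISSING') . "\n";
}

// Magento 1.x targets PHP 5.2/5.3; newer versions may work,
// but they are not officially supported.
echo 'PHP version: ' . PHP_VERSION . "\n";
if (version_compare(PHP_VERSION, '5.2.13', '<')) {
    echo "This PHP version is too old for Magento.\n";
}
?>

Upload it next to phpinfo.php and open it in your browser; any line marked MISSING points to an extension your host still needs to enable.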
What is SSL and do I need it?
SSL (Secure Sockets Layer) is the standard for secure transactions on the web. You'll recognize it by websites running on https:// instead of http://. To use it, you need to buy an SSL certificate and add it to your hosting environment. Some hosting providers offer this as a service, whereas others just point to third parties offering SSL certificates, such as RapidSSL (https://www.rapidssl.com) or VeriSign (http://www.verisign.com), currently owned by Symantec. We'll not offer a complete set of instructions on using SSL here, as that is beyond the scope of this article. However, it is good to know when you'll need to pay attention to SSL. There can be two reasons to use an SSL certificate:
- You are accepting payments directly on your website and may even be storing credit card information. In such a case, make sure that you are securing your store by using SSL. On the other hand, if you are only using third parties to accept payments, for example, Google Checkout or PayPal, you do not have to worry about this part. The transaction is done at the (secure part of the) website of your payment service provider, and in that case you do not need to offer SSL.
- There's another reason that makes using SSL interesting for all shop owners: trust. Regular shoppers know that https:// connections are secure and might feel just a bit more comfortable closing the sale with you. It might seem like a little thing, but getting a new customer to trust you is an essential step of the online purchase process.

Summary
In this article we've gone through several different ways to install Magento. We looked at doing it locally on your own machine using XAMPP or MAMP, or by using a hosting provider to bring your store online. When working with a hosting provider, using the Installatron tool makes the Magento installation very easy.

Resources for Article:
Further resources on this subject: Magento: Exploring Themes [Article], Getting Started with Magento Development [Article], Integrating Facebook with Magento [Article]

Creating a new forum

Packt
19 Aug 2013
6 min read
(For more resources related to this topic, see here.)

In the WordPress Administration, click on New Forum, which is a subpage of the Forums menu item on the sidebar. You will be taken to a screen that is quite similar to the WordPress post creation page, but slightly different, with a few extra areas. If you are not familiar with the WordPress post creation page, the following is a list of the page's features.

The Enter Title Here box
The long box at the top of the page is your forum title. On the forum page, this is what visitors will click on, and it also provides the basis for the forum's URL Slug, with some changes, as URL Slugs generally have to consist of letters, numbers, and dashes. So, for example, if your forum title is My Product's Support Section, your Slug will probably be my-products-support-section. When you insert the forum title, the URL Slug will be generated below it. If you wish to change it, click on the yellow highlighted section, edit the Slug, and then click on OK.
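Behind the scenes, this is standard WordPress behavior: the title is passed through WordPress' sanitize_title() function to produce the slug. A minimal sketch, which must run inside a WordPress environment where that function is loaded (for example, in a small test plugin):

<?php
// sanitize_title() lowercases the title, strips characters such as
// apostrophes, and replaces spaces with dashes.
$title = "My Product's Support Section";
$slug  = sanitize_title($title);
echo $slug; // prints: my-products-support-section
?>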
The Post box
Beneath the title box is the post box. This should contain your forum description, which will be shown beneath your forum's name on the forum index page. You can add rich text to this, such as bold or italicized text, but my advice is to keep it short. One or two lines of text will suffice; otherwise, it could make your forum look peculiar.

Forum attributes
Towards the right-hand side of the screen, you should see a Forum Attributes section. bbPress allows you to set a number of different attributes for your forum. The attributes are explained in detail as follows:
Forum type: Your forum can be one of two types: "Forum" or "Category". A Category is a section of the site where you cannot post directly, but into which forums are grouped. For example, if you have forums for "Football", "Cricket", and "Athletics", you may group them into a "Sport" category. Unless you have a large forum with a number of different areas, you shouldn't need many categories. Normally you would begin with a few forums and then introduce categories as your forums grow. If you create a category, any forum you create must be a subforum of the category. We will talk about creating subforums later in this article.
Status: Your forum's status indicates whether other users can post in the forum. If the status is "Open", any user can post in the forum. If the forum is "Closed", nobody can contribute other than Keymasters. Unless one of your forums is a "Forum Rules" forum, you would probably keep all forums Open.
Visibility: bbPress allows three types of forum visibility. These, as the names suggest, decide who gets to see the forums. The three options are as follows:
Public: This type allows anybody visiting the site to see the forum and its contents.
Private: This type allows users who are logged in to view and contribute to the forum, but the forum is hidden from users who are not logged in or who are blocked. Private forums are prefixed with the word "Private".
Hidden: This type allows only Moderators and Keymasters to view the forum.
Most sites will probably have the majority of their forums set to Public, with a selection that is Private or Hidden. Having a Hidden forum to discuss forum matters with Administrators or Moderators is usually a good idea, and a Private forum can help encourage people to register on the site.
Parent: You can have subforums of forums. By giving a parent to a forum, you make it a subforum. An example of this would be a "Travel" forum with subforums dedicated to "Europe", "Australia", and "Asia". Again, you will probably start with just a few forums and grow your forum over time to include subforums.
Order: The Order field helps define the order in which your forums are listed. By default, or if unspecified, the order is always alphabetical. However, if you give a number, the order of the forums will be determined by the Order number, from smallest to largest. It is good to put important forums at the top and less important forums towards the bottom of the page. It's a good idea to number your orders in multiples of 10, rather than 1, 2, 3, and so on. That way, if you want to add a forum that should sit between two other forums, you can give it a number between the two multiples of 10, thus saving time.

Now that you have set up a forum, click on Publish, and congratulations, you should have a forum!

Editing and deleting forums
Forums are a community, and like all good communities, they evolve over time depending on their users' needs. As such, over time, you may need to restructure or delete forums. Luckily, this is easily done. First, click on Forums in the sidebar of the WordPress Administration. You should see a list of all the current forums you have on your site.

If you hover over a forum, two options will appear. The first is Edit, which allows you to edit the forum; a screen similar to the New Forum page will appear, where you can make changes to your forum. The second option is Trash, which will move your forum into Trash; after a while, it will be deleted from your site. When you click on Trash, you will trash everything associated with your forum (any topics, replies, or tags will be deleted). Be careful!

Summary
Right now, you should have a bustling forum, ably overseen by yourself and maybe even a couple of Moderators. Remember that all I have described so far has been how to use bbPress to manage your forum, and not how to manage your forum. Each forum will have its own rules and guidelines, and you will eventually learn how to manage your bbPress forum as more and more members join in. A general rule of thumb, though, is to set out your rules at the start of your forum, welcome change, act quickly on violations, and most importantly, treat your users with respect; without users, you will have a very quiet forum. Finally, bbPress is a WordPress plugin, and is itself extensible: it can take advantage of plugins and themes, both those specifically designed for bbPress and those that work with WordPress.

Resources for Article:
Further resources on this subject: Getting Started with WordPress 3 [Article], How to Create an Image Gallery in WordPress 3 [Article], Integrating phpList 2 with WordPress [Article]


Creating Courses in Blackboard Learn

Packt
14 Aug 2013
10 min read
(For more resources related to this topic, see here.)

Courses in Blackboard Learn
The basic structure of any learning management system relies on the course, or course shell. A course shell holds all the information and communication that goes on within our course, and is the central location for all activities between students and instructors. Let's think about our course shell as a virtual house or apartment. A house or apartment is made up of different rooms where we put the things we use in our everyday life. These rooms, such as the living room, kitchen, or bedrooms, can be compared to content areas within our course shell. Within each of these content areas there are items, such as telephones, dishwashers, computers, or televisions, that we use to interact, communicate, or complete tasks. In the course shell, these items are called course tools. While as administrators we won't take a deep dive into all of these tools, we should know that they are available and that instructors use them within their courses. Blackboard Learn offers many different ways to create courses, but to simplify our discussion, we will classify those ways into two categories: basic and advanced. This article will discuss the course creation options that we classify as basic.

Course names and course IDs
When we get ready to create a course in Blackboard Learn, the system requires a few items: a course name and a course ID. The first should be self-explanatory. If you are teaching a course on "Underwater Basket Weaving" (a hobby I highly recommend), you would simply enter this as the course name. The course ID is a bit trickier. Think of it like the barcode on your favorite cereal: that barcode is unique and tells the checkout scanner which item you have purchased. The course ID has a similar function in Blackboard Learn. It must be unique, so if you plan to have multiple courses on "Underwater Basket Weaving", you will need a way to express the differences between them in each course ID.

We just mentioned that each course ID in Blackboard has to be unique. As administrators, we will find that most Blackboard Learn instances we deal with contain numerous course shells, and keeping them organized can become difficult. We should therefore consider creating a course ID naming convention, if one isn't already in place. Our discussion won't tell you which naming convention will be best for your organization, but here are some helpful tips to start with; a sketch of one possible convention follows shortly:
- Use a symbol to separate words, acronyms, and numbers from one another. Some admins use an underscore, period, or dash. However, whitespace, percent, ampersand, less-than, greater-than, equals, or plus characters are not accepted within course IDs.
- If you plan to collect reporting data from your instance, make sure to include the term or session and the department in the course ID.
- Collect input from the people and teams within your organization who will enroll and support users. Their feedback on a course naming convention will help make it successful.
Many organizations use a student information system (SIS), which manages the enrollment process.
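Picking up the naming-convention tips above, here is one possible format as an illustration (our own, not a Blackboard standard): a small PHP helper that builds an ID of the form term-department-number-section and rejects the characters Blackboard does not accept:

<?php
// Illustrative only: builds a course ID such as "2013FA-ENG-101-001"
// and checks it against the characters Blackboard rejects.
function buildCourseId($term, $department, $number, $section)
{
    $courseId = strtoupper("$term-$department-$number-$section");

    // Whitespace, %, &, <, >, =, and + are not accepted in course IDs.
    if (preg_match('/[\s%&<>=+]/', $courseId)) {
        throw new InvalidArgumentException("Invalid course ID: $courseId");
    }
    return $courseId;
}

echo buildCourseId('2013FA', 'ENG', '101', '001'); // 2013FA-ENG-101-001
?>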
Default course properties
The first item in our Course Settings area allows us to set up several of the default access options for our courses. The Default Course Properties page covers when, and to whom, a course is available by default.
Available by Default: This option gives us the ability to make a course available to enrolled students as soon as it is created. Most administrators will have this set to No, since the instructor may not want to give access to the course immediately.
Allow Guests by Default and Allow Observers by Default: The next options allow us to set guest and observer access to created courses by default. Most administrators normally set these to No, because guest access and the observer role aren't used by their organizations.
Default Enrollment Options: We can set the default enrollment options to either allow the instructor or system administrator to enroll students, or allow students to self-enroll. If we choose the former, we can give students the ability to e-mail the instructor to request access. If we set Self Enrollment, we can set the dates when this option is available, and even set a default access code for students to use when they self-enroll. Given these two options, most administrators would suggest setting the default course enrollment option to instructors or system administrators, which still allows instructors to enable self-enrollment within their own courses.
Default Duration: The Continuous option allows the course to run continuously, with no start or end date set. Select Dates sets specific start and end dates for all courses. The last option, Days from the Date of Enrollment, sets courses to run for a specific number of days after the student was enrolled in our Blackboard Learn environment. This is helpful if a student self-enrolls in a self-paced course with a set number of days to complete it.

Pitfalls of setting start and end dates
When using the Start and End dates to control course duration, we may find that all users enrolled within the course lose access once the end date passes.
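To make the Days from the Date of Enrollment option concrete, here is the underlying date arithmetic, sketched in PHP. The 30-day duration and the enrollment date are assumed example values, not Blackboard defaults:

<?php
// A student enrolling on the given date in a course with a 30-day
// duration keeps access until the computed end date.
$durationInDays = 30; // assumed example value
$enrolled = new DateTime('2013-08-14');
$expires  = clone $enrolled;
$expires->modify("+{$durationInDays} days");

echo 'Access ends on ' . $expires->format('Y-m-d'); // Access ends on 2013-09-13
?>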
Course themes and icons
If we are using the Blackboard 2012 theme, we have the ability to enable course themes within our Blackboard instance. These themes are created by Blackboard and can be applied to an instructor's course by clicking on the theme icon, seen in the following screenshot, in the upper-right corner of the content area while in a course. They come in a wide variety, but currently administrators cannot create custom course themes.

We can also select which icon set courses will use by default in our Blackboard instance. These icon sets are created by Blackboard, and the icons appear beside different content items and tools within a course. In the following screenshot, we can see some of the icons that make up one of the sets. Unlike the course themes, an icon set is enforced across the entire instance.

Course Tools
The Course Tools area offers us the ability to set which tools and content items are available within courses by default. We can also control these settings, along with organization and system tools, by clicking on the Tools link under the Tools and Utilities module. Let's review which tools are available and how to enable and disable them within our courses. The options we use to set course tools are exactly the same as those used in the Tools area we just mentioned; use the information provided here to set tool availability on either page. Let's take a more detailed look at the default availability settings. There are four options for each tool, and every tool has the same options:
Default On: A course automatically has this tool available to users, but an instructor or leader can disable the tool within it.
Default Off: Users in a course will not have access to this tool by default, but the instructor or leader can enable it.
Always On: Instructors or leaders are unable to turn this tool off in their course or organization.
Always Off: Users do not see this tool in a course or organization, nor can the instructor or leader turn it on within the course.
Once we make our changes, we must click on the Submit button.

Quick Setup Guide
The Quick Setup Guide page was introduced in Blackboard 9.1 Service Pack 8. As seen in the following screenshot, it offers instructors a basic introduction to the course if they have never used Blackboard before. Most of the links point to content from the On Demand area of the Blackboard website. As administrators, we can prevent this guide from appearing when an instructor enters the course. If we leave the guide enabled, we can add custom text to it, which can help educate instructors about the changes, help, and support available from our organization.

Custom images
We can further customize the default look and feel of our course shells with images on the course entry point and at the top of the menu. We might use these images to highlight that our organization has been honored with an award. Here is an example of how these images would look. Two images can be located at the bottom of the course entry page, which is the page we see after entering a course. Another image can be located at the top of the course menu. This area also allows us to link these images to a website. Here's an example.

Default course size limits
In this area, we can also set default size limits for courses and for course export and archive packages. Course size limits allow administrators to control storage space, which may be limited in some instances. When a course is within 10 percent of its size limit, the administrator and instructor receive an e-mail notification. This notification is triggered by the disk usage task, which runs once a day. After getting the notification, the instructor can remove content from the course, or the administrator can increase the course quota for that specific course.
Maximum Course disk size: This option sets the amount of disk space a course shell can use for storage. This includes all course and student files within the course shell.
Maximum Course Package Size: This sets the maximum amount of content from the Course Files area that is included in a course copy, export, or archive.

Grade Center settings
This area allows us to set default controls over the Grade History portion of the Grade Center. Grade history is exactly what it says: it keeps a history of the changes made within the Grade Center. Most administrators recommend having grade history enabled by default, because of the audit trail it provides. There may be a discussion within your organization about whether to permit instructors to disable this feature within their courses, or to clear the history altogether.

Course menu and structures
The course menu offers the main navigation for any course user. Our organization can create a default course menu layout for all new course shells, based on input from instructional designers and pedagogical experts. As seen in the following screenshot, we simply edit the default menu that appears on this page. As administrators, we should pay close attention when creating a default course menu.
Any additions or removals to the default menu are saved automatically, without clicking on the Submit or Cancel buttons, and are applied to any courses created from that point forward. Blackboard recently introduced course structures. If enabled, these pre-built course menus are available to instructors within their course's control panel. The course structures cover a number of different course instruction scenarios. An example of the course structure selection interface is shown in the following screenshot: