
How-To Tutorials - Mobile

213 Articles

Getting Started with PlayStation Mobile

Packt
26 Apr 2013
7 min read
(For more resources related to this topic, see here.)

The PlayStation Mobile (PSM) SDK represents an exciting opportunity for game developers of all stripes, from hobbyists to indie and professional developers. It contains everything you need to quickly develop a game using the C# programming language. Perhaps more importantly, it provides a market for those games. If you are currently using XNA, you will feel right at home with the PSM SDK.

You may be wondering at this point, why develop for PlayStation Mobile at all? Obviously, the easiest answer is, so you can develop for PlayStation Vita, which in itself will be enough for many people. Perhaps, though, the most important reason is that it represents a group of dedicated gamers hungry for games. While there is a wealth of games available for Android, finding them in the store is a mess, while supporting the literally thousands of devices is a nightmare. With PlayStation Mobile, you have a common development environment, targeting powerful devices with a dedicated store catering to gamers.

We are now going to jump right in and get those tools up and running. Of course, we will also write some code and show how easy it is to get it running on your device. PlayStation Mobile allows you to target a number of different devices, and we will cover the three major targets (the Simulator, PlayStation Vita, and Android). You do not need to have a device to follow along, although certain functionality will not be available on the Simulator.

One thing to keep in mind with the PlayStation Mobile SDK is that it is essentially two SDKs in one. There is a much lower-level set of libraries for accessing graphics, audio, and input, as well as a higher-level layer built on top of it, mostly with the complete source available. Of course, underneath it all there is the .NET framework. In this article, we are going to deal with the lower-level graphics interface.
If the code seems initially quite long or daunting for what seems like a simple task, don't worry! There is a much easier way that we will cover later in the book.

Accessing the PlayStation Mobile portal

This recipe looks at creating a PSM portal account, which is mandatory if you want to download and use the PSM SDK.

Getting ready

You need a Sony Entertainment Network (SEN) account to register with the PSM portal. This is the standard account you use to bring your PlayStation device online, so you may already have one. If not, create one at http://bit.ly/Yiglfk before continuing.

How to do it...

Open a web browser and log in to http://psm.playstation.net. Locate and click on the Register button. Sign in using the SEN account. Agree to the Terms and Conditions; you need to scroll to the bottom of the text before the Agree button is enabled. But you always read the fine print anyway... don't you? Finally, select the e-mail address and language you want for the PlayStation Mobile portal. You can use the same e-mail you used for your SEN account. Click on Register. An e-mail will be sent to the account you used to sign up; locate the activation link and either click on it, or copy and paste it into a browser window. Your account is now complete, and you can log in to the PSM developer portal.

How it works...

A PlayStation Mobile account is mandatory to download the PSM tools. Many of the links to the portal require you to be logged in before they will work. It is very important that you create and activate your account and log in to the portal before continuing with the book! All future recipes assume you are logged in to the portal.

Installing the PlayStation Mobile SDK

This recipe demonstrates how to install the PlayStation Mobile SDK.

Getting ready

First you need to download the PlayStation Mobile SDK; you can download it from http://bit.ly/W8rhhx.

How to do it...
Locate the installation file you downloaded earlier and double-click to launch the installer. Say yes to any security-related questions. Take the default settings when prompted, making sure to install the runtimes and GTK# libraries. The installer for the Vita drivers will now launch; there is no harm in installing them even if you do not have a Vita. Installation is now complete, and a browser window with the current release notes will open.

How it works...

The SDK is now installed on your machine. Assuming you used the default directories, the SDK will be installed to C:\Program Files (x86)\SCE\PSM if you are running 64-bit Windows, or to C:\Program Files\SCE\PSM if you are running 32-bit Windows. Additionally, all of the documentation and samples have been installed under the Public account, located in C:\Users\Public\Documents\PSM.

There's more...

There are a number of samples available in the samples directory and you should certainly take a moment to check them out. They range in complexity from simple Hello World applications up to a full-blown third-person 3D role-playing game (RPG). They are, however, often documented in Japanese and often rely on other samples, making learning from them a frustrating experience at times, at least for those of us who do not understand Japanese!

Creating a simple game loop

We are now going to create our first PSM SDK application, which is the main loop of your application. Actually, all the code in this sample is going to be generated by PSM Studio for us.

Getting ready

From the start menu, locate and launch PSM Studio in the PlayStation Mobile folder.

How to do it...

In PSM Studio, select the File | New | Solution... menu. In the resulting dialog box, in the left-hand panel expand C# and select PlayStation Suite, then in the right-hand panel, select PlayStation Suite Application. Fill in the Name field, which will automatically populate the Solution name field. Click on OK.
Your workspace and boilerplate code will now be created; hit the F5 key or select the Run | Start Debugging menu to run your code in the Simulator. Not much to look at, but it's your first running PlayStation Mobile application! Now let's take a quick look at the code it generated:

using System;
using System.Collections.Generic;

using Sce.PlayStation.Core;
using Sce.PlayStation.Core.Environment;
using Sce.PlayStation.Core.Graphics;
using Sce.PlayStation.Core.Input;

namespace Ch1_Example1
{
    public class AppMain
    {
        private static GraphicsContext graphics;

        public static void Main (string[] args)
        {
            Initialize ();

            while (true) {
                SystemEvents.CheckEvents ();
                Update ();
                Render ();
            }
        }

        public static void Initialize ()
        {
            graphics = new GraphicsContext ();
        }

        public static void Update ()
        {
            var gamePadData = GamePad.GetData (0);
        }

        public static void Render ()
        {
            graphics.SetClearColor (0.0f, 0.0f, 0.0f, 0.0f);
            graphics.Clear ();
            graphics.SwapBuffers ();
        }
    }
}

How it works...

This recipe shows us the very basic skeleton of an application. Essentially it loops forever, displaying a black screen.

private static GraphicsContext graphics;

The GraphicsContext variable represents the underlying OpenGL context. It is used to perform almost every graphics-related action. Additionally, it contains the capabilities (resolution, pixel depth, and so on) of the underlying graphics device.

All C#-based applications have a main function, and this one is no exception. Within Main() we call our Initialize() method, then loop forever, checking for events, updating, and finally rendering the frame. The Initialize() method simply creates a new GraphicsContext variable. The Update() method polls the first gamepad for updates. Finally, Render() uses our GraphicsContext variable to first set the clear color to black using an RGBA color value, then clear the screen and swap the buffers, making the frame visible. Graphics operations in the PSM SDK are generally drawn to a back buffer.

There's more...
The same process is used to create PlayStation Suite library projects, which will generate a DLL file. You can use almost any C# library that doesn't rely on native code (P/Invoke or unsafe code); however, it needs to be recompiled into a PSM-compatible DLL format.

Color in the PSM SDK is normally represented as an RGBA value. The RGBA acronym stands for red, green, blue, and alpha. Each is an int value ranging from 0 to 255, representing the strength of each primary color. Alpha represents the level of transparency, with 0 being completely transparent and 255 being fully opaque. (Note that SetClearColor in the generated code above takes the equivalent values as floats, ranging from 0.0f to 1.0f.)
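Since colors are described here as 0-255 integer components while the generated Render() code passes floats, the mapping between the two representations is a simple division by 255. A quick sketch (shown in JavaScript for consistency with the later articles in this collection; the PSM API itself is C#):

```javascript
// Converting an RGBA color from 0-255 integer components to the
// 0.0-1.0 floats that a call like SetClearColor expects.
function toFloatColor(r, g, b, a) {
    return [r / 255, g / 255, b / 255, a / 255];
}

toFloatColor(255, 128, 0, 255);
// → [1, 0.5019607843137255, 0, 1] – an opaque orange
```

Going the other way is the same multiplication, rounded back to the nearest integer.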


So, what is ForgedUI?

Packt
17 Apr 2013
2 min read
(For more resources related to this topic, see here.)

ForgedUI is a third-party WYSIWYG (what you see is what you get) module for Titanium Studio, created with the aim of making cross-platform app development quicker and easier by providing a visual environment and a drag-and-drop style interface. Even though Titanium generates apps for multiple platforms from a single codebase and facilitates the maintenance and management of mobile projects, it still lacks a design interface tool. For this reason, ForgedUI comes to the market aiming to recover the productivity that this gap has been holding back on the Titanium SDK. With a shallow learning curve and a straightforward interface, Titanium developers are able to reduce app development time by visually designing their apps, leaving more time to concentrate on other aspects of the project.

It doesn't matter whether you choose to go with iOS or Android; ForgedUI will give you a hand with screen design and, alongside Titanium Studio, generate the cross-platform code with one click. The ForgedUI interface offers common UI elements of the Android and iOS platforms and allows you to design with a simple drag-and-drop process instead of hand-written code. This is how ForgedUI looks within Titanium Studio:

Once you are happy with your UI design, you can generate the Titanium JavaScript code through ForgedUI with just one click, and this code can be integrated into new or old projects. The generated code is stored in a separate resource file within the current Titanium project. ForgedUI allows you to specify parent-child relationships between compatible components and intelligently generates code that conforms to the required layout and relationship rules.
The key features of ForgedUI are as follows:

  • Visual graphical user interface (GUI) designer
  • Supports iPhone and Android (no tablets) platform projects
  • One-click code generation
  • Seamless integration with Titanium Studio

Summary

This article has explained what ForgedUI is and how it makes cross-platform app development easier.

Resources for Article:

Further resources on this subject: Animating Properties and Tweening Pages in Android 3-0 [Article] Securely Encrypt Removable Media with Ubuntu [Article] Appcelerator Titanium: Creating Animations, Transformations, and Understanding Drag-and-drop [Article]


Titanium Best Practices

Packt
15 Mar 2013
13 min read
(For more resources related to this topic, see here.)

CommonJS

CommonJS is a set of specifications whose purpose is to give a common guide to building JavaScript frameworks. It is not a framework in its own right. Appcelerator's implementation of these standards in the Titanium framework brings it in line with other frameworks such as NodeJS, and enables us as developers to use the same practices in more than one framework. The implications of this cannot be overstated: developers can now switch between JavaScript frameworks that have implemented the CommonJS model without a massive learning curve on the framework itself.

CommonJS works by using an initial bootstrap file, which in Titanium is app.js. You then abstract your code into separate modules that are required into other modules when needed. Within Titanium, you can have native modules that enhance and expand the Titanium framework. These should not be confused with a CommonJS module, which is part of the main application.

Code formatting

How often have you needed to modify code that was written by other people, or has been updated, modified, rewritten, or just generally messed around with? You find different people using different coding styles, braces on different lines, and equals signs at different positions; the list is endless. You then have to work your way through this code, getting more and more annoyed. Titanium Studio has a solution: an automatic code formatter, which you can set up as you require. To get to the configuration settings, go through the main menu to Preferences | Titanium Studio | Formatter, select JavaScript, and click on the edit icon. You will see the Preferences panel, and you can configure it as required. If you wish to share these configurations across your development team, you can export and import them as required. To format a file's code, open the file, go to the main menu, and select Source | Format.
Alternatively, if you are on a Mac, the relevant shortcut is Shift + Command + F. If you have selected a section of code, then that selection will be formatted; otherwise the whole file will be formatted. A gotcha with the formatter: if there are too many JavaScript errors, it won't format the code.

Code validation

As developers, we spend our days hunting down spurious code anomalies. It could be a missing comma, a semicolon, or an event listener that wasn't added while looping over an array of objects. Anything that can make this task easier and show potential issues while we are writing code is a good thing. JSLint is one of those tools, as it highlights potential issues as you code. It has been described as your worst nightmare but also your best friend. When you first start using it, it may drive you mad with some of the seemingly meaningless warnings, but sort them out and be persistent; it will improve your codebase in the long term. Titanium Studio has JSLint built in but switched off by default. To enable it, go to Preferences | Titanium Studio | Validation, select JavaScript, and switch on JSLint Validator. As you can see, there are other validators on the list, which also help give you a hit list of potential issues. It is worth spending a little time making sure you understand what these are and how they work.

Comment meaningfully

Adding comments to your code should not be seen as a chore; they can be as important as the code itself. A well-commented codebase can and will save hours in the future as you or a colleague go to update and maintain it. Always put a comment block at the start of source files, explain what the file does, and include a maintenance log, where the date, time, developer's name, and a brief description of the changes can be maintained. If you have a complex function, put a comment block before it explaining what it does. Also place inline comments where needed to explain certain pieces of code.

/*
 * A CommonJS Module.
 *
 * This module does something …
 *
 * Author : Name
 * Date   : Today
 *
 * Maintenance Log
 *
 * Date    : Author :
 * Changes :
 */

Do not add comments for the sake of adding them. Make them meaningful, relevant, and useful. Too many comments can confuse the code structure.

Do not pollute the global object

Within an application you can define various application objects. Declare and use these with caution, as they use up resources and can easily cause clashes with other private and local objects. Application-level variables are not really required and should be avoided. If you require a small piece of data to move around the application, consider using persistent data or passing it to the required modules. Application-level event handlers are required for various tasks including, amongst others, background services and geolocation. If you do use them, always remove them when they are no longer needed. To control the flow of an application, you may need to set up a global listener, but you only need one, with a common function to control the flow. In CommonJS there is no global scope; declaring a variable within a module makes it private to that module. Declaring variables in app.js as Ti.App.varName = [] does make them global, but this is highly discouraged.

JavaScript instance

A JavaScript instance is where a session of the interpreter is invoked. In Titanium it is possible to create multiple instances within a single application. This is done by creating a window object with a url property to load the content:

var win = Ti.UI.createWindow({
    url: '/app/ui/newWindow.js'
});

Don't do this unless you have a very, very specific requirement. To benefit from all the advantages CommonJS provides, always work in a single JavaScript instance. The consequences of multiple instances include no shared scope across them, additional memory and resource usage, and a high risk of memory leaks.
CommonJS modules

With the adoption of the CommonJS specification by Appcelerator into the Titanium framework, you should only use CommonJS modules, which are also referred to as factories. This provides many advantages: separation of code into specific modules, a more structured codebase, separate object scope, code maintainability, and much more. By using this method it becomes very difficult to pollute the global scope, as each module has its own object scope. Understanding this is key to the CommonJS method. Each module or factory contains functions and variables that can be exported; unless they are exported, they are private to that module. This enables variable or function names to be the same in different modules without a scoping clash. By exporting only what is required at the end of the module, the module stays self-contained.

/* A typical module format – myModule.js */
var object1 = "1234";
var object2 = "5678";

/* My module's function */
function myModule(args) {
    // … do something nice
    return mainObject;
}

exports.object1 = object1;
exports.myModule = myModule;

When a CommonJS module is required by another module, it is loaded into memory. If you then require the same module elsewhere, it isn't reloaded into memory; it is just made available to the new calling module. This means that if the first requiring module sets a value in the called module, that value is still set when the called module is required by another module. It doesn't load a new instance of the module and reinitialize all the values.

A few rules about modules:

  • Only load them when needed
  • Only export what is required by the calling module
  • Use prototype where appropriate
  • Avoid recursive requires

Working with CommonJS recursive requires can cause major issues and, basically, you can't do it. A recursive require is where module A requires module B and module B then requires module A.
You will quickly notice if you try this that it leads to a loop trying to process the continual chain of requires, which finally dies with a nondescript error message.

CommonJS best practices

CommonJS is one of the best things to have happened to the Titanium framework. To get the most out of the framework and the enhanced performance, these are a few things that should be considered:

  • Be modular
  • Be private
  • Return an object
  • Protect the global scope
  • Control file loading

One of the main advantages of CommonJS is the way it lends itself to creating well-structured, separated code. By being modular you create specific modules for separate sections of the code, that is, a separate module for each window. This methodology facilitates the creation of common modules, enabling a constructive codebase that is easy to understand and maintain. A good example of a common module would be one that contains the geolocation code and is then used across the whole application. Common modules enable the code to be extracted down, but don't go too far. It is tempting to extract code out into its own module when it actually belongs in the module it is in. Do not be tempted to take code modularization to the extreme; having a module for each function is not necessary or productive. Remember that modules are loaded into memory once they have been required, and they remain there.

Making functions and variables private to a module maintains the module's integrity. Do not export all the functions in a module unless they are actually called from the requiring module. Export the required functions and variables at the end of the module, not by default. The following code example shows the two methods for exporting functions:

// This exports the function inline
exports.outFunc = function () {
    // .. code ..
    return object;
}

// This only exports the function when and if required.
function outFunc() {
    // .. code ..
    return object;
}
exports.outFunc = outFunc;

By defining the module functions in the second method, they become local to that module. This means that they can be used directly by any other function within the module.

As you separate your code into modules, separate your modules into functions. Having one exported parent function that returns the main object after processing through other functions is a good practice. It is quite easy to declare module variables; you just declare them outside of any function. This gives them a scope that is global to the module. It is a very good way of maintaining a persistent state across the calling modules, but use them sparingly as they use up more resources.

A JavaScript function without an explicit return hands undefined back to the calling function. This may not always be what is required, especially in the CommonJS model. Always return what you require from a function, even if that is nothing.

// New window function explicitly returning nothing.
function newWin() {
    // Do something
    return;
}
exports.newWin = newWin;

For an exported function, always return the function's main object. Hence, creating a new window in a function should return that window object, as the following code example shows:

// New window function returning the parent object.
function newWin() {
    var win = Ti.UI.createWindow({
        // .. parameters ..
    });
    return win;
}
exports.newWin = newWin;

The global scope should be considered as nonexistent, but you can still add application-level variables. If you have to use these, only declare them in the app.js file; do not declare them in any of the modules. At times you will have to create an application event listener in a module. Only do this when you have to, and always, always remove the listener after you have finished with it. The following code example shows an event listener added with an in-built function. This is not good practice, as you cannot remove it later.
The only way it will be removed is when the parent object is destroyed, which for application-level listeners is when the application is stopped, not put into the background.

function adList() {
    var mainObj = Ti.UI.createImageView();
    mainObj.addEventListener('click', function(e) {
        // .. do something here ..
    });
    return mainObj;
}

Instead of defining listeners with an in-built function, always declare them with a named function, as the following example shows. They can then easily be removed, as you pass the identical function to both addEventListener and removeEventListener.

var mainObj = null;

function eveList(e) {
    // .. do something here ..
    mainObj.removeEventListener('click', eveList);
}

function adList() {
    mainObj = Ti.UI.createImageView();
    mainObj.addEventListener('click', eveList);
    return mainObj;
}

Managing memory

We have explored some of the practices that will help in managing the application's memory. These practices range from controlling when a module's file is loaded to not using application-level events or variables. When you open a window, it is added to the window stack, and every time you open that window it is added to the stack again. If you have a navigation system that enables you to move through the windows in any order, it is likely that you will end up with a large stack. Close windows when they are not in use.

var win = Ti.UI.createWindow();
var winBut = Ti.UI.createButton({
    title : 'Press'
});

function loadWin2() {
    win.close();
    win = null;
    win2.open(); // win2 is created elsewhere
}

winBut.addEventListener('click', loadWin2);

// opening a window
win.add(winBut);
win.open();

Some additional memory-intensive APIs in Titanium are web views, large tables, and events. It is not a good idea to have more than one web view instance running in the application at a time.
If you do not close the window containing the web view instance when you move to another one, at least remove the web view instance from the previous window and reload it when focus returns. The same applies to tables and map views. When you remove an object such as a web view from a window, it may not always release the memory; even closing a window does not always release the memory. In all cases, null is your friend. The final code example shows a window with a web view, an event listener, and how to clean up when the window is closed:

// clean up example module
function createWin() {
    var win = Ti.UI.createWindow();
    var webV = Ti.UI.createWebView();
    win.addEventListener('open', openFunc);
    win.addEventListener('close', function(e) {
        win.removeEventListener('open', openFunc);
        win.remove(webV);
        webV = null;
        win = null;
    });
    win.add(webV);
    return win;
}

In this example the close event doesn't call another function. This is acceptable because when we close the window, the close event will fire, which nulls the win object, and this removes the object's event listener. It is done this way to prevent having to use module variables to handle the window and web view objects being cleaned up in another function.

Summary

As you have seen, there are quite a few considerations to take into account when coding with Titanium. Some of these can also be applied to other programming languages. It is completely optional to follow best practices; they are there as a guide, as a place to start, and as a way to manage your code going forward. As a developer you will find your own way to implement the methodology used within the application; you will decide when, where, and what comments to add, which code format to use, and which module to put what code into. But best practices and guidelines are developed for a reason; they keep code consistent within the application, they allow other developers to pick up what is going on quickly, and they enable clean and reliable code.
Always apply a good coding style to your application. You will thank yourself in the future.

Resources for Article:

Further resources on this subject: Animating Properties and Tweening Pages in Android 3-0 [Article] Creating, Compiling, and Deploying Native Projects from the Android NDK [Article] Appcelerator Titanium: Creating Animations, Transformations, and Understanding Drag-and-drop [Article]


Implementing the data model

Packt
06 Mar 2013
13 min read
(For more resources related to this topic, see here.)

Getting on with it

Before we define our model, let's define a namespace where it will live. This is an important habit to establish, since it relieves us of having to worry about colliding with another function, object, or variable of the same name. While there are various methods used to create a namespace, we're going to do it simply, using the following code snippet:

// quizQuestion.js
var QQ = QQ || {};

Now that our namespace is defined, we can create our question object as follows:

QQ.Question = function ( theQuestion ) {
    var self = this;

Note the use of self: this will allow us to refer to the object using self rather than this. (JavaScript's this is a bit nuts, so it's always better to refer to a variable that we know will always refer to the object.) Next, we'll set up the properties based on the diagram we created from step two, using the following code snippet:

    self.question = theQuestion;
    self.answers = Array();
    self.correctAnswer = -1;

We've set the self.correctAnswer value to -1 to indicate that, at the moment, any answer provided by the player is considered correct. This means you can ask questions where all of the answers are right. Our next step is to define the methods, or interactions, the object will have. Let's start with determining whether an answer is correct. In the following code, we will take an incoming answer and compare it to the self.correctAnswer value.
If it matches, or if the self.correctAnswer value is -1, we'll indicate that the answer is correct:

    self.testAnswer = function( theAnswerGiven ) {
        if ((theAnswerGiven == self.correctAnswer) ||
            (self.correctAnswer == -1)) {
            return true;
        } else {
            return false;
        }
    }

We're going to need a way to access a specific answer, so we'll define the answerAtIndex function as follows:

    self.answerAtIndex = function ( theIndex ) {
        return self.answers[ theIndex ];
    }

To be a well-defined model, we should always have a way of determining the number of items in the model, as shown in the following code snippet:

    self.answerCount = function () {
        return self.answers.length;
    }

Next, we need to define a method that allows an answer to be added to our object. Note that by returning self, we permit daisy-chaining in our code:

    self.addAnswer = function( theAnswer ) {
        self.answers.push ( theAnswer );
        return self;
    }

In theory, we could display the answers to a question in the order they were given to the object. In practice, that would turn out to be a pretty boring game: the answers would always be in the same order, and chances would be pretty good that the first answer would be the correct answer. So let's give ourselves a randomized list, using the following code snippet:

    self.getRandomizedAnswers = function () {
        var randomizedArray = Array();
        var theRandomNumber;
        var theNumberExists;
        // go through each item in the answers array
        for (var i=0; i<self.answers.length; i++) {
            // always do this at least once
            do {
                // generate a random number less than the
                // count of answers
                theRandomNumber = Math.floor ( Math.random() *
                                               self.answers.length );
                theNumberExists = false;
                // check to see if it is already in the array
                for (var j=0; j<randomizedArray.length; j++) {
                    if (randomizedArray[j] == theRandomNumber) {
                        theNumberExists = true;
                    }
                }
                // If it exists, we repeat the loop.
            } while ( theNumberExists );
            // We have a random number that is unique in the
            // array; add it to it.
            randomizedArray.push ( theRandomNumber );
        }
        return randomizedArray;
    }

The randomized list is just an array of numbers that indexes into the answers[] array. To get the actual answer, we'll have to use the answerAtIndex() method. Our model still needs a way to set the correct answer. Again, notice the return value in the following code snippet, permitting us to daisy-chain later on:

    self.setCorrectAnswer = function ( theIndex ) {
        self.correctAnswer = theIndex;
        return self;
    }

Now that we've properly set the correct answer, what if we need to ask the object what the correct answer is? For this, let's define a getCorrectAnswer function, using the following code snippet:

    self.getCorrectAnswer = function () {
        return self.correctAnswer;
    }

Of course, our object also needs to return the question given to it whenever it was created; this can be done using the following code snippet:

    self.getQuestion = function() {
        return self.question;
    }
}

That's it for the question object. Next we'll create the container that will hold all of our questions, using the following code line:

QQ.questions = Array();

We could go the regular object-oriented approach and make the container an object as well, but in this game we have only one list of questions, so it's easier to do it this way.
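As an aside, the getRandomizedAnswers loop above re-rolls random numbers until it finds an unused one. A Fisher-Yates shuffle (a standard alternative, not the book's code) produces the same kind of randomized index array in a single pass, with no rejection loop:

```javascript
// Alternative to getRandomizedAnswers: build the index array
// [0, 1, ..., n-1] and shuffle it in place with Fisher-Yates,
// avoiding the repeated "does this number already exist?" scan.
function randomizedIndexes(n) {
    var indexes = [];
    for (var i = 0; i < n; i++) {
        indexes.push(i);
    }
    // Walk backwards, swapping each slot with a randomly chosen
    // slot at or before it.
    for (var k = n - 1; k > 0; k--) {
        var j = Math.floor(Math.random() * (k + 1));
        var tmp = indexes[k];
        indexes[k] = indexes[j];
        indexes[j] = tmp;
    }
    return indexes;
}

var order = randomizedIndexes(3);
// order is some permutation of [0, 1, 2]
```

Each entry still indexes into answers[], so answerAtIndex() is used exactly as before to recover the actual answer text.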
Next, we need to have the ability to add a question to the container; this can be done using the following code snippet: QQ.addQuestion = function (theQuestion) { QQ.questions.push ( theQuestion ); } Like any good data model, we need to know how many questions we have; we can know this using the following code snippet: QQ.count = function () { return QQ.questions.length; } Finally, we need to be able to get a random question out of the list so that we can show it to the player; this can be done using the following code snippet: QQ.getRandomQuestion = function () { var theQuestion = Math.floor (Math.random() * QQ.count()); return QQ.questions[theQuestion]; } Our data model is officially complete. Let's define some questions using the following code snippet: // quizQuestions.js // // QUESTION 1 // QQ.addQuestion ( new QQ.Question ( "WHAT_IS_THE_COLOR_OF_THE_SUN?" ) .addAnswer( "YELLOW" ) .addAnswer( "WHITE" ) .addAnswer( "GREEN" ) .setCorrectAnswer ( 0 ) ); Notice how we chain the addAnswer and setCorrectAnswer calls on the new question object. This is what is meant by daisy-chaining: it helps us write just a little bit less code. You may be wondering why we're using upper-case text for the questions and answers. This is due to how we'll localize the text, which is next: PKLOC.addTranslation ( "en", "WHAT_IS_THE_COLOR_OF_THE_SUN?", "What is the color of the Sun?" ); PKLOC.addTranslation ( "en", "YELLOW", "Yellow" ); PKLOC.addTranslation ( "en", "WHITE", "White" ); PKLOC.addTranslation ( "en", "GREEN", "Green" ); PKLOC.addTranslation ( "es", "WHAT_IS_THE_COLOR_OF_THE_SUN?", "¿Cuál es el color del Sol?" ); PKLOC.addTranslation ( "es", "YELLOW", "Amarillo" ); PKLOC.addTranslation ( "es", "WHITE", "Blanco" ); PKLOC.addTranslation ( "es", "GREEN", "Verde" ); The questions and answers themselves serve as keys to the actual translation. 
This serves two purposes: it makes the keys obvious in our code, so we know that the text will be replaced later on, and should we forget to include a translation for one of the keys, it'll show up in uppercase letters. PKLOC, as used in the earlier code snippet, is the namespace we're using for our localization library. It's defined in www/framework/localization.js. The addTranslation method adds a translation to a specific locale. The first parameter is the locale for which we're defining the translation, the second parameter is the key, and the third parameter is the translated text. The PKLOC.addTranslation function looks like the following code snippet: PKLOC.addTranslation = function (locale, key, value) { if (PKLOC.localizedText[locale]) { PKLOC.localizedText[locale][key] = value; } else { PKLOC.localizedText[locale] = {}; PKLOC.localizedText[locale][key] = value; } } The addTranslation method first checks to see if an array is defined under the PKLOC.localizedText array for the desired locale. If it is there, it just adds the key/value pair. If it isn't, it creates the array first and then adds the key/value pair. You may be wondering how the PKLOC.localizedText array gets defined in the first place. The answer is that it is defined when the script is loaded, a little higher in the file: PKLOC.localizedText = {}; Continue adding questions in this fashion until you've created all the questions you want. The quizQuestions.js file contains ten questions. You could, of course, add as many as you want. What did we do? In this task, we created our data model and created some data for the model. We also showed how translations are added to each locale. What else do I need to know? Before we move on to the next task, let's cover a little more of the localization library we'll be using. Our localization efforts are split into two parts: translation and data formatting. 
For the translation effort, we're using our own simple translation framework, literally just an array of keys and values based on locale. Whenever code asks for the translation for a key, we'll look it up in the array and return whatever translation we find, if any. But first, we need to determine the actual locale of the player, using the following code snippet: // www/framework/localization.js PKLOC.currentUserLocale = ""; PKLOC.getUserLocale = function() { Determining the locale isn't hard, but neither is it as easy as you would initially think. There is a property (navigator.language) under WebKit browsers that is technically supposed to return the locale, but it has a bug under Android, so we have to use the userAgent. For WP7, we have to use one of three properties to determine the value. Because that takes some work, we'll check to see if we've defined it before; if we have, we'll return that value instead: if (PKLOC.currentUserLocale) { return PKLOC.currentUserLocale; } Next, we determine the current device we're on by using the device object provided by Cordova. We'll check for it first, and if it doesn't exist, we'll assume we can access it using one of the four properties attached to the navigator object using the following code snippet: var currentPlatform = "unknown"; if (typeof device != 'undefined') { currentPlatform = device.platform; } We'll also provide a suitable default locale if we can't determine the user's locale at all as seen in the following code snippet: var userLocale = "en-US"; Next, we handle parsing the user agent if we're on an Android platform. The following code is heavily inspired by an answer given online at http://stackoverflow.com/a/7728507/741043. 
if (currentPlatform == "Android") { var userAgent = navigator.userAgent; var tempLocale = userAgent.match(/Android.*([a-zA-Z]{2}-[a-zA-Z]{2})/); if (tempLocale) { userLocale = tempLocale[1]; } } If we're on any other platform, we'll use the navigator object to retrieve the locale as follows: else { userLocale = navigator.language || navigator.browserLanguage || navigator.systemLanguage || navigator.userLanguage; } Once we have the locale, we return it as follows: PKLOC.currentUserLocale = userLocale; return PKLOC.currentUserLocale; } This method is called over and over by all of our translation code, which means it needs to be efficient. This is why we've defined the PKLOC.currentUserLocale property. Once it is set, the preceding code won't try to calculate it again. This also introduces another benefit: we can easily test our translation code by overwriting this property. While it is always important to test that the code properly localizes when the device is set to a specific language and region, it often takes considerable time to switch between these settings. Having the ability to set the specific locale helps us save time in the initial testing by bypassing the time it takes to switch device settings. It also permits us to focus on a specific locale, especially when testing. Translation of text is accomplished by a convenience function named __T(). The convenience functions are going to be our only functions outside of any specific namespace simply because we are aiming for easy-to-type and easy-to-remember names that aren't arduous to add to our code. This is especially important since they'll wrap every string, number, date, or percentage in our code. The __T() function depends on two functions: substituteVariables and lookupTranslation. 
The first function is defined as follows: PKLOC.substituteVariables = function ( theString, theParms ) { var currentValue = theString; // handle replacement variables if (theParms) { for (var i=1; i<=theParms.length; i++) { currentValue = currentValue.replace("%" + i, theParms[i-1]); } } return currentValue; } All this function does is handle the substitution variables. This means we can define a translation with %1 in the text and we will be able to replace %1 with some value passed into the function. The next function, lookupTranslation, is defined as follows: PKLOC.lookupTranslation = function ( key, theLocale ) { var userLocale = theLocale || PKLOC.getUserLocale(); if ( PKLOC.localizedText[userLocale] ) { if ( PKLOC.localizedText[userLocale][key.toUpperCase()] ) { return PKLOC.localizedText[userLocale][key.toUpperCase()]; } } return null; } Essentially, we're checking to see if a specific translation exists for the given key and locale. If it does, we'll return the translation, but if it doesn't, we'll return null. Note that the key is always converted to uppercase, so case doesn't matter when looking up a translation. Our __T() function looks as follows: function __T(key, parms, locale) { var userLocale = locale || PKLOC.getUserLocale(); var currentValue = ""; First, we determine if the translation requested can be found in the locale, whatever that may be. Note that it can be passed in, therefore overriding the current locale. This can be done using the following code snippet: if (! (currentValue=PKLOC.lookupTranslation(key, userLocale)) ) { Locales are often of the form xx-YY, where xx is a two-character language code and YY is a two-character country code. My locale is defined as en-US. Another player's might be defined as es-ES. If you recall, we defined our translations only for the language. This presents a problem: the preceding code will not return any translation unless we defined the translation for the language and the country. 
Sometimes it is critical to define a translation specific to a language and a country. While various regions may speak the same language from a technical perspective, idioms often differ. If you use an idiom in your translation, you'll need to localize it to the specific region that uses it, or you could generate potential confusion. Therefore, we chop off the country code, and try again as follows: userLocale = userLocale.substr(0,2); if (! (currentValue=PKLOC.lookupTranslation(key, userLocale)) ) { But we've only defined translations for English (en) and Spanish (es)! What if the player's locale is fr-FR (French)? The preceding code will fail, because we've not defined any translation for the fr language (French). Therefore, we'll check for a suitable default, which we've defined to be en-US, American English: userLocale = "en-US"; if (! (currentValue=PKLOC.lookupTranslation(key, userLocale)) ) { Of course, we are now in the same boat as before: there are no translations defined for en-US in our game. So we need to fall back to en as follows: userLocale = "en"; if (! (currentValue=PKLOC.lookupTranslation(key, userLocale)) ) { But what happens if we can't find a translation at all? We could be mean and throw a nasty error, and perhaps you might want to do exactly that, but in our example, we're just returning the incoming key. If the convention of capitalizing the key is always followed, we'll still be able to see that something hasn't been translated. currentValue = key; } } } } Finally, we pass the currentValue parameter to the substituteVariables function in order to process any substitutions that we might need as follows: return PKLOC.substituteVariables( currentValue, parms ); } Summary In this article, we saw the file quizQuestion.js, which was the actual model: it specified how the data should be formatted and how we can interact with it. We also saw the quizQuestions.js file, which contained our actual question data. 
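As a final check, the whole fallback chain described above (full locale, then bare language, then en-US, then en, then the key itself) can be exercised in isolation. The following condenses the article's addTranslation, lookupTranslation, substituteVariables, and __T() into one runnable sketch, with getUserLocale() stubbed to return a fixed locale:

```javascript
// Condensed localization sketch; getUserLocale() is stubbed for testing.
var PKLOC = { localizedText: {}, currentUserLocale: "en-US" };
PKLOC.getUserLocale = function () { return PKLOC.currentUserLocale; };
PKLOC.addTranslation = function (locale, key, value) {
    if (!PKLOC.localizedText[locale]) { PKLOC.localizedText[locale] = {}; }
    PKLOC.localizedText[locale][key] = value;
};
PKLOC.lookupTranslation = function (key, theLocale) {
    var userLocale = theLocale || PKLOC.getUserLocale();
    if (PKLOC.localizedText[userLocale] &&
        PKLOC.localizedText[userLocale][key.toUpperCase()]) {
        return PKLOC.localizedText[userLocale][key.toUpperCase()];
    }
    return null;
};
PKLOC.substituteVariables = function (theString, theParms) {
    var currentValue = theString;
    if (theParms) {
        for (var i = 1; i <= theParms.length; i++) {
            currentValue = currentValue.replace("%" + i, theParms[i - 1]);
        }
    }
    return currentValue;
};
function __T(key, parms, locale) {
    var userLocale = locale || PKLOC.getUserLocale();
    var currentValue = "";
    if (!(currentValue = PKLOC.lookupTranslation(key, userLocale))) {
        userLocale = userLocale.substr(0, 2);          // strip the country
        if (!(currentValue = PKLOC.lookupTranslation(key, userLocale))) {
            userLocale = "en-US";                      // default locale
            if (!(currentValue = PKLOC.lookupTranslation(key, userLocale))) {
                userLocale = "en";                     // default language
                if (!(currentValue = PKLOC.lookupTranslation(key, userLocale))) {
                    currentValue = key;                // give up: echo the key
                }
            }
        }
    }
    return PKLOC.substituteVariables(currentValue, parms);
}
// Toy translation table (keys are ours, for illustration only).
PKLOC.addTranslation("en", "HELLO_%1", "Hello, %1!");
PKLOC.addTranslation("es", "HELLO_%1", "\u00a1Hola, %1!");
```

An es-MX player falls back to the es translation, an fr-FR player falls back through en-US to en, and an unknown key comes back verbatim in uppercase, flagging the missing translation.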
Resources for Article : Further resources on this subject: Configuring the ChildBrowser plugin [Article] Adding Geographic Capabilities via the GeoPlaces Theme [Article] iPhone: Issues Related to Calls, SMS, and Contacts [Article]

Packt
20 Feb 2013
3 min read

So, what is Spring for Android?

(For more resources related to this topic, see here.) RestTemplate The RestTemplate module is a port of the Java-based REST client RestTemplate, which initially appeared in 2009 in Spring MVC. Like the other Spring template counterparts (JdbcTemplate, JmsTemplate, and so on), its aim is to bring to Java developers (and thus Android developers) a high-level abstraction of lower-level Java APIs; in this case, it eases the development of HTTP clients. In its Android version, RestTemplate relies on the core Java HTTP facilities (HttpURLConnection) or the Apache HTTP Client. Depending on the Android device version you use to run your app, RestTemplate for Android can pick the most appropriate one for you. This is in line with Android developers' recommendations. See http://android-developers.blogspot.ca/2011/09/androids-http-clients.html. This blog post explains why in certain cases Apache HTTP Client is preferred over HttpURLConnection. RestTemplate for Android also supports gzip compression and different message converters to convert your Java objects from and to JSON, XML, and so on. Auth/Spring Social The goal of the Spring Android Auth module is to let an Android app gain authorization to a web service provider using OAuth (Version 1 or 2). OAuth is probably the most popular authorization protocol (and it is worth mentioning that it is an open standard) and is currently used by Facebook, Twitter, Google apps (and many others) to let third-party applications access users' accounts. 
The Spring for Android Auth module is based on several Spring libraries because it needs to securely (with cryptography) persist (via JDBC) a token obtained via HTTP; here is a list of the needed libraries for OAuth: Spring Security Crypto: To encrypt the token Spring Android OAuth: This extends Spring Security Crypto, adding a dedicated encryptor for Android and an SQLite-based persistence provider Spring Android Rest Template: To interact with the HTTP services Spring Social Core: The OAuth workflow abstraction While performing the OAuth workflow, we will also need the browser to take the user to the service provider authentication page; for example, the following is the Twitter OAuth authentication dialog: What Spring for Android is not SpringSource (the company behind Spring for Android) is very famous among Java developers. Their most popular product is the Spring Framework for Java, which includes a dependency injection framework (also called an inversion of control framework). Spring for Android does not bring inversion of control to the Android platform. In its very first release (1.0.0.M1), Spring for Android brought a common logging facade for Android; the authors removed it in the next version. Summary In this article, we have learned that Spring for Android eases the development of Android applications. We learned the details about the important modules present in it and its functions. We also learned briefly about the dependency injection framework and that Spring for Android does not bring inversion of control to the Android platform. Resources for Article : Further resources on this subject: Top 5 Must-have Android Applications [Article] Creating, Compiling, and Deploying Native Projects from the Android NDK [Article] Manifest Assurance: Security and Android Permissions for Flash [Article]

Packt
20 Feb 2013
2 min read

New iPad Features in iOS 6

(For more resources related to this topic, see here.) New iPad features and native applications We're going to identify some of the applications that come built-in to the new iPad. The applications designed by Apple for the iPad are Safari, Mail, Photos, FaceTime, Maps, Siri, Newsstand, Messages, Calendar, Reminders, Contacts, App Store, iTunes, Music, Videos, Notes, Camera, Photo Booth, Clock, Game Center, and Settings. Let's get started by exploring these applications. Getting ready Locate the Mail, Photos, App Store, iTunes, Music, and Settings apps For details on each app, visit http://www.apple.com/ipad/built-in-apps/ How to do it... These apps form the base of the iPad and by themselves can satisfy most of our media and communication needs. We'll delve deeper into each of the following apps in the upcoming recipes. Mail Photos App Store iTunes and Music Settings How it works... The applications can work together to provide a unifying experience. From the Photos app, we are able to share photos via the Mail application. Purchasing music in iTunes will allow playback in our Music app. Ubiquity is what makes this device and its apps so useful. Summary This article gave us a brief overview of the applications that illustrate the new iPad's features. Resources for Article : Further resources on this subject: Build iPhone, Android and iPad Applications using jQTouch [Article] Getting Started on UDK with iOS [Article] Interface Designing for Games in iOS [Article]
Packt
18 Feb 2013
16 min read

Applications of Physics

(For more resources related to this topic, see here.) Introduction to the Box2D physics extension Physics-based games are one of the most popular types of games available for mobile devices. AndEngine allows the creation of physics-based games with the Box2D extension. With this extension, we can construct any type of physically realistic 2D environment from small, simple simulations to complex games. In this recipe, we will create an activity that demonstrates a simple setup for utilizing the Box2D physics engine extension. Furthermore, we will use this activity for the remaining recipes in this article. Getting ready... First, create a new activity class named PhysicsApplication that extends BaseGameActivity and implements IAccelerationListener and IOnSceneTouchListener. How to do it... Follow these steps to build our PhysicsApplication activity class: Create the following variables in the class: public static int cameraWidth = 800; public static int cameraHeight = 480; public Scene mScene; public FixedStepPhysicsWorld mPhysicsWorld; public Body groundWallBody; public Body roofWallBody; public Body leftWallBody; public Body rightWallBody; We need to set up the foundation of our activity. To start doing so, place these four common overridden methods in the class to set up the engine, resources, and the main scene: @Override public Engine onCreateEngine(final EngineOptions pEngineOptions) { return new FixedStepEngine(pEngineOptions, 60); } @Override public EngineOptions onCreateEngineOptions() { EngineOptions engineOptions = new EngineOptions(true, ScreenOrientation.LANDSCAPE_SENSOR, new FillResolutionPolicy(), new Camera(0,0, cameraWidth, cameraHeight)); engineOptions.getRenderOptions().setDithering(true); engineOptions.getRenderOptions(). 
getConfigChooserOptions().setRequestedMultiSampling(true); engineOptions.setWakeLockOptions( WakeLockOptions.SCREEN_ON); return engineOptions; } @Override public void onCreateResources(OnCreateResourcesCallback pOnCreateResourcesCallback) { pOnCreateResourcesCallback.onCreateResourcesFinished(); } @Override public void onCreateScene(OnCreateSceneCallback pOnCreateSceneCallback) { mScene = new Scene(); mScene.setBackground(new Background(0.9f,0.9f,0.9f)); pOnCreateSceneCallback.onCreateSceneFinished(mScene); } Continue setting up the activity by adding the following overridden method, which will be used to populate our scene: @Override public void onPopulateScene(Scene pScene, OnPopulateSceneCallback pOnPopulateSceneCallback) { } Next, we will fill the previous method with the following code to create our PhysicsWorld object and Scene object: mPhysicsWorld = new FixedStepPhysicsWorld(60, new Vector2(0f,-SensorManager.GRAVITY_EARTH*2), false, 8, 3); mScene.registerUpdateHandler(mPhysicsWorld); final FixtureDef WALL_FIXTURE_DEF = PhysicsFactory.createFixtureDef(0, 0.1f, 0.5f); final Rectangle ground = new Rectangle(cameraWidth / 2f, 6f, cameraWidth - 4f, 8f, this.getVertexBufferObjectManager()); final Rectangle roof = new Rectangle(cameraWidth / 2f, cameraHeight - 6f, cameraWidth - 4f, 8f, this.getVertexBufferObjectManager()); final Rectangle left = new Rectangle(6f, cameraHeight / 2f, 8f, cameraHeight - 4f, this.getVertexBufferObjectManager()); final Rectangle right = new Rectangle(cameraWidth - 6f, cameraHeight / 2f, 8f, cameraHeight - 4f, this.getVertexBufferObjectManager()); ground.setColor(0f, 0f, 0f); roof.setColor(0f, 0f, 0f); left.setColor(0f, 0f, 0f); right.setColor(0f, 0f, 0f); groundWallBody = PhysicsFactory.createBoxBody( this.mPhysicsWorld, ground, BodyType.StaticBody, WALL_FIXTURE_DEF); roofWallBody = PhysicsFactory.createBoxBody( this.mPhysicsWorld, roof, BodyType.StaticBody, WALL_FIXTURE_DEF); leftWallBody = PhysicsFactory.createBoxBody( 
this.mPhysicsWorld, left, BodyType.StaticBody, WALL_FIXTURE_DEF); rightWallBody = PhysicsFactory.createBoxBody( this.mPhysicsWorld, right, BodyType.StaticBody, WALL_FIXTURE_DEF); this.mScene.attachChild(ground); this.mScene.attachChild(roof); this.mScene.attachChild(left); this.mScene.attachChild(right); // Further recipes in this chapter will require us to place code here. mScene.setOnSceneTouchListener(this); pOnPopulateSceneCallback.onPopulateSceneFinished(); The following overridden methods handle the scene touch events, the accelerometer input, and the two engine life cycle events, onResumeGame and onPauseGame. Place them at the end of the class to finish this recipe: @Override public boolean onSceneTouchEvent(Scene pScene, TouchEvent pSceneTouchEvent) { // Further recipes in this chapter will require us to place code here. return true; } @Override public void onAccelerationAccuracyChanged( AccelerationData pAccelerationData) {} @Override public void onAccelerationChanged( AccelerationData pAccelerationData) { final Vector2 gravity = Vector2Pool.obtain( pAccelerationData.getX(), pAccelerationData.getY()); this.mPhysicsWorld.setGravity(gravity); Vector2Pool.recycle(gravity); } @Override public void onResumeGame() { super.onResumeGame(); this.enableAccelerationSensor(this); } @Override public void onPauseGame() { super.onPauseGame(); this.disableAccelerationSensor(); } How it works... The first thing that we do is define a camera width and height. Then, we define a Scene object and a FixedStepPhysicsWorld object in which the physics simulations will take place. The last set of variables defines what will act as the borders for our physics-based scenes. In the second step, we override the onCreateEngine() method to return a FixedStepEngine object that will process 60 updates per second. 
The reason that we do this, while also using a FixedStepPhysicsWorld object, is to create a simulation that will be consistent across all devices, regardless of how efficiently a device can process the physics simulation. We then create the EngineOptions object with standard preferences, create the onCreateResources() method with only a simple callback, and set the main scene with a light-gray background. In the onPopulateScene() method, we create our FixedStepPhysicsWorld object that has double the gravity of the Earth, passed as an (x,y) coordinate Vector2 object, and will update 60 times per second. The gravity can be set to other values to make our simulations more realistic or 0 to create a zero gravity simulation. A gravity setting of 0 is useful for space simulations or for games that use a top-down camera view instead of a profile. The false Boolean parameter sets the AllowSleep property of the PhysicsWorld object, which tells PhysicsWorld to not let any bodies deactivate themselves after coming to a stop. The last two parameters of the FixedStepPhysicsWorld object tell the physics engine how many times to calculate velocity and position movements. Higher iterations will create simulations that are more accurate, but can cause lag or jitteriness because of the extra load on the processor. After creating the FixedStepPhysicsWorld object, we register it with the main scene as an update handler. The physics world will not run a simulation without being registered. The variable WALL_FIXTURE_DEF is a fixture definition. Fixture definitions hold the shape and material properties of entities that will be created within the physics world as fixtures. The shape of a fixture can be either circular or polygonal. The material of a fixture is defined by its density, elasticity, and friction, all of which are required when creating a fixture definition. 
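The fixed-step idea behind FixedStepEngine and FixedStepPhysicsWorld is worth seeing in isolation. The sketch below (JavaScript, purely to illustrate the concept; it is not AndEngine code) feeds irregular frame times into an accumulator so that the simulation only ever advances in exact 1/60-second steps:

```javascript
// Fixed-timestep loop: wall-clock time goes into an accumulator, and the
// simulation advances only in exact, fixed steps, so the results are the
// same no matter how irregular the rendering frame rate is.
var STEP = 1 / 60;           // seconds per physics step
var accumulator = 0;
var stepsRun = 0;

function onFrame(frameDt) {  // frameDt: seconds since the last rendered frame
    accumulator += frameDt;
    while (accumulator >= STEP) {
        accumulator -= STEP; // a real engine would run one physics step here
        stepsRun++;
    }
}

// Three 20 ms frames always yield exactly three fixed steps,
// with the remainder carried over in the accumulator.
[0.020, 0.020, 0.020].forEach(onFrame);
```

Because every device runs the same number of identical steps for the same elapsed time, the simulation stays deterministic across fast and slow hardware.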
Following the creation of the WALL_FIXTURE_DEF variable, we create four rectangles that will represent the locations of the wall bodies. A body in the Box2D physics world is made of fixtures. While only one fixture is necessary to create a body, multiple fixtures can create complex bodies with varying properties. Further along in the onPopulateScene() method, we create the box bodies that will act as our walls in the physics world. The rectangles that were previously created are passed to the bodies to define their position and shape. We then define the bodies as static, which means that they will not react to any forces in the physics simulation. Lastly, we pass the wall fixture definition to the bodies to complete their creation. After creating the bodies, we attach the rectangles to the main scene and set the scene's touch listener to our activity, which will be accessed by the onSceneTouchEvent() method. The final line of the onPopulateScene() method tells the engine that the scene is ready to be shown. The overridden onSceneTouchEvent() method will handle all touch interactions for our scene. The onAccelerationAccuracyChanged() and onAccelerationChanged() methods are inherited from the IAccelerationListener interface and allow us to change the gravity of our physics world when the device is tilted, rotated, or panned. We override onResumeGame() and onPauseGame() to keep the accelerometer from using unnecessary battery power when our game activity is not in the foreground. There's more... In the overridden onAccelerationChanged() method, we make two calls to the Vector2Pool class. The Vector2Pool class simply gives us a way of re-using our Vector2 objects that might otherwise require garbage collection by the system. On newer devices, the Android Garbage Collector has been streamlined to reduce noticeable hiccups, but older devices might still experience lag depending on how much memory the variables being garbage collected occupy. 
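The obtain/recycle pattern behind Vector2Pool is simple to sketch. This illustrative JavaScript version (not the AndEngine API) shows why a pool keeps the garbage collector quiet: after the first allocation, every obtain() reuses a recycled instance:

```javascript
// Minimal object pool in the spirit of Vector2Pool: obtain() reuses a
// recycled instance when one is available, so steady-state use allocates
// nothing new and leaves nothing for the garbage collector.
var Vector2Pool = {
    pool: [],
    allocations: 0,
    obtain: function (x, y) {
        var v;
        if (this.pool.length > 0) {
            v = this.pool.pop();     // reuse a recycled vector
        } else {
            this.allocations++;      // only allocate when the pool is empty
            v = { x: 0, y: 0 };
        }
        v.x = x;
        v.y = y;
        return v;
    },
    recycle: function (v) {
        this.pool.push(v);
    }
};

// Simulate many accelerometer updates: each obtains a vector, uses it,
// and hands it back, just as onAccelerationChanged() does.
for (var i = 0; i < 1000; i++) {
    var gravity = Vector2Pool.obtain(i, -9.8);
    Vector2Pool.recycle(gravity);
}
```

The same discipline applies in AndEngine: always recycle() what you obtain(), as the onAccelerationChanged() method above does.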
Visit http://www.box2d.org/manual.html to see the Box2D User Manual. The AndEngine Box2D extension is based on a Java port of the official Box2D C++ physics engine, so some variations in procedure exist, but the general concepts still apply. See also Understanding different body types in this article. Understanding different body types The Box2D physics world gives us the means to create different body types that allow us to control the physics simulation. We can generate dynamic bodies that react to forces and other bodies, static bodies that do not move, and kinematic bodies that move but are not affected by forces or other bodies. Choosing which type each body will be is vital to producing an accurate physics simulation. In this recipe, we will see how three bodies react to each other during collision, depending on their body types. Getting ready... Follow the recipe in the Introduction to the Box2D physics extension section given at the beginning of this article to create a new activity that will facilitate the creation of our bodies with varying body types. How to do it... 
Complete the following steps to see how specifying a body type for bodies affects them: First, insert the following fixture definition into the onPopulateScene() method: FixtureDef BoxBodyFixtureDef = PhysicsFactory.createFixtureDef(20f, 0f, 0.5f); Next, place the following code that creates three rectangles and their corresponding bodies after the fixture definition from the previous step: Rectangle staticRectangle = new Rectangle(cameraWidth / 2f,75f,400f,40f,this.getVertexBufferObjectManager()); staticRectangle.setColor(0.8f, 0f, 0f); mScene.attachChild(staticRectangle); PhysicsFactory.createBoxBody(mPhysicsWorld, staticRectangle, BodyType.StaticBody, BoxBodyFixtureDef); Rectangle dynamicRectangle = new Rectangle(400f, 120f, 40f, 40f, this.getVertexBufferObjectManager()); dynamicRectangle.setColor(0f, 0.8f, 0f); mScene.attachChild(dynamicRectangle); Body dynamicBody = PhysicsFactory.createBoxBody(mPhysicsWorld, dynamicRectangle, BodyType.DynamicBody, BoxBodyFixtureDef); mPhysicsWorld.registerPhysicsConnector(new PhysicsConnector( dynamicRectangle, dynamicBody)); Rectangle kinematicRectangle = new Rectangle(600f, 100f, 40f, 40f, this.getVertexBufferObjectManager()); kinematicRectangle.setColor(0.8f, 0.8f, 0f); mScene.attachChild(kinematicRectangle); Body kinematicBody = PhysicsFactory.createBoxBody(mPhysicsWorld, kinematicRectangle, BodyType.KinematicBody, BoxBodyFixtureDef); mPhysicsWorld.registerPhysicsConnector(new PhysicsConnector( kinematicRectangle, kinematicBody)); Lastly, add the following code after the definitions from the previous step to set the linear and angular velocities for our kinematic body: kinematicBody.setLinearVelocity(-2f, 0f); kinematicBody.setAngularVelocity((float) (-Math.PI)); How it works... In the first step, we create the BoxBodyFixtureDef fixture definition that we will use when creating our bodies in the second step. 
For more information on fixture definitions, see the Introduction to the Box2D physics extension recipe in this article. In step two, we first define the staticRectangle rectangle by calling the Rectangle constructor. We place staticRectangle at the position of cameraWidth / 2f, 75f, which is near the lower-center of the scene, and we set the rectangle to have a width of 400f and a height of 40f, which makes the rectangle into a long, flat bar. Then, we set the staticRectangle rectangle's color to be red by calling staticRectangle.setColor(0.8f, 0f, 0f). Lastly, for the staticRectangle rectangle, we attach it to the scene by calling the mScene.attachChild() method with staticRectangle as the parameter. Next, we create a body in the physics world that matches our staticRectangle. To do this, we call the PhysicsFactory.createBoxBody() method with the parameters of mPhysicsWorld, which is our physics world, staticRectangle to tell the box to be created with the same position and size as the staticRectangle rectangle, BodyType.StaticBody to define the body as static, and our BoxBodyFixtureDef fixture definition. Our next rectangle, dynamicRectangle, is created at the location of 400f and 120f, which is the middle of the scene slightly above the staticRectangle rectangle. Our dynamicRectangle rectangle's width and height are set to 40f to make it a small square. Then, we set its color to green by calling dynamicRectangle.setColor(0f, 0.8f, 0f) and attach it to our scene using mScene.attachChild(dynamicRectangle). Next, we create the dynamicBody variable using the PhysicsFactory.createBoxBody() method in the same way that we did for our staticRectangle rectangle. Notice that we set the dynamicBody variable to have BodyType of DynamicBody. This sets the body to be dynamic. Now, we register PhysicsConnector with the physics world to link dynamicRectangle and dynamicBody. 
A PhysicsConnector class links an entity within our scene to a body in the physics world, representing the body's real-time position and rotation in our scene. Our last rectangle, kinematicRectangle, is created at the location of 600f and 100f, which places it on top of our staticRectangle rectangle toward the right-hand side of the scene. It is set to have a height and width of 40f, which makes it a small square like our dynamicRectangle rectangle. We then set the kinematicRectangle rectangle's color to yellow and attach it to our scene. Similar to the previous two bodies that we created, we call the PhysicsFactory.createBoxBody() method to create our kinematicBody variable. Take note that we create our kinematicBody variable with a BodyType of KinematicBody. This sets it to be kinematic and thus moved only by the setting of its velocities. Lastly, we register a PhysicsConnector class between our kinematicRectangle rectangle and our kinematicBody body type. In the last step, we set our kinematicBody body's linear velocity by calling the setLinearVelocity() method with a vector of -2f on the x axis, which makes it move to the left. Finally, we set our kinematicBody body's angular velocity to negative pi by calling kinematicBody.setAngularVelocity((float) (-Math.PI)). For more information on setting a body's velocities, see the Using forces, velocities, and torque recipe in this article. There's more... Static bodies cannot move from applied or set forces, but can be relocated using the setTransform() method. However, we should avoid using the setTransform() method while a simulation is running, because it makes the simulation unstable and can cause some strange behaviors. Instead, if we want to change the position of a static body, we can do so whenever creating the simulation or, if we need to change the position at runtime, simply check that the new position will not cause the static body to overlap existing dynamic bodies or kinematic bodies. 
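The behavioral contract of the three body types can be summarized with a toy one-dimensional integrator (illustrative JavaScript, not Box2D code): static bodies ignore forces and velocities, kinematic bodies integrate only the velocity we set, and dynamic bodies accumulate forces into velocity and velocity into position:

```javascript
// Toy 1D integrator illustrating how Box2D treats each body type.
function stepBody(body, gravity, dt) {
    if (body.type === "static") {
        return;                      // static bodies never move
    }
    if (body.type === "dynamic") {
        body.vx += gravity * dt;     // only dynamic bodies feel forces
    }
    body.x += body.vx * dt;          // kinematic and dynamic integrate velocity
}

var staticBody    = { type: "static",    x: 0, vx: 0 };
var dynamicBody   = { type: "dynamic",   x: 0, vx: 0 };
var kinematicBody = { type: "kinematic", x: 0, vx: -2 }; // like setLinearVelocity(-2f, 0f)

// Step all three for one second at 60 Hz under a force of -10 units/s^2.
for (var i = 0; i < 60; i++) {
    var dt = 1 / 60;
    stepBody(staticBody, -10, dt);
    stepBody(dynamicBody, -10, dt);
    stepBody(kinematicBody, -10, dt);
}
```

After one simulated second, the static body has not moved, the kinematic body has traveled exactly its commanded -2 units regardless of the force, and the dynamic body has both gained velocity from the force and moved accordingly.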
Kinematic bodies cannot have forces applied, but we can set their velocities via the setLinearVelocity() and setAngularVelocity() methods. See also Introduction to the Box2D physics extension in this article. Using forces, velocities, and torque in this article. Creating category-filtered bodies Depending on the type of physics simulation that we want to achieve, controlling which bodies are capable of colliding can be very beneficial. In Box2D, we can assign a category and category-filter to fixtures to control which fixtures can interact. This recipe will cover the defining of two category-filtered fixtures that will be applied to bodies created by touching the scene to demonstrate category-filtering. Getting ready... Create an activity by following the steps in the Introduction to the Box2D physics extension section given at the beginning of the article. This activity will facilitate the creation of the category-filtered bodies used in this section. How to do it... Follow these steps to build our category-filtering demonstration activity: Define the following class-level variables within the activity:

private int mBodyCount = 0;

public static final short CATEGORYBIT_DEFAULT = 1;
public static final short CATEGORYBIT_RED_BOX = 2;
public static final short CATEGORYBIT_GREEN_BOX = 4;

public static final short MASKBITS_RED_BOX = CATEGORYBIT_DEFAULT + CATEGORYBIT_RED_BOX;
public static final short MASKBITS_GREEN_BOX = CATEGORYBIT_DEFAULT + CATEGORYBIT_GREEN_BOX;

public static final FixtureDef RED_BOX_FIXTURE_DEF = PhysicsFactory.createFixtureDef(1, 0.5f, 0.5f, false, CATEGORYBIT_RED_BOX, MASKBITS_RED_BOX, (short)0);
public static final FixtureDef GREEN_BOX_FIXTURE_DEF = PhysicsFactory.createFixtureDef(1, 0.5f, 0.5f, false, CATEGORYBIT_GREEN_BOX, MASKBITS_GREEN_BOX, (short)0);

Next, create this method within the class that generates new category-filtered bodies at a given location:

private void addBody(final float pX, final float pY) {
    this.mBodyCount++;
    final Rectangle rectangle = new Rectangle(pX, pY, 50f, 50f, this.getVertexBufferObjectManager());
    rectangle.setAlpha(0.5f);
    final Body body;
    if(this.mBodyCount % 2 == 0) {
        rectangle.setColor(1f, 0f, 0f);
        body = PhysicsFactory.createBoxBody(this.mPhysicsWorld, rectangle, BodyType.DynamicBody, RED_BOX_FIXTURE_DEF);
    } else {
        rectangle.setColor(0f, 1f, 0f);
        body = PhysicsFactory.createBoxBody(this.mPhysicsWorld, rectangle, BodyType.DynamicBody, GREEN_BOX_FIXTURE_DEF);
    }
    this.mScene.attachChild(rectangle);
    this.mPhysicsWorld.registerPhysicsConnector(new PhysicsConnector(rectangle, body, true, true));
}

Lastly, fill the body of the onSceneTouchEvent() method with the following code that calls the addBody() method by passing the touched location:

if(this.mPhysicsWorld != null)
    if(pSceneTouchEvent.isActionDown())
        this.addBody(pSceneTouchEvent.getX(), pSceneTouchEvent.getY());

How it works... In the first step, we create an integer, mBodyCount, which counts how many bodies we have added to the physics world. The mBodyCount integer is used in the second step to determine which color, and thus which category, should be assigned to the new body. We also create the CATEGORYBIT_DEFAULT, CATEGORYBIT_RED_BOX, and CATEGORYBIT_GREEN_BOX category bits by defining them with unique power-of-two short integers, and the MASKBITS_RED_BOX and MASKBITS_GREEN_BOX mask bits by adding their associated category bits together. The category bits are used to assign a category to a fixture, while the mask bits combine the different category bits to determine which categories a fixture can collide with. We then pass the category bits and mask bits to the fixture definitions to create fixtures that have category collision rules. The second step is a simple method that creates a rectangle and its corresponding body. 
The method takes the X and Y location parameters that we want to use to create a new body and passes them to a Rectangle object's constructor, to which we also pass a height and width of 50f and the activity's VertexBufferObjectManager. Then, we set the rectangle to be 50 percent transparent using the rectangle.setAlpha() method. After that, we define a body and modulate the mBodyCount variable by 2 to determine the color and fixture of every other created body. After determining the color and fixture, we assign them by setting the rectangle's color and creating a body by passing our mPhysicsWorld physics world, the rectangle, a dynamic body type, and the previously-determined fixture to use. Finally, we attach the rectangle to our scene and register a PhysicsConnector class to connect the rectangle to our body. The third step calls the addBody() method from step two only if the physics world has been created and only if the scene's TouchEvent is ActionDown. The parameters that are passed, pSceneTouchEvent.getX() and pSceneTouchEvent.getY(), represent the location on the scene that received a touch input, which is also the location where we want to create a new category-filtered body. There's more... The default category of all fixtures has a value of one. When creating mask bits for specific fixtures, remember that any combination that includes the default category will cause the fixture to collide with all other fixtures that are not masked to avoid collision with the fixture. See also Introduction to the Box2D physics extension in this article. Understanding different body types in this article.
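Category filtering itself boils down to two bitwise tests: two fixtures may collide only if each one's mask includes the other's category. A JavaScript sketch of that rule (group indices are ignored here for simplicity):

```javascript
// Box2D-style category filtering: each fixture's mask must include
// the other fixture's category for a collision to be allowed.
function shouldCollide(a, b) {
  return (a.maskBits & b.categoryBits) !== 0 &&
         (b.maskBits & a.categoryBits) !== 0;
}

const CATEGORYBIT_DEFAULT = 1, CATEGORYBIT_RED_BOX = 2, CATEGORYBIT_GREEN_BOX = 4;

const red   = { categoryBits: CATEGORYBIT_RED_BOX,   maskBits: CATEGORYBIT_DEFAULT + CATEGORYBIT_RED_BOX };
const green = { categoryBits: CATEGORYBIT_GREEN_BOX, maskBits: CATEGORYBIT_DEFAULT + CATEGORYBIT_GREEN_BOX };
const wall  = { categoryBits: CATEGORYBIT_DEFAULT,   maskBits: 0xFFFF }; // default fixture, collides with all

shouldCollide(red, red);   // true  -- red boxes stack on each other
shouldCollide(red, green); // false -- red and green pass through each other
shouldCollide(red, wall);  // true  -- everything lands on default fixtures
```

This mirrors why the red and green boxes in the recipe fall through one another while both still rest on any default-category body.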
Packt
18 Jan 2013
7 min read
New Connectivity APIs – Android Beam

(For more resources related to this topic, see here.) Android Beam Devices that have NFC hardware can share data by tapping them together. This could be done with the help of the Android Beam feature. It is similar to Bluetooth, as we get seamless discovery and pairing as in a Bluetooth connection. Devices connect when they are close to each other (not more than a few centimeters). Users can share pictures, videos, contacts, and so on, using the Android Beam feature. Beaming NdefMessages In this section, we are going to implement a simple Android Beam application. This application will send an image to another device when two devices are tapped together. There are three methods that are introduced with Android Ice Cream Sandwich that are used in sending NdefMessages. These methods are as follows: setNdefPushMessage() : This method takes an NdefMessage as a parameter and sends it to another device automatically when devices are tapped together. This is commonly used when the message is static and doesn't change. setNdefPushMessageCallback() : This method is used for creating dynamic NdefMessages. When two devices are tapped together, the createNdefMessage() method is called. setOnNdefPushCompleteCallback() : This method sets a callback which is called when the Android Beam is successful. We are going to use the second method in our sample application. Our sample application's user interface will contain a TextView component for displaying text messages and an ImageView component for displaying the received images sent from another device. 
The layout XML code will be as follows:

<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent" >
    <TextView
        android:id="@+id/textView"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_centerHorizontal="true"
        android:layout_centerVertical="true"
        android:text="" />
    <ImageView
        android:id="@+id/imageView"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_below="@+id/textView"
        android:layout_centerHorizontal="true"
        android:layout_marginTop="14dp" />
</RelativeLayout>

Now, we are going to implement, step-by-step, the Activity class of the sample application. The code of the Activity class with the onCreate() method is as follows:

public class Chapter9Activity extends Activity implements CreateNdefMessageCallback {
    NfcAdapter mNfcAdapter;
    TextView mInfoText;
    ImageView imageView;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);
        imageView = (ImageView) findViewById(R.id.imageView);
        mInfoText = (TextView) findViewById(R.id.textView);
        // Check for available NFC Adapter
        mNfcAdapter = NfcAdapter.getDefaultAdapter(getApplicationContext());
        if (mNfcAdapter == null) {
            mInfoText.setText("NFC is not available on this device.");
            finish();
            return;
        }
        // Register callback to set NDEF message
        mNfcAdapter.setNdefPushMessageCallback(this, this);
    }

    @Override
    public boolean onCreateOptionsMenu(Menu menu) {
        getMenuInflater().inflate(R.menu.main, menu);
        return true;
    }
}

As you can see in this code, we check whether the device provides an NfcAdapter. If it does, we get an instance of NfcAdapter. Then, we call the setNdefPushMessageCallback() method to set the callback using the NfcAdapter instance. 
We send the Activity class as a callback parameter because the Activity class implements CreateNdefMessageCallback. In order to implement CreateNdefMessageCallback, we should override the createNdefMessage() method as shown in the following code block:

@Override
public NdefMessage createNdefMessage(NfcEvent arg0) {
    Bitmap icon = BitmapFactory.decodeResource(this.getResources(), R.drawable.ic_launcher);
    ByteArrayOutputStream stream = new ByteArrayOutputStream();
    icon.compress(Bitmap.CompressFormat.PNG, 100, stream);
    byte[] byteArray = stream.toByteArray();
    NdefMessage msg = new NdefMessage(new NdefRecord[] {
        createMimeRecord("application/com.chapter9", byteArray),
        NdefRecord.createApplicationRecord("com.chapter9")
    });
    return msg;
}

public NdefRecord createMimeRecord(String mimeType, byte[] payload) {
    byte[] mimeBytes = mimeType.getBytes(Charset.forName("US-ASCII"));
    NdefRecord mimeRecord = new NdefRecord(NdefRecord.TNF_MIME_MEDIA, mimeBytes, new byte[0], payload);
    return mimeRecord;
}

As you can see in this code, we get a drawable, convert it to a bitmap, and then to a byte array. Then we create an NdefMessage with two NdefRecords. The first record contains the mime type and the byte array. The first record is created by the createMimeRecord() method. The second record contains the Android Application Record (AAR). The Android Application Record was introduced with Android Ice Cream Sandwich. This record contains the package name of the application and increases the certainty that your application will start when an NFC Tag is scanned. That is, the system firstly tries to match the intent filter and AAR together to start the activity. If they don't match, the activity that matches the AAR is started. When the activity is started by an Android Beam event, we need to handle the message that is sent by the Android Beam. 
We handle this message in the onResume() method of the Activity class as shown in the following code block:

@Override
public void onResume() {
    super.onResume();
    // Check to see that the Activity started due to an Android Beam
    if (NfcAdapter.ACTION_NDEF_DISCOVERED.equals(getIntent().getAction())) {
        processIntent(getIntent());
    }
}

@Override
public void onNewIntent(Intent intent) {
    // onResume gets called after this to handle the intent
    setIntent(intent);
}

void processIntent(Intent intent) {
    Parcelable[] rawMsgs = intent.getParcelableArrayExtra(NfcAdapter.EXTRA_NDEF_MESSAGES);
    // only one message sent during the beam
    NdefMessage msg = (NdefMessage) rawMsgs[0];
    // record 0 contains the MIME type, record 1 is the AAR
    byte[] bytes = msg.getRecords()[0].getPayload();
    Bitmap bmp = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
    imageView.setImageBitmap(bmp);
}

As you can see in this code, we first check whether the intent action is ACTION_NDEF_DISCOVERED. This means the Activity class was started due to an Android Beam. If it was, we process the intent with the processIntent() method. We first get the NdefMessage from the intent. Then we get the first record and convert the byte array in the first record to a bitmap using BitmapFactory. Remember that the second record is the AAR; we do nothing with it. Finally, we set the bitmap of the ImageView component. 
The AndroidManifest.xml file of the application should be as follows:

<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.chapter9"
    android:versionCode="1"
    android:versionName="1.0" >
    <uses-permission android:name="android.permission.NFC" />
    <uses-feature android:name="android.hardware.nfc" android:required="false" />
    <uses-sdk
        android:minSdkVersion="14"
        android:targetSdkVersion="15" />
    <application
        android:icon="@drawable/ic_launcher"
        android:label="@string/app_name"
        android:theme="@style/AppTheme" >
        <activity
            android:name=".Chapter9Activity"
            android:label="@string/title_activity_chapter9" >
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
            <intent-filter>
                <action android:name="android.nfc.action.NDEF_DISCOVERED" />
                <category android:name="android.intent.category.DEFAULT" />
                <data android:mimeType="application/com.chapter9" />
            </intent-filter>
        </activity>
    </application>
</manifest>

As you can see in this code, we need to set the minimum SDK to API Level 14 or more in the AndroidManifest.xml file because these APIs are available in API Level 14 or more. Furthermore, we need to set the permissions to use NFC. We also set the uses-feature element in AndroidManifest.xml. The feature is set as not required. This means that our application would be available for devices that don't have NFC support. Finally, we create an intent filter for android.nfc.action.NDEF_DISCOVERED with the mimeType of application/com.chapter9. When a device sends an image using our sample application, the screen will be as follows: Summary In this article, we firstly learned the Android Beam feature of Android. With this feature, devices can send data using the NFC hardware. We implemented a sample Android Beam application and learned how to use Android Beam APIs. 
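The payload handling in processIntent() can be modeled outside Android as plain record selection: find the record carrying our MIME type and take its bytes. The following hedged JavaScript sketch is an illustrative stand-in for the NdefRecord API, not the Android classes themselves; the record objects and the 'android.com:pkg' AAR type string are assumptions for the example:

```javascript
// Illustrative model of an NDEF message: record 0 carries the image bytes
// under our custom MIME type, record 1 is the Android Application Record.
function extractImageBytes(records, mimeType) {
  const record = records.find(r => r.type === mimeType);
  return record ? record.payload : null;
}

const message = [
  { type: 'application/com.chapter9', payload: [0x89, 0x50, 0x4e, 0x47] }, // PNG header bytes
  { type: 'android.com:pkg', payload: 'com.chapter9' },                    // AAR, ignored by us
];

extractImageBytes(message, 'application/com.chapter9'); // the image bytes for the ImageView
```

Just as in the Java code, only the MIME record is consumed by our app; the AAR exists so the system can launch the right package.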
Packt
04 Jan 2013
9 min read
Hooking into native events

(For more resources related to this topic, see here.) Pausing your application Although we want our users to spend their time solely on our applications, they will inevitably leave our application to open another one or do something else entirely. We need to be able to detect when a user has left our application but not closed it down entirely. How to do it... We can use the PhoneGap API to fire off a particular event when our application is put into the background on the device: Create the initial HTML layout for the application, and include the reference to the Cordova JavaScript file in the head tag of the document.

<!DOCTYPE HTML>
<html>
<head>
    <meta name="viewport" content="user-scalable=no, initial-scale=1, maximum-scale=1, minimum-scale=1, width=device-width;" />
    <meta http-equiv="Content-type" content="text/html; charset=utf-8">
    <title>Pausing an application</title>
    <script type="text/javascript" src="cordova-2.0.0.js"></script>
</head>
<body>
</body>
</html>

Before the closing head tag, create a new script tag block and add the event listener to check when the device is ready and the PhoneGap code is ready to run.

<script type="text/javascript">
document.addEventListener("deviceready", onDeviceReady, false);
</script>

Create the onDeviceReady function, which will run when the event listener is fired. Inside this, we'll create a new event listener that will check for a pause event, and once received will fire the onPause method.

function onDeviceReady() {
    document.addEventListener("pause", onPause, false);
}

Let's create the onPause method. In this example application, we'll ask the device to notify the user that the application has moved into the background by playing an audio beep. The numeric parameter specifies how many times we want the audio notification to be played, in this case, just once.

function onPause() {
    navigator.notification.beep(1);
}

Developing for iOS? There is no native beep API for iOS. 
The PhoneGap API will play an audio file using the media API, but the developer must provide the file, named beep.wav and under 30 seconds in length, in the /www directory of the application project files. iOS will also ignore the beep count argument and will play the audio once. If developing for Windows 7 mobile, the WP7 Cordova library contains a generic beep audio file that will be used. When we run the application on the device, if you press the home button or navigate to another application, the device will play the notification audio. How it works... To correctly determine the flow of our lifecycle events, we first set up the deviceready event listener to ensure that the native code was properly loaded. At this point, we were then able to set the new event listener for the pause event. As soon as the user navigated away from our application, the native code would set it into the background processes on the device and fire the pause event, at which point our listener would run the onPause method. To find out more about the pause event, please refer to the official documentation, available here: http://docs.phonegap.com/en/2.0.0/cordova_events_events.md.html#pause. There's more... In this recipe we applied the pause event in an incredibly simple manner. There is a possibility your application will want to do something specific other than sending an audio notification when the user pauses your application. For example, you may want to save and persist any data currently in the view or in memory, such as any draft work (if dealing with form inputs) or saving responses from a remote API call. We'll build an example that will persist data in the next recipe, as we'll be able to quantify its success when we resume the use of the application and bring it back into the foreground. Resuming your application Multi-tasking capabilities that are now available on mobile devices specify that the user has the ability to switch from one application to another at any time. 
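The persistence idea mentioned above can be sketched in a few lines. This is an illustrative model only: the storage object here is a hypothetical stand-in for the device's window.localStorage, and the handlers are simplified versions of the recipe's callbacks:

```javascript
// Sketch of persisting draft work when a pause event fires,
// and restoring it when the application resumes.
function onPauseSave(draftText, storage) {
  if (draftText) {
    storage['saved_input'] = draftText; // survive being backgrounded
  }
}

function onResumeRestore(storage) {
  return storage['saved_input'] || ''; // restore whatever was saved
}

const storage = {};                  // stand-in for window.localStorage
onPauseSave('half-written note', storage);
onResumeRestore(storage);            // gives back 'half-written note'
```

The next recipe applies exactly this pattern with the real localStorage API, which makes its success easy to verify once the application is brought back to the foreground.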
We need to handle this possibility and ensure that we can save and restore any processes and data when the user returns to our application. How to do it... We can use the PhoneGap API to detect when our application is brought back into the foreground on the device. The following steps will help us to do so: Create the initial layout for the HTML and include the JavaScript references to the Cordova and the xui.js files. We will also be setting the deviceready listener once the DOM has fully loaded, so let's apply an onload attribute to the body tag. <!DOCTYPE HTML> <html> <head> <meta name="viewport" content="user-scalable=no, initial-scale=1, maximum-scale=1, minimum-scale=1, width=device-width;" /> <meta http-equiv="Content-type" content="text/html; charset=utf-8"> <title>Resuming an application</title> <script type="text/javascript" src="cordova-2.0.0.js"></script> <script type="text/javascript" src="xui.js"></script> </head> <body onload="onLoad()"> </body> </html> Create a new script tag block before the closing head tag and add the deviceready event listener within the onLoad method. We'll also set two global variables, savedTime, and localStorage, the latter of which will reference the localStorage API on the device: <script type="text/javascript"> var savedTime; var localStorage = window.localStorage; function onLoad() { document.addEventListener("deviceready", onDeviceReady, false); } </script> Create the onDeviceReady function, within which we'll set the two event listeners to check for the pause and resume events, as follows: function onDeviceReady() { document.addEventListener("pause", onPause, false); document.addEventListener("resume", onResume, false); } We can now add the first of the new callback functions for the added listeners. onPause will run when a pause event has been detected. In this method, we'll create a new date variable holding the current time, and store it into the global savedTime variable we created earlier. 
If the user has entered something into the text input field, we'll also take the value and set it into the localStorage API, before clearing out the input field.

function onPause() {
    savedTime = new Date();
    var strInput = x$('#userInput').attr('value');
    if(strInput) {
        localStorage.setItem('saved_input', strInput);
        x$('#userInput').attr('value', '');
    }
}

Define the onResume method, which will run when a resume event has been detected. In this function, we'll save a new date variable and we'll use it in conjunction with the savedTime variable created in the onPause method to generate the time difference between the two dates. We'll then create a string message to display the time details to the user. We'll then check the localStorage for the existence of an item stored using the key saved_input. If this exists, we'll extend the message string and append the saved user input value before setting the message into the DOM to display.

function onResume() {
    var currentTime = new Date();
    var dateDiff = currentTime.getTime() - savedTime.getTime();
    var objDiff = new Object();
    objDiff.days = Math.floor(dateDiff/1000/60/60/24);
    dateDiff -= objDiff.days*1000*60*60*24;
    objDiff.hours = Math.floor(dateDiff/1000/60/60);
    dateDiff -= objDiff.hours*1000*60*60;
    objDiff.minutes = Math.floor(dateDiff/1000/60);
    dateDiff -= objDiff.minutes*1000*60;
    objDiff.seconds = Math.floor(dateDiff/1000);
    var strMessage = '<h2>You are back!</h2>';
    strMessage += '<p>You left me in the background for ';
    strMessage += '<b>' + objDiff.days + '</b> days, ';
    strMessage += '<b>' + objDiff.hours + '</b> hours, ';
    strMessage += '<b>' + objDiff.minutes + '</b> minutes, ';
    strMessage += '<b>' + objDiff.seconds + '</b> seconds.</p>';
    if(localStorage.getItem('saved_input')) {
        strMessage += '<p>You had typed the following before you left:<br /><br />';
        strMessage += '"<b>' + localStorage.getItem('saved_input') + '</b>"</p>';
    }
    x$('#message').html(strMessage);
}

Finally, let's add the DOM elements to the 
application. Create a new div element with the id attribute set to message, and an input text element with the id set to userInput. <body onload="onLoad()"> <div id="message"></div> <input type="text" id="userInput" /> </body> When we run the application on the device, the initial output would provide the user with an input box to enter text, should they wish to, as shown in the following screenshot: If we were to pause the application and then resume it after a period of time, the display would then update to look something like the following screenshot: How it works... We set up the deviceready event listener after the DOM was fully loaded, which would then run the onDeviceReady function. Within this method we then added two new event listeners to catch the pause and resume events respectively. When the application is paused and placed into the background processes on the device, we saved the current date and time into a global variable. We also checked for the existence of any user-supplied input and if it was present we saved it using the localStorage capabilities on the device. When the application was resumed and placed back into the foreground on the device, the onResume method was run, which obtained the time difference between the saved and current datetime values to output to the user. We also retrieved the saved user input from the localStorage if we had set it within the onPause method. To find out more about the resume event, please refer to the official documentation, available here: http://docs.phonegap.com/en/2.0.0/cordova_events_events.md.html#resume.
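The date arithmetic in onResume() can be isolated into a small helper. This sketch mirrors the calculation above: repeatedly divide the millisecond difference down into days, hours, minutes, and seconds:

```javascript
// Break a millisecond difference between two dates into
// days / hours / minutes / seconds, as onResume does.
function breakdown(ms) {
  const days = Math.floor(ms / 1000 / 60 / 60 / 24);
  ms -= days * 1000 * 60 * 60 * 24;
  const hours = Math.floor(ms / 1000 / 60 / 60);
  ms -= hours * 1000 * 60 * 60;
  const minutes = Math.floor(ms / 1000 / 60);
  ms -= minutes * 1000 * 60;
  const seconds = Math.floor(ms / 1000);
  return { days, hours, minutes, seconds };
}

breakdown(90061000); // 1 day, 1 hour, 1 minute, 1 second
```

Keeping the arithmetic in a pure function like this also makes it easy to unit test, which is awkward to do inside a device event callback.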
Packt
02 Jan 2013
4 min read
Page Events

(For more resources related to this topic, see here.) Page initialization events The jQuery Mobile framework provides the page plugin which automatically handles page initialization events. The pagebeforecreate event is fired before the page is created. The pagecreate event is fired after the page is created but before the widgets are initialized. The pageinit event is fired after the complete initialization. This recipe shows you how to use these events. Getting ready Copy the full code of this recipe from the code/08/pageinit sources folder. You can launch this code using the URL http://localhost:8080/08/pageinit/main.html How to do it... Carry out the following steps: Create main.html with three empty <div> tags as follows: <div id="content" data-role="content"> <div id="div1"></div> <div id="div2"></div> <div id="div3"></div> </div> Add the following script to the <head> section to handle the pagebeforecreate event : var str = "<a href='#' data-role='button'>Link</a>"; $("#main").live("pagebeforecreate", function(event) { $("#div1").html("<p>DIV1 :</p>"+str); }); Next, handle the pagecreate event : $("#main").live("pagecreate", function(event) { $("#div1").find("a").attr("data-icon", "star"); }); Finally, handle the pageinit event : $("#main").live("pageinit", function(event) { $("#div2").html("<p>DIV 2 :</p>"+str); $("#div3").html("<p>DIV 3 :</p>"+str); $("#div3").find("a").buttonMarkup({"icon": "star"}); }); How it works... In main.html, add three empty divs to the page content as shown. Add the given script to the page. In the script, str is an HTML string for creating an anchor link with the data-role="button" attribute. Add the callback for the pagebeforecreate event , and set str to the div1 container. Since the page was not yet created, the button in div1 is automatically initialized and enhanced as seen in the following image. Add the callback for the pagecreate event . 
Select the previous anchor button in div1 using the jQuery find() method, and set its data-icon attribute. Since this change was made after the page was created but before the button widget was initialized, the star icon is automatically shown for the div1 button as shown in the following screenshot. Finally, add the callback for the pageinit event and add str to both the div2 and div3 containers. At this point, the page and widgets are already initialized and enhanced. Adding an anchor link will now show it only as a native link without any enhancement for div2, as shown in the following screenshot. But, for div3, find the anchor link and manually call the buttonMarkup method on the button plugin, and set its icon to star. Now when you load the page, the link in div3 gets enhanced as follows: There's more... You can trigger "create" or "refresh" on the plugins to let the jQuery Mobile framework enhance the dynamic changes done to the page or the widgets after initialization. Page initialization events fire only once The page initialization events fire only once. So this is a good place to make any specific initializations or to add your custom controls. Do not use $(document).ready() The $(document).ready() handler only works when the first page is loaded or when the DOM is ready for the first time. If you load a page via Ajax, then the ready() function is not triggered. Whereas, the pageinit event is triggered whenever a page is created or loaded and initialized. So, this is the best place to do post initialization activities in your app. $(document).bind("pageinit", function() {...});
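The fixed ordering of these three events can be sketched with a tiny dispatcher. This is illustrative only; in a real app the jQuery Mobile page plugin fires the events, and we only attach handlers:

```javascript
// Minimal model of the page initialization sequence:
// pagebeforecreate -> pagecreate -> pageinit, each fired exactly once.
const fired = [];
const handlers = {
  pagebeforecreate: () => fired.push('pagebeforecreate'), // page not yet created
  pagecreate:       () => fired.push('pagecreate'),       // created, widgets not initialized
  pageinit:         () => fired.push('pageinit'),         // fully initialized
};

// The framework walks through the lifecycle in this order:
['pagebeforecreate', 'pagecreate', 'pageinit'].forEach(e => handlers[e]());

fired; // ['pagebeforecreate', 'pagecreate', 'pageinit']
```

Work that must run before enhancement belongs in the first handler, widget attribute tweaks in the second, and anything touching fully enhanced markup in the third.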
Packt
30 Nov 2012
9 min read
BatteryMonitor Application

(For more resources related to this topic, see here.) Overview of the technologies The BatteryMonitor application makes reference to two very important frameworks to allow for drawing of graphics to the iOS device's view, as well as composing and sending of e-mail messages, directly within the application. In this article, we will be making use of the Core Graphics framework that will be responsible for handling the creation of our battery gauge to allow the contents to be filled based on the total amount of battery life remaining on the device. We will then use the MessageUI framework that will be responsible for composing and sending e-mails whenever the application has determined that the battery levels fall below the 20 percent threshold. This is all handled and done directly within our app. We will make use of the UIDevice class that will be used to gather the device information for our iOS device. This class enables you to recover device-specific values, including the model of the iOS device that is being used, the device name, and the OS name and version. We will then use the MFMailComposeViewController class object to directly open up the e-mail dialog box within the application. The information that you can retrieve from the UIDevice class is shown in the following table:

System name: This returns the name of the operating system that is currently in use. Since all current generation iOS devices run using the same OS, only one will be displayed; that is, iOS 5.1.

System version: This lists the firmware version that is currently installed on the iOS device; that is, 4.3, 4.31, 5.01, and so on.

Unique identifier: The unique identifier of the iOS device generates a hexadecimal number to guarantee that it is unique for each iOS device, and does this by applying an internal hash to several of its hardware specifiers, including the device's serial number. This unique identifier is used to register the iOS devices at the iOS portal for provisioning of distribution of software apps. Apple is currently phasing out and rejecting apps that access the Unique Device Identifier on an iOS device to solve issues with piracy, and has suggested that you should create a unique identifier that is specific to your app.

Model: The iOS model returns a string that describes its platform; that is, iPhone, iPod Touch, and iPad.

Name: This represents the assigned name of the iOS device that has been assigned by the user within iTunes. This name is also used to create the localhost names for the device, particularly when networking is used.

For more information on the UIDevice class, you can refer to the Apple Developer Documentation that can be found and located at the following URL: https://developer.apple.com/library/ios/#DOCUMENTATION/UIKit/Reference/UIDevice_Class/Reference/UIDevice.html. Building the BatteryMonitor application Monitoring battery levels is a common thing that we do in our everyday lives. The battery indicator on the iPhone/iPad lets us know when it is time for us to recharge our iOS device. In this section, we will look at how to create an application that can run on an iOS device to enable us to monitor battery levels on an iOS device, and then send an e-mail alert when the battery levels fall below the threshold. We first need to create our BatteryMonitor project. It is very simple to create this in Xcode. Just follow the steps listed here. Launch Xcode from the /Xcode4/Applications folder. Choose Create a new Xcode project, or File | New Project. Select the Single View Application template from the list of available templates. Select iPad from under the Device Family drop-down list. Ensure that the Use Storyboard checkbox has not been selected. Select the Use Automatic Reference Counting checkbox. Ensure that the Include Unit Tests checkbox has not been selected. 
Click on the Next button to proceed with the next step in the wizard. Enter in BatteryMonitor as the name for your project. Then click on the Next button to proceed with the next step of the wizard. Specify the location where you would like to save your project. Then, click on the Save button to continue and display the Xcode workspace environment. Now that we have created our BatteryMonitor project, we need to add the MessageUI framework to our project. This will enable us to send e-mail alerts when the battery levels fall below the threshold. Adding the MessageUI framework to the project As we mentioned previously, we need to add the MessageUI framework to our project to allow us to compose and send an e-mail directly within our iOS application, whenever we determine that our device is running below the allowable percentage. To add the MessageUI framework, select Project Navigator Group, and follow the simple steps outlined here: Click and select your project from Project Navigator. Then, select your project target from under the TARGETS group. Select the Build Phases tab. Expand the Link Binary With Libraries disclosure triangle. Finally, use + to add the library you want. Select MessageUI.framework from the list of available frameworks. Now that we have added MessageUI.framework into our project, we need to start building our user interface that will be responsible for allowing us to monitor the battery levels of our iOS device, as well as handle sending out e-mails when the battery levels fall below the agreed threshold. Creating the main application screen The BatteryMonitor application doesn't do anything at this stage; all we have done is created the project and added the MessageUI framework to handle the sending of e-mails when our battery levels are falling below the threshold. We now need to start building the user interface for our BatteryMonitor application. 
This screen will consist of a View controller and some controls to handle setting the number of bars to be displayed, as well as whether the monitoring of the battery should be enabled or disabled. Select the ViewController.xib file from Project Navigator. Set the value of Background of the View controller to Black Color. Next, from Object Library, select-and-drag a (UILabel) Label control, and add this to our view. Modify the Text property of the control to Battery Status:. Modify the Font property of the control to System 42.0. Modify the Alignment property of the control to Center. Next, from Object Library, select-and-drag another (UILabel) Label control, and add this to our view directly underneath the Battery Status label. Modify the Text property of the control to Battery Level:. Modify the Font property of the control to System 74.0. Modify the Alignment property of the control to Center. Now that we have added our label controls to our view controller, our next step is to start adding the rest of our controls that will make up our user interface. So let's proceed to the next section. Adding the Enable Monitoring UISwitch control Our next step is to add a switch control to our view controller; this will be responsible for determining whether or not we are to monitor our battery levels and send out alert e-mails whenever battery life is running low on our iOS device. This can be achieved by following these simple steps: From Object Library, select-and-drag a (UILabel) Label control, and add this to the bottom right-hand corner of our view controller. Modify the Text property of the control to Enable Monitoring:. Modify the Font property of the control to System 17.0. Modify the Alignment property of the control to Left. Next, from Object Library, select-and-drag a (UISwitch) Switch control to the right of the Enable Monitoring label. Next, from the Attributes Inspector section, change the value of State to On. 
Then, change the value of On Tint to Default. Now that we have added our Enable Monitoring switch control to our BatteryMonitor View controller, our next step is to add the Send E-mail Alert switch that will be responsible for sending out e-mail alerts if it has determined that the battery levels have fallen below our threshold. So, let's proceed with the next section. Adding the Send E-mail Alert UISwitch control Now, we need to add another switch control to our view that will be responsible for sending e-mail alerts. This can be achieved by following these simple steps: From Object Library, select-and-drag another (UILabel) Label control, and add this underneath our Enable Monitoring label. Modify the Text property of the control to Send E-mail Alert:. Modify the Font property of the control to System 17.0. Modify the Alignment property of the control to Left. Next, from Object Library, select-and-drag a (UISwitch) Switch control to the right of the Send E-mail Alert label. Next, from the Attributes Inspector section, change the value of State to On. Then, change the value of On Tint to Default. To duplicate a UILabel and/or UISwitch control and have them retain the same attributes, you can use the keyboard shortcut Command + D. You can then update the Text label for the newly added control. Now that we have added our Send E-mail Alert switch control to our BatteryMonitor view controller, our next step is to add the Fill Gauge Levels switch that will be responsible for filling our battery gauge when it has been set to ON. Adding the Fill Gauge Levels UISwitch control Now, we need to add another switch control to our view that will be responsible for determining whether our gauge should be filled to show the amount of battery remaining. This can be achieved by following these simple steps: From Object Library, select-and-drag another (UILabel) Label control, and add this underneath our Send E-mail Alert label. Modify the Text property of the control to Fill Gauge Levels:. 
Modify the Font property of the control to System 17.0. Modify the Alignment property of the control to Left. Next, from Object Library, select-and-drag a (UISwitch) Switch control to the right of the Fill Gauge Levels label. Next, from the Attributes Inspector section, change the value of State to On. Then, change the value of On Tint to Default. Now that we have added our Fill Gauge Levels switch control to our BatteryMonitor view controller, our next step is to add the Increment Bars stepper that will be responsible for increasing the number of bar cells within our battery gauge.
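The gauge itself will be drawn later, but the arithmetic behind "bar cells" is worth pinning down now: given a battery level and a total number of cells, how many cells should be drawn filled? A hypothetical sketch in JavaScript follows; the function name and rounding choice are mine, not from the book's project:

```javascript
// Hypothetical sketch of the gauge arithmetic: map a battery level
// (0.0–1.0 fraction, negative when unknown) to a count of filled bar cells.
function filledBars(level, totalBars) {
  if (level < 0) return 0; // level unknown: draw an empty gauge
  return Math.min(totalBars, Math.round(level * totalBars));
}

console.log(filledBars(0.5, 10));  // 5
console.log(filledBars(1.0, 10));  // 10
console.log(filledBars(0.26, 10)); // 3
```

Increasing the number of bars via the stepper simply changes totalBars, so a finer-grained gauge needs no other code changes.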
Using Maps in your Windows Phone App
Packt
30 Jul 2012
6 min read
  (For more resources on Windows Phone, see here.) Understanding map geometry Windows Phone 7.5 supports two methods of map display in your mobile app: Bing Maps Silverlight Control for Windows Phone Bing Maps task Launcher   Before we delve into the methods, actions, and tasks of the Windows Phone Bing Maps Silverlight Control or the Bing Maps task Launcher, it is a good idea to get acquainted with the background of map geometry and how it works for Bing Maps. If you have a background in Computer Science, then you would be aware of keywords such as projection, trajectory, coordinate systems, raster and scalable graphics. If you are not from a Computer Science background, then a basic understanding of the Bing Maps API can be found at http://msdn.microsoft.com/en-us/library/ff428643.aspx. This should be good to get you started with Bing Maps. Bing Maps uses the Mercator projection model of converting the Earth's sphere into a corresponding flat surface, grid-based, parallel map. In such a projection the longitude lines are parallel, and hence the land mass further from the equator tends to be distorted. However, the Mercator projection works well for navigational purposes, and therefore, despite the drawbacks, it is still used today. The Mercator projection offers two compelling advantages: The map scale is constant around any position. Mercator projection is a cylindrical projection. North and south are straight up and down, while west and east are always left and right respectively. (This helps in keeping track of your course in navigation.)   The following diagrams should give you a good idea about the Mercator projection:   Earth's surface as a sphere diagram courtesy Michael Pidwirny from http://www.eoearth.org/article/Maps and http://www.physicalgeography.net/fundamentals/2a.html. Mercator projection of the Earth's surface diagram courtesy Michael Pidwirny from http://www.eoearth.org/article/Maps and http://www.physicalgeography.net/fundamentals/2a.html. 
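The projection described above can also be sketched numerically. The following JavaScript (the same language used in this collection's touch-event examples) follows the formulas in Microsoft's Bing Maps Tile System documentation linked below; the function name and the floor-based rounding are simplifications of mine rather than Bing's exact clipping code:

```javascript
// Simplified sketch of the Mercator math behind the Bing Maps tile system:
// convert a latitude/longitude pair into absolute pixel coordinates at a
// given level of detail. The map at level n is a square of 256 * 2^n pixels.
function latLongToPixelXY(latitude, longitude, levelOfDetail) {
  var mapSize = 256 * Math.pow(2, levelOfDetail);
  var x = (longitude + 180) / 360;
  var sinLat = Math.sin(latitude * Math.PI / 180);
  var y = 0.5 - Math.log((1 + sinLat) / (1 - sinLat)) / (4 * Math.PI);
  return {
    x: Math.floor(x * mapSize),
    y: Math.floor(y * mapSize)
  };
}

// At level 1 the world is a 512 x 512 pixel square, so the intersection of
// the equator and the prime meridian lands at the centre of the map.
console.log(latLongToPixelXY(0, 0, 1)); // { x: 256, y: 256 }
```

Note how y stretches as latitude grows: that logarithm is exactly why land masses far from the equator look inflated on a Mercator map.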
The world map is pre-rendered at many different levels of detail and cut into tiles for quick retrieval. When you zoom in or out on your Bing Maps, it simply loads different tiles at different levels of detail. To read more about the Bing Maps Tile System, please see the following MSDN link: http://msdn.microsoft.com/en-us/library/bb259689.aspx Overview of the Windows Phone Bing Maps Silverlight Control The Bing Maps Silverlight Control for Windows Phone 7.5 is a port of the desktop version of the Silverlight Map Control, which provides full mapping capabilities on the Windows Phone 7.5 device. Before using the Bing Maps control you need to get an application key from Microsoft's Bing Maps portal at: https://www.bingmapsportal.com/. The Microsoft.Phone.Controls.Maps namespace contains the classes of the Bing Maps Silverlight Control for Windows Phone. Let us quickly see an example of using maps in our WP7.5 app. Using maps in your Windows Phone 7.5 app – Hello Maps We will now create a new application titled HelloMaps that shows the Windows Phone Bing Maps Silverlight Control in action: Launch Microsoft Visual Studio 2010 Express for Windows Phone. Create a new Project from the File | New Project menu option and name it HelloMaps. Add the Map control to your app by selecting it from the Toolbox. Change the Application Title to Hello Maps and the Page Title to Bing Maps. Your project should now look like the following screenshot: If you run the app now it will show the following output, as depicted in the next screenshot: Invalid Credentials. Sign up for a developer account at: http://www.microsoft.com/maps/developers This is because we have not signed up for a map key from https://www.bingmapsportal.com/. Let's do so. Visit https://www.bingmapsportal.com/ and sign up or log in with your Windows Live ID. Create your application and store the map key in a safe place. 
Now that we have our key (for safety reasons we assume xxxxxxxxxxxxxx as the key), let us initialize our Map control with the same. Notice the XAML tag <my:Map> when you added the Map control to your application. Add the key we got from step 7 by using the CredentialsProvider attribute of the Bing Maps Silverlight Control. Also change the name of the map to myMap.

<Grid x:Name="ContentPanel" Grid.Row="1" Margin="12,0,12,0">
    <my:Map Height="595" CredentialsProvider="xxxxxxxxxxxxxx"
            HorizontalAlignment="Left" Margin="6,6,0,0" Name="myMap"
            VerticalAlignment="Top" Width="444" />
</Grid>

Running the app in the emulator now will not show the Invalid Credentials message we saw earlier. Now let us make our application more exciting. We will add an Application Bar to our Hello Maps application that will allow us to choose the map mode: Road Mode or Aerial Mode. In your MainPage.xaml uncomment the following lines that add a default application bar to your application:

<!--Sample code showing usage of ApplicationBar-->
<!--<phone:PhoneApplicationPage.ApplicationBar>
    <shell:ApplicationBar IsVisible="True" IsMenuEnabled="True">
        <shell:ApplicationBarIconButton IconUri="/Images/appbar_button1.png" Text="Button 1"/>
        <shell:ApplicationBarIconButton IconUri="/Images/appbar_button2.png" Text="Button 2"/>
        <shell:ApplicationBar.MenuItems>
            <shell:ApplicationBarMenuItem Text="MenuItem 1"/>
            <shell:ApplicationBarMenuItem Text="MenuItem 2"/>
        </shell:ApplicationBar.MenuItems>
    </shell:ApplicationBar>
</phone:PhoneApplicationPage.ApplicationBar>-->

Modify it to look like the following:

<phone:PhoneApplicationPage.ApplicationBar>
    <shell:ApplicationBar IsVisible="True" IsMenuEnabled="True">
        <shell:ApplicationBar.MenuItems>
            <shell:ApplicationBarMenuItem Text="Aerial Mode"/>
            <shell:ApplicationBarMenuItem Text="Road Mode"/>
        </shell:ApplicationBar.MenuItems>
    </shell:ApplicationBar>
</phone:PhoneApplicationPage.ApplicationBar>

With your code editor open, go to the Aerial Mode Application Bar Menu Item and 
before the Text property enter Click="". IntelliSense will prompt you with <New Event Handler> as shown in the following screenshot. Select it. Do the same for the other Application Bar Menu Item. Your code should now be as follows:

<shell:ApplicationBarMenuItem Click="ApplicationBarMenuItem_Click" Text="Aerial Mode"/>
<shell:ApplicationBarMenuItem Click="ApplicationBarMenuItem_Click_1" Text="Road Mode"/>

Open your MainPage.xaml.cs file and you will find the two click event functions created automatically: ApplicationBarMenuItem_Click and ApplicationBarMenuItem_Click_1. As the first menu item is for Aerial Mode, we set the map mode to Aerial Mode by using the following code in the ApplicationBarMenuItem_Click function:

private void ApplicationBarMenuItem_Click(object sender, EventArgs e)
{
    myMap.Mode = new AerialMode();
}

Note the myMap variable was assigned to the Map control in step 8. Similarly, we do the same for the ApplicationBarMenuItem_Click_1 function; however, here we set the mode to Road by using the following code:

private void ApplicationBarMenuItem_Click_1(object sender, EventArgs e)
{
    myMap.Mode = new RoadMode();
}

Run the application in the emulator and click on the three dots you see on the lower right-hand side of your application footer. This invokes the Application Bar and your app screen should look like the following screenshot: Select the aerial mode menu item and see your map change to Aerial Mode in real time. You can switch back to Road Mode by selecting the road mode menu item again.  
Getting Started with LiveCode for Mobile
Packt
27 Jul 2012
8 min read
(For more resources on Mobile Development, see here.) iOS, Android, or both? It could be that you only have an interest in iOS or only in Android. You should be able to easily see where to skip ahead to, unless you're intrigued about how the other half lives! If, like me, you're a capitalist, then you should be interested in both OSes. Far fewer steps are needed to get the Android SDK than to get the iOS developer tools, because of having to sign up as a developer with Apple, but the configuration for Android is more involved. We'll go through all the steps for Android and then the ones for iOS. If you're an iOS-only kind of person, skip the next few sections, picking up again at the Becoming an iOS Developer section. Becoming an Android developer It is possible to develop Android OS apps without having to sign up for anything, but we'll try to be optimistic and assume that within the next 12 months, you will find time to make an awesome app that will make you rich! To that end, we'll go over what is involved in signing up to publish your apps in both the Android Market and the Amazon Appstore. Android Market A good starting location would be http://developer.android.com/. You will be back here shortly to download the Android SDK, but for now, click on the Learn More link in the Publish area. There will be a sign-in screen; sign in using your usual Google details. Which e-mail address to use? Some Google services are easier to sign up for, if you have a Gmail account. Creating a Google+ account, or signing up for some of their Cloud services, requires a Gmail address (or so it seemed to me at the time!). If you have previously set up Google Checkout as part of your account, some of the steps in the sign-up process become simpler. So, use your Gmail address, and if you don't have one, create one! Google charges a $25 fee for you to sign up for Android Market. At least you find out about this right away! 
Enter the values for Developer Name, Email Address, Website URL (if you have one), and Phone Number. The payment of the $25 can be done through Google Checkout. Using Google Checkout saves you from having to enter in your billing details, each time. Hopefully you won't guess the other 12 digits of my credit card number! Finally, you need to agree to the Android Market Developer Distribution Agreement. You're given an excuse to go and make some coffee… Some time later, you're all signed up and ready to make your fortune!   Amazon Appstore Whereas the rules and costs for the Google Android Market are fairly relaxed, Amazon has taken a more Apple-like approach, both in the amount they charge you to register and in the review process for accepting app submissions. The starting page is http://developer.amazon.com/home.html.   When you click on Get Started, you will be asked to sign into your Amazon account. Which e-mail address to use? This feels like déjà vu! There is no real advantage in using your Google e-mail address when signing up for the Amazon Appstore Developer Program, but if you happen to have an account with Amazon, sign in with that one. It will simplify the payment stage, and your developer account and general Amazon account will be associated with each other. You are asked to agree to the APPSTORE DISTRIBUTION AGREEMENT terms before learning about the costs. Those costs are $99 per year, but the first year is free. So that's good! Unlike the Google Android Market, Amazon asks for your bank details upfront, ready to send you lots of money later, we hope! That's it; you're ready to make another fortune, to go along with the one that Google sends you!   Downloading the Android SDK Head back over to http://developer.android.com/, and click on the Download link, or go straight to http://developer.android.com/sdk/index.html. 
In this book, we're only going to cover Windows and Mac OS X (Intel), and only as much as is needed to make LiveCode work with the Android and iOS SDKs. If you intend to do native Java-based applications, then you may be interested in reading through all of the steps that are described in the web page: http://developer.android.com/sdk/installing.html Click on the Download link for your platform. The steps you'll have to go through are different for Mac and Windows. Let's start with Mac. Installing Android SDK on Mac OS X (Intel) LiveCode itself doesn't require an Intel Mac; you can develop stacks using a PowerPC-based Mac, but both the Android SDK and some of the iOS tools require an Intel-based Mac, which sadly means that if you're reading this as you sit next to your Mac G4 or G5, then you're not going to get too far! The file that you just downloaded will automatically expand to show a folder named android-sdk-macosx. It may be in your downloads folder right now; a more natural place for it would be in your Documents folder, so move it there before performing the next steps. There is an SDK Readme text file that lists the steps you will need to take. If those steps are different to what we have here, then follow the steps in the Readme, in case they have been updated since the steps shown here were written. Open the application Terminal, which is in Applications/Utilities. You need to change directories to be located in the android-sdk-macosx folder. One handy trick in Terminal is that you can drag items into the Terminal window to get the file path to that item. Using that trick, you can type cd and a space into the Terminal window, then drag the android-sdk-macosx folder just after the space character. You'll end up with this line: new-host-3:~ colin$ cd /Users/colin/Documents/android-sdk-macosx Of course, the first part of the line and the user folder will match yours, not mine! The rest will look the same. 
Here's how it would look for a user named fred: new-host-3:~ fred$ cd /Users/fred/Documents/android-sdk-macosx Whatever your name is, press the Return or Enter key after entering that line. The location line changes to look similar to the following: new-host-3:android-sdk-macosx colin$ Either type carefully or copy and paste this line from the Readme file: tools/android update sdk --no-ui Press Return or Enter again. How long the downloads take will depend on your Internet connection. Even with a very fast Internet connection, it will still take over an hour. Installing Android SDK on Windows The downloads page recommends using the exe download link, and that will do extra things, such as check whether you have the Java Development Kit (JDK) installed. When you click on the link, use either the Run or Save options as you would with any download of a Windows installer. Here we opted to use Run. If you do use Save, then you will need to open the file after it has saved to your hard drive. In the following case, as the JDK wasn't installed, a dialog box appears saying go to Oracle's site to get the JDK: If you see this, then you can leave the dialog box open and click on the Visit java.oracle.com button. On the Oracle page, you have to click on a checkbox to agree to their terms, and then on the Download link that corresponds with your platform. Choose the 64-bit option if you are running a 64-bit version of Windows, or the x86 option if you are running 32-bit Windows. Either way, you're greeted with another installer to Run or Save, as you prefer. Naturally, it takes a while for that installer to do its thing too! When the installation completes, you will see a JDK registration page; it's up to you if you register or not. Back at the Android SDK installer dialog box, you can click on the Back button, and then the Next button to get back to that JDK checking stage; only now it sees that you have the JDK installed. 
Complete the remaining steps of the SDK installer, as you would with any Windows installer. One important thing to notice: the last screen of the installer offers to open the SDK Manager. You do want to do that, so resist the temptation to uncheck that box! Click on Finish, and you'll be greeted with a command-line window for a few moments, then the Android SDK Manager will appear and do its thing. As with the Mac version, it takes a very long time for all these add-ons to download. Pointing LiveCode to the Android SDK After all that installation and command-line work, it's a refreshing change to get back into LiveCode! Open the LiveCode Preferences, and choose Mobile Support. We will set the two iOS entries after getting iOS going (on Mac, that is; these options will be grayed out on Windows). For now, click on the … button next to the Android development SDK root field, and navigate to where the SDK has been installed. If you followed the earlier steps correctly, then it will be in the Documents folder on Mac, or in C:\Program Files (x86)\Android on Windows (or somewhere else if you chose to use a custom location). Phew! Now, let's do iOS…
Tabula Rasa: Nurturing your Site for Tablets
Packt
09 Mar 2012
16 min read
The human touch There's a reason touchscreen interfaces were rarely used before Apple re-invented them in the iPhone: programming them is very difficult. With a mouse-driven interface you have a single point of contact: the mouse's pointer. With a touchscreen, you potentially have ten points of contact, each one with a separate motion. And you also have to deal with limiting spurious input when the user touches the tablet accidentally. Does the user's swipe downward mean they want to scroll the page or to drag a single page element? The questions go on to infinity. With this article, we stand on the shoulders of those giants who have done the heavy lifting and given us a JavaScript interface that registers touches and gestures for use in our web pages. Many Bothans died to bring us this information. To understand the tablet is to understand the touch interface, and in order to understand the touch interface, we need to learn how touch events differ from mouse events. But that begs the question: what is an event? The event-driven model Many developers use JavaScript-based events without the slightest clue of what they can do or how powerful they are. In addition, many developers get into situations where they don't know why their events are misfiring or, worse yet, bubbling to other event handlers and causing a cascade of event activity. As you may or may not know, an HTML document is made up of a series of tags organized in a hierarchical structure. In JavaScript, this document is referred to through the reserved word document. Simple enough, right? Well, what if I want to interact with a tag inside of a document, and not the document as a whole? Well, for that we need a way of addressing nested items inside the main <html> tag. For that, we use the Document Object Model (DOM). 
DOM is a cross-platform and language-independent convention for representing and interacting with objects in HTML, XHTML, and XML documents. Aspects of the DOM (such as its elements) may be addressed and manipulated within the syntax of the programming language in use. The public interface of a DOM is specified in its Application Programming Interface (API). For more details on DOM, refer to the Wikipedia document at: http://en.wikipedia.org/wiki/Document_Object_Model. The body of that document then becomes document.body. The head of the document, likewise, becomes document.head. Now, what happens when your mouse interacts with this web page? This is said to be a DOM event. When you click, the elements that are the receivers of that action are said to propagate the event through the DOM. In the early days, Microsoft and Netscape/Firefox had competing ways of handling those events. But they finally gave way to the modern W3C standard, which unifies the two approaches. Even more importantly, jQuery has done a lot to standardize the way we think about events and event handling. In most browsers today, mouse events are pretty standardized, as we are now more than 20 years into the mouse-enabled computing era. For tablets and touchscreen phones, obviously, there is no mouse. There are only your fingers to serve the purpose of the mouse. And here's where things get simultaneously complicated as well as simple. Touch and go Much of what we talk about as touch interaction is made up of two distinct types of touches—single touches and gestures. A single touch is exactly that: one finger placed on the screen from the start till the end. A gesture is defined as one or more fingers touching the surface and accompanied by a specific motion: Touch + Motion. To unlock most tablets, you swipe your finger across a specific area. To scroll inside a div element, you use two fingers pushing up and down. 
In fact, scrolling itself is a gesture and tablets only respond to the scroll event once it's over. We will cover more on that later. Gestures have redefined user interaction. I wonder how long it took for someone to figure out that zooming in and out is best accomplished with a pinch of the fingers? It seems so obvious once you do it and it immediately becomes second nature. My mom was pinching to zoom on her iPhone within the first 5 minutes of owning it. Touch events are very similar to multiple mouse events without a hover state. There is no response from the device when a finger is over the device but has not pressed down. There is an effort on the part of many mobile OS makers to simulate the hover event by allowing the hover event to trigger with the first click, and the click event to trigger with the second click on the same object. I would advise against using it for any meaningful user interaction as it is inconsistently implemented, and many times the single click triggers the link as well as the hover-reveal in drop-down menus. Not using the hover event to guide users through navigation changes the way we interact with a web page. Much of the work we've done to guide users through our pages is based on the hover-response event model to clue users in on where links are. We have to get beyond that. Drop-down menus quickly become frustrating at the second and third levels, especially if the click and hover events were incorrectly implemented in the desktop browser. Forward and back buttons are rendered obsolete by forward and backward swipe gestures. The main event There are basically three touch events—touchstart, touchmove, and touchend. Gesture events are, likewise: gesturestart, gesturechange, and gestureend. All gestures register a touch event but not all touch events register gestures. 
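The three touch events can be seen in miniature before we tackle the full Drupal example. The sketch below assumes a browser environment for the listener wiring, but keeps the swipe classification in a pure helper so it can be reasoned about (and tested) anywhere; the left/right convention follows the same subtraction used later in this article's direction logic:

```javascript
// Classify a single-finger swipe from its start and end coordinates.
// Positive h means the finger moved toward smaller x, i.e. a "left" swipe.
function swipeDirection(start, end) {
  var h = start.x - end.x;
  var v = start.y - end.y;
  if (Math.abs(h) >= Math.abs(v)) {
    return (h < 0) ? "right" : "left";
  }
  return (v < 0) ? "down" : "up";
}

// Browser wiring (skipped when no DOM is present, e.g. under Node):
if (typeof document !== "undefined") {
  var origin = null;
  document.addEventListener("touchstart", function (evt) {
    if (evt.touches.length !== 1) { return; } // leave multi-finger gestures alone
    origin = { x: evt.touches[0].clientX, y: evt.touches[0].clientY };
  });
  document.addEventListener("touchend", function (evt) {
    if (origin === null) { return; }
    var t = evt.changedTouches[0];
    console.log(swipeDirection(origin, { x: t.clientX, y: t.clientY }));
    origin = null;
  });
}

console.log(swipeDirection({ x: 200, y: 100 }, { x: 40, y: 110 })); // "left"
```

Note that touchend carries the final coordinates in changedTouches, not touches, because by then the finger has already left the screen.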
Gestures are registered when multiple fingers make contact with the touch surface and register significant location change in a concerted effort, such as two or more fingers swiping, a pinch action, and so on. In general, I've found it a good practice to use touch events to register finger actions; but it is required to return null on a touch event when there are multiple fingers involved and to handle such events with gestures. jQuery mobile has a nice suite of touch events built into its core that we can hook into. But jQuery and jQuery mobile sometimes fall short of the interaction we want to have for our users, so we'll outline best practices for adding customized user touch events to both the full and mobile version of the demo site. Let's get started… Time for action – adding a swipe advance to the home page The JavaScript to handle touch events is a little tricky; so, pay attention: Add the following lines to both sites/all/themes/dpk/js/global.js and sites/all/themes/dpk_mobile/js/global.js:

Drupal.settings.isTouchDevice = function() {
  return "ontouchstart" in window;
}

if (Drupal.settings.isTouchDevice()) {
  Drupal.behaviors.jQueryMobileSlideShowTouchAdvance = {
    attach: function(context, settings) {
      self = Drupal.behaviors.jQueryMobileSlideShowTouchAdvance;
      jQuery.each(jQuery(".views_slideshow_cycle_main.viewsSlideshowCycle-processed"),
        function(idx, value) {
          value.addEventListener("touchstart", self.handleTouchStart);
          jQuery(value).addClass("views-slideshow-mobile-processed");
        })
      jQuery(self).bind("swipe", self.handleSwipe);
    },
    detach: function() { },
    original: { x: 0, y: 0 },
    changed: { x: 0, y: 0 },
    direction: { x: "", y: "" },
    fired: false,
    handleTouchStart: function(evt) {
      self = Drupal.behaviors.jQueryMobileSlideShowTouchAdvance;
      if (evt.touches) {
        if (evt.targetTouches.length != 1) { return false; }
        if (evt.touches.length) { evt.preventDefault(); evt.stopPropagation() }
        self.original = { x: evt.touches[0].clientX, y: evt.touches[0].clientY }
        self.target = jQuery(this).attr("id").replace("views_slideshow_cycle_main_", "");
        Drupal.viewsSlideshow.action({ "action": "pause", "slideshowID": self.target });
        evt.target.addEventListener("touchmove", self.handleTouchMove);
        evt.target.addEventListener("touchend", self.handleTouchEnd);
      }
    },
    handleTouchMove: function(evt) {
      self = Drupal.behaviors.jQueryMobileSlideShowTouchAdvance;
      self.changed = {
        x: (evt.touches.length) ? evt.touches[0].clientX : evt.changedTouches[0].clientX,
        y: (evt.touches.length) ? evt.touches[0].clientY : evt.changedTouches[0].clientY
      };
      var h = parseInt(self.original.x - self.changed.x),
          v = parseInt(self.original.y - self.changed.y);
      if (h !== 0) { self.direction.x = (h < 0) ? "right" : "left"; }
      if (v !== 0) { self.direction.y = (v < 0) ? "up" : "down"; }
      jQuery(self).trigger("swipe");
    },
    handleTouchEnd: function(evt) {
      self = Drupal.behaviors.jQueryMobileSlideShowTouchAdvance;
      evt.target.removeEventListener("touchmove", self.handleTouchMove);
      evt.target.removeEventListener("touchend", self.handleTouchEnd);
      self.fired = false;
    },
    handleSwipe: function(evt) {
      self = Drupal.behaviors.jQueryMobileSlideShowTouchAdvance;
      if (evt != undefined && self.fired == false) {
        Drupal.viewsSlideshow.action({
          "action": (self.direction.x == "left") ? "nextSlide" : "previousSlide",
          "slideshowID": self.target
        });
        self.fired = true; // only fire advance once per touch
      }
    }
  }
}

Clear Drupal's cache by either navigating to Configuration | Performance and clicking on the Clear cache button, or entering these lines in a terminal:

cd ~/sites/dpk/
drush cc all

Navigate to either home page with a touch-enabled device and you should be able to advance the home page slideshow with your fingers. What just happened? Let's take a look at how this code works. First, we have a function, isTouchDevice. This function returns true or false depending on whether touch events are enabled in the browser.
We use an if statement to wall off the touchscreen code, so browsers that aren't capable don't register an error. The Drupal behavior jQueryMobileSlideShowTouchAdvance has the attach and detach functions to satisfy the Drupal behavior API. In each function, we locally assign the self variable with the value of the entire object. We'll use this in place of the this keyword. In the Drupal behavior object, this can sometimes ambiguously refer to the entire object, or to the current sub-object. In this case, we want the reference to be to just the sub-object, so we assign it to self. The attach function grabs all slideshow_cycle div elements in a jQuery each loop. The iteration of the loop adds an event listener to the div tag. It's important to note that the event listener is not bound with jQuery event binding. jQuery event binding does not yet support touch events. There's an effort to add them, but they are not in the general release that is used with Drupal 7. We must then add them with the browser-native function, addEventListener. We use the handleTouchStart method to respond to the touchstart event. We will add touchend and touchmove events after the touchstart is triggered. The other event that we're adding listens to this object for the swipe event. This is a custom event we will create that will be triggered when a swipe action happens. We will cover more on that shortly. The detach function is used to add cleanup to items when they are removed from the DOM. Currently, we have no interaction that removes items from the DOM, and therefore no cleanup is necessary for that removal to take place. Next, we add some defaults—original, changed, direction, and fired. We'll use those properties in our event response methods. The handleTouchStart handler is fired when the finger first touches the surface. We make sure the evt.touches object has a value and that there is only one touch. We want to disregard touches that are gestures. 
Also, we use preventDefault and stopPropagation on the event to keep it from bubbling up to other items in the DOM. self.original is the variable that will hold the touch's original coordinates; we store the values for touch[0] there. We also name the target by getting the DOM ID of the cycle containing the div element. We can use string transforms on that ID to obtain the ID of the jQuery cycle being touched, and will use that value when we send messages to the slideshow based on the touch actions, as we do in the next line: we tell the slideshow to pause normal activity while we figure out what the user wants. To figure that out, we add touchmove and touchend listeners to the div element.

handleTouchMove figures out the changed touch value by looking at the clientX and clientY values in the touch event. Some browsers support the changedTouches array, which does some of the calculation of how much the touch has changed since the last event was triggered. If it's available, we use it; otherwise we use the X and Y coordinates in the touch event's touches array. We do some subtraction against the original touch to find out how much the touch has changed and in what direction. We store the direction of the change in self.direction, and tell the world that a swipe has begun on our div element by triggering a custom event on our self object.

If you remember, we used the handleSwipe method to respond to the swipe event. In handleSwipe, we make sure the event has not already fired. If it hasn't, we use the swipe direction to trigger a next or previous action on our jQuery cycle slideshow. Once we've fired the event, we change self.fired to true so it will only fire once per touch.
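The delta arithmetic in handleTouchMove can be seen in isolation as a pure function (hypothetical name, same logic as the handler): a positive horizontal difference between the original and changed coordinates means the finger travelled left, and a negative one means right.

```javascript
// Computes swipe direction from the original and changed touch points,
// using the same subtraction and comparisons as handleTouchMove.
function swipeDirection(original, changed) {
  var direction = {};
  var h = parseInt(original.x - changed.x, 10);
  var v = parseInt(original.y - changed.y, 10);
  if (h !== 0) {
    direction.x = (h < 0) ? "right" : "left";
  }
  if (v !== 0) {
    direction.y = (v < 0) ? "up" : "down";
  }
  return direction;
}
```

A drag from x=200 to x=120 yields a direction of "left", which handleSwipe maps to the nextSlide action.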
In the touchend responder, handleTouchEnd, we remove both the touchmove and touchend listeners and reset the fired state. But adding the touch events to both the desktop and the mobile themes begs the question, "Into which category does the tablet fall?"

Have a go hero – adding a swipe gesture

Add a swipe gesture event to the Menu Item page that allows you to scroll through menu items.

The changing landscape (or portrait)

Responsive web design is a design discipline that holds that the same markup should be used for both desktop and mobile screens, with the browser managing the display of items rather than the user choosing an experience. If the screen is smaller, the layout adjusts and the content emphasis remains. Conversely, the popularity of Internet-connected game consoles and DVI ports on large-screen televisions gives us yet another paradigm for web pages: the large screen. I sit in front of a 72" TV screen, connect it to either my laptop or iPad, and I have a browsing experience that is more passive, but completely immersive.

Right now, I bet you're thinking, "So which is it, Mr. Author, two sites or one?" Well, both, actually. In some cases, with some interactions, it will be necessary to build two site themes and maintain them both. In other cases, when you can start from scratch, you can produce a single design that works on every browser screen size. Let's start over and put responsive design principles to work with what we already know about media queries and touch interfaces.

"Starting over" or "Everything you know about designing websites is wrong"

Responsive web design forces the designer to start over: to forget the artificial limitations that print sizes impose and to start with a blank canvas. Once that blank canvas is in place, though, how do you fill it? How do you create "The One True Design" (cue the theme music)? This book is not a treatise on how to create the perfect design.
For that, I can recommend A Book Apart and anything published by smashingmagazine.com. Currently, they are at the forefront of this movement and regularly publish ideas and information that are helpful without too much technical jargon. No, this book is more about giving you strategies to implement the designs you're given, or that you create, using Drupal. In point of fact, responsive design, at the time of writing, is in its infancy and will change significantly over the next 10 years, as new technology forces us to rethink our assumptions about what books, television, and movies are, and what the Web is.

So suffice it to say, it begins with content. Prioritizing content is the job of the designer. The items you want the user to perceive first, second, and third are the organizing structure of your responsive design makeover. In most instances, it's helpful to present the web developer with four views of the website.

Wireframing made easy

Start with wireframes. A great wireframing tool is called Balsamiq. It has a purposefully "rough" look to all of the elements you use; that way, it makes you focus on the elements and leave the design for a later stage. It's also helpful for focusing clients on the elements. Many times the stakeholders see a mockup and immediately begin the discussion of "I like blue but I don't like green; I like this font, but don't like that one." It can be difficult to move the stakeholders out of this mindset, but presenting them with black-and-white, chalk-style drawings of website elements can, in many cases, be helpful. Balsamiq is a great tool for doing just that.

These wireframes were created with Balsamiq, but could have been created in almost any primitive drawing program; there are many free ones, as well as more specialized paid ones. A simple layout like this is very easy to plan and implement, but very few of the websites you develop will ever be this simple.
Let's say, for instance, that the menu item we have not yet implemented is for online ordering. How does that work? What do those screens look like? At this point we have a Menu page but, as per this mockup, that menu page will become the online ordering section. How do we move the menu items we created to a place where they can be put into an order and paid for? And more importantly, how does each location know what was ordered from it?

These are questions that come up in the mockup and requirements phase, and whether you are building the site yourself or being given requirements by a superior or a client, you now have a better idea of the challenges you will face implementing a single design for this site. With that, we've been given these mockups for the new online ordering system.

The following mockup diagram is for adding an order:

The following mockup diagram is for placing an order:

We'll implement these mockups using the Drupal 7 Commerce module. The Commerce module is a series of customized data entities and views that we can use as the building blocks of the commerce portion of the site. We'll theme the views in the standard Drupal way, but with an eye to multi-width screens and the lack of a hover state, and keeping in mind "hit zones" for fingers on small mobile devices. We'll also add some location awareness to assist with the delivery process. Once an order is placed, an e-mail will need to be sent to the correct franchise, notifying them of the pizza order and initiating the process of getting it out the door.
Packt
17 Feb 2012
6 min read

Making Your iAd

Getting iAd Producer

iAd Producer is the tool that allows us to assemble great interactive ads with a simple drag-and-drop visual interface. Download and install iAd Producer on your Mac so that you can start creating an ad.

Time for action – installing iAd Producer

To install iAd Producer, follow these steps:

1. To download and use iAd Producer, you need to be a paid member of the iOS Developer Program. Go to https://developer.apple.com/ios/ and click on the Log in button.
2. Enter your Apple ID and password, and click on Sign In.
3. After you've signed in, find the Downloads section at the bottom of the page. Click on iAd Producer to start downloading it. You can see the download highlighted here:

If you cannot see iAd Producer in the Downloads, make sure you're logged in and your developer account has been activated.

4. After the download is complete, open the file and run iAd Producer.mpkg to start the installation wizard.
5. Follow the steps in the installation and enter your Mac password if asked for it.

When installing certain software, you need to enter your Mac password to allow it privileged access to your system. Don't confuse this with the Apple ID that we set up for the iOS Developer Program. If you don't have a password on your Mac, just leave the password area blank and click on OK.

6. When you've gone through the installation steps, it'll take a couple of moments to install. After you get "The installation was successful" message, you can close the installer.

What just happened?

We now have iAd Producer installed; whenever you need to open it, you can find it in the Applications folder on your Mac.

Working with iAd Producer

Let's take a look at some of the main parts of iAd Producer that you'll be using regularly, to familiarize yourself with the interface.

Launch screen

When you first open iAd Producer, you'll be able to start a new iPhone or iPad project from the project selector, as shown in the following screenshot.
As the screen size and experience are so different between the two devices, we have to design and build ads specifically for each one. From the launch screen, you can also open existing projects you've been working on.

Default ad

Once you have chosen to create either an iPad or iPhone iAd, a placeholder ad is created for you, showing the visual flow. This is the overview of your ad, which you'll use to piece the sections of your ad together. The following screenshot shows the default overview:

Double-clicking on any of the screens in your ad flow will ask you to pick a template for that page; once assigned, you're able to design the iAd using the canvas editor.

Template selector

Before we edit any page of an ad, we have to apply a template to it, even if it's just a blank canvas to build upon. iAd Producer automatically shows the templates relevant to the page you're currently editing. This means your ad follows a structure that users expect. Templates provide some great starting points for your iAd, whether it's a simple banner with an image and text or a 3D image carousel that the user can flick and manipulate, all created with easy point and click. The following screenshot is an example of the template chooser:

Asset Library

The Asset Library holds all the media and content for your iAd, such as the images, videos, and audio. When adding media to your Asset Library, make sure you're using high-resolution images for the high-resolution Retina display. iAd Producer automatically generates the lower-resolution images for your ad whenever you import resources. If you wanted an image to be 200px wide and 300px high, you should double the horizontal and vertical pixels to 400px wide and 600px high. This will mean your graphics look crisp and awesome on high-resolution screens.
The following screenshot shows an example of media in the Asset Library:

Ad canvas

Once you've selected a template, you can double-click on the item in the Overview to open up the canvas for that page. The ad canvas is where you customize your iAd, with a powerful visual editor to manipulate each page of your ad. Here's an example of the ad canvas with a video carousel added to it:

Setting up your ad

Let's create and save an empty project to use as we create our iAd; you'll only need to do this once for each ad. Whenever you're working with something digital, it's important to save whenever you make a significant change, in case iAd Producer closes unexpectedly. Try to get into the habit of saving regularly to avoid losing your ad.

Time for action – creating a new project

In order to create a new project, follow these steps:

1. If you haven't created a new project already, open iAd Producer from your Applications folder.
2. Select the iPhone from the launch screen and choose Select. You'll now see the default ad overview. iAd Producer has automatically made us a project called Untitled and populated it with the default set of pages.
3. From the File menu, select Save to save your empty iAd, ready to have components added to it later. Name the project something like Dino Stores, as that is the ad we'll be working on.

You can now save the progress of your project at any time by choosing File then Save from the menu bar, or by pressing Command + S on your keyboard.

What just happened?

You've now seen the project selector and the launch screen in action, and you have the base project that we'll be building upon as we make our first iAd. If you quit this project, you can reopen it from within iAd Producer by clicking on File | Open from the menu bar, or simply double-click the project file in Finder to open it automatically.
Getting the resources

In this article, we'll be using the Dino Stores example resources that are available to download with this book. If you want to use your own assets, you'll need the following media:

- An image for your banner, approximately 120px wide and 100px high
- An image of your company logo or name, around 420px wide and 45px high
- An 80px square image, with transparency, to be used as a map pin
- A loading image, approximately 600px wide and 400px high
- Between six and 10 images for a gallery, each around 304px wide and 440px high
- Two or more images that will change when the iPhone is shaken, each around 600px wide and 800px high
- An image related to your product or service, at least 300px wide, to use on the main menu page

These pixel sizes are at double-size to account for the high-resolution Retina display found on the iPhone 4 and later. iAd Producer will automatically create the lower-resolution versions for older devices.
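The double-size rule behind these dimensions is simple arithmetic; a small helper (hypothetical name, not part of iAd Producer) shows the conversion from target display size to Retina asset size:

```javascript
// Doubles a target display size to get the Retina asset size, per the
// guideline above; iAd Producer then derives the half-size version
// automatically for older, non-Retina devices.
function retinaAssetSize(width, height) {
  return { width: width * 2, height: height * 2 };
}
```

So an element meant to display at 200 by 300 points is authored as a 400 by 600 pixel image, matching the example given in the Asset Library section.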