
How-To Tutorials - Mobile


Android Fragmentation Management

Packt
16 Sep 2013
8 min read
(For more resources related to this topic, see here.)

Smartphones have by now entered our lives, and we use them not only as consumers but also as producers of our own content. Though this kind of device has been on the market since 1992 (the first was the Simon model by IBM), their wide diffusion was driven by Apple's iPhone, launched in 2007 (last year, the fifth generation of this device was released). Meanwhile, another big giant, Google, developed an open source product to be used as the operating system in mobile devices; unlike the market leader, Google does not constrain its system to a single hardware-specific device, but allows third-party companies to use it on their phones, which have very different characteristics. A big advantage was also being able to sell these devices to consumers who don't want to (or can't) spend as much money as an Apple phone costs. This allowed Android to win the battle of diffusion. But there is another side to the coin: a variety of devices by different producers means more fragmentation of the underlying system and a non-uniform user experience that can be really disappointing. As programmers, we have to take these problems into account, and this article strives to be a useful guideline for solving them.

The Android platform was born in 2003 as the product of a company first known as Android Inc., which was acquired by Google in 2005. Its direct competitors were, and still are today, the iOS platform by Apple and the RIM platform, known as BlackBerry. Technically speaking, its core is an operating system using a Linux kernel, designed to be installed on devices with very different hardware (mainly mobile devices, but today it is also used in general embedded systems, for example, the game console OUYA, which features a modified version of Android 4.0).
Like any software that has been around for a while, Android has seen many changes to its functionality, and many versions have come out, each named after a dessert:

- Apple Pie (API level 1)
- Banana Bread (API level 2)
- 1.5 – Cupcake (API level 3)
- 1.6 – Donut (API level 4)
- 2.0-2.1x – Eclair (API levels 5 to 7)
- 2.2 – Froyo (API level 8)
- 2.3 – Gingerbread (API levels 9 and 10)
- 3.0-3.2 – Honeycomb (API levels 11 to 13)
- 4.0 – Ice Cream Sandwich (API levels 14 and 15)
- 4.1 – Jelly Bean (API level 16)

As in many other software projects, the names are in alphabetical order (another project that follows this approach is the Ubuntu distribution). The API level written in parentheses is the main point about fragmentation. Each version of the software introduces or removes features and bugs. Over its lifetime, an operating system such as Android aims to add innovations without breaking applications installed on older versions, but it also aims to make the same features available to those older versions through a process technically called backporting. For more information about API levels, carefully read the official documentation available at http://developer.android.com/guide/topics/manifest/uses-sdk-element.html#ApiLevels.

If you look at the diffusion of these versions, as given by the following pie chart, you can see that more than 50 percent of devices have versions installed that are outdated with respect to the latest one. Everything you will read in this article is meant to address these problems, mainly through backporting; in particular, it specifically addresses backward compatibility with version 3.0 of the Android operating system, the version named Honeycomb. Version 3.0 was first intended to be installed on tablets and, in general, on devices with large screens.
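The uses-sdk manifest element documented at the link above is where an app declares which API levels it supports; a typical declaration might look like the following fragment (the package name and version numbers here are illustrative, not taken from the article's project):

```xml
<!-- AndroidManifest.xml fragment; package name and API levels are illustrative -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          package="com.example.app">
    <!-- minSdkVersion 8 = Android 2.2 (Froyo), the oldest version supported;
         targetSdkVersion 16 = Android 4.1 (Jelly Bean), the version tested against -->
    <uses-sdk android:minSdkVersion="8"
              android:targetSdkVersion="16" />
</manifest>
```

Devices running an API level below minSdkVersion will refuse to install the app, which is why backporting features via a support library, rather than raising minSdkVersion, is the preferred way to reach older devices.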
Android is a platform that, from the beginning, was intended to be used on devices with very different characteristics (think of a system where an application must be usable on VGA screens, with or without physical keyboards, with a camera, and so on); with the release of 3.0, all this was improved with specific APIs intended to make developing applications easier and to enable new patterns in graphical user interfaces. The most important innovation was the introduction of the Fragment class. Earlier, the only main class for developing Android applications was Activity, a class that provides the user with a screen in order to accomplish a specific task, but it was too coarse-grained and not reusable enough for applications on large screens such as tablets. With the introduction of the Fragment class as the basic building block, it is now possible to create responsive mobile designs; that is, to produce content that adapts to the context, optimizing block placement by reflowing or combining each Fragment inside the main Activity. These concepts are inspired by so-called responsive web design, where developers build web pages that adapt to the viewport's size; the seminal article on this topic is Responsive Web Design by Ethan Marcotte.
For the sake of completeness, let me list the other new capabilities introduced with Honeycomb (look into the official documentation for a better understanding of them):

- Copy and Paste: A clipboard-based framework
- Loaders: Load data asynchronously
- Drag and Drop: Permits the moving of data between views
- Property animation framework: Supersedes the old Animation package, allowing the animation of almost anything in an application
- Hardware acceleration: From API level 11, the graphics pipeline uses dedicated hardware when it is present
- Support for encrypted storage

To address these changes and new features, Google makes available a library called the Support Library, which backports Fragment and Loader. Although the main characteristics of these classes are maintained, the article explains in detail how to use the low-level APIs related to threading. Indeed, an Android application is not a single block of instructions executed one after the other, but is composed of multiple pipelines of execution. The main concepts here are the process and the thread. When an application is started, the operating system creates a process (technically, a Linux process) and each component is associated with this process. Together with the process, a thread of execution named main is also created. This is a very important thread because it is in charge of dispatching events to the appropriate user interface elements and receiving events from them. This thread is also called the UI thread. It's important to note that the system does not create a separate thread for each element, but instead uses the same UI thread for all of them. This can be dangerous for the responsiveness of your application: if you perform an intensive or time-expensive operation on it, it will block the entire UI. All Android developers fight against the ANR (Application Not Responding) message, which is presented when the UI is not responsive for more than 5 seconds.
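The single-threaded UI model just described is Android-specific, but its essence (one dedicated thread draining a queue of posted tasks while worker threads run elsewhere) can be sketched in plain Java. The MiniLooper class below is purely illustrative and is not an Android API; it only mirrors the shape of the pattern:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// A minimal stand-in for Android's main-thread event loop (illustrative only):
// one "UI" thread drains a queue of Runnables, so all UI state is touched
// from that single thread and never concurrently.
public class MiniLooper {
    private final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();
    private volatile boolean running = true;

    // Safe to call from any thread: enqueue work for the "UI" thread.
    public void post(Runnable task) { queue.add(task); }

    // Posting the stop request keeps shutdown on the "UI" thread too.
    public void quit() { post(() -> running = false); }

    // Runs on the "UI" thread until quit() is processed.
    public void loop() {
        try {
            while (running) queue.take().run();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    // Demo: a worker computes off the "UI" thread and posts the result back.
    public static String runDemo() {
        MiniLooper looper = new MiniLooper();
        StringBuilder ui = new StringBuilder(); // pretend UI state
        Thread worker = new Thread(() -> {
            String result = "42"; // imagine a slow computation here
            looper.post(() -> ui.append("result=").append(result));
            looper.quit();
        });
        worker.start();
        looper.loop(); // the "UI" thread handles posted tasks here, never blocking on the work itself
        return ui.toString();
    }

    public static void main(String[] args) {
        System.out.println(runDemo()); // prints: result=42
    }
}
```

The key property, on Android as in this sketch, is that the long-running work never executes on the event-dispatching thread; only the short "apply the result" task does.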
Following Android's documentation, there are only two rules to follow to avoid the ANR:

- Do not block the UI thread
- Do not access the Android UI toolkit from outside the UI thread

These two rules can seem simple, but some details need to be made clear. The article shows examples using not only the Thread class (and the Runnable interface) but also the (very) low-level classes named Looper and Handler. The interaction between GUI elements and these classes is also investigated, to avoid nasty exceptions. Another important element introduced in Google's platform is the UI pattern named ActionBar: a piece of interface at the top of an application where the most important menu buttons are displayed so as to be easily accessible. A contextual menu is also available in the action bar: when, for example, one or more items in a list are selected (as in the Gmail application), the appearance of the bar changes and shows new buttons related to the actions available for the selected items. One thing not addressed by the compatibility package is the ActionBar. Since this is a very important element for integration with the Android ecosystem, several alternatives have emerged; the first one comes from Google itself, as a simple code sample named ActionBar Compatibility that you can find in the samples directory of the Android SDK. In this article, we will follow a different approach, using a famous open source project, ActionBarSherlock. The code for this library is not available from the SDK, so we need to download it from its website (http://actionbarsherlock.com/). This library allows us to use most of the functionality of the original ActionBar implementation, such as the Up button (which permits hierarchical navigation), ActionView, and the contextual action menu.

Summary

In this article, we learned about Android fragmentation management.
Resources for Article: Further resources on this subject: Android Native Application API [Article] New Connectivity APIs – Android Beam [Article] So, what is Spring for Android? [Article]


Introducing an Android platform

Packt
12 Sep 2013
9 min read
(For more resources related to this topic, see here.)

Introducing an Android app

A mobile software application that runs on Android is an Android app. Apps use .apk as the installer file extension. There are several popular examples of mobile apps, such as Foursquare, Angry Birds, and Fruit Ninja. Primarily in an Eclipse environment, we use Java, which is then compiled into Dalvik bytecode (not ordinary Java bytecode). Android provides the Dalvik virtual machine (DVM) rather than the Java virtual machine (JVM). The Dalvik VM is not aligned with the Java SE and Java ME libraries; it is built on an Apache Harmony Java implementation.

What is the Dalvik virtual machine?

The Dalvik VM is a register-based architecture, authored by Dan Bornstein. It is optimized for low memory requirements, and the virtual machine was slimmed down to use less space and consume less power.

Preparing for Android development

Eclipse ADT

In this part of the article, we will see how to install the development environment for Android on Eclipse Juno (4.2). Eclipse is a major IDE for Android development. We need to install an Eclipse extension, the Android Development Toolkit (ADT), for development of Android applications.

Debugging an Android project

It is advisable to use the Log class for this purpose, the reason being that we can filter, print in different colors, and define log types. This can be one way of debugging your program: displaying variable values or parameters. To use Log, import android.util.Log, and use one of the following methods to print messages to LogCat:

- v(String, String) (verbose)
- d(String, String) (debug)
- i(String, String) (information)
- w(String, String) (warning)
- e(String, String) (error)

LogCat is used to view the internal log of the Android system. It is useful for tracing any activity happening inside the device or emulator through the Android Debug Bridge (ADB).
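android.util.Log and LogCat only run on a device or emulator, so they cannot be demonstrated in isolation here, but the idea behind them is easy to sketch in plain Java. The MiniLog class below is purely illustrative and not part of the Android SDK: each message carries a level and a tag, and a threshold decides which messages survive filtering, just as a LogCat filter does.

```java
// A tiny leveled logger illustrating LogCat-style filtering (not an Android API):
// messages below the threshold are dropped; the rest are formatted as "L/Tag: msg".
public class MiniLog {
    // Mirrors the ordering of Log.v/d/i/w/e on Android.
    public enum Level { VERBOSE, DEBUG, INFO, WARN, ERROR }

    private final Level threshold;

    public MiniLog(Level threshold) { this.threshold = threshold; }

    // Returns the formatted line if it passes the filter, or null if dropped.
    public String log(Level level, String tag, String msg) {
        if (level.ordinal() < threshold.ordinal()) return null;
        return level.name().charAt(0) + "/" + tag + ": " + msg;
    }

    public static void main(String[] args) {
        MiniLog log = new MiniLog(Level.INFO); // show INFO and above
        System.out.println(log.log(Level.ERROR, "MyApp", "boom"));   // prints: E/MyApp: boom
        System.out.println(log.log(Level.DEBUG, "MyApp", "hidden")); // prints: null (filtered out)
    }
}
```

On a real device you would simply call Log.d("MyApp", "message") and filter by tag and level in the LogCat view; the class above only shows why tags and levels make that filtering possible.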
The Android project structure

The following list briefly describes the important folders and files available in an Android project:

- /src: The Java code is placed in this folder.
- /gen: It is generated automatically.
- /assets: You can put your fonts, videos, and sounds here. It is more like a filesystem, and you can also place CSS, JavaScript files, and so on here.
- /libs: It holds external libraries (normally in JAR form).
- /res: It contains images, layouts, and global variables.
- /drawable-xhdpi: It is used for extra-high-specification devices (for example, tablets, Galaxy SIII, HTC One X).
- /drawable-hdpi: It is used for high-specification phones (for example, SGSI, SGSII).
- /drawable-mdpi: It is used for medium-specification phones (for example, Galaxy W and HTC Desire).
- /drawable-ldpi: It is used for low-specification phones (for example, Galaxy Y and HTC WildFire).
- /layout: It includes all the XML files for the screen layouts.
- /menu: XML files for screen menus.
- /values: It includes global constants.
- /values-v11: Template style definitions for devices with Honeycomb (Android API level 11).
- /values-v14: Template style definitions for devices with ICS (Android API level 14).
- AndroidManifest.xml: This is one of the most important files defining the app. It is the first file located by the Android OS in order to run the app. It contains the app's properties, activity declarations, and list of permissions.

Dalvik Debug Monitor Server (DDMS)

DDMS is a must-have tool for viewing emulator/device activity. To access DDMS in Eclipse, navigate to Windows | Open Perspective | Other, and then choose DDMS. By default it is available in the Android SDK (it is the ddms file inside the android-sdk/tools folder).
From this perspective, the following aspects are available:

- Devices: The list of devices and AVDs that are connected to ADB
- Emulator Control: It helps to carry out device functions
- LogCat: It views real-time system log messages
- Threads: It gives an idea of the currently running threads within a VM
- Heap: It shows heap usage by an application
- Allocation Tracker: It provides information on the memory allocation of objects
- File Explorer: It explores the device filesystem

Creating a new Android project using Eclipse ADT

To create a new Android project in Eclipse, navigate to File | New | Project. A new project window will appear; from there, choose Android | Android Application Project from the list, then click on the Next button.

- Application Name: This is the name of your application; it will appear alongside the launcher icon. Choose a name that is relevant to your application.
- Project Name: This is typically similar to your application name. Avoid using the same name as an existing project in Eclipse; it is not permitted.
- Package Name: This is the package name of the application. It will act as an ID in the Google Play app store if we wish to publish. Typically it will be the reverse of your domain name if we have one (since this is unique), followed by the application name, and it must be a valid Java package name; otherwise, we can use anything for now and refactor it before publishing.

Running the application on an Android device

To run and deploy on a real device, first install the driver for the device. This varies by device model and manufacturer. These are a few links you could refer to:

- For Google Android devices, refer to http://developer.android.com/sdk/win-usb.html.
- For others, refer to http://www.teamandroid.com/download-android-usb-drivers/.

Make sure the Android phone is connected to the computer through the USB cable. To check whether the phone is properly connected to your PC and in debug mode, switch to the DDMS perspective.
Adding multiple activities in an Android application

This exercise adds an information screen to the SimpleNumb3r5 app. Information regarding the developer, e-mail, Facebook fan page, and so on is displayed. Since the screen contains a lot of text information, including several pictures, we make use of an HTML page as our approach here:

1. Create an activity class to handle the new screen. Open the src folder, right-click on the package name (net.kerul.SimpleNumb3r5), choose New | Other... from the selections, choose to add a new Android activity, and click on the Next button.
2. Then, choose a blank activity and click on Next.
3. Set the activity name as Info, and the wizard will suggest the screen layout as info_activity. Click on the Finish button.

Adding the RadioGroup or RadioButton controls

The Android SDK provides two types of radio controls to be used in conjunction, where only one control can be chosen at a given time. RadioGroup (Android widget RadioGroup) is used to encapsulate a set of RadioButton controls for this purpose.

Defining the preference screen

Preferences are an important aspect of Android applications. They give users the choice to modify and personalize the app. Preferences can be set in two ways: the first method is to create the preferences.xml file in the res/xml directory, and the second is to set the preferences from code. We will use the former, which is also the easier one, by creating the preferences.xml file.
Usually, there are five different preference views, as listed here:

- CheckBoxPreference: A simple checkbox that returns true/false
- ListPreference: It shows a RadioGroup, where only one item can be selected
- EditTextPreference: It shows a dialog box to edit a TextView, and returns a String
- RingtonePreference: It is a RadioGroup that shows ringtones
- PreferenceCategory: It is a category with preferences

Fragment

A fragment is an independent component that can be connected to an activity or, put simply, is a subactivity. Typically it defines a part of the UI, but it can also exist with no user interface, that is, headless. An instance of a fragment must exist within an activity. Fragments ease the reuse of components for different layouts. Fragments are the way to support UI variations across different types of screens. The most popular use is building single-pane layouts for phones and multi-pane layouts for tablets (large screens).

Adding an external library to an Android project – AdMob

An Android application cannot achieve everything on its own; it will always need the company of external JARs/libraries to achieve different goals and serve various purposes. Almost every free Android application published on the store has advertisements embedded in it, which makes use of an external component. Incorporating advertisements in an Android application is a vital aspect of today's application development. In this article, we will continue with our DistanceConverter application and make use of the external library AdMob to incorporate advertisements in our application.

Adding the AdMob SDK to the project

Let's extract the previously downloaded AdMob SDK ZIP file; we should get the folder GoogleAdMobAdsSdkAndroid-6.*.*, and under that folder there is GoogleAdMobAdsSdk-6.x.x.jar. You should copy this JAR file into the libs folder of the project.
Signing and distributing the APK

The Android package (APK), in simple terms, is similar to a runnable JAR or executable file (on Windows OS) that consists of everything needed to run the application. The Android ecosystem uses a virtual machine, the Dalvik virtual machine (DVM), to run its Java applications. Dalvik uses its own bytecode, which is quite different from Java bytecode.

Generating a private key

An Android application must be signed with our own private key. It identifies the person, corporation, or entity associated with the application. A key can be generated using the keytool program from the Java SDK. The following command is used for generating the key:

keytool -genkey -v -keystore <filename>.keystore -alias <key-name> -keyalg RSA -keysize 2048 -validity 10000

We can use a different key for each published application, and specify a different name to identify it. Also, Google expects a validity of at least 25 years or more. A very important thing is to keep a backup of the key and store it securely, because once it is compromised, it is impossible to update an already published application.

Publishing to Google Play

Publishing to Google Play is very simple and involves registering for Google Play. You just have to visit https://play.google.com/ and register. It costs $25 USD to register; the process is fairly straightforward and can take a few days until you get final access.

Summary

In this article, we learned how to install Eclipse Juno (the IDE), the Android SDK, and the testing platform. We also learned about the fragment and its usage, and used it to have different layouts in landscape mode for our application DistanceConverter. We also learned about handling different screen types and persisting state during screen mode changes.

Resources for Article: Further resources on this subject: Installing Alfresco Software Development Kit (SDK) [Article] JBoss AS plug-in and the Eclipse Web Tools Platform [Article] Creating a pop-up menu [Article]


Creating a Puzzle App

Packt
06 Sep 2013
11 min read
(For more resources related to this topic, see here.)

A quick introduction to puzzle games

Puzzle games are a genre of video games that have been around for decades. These types of games challenge players to use logic and critical thinking to complete patterns. There is a large variety of puzzle games available, and in this article, we'll start by learning how to create a 3-by-3 jigsaw puzzle titled My Jigsaw Puzzle. In My Jigsaw Puzzle, players will have to complete a jigsaw puzzle by using nine puzzle pieces. Each puzzle piece will have an image on it, and the player will have to match the puzzle piece to the puzzle board by dragging the pieces from right to left. When a puzzle piece matches the correct location, the game will lock in the piece. Let's take a look at the final game product.

Downloading the starter kit

Before creating our puzzle app, you can get the starter kit for the jigsaw puzzle from the code files available with this book. The starter kit includes all of the graphics that we will be using in this article.

My Jigsaw Puzzle

For the Frank's Fitness app, we used Corona's built-in new project creator to help us with setting up our project. With My Jigsaw Puzzle, we will be creating the project from scratch. Although creating a project from scratch can be more time-consuming, the process will introduce you to each element that goes into Corona's new project creator. Creating the project will include creating the build.settings, config.lua, main.lua, menu.lua, and gameplay.lua files. Before we can start creating the files for our project, we will need to create a new project folder on your computer. This folder will hold all of the files that will be used in our app.

build.settings

The first file that we will create is the build.settings file. This file will handle our device orientation and specify our icons for the iPhone.
Inside our build.settings file, we will create one table named settings, which will hold two more tables named orientation and iphone. The orientation table will tell our app to start in landscape mode and to only support landscapeLeft and landscapeRight. The iphone table will specify the icons that we want to use for our app. To create the build.settings file, create a new file named build.settings in your project's folder and input the following code:

settings =
{
    orientation =
    {
        default = "landscapeRight",
        supported = { "landscapeLeft", "landscapeRight" },
    },
    iphone =
    {
        plist =
        {
            CFBundleIconFile = "Icon.png",
            CFBundleIconFiles =
            {
                "Icon.png",
                "Icon@2x.png",
                "Icon-72.png",
            }
        }
    }
}

config.lua

Next, we will be creating a file named config.lua in our project's folder. The config.lua file is used to specify any runtime properties for our app. For My Jigsaw Puzzle, we will be specifying the width, height, and scale methods. We will be using the letterbox scale method, which will uniformly scale content as much as possible. When letterbox doesn't scale to the entire screen, our app will display black borders outside of the playable screen. To create the config.lua file, create a new file named config.lua in your project's folder and input the following code:

application =
{
    content =
    {
        width = 320,
        height = 480,
        scale = "letterbox"
    }
}

main.lua

Now that we've configured our project, we will be creating the main.lua file, the start point for every app. For now, we are going to keep the file simple. Our main.lua file will hide the status bar while the app is active and redirect the app to the next file, menu.lua. To create main.lua, create a new file named main.lua in your project's folder and copy the following code into the file:

display.setStatusBar( display.HiddenStatusBar )
local storyboard = require( "storyboard" )
storyboard.gotoScene("menu")

menu.lua

Our next step is to create the menu for our app. The menu will show a background, the game title, and a play button.
The player can then tap on the PLAY GAME button to start playing the jigsaw puzzle. To get started, create a new file in your project's folder called menu.lua. Once the file has been created, open menu.lua in your favorite text editor. Let's start the file by getting the widget and storyboard libraries. We'll also set up Storyboard by assigning the variable scene to storyboard.newScene().

local widget = require "widget"
local storyboard = require( "storyboard" )
local scene = storyboard.newScene()

Next, we will set up our createScene() function. The function createScene() is called when entering a scene for the first time. Inside this function, we will create objects that will be displayed on the screen. Most of the following code should look familiar by now. Here, we are creating two image display objects and one widget. Each object will also be inserted into the variable group to let our app know that these objects belong to the scene menu.lua.

function scene:createScene( event )
    local group = self.view

    background = display.newImageRect( "woodboard.png", 480, 320 )
    background.x = display.contentWidth*0.5
    background.y = display.contentHeight*0.5
    group:insert(background)

    logo = display.newImageRect( "logo.png", 400, 54 )
    logo.x = display.contentWidth/2
    logo.y = 65
    group:insert(logo)

    function onPlayBtnRelease()
        storyboard.gotoScene("gameplay")
    end

    playBtn = widget.newButton{
        default = "button-play.png",
        over = "button-play.png",
        width = 200,
        height = 83,
        onRelease = onPlayBtnRelease
    }
    playBtn.x = display.contentWidth/2
    playBtn.y = display.contentHeight/2
    group:insert(playBtn)
end

After the createScene() function, we will set up the enterScene() function. The enterScene() function is called after a scene has moved onto the screen. In My Jigsaw Puzzle, we will be using this function to remove the gameplay scene. We need to make sure we are removing the gameplay scene so that the jigsaw puzzle is reset and the player can play a new game.
function scene:enterScene( event )
    storyboard.removeScene( "gameplay" )
end

After we've created our createScene() and enterScene() functions, we need to set up our event listeners for Storyboard.

scene:addEventListener( "createScene", scene )
scene:addEventListener( "enterScene", scene )

Finally, we end our menu.lua file by adding the following line:

return scene

This line of code lets our app know that we are done with this scene. Now that we've added the last line, we have finished editing menu.lua, and we will now start setting up our jigsaw puzzle.

gameplay.lua

By now, our game has been configured and we have set up two files: main.lua and menu.lua. In our next step, we will be creating the jigsaw puzzle. The following screenshot shows the puzzle that we will be making:

Getting local libraries

To get started, create a new file called gameplay.lua and open the file in your favorite text editor. Similar to our menu.lua file, we need to start the file by getting in other libraries and setting up Storyboard.

local widget = require("widget")
local storyboard = require( "storyboard" )
local scene = storyboard.newScene()

Creating variables

After our local libraries, we are going to create some variables to use in gameplay.lua. When you separate the variables from the rest of your code, the process of refining your app later becomes easier.
_W = display.contentWidth
_H = display.contentHeight
puzzlePiecesCompleted = 0
totalPuzzlePieces = 9
puzzlePieceWidth = 120
puzzlePieceHeight = 120
puzzlePieceStartingY = { 80, 220, 360, 500, 640, 780, 920, 1060, 1200 }
puzzlePieceSlideUp = 140
puzzleWidth, puzzleHeight = 320, 320
puzzlePieces = {}
puzzlePieceCheckpoint = {
    {x=-243, y=76},  {x=-160, y=76},  {x=-76, y=74},
    {x=-243, y=177}, {x=-143, y=157}, {x=-57, y=147},
    {x=-261, y=258}, {x=-176, y=250}, {x=-74, y=248}
}
puzzlePieceFinalPosition = {
    {x=77, y=75},  {x=160, y=75},  {x=244, y=75},
    {x=77, y=175}, {x=179, y=158}, {x=265, y=144},
    {x=58, y=258}, {x=145, y=251}, {x=248, y=247}
}

Here's a breakdown of what we will be using each variable for in our app:

- _W and _H: These variables capture the width and height of the screen. In our app, we have already specified the size of our app to be 480 x 320.
- puzzlePiecesCompleted: This variable is used to track the progress of the game by tracking the number of puzzle pieces completed.
- totalPuzzlePieces: This variable allows us to tell our app how many puzzle pieces we are using.
- puzzlePieceWidth and puzzlePieceHeight: These variables specify the width and height of our puzzle piece images within the app.
- puzzlePieceStartingY: This table contains the starting Y location of each puzzle piece. Since we can't have all nine puzzle pieces on screen at the same time, we are displaying the first two pieces, and the other seven pieces are placed off the screen below the first two. We will be going over this in detail when we add the puzzle pieces.
- puzzlePieceSlideUp: After a puzzle piece is added, we will slide the puzzle pieces up; this variable sets the sliding distance.
- puzzleWidth and puzzleHeight: These variables specify the width and height of our puzzle board.
- puzzlePieces: This creates a table to hold our puzzle pieces once they are added to the board.
- puzzlePieceCheckpoint: This table sets up the checkpoints for each puzzle piece in x and y coordinates.
When a puzzle piece is dragged to the checkpoint, it will be locked into position. We will learn more about this in greater detail when we add the checkpoint logic.

- puzzlePieceFinalPosition: This table sets up the final puzzle location in x and y coordinates. This table is only used once the puzzle piece passes the checkpoint.

Creating display groups

After we have added our variables, we are going to create two display groups to hold our display objects. Display groups are simply collections of display objects that allow us to manipulate multiple display objects at once. In our app, we will be creating two display groups: playGameGroup and finishGameGroup. playGameGroup will contain objects that are used when the game is being played, and finishGameGroup will contain objects that are used when the puzzle is complete. Insert the following code after the variables:

playGameGroup = display.newGroup()
finishGameGroup = display.newGroup()

The shuffle function

Our next task is to create a shuffle function for My Jigsaw Puzzle. This function will randomize the puzzle pieces that are presented on the right side of the screen. Without the shuffle function, our puzzle pieces would be presented in a 1, 2, 3 manner, while the shuffle function makes sure that the player has a new experience every time.

Creating the shuffle function

To create the shuffle function, we will start by creating a function named shuffle. This function will accept one argument (t) and proceed to randomize the table for us. We're going to be using some advanced topics in this function, but before we start explaining it, let's add the following code to gameplay.lua under our display groups:

function shuffle(t)
    local n = #t
    while n > 1 do
        local k = math.random(n)
        t[n], t[k] = t[k], t[n]
        n = n - 1
    end
    return t
end

At first glance, this function may look complex; however, the code gets a lot simpler once it's explained. Here's a line-by-line breakdown of our shuffle function.
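As an aside, this backwards-walking swap is the classic Fisher-Yates shuffle, and it is not specific to Lua. For comparison, here is the same algorithm in Java; this standalone class is purely illustrative and is not part of the Corona project:

```java
import java.util.Arrays;
import java.util.Random;

// Fisher-Yates shuffle: walk backwards through the array, swapping each
// position with a randomly chosen position at or before it.
public class Shuffle {
    public static int[] shuffle(int[] t, Random rng) {
        for (int n = t.length; n > 1; n--) {
            int k = rng.nextInt(n);  // random index in [0, n), like math.random(n) in Lua (1-based there)
            int tmp = t[n - 1];      // swap t[n-1] and t[k]
            t[n - 1] = t[k];
            t[k] = tmp;
        }
        return t;
    }

    public static void main(String[] args) {
        int[] pieces = {1, 2, 3, 4, 5, 6, 7, 8, 9};
        shuffle(pieces, new Random());
        System.out.println(Arrays.toString(pieces)); // some permutation of 1..9
    }
}
```

Note that Java lacks Lua's parallel assignment, so the swap needs an explicit temporary variable; writing t[n-1] = t[k]; t[k] = t[n-1]; without one would duplicate a value instead of swapping.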
The local n = #t line introduces two new features: local and #. By using the keyword local in front of our variable name, we are saying that this variable (n) is only needed for the duration of the function or loop that we are in. By using local variables, you are getting the most out of your memory resources and practicing good programming techniques. For more information about local variables, visit www.lua.org/pil/4.2.html. In this line, we are also using the # symbol. This symbol tells us how many pieces or elements are in a table. In our app, our table will contain nine elements. Inside the while loop, the very first line is local k = math.random(n). This line assigns a random number between 1 and the value of n (which starts at 9 in our app) to the local variable k. Then, we randomize the elements of the table by swapping the places of two pieces within our table. Finally, we use n = n - 1 to work our way backwards through all of the elements in the table.

Summary

After reading this article, you will have a game that is ready to be played by you and your friends. We learned how to use Corona's feature set to create our first puzzle app. In My Jigsaw Puzzle, we only provided one puzzle for the player, and although it's a great puzzle, I suggest adding more puzzles to make the game more appealing to more players.

Resources for Article: Further resources on this subject: Atmosfall – Managing Game Progress with Coroutines [Article] Creating and configuring a basic mobile application [Article] Defining the Application's Policy File [Article]
Packt
04 Sep 2013
8 min read

Creating a sample application (Simple)
(For more resources related to this topic, see here.)

How to do it...

To create an application, include the JavaScript and CSS files in your page. Perform the following steps:

1. Create an HTML document, index.html, under your project directory. Please note that this directory should be placed in the web root of your web server.
2. Create the directories styles and scripts under your project directory.
3. Copy the CSS file kendo.mobile.all.min.css from <downloaded directory>/styles to the styles directory created in step 2. Then add a reference to the CSS file in the head section of the document.
4. Download the jQuery library from jQuery.com. Place this file in the scripts directory and add a reference to this file in the document before closing the body tag. You can also specify the CDN location of the file in the document.
5. Copy the JavaScript file kendo.mobile.min.js from <downloaded directory>/js to the scripts directory created in step 2. Then add a reference to this JavaScript file in the document (after jQuery).
6. Add the text "Hello Kendo!!" in the body tag of the index.html file as follows:

<!DOCTYPE HTML>
<html>
<head>
    <title>My first Kendo Mobile Application</title>
    <link rel="stylesheet" type="text/css" href="styles/kendo.mobile.all.min.css">
</head>
<body>
    Hello Kendo!!
    <script type="text/javascript" src="scripts/jquery.min.js"></script>
    <script type="text/javascript" src="scripts/kendo.mobile.min.js"></script>
</body>
</html>

The preceding code snippet is a simple HTML page with references to the Kendo Mobile CSS and JavaScript files. These files are minified and contain all the features, themes, and widgets. In production, you would want to include only those that are required. The downloaded ZIP file includes CSS and JavaScript files for specific features. However, in development you can use the minified files that contain all the features.
Another thing to note is that, apart from the reference to the script kendo.mobile.min.js, the page also includes a reference to jQuery. It is the only external dependency for Kendo UI. When you view this page on a mobile device, you will see the text Hello Kendo!! shown. This page does not include any of the widgets that come as a part of the library. Now let's build on top of our Hello World application and add some visual elements, that is, UI widgets, to the page. This can be done with the following steps:

Add a layout first. A mobile application generally has a header, a footer, and multiple views. It is also observed that while navigating through different views in an application, the header and footer remain constant. The framework allows you to define a global layout that may contain a header and a footer for all the views in the application. The framework also allows you to define multiple views that can share the same layout. The following is the same page, which now includes a header and footer defined in the layout:

<body>
    <div data-role="layout" data-id="defaultLayout">
        <header data-role="header">
            <div data-role="navbar">
                My first application
            </div>
        </header>
        <footer data-role="footer">
            <div data-role="tabstrip">
                <a data-icon="about">About</a>
                <a data-icon="settings">Settings</a>
            </div>
        </footer>
    </div>
</body>

The body contains a few div tags with data attributes. Let's look into one of these tags in detail:

<div data-role="layout" data-id="defaultLayout">

Here, the div tag contains two data attributes, role and id. The role data attribute is used to initialize and configure a widget. The data-role attribute has a value, layout, identifying the target element as a layout widget. All the widgets are expected to have a role data attribute that helps in marking the target element for a specific purpose. It instructs the library which widget needs to be added to the page. The id data attribute is used to identify the widget (the layout widget) in the page.
A page may define several layout widgets, and each one of these must be identified by a unique ID. Here, the data-id attribute has defaultLayout as its value. Now there can be many views referring to this layout by its id. Similarly, there are other elements in the page with the data-role attribute, defining them as one of the widgets in the page. Let's take a look at the header and footer widgets defined inside the layout:

<header data-role="header">...</header>
<footer data-role="footer">...</footer>

The header and footer tags have the role data attribute set to header and footer respectively. This aligns them to the top and bottom of the page, giving the rest of the available space for different views to render. Also, note that there is a navbar widget in the header and a tabstrip widget defined in the footer. As mentioned earlier, the framework comes with several widgets that can help you build the application rapidly.

Now add views to the page. The index.html page now has a layout defined, and when you run the page in the browser, you will see an error message in the console which says:

Uncaught Error: Your kendo mobile application element does not contain any direct child elements with data-role="view" attribute set. Make sure that you instantiate the mobile application using the correct container.

Views represent the actual content that has to be displayed between the header and the footer that we defined while creating the layout. A layout cannot exist without a view, and hence you see that error message in the console. To fix this error, you need to define a view for your mobile application. Add the following to your index.html page:

<div data-role="view" data-layout="defaultLayout">
    Hello Kendo!!
</div>

As mentioned earlier, every widget needs to have a role data attribute to identify itself as a particular widget in the page. Here, the target element is defined as a view widget and tied to the layout by defining the data-layout attribute.
The data-layout attribute has the value defaultLayout, which is the same as the value of the data-id attribute of the layout that we defined earlier. This attaches the view to the layout, and you will not see the error message anymore. Similarly, you can have multiple views defined in the page that make use of the same layout. Now, there's only one pending task for the application to start working: initializing the application. A Kendo Mobile application can be initialized using the Application object. To do that, add the following code to the page:

<script>
    var app = new kendo.mobile.Application();
</script>

Include the previous script block right after the references to jQuery and Kendo Mobile and before closing the body tag. This single line of JavaScript code will initialize your Kendo Mobile application and all the widgets with the data-role attribute. The Application object is used for many other purposes.

How it works...

When you run the index.html page in a browser, you will see a navbar and a tabstrip in the header and footer of the page, and the message Hello Kendo!! being shown in the body of the page. The following screenshot shows how it will look when you view the page on an iPhone:

If you have noticed, this looks like a native iOS application. The framework has the capability to render the application so that it looks like a native application on a device. When you view the same page on an Android device, it will look like a native Android application, as shown in the following screenshot:

The framework identifies the platform on which the mobile application is being run and then provides a native look and feel to the application. There are ways in which you can customize this behavior.

Summary

Creating a sample application (Simple) got us started with the Kendo UI Mobile framework and showed us how to build a sample application using the same. We also saw some of the Mobile UI widgets, such as layouts, views, navbar, and tabstrip, in brief.
Resources for Article:

Further resources on this subject:
Working with remote data [Article]
The Decider: External APIs [Article]
Constructing and Evaluating Your Design Solution [Article]
Packt
30 Aug 2013
8 min read

Getting Started with Kinect
(For more resources related to this topic, see here.)

Before the birth of Microsoft Kinect, few people were familiar with the technology of motion sensing. Similar devices had been invented and developed, originally for monitoring aerial and undersea aggressors in wars. In non-military cases, motion sensors are widely used in alarm systems, lighting systems, and so on, which can detect if someone or something disrupts the waves throughout a room and trigger predefined events. Although radar sensors and modern infrared motion sensors are widely used in our lives, we seldom notice their existence, and can hardly make use of these devices in our own applications. But Kinect changed everything from the time it was launched in North America at the end of 2010.

Different from most other user input controllers, Kinect enables users to interact with programs without really touching a mouse or a pad, but only through gestures. In a top-level view, a Kinect sensor is made up of an RGB camera, a depth sensor, an IR emitter, and a microphone array, which consists of several microphones for sound and voice recognition. A standard Kinect (for Windows) device is shown as follows:

The Kinect device

The Kinect drivers and software, which are either from Microsoft or from third-party companies, can even track and analyze advanced gestures and skeletons of multiple players. All these features make it possible to design brilliant and exciting applications with hands-free user inputs. And until now, Kinect has already brought a lot of games and software to an entirely new level. It is believed to be the bridge between the physical world we exist in and the virtual reality we create, a completely new way of interacting with arts, and a profitable business opportunity for individuals and companies.
In this article, we will try to make an interesting game with the popular Kinect technology for user inputs. As Kinect captures the camera and depth images as video streams, we can also merge this view of our real-world environment with virtual elements, which is called Augmented Reality (AR). This enables users to feel as if they appear and live in a nonexistent world, or as if something unbelievable exists in the physical world.

In this article, we will first introduce the installation of the Kinect hardware and software on personal computers, and then consider a good enough idea compounded of Kinect and augmented reality elements.

Before installing the Kinect device on your PC, you should obviously buy the Kinect equipment first. In this article, we will depend on Kinect for Windows or Kinect for Xbox 360, which can be learned about and bought at:

http://www.microsoft.com/en-us/kinectforwindows/
http://www.xbox.com/en-US/kinect

Please note that you don't need to buy an Xbox 360 at all. Kinect will be connected to PCs so that we can make custom programs for it. An alternative choice is Kinect for Windows, which is located at:

http://www.microsoft.com/en-us/kinectforwindows/purchase/

The uses and development of both will be no different for our cases.

Installation of Kinect

It is strongly suggested that you have a Windows 7 operating system or higher. It can be either 32-bit or 64-bit and with dual-core or faster processors. Linux developers can also benefit from third-party drivers and SDKs to manipulate Kinect components. Before we start to discuss the software installation, you can download both the Microsoft Kinect SDK and the Developer Toolkit from:

http://www.microsoft.com/en-us/kinectforwindows/develop/developerdownloads.aspx

In this article, we prefer to develop Kinect-based applications using Kinect SDK Version 1.5 (or higher versions) and the C++ language.
Later versions should be backward compatible, so the source code provided in this article doesn't need to be changed.

Setting up your Kinect software on PCs

After we have downloaded the SDK and the Developer Toolkit, it's time for us to install them on the PC and ensure that they can work with the Kinect hardware. Let's perform the following steps:

1. Run the setup executable with administrator permissions.
2. Select I agree to the license terms and conditions after reading the License Agreement.

The Kinect SDK setup dialog

3. Follow the steps until the SDK installation has finished. Then, install the toolkit following similar instructions.
4. The hardware installation is easy: plug the ends of the cable into the USB port and a power point, and plug the USB into your PC. Wait for the drivers to be found automatically.
5. Now, start the Developer Toolkit Browser, choose Samples: C++ from the tabs, and find and run the sample with the name Skeletal Viewer. You should be able to see a new window demonstrating the depth/skeleton/color images of the current physical scene, which is similar to the following image:

The depth (left), skeleton (middle), and color (right) images read from Kinect

Why did I do that? We chose to set up the SDK software first so that it will install the motor and camera drivers, the APIs, and the documentation, as well as the toolkit including resources and samples, onto the PC. If the operation steps are inverted, that is, the hardware is connected before installing the SDK, your Windows OS may not be able to recognize the device. Just start the SDK setup at this time and the device should be identified again during the installation process.

But before actually using Kinect, you still have to ensure there is nothing between the device and you (the player). And it's best to keep the play space at least 1.8 m wide and about 1.8 m to 3.6 m long from the sensor.
If you have more than one Kinect device, don't keep them face-to-face, as there may be infrared interference between them. If you have multiple Kinects to install on the same PC, please note that one USB root hub can have one and only one Kinect connected. The problem happens because Kinect takes over 50 percent of the USB bandwidth and needs an individual USB controller to run, so plugging more than one device into the same USB hub means only one of them will work.

The depth image at the left in the preceding image shows a human (in fact, the author) standing in front of the camera. Some parts may be totally black if they are too near (often less than 80 cm) or too far (often more than 4 m). If you are using Kinect for Windows, you can turn on Near Mode to show objects that are near the camera; however, Kinect for Xbox 360 doesn't have such features. You can read more about the software and hardware setup at:

http://www.microsoft.com/en-us/kinectforwindows/purchase/sensor_setup.aspx

The idea of the AR-based Fruit Ninja game

Now it's time for us to define the goal we are going to achieve in this article. As a quick but practical guide for Kinect and augmented reality, we should be able to make use of the depth detection, video streaming, and motion tracking functionalities in our project. 3D graphics APIs are also important here, because virtual elements should be included and interacted with via irregular user inputs (not common mouse or keyboard inputs). A fine example is the Fruit Ninja game, which is already a very popular game all over the world. Especially on mobile devices like smartphones and pads, you can see people destroy different kinds of fruits by touching and swiping their fingers on the screen. With the help of Kinect, our arms can act as blades to cut off flying fruits, and our images can also be shown along with the virtual environment so that we can determine the posture of our bodies and the positions of our arms through the screen display.
Unfortunately, this idea is not entirely fresh. There are already commercial products with similar purposes available in the market; for example:

http://marketplace.xbox.com/en-US/Product/Fruit-Ninja-Kinect/66acd000-77fe-1000-9115-d80258410b79

But please note that we are not going to design a completely different product here, or even bring it to the market after finishing this article. We will only learn how to develop Kinect-based applications, work in our own way from the very beginning, and benefit from the experience in our professional work or as amateurs. So it is okay to reinvent the wheel this time, and have fun in the process and the results.

Summary

Kinect, which is a portmanteau of the words "kinetic" and "connect", is a motion sensor developed and released by Microsoft. It provides a natural user interface (NUI) for tracking and manipulating hands-free user inputs such as gestures and skeleton motions. It can be considered one of the most successful consumer electronics devices in recent years, and we will be using this novel device to build the Fruit Ninja game in this article. We will focus on developing Kinect and AR-based applications on Windows 7 or higher using the Microsoft Kinect SDK 1.5 (or higher) and the C++ programming language. We have mainly introduced how to install the Kinect for Windows SDK in this article.

Resources for Article:

Further resources on this subject:
So, what is KineticJS? [Article]
Mission Running in EVE Online [Article]
Making Money with Your Game [Article]
Packt
22 Aug 2013
13 min read

Getting Ready for RubyMotion
(For more resources related to this topic, see here.)

How can I develop an iOS application?

To develop iOS applications, there are various third-party frameworks available, apart from the Apple libraries. If we broadly categorize the ways in which we can create iOS applications, we can divide them into three categories.

Native apps using Objective-C

This is the most standard way to build your application: interacting with the Apple APIs and writing apps in Objective-C. Applications made using the native Apple APIs can use all possible device capabilities, and are relatively more reliable and higher performing (however, the topic of performance is debatable based on the quality of the developer's code).

Mobile web applications

Mobile web applications are simple web applications extended for mobile web browsers, which can be created using standard web technologies such as HTML5. For example, if we browse to http://www.twitter.com in a mobile browser, it will be redirected to http://mobile.twitter.com, which renders its corresponding views for mobile devices. These applications are easy to create, but the downside is that they have limited access to user data (for example, the phonebook) and hardware (for example, the camera).

Hybrid applications

These applications are somewhere in between mobile web apps and native applications. They are created using common web technologies such as HTML5 and JavaScript, and have the ability to use device capabilities via their homegrown APIs. Some of the popular hybrid frameworks include Rhomobile and PhoneGap.

If we compare the speed of development and user experience, it can be summed up with the following diagrams:

From the preceding diagrams, we see that mobile web apps can be created very quickly, but we have to compromise on user experience. While native apps using Objective-C have a good user experience, they have a very steep initial learning curve for web developers. RubyMotion is good news for both users and developers.
Users get the amazing experience of a native application, and developers are able to develop applications rapidly in comparison to applications developed using Objective-C. Let's now learn about RubyMotion.

What is RubyMotion?

RubyMotion is a toolchain that allows developers to develop native iOS applications using the Ruby programming language. RubyMotion acts as a compiler that interacts with the iOS SDK (Software Development Kit). This gives us enormous power to make use of the Apple libraries; therefore, once the application has compiled and loaded, the device has no idea whether it's an application made using Objective-C or RubyMotion. RubyMotion is a product of HipByte, founded by Laurent Sansonetti.

While developing applications with RubyMotion using Ruby, you always have access to the iOS SDK classes. This gives you the benefit of even mixing Objective-C and Ruby code, as RubyMotion implements Ruby on top of the Objective-C runtime and iOS Foundation classes. This is how a typical RubyMotion application works. The code written in RubyMotion is fully compiled into machine code, so an application created with RubyMotion is as fast as one created using Objective-C.

Why RubyMotion?

So far we have learned what RubyMotion is, but the question that comes to mind is, why should we use RubyMotion? There are many reasons why RubyMotion is a good choice for building robust iOS apps. The following sections detail a few that we think matter the most.

If you are not an Objective-C fan

For a newbie developer, Objective-C is an arduous affair. It's complicated to code; even to do a simple thing, we have to write many lines of code. Though it is a powerful language and one of the best object-oriented ones available, it is time consuming and the learning curve is very steep. On the other hand, Ruby is more expressive, simple, and productive in comparison to Objective-C.
Because of its simplicity, developers can shift their focus onto problem solving rather than spending time on trivial stuff, which is taken care of by Ruby itself. In short, we can say RubyMotion allows us to use the power of Objective-C with the simplicity of Ruby. The Ruby classes used in RubyMotion are inherited from Objective-C classes. If you are familiar with the concept of object-oriented programming, you can understand its power. This means we can directly use the Apple iOS SDK classes from our RubyMotion code.

It is not a bridge

RubyMotion apps get direct access to the iOS SDK APIs, which means the size and performance of an application created using RubyMotion are comparable to one created using Objective-C. It implements Ruby on top of the Objective-C runtime and iOS Foundation classes. RubyMotion uses a state-of-the-art static compiler based on Low Level Virtual Machine (LLVM), which converts the Ruby source code into blazing fast machine code. The original source code is never present in the application bundle. A typical application weighs less than 1 MB, but the size can increase depending on the use case.

Managed memory

One of the key features of RubyMotion is that it takes care of memory management. Just like ARC (Automatic Reference Counting) with Xcode 4.4 and above, we don't have to take the pain of releasing memory once an object is no longer used. RubyMotion does the magic, and we don't need to think about it. It handles it on its own.

Terminal-based workflow

RubyMotion has a terminal-based workflow; from the creation of the application to deployment, everything can be done through terminals. If you are used to working on terminals, you know it adds to speedier development.
Easy debugging with REPL

The terminal window where you run Rake also gives you the option to debug with the REPL (Read Evaluate Print Loop), which lets you use Ruby expressions that are evaluated on the spot, with the results reflected on the simulator while the application is still running. The ability to make live changes to the user interface and internal application data structures at runtime is extremely useful for testing and troubleshooting issues with the application, as this saves a lot of time and is much faster than a traditional edit-compile-run loop.

It is extendable

We can use RubyMotion-flavored gems easily by just adding them to the Rakefile. What are RubyMotion-flavored gems? We can't use all the gems that are available for Ruby right now, but there are a lot of gems specifically developed for RubyMotion. As the RubyMotion developer community expands, so will its gem bouquet, and this will make our application development rapid. Third-party Objective-C libraries can be easily used in a RubyMotion project. It supports CocoaPods, which is a dependency manager for Objective-C libraries, making this process a bit easier.

Debugging and testing

RubyMotion has a console-based, inbuilt interactive debugger for troubleshooting issues both on a simulator and on a device using GDB (the GNU Debugger). GDB is extremely powerful on its own, and RubyMotion uses it for debugging the compiled Ruby code. Also, RubyMotion projects are fit for Test Driven Development (TDD). We can write unit tests for our code from the beginning. We can use Behavior Driven Development (BDD) with RubyMotion, which is integrated into every project.

Pop quiz

Q1. How can we distinguish between an iOS application created with RubyMotion and an iOS application created with Objective-C?

1. You can distinguish based on the user experience of the application.
2. You can distinguish based on the performance of the application.
3. You can't distinguish based on the user experience and performance of the application.

Solution: If your answer was option 3, you were right. We can't distinguish between applications created with RubyMotion or Objective-C, as the user experience and performance are similar.

Q2. How can we extend RubyMotion?

1. We can use Objective-C libraries.
2. We can use all Ruby gems.
3. We can use RubyMotion-flavored gems.
4. We can't use any other libraries.

Solution: If your answer was options 1 and 3, you were right. Yes, we can use Objective-C libraries and also RubyMotion-flavored gems.

RubyMotion installation – furnish your environment

Now that we have got a good introduction to RubyMotion, let's set up our development environment; but before that, let's run through some of the prerequisites.

Prerequisites for RubyMotion

You need a Mac OS: we can't develop iOS applications with RubyMotion on any other operating system, so we definitely need a Mac OS.

OS X 10.6 or higher: RubyMotion requires a Mac running OS X 10.6 or higher. OS X 10.7 Lion is highly recommended.

Ruby: the Ruby framework comes preinstalled with Mac OS X. If you have multiple versions of Ruby, we recommend that you use Ruby Version Manager (RVM). For more details, visit https://rvm.io/.

Xcode: next, we need to install Xcode, which includes the iOS SDK, developed by Apple and essential for developing iOS applications. It can be downloaded from the App Store for free. It also includes the iPhone/iPad simulator, which will be used for testing our application.

Command Line Tools: after installing the Xcode toolchain, we need to install the command-line tools package, which is necessary for RubyMotion. To confirm that the command-line tools are installed with your Xcode, open Xcode in your Applications folder, go to the Preferences window, and click on the Downloads tab. You should see the Command Line Tools package in this list. If it is not yet installed, make sure to click on the Install button.
If you have an old version of Xcode, run the following command in the terminal:

sudo xcode-select -switch /Applications/Xcode.app/Contents/Developer

This command will set up the default Xcode path.

Installing RubyMotion

RubyMotion installation is really simple and takes no time at all. RubyMotion is a commercial product that you need to purchase from www.rubymotion.com. Once purchased, you will receive your unique license key and installer. RubyMotion installation is a five-step procedure, given here:

1. Once you have received the package, run the RubyMotion installer.
2. Read and accept the EULA (End User License Agreement).
3. Enter the license number you have received, as shown in the following screenshot:
4. Time for a short break—it will take a few minutes for RubyMotion to get downloaded and installed on your system. You can relax for some time.

Yippee!! There is no step 5. And that's how quick it is to start working with RubyMotion.

Update RubyMotion

RubyMotion is a fast-moving framework, and we need to upgrade it once there is a new release available. Upgrading RubyMotion is again really simple—with one command, you can easily upgrade it to the latest version:

sudo motion update

You need to be connected to the Internet for an upgrade to happen. If you want to work on an old version, you can downgrade using the following command:

sudo motion update --force-version=1.2

But we recommend using the latest version.

How do we check we've done everything correctly?

Now that we have installed our RubyMotion copy, it's good practice to confirm which version we have installed. To do this, go to the terminal and run the following:

motion -v

This command outputs the RubyMotion version installed on your machine. If you get an error, you need to reinstall.

Pick your own editor – you are not forced to use Xcode

With RubyMotion, you are not forced to use Xcode. As every developer is more comfortable with a specific editor, you are free to choose what you like.
However, we recommend the following editors for Ruby development:

- RubyMine
- Vim
- TextMate
- Sublime
- Emacs

RubyMine now provides full support for RubyMotion projects.

How to get help

If you are facing an issue, the preferred way to get a solution is to discuss it at the RubyMotion Google group (https://groups.google.com/forum/?fromgroups#!forum/rubymotion), where you can interact with fellow developers from the community and get a speedy resolution. Sometimes you might not get a precise response from the RubyMotion group. Not to worry, RubyMotion support is there to rescue you. If you have a feature request, an issue, or simply want to ask a question, you can log a support ticket, right from the command line, using the following command:

$ motion support

This will open up a new window in your browser. You can fill in and submit the form with your query. Your RubyMotion license key, email address, and environment details will be added automatically. The RubyMotion community is growing at a very fast pace. In a short span of time, a lot of popular RubyMotion gems have been created by developers.

FAQs

We believe no question is silly. By now you will have many questions in your mind regarding RubyMotion. We have tried to answer a few of the most frequently asked questions (FAQs) related to the topics covered so far in this section. Here are a few of them:

Q1. Are the applications created by RubyMotion in keeping with Apple guidelines?

Answer. Yes, RubyMotion strongly follows the review guidelines provided by Apple. Many applications created using RubyMotion are already available on the App Store.

Q2. Will my RubyMotion application work on a BlackBerry, Android, or Windows phone?

Answer. No, applications created using RubyMotion are only for iOS devices; it is an alternative to programming in Objective-C. For a single-source, multi-device application, we would recommend hybrid frameworks such as Rhomobile, PhoneGap, and Titanium.
For Android development using Ruby, you can try Ruboto.

Q3. Can I share an application with someone?

Answer. Yes and no. With the Apple Developer Program membership, you can share your application only for testing purposes, with a maximum of 100 devices, where each device has to be registered individually with Apple. Also, you cannot distribute your application on the App Store for testing. Once you have finished developing your application and are ready to ship, you can submit it to Apple for an App Store review.

Q4. Can I use Ruby gems?

Answer. Yes and no. No, because we can't use the normal Ruby gems that you generally use in your Ruby on Rails projects; and yes, because you can use gems that are specifically developed for RubyMotion, and there are already many such gems.

Q5. Will my application work on iPad and iPod Touch?

Answer. Absolutely, your application will work on any iOS device, namely iPhone, iPad, and iPod Touch.

Q6. Is Ruby allowed on the App Store?

Answer. The App Store can't distinguish between applications made using Objective-C and those made using RubyMotion. So, no worries, our RubyMotion applications are fit for the App Store.

Q7. Can I use third-party Objective-C libraries?

Answer. Certainly. Third-party Objective-C libraries can be used in your project. RubyMotion provides integration with the CocoaPods dependency manager, which helps in reducing the hassle. You can also use C/C++ code, provided that you wrap it in Objective-C classes and methods.

Q8. Is RubyMotion open source?

Answer. RubyMotion as a toolchain is open source (available on GitHub). The closed source part is the Ruby runtime, which is, however, very similar to the MacRuby runtime (which is open source).

Summary

Let's review all that we have learned so far. We first discussed the different ways to create iOS applications. Then we started with RubyMotion and discussed why to use it.
In the last section, we learned how to get started with RubyMotion and which editors work well with it. Now that we have our RubyMotion framework up and running, the next obvious task is to create our very first application, the most rudimentary Hello World application.

Resources for Article:

Further resources on this subject:

- Introducing RubyMotion and the Hello World app [Article]
- iPhone Applications Tune-Up: Design for Performance [Article]
- Introducing Xcode Tools for iPhone Development [Article]
Packt
30 Jul 2013
9 min read

Coding for the Real-time Web

(For more resources related to this topic, see here.)

As the lines between web apps and traditional desktop apps blur, our users have come to expect real-time behavior in our web apps, something that is traditionally the domain of the desktop. One cannot really blame them. Real-time interaction with data, services, and even other users has driven the connected revolution, and we are now connected in more ways than ever before. However valid this desire to be always connected and immediately informed of an event, there are inherent challenges in real-time interactions within web apps. The first challenge is that the Web is stateless. The Web is built on HTTP, a protocol that is request/response; for each request a browser makes, there is one and only one response. There are frameworks and techniques we can use to mask the statelessness of the Web, but there is no true state built into the Web or HTTP. This is further complicated as the Web is client/server. As it's stateless, a server only knows of the clients connected at any one given moment, and clients can only display data to the user based upon the last interaction with the server. The only time the client and server have any knowledge of the other is during an active request/response, and this action may change the state of the client or the server. Any change to the server's state is not reflected to the other clients until they connect to the server with a new request. It's somewhat like the uncertainty principle, in that the more one tries to pin down one data point of the relationship, the more uncertain one becomes about the other points. All hope is not lost. There are several techniques that can be used to enable real-time (or near real-time) data exchange between the web server and any active client.

Simulating a connected state

In traditional web development, there has not been a way to maintain a persistent connection between a client browser and the web server.
Web developers have gone to great lengths to try and simulate a connected world in the request/response world of HTTP. Several developers have met with success using creative thinking and loopholes within the standard itself to develop techniques such as long polling and the forever frame. Now, thanks to the realization that such a technique is needed, the organizations overseeing the next generation of web standards are also heeding the call with server-sent events and web sockets.

Long polling

Long polling is the default fallback for any client and server content exchange. It is not reliant on anything but HTTP; no special standards checklists or other chicanery are required. Long polling is like getting the silent treatment from your partner. You ask a question and you wait indefinitely for an answer. After some known period of time and what may seem like an eternity, you finally receive an answer, or the request eventually times out. The process repeats again and again until the request is fully satisfied or the relationship terminates. So, yeah, it's exactly like the silent treatment.

Forever Frame

The Forever Frame technique relies on the HTTP 1.1 standard and a hidden iframe. When the page loads, it contains (or constructs) a hidden iframe used to make a request back to the server. The actual exchange between the client and the server leverages a feature of HTTP 1.1 known as Chunked Encoding. Chunked Encoding is identified by a value of chunked in the HTTP Transfer-Encoding header. This method of data transfer is intended to allow the server to begin sending portions of data to the client before the entire length of the content is known. When simulating a real-time connection between a browser and web server, the server can dispatch messages to the client as individual chunks on the request made by the iframe.

Server-Sent Events

Server-Sent Events (SSE) provide a mechanism for a server to raise DOM events within a client web browser.
This means that to use SSE, the browser must support it. As of this writing, support for SSE is minimal, but it has been submitted to the W3C for inclusion in the HTML5 specification. The use of SSE begins by declaring an EventSource variable:

```javascript
var source = new EventSource('/my-data-source');
```

If you then want to listen to any and all messages sent by the source, you simply treat it as a DOM event and handle it in JavaScript:

```javascript
source.onmessage = function(event) {
  // Process the event.
};
```

SSE supports the raising of specific events and complex event messaging. The message format is a simple line-oriented text format in which the data payload is often JSON. Two newline characters separate each message within the stream, and each message may have an id, data, and event property. SSE also supports setting the retry time using the retry keyword within a message.

```
: comment
: simple message
data: "this string is my message"

: complex message targeting an event
event: thatjusthappened
data: { "who":"Professor Plum", "where":"Library", "with":"candlestick" }
```

As of this writing, SSE is not supported in Internet Explorer and is partially implemented in a few mobile browsers.

WebSockets

The coup de grâce of real-time communication on the Web is WebSockets. WebSockets support a bidirectional stream between a web browser and web server and only leverage HTTP 1.1 to request a connection upgrade. Once a connection upgrade has been granted, WebSockets communicate in full-duplex using the WebSocket protocol over a TCP connection, literally creating a client-server connection within the browser that can be used for real-time messaging. All major desktop browsers and almost all mobile browsers support WebSockets. However, WebSocket usage requires support from the web server, and a WebSocket connection may have trouble working successfully behind a proxy. With all the tools and techniques available to enable real-time connections between our mobile web app and the web server, how does one make the choice?
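As an aside, the SSE wire format shown above is simple enough that a parser fits in a few lines. This Ruby sketch is purely illustrative (a browser's EventSource does the parsing for you, and a real parser would also concatenate repeated data lines per the specification):

```ruby
# Illustrative parser for the SSE wire format shown above.
# Comment lines start with ":" and are ignored; a blank line
# ends a message. Simplified: repeated "data" lines should
# really be concatenated, which this sketch does not do.
def parse_sse(stream)
  stream.split("\n\n").map do |chunk|
    fields = {}
    chunk.each_line do |line|
      line = line.chomp
      next if line.empty? || line.start_with?(":")
      key, _, value = line.partition(":")
      fields[key] = value.strip
    end
    fields.empty? ? nil : fields
  end.compact
end
```

Feeding it the two-message stream above would yield one hash per message, with "event" and "data" keys available for dispatch.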
We could write our code to support long polling, but that would obviously use up resources on the server and require us to do some pretty extensive plumbing on our end. We could try to use WebSockets, but for browsers lacking support, or for users behind proxies, we might be introducing more problems than we would solve. If only there were a framework to handle all of this for us, try the best option available, and degrade to the almost guaranteed functionality of long polling when required. Wait. There is. It's called SignalR. SignalR provides a framework that abstracts all the previously mentioned real-time connection options into one cohesive communication platform supporting both web development and traditional desktop development. When establishing a connection between the client and server, SignalR will negotiate the best connection technique/technology possible based upon client and server capability. The actual transport used is hidden beneath a higher-level communication framework that exposes endpoints on the server and allows those endpoints to be invoked by the client. Clients, in turn, may register with the server and have messages pushed to them. Each client is uniquely identified to the server via a connection ID. This connection ID can be used to send messages explicitly to a client, or to every client except that one. In addition, SignalR supports the concept of groups, each group being a collection of connection IDs. These groups, just like individual connections, can be specifically included or excluded from a communication exchange. All of these capabilities in SignalR are provided to us by two client/server communication mechanisms: persistent connections and hubs.

Persistent connections

Persistent connections are the low-level connections of SignalR. That's not to say they provide access to the actual communication technique being used by SignalR, but rather that their primary usage is raw communication between client and server.
Persistent connections behave much as sockets do in traditional network application development. They provide an abstraction above the lower-level communication mechanisms and protocols, but offer little more than that. When creating an endpoint to handle persistent connection requests over HTTP, the class for handling the connection requests must reside within the Controllers folder (or any other folder containing controllers) and extend the PersistentConnection class.

```csharp
public class MyPersistentConnection : PersistentConnection
{
}
```

The PersistentConnection class manages connections from the client to the server by way of events. To handle these connection events, any class that is derived from PersistentConnection may override the methods defined within the PersistentConnection class. Client interactions with the server raise the following events:

- OnConnected: This is invoked by the framework when a new connection to the server is made.
- OnReconnected: This is invoked when a client connection that has been terminated has reestablished a connection to the server.
- OnRejoiningGroups: This is invoked when a client connection that has timed out is being reestablished, so that the connection may be rejoined to the appropriate groups.
- OnReceived: This is invoked when data is received from the client.
- OnDisconnected: This is invoked when the connection between the client and server has been terminated.

Interaction with the client occurs through the Connection property of the PersistentConnection class. When an event is raised, the implementing class can determine if it wishes to broadcast a message using Connection.Broadcast, respond to a specific client using Connection.Send, or add the client that triggered the message to a group using Connection.Groups.

Hubs

Hubs provide us an abstraction over the PersistentConnection class by masking some of the overhead involved in managing raw connections between client and server.
Similar to a persistent connection, a hub is contained within the Controllers folder of your project, but instead extends the Hub base class.

```csharp
public class MyHub : Hub
{
}
```

While a hub supports the ability to be notified of connection, reconnection, and disconnection events, unlike the event-driven persistent connection, a hub handles the event dispatching for us. Any publicly available method on the Hub class is treated as an endpoint and is addressable by any client by name.

```csharp
public class MyHub : Hub
{
    public void SendMeAMessage(string message) { /* ... */ }
}
```

A hub can communicate with any of its clients using the Clients property of the Hub base class. This property supports methods, just like the Connection property of PersistentConnection, to communicate with specific clients, all clients, or groups of clients. Rather than break down all the functionality available to us in the Hub class, we will instead learn from an example.
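The name-based dispatch described above can be modeled compactly. The following Ruby sketch is a conceptual illustration only: the class and method names are invented, and SignalR's real server code is C# with its own client libraries.

```ruby
# Conceptual model of hub dispatch: public methods defined on the
# hub subclass are addressable by name, and the hub can broadcast
# to every registered client. Invented names; not SignalR's API.
class ModelHub
  def initialize
    @clients = []
  end

  def register(&client)     # a "client" is just a callback here
    @clients << client
  end

  def invoke(name, *args)   # dispatch an endpoint by name
    endpoints = self.class.public_instance_methods(false)
    raise NoMethodError, "no hub endpoint #{name}" unless endpoints.include?(name.to_sym)
    public_send(name, *args)
  end

  def broadcast(message)
    @clients.each { |c| c.call(message) }
  end
end

class ChatHub < ModelHub
  def send_me_a_message(message)
    broadcast("echo: #{message}")
  end
end
```

Calling invoke("send_me_a_message", "hi") reaches the public method by name, while plumbing methods such as broadcast are not exposed as endpoints because they are defined on the base class.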
Packt
22 Jul 2013
7 min read

Introducing RubyMotion and the Hello World app

(For more resources related to this topic, see here.) If you're reading this, you're either searching for an understanding of how RubyMotion can give you the keys to make iPhone, iPad, and OS X applications, or you're simply looking for further depth in your understanding of Ruby and Objective-C development. Either way, you're in the right place. To start this journey, we need to understand the basics of these two respected, but philosophically dissimilar, technologies and how a path has been beaten between them. Starting at the base, Apple development for iOS has traditionally been handled in Objective-C. Though Apple products have grown in popularity, Objective-C has not always been the first choice for application development. There's a long and torturous road of developers who have given up their app ambitions because of Objective-C. It is clear that for the greater part of over two decades, Objective-C has generally been the only programming language choice available for apps with Apple. Objective-C was popularized by Steve Jobs' company NeXT, which licensed it from StepStone in 1988. You'll often see evidence of this in the naming conventions of fundamental objects prefixed with NS, for NeXTStep/Sun. This history renders the language a business decision as much as it was ever a developer-based decision. At the time Objective-C was licensed, the Ruby programming language was just an unnamed idea in Matz's head (Yukihiro "Matz" Matsumoto, inventor of Ruby). Objective-C has evolved, grown, and survived the test of time, but it ultimately remains verbose, without standardization, and programmatically rigid. In today's world, developers can afford to be opinionated with their programming language preferences, and a lot of them choose Ruby. Ruby is a standardized, dynamic, general object-oriented programming language that takes inspiration from a long list of successful languages.
Ruby is known to support a myriad of programming paradigms and is especially known for yielding elegant code. It's also often the cool programming language. Compared to the verbose and explicit nature of Objective-C, Ruby is a far cry: an extremely opinionated language that programmers often adore. Let's take a moment to identify some core differences in the syntax of these two programming languages, starting with Objective-C. Objective-C is strongly typed. Counter to what some believe, strongly typed doesn't mean you hit the keys really hard; it means your variables have restrictions in their data types and how those variables can intermix. In Objective-C this is strictly checked and handled at compile time by a .h file, the end result being that you're usually managing at least two files to make changes in one. Though you'll often find Objective-C methods to be long and capitalized in CamelCase, Ruby counters with Python-styled lowercase brevity. For example:

Objective-C styled method:

```
SomeObject.performSomeMethodHere
```

Ruby styled method:

```
SomeObject.some_method
```

It's by no accident that I've shortened the method name in the preceding example. It's actually quite common for Objective-C to have long-winded method names, while, conversely, Ruby methods are kept as short as possible while maintaining the general intention of the method. Additionally, Ruby is more likely to sample functional and meta-programming aspects to make applications simple. So, if you're wondering which of these paradigms you will need to use and be accustomed to, the answer is both! I've seen a lot of RubyMotion code, and some people simply abandon their Ruby ways to try and make their code fit in with Objective-C libraries, all with great haste. But by far the best method I've seen, and the one I highly recommend, is a mix. All Objective-C and Cocoa foundation framework objects should remain CamelCase, while all Ruby remains snake_case.
At first glance this seems like twice the work, but it's really next to no effort at all, as your custom objects will all be written in Ruby by you. The advantage here is that upon examination, you can tell if a function, object, or variable should be looked up online on the Apple developer site (http://developer.apple.com/library/ios/navigation/), or if it should be searched for in your own code or the Ruby documentation (http://www.ruby-doc.org/). I kind of wish I had this benefit with my other Ruby projects, since I could instantly return to an older project and distinguish which code is purely mine and which is framework. Another key difference is the translation of code from messages to parameterized styling. I'm going to use some of the examples from RubyMotion.com to elaborate this issue. To properly convert the Objective-C:

```objc
[string drawAtPoint:point withFont:font];
```

you simply slide everything down to the left. The first keyword becomes a method on string, and the rest become parameters. This yields the following code:

```ruby
string.drawAtPoint(point, withFont:font)
```

Let's start with our Hello World app now. With the controller in place, we can break away from the lifeless shell application and move more toward a true "Hello World" app. We won't need to deal with the AppDelegate any longer. Now we can start placing code in our controller. Remember when I said that this is a framework that calls into our code at given points? Here's where we choose one of those points in time to hook into, and we'll choose one of the most common UIViewController methods, viewDidLoad. So, create a method called viewDidLoad, and let's put the following code inside. To find out what delegates are supported by any method, and the order of method calls, you can look up the class reference of the object you are extending.
For example, the documentation on the UIViewController class' viewDidLoad call is identified at http://developer.apple.com/library/ios/#documentation/uikit/reference/UIViewController_Class/Reference/Reference.html. Running your application at this point (by typing rake in the console) will cause your application to output "Hello World" to the application's standard output (the command line in this case). There's nothing too special happening here; we're using the traditional Ruby puts method to give us some feedback on the console. Running this does start the simulator. To quit the simulator, you can use command + Q or, from the console window, press Ctrl + D or Ctrl + C. Since our phone doesn't really have a console to receive the standard output message, let's take this up a notch by adding a label to the application and configuring it to be a more traditional "Hello World" on the device. So, we plan on making a label that will have the text "Hello World", correct? Let's create the test for that. We'll start by making a new file in our spec folder called hello_world_controller_spec.rb, and we'll make it look like the following: Let's inspect the testing code from the previous image. Everything looks similar to the other code but, as you can see, there's no need to make a before block, since we are able to access the controller by simply stating the controller we're testing on the second line. This shortcut works for any ViewController you're testing! The actual testing and use for the variable starts with our specification. We grab the label so we can apply our two requirements on the following lines. We check the text value, and verify that the label has been added to the view. This might seem quite nebulous without knowing what we're testing, but the logic is simple. We want to make sure there's a label that says "Hello World" and we want it to be visible. Running these tests will fail, which puts us on track to write the actual "Hello World" portion.
You should add the following code to your project's hello_world_controller.rb file:

Summary

There it is! We've finally written a real Hello World application! Congratulations on your first RubyMotion application! You deserve it!

Resources for Article:

Further resources on this subject:

- Integrating Solr: Ruby on Rails Integration [Article]
- Building tiny Web-applications in Ruby using Sinatra [Article]
- Getting started with using Chef [Article]
Packt
15 Jul 2013
5 min read

Understanding Passbook

(For more resources related to this topic, see here.)

Getting ready

With iOS 6, Apple introduced the Passbook app as a central digital wallet for all the store cards, coupons, boarding passes, and event tickets that have become a popular feature of apps. A company wishing to take advantage of this digital wallet, and the extra functionality it provides, can use Apple's developer platform to create a Pass for their users.

How to do it...

To understand Passbook, we need to see a Pass in action. Download the example Pass from http://passkit.pro/example-generic-pkpass. If you open this link within Mobile Safari on an iPhone or iPod Touch running iOS 6, you will be presented with the Pass and the option to add it to your Passbook. Alternatively, you can download the Pass on a Mac or PC and e-mail it to yourself, and then open the e-mail within the Mail app on an iPhone or iPod Touch. Tapping the Pass attachment link will present the Pass. If you choose to add the Pass to your Passbook app, the displayed Pass will disappear, having been filed away within your Passbook. Now click on the home button to return to the home screen and launch the Passbook app. In the app you will now see the Pass that was just added. It contains information specified by the app creator and can be presented when interacting with the company providing the service. Additional information can be placed on the back of the Pass. Tap the i button in the top-right corner of the Pass to reveal this information.

How it works…

The following diagram describes how Passes are delivered to a Passbook, and how these can be updated: The process of creating a Pass involves cryptographically signing the Pass using a certificate and key generated from your iOS developer account. For this reason, the generation of the Pass needs to take place on a server, and then be delivered to Passbook either via your own app, as an e-mail attachment, or by embedding it in a website.
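To make the server-side piece concrete: the heart of a signed Pass bundle is a pass.json file describing its content. The sketch below shows the general shape of such a file; every identifier and value here is a placeholder, and the exact set of keys should be checked against Apple's PassKit Package Format reference:

```json
{
  "formatVersion": 1,
  "passTypeIdentifier": "pass.com.example.coupon",
  "serialNumber": "E5982H-I2",
  "teamIdentifier": "A1B2C3D4E5",
  "organizationName": "Example Store",
  "description": "Example discount coupon",
  "barcode": {
    "message": "123456789",
    "format": "PKBarcodeFormatPDF417",
    "messageEncoding": "iso-8859-1"
  },
  "locations": [
    { "latitude": 37.3317, "longitude": -122.0302 }
  ],
  "relevantDate": "2013-07-20T10:00-08:00",
  "coupon": {
    "primaryFields": [
      { "key": "offer", "label": "Any coffee", "value": "25% off" }
    ]
  }
}
```

The barcode, locations, and relevantDate entries correspond to the barcode and relevance features discussed in this article.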
It's important to note that Apple does not provide any system for the Pass providers to authenticate, validate, or invalidate Passes. The Pass can contain barcode information, but it is up to the Pass provider to provide the infrastructure for reading and processing these barcodes. Instead of just sitting in the Passbook app waiting to be used, a Pass can contain location and time triggers that proactively present the Pass to the user, serving as both a reminder and a means of convenient access. For example, an event Pass could be set to appear 15 minutes before the start time, just when a user likely wants to present their event Pass to an attendant. Alternatively, a coupon Pass could be presented as a user approaches their local store where the coupon can be redeemed. Passes that have been added to Passbook can also be updated dynamically. For example, if the Pass is for a store card, a change to the card balance may require an update to the Pass. In the case of an airline ticket Pass, a departure gate change should trigger a Pass update. When a Pass needs to be updated, your server sends a push notification to the Passbook app on the user's device. This push notification is not displayed to the user. Upon receiving this push notification, the Passbook app then makes a request to your server for the updated Pass information. Your server would then respond to the relevant request, and provide the updated information in the expected format. When the Passbook app on the user's device receives the updated information, it silently updates the Pass. The next time the user looks at the Pass contained in the Passbook app, the updated information is displayed.

There's more…

Support for Passbook is also built into OSX Mountain Lion (10.8.2). Pass files with the .pkpass file extension will open in a preview window:
The OSX Mail app and Safari also support embedded Passes. When building a Pass, you can specify a relevant time and up to 10 relevant locations that will trigger a message to be displayed on the lock screen. The message looks similar to a push notification; however, a Pass notification is less intrusive. When it is relevant to display, it doesn't vibrate the iPhone and it doesn't wake up the screen. The notification only becomes visible when the phone wakes up from sleep. The option to specify relevant times and locations, and how far from the location the notification is triggered, is determined by the Pass type, as we will see later.

Apps using Passbook

Some of the apps in the App Store using Passbook are as follows:

- Hotels.com: This uses Passbook for room reservation details. It can be downloaded from http://appstore.com/hotelscom/hotelscom.
- Starbucks: This uses Passbook for a store card. It can be downloaded from http://appstore.com/starbuckscoffeecompany.
- Ticketmaster: This uses Passbook for event tickets. It can be downloaded from http://appstore.com/ticketmaster/ticketmaster.
- United Airlines: This uses Passbook for boarding passes. It can be downloaded from http://appstore.com/unitedairlines.

Summary

This article introduced you to Passbook. Apple's Passbook feature is a collection of technologies that come together to provide digital wallet functionality to the user. We understood what Passbook consists of, from both the user and Pass creator perspectives.

Resources for Article:

Further resources on this subject:

- Development of iPhone Applications [Article]
- iPhone Applications Tune-Up: Design for Performance [Article]
- New iPad Features in iOS 6 [Article]
Packt
14 Jun 2013
7 min read

Using Storyboards

(For more resources related to this topic, see here.)

Configuring storyboards for a project

Getting ready

In this recipe, we will learn how to configure an application's project properties using Xcode so that it is set up correctly to use a storyboard file.

How to do it...

To begin, perform the following simple steps:

- Select your project from the project navigator window.
- Then, select your project target from under the TARGETS group and select the Summary tab.
- Select MainStoryboard from the Main Storyboard drop-down menu, as shown in the preceding screenshot.

How it works...

In this recipe, we gained an understanding of what storyboards are, as well as how they differ from user interfaces created in the past, whereby a new XIB file would need to be created for each view in your application. Whether you are creating applications for the iPad or iPhone, each view controller that gets created within your storyboard represents the contents of a single screen, and a storyboard is comprised of more than one such scene. Each object contained within a view controller can be linked to another view controller that implements another scene. In our final steps, we looked at how to configure our project properties so that our application is set up to use the storyboard user interface file.

There's more…

You can also choose to manually add a new Storyboard template to your project. This can be achieved by performing the following simple steps:

- Select your project from the project navigator window.
- Select File | New | File…, or press command + N.
- Select the Storyboard template from the list of available templates, located under the User Interface subsection within the iOS section.
- Click on the Next button to proceed to the next step in the wizard.
- Ensure that you have selected iPhone from the Device Family drop-down menu.
- Click on the Next button to proceed to the next step in the wizard.
- Specify the name of the storyboard file within the Save As field as the name of the file to be created.
- Click on the Create button to save the file to the specified folder.

Finally, when we create our project using storyboards, we will need to modify our application's delegate in the AppDelegate.m file, as shown in the following code snippet:

```objc
- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
    // Override point for customization after application launch.
    return YES;
}
```

For more information about using storyboards in your applications, you can refer to the Apple Developer documentation, located at https://developer.apple.com/library/ios/#documentation/ToolsLanguages/Conceptual/Xcode4UserGuide/InterfaceBuilder/InterfaceBuilder.

Creating a Twitter application

In this recipe, we will learn how to create a single view application to build our Twitter application.

Getting ready

In this recipe, we will start by creating our TwitterExample project.

How to do it...

To begin with creating a new Xcode project, perform the following simple steps:

- Launch Xcode from the /Developer/Applications folder.
- Select Create a new Xcode project, or click on File | New | Project….
- Select Single View Application from the list of available templates.
- Click on the Next button to proceed to the next step in the wizard.
- Next, enter TwitterExample as the name of your project.
- Select iPhone from the Devices drop-down menu.
- Ensure that the Use Storyboards checkbox has been checked.
- Ensure that the Use Automatic Reference Counting checkbox has been checked.
- Ensure that the Include Unit Tests checkbox has not been checked.
- Click on the Next button to proceed to the next step in the wizard.
- Specify the location where you would like to save your project.
- Then, click on the Create button to save your project at the specified location.

The Company Identifier for your app needs to be unique.
Apple recommends that you use the reverse domain style (for example, com.domainName.appName). Once your project has been created, you will be presented with the Xcode development environment, along with the project files that the template created for you.

How it works...

In this recipe, we just created an application that contains a storyboard and consists of one view controller, which does not provide any functionality at the moment. In the following recipes, we will look at how we can add functionality to view controllers, create storyboard scenes, and transition between them.

Creating storyboard scenes

The process of creating scenes involves adding a new view controller to the storyboard, where each view controller is responsible for managing a single scene. A better way to describe scenes would be to think of a movie reel, where each frame that is being displayed is the actual scene that connects onto the next part.

Getting ready

When adding scenes to your storyboard file, you can add controls and views to the view controller's view, just as you would do for an XIB file, and you have the ability to configure outlets and actions between your view controllers and their views.

How to do it...

To add a new scene into your storyboard file, perform the following simple steps:

- From the project navigator, select the file named MainStoryboard.storyboard.
- From Object Library, select and drag a new View Controller object onto the storyboard canvas. This is shown in the following screenshot:
- Next, drag a Label control onto the view and change the label's text property to read About Twitter App.
- Next, drag a Round Rect Button control onto the view that we will use in a later section to call the calling view. In the button's attributes, change the text to read Go Back.
- Next, on the first view controller, drag a Round Rect Button control onto the view. In the button's attributes, change the text to read About Twitter App.
This will be used to call the new view that we added in the previous step.

- Next, on the first view controller, drag a Round Rect Button control onto the view, underneath the About Twitter App button that we created in the previous step. In the button's attributes, change the text to read Compose Tweet.
- Next, save your project by selecting File | Save from the menu bar, or alternatively by pressing command + S.

Once you have added the controls to each of the views, your final interface should look something like what is shown in the following screenshot: The next step is to create the Action event for our Compose Tweet button so that it has the ability to post tweets. To create an action, perform the following steps:

- Open the assistant editor by selecting Navigate | Open In Assistant Editor or by pressing option + command + ,.
- Ensure that the ViewController.h interface file gets displayed.
- Select the Compose Tweet button; hold down the control key, and drag from the Compose Tweet button to the ViewController.h interface file, between the @interface and @end tags.
- Choose Action from the Connection drop-down menu for the connection to be created.
- Enter composeTweet for the name of the method to create.
- Choose UIButton from the Type drop-down menu for the type of method to create.

The highlighted line in the following code snippet shows the completed ViewController.h interface file, with our method that will be responsible for calling and displaying our tweet sheet.

```objc
//  ViewController.h
//  TwitterExample
//
//  Created by Steven F Daniel on 21/09/12.
//  Copyright (c) 2012 GenieSoft Studios. All rights reserved.

#import <UIKit/UIKit.h>

@interface ViewController : UIViewController

// Create the action methods
- (IBAction)composeTweet:(UIButton *)sender;

@end
```

Now that we have created our scene, buttons, and actions, our next step is to configure the scene, which is shown in the next recipe.

How it works...
In this recipe, we looked at how we can add a new view controller to our storyboard and then started to add controls to each of our view controllers and customize their properties. Next, we looked at how we can create an Action event for our Compose Tweet button that will be responsible for responding and executing the associated code behind it to display our tweet sheet. Instead of us hooking up an event handler to the TouchUpInside event of the button, we decided to simply add an action to it and handle the output of this ourselves. These types of actions are called "instance methods". Here we are basically creating the Action method that will be responsible for allowing the user to compose and send a Twitter message.
The Decider: External APIs

Packt
11 Jun 2013
22 min read
(For more resources related to this topic, see here.) Using an external API APIs are provided as a service from many different companies. This is not an entirely altruistic move on the part of the company. The expectation is that by providing the information and access to the company's data, the company gets more usage for their service and more customers. With this in mind, most (if not all) companies will require you to have an account on their system in order to access their API. This allows you to access their systems and information from within your application, but more importantly from the company's perspective, it allows them to maintain control over how their data can be used. If you violate the company's usage policies, they can shut off your application's access to the data, so play nice. The API key Most APIs require a key in order to use them. An API key is a long string of text that gets sent as an extra parameter on any request you send to the API. The key is often composed of two separate pieces and it uniquely identifies your application to the system much like a username and a password would for a regular user account. As such it's also a good idea to keep this key hidden in your application so that your users can't easily get it. While each company is different, an API key is typically a matter of filling out a web form and getting the key. Most companies do not charge for this service. However, some do limit the usage available to outside applications, so it's a good idea to look at any restrictions the company sets on their service. Once you have an API key you should take a look at the available functions for the API. 
API functions API functions typically come in two types – public and protected: The public functions can simply be requested with the API key The protected functions will also require that a user be logged into the system in order to make the request If the API function is protected, your application will also need to know how to log in correctly with the remote system. The login functions will usually be a part of the API or a web standard such as Facebook and Google's OAuth. It should be noted that while OAuth is a standard, its implementation will vary depending on the service. You will need to consult the documentation for the service you are using to make sure that the features and functions you need are supported. Be sure to read through the service's API documentation to understand which functions you will need and if they require a login. Another thing to understand about APIs is that they don't always do exactly what you need them to do. You may find that you need to do a little more work than you expect to get the data you need. In this case, it's always good to do a little bit of testing. Many APIs offer a console interface where you can type commands directly into the system and examine the results: Image This can be really helpful for digging into the data, but consoles are not always available for every API service. Another option is to send the commands in your application (along with your API credentials) and examine the data returned in the Safari console. The drawback of this method is that the data is often returned as a single-line string that is very difficult to read as shown in the screenshot: Image This is where a tool like JSONLint comes in handy. 
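Before reaching for an external tool, it is worth noting that the same validate-and-format round-trip can be done in any JavaScript console. JSON.parse rejects anything that is not valid JSON, and JSON.stringify re-serializes it with indentation. A minimal sketch (the sample payload here is illustrative, not a real API response):

```javascript
// Validate and pretty-print a minified JSON string.
// JSON.parse throws a SyntaxError on invalid input, so a successful
// round-trip also confirms the string is well-formed JSON.
function prettyPrint(rawResponse) {
  var data = JSON.parse(rawResponse);
  return JSON.stringify(data, null, 2); // re-serialize with 2-space indentation
}

var raw = '{"meta":{"code":200},"response":{"venues":[{"name":"Chenal 9 IMAX Theatre"}]}}';
console.log(prettyPrint(raw));
```

A tool such as JSONLint does exactly this, with the added convenience of a web page you can paste into.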
You can copy and paste the single-line string from your Safari console into the page at http://jsonlint.com and have the string formatted so that it is much easier to read and validate the string as JSON at the same time: Image Once you get a hold of what data is being sent and received, you will need to set it all up in Sencha Touch. External APIs and Sencha Touch As we have talked about earlier in the book, you cannot use a standard AJAX request to get data from another domain. You will need to use a JSONP proxy and store to request data from an external API. Using the API or the Safari console, you can get a good idea of the data that is coming back to you and use it to set up your model. For this example, let's use a simple model called Category. code We can then set up a store to load data from the API: This will set up a store with our Category model and point the url property at our external API. Remember that we have to send our credentials along with the request, so we set these as extraParams on the proxy section. The apiKey and appSecret properties shown here are examples. You will need your own API key information to use an API. We also need to set a property called rootProperty in the reader section. Most APIs send back a ton of detailed information along with the request, and the store needs some idea of where to start loading in the category records. We can also add additional parameters later by calling the setExtraParam() function on our store proxy. This will let us add additional parameters to be sent to our external API URL. Please note that setExtraParam() will add an additional parameter but setExtraParams() will replace all of our extraParams with the new values. The basic application The Decider application is designed to use a combination of local storage, the Google Maps API, and the Foursquare API.
The application will take a list of people and their food preferences, and then use Foursquare and Google Maps to find nearby places to eat that will match everyone's food preferences. This screenshot provides a pictorial representation of the preceding explanation: Image Our contacts and categories will be stored using local storage. External APIs from Google and Foursquare will generate our maps and restaurant listings respectively. We will start with a quick overview of the basic application structure and forms, before diving into the store setup and API integration. Our main container is a simple card layout: code In this viewport we will add two cards: a navigation view and a form panel. Our navigationview will serve as our main window for display. We will add additional containers to it via our controller: code This mainView contains our navigationBar and our homeScreen container with the big Get Started button. This button will add new containers to the navigation view (we will look at this later in the controller). The second item that is added to our viewport is our form panel. This will contain text fields for first and last name, as well as a selectable list for our different food categories: code We close out the form with a segmentedbutton property, which has options for Save and Cancel. We will add the handler functions for these buttons later on in our controller. We also include a title bar at the top of the form to give the user some idea of what they are doing. One of the key pieces of this form is the categories list, so let's take a closer look at how it works.
Creating the categories list Since we will be getting our list of potential restaurants from the Foursquare API, we need to use their categories as well so that we can match things up with some degree of accuracy. The Foursquare API can be found at https://developer.foursquare.com/. As mentioned before, you will need a Foursquare account to access the API. You will also need an API key in order to integrate Foursquare with your application. We can use the Foursquare's API to get a list of categories, however the API returns a list of a few hundred categories including Airports, Trains, Taxis, Museums, and Restaurants. Additionally, each of these has its own subcategories. All we really want is the subcategories for Restaurants. To make things more complicated, Foursquare's API also returns the data like this: code This means we can only get at a specific category by its order in the array of categories. For example, if Restaurants is the twenty-third category in the array, we can get to it as: categories[23], but we cannot get to it by calling categories['Restaurants']. Unfortunately, if we use categories[23] and Foursquare adds a new category or changes the order, our application will break. This is a situation where it pays to be adaptable. Foursquare's API includes a console where we can try out our API requests. We can use this console to request the data for all of our categories and then pull the data we need into a flat file for our application. Check this URL to see the output: https://developer.foursquare.com/docs/explore#req=venues/categories We can copy just the Restaurant information that we need from categories and save this as a file called categories.json and call it from our store. A better solution to this conundrum would be to write some server code that would request the full category list from Foursquare and then pull out just the information we are interested in. But for the sake of brevity, we will just use a flat json file. 
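A more defensive alternative to hardcoding an index like categories[23] is to look the entry up by name at runtime, so a reordering on Foursquare's side does not break the application. This is a plain-JavaScript sketch rather than the book's flat-file approach, and the sample data is abbreviated:

```javascript
// Find a category by name instead of relying on its array position.
function findCategory(categories, name) {
  for (var i = 0; i < categories.length; i++) {
    if (categories[i].name === name) {
      return categories[i];
    }
  }
  return null; // not found
}

// Abbreviated sample shaped like the category array described above:
var categories = [
  { name: 'Airports' },
  { name: 'Museums' },
  { name: 'Restaurants', categories: [{ name: 'Argentinian Restaurant' }] }
];

var restaurants = findCategory(categories, 'Restaurants');
console.log(restaurants.categories[0].name); // → Argentinian Restaurant
```

The same lookup could run in server code that fetches the full category list and extracts just the Restaurants subtree.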
Each of our categories is laid out like this: code The main pieces we care about are the id, name, shortname, and icon values. This gives us a data model that looks like this: code Notice that we also add a function to create an image URL for the icons we need. We do this with the convert configuration, which lets us assemble the data for the image URL based on the other data in the record: code The convert function is automatically passed both the data value (v), which we ignore in this case, and the record (rec), which lets us create a valid Foursquare URL by combining the icon.prefix value, a number, and the icon.suffix value in our record. If you take a look at our previous category data example, this would yield a URL of: https://foursquare.com/img/categories_v2/food/argentinian_32.png. By changing the number we can control the size of the icon (this is part of the Foursquare API as well). We combine this with our XTemplate: code This gives us a very attractive list for choosing our categories: Images Next we need to take a look at the controller for the contact form. Creating the contact controller The contact controller handles saving the contact and canceling the action. We start out the controller by declaring our references and controls: code
However, since our list isn't really a standard form component we need to grab its selections separately and add them to the contact data as a comma-separated list. We do this by creating an empty array and using Ext.each() to loop through and run a function on all our categories. We then use join to implode the array into a comma-separated list. Finally, we save the contact and run our doCancel function to clean up and return to our main view. Now that we can add contacts we need to create a controller to handle our requests to the Foursquare and Google APIs, and get the data back to our users. Integrating with Google Maps and Foursquare Our application still has a couple of tasks to accomplish. It needs to: Handle the click of the Get Started button Add our maps panel and offer to adjust the current location via Google Maps API Display a list of friends to include in our search Display the search results in a list Display the details for a selected result We will start out with the basic skeleton of the controller, create the views and stores, and then finish up the controller to complete the application. Starting the mainView.js controller We will start the mainView.js controller file with some placeholders for the stores. We will add views later on and some references for those components. Keep in mind that when working with placeholders in this fashion the application will not be testable until all the files are actually in place. We create the mainView.js file in our controllers folder: code At the top of this configuration we require Ext.DateExtras. This file provides us with formatting options for date objects. If this file is not included, only the now() method for date objects will be available in your application. In our views section we have added placeholders for confirmLocation, restaurantList, friendChooser,and restaurantDetails. We will add these files later on, along with the RestaurantStore file listed in our stores section. 
We also have a number of references for these views, stores, and some of their sub-components. We will need to create these views before getting to the rest of our controller. We will take these views in the order the user will see them, starting with the confirmLocation view. Creating the confirmLocation view The confirmLocation view first appears when the user clicks on the Get Started button. This view will present the user with a map showing their current location and offer an option to switch to a different location if the user desires. The following screenshot gives a pictorial representation of the preceding code: Image In order to give ourselves a bit more flexibility, we will be using the Google Maps Tracker plugin as part of this view. You can find this plugin in your Sencha Touch 2 folder in examples/map/lib/plugin/google/Tracker.js. Copy the file into a lib/google folder in your main application folder and be sure to add it into the requires section of your app.js file: code This plugin will let us easily drop markers on the map. Once the Google Tracker plugin file is included in the application, we can set up our confirmLocation.js view like so: code The view itself is a simple container with some HTML at the top asking the user to confirm their location. Next we have a map container that uses our Google Tracker plugin to configure the map and animate the location marker to drop from the top of the screen to the current location of the user. The position configuration is a default location, which is used when the user denies the application access to their current location. This one is set to the Sencha Headquarters. Next we need a few options for the user to choose from: Cancel, New Location, and Next. We will add these as a segmented button under our map container. We add the code to the end of our items container (after the map container): code Each of our buttons has an associated action. 
This allows us to assign functions to each button within the mainView.js controller. By creating buttons in this fashion, we maintain separation between the display of the application and the functionality of the application. This is really helpful when you want to re-use a view component. The next view the user encounters is the Friends Chooser. Creating the Friends Chooser view The friendChooser.js file uses a similar list to our previous category chooser. This lets our users select multiple people to include in the restaurant search: Image Our friendChooser extends the Ext.Container component and allows the user to select from a list of friends: code As with our previous panel, we have a container with HTML at the top to provide some instructions to the user. Below that is our list container, which, like our category list, allows for selection of multiple items via the mode: 'MULTI' configuration. We also set grouped to true. This allows our store to group the contacts together by last name. If you take a look at the ContactStore.js file, you can see where we do: code This configuration returns the first letter of the last name for grouping. The last thing we need to do with our friendChooser.js file is add the buttons at the bottom to Cancel or Finish the search. The buttons go in the items section, just below the list: code As in our previous view, we use a segmentedbutton property with actions assigned to each of our individual buttons. Once the user clicks on Finish, we will need to return a list of restaurants they can select from. Creating the restaurant list, store, and details Our restaurant list will use a store and the Foursquare API to return a list of restaurants based on the shared preferences of everyone the user selected. The following screenshot exemplifies the preceding explanation: Image This component is pretty basic: code This component uses a simple list with a configuration option for onItemDisclosure:true.
This places an arrow next to the restaurant name in the list. The user will be able to click on the arrow and see the details for that restaurant (which we will create after the store). We also set grouped to true, only this time our store will use a function to calculate and sort by distance. Creating the restaurant store and model The restaurant store is where we set up our request to the Foursquare API: code The RestaurantStore.js file sets a model and storeId field for our store and then defines our proxy. The proxy section is where we set up our request to Foursquare. As we mentioned at the start of the article, this needs to be a jsonp request since it is going to another domain. We make our request to https://api.foursquare.com/v2/venues/search and we are looking for the response.venues section of the JSON array that gets returned. You will note that this store currently has no other parameters to send to Foursquare. We will add these later on in the controller before we load the store. For the model, we can consult the Foursquare API documentation to see the information that is returned for a restaurant (called a venue in Foursquare terms) at https://developer.foursquare.com/docs/responses/venue. You can include any of the fields listed on the page. For this app, we have chosen to include the following code in our model: code You can add more fields if you want to display more information in the details view. Creating the details view The details view is a simple panel and XTemplate combination. Using our controller, the panel will receive the data record when a user clicks on a restaurant in the list: code Since the tpl tag is basically HTML, you can use any CSS styling you like here. Keep in mind that certain fields such as contact, location, and categories can have more than one entry. You will need to use <tpl for="fieldname"> to loop through these values.
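The digging that the reader configuration does into the response can be pictured in plain JavaScript. This sketch assumes the venues array lives under response.venues, matching the live Foursquare v2 shape; the sample payload is abbreviated:

```javascript
// Pull the venue records out of a Foursquare-shaped payload, the way
// the store's reader does with its root property setting.
function readVenues(payload) {
  return (payload.response && payload.response.venues) || [];
}

// Abbreviated sample shaped like a /venues/search response:
var sample = {
  meta: { code: 200 },
  response: {
    venues: [
      { name: 'Chenal 9 IMAX Theatre', location: { city: 'Little Rock' } }
    ]
  }
};
console.log(readVenues(sample)[0].name); // → Chenal 9 IMAX Theatre
```

Falling back to an empty array keeps the store from choking when an error response arrives without a venues section.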
Now that the views are complete, we need to head back to our controller and add the functions to put everything together. Finishing the main view controller When we started out with our main controller, we added all of our views, stores, and references. Now it's time to add the functionality for the application. We start by adding a control section to the end of our config: code The controls are based on the references in the controller and they add functions to specific listeners on the component. These are each in the format of: code Once these controls are in place, we can add our functions after the config section of our controller. Our first function is doStart. This function loads our Contacts store and checks to see if we have any existing contacts. If not, we alert the user and offer to let them add some. If they have contacts we create a new instance of our confirmLocation container and push it onto the main navigation view: code Remember that since the mainView is a navigation view, a Back button will automatically be created in the top toolbar. This function will show the user our initial map panel with the users current location. This panel needs four functions: one to cancel the request, one to pop up a new location window, one to set the new location, and one to move on to the next step: code We actually want to be able to use the doCancel function from anywhere in the process. As we add new panels to our mainView navigation, these panels simply pile up in a stack. This means we need to get the number of panels currently on the mainView stack. We use length-1 to always leave the initial panel (the one with our big Get Started button) on the stack. We use pop to remove all but the first panel from the stack. This way the Cancel button will take us all the way back to the beginning of our stack, while the Back button will take us back just to the previous step. 
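The length-1 arithmetic in doCancel is easier to see with a plain array standing in for the navigation view's stack of panels. This sketch is an analogy, not Sencha Touch code:

```javascript
// Pop every panel except the first one off a stack, mirroring the
// doCancel behavior described above: Cancel returns to the start,
// while a single pop (the Back button) returns one step.
function cancelToStart(stack) {
  var popCount = stack.length - 1; // leave the initial panel in place
  for (var i = 0; i < popCount; i++) {
    stack.pop();
  }
  return stack;
}

var mainView = ['home', 'confirmLocation', 'friendChooser', 'restaurantList'];
console.log(cancelToStart(mainView)); // → [ 'home' ]
```

The same arithmetic works no matter how deep into the flow the user has gone, which is why one doCancel function can serve every panel.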
The next function is doNewLocation(), which uses Ext.Msg.prompt to ask the user to enter a new location: code If the user enters a new location, we call setNewLocation to process the text the user entered in the prompt textbox: code This code gets our map and encodes the text the user passed us as a geocode location. If Google returns a valid address, we center the map on the location and drop a marker to show the exact location. We also set the latitude and longitude so that we can reference them later. If we fail to get a valid address, we alert the user so they can fix it and try again. Once the user is happy with the location, they can click on the Next button, which fires our doChooseFriends function: This function pushes our friendchooser view onto the stack for display. The friendchooser view allows the user to select multiple friends and click on Cancel or Finish. Since we have already taken care of our Cancel button with our doCancel function, we just need to write the doShowRestaurants function. This function starts by looping through the selected friends. For the first one in the list, we grab the restaurant categories we have stored for the friend and convert it from a comma-separated list (which is how we stored it) into an array. This lets us grab every subsequent selection and run Ext.Array.intersect() to find the common categories between all of the selected friends: code Next, we load the store based on the common categories (by categoryID), the location data we have stored in our map, the client_id and client_secret values that comprise our Foursquare API key, and a radius value (in meters). We also send a required field called v that is set to the current date. Finally, we push our restaurant list component onto the stack of containers. This will display our list of results and allow the user to click on a restaurant for details.
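Ext.Array.intersect() is doing the heavy lifting here, and its effect can be reproduced in a few lines of plain JavaScript. The category ids below are placeholders, but the comma-separated storage format matches how the contacts were saved earlier:

```javascript
// Split each friend's stored comma-separated category list back into an
// array, then keep only the ids common to every friend -- the same
// result Ext.Array.intersect() produces.
function commonCategories(storedLists) {
  var arrays = storedLists.map(function (s) { return s.split(','); });
  return arrays.reduce(function (common, list) {
    return common.filter(function (id) { return list.indexOf(id) !== -1; });
  });
}

// Placeholder preference lists for three selected friends:
var friends = ['mexican,pizza,sushi', 'pizza,sushi,bbq', 'sushi,pizza'];
console.log(commonCategories(friends)); // → [ 'pizza', 'sushi' ]
```

Note that if the friends share no categories at all, the result is an empty array, so the controller should be prepared to tell the user no restaurant can please everyone.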
This brings us to our doShowRestaurantDetails function: code When the user taps one of the disclosure icons in our list of restaurants, we push a restaurantdetails view onto the stack of containers and set its data to the record that was tapped. This displays the details for the restaurant in our details XTemplate. Homework There are a number of additional features that can be added to this type of application, including: Editing for contacts (or automatically pulling friends from Facebook) Setting up a live feed for the categories menu Adding additional venues other than restaurants Combining the application with additional APIs such as Yelp for reviews Just remember the key requirements of using additional APIs: the API key(s), studying the API documentation, and using the JSONP store for grabbing the data. Summary In this article we talked about using external APIs to enhance your Sencha Touch applications. This included: An overview of API basics Putting together the basic application Interaction with Google Maps and Foursquare Building the views, models, and stores Building the application controller Resources for Article: Further resources on this subject: How to Use jQuery Mobile Grid and Columns Layout [Article] iPhone JavaScript: Installing Frameworks [Article] An Introduction to Rhomobile [Article]
QR Codes, Geolocation, Google Maps API, and HTML5 Video

Packt
07 Jun 2013
9 min read
(For more resources related to this topic, see here.) QR codes We love our smartphones. We love showing off what our smartphones can do. So, when those cryptic squares, as shown in the following figure, started showing up all over the place and befuddling the masses, smartphone users quickly stepped up and started showing people what it's all about in the same overly-enthusiastic manner that we whip them out to answer even the most trivial question heard in passing. And, since it looks like NFC isn't taking off anytime soon, we'd better be familiar with QR codes and how to leverage them. The data shows that knowledge and usage of QR codes is very high according to surveys (http://researchaccess.com/2012/01/new-data-on-qrcode-adoption/): More than two-thirds of smartphone users have scanned a code More than 70 percent of the users say they'd do it again (especially for a discount) Wait, what does this have to do with jQuery Mobile? Traffic. Big-time successful traffic. A banner ad is considered successful if only two percent of people click through (http://en.wikipedia.org/wiki/Clickthrough_rate). QR codes get more than 66 percent! I'd say it's a pretty good way to get people to our creations and, thus, should be of concern. But QR codes are for more than just URLs. Here we have a URL, a block of text, a phone number, and an SMS in the following QR codes: There are many ways to generate QR codes (http://www.the-qrcode-generator.com/, http://www.qrstuff.com/). Really, just search for QR Code Generator on Google and you'll have numerous options. Let us consider a local movie theater chain. Dickinson Theatres (dtmovies.com) has been around since the 1920s and is considering throwing its hat into the mobile ring. Perhaps they will invest in a mobile website, and go all-out in placing posters and ads in bus stops and other outdoor locations.
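The four QR code types shown earlier (URL, text, phone number, and SMS) differ only in the string they encode; the URI scheme tells the scanning app what to do with it. A sketch with illustrative values (the phone number is the one Dickinson uses elsewhere in this article; the text payload is made up):

```javascript
// The payload behind each QR code type is just a string; the scheme
// prefix determines the scanner's behavior when it is decoded.
var payloads = {
  url: 'http://dtmovies.com/',        // opens in the browser
  text: 'Now showing at Dickinson',   // displayed as plain text
  phone: 'tel:8165555555',            // offers to place a call
  sms: 'sms:8165555555'               // opens the SMS composer
};

console.log(payloads.phone.indexOf('tel:') === 0); // → true
```

Any of the QR generators mentioned above will accept these strings directly and render the corresponding code.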
Naturally, people are going to start scanning, and this is valuable to us because they're going to tell us exactly which locations are paying off. This is really a first in the advertising industry. We have a medium that seems to spur people to interact on devices that will tell us exactly where they were when they scanned it. Geolocation matters and this can help us find the right locations. Geolocation When GPS first came out on phones, it was pretty useless for anything other than police tracking in case of emergencies. Today, it is making the devices that we hold in our hands even more personal than our personal computers. For now, we can get a latitude, longitude, and timestamp very dependably. The geolocation API specification from the W3C can be found at http://dev.w3.org/geo/api/spec-source.html. For now, we'll pretend that we have a poster prompting the user to scan a QR code to find the nearest theater and show the timings. It would bring the user to a page like this: Since there's no better first date than dinner and a movie, the movie going crowd tends to skew a bit to the younger side. Unfortunately, that group does not tend to have a lot of money. They may have more feature phones than smartphones. Some might only have very basic browsers. Maybe they have JavaScript, but we can't count on it. If they do, they might have geolocation. Regardless, given the audience, progressive enhancement is going to be the key. The first thing we'll do is create a base level page with a simple form that will submit a zip code to a server. Since we're using our template from before, we'll add validation to the form for anyone who has JavaScript using the validateMe class. If they have JavaScript and geolocation, we'll replace the form with a message saying that we're trying to find their location. For now, don't worry about creating this file. The source code is incomplete at this stage. 
This page will evolve and the final version will be in the source package for the article in the file called qrresponse.php as shown in the following code:

<?php
$documentTitle = "Dickinson Theatres";
$headerLeftHref = "/";
$headerLeftLinkText = "Home";
$headerLeftIcon = "home";
$headerTitle = "";
$headerRightHref = "tel:8165555555";
$headerRightLinkText = "Call";
$headerRightIcon = "grid";
$fullSiteLinkHref = "/";
?>
<!DOCTYPE html>
<html>
<head>
<?php include("includes/meta.php"); ?>
</head>
<body>
<div id="qrfindclosest" data-role="page">
  <div class="logoContainer ui-shadow"></div>
  <div data-role="content">
    <div id="latLong">
      <form id="findTheaterForm" action="fullshowtimes.php" method="get" class="validateMe">
        <p>
          <label for="zip">Enter Zip Code</label>
          <input type="tel" name="zip" id="zip" class="required number"/>
        </p>
        <p><input type="submit" value="Go"></p>
      </form>
    </div>
    <p>
      <ul id="showing" data-role="listview" class="movieListings" data-dividertheme="g">
      </ul>
    </p>
  </div>
  <?php include("includes/footer.php"); ?>
</div>
<script type="text/javascript">
//We'll put our page specific code here soon
</script>
</body>
</html>

For anyone who does not have JavaScript, this is what they will see, nothing special. We could spruce it up with a little CSS, but what would be the point? If they're on a browser that doesn't have JavaScript, there's a pretty good chance their browser is also miserable at rendering CSS. That's fine really. After all, progressive enhancement doesn't necessarily mean making it wonderful for everyone, it just means being sure it works for everyone. Most will never see this but if they do, it will work just fine. For everyone else, we'll need to start working with JavaScript to get our theater data in a format we can digest programmatically. JSON is perfectly suited for this task. If you are already familiar with the concept of JSON, skip to the next paragraph now.
If you're not familiar with it, basically, it's another way of shipping data across the Interwebs. It's like XML but more useful. It's less verbose and can be directly interacted with and manipulated using JavaScript because it's actually written in JavaScript. JSON is an acronym for JavaScript Object Notation. A special thank you goes out to Douglas Crockford (the father of JSON). XML still has its place on the server. It has no business in the browser as a data format if you can get JSON. This is such a widespread view that at the last developer conference I went to, one of the speakers chuckled as he asked, "Is anyone still actually using XML?" { "theaters":[ { "id":161, "name":"Chenal 9 IMAX Theatre", "address":"17825 Chenal Parkway", "city":"Little Rock", "state":"AR", "zip":"72223", "distance":9999, "geo":{"lat":34.7684775,"long":-92.4599322}, "phone":"501-821-2616" }, { "id":158, "name":"Gateway 12 IMAX Theatre", "address":"1935 S. Signal Butte", "city":"Mesa", "state":"AZ", "zip":"85209", "distance":9999, "geo":{"lat":33.3788674,"long":-111.6016081}, "phone":"480-354-8030" }, { "id":135, "name":"Northglen 14 Theatre", "address":"4900 N.E. 80th Street", "city":"Kansas City", "state":"MO", "zip":"64119", "distance":9999, "geo":{"lat":39.240027,"long":-94.5226432}, "phone":"816-468-1100" } ] } Now that we have data to work with, we can prepare the on-page scripts. Let's put the following chunks of JavaScript in a script tag at the bottom of the HTML where we had the comment: We'll put our page specific code here soon //declare our global variables var theaterData = null; var timestamp = null; var latitude = null; var longitude = null; var closestTheater = null; //Once the page is initialized, hide the manual zip code form //and place a message saying that we're attempting to find //their location. 
$(document).on("pageinit", "#qrfindclosest", function(){
  if(navigator.geolocation){
    $("#findTheaterForm").hide();
    $("#latLong").append("<p id='finding'>Finding your location...</p>");
  }
});

//Once the page is showing, go grab the theater data and find out
//which one is closest.
$(document).on("pageshow", "#qrfindclosest", function(){
  theaterData = $.getJSON("js/theaters.js", function(data){
    theaterData = data;
    selectClosestTheater();
  });
});

function selectClosestTheater(){
  navigator.geolocation.getCurrentPosition(
    function(position) { //success
      latitude = position.coords.latitude;
      longitude = position.coords.longitude;
      timestamp = position.timestamp;
      for(var x = 0; x < theaterData.theaters.length; x++) {
        var theater = theaterData.theaters[x];
        var distance = getDistance(latitude, longitude, theater.geo.lat, theater.geo.long);
        theaterData.theaters[x].distance = distance;
      }
      theaterData.theaters.sort(compareDistances);
      closestTheater = theaterData.theaters[0];
      _gaq.push(['_trackEvent', "qr", "ad_scan", (""+latitude+","+longitude)]);
      var dt = new Date();
      dt.setTime(timestamp);
      $("#latLong").html("<div class='theaterName'>"
        +closestTheater.name+"</div><strong>"
        +closestTheater.distance.toFixed(2)
        +" miles</strong><br/>"
        +closestTheater.address+"<br/>"
        +closestTheater.city+", "+closestTheater.state+" "
        +closestTheater.zip+"<br/><a href='tel:"
        +closestTheater.phone+"'>"
        +closestTheater.phone+"</a>");
      $("#showing").load("showtimes.php", function(){
        $("#showing").listview('refresh');
      });
    },
    function(error){ //error
      switch(error.code) {
        case error.TIMEOUT:
          $("#latLong").prepend("<div class='ui-bar-e'>Unable to get your position: Timeout</div>");
          break;
        case error.POSITION_UNAVAILABLE:
          $("#latLong").prepend("<div class='ui-bar-e'>Unable to get your position: Position unavailable</div>");
          break;
        case error.PERMISSION_DENIED:
          $("#latLong").prepend("<div class='ui-bar-e'>Unable to get your position: Permission denied. You may want to check your settings.</div>");
          break;
        case error.UNKNOWN_ERROR:
          $("#latLong").prepend("<div class='ui-bar-e'>Unknown error while trying to access your position.</div>");
          break;
      }
      $("#finding").hide();
      $("#findTheaterForm").show();
    },
    {maximumAge:600000}); //nothing too stale
}

The key here is the function geolocation.getCurrentPosition, which will prompt the user to allow us access to their location data, as shown here on iPhone. If somebody is a privacy advocate, they may have turned off all location services. In this case, we'll need to inform the user that their choice has impacted our ability to help them. That's what the error function is all about. In such a case, we'll display an error message and show the standard form again.
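The code above calls getDistance() and compareDistances(), which are defined elsewhere on the site and not shown in this excerpt. A plausible sketch (an assumption, not the book's exact implementation) is a haversine great-circle distance in miles, plus a numeric comparator for Array.prototype.sort():

```javascript
// Haversine distance in miles between two latitude/longitude pairs (sketch).
function getDistance(lat1, lon1, lat2, lon2) {
  function toRad(deg) { return deg * Math.PI / 180; }
  var R = 3959; // mean Earth radius in miles
  var dLat = toRad(lat2 - lat1);
  var dLon = toRad(lon2 - lon1);
  var a = Math.sin(dLat / 2) * Math.sin(dLat / 2) +
          Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) *
          Math.sin(dLon / 2) * Math.sin(dLon / 2);
  return 2 * R * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
}

// Comparator so that theaters sort ascending by the distance we stamped on them.
function compareDistances(a, b) {
  return a.distance - b.distance;
}
```

With these two in place, theaterData.theaters.sort(compareDistances) leaves the closest theater at index 0, which is exactly what selectClosestTheater() relies on.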
Working with Windows Phone Controls

Packt
06 Jun 2013
14 min read
(For more resources related to this topic, see here.)

Supported controls in Windows Phone

The following list illustrates the different controls supported in Windows Phone. These controls are included in the System.Windows.Controls namespace in the .NET Framework class library for Silverlight:

Button: As the name goes, this is a button wherein a user interacts by clicking on it. On clicking, it raises an event.
HyperlinkButton: This is a button control that displays a hyperlink. When clicked, it allows users to navigate to an external web page or content within the same application.
ProgressBar: This is a control that indicates the progress of an operation.
MessageBox: This is a control that is used to display a message to the user and optionally prompts for a response.
TextBox: This is a control that allows users to enter single or multiple lines of text.
Checkbox: This is a control that a user can select or clear, that is, the control can be checked or unchecked.
ListBox: This is a control that contains a list of selectable items.
PasswordBox: This is a control used for entering passwords.
RadioButton: This is a control that allows users to select one option from a group of options.
Slider: This is a control that allows users to select from a range of values by moving a thumb control along a track.

Hello world in F#

The previous section gave us an insight into the different controls available for Windows Phone applications. Before understanding how to work with them, let's create a Windows Phone "Hello World" application using F#. The following steps will help us create the application: Create a new project of type F# and C# Windows Phone Application (Silverlight). A solution with App and AppHost projects will be created: Image In the App project, we will have the main visual for the application, called MainPage.xaml. If you open MainPage.xaml, you will notice that MainPage is actually a PhoneApplicationPage type.
This is evident from the following XAML declaration: code Note the x:Class attribute; this denotes that the XAML contains a counterpart class called MainPage available in the WindowsPhoneApp namespace. The MainPage class can be found in the AppLogic.fs file in the App project. Let us take a closer look at the UI itself. The main content of the application is contained in a grid. A grid is a layout control that is used to define a flexible area that consists of rows and columns. The body contains three TextBlock controls. A TextBlock control, as the name suggests, is used to display a small block of text. We have three TextBlock controls on the page: one for ApplicationTitle, another for PageTitle, and the last one for Results. There is also an empty grid named ContentGrid, and this is where we will be creating our "Hello World" experiment. The XAML for the content is shown as follows: code As you can see from the code, ContentGrid is empty. So let's place a TextBlock control and a Button element inside ContentGrid. The idea is to generate the text "Hello World" when we click on the button. First, let's take a look at the XAML portion of the "Hello World" experiment in MainPage.xaml: code Pay attention to the TextBlock element and the Button names. We have the TextBlock control named txtMessage and the Button named btnSayHelloButton. Now the second part of this experiment is to wire up the button's Click event with an event handler in the MainPage class. In the AppLogic.fs file, find the MainPage type and add the following code: code First we create a reference to the text block and the button. Then we add an event handler to the button's Click event. In F#, the way we add event handlers is by writing a function using the fun keyword. The _ (underscore) tells the F# compiler to ignore the parameters of the function, and then we define the body of the function. On button click, we just change the text of the text block to say "Hello World!".
Well, that's all there is to this "Hello World" experiment. Notice the use of the ? operator. This is not F#-specific code. Rather, the project template creates a module in the AppLogic.fs file called Utilities. There, ? is defined as an operator that can be used for dynamic lookup of XAML object names for binding purposes. The code snippet of the operator is shown as follows: code Now let's build and run the project. Windows Phone Emulator will be invoked by Visual Studio to deploy the app we just built. You will see a text block with the text Click the button and a button with the text Click to Say Hello!. When the button is clicked, the text block will show the text Hello World!. The screenshots of the final output are shown as follows: Image

Working with the Button control

A button, as the name goes, is a rectangular control that allows a user to click on it, and when clicked, raises a Click event. We can add an event handler for this Click event; when the Click event occurs, the event handler will be notified and we can run whatever business logic we need against the button click. Let's see how to work with the button control. Create a project and add three buttons in the XAML code. For the first button, we will set its properties from the XAML itself. For the second button, we will set the properties from the code. For the third button, we will set its properties in its Click event. The XAML code snippet is shown as follows: code For the second and third buttons, except for their Content attributes, nothing is set in XAML. The properties for the second button are set in the page load event in the MainPage class. The properties for the third button are set in an event handler when the third button is clicked. Now let us see the F# code snippet for this in the MainPage class: code One thing to learn here: whatever properties can be set from XAML can also be set from the code.
The preceding demo shows how, at page load and with event handlers, a control's properties can be changed at runtime. The screenshot of the final output is shown as follows: Image

Working with the Checkbox control

As mentioned earlier, Checkbox is a control that allows a user to select or clear an option. We can use a Checkbox control to provide a list of options that a user can select, for example a list of settings to apply in an application. The Checkbox control can have three states, namely Checked, Unchecked, and Indeterminate. To demonstrate this control's usage, let's build a demo that contains two checkboxes. The first checkbox demonstrates the Checked and Unchecked states. The second checkbox demonstrates the Checked, Unchecked, and Indeterminate states. We will handle the Checked event when checkboxes are checked, and the Unchecked event when checkboxes are unchecked. The XAML code snippet for this demo is shown as follows: code As you can see, we have two checkboxes stacked vertically, one below the other. StackPanel is a layout control which, as its name goes, just stacks its child content either vertically or horizontally. The second checkbox has a Boolean property named IsThreeState set to true. That means this checkbox will have three states: Checked, Unchecked, and Indeterminate. Checkboxes expose Checked, Unchecked, and Indeterminate events. We will wire up event handlers for these events and write out a message to the txtMessage text block, as seen in the code snippet. The following is the code snippet where we handle the events: code We first get a reference to the checkbox controls. Then we wire up the Checked and Unchecked events. For the second checkbox, since it supports the Indeterminate state, we wire up the Indeterminate event too. When you run the app and select or clear any checkbox, a message will be shown in the text block.
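The original XAML snippet for the checkbox demo was not preserved in this extract. A hypothetical fragment matching the description above (two checkboxes in a StackPanel, the second three-state; element names are illustrative assumptions, not necessarily the book's) could look like this:

```xml
<StackPanel>
    <!-- Two-state checkbox: raises Checked / Unchecked only -->
    <CheckBox x:Name="chkTwoState" Content="Two-state option"/>
    <!-- IsThreeState adds the Indeterminate state -->
    <CheckBox x:Name="chkThreeState" Content="Three-state option"
              IsThreeState="True"/>
    <!-- Target for the status messages written by the event handlers -->
    <TextBlock x:Name="txtMessage"/>
</StackPanel>
```

The Checked, Unchecked, and Indeterminate handlers would then be attached to these named controls from the F# code, as described above.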
The screenshot of the output is shown as follows: Image

Working with the Hyperlink control

Hyperlink is a control that presents a button control with a hyperlink. When the hyperlink is clicked, it will navigate to the URI specified, which can be an external web page or content within the app. We specify the URI to navigate to through the NavigateUri property. The XAML code snippet for this control is shown as follows: code The same effect can be obtained using code. On page load, we would have to just set the NavigateUri property, and when the user clicks on the hyperlink button, he will be navigated to the set URI.

Working with the ListBox control

A ListBox control represents a list of selectable items. It basically displays a collection of items. More than one item in a ListBox control is visible at a time. As part of the demo app, we will create a listbox and fill it with available color names. When an item is selected in the listbox, we will set the background of the listbox to the selected item. The XAML code snippet is shown as follows: code The code to fill up the listbox with the names of the colors, along with the event handler to handle the listbox's SelectionChanged event, is shown as follows: code For filling up the listbox with color names, we iterate through the public properties of the System.Windows.Media.Colors class. The Colors class implements a set of predefined colors. We fill the listbox with the names of the predefined colors by adding them to the Items collection of the listbox. To handle item selection change, we handle the SelectionChanged event. First, we get the SelectedItem property, and since we know it's a string in our case, we convert it into a string. Then we get the Color property by making use of the string that we converted from SelectedItem. Once we get the color, we set the background of the listbox to the color selected.
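The listbox XAML was also stripped from this extract. For the demo just described, a hypothetical fragment (names are illustrative assumptions) might be as simple as:

```xml
<!-- Items are added from F# code by iterating System.Windows.Media.Colors;
     SelectionChanged is likewise wired up from code -->
<ListBox x:Name="lstColors" Height="400"/>
```

The interesting work (filling Items and handling SelectionChanged) happens in the F# code-behind, so the markup itself stays minimal.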
The final output of this demo is shown as follows: Image

Working with the MessageBox control

In this section we will take a look at the MessageBox control. This control displays a message to the user and optionally prompts for a response. The MessageBox class provides a static Show method, which can be used to display the message in a simple dialog box. The dialog box is modal and provides an OK button. The code to work with the MessageBox control is shown next. Note that this can be worked with only from the code and not from the XAML. First, we show a message box with the OK and Cancel buttons. When a user clicks on the OK button, we show a simple message box with just the OK button. code The final output of this demo is shown as follows: Image

Working with the PasswordBox control

PasswordBox, as the name suggests, is used to enter a password in applications. The user cannot view the entered text; only password characters that represent the text are displayed. The password character to be displayed can be specified by using the PasswordChar property. Add PasswordBox, Button, and TextBlock controls in the XAML code. The idea is to enter some text in the PasswordBox control, and on clicking the button, show the password text in the text block. The XAML for this demo is shown as follows: code The code to handle the button click and display the entered password in the text block is shown as follows: code The password box contains a property called Password, which can be used to read the entered password. The final output of the demo is shown as follows: Image

Working with the ProgressBar control

The ProgressBar control is used to display the progress of an operation. This is often used in UI layouts to indicate a long-running operation. One of the requirements of a Windows Phone app is to include a progress bar and show a progress animation whenever the application performs a long-running task. The progress bar can have a range between Minimum and Maximum values.
It also has an IsIndeterminate property, which means no Minimum and Maximum values are set and the progress bar displays a repeating pattern. This control is predominantly declared in XAML and its visibility is controlled by the code. The XAML code snippet is shown as follows: code

Working with the RadioButton control

RadioButton is a control that represents a button that allows a user to select a single option from a group of options. A RadioButton control is usually used as one item in a group of RadioButton controls. RadioButtons can be grouped by setting their GroupName property; to group radio buttons, the GroupName property on each of the radio buttons should have the same value. RadioButtons have two states, namely selected or cleared, and expose the IsSelected property, which will let us know whether a radio button is selected or not. Create three radio buttons in XAML. Two of them will be grouped and one will be ungrouped. We will listen for the Checked event on the radio buttons and update a text block with the appropriate message. The XAML code snippet is shown as follows: code As you can see, the first two radio buttons have their GroupName property set, whereas the last radio button does not have any GroupName set. We will wire up the Checked event on all three radio buttons and update the text block with information such as which radio button was clicked. The code snippet is shown as follows: code The output from this demo is shown as follows: Image

Working with the Slider control

The Slider control represents a control that lets users select from a range of values by moving a thumb control along a track. The Slider control exposes certain properties that can be set to customize the functioning of the slider. We can set the Orientation property to orient the slider either horizontally or vertically. We can change the direction of the increasing value with IsDirectionReversed. The range of values can be set using the Minimum and Maximum properties.
The Value property can be used to set the current position of the slider. Add a Slider control to the XAML. Set its Minimum to 0 and Maximum to 10. When the user changes the position of the thumb on the slider, we will listen to the ValueChanged event on the slider and show the current value in a text block. The XAML snippet for the slider is shown as follows: code As you can see, we set the Minimum and Maximum range in the XAML. From the code, we wire up the ValueChanged event. Whenever a user changes the value using the thumb on the slider, the ValueChanged event will be fired; we just read the current value of the slider and update a text block. The final output of this demo is shown as follows: Image

Working with the TextBox control

The TextBox control can be used to display single or multiline text. It is often used to accept user input in applications and is one of the most widely used controls for data input. On a Windows Phone, whenever a textbox gets focus, an on-screen keyboard known as the Software Input Panel (SIP) will be shown automatically by the Windows Phone OS. If we do not want the user to edit the text, we can set the IsReadOnly property on the textbox to true. This will prevent the user from typing anything in the textbox. We can read the value entered in a textbox using the Text property. The XAML snippet for a simple textbox is shown as follows: code A screenshot of a simple textbox with the SIP displayed when the textbox gets focus is shown as follows: Image

Summary

In this article, we took a lap around the supported controls for the Silverlight runtime on the Windows Phone platform. We looked at the XAML way of defining the controls and also how to programmatically work with these controls in the code. We learnt what properties each control exposes and how to wire up the events supported by each control.
Resources for Article: Further resources on this subject: Deploying .NET-based Applications on to Microsoft Windows CE Enabled Smart Devices [Article], Development of iPhone Applications [Article], Getting Started with Internet Explorer Mobile [Article]
Android Native Application API

Packt
13 May 2013
21 min read
(For more resources related to this topic, see here.) Based on the features provided by the functions defined in these header files, the APIs can be grouped as follows:

Activity lifecycle management: native_activity.h, looper.h
Window management: rect.h, window.h, native_window.h, native_window_jni.h
Input (including key and motion events) and sensor events: input.h, keycodes.h, sensor.h
Assets, configuration, and storage management: configuration.h, asset_manager.h, asset_manager_jni.h, storage_manager.h, obb.h

In addition, Android NDK also provides a static library named native app glue to help create and manage native activities. The source code of this library can be found under the sources/android/native_app_glue/ directory. In this article, we will first introduce the creation of a native activity with the simple callback model provided by native_activity.h, and the more complicated but flexible two-threaded model enabled by the native app glue library. We will then discuss window management in Android NDK, where we will draw something on the screen from the native code. Input event handling and sensor access are introduced next. Lastly, we will introduce asset management, which manages the files under the assets folder of our project. Note that the APIs covered in this article can be used to get rid of the Java code completely, but we don't have to do so. The Managing assets at Android NDK recipe provides an example of using the asset management API in a mixed-code Android project. Before we start, it is important to keep in mind that although no Java code is needed in a native activity, the Android application still runs on Dalvik VM, and a lot of Android platform features are accessed through JNI. The Android native application API just hides the Java world for us.
Creating a native activity with the native_activity.h interface

The Android native application API allows us to create a native activity, which makes writing Android apps in pure native code possible. This recipe introduces how to write a simple Android application with pure C/C++ code.

Getting ready

Readers are expected to have a basic understanding of how to invoke JNI functions.

How to do it…

The following steps create a simple Android NDK application without a single line of Java code: Create an Android application named NativeActivityOne. Set the package name as cookbook.chapter5.nativeactivityone. Right-click on the NativeActivityOne project, select Android Tools | Add Native Support. Change the AndroidManifest.xml file as follows:

<manifest package="cookbook.chapter5.nativeactivityone"
    android:versionCode="1"
    android:versionName="1.0">
  <uses-sdk android:minSdkVersion="9"/>
  <application android:label="@string/app_name"
      android:icon="@drawable/ic_launcher"
      android:hasCode="true">
    <activity android:name="android.app.NativeActivity"
        android:label="@string/app_name"
        android:configChanges="orientation|keyboardHidden">
      <meta-data android:name="android.app.lib_name"
          android:value="NativeActivityOne" />
      <intent-filter>
        <action android:name="android.intent.action.MAIN" />
        <category android:name="android.intent.category.LAUNCHER" />
      </intent-filter>
    </activity>
  </application>
</manifest>

We should ensure that the following are set correctly in the preceding file:

The activity name must be set to android.app.NativeActivity.
The value of the android.app.lib_name metadata must be set to the native module name without the lib prefix and .so suffix.
android:hasCode needs to be set to true, which indicates that the application contains code.

Note that the documentation in <NDK root>/docs/NATIVE-ACTIVITY.HTML gives an example of the AndroidManifest.xml file with android:hasCode set to false, which will not allow the application to start.
Add two files named NativeActivityOne.cpp and mylog.h under the jni folder. The ANativeActivity_onCreate method should be implemented in NativeActivityOne.cpp. The following is an example of the implementation:

void ANativeActivity_onCreate(ANativeActivity* activity,
        void* savedState, size_t savedStateSize) {
    printInfo(activity);
    activity->callbacks->onStart = onStart;
    activity->callbacks->onResume = onResume;
    activity->callbacks->onSaveInstanceState = onSaveInstanceState;
    activity->callbacks->onPause = onPause;
    activity->callbacks->onStop = onStop;
    activity->callbacks->onDestroy = onDestroy;
    activity->callbacks->onWindowFocusChanged = onWindowFocusChanged;
    activity->callbacks->onNativeWindowCreated = onNativeWindowCreated;
    activity->callbacks->onNativeWindowResized = onNativeWindowResized;
    activity->callbacks->onNativeWindowRedrawNeeded = onNativeWindowRedrawNeeded;
    activity->callbacks->onNativeWindowDestroyed = onNativeWindowDestroyed;
    activity->callbacks->onInputQueueCreated = onInputQueueCreated;
    activity->callbacks->onInputQueueDestroyed = onInputQueueDestroyed;
    activity->callbacks->onContentRectChanged = onContentRectChanged;
    activity->callbacks->onConfigurationChanged = onConfigurationChanged;
    activity->callbacks->onLowMemory = onLowMemory;
    activity->instance = NULL;
}

Add the Android.mk file under the jni folder:

LOCAL_PATH := $(call my-dir)
include $(CLEAR_VARS)
LOCAL_MODULE := NativeActivityOne
LOCAL_SRC_FILES := NativeActivityOne.cpp
LOCAL_LDLIBS := -landroid -llog
include $(BUILD_SHARED_LIBRARY)

Build the Android application and run it on an emulator or a device. Start a terminal and display the logcat output using the following:

$ adb logcat -v time NativeActivityOne:I *:S

Alternatively, you can use the logcat view in Eclipse to see the logcat output. When the application starts, you should be able to see the following logcat output: As shown in the screenshot, a few Android activity lifecycle callback functions are executed.
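The recipe references a mylog.h header but never shows its contents. A common pattern (this is a sketch under the assumption that the header simply wraps the NDK logging API; it is not the book's exact file) maps a LOGI macro onto __android_log_print when building for Android, with a plain printf fallback so the macro can also be exercised off-device:

```c
#include <stdio.h>

#ifdef __ANDROID__
#include <android/log.h>
#define LOG_TAG "NativeActivityOne"
/* The level argument mirrors the LOGI(level, fmt, ...) calls in the recipe. */
#define LOGI(level, fmt, ...) \
    __android_log_print(ANDROID_LOG_INFO, LOG_TAG, fmt, ##__VA_ARGS__)
#else
/* Off-device fallback: also count the calls so the macro is easy to test. */
static int logi_calls = 0;
#define LOGI(level, fmt, ...) \
    (logi_calls++, printf("I/%d: " fmt "\n", (int)(level), ##__VA_ARGS__))
#endif
```

A lifecycle callback registered in ANativeActivity_onCreate would then just log its own name, for example: static void onStart(ANativeActivity* activity) { LOGI(1, "onStart"); }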
We can manipulate the phone to cause other callbacks to be executed. For example, long-pressing the home button and then pressing the back button will cause the onWindowFocusChanged callback to be executed.

How it works…

In our example, we created a simple, "pure" native application to output logs when the Android framework calls into the callback functions defined by us. The "pure" native application is not really pure native. Although we did not write a single line of Java code, the Android framework still runs some Java code on Dalvik VM. The Android framework provides an android.app.NativeActivity class to help us create a "native" activity. In a typical Java activity, we extend android.app.Activity and override the activity lifecycle methods. NativeActivity is also a subclass of android.app.Activity and does similar things. At the start of a native activity, NativeActivity will call ANativeActivity_onCreate, which is declared in native_activity.h and implemented by us. In the ANativeActivity_onCreate method, we can register our callback methods to handle activity lifecycle events and user inputs. At runtime, NativeActivity will invoke these native callback methods when the corresponding events occur. In a word, NativeActivity is a wrapper that hides the managed Android Java world for our native code, and exposes the native interfaces defined in native_activity.h.

The ANativeActivity data structure: Every callback method in the native code accepts an instance of the ANativeActivity structure.
Android NDK defines the ANativeActivity data structure in native_activity.h as follows:

typedef struct ANativeActivity {
    struct ANativeActivityCallbacks* callbacks;
    JavaVM* vm;
    JNIEnv* env;
    jobject clazz;
    const char* internalDataPath;
    const char* externalDataPath;
    int32_t sdkVersion;
    void* instance;
    AAssetManager* assetManager;
} ANativeActivity;

The various attributes of the preceding code are explained as follows:

callbacks: It is a data structure that defines all the callbacks that the Android framework will invoke on the main UI thread.
vm: It is the application process's global Java VM handle. It is used in some JNI functions.
env: It is a JNIEnv interface pointer. JNIEnv is used through thread-local storage, so this field is only accessible through the main UI thread.
clazz: It is a reference to the android.app.NativeActivity object created by the Android framework. It can be used to access fields and methods in the android.app.NativeActivity Java class. In our code, we accessed the toString method of android.app.NativeActivity.
internalDataPath: It is the internal data directory path for the application.
externalDataPath: It is the external data directory path for the application. internalDataPath and externalDataPath are NULL on Android 2.3.x. This is a known bug and has been fixed since Android 3.0. If we are targeting devices lower than Android 3.0, then we need to find other ways to get the internal and external data directories.
sdkVersion: It is the Android platform's SDK version code. Note that this refers to the version of the device/emulator that runs the app, not the SDK version used in our development.
instance: It is not used by the framework. We can use it to store user-defined data and pass it around.
assetManager: It is a pointer to the app's instance of the asset manager. We will need it to access asset data.
We will discuss it in more detail in the Managing assets at Android NDK recipe of this article.

There's more…

The native_activity.h interface provides a simple single-threaded callback mechanism, which allows us to write an activity without Java code. However, this single-threaded approach implies that we must quickly return from our native callback methods. Otherwise, the application will become unresponsive to user actions (for example, when we touch the screen or press the Menu button, the app does not respond because the GUI thread is busy executing the callback function). A way to solve this issue is to use multiple threads. For example, many games take a few seconds to load. We will need to offload the loading to a background thread, so that the UI can display the loading progress and be responsive to user inputs. Android NDK comes with a static library named android_native_app_glue to help us in handling such cases. The details of this library are covered in the Creating a native activity with the Android native app glue recipe. A similar problem exists for Java activities. For example, if we write a Java activity that searches the entire device for pictures in onCreate, the application will become unresponsive. We can use AsyncTask to search and load pictures in the background, and let the main UI thread display a progress bar and respond to user inputs.

Creating a native activity with the Android native app glue

The previous recipe described how the interface defined in native_activity.h allows us to create a native activity. However, all the callbacks defined are invoked on the main UI thread, which means we cannot do heavy processing in the callbacks. Android SDK provides AsyncTask, Handler, Runnable, Thread, and so on, to help us handle things in the background and communicate with the main UI thread. Android NDK provides a static library named android_native_app_glue to help us execute callback functions and handle user inputs in a separate thread.
This recipe will discuss the android_native_app_glue library in detail.

Getting ready

The android_native_app_glue library is built on top of the native_activity.h interface. Therefore, readers are recommended to read the Creating a native activity with the native_activity.h interface recipe before going through this one.

How to do it…

The following steps create a simple Android NDK application based on the android_native_app_glue library: Create an Android application named NativeActivityTwo. Set the package name as cookbook.chapter5.nativeactivitytwo. Right-click on the NativeActivityTwo project, select Android Tools | Add Native Support. Change the AndroidManifest.xml file as follows:

<manifest package="cookbook.chapter5.nativeactivitytwo"
    android:versionCode="1"
    android:versionName="1.0">
  <uses-sdk android:minSdkVersion="9"/>
  <application android:label="@string/app_name"
      android:icon="@drawable/ic_launcher"
      android:hasCode="true">
    <activity android:name="android.app.NativeActivity"
        android:label="@string/app_name"
        android:configChanges="orientation|keyboardHidden">
      <meta-data android:name="android.app.lib_name"
          android:value="NativeActivityTwo" />
      <intent-filter>
        <action android:name="android.intent.action.MAIN" />
        <category android:name="android.intent.category.LAUNCHER" />
      </intent-filter>
    </activity>
  </application>
</manifest>

Add two files named NativeActivityTwo.cpp and mylog.h under the jni folder.
NativeActivityTwo.cpp is shown as follows:

#include <jni.h>
#include <android_native_app_glue.h>
#include "mylog.h"

void handle_activity_lifecycle_events(struct android_app* app, int32_t cmd) {
    LOGI(2, "%d: dummy data %d", cmd, *((int*)(app->userData)));
}

void android_main(struct android_app* app) {
    app_dummy(); // Make sure glue isn't stripped.
    int dummyData = 111;
    app->userData = &dummyData;
    app->onAppCmd = handle_activity_lifecycle_events;
    while (1) {
        int ident, events;
        struct android_poll_source* source;
        if ((ident = ALooper_pollAll(-1, NULL, &events, (void**)&source)) >= 0) {
            if (source != NULL) { // source can be NULL for user-added identifiers
                source->process(app, source);
            }
        }
    }
}

Add the Android.mk file under the jni folder:

LOCAL_PATH := $(call my-dir)
include $(CLEAR_VARS)
LOCAL_MODULE := NativeActivityTwo
LOCAL_SRC_FILES := NativeActivityTwo.cpp
LOCAL_LDLIBS := -llog -landroid
LOCAL_STATIC_LIBRARIES := android_native_app_glue
include $(BUILD_SHARED_LIBRARY)
$(call import-module,android/native_app_glue)

Build the Android application and run it on an emulator or device. Start a terminal and display the logcat output by using the following command:

adb logcat -v time NativeActivityTwo:I *:S

When the application starts, you should be able to see the following logcat output and the device screen will show a black screen: On pressing the back button, the following output will be shown:

How it works…

This recipe demonstrates how the android_native_app_glue library is used to create a native activity. The following steps should be followed to use the android_native_app_glue library: Implement a function named android_main. This function should implement an event loop, which will poll for events continuously. This method will run in the background thread created by the library. Two event queues are attached to the background thread by default, including the activity lifecycle event queue and the input event queue.
When polling events using the looper created by the library, you can identify where an event comes from by checking the returned identifier (either LOOPER_ID_MAIN or LOOPER_ID_INPUT). It is also possible to attach additional event queues to the background thread.

When an event is returned, the data pointer will point to an android_poll_source data structure. We can call the process function of this structure. process is a function pointer, which points to android_app->onAppCmd for activity lifecycle events and android_app->onInputEvent for input events. We can provide our own processing functions and direct the corresponding function pointers to these functions.

In our example, we implement a simple function named handle_activity_lifecycle_events and point the android_app->onAppCmd function pointer to it. This function simply prints the cmd value and the user data passed along with the android_app data structure. cmd is defined in android_native_app_glue.h as an enum. For example, when the app starts, the cmd values are 10, 11, 0, 1, and 6, which correspond to APP_CMD_START, APP_CMD_RESUME, APP_CMD_INPUT_CHANGED, APP_CMD_INIT_WINDOW, and APP_CMD_GAINED_FOCUS respectively.

android_native_app_glue library internals: The source code of the android_native_app_glue library can be found under the sources/android/native_app_glue folder of Android NDK. It consists of only two files, namely android_native_app_glue.c and android_native_app_glue.h. Let's first describe the flow of the code and then discuss some important aspects in detail. Since the source code for native_app_glue is provided, we can modify it if necessary, although in most cases it won't be necessary.

android_native_app_glue is built on top of the native_activity.h interface, as shown in the following code (extracted from sources/android/native_app_glue/android_native_app_glue.c).
It implements the ANativeActivity_onCreate function, where it registers the callback functions and calls the android_app_create function. Note that the returned android_app instance is pointed to by the instance field of the native activity, which can be passed to various callback functions:

void ANativeActivity_onCreate(ANativeActivity* activity,
        void* savedState, size_t savedStateSize) {
    LOGV("Creating: %p\n", activity);
    activity->callbacks->onDestroy = onDestroy;
    activity->callbacks->onStart = onStart;
    activity->callbacks->onResume = onResume;
    … …
    activity->callbacks->onNativeWindowCreated = onNativeWindowCreated;
    activity->callbacks->onNativeWindowDestroyed = onNativeWindowDestroyed;
    activity->callbacks->onInputQueueCreated = onInputQueueCreated;
    activity->callbacks->onInputQueueDestroyed = onInputQueueDestroyed;
    activity->instance = android_app_create(activity, savedState,
            savedStateSize);
}

The android_app_create function (shown in the following code snippet) initializes an instance of the android_app data structure, which is defined in android_native_app_glue.h. This function creates a unidirectional pipe for inter-thread communication. After that, it spawns a new thread (let's call it the background thread hereafter) to run the android_app_entry function with the initialized android_app data as the input argument.
The main thread will wait for the background thread to start and then return:

static struct android_app* android_app_create(ANativeActivity* activity,
        void* savedState, size_t savedStateSize) {
    struct android_app* android_app = (struct android_app*)
            malloc(sizeof(struct android_app));
    memset(android_app, 0, sizeof(struct android_app));
    android_app->activity = activity;
    pthread_mutex_init(&android_app->mutex, NULL);
    pthread_cond_init(&android_app->cond, NULL);
    … …
    int msgpipe[2];
    if (pipe(msgpipe)) {
        LOGE("could not create pipe: %s", strerror(errno));
        return NULL;
    }
    android_app->msgread = msgpipe[0];
    android_app->msgwrite = msgpipe[1];
    pthread_attr_t attr;
    pthread_attr_init(&attr);
    pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
    pthread_create(&android_app->thread, &attr, android_app_entry,
            android_app);
    // Wait for thread to start.
    pthread_mutex_lock(&android_app->mutex);
    while (!android_app->running) {
        pthread_cond_wait(&android_app->cond, &android_app->mutex);
    }
    pthread_mutex_unlock(&android_app->mutex);
    return android_app;
}

The background thread starts with the android_app_entry function (shown in the following code snippet), where a looper is created. Two event queues will be attached to the looper. The activity lifecycle event queue is attached inside android_app_entry. When the activity's input queue is created, the input queue is attached (in the android_app_pre_exec_cmd function of android_native_app_glue.c). After attaching the activity lifecycle event queue, the background thread signals the main thread that it is running. It then calls a function named android_main with the android_app data. android_main is the function we need to implement, as shown in our sample code.
It must run in a loop until the activity exits:

static void* android_app_entry(void* param) {
    struct android_app* android_app = (struct android_app*)param;
    … …
    // Attach the lifecycle event queue with identifier LOOPER_ID_MAIN.
    android_app->cmdPollSource.id = LOOPER_ID_MAIN;
    android_app->cmdPollSource.app = android_app;
    android_app->cmdPollSource.process = process_cmd;
    android_app->inputPollSource.id = LOOPER_ID_INPUT;
    android_app->inputPollSource.app = android_app;
    android_app->inputPollSource.process = process_input;
    ALooper* looper = ALooper_prepare(ALOOPER_PREPARE_ALLOW_NON_CALLBACKS);
    ALooper_addFd(looper, android_app->msgread, LOOPER_ID_MAIN,
            ALOOPER_EVENT_INPUT, NULL, &android_app->cmdPollSource);
    android_app->looper = looper;
    pthread_mutex_lock(&android_app->mutex);
    android_app->running = 1;
    pthread_cond_broadcast(&android_app->cond);
    pthread_mutex_unlock(&android_app->mutex);
    android_main(android_app);
    android_app_destroy(android_app);
    return NULL;
}

The following diagram indicates how the main and background threads work together to create the multi-threaded native activity. Take the activity lifecycle event queue as an example: the main thread invokes the callback functions, which simply write a command to the write end of the pipe, while the loop implemented in the android_main function polls for events. Once an event is detected, the loop calls the event handler, which reads the exact command from the read end of the pipe and handles it. The android_native_app_glue library implements all of the main thread work and part of the background thread work for us. We only need to supply the polling loop and the event handlers, as illustrated in our sample code.

Pipe: The main thread creates a unidirectional pipe in the android_app_create function by calling the pipe method. This method accepts an array of two integers.
After the function returns, the first integer will be set to the file descriptor referring to the read end of the pipe, while the second integer will be set to the file descriptor referring to the write end of the pipe. A pipe is usually used for Inter-process Communication (IPC), but here it is used for communication between the main UI thread and the background thread created in android_app_entry.

When an activity lifecycle event occurs, the main thread executes the corresponding callback function registered in ANativeActivity_onCreate. The callback function simply writes a command to the write end of the pipe and then waits for a signal from the background thread. The background thread polls for events continuously; once it detects a lifecycle event, it reads the exact event from the read end of the pipe, signals the main thread to unblock, and handles the event. Because the signal is sent right after the command is received, and before the actual processing of the event, the main thread can return from the callback function quickly without worrying about possibly lengthy processing of the event.

Different operating systems implement pipes differently. The pipe implemented by the Android system is half-duplex, where communication is unidirectional: one file descriptor can only write, and the other can only read. Pipes in some operating systems are full-duplex, where both file descriptors can read and write.

Looper: A looper is an event tracking facility, which allows us to attach one or more event queues to the event loop of a thread. Each event queue has an associated file descriptor; an event is data available on a file descriptor. In order to use a looper, we need to include the android/looper.h header file. The library attaches two event queues for the event loop to be created by us in the background thread: the activity lifecycle event queue and the input event queue.
The following steps should be performed in order to use a looper:

Create or obtain a looper associated with the current thread: this is done by the ALooper_prepare function:

ALooper* ALooper_prepare(int opts);

This function prepares a looper associated with the calling thread and returns it. If the looper doesn't exist, it creates one, associates it with the thread, and returns it.

Attach an event queue: this is done by ALooper_addFd. The function has the following prototype:

int ALooper_addFd(ALooper* looper, int fd, int ident, int events,
        ALooper_callbackFunc callback, void* data);

The function can be used in two ways. First, if callback is set to NULL, the ident set will be returned by ALooper_pollOnce and ALooper_pollAll. Second, if callback is non-NULL, then the callback function will be executed and ident is ignored. The android_native_app_glue library uses the first approach to attach a new event queue to the looper. The input argument fd indicates the file descriptor associated with the event queue. ident is the identifier for the events from the event queue, which can be used to classify the event; the identifier must be greater than zero when callback is set to NULL. callback is set to NULL in the library source code, and data points to the private data that will be returned along with the identifier at polling. In the library, this function is called to attach the activity lifecycle event queue to the background thread. The input event queue is attached using the input-queue-specific function AInputQueue_attachLooper, which we will discuss in the Detecting and handling input events at NDK recipe.

Poll for events: this can be done by either of the following two functions:

int ALooper_pollOnce(int timeoutMillis, int* outFd, int* outEvents,
        void** outData);
int ALooper_pollAll(int timeoutMillis, int* outFd, int* outEvents,
        void** outData);

These two methods are equivalent when callback is set to NULL in ALooper_addFd, and they have the same input arguments.
timeoutMillis specifies the timeout for polling. If it is set to zero, the functions return immediately; if it is negative, they wait indefinitely until an event occurs. The functions return the identifier (greater than zero) when an event occurs on any input queue attached to the looper. In this case, outFd, outEvents, and outData will be set to the file descriptor, poll events, and data associated with the event. Otherwise, they will be set to NULL.

Detach event queues: this is done by the following function:

int ALooper_removeFd(ALooper* looper, int fd);

It accepts the looper and the file descriptor associated with the event queue, and detaches the queue from the looper.
Cloud-enabling Your Apps

Packt
08 May 2013
7 min read
(For more resources related to this topic, see here.)

Which cloud services can you use with Titanium?

Here is a comparison of the services offered by three cloud-based providers that have been proven to work with Titanium:

                                     Appcelerator Cloud Services   Parse   StackMob
Customizable storage                 Yes                           Yes     Yes
Push notifications                   Yes                           Yes     Yes
E-mail                               Yes                           No      No
Photos                               Yes                           Yes     Yes
Link with Facebook/Twitter account   Yes                           Yes     Yes
User accounts                        Yes                           Yes     Yes

The services offered by these three leading contenders are very similar; the main difference is the cost. Which is the best one for you? It depends on your requirements; you will have to do the cost/benefit analysis to work out the best solution for you.

Do you need more functionality than this? No problem, look around for other PaaS providers. The PaaS service offered by RedHat has been proven to integrate with Titanium and offers far more flexibility. There is an example of a Titanium app developed with RedHat OpenShift at https://openshift.redhat.com/community/blogs/developing-mobile-apps-for-the-cloud-with-titanium-studio-and-the-openshift-paas

It doesn't stop there; new providers are coming along almost every month with new and grand ideas for web and mobile integration. My advice would be to take the long view. Draw up a list of what you require initially for your app and what you realistically want in the next year. Check this list against the cloud providers. Can they satisfy all your needs at a workable cost? They should; they should be flexible enough to cover your plans. You should not need to split your solution between providers.

Clouds are everywhere

Cloud-based services offer more than just storage.

Appcelerator Cloud Services

Appcelerator Cloud Services (ACS) is well integrated into Titanium. The API includes commands for controlling ACS cloud objects. In the first example in this article we are going to add commentary functionality to the simple forex app.
Forex commentary is an ideal example of the benefits of cloud-based storage, where your data is available across all devices. First, let's cover some background to the requirements.

The currency markets are open 24 hours a day, 5 days a week, and trading opportunities can present themselves at any point. You will not be in front of your computer all of the time, so you will need to be able to access and add commentary when you are on your phone or at home on your PC. This is where the power of the cloud really starts to hit home. We already know that you can create apps for a variety of devices using Appcelerator. This is good; we can access our app from most phones, but now, using the cloud, we can also access our commentary from anywhere. So, comments written on the train about the EURUSD rate can be seen later when at home looking at the PC.

When we are creating forex commentary, we will store the following:

The currency pair (that is, EURUSD)
The rate (the current exchange rate)
The commentary (what we think about the exchange rate)

We will also store the date and time of the commentary. This is done automatically by ACS; all objects include the date they were created.

ACS allows you to store key-value pairs (which is the same as Ti.App.Properties), that is, AllowUserToSendEmails: True, or custom objects. We have several attributes to our commentary post, so a key-value pair will not suffice. Instead we will be using a custom object. We are going to add a screen that will be called when a user selects a currency. From this screen a user can enter commentary on the currency.

Time for action – creating ACS custom objects

Perform the following steps to create ACS custom objects:

Enable ACS in your existing app. Go to tiapp.xml and click on the Enable... button in the Cloud Services section.
Your project will gain a new Ti.Cloud module and the ACS authentication keys will be shown.

Go to the cloud website, https://my.appcelerator.com/apps, find your app, and select Manage ACS. Select Development from the selection buttons at the top.

You need to define a user so your app can log in to ACS. From the App Management tab select Users from the list on the right. If you have not already created a suitable user, do it now.

We will split the functionality in this article over two files. The first file will be called forexCommentary.js and will contain the cloud functionality, and the second file, called forexCommentaryView.js, will contain the layout code. Create the two new files.

Before we can do anything with ACS, we need to log in. Create an init function in forexCommentary.js which will log in the forex user created previously (the callbacks are guarded so the function can also be called without arguments):

function init(_args) {
    _args = _args || {};
    if (!Cloud.sessionId) {
        Cloud.Users.login({
            login: 'forex',
            password: 'forex'
        }, function (e) {
            if (e.success) {
                if (_args.success) { _args.success({user: e.users[0]}); }
            } else {
                if (_args.error) { _args.error({error: e.error}); }
            }
        });
    }
}

This is not a secure login; that's not important for this example. If you need greater security, use the Ti.Cloud.Users.secureLogin functionality.

Create another function to create a new commentary object on ACS. The function will accept a parameter containing the attributes pair, rate, and commentary, and create a new custom object from these. The first highlighted section shows how easy it is to define a custom object. The second highlighted section shows the custom object being passed to the success callback when the storage request completes:

function addCommentary(_args) {
    // create a new currency commentary
    Cloud.Objects.create({
        classname: className,
        fields: {
            pair: _args.pair,
            rate: _args.rate,
            comment: _args.commentary
        }
    }, function (e) {
        if (e.success) {
            _args.success(e.forexCommentary[0]);
        } else {
            _args.error({error: e.error});
        }
    });
}

Now to the layout.
This will be a simple form with a text area where the commentary can be added. The exchange rate and currency pair will be provided from the app's front screen.

Create a TextArea object and add it to the window. Note the keyboardType of Ti.UI.KEYBOARD_ASCII, which will force a full ASCII layout keyboard to be displayed, and the returnKeyType of Ti.UI.RETURNKEY_DONE, which will add a Done key used in the next step:

var commentary = Ti.UI.createTextArea({
    borderWidth: 2,
    borderColor: 'blue',
    borderRadius: 5,
    keyboardType: Ti.UI.KEYBOARD_ASCII,
    returnKeyType: Ti.UI.RETURNKEY_DONE,
    textAlign: 'left',
    hintText: 'Enter your thoughts on ' + thePair,
    width: '90%',
    height: 150
});
mainVw.add(commentary);

Now add an event listener which will listen for the Done key being pressed and, when triggered, will call the function to store the commentary with ACS:

commentary.addEventListener('return', function(e) {
    forex.addCommentary({
        pair: thePair,
        rate: theRate,
        commentary: e.value
    });
});

Finally, add the call to log in the ACS user when the window is first opened:

var forex = require('forexCommentary');
forex.init();

Run the app and enter some commentary.

What just happened?

You created functions to send a custom-defined object to the server. Commentary entered on the phone is almost immediately available for viewing on the Appcelerator console (https://my.appcelerator.com/apps) and is therefore available to be viewed by all other devices and formats.

Uploading pictures

Suppose you want to upload a picture, or a screenshot? This next example will show how easy it is to upload a picture to ACS.