How-To Tutorials - Mobile

213 Articles

Building Mobile Games with Crafty.js and PhoneGap: Part 1

Robi Sen
18 Mar 2015
7 min read
In this post, we will build a mobile game using HTML5, CSS, and JavaScript. To make things easier, we are going to make use of the Crafty.js JavaScript game engine, which is both free and open source. In this first part of a two-part series, we will look at making a simple turn-based RPG-like game based on Pascal Rettig's Crafty Workshop presentation. You will learn how to add sprites to a game, control them, and work with mouse/touch events.

Setting up

To get started, first create a new PhoneGap project wherever you want to do your development. For this article, let's call the project simplerpg.

Figure 1: Creating the simplerpg project in PhoneGap.

Navigate to the www directory in your PhoneGap project and then add a new directory called lib. This is where we are going to put several JavaScript libraries we will use for the project. Now, download the jQuery library to the lib directory. For this project, we will use jQuery 2.1. Once you have downloaded jQuery, you need to download the Crafty.js library and add it to your lib directory as well.

For later parts of this series, you will want to be using a web server such as Apache or IIS to make development easier. For the first part of the post, you can just drag-and-drop the HTML files into your browser to test, but later, you will need to use a web server to avoid Same Origin Policy errors. This article assumes you are developing in Chrome. While IE or Firefox will work just fine, this article uses Chrome and its debugging environment. Finally, the source code for this article can be found on GitHub. In the lessons directory, you will see a series of index files with a listing number matching each code listing in this article.

Crafty

PhoneGap allows you to take almost any HTML5 application and turn it into a mobile app with little to no extra work. Perhaps the most complex of all mobile apps are video games. Video games often have complex routines, graphics, and controls. As such, developing a video game from the ground up is very difficult. So much so that even major video game companies rarely do so. What they usually do, and what we will do here, is make use of libraries and game engines that take care of many of the complex tasks of managing objects, animation, collision detection, and more. For our project, we will be making use of the open source JavaScript game engine Crafty. Before you get started with the code, it's recommended to quickly review the Crafty website and the Crafty API documentation.

Bootstrapping Crafty and creating an entity

Crafty is very simple to start working with. All you need to do is load the Crafty.js library and initialize Crafty. Let's try that. Create an index.html file in your www root directory, if one does not exist; if you already have one, go ahead and overwrite it. Then, cut and paste listing 1 into it.

Listing 1: Creating an entity

<!DOCTYPE html>
<html>
<head></head>
<body>
<div id="game"></div>
<script type="text/javascript" src="lib/crafty.js"></script>
<script>
    // Height and Width
    var WIDTH = 500, HEIGHT = 320;
    // Initialize Crafty
    Crafty.init(WIDTH, HEIGHT);
    var player = Crafty.e();
    player.addComponent("2D, Canvas, Color");
    player.color("red").attr({w:50, h:50});
</script>
</body>
</html>

As you can see in listing 1, we are creating an HTML5 document and loading the Crafty.js library. Then, we initialize Crafty and pass it a width and height. Next, we create a Crafty entity called player.
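Because Crafty's entity methods return the entity itself, the same player entity can also be written as a single chained expression, in the same style the later listings use. This is a purely illustrative variation on Listing 1:

    // Same entity as in Listing 1, using Crafty's chainable style:
    // components are passed straight to Crafty.e() and the setters are chained.
    var player = Crafty.e("2D, Canvas, Color")
        .color("red")
        .attr({ w: 50, h: 50 });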
Crafty, like many other game engines, follows a design pattern called Entity-Component-System (ECS). Entities are objects that you can attach things like behaviors and data to. For our player entity, we are going to add several components, including 2D, Canvas, and Color. Components can be data, metadata, or behaviors. Finally, we will add a specific color and position to our entity. If you now save your file and drag-and-drop it into the browser, you should see something like figure 2.

Figure 2: A simple entity in Crafty.

Moving a box

Now, let's do something a bit more complex in Crafty. Let's move the red box based on where we move our mouse, or if you have a touch-enabled device, where we touch the screen. To do this, open your index.html file and edit it so it looks like listing 2.

Listing 2: Moving the box

<!DOCTYPE html>
<html>
<head></head>
<body>
<div id="game"></div>
<script type="text/javascript" src="lib/crafty.js"></script>
<script>
    var WIDTH = 500, HEIGHT = 320;
    Crafty.init(WIDTH, HEIGHT);
    // Background
    Crafty.background("black");
    // Add mouse tracking so the block follows your mouse
    Crafty.e("mouseTracking, 2D, Mouse, Touch, Canvas")
        .attr({ w:500, h:320, x:0, y:0 })
        .bind("MouseMove", function(e) {
            console.log("MouseDown:"+ Crafty.mousePos.x +", "+ Crafty.mousePos.y);
            // when you touch on the canvas redraw the player
            player.x = Crafty.mousePos.x;
            player.y = Crafty.mousePos.y;
        });
    // Create the player entity
    var player = Crafty.e();
    player.addComponent("2D, DOM");
    // set where your player starts
    player.attr({ x: 10, y: 10, w: 50, h: 50 });
    player.addComponent("Color").color("red");
</script>
</body>
</html>

As you can see, there is a lot more going on in this listing. The first difference is that we are using Crafty.background to set the background to black, but we are also creating a new entity called mouseTracking that is the same size as the whole canvas. We assign several components to the entity so that it can inherit their methods and properties. We then use .bind to bind the mouse's movements to our entity. Then, we tell Crafty to reposition our player entity to wherever the mouse's x and y position is. So, if you save this code and run it, you will find that the red box will go wherever your mouse moves or wherever you touch or drag, as in figure 3.

Figure 3: Controlling the movement of a box in Crafty.

Summary

In this post, you learned about working with Crafty.js. Specifically, you learned how to work with the Crafty API and create entities. In Part 2, you will work with sprites, create components, and control entities via mouse/touch.

About the author

Robi Sen, CSO at Department 13, is an experienced inventor, serial entrepreneur, and futurist whose dynamic twenty-plus-year career in technology, engineering, and research has led him to work on cutting-edge projects for DARPA, TSWG, SOCOM, RRTO, NASA, DOE, and the DOD. Robi also has extensive experience in the commercial space, including the co-creation of several successful start-up companies. He has worked with companies such as Under Armour, Sony, Cisco, IBM, and many others to help build new products and services. Robi specializes in bringing his unique vision and thought process to difficult and complex problems, allowing companies and organizations to find innovative solutions that they can rapidly operationalize or go to market with.

Add a Twitter Sign In To Your iOS App with TwitterKit

Doron Katz
15 Mar 2015
5 min read
What is TwitterKit & Digits?

In this post we take a look at Twitter's new sign-in API, TwitterKit and Digits, bundled as part of its new Fabric suite of tools announced by Twitter earlier this year, and provide you with two quick snippets on how to integrate Twitter's sign-in mechanism into your iOS app.

Facebook, and to a lesser extent Google, have for a while dominated the single sign-on paradigm, through their SDKs or the Accounts.framework on iOS, to encourage developers to provide a consolidated form of login for their users. Twitter has decided to finally get on the bandwagon and work toward improving its brand by increasing sign-on participation and providing a concise way for users to log in to their favorite apps without needing to remember individual passwords.

By providing a Login via Twitter button, developers will gain the user's Twitter identity, and subsequently their associated tweets and associations. That is, once the Twitter account is identified, the app can engage followers of that account (friends), or access the user's history of tweets to data-mine for a specific keyword or hashtag. In addition to offering single sign-on, Twitter is also offering Digits: the ability for users to sign on anonymously using one's phone number, analogous to Facebook's new Anonymous Login API.

The benefits of Digits

The rationale behind Digits is to provide users with the option of trusting the app or website and providing their Twitter identification in order to log in. Another option, for the more hesitant ones wanting to protect their social graph history, is to only provide a unique number, which happens to be a mobile number, as a means of identification and authentication. Another benefit for users is that logging in is dead simple: rather than having to go through a deterring form of identification questions, you just ask them for their number, from which they will get an authentication confirmation SMS, allowing them to log in.

With a brief introduction to TwitterKit and Digits, let's show you how simple it is to implement each one.

Logging in with TwitterKit

Twitter wanted to make implementing its authentication mechanism a simpler and more attractive process for developers, which they did. By using the SDK as part of Twitter's Fabric suite, you will already get your Twitter app set up and ready, registered for use with the company's SDK. TwitterKit aims to leverage the existing Twitter account on iOS, using the Accounts.framework, which is the preferred and most rudimentary approach, with a fallback to using the OAuth mechanism. The easiest way to implement Twitter authentication is through the generated button, TWTRLogInButton, as we will demonstrate using iOS' Swift language.

let authenticationButton = TWTRLogInButton(logInCompletion: { (session, error) in
    if (session != nil) {
        // We signed in, storing session in session object.
    } else {
        // We get an error, accessible from error object
    }
})

It's quite simple, leaving you with a TWTRLogInButton button subclass that you can add to your view hierarchy and have users interact with.

Logging in with Digits

Having created a login button using TwitterKit, we will now create the same feature using Digits.
The simplicity of implementation is maintained with Digits, with the simplest process once again being to create a pre-configured button, DGTAuthenticateButton:

let authenticationButton = DGTAuthenticateButton(authenticationCompletion: { (session, error) in
    if (session != nil) {
        // We signed in, storing session in session object.
    } else {
        // We get an error, accessible from error object
    }
})

Summary

Implementing TwitterKit and Digits is quite straightforward in iOS, and the two serve different intentions. Whereas TwitterKit allows you to have full access to the authenticated user's social history, Digits allows for a more abbreviated, privacy-protected approach to authenticating. If at some stage the user decides to trust the app and feels more comfortable providing full access to her or his social history, you can defer catering to that until later in the app's usage.

The complete iOS reference for TwitterKit and Digits can be found by clicking here. The popularity and uptake of TwitterKit remains to be seen, but as an extra option for developers, when added alongside Facebook and Google+ login, users will have the option to pick their most trusted social media tool as their choice of authentication. Providing an anonymous mode of login also falls in line with a more privacy-conscious world, and Digits certainly provides a seamless way of implementing it, and an impressively straightforward way for users to authenticate using their phone number.

We have briefly demonstrated how to interact with Twitter's SDK using iOS and Swift, but there is also an Android SDK version, with a web version in the pipeline very soon, according to Twitter. This is certainly worth exploring, along with the rest of the tools offered in the Fabric suite, including analytics and beta-distribution tools, and more.

About the author

Doron Katz is an established mobile project manager, architect, and developer, and a pupil of the methodology of Agile project management, such as applying Kanban principles. Doron also believes in Behaviour-Driven Development (BDD), anticipating user interaction prior to design, that is. Doron is also heavily involved in various technical user groups, such as CocoaHeads Sydney and the Adobe User Group.

The AsyncTask and HardwareTask Classes

Packt
17 Feb 2015
10 min read
This article is written by Andrew Henderson, the author of Android for the BeagleBone Black. This article will cover the usage of the AsyncTask and HardwareTask classes. (For more resources related to this topic, see here.)

Understanding the AsyncTask class

HardwareTask extends the AsyncTask class, and using it provides a major advantage over the way hardware interfacing is implemented in the gpio app. AsyncTask allows you to perform complex and time-consuming hardware-interfacing tasks without your app becoming unresponsive while the tasks are executed. Each instance of an AsyncTask class can create a new thread of execution within Android. This is similar to how multithreaded programs found on other OSes spin new threads to handle file and network I/O, manage UIs, and perform parallel processing.

The gpio app only used a single thread during its execution. This thread is the main UI thread that is part of all Android apps. The UI thread is designed to handle UI events as quickly as possible. When you interact with a UI element, that element's handler method is called by the UI thread. For example, clicking a button causes the UI thread to invoke the button's onClick() handler. The onClick() handler then executes a piece of code and returns to the UI thread. Android is constantly monitoring the execution of the UI thread. If a handler takes too long to finish its execution, Android shows an Application Not Responding (ANR) dialog to the user. You never want an ANR dialog to appear to the user. It is a sign that your app is running inefficiently (or even not at all!) by spending too much time in handlers within the UI thread.

The Application Not Responding dialog in Android

The gpio app performed reads and writes of the GPIO states very quickly from within the UI thread, so the risk of triggering the ANR was very small. Interfacing with the FRAM is a much slower process. With the BBB's I2C bus clocked at its maximum speed of 400 KHz, it takes approximately 25 microseconds to read or write a byte of data when using the FRAM. While this is not a major concern for small writes, reading or writing the entire 32,768 bytes of the FRAM can take close to a full second to execute! Multiple reads and writes of the full FRAM can easily trigger the ANR dialog, so it is necessary to move these time-consuming activities out of the UI thread. By placing your hardware interfacing into its own AsyncTask class, you decouple the execution of these time-intensive tasks from the execution of the UI thread. This prevents your hardware interfacing from potentially triggering the ANR dialog.

Learning the details of the HardwareTask class

The AsyncTask base class of HardwareTask provides many different methods, which you can further explore by referring to the Android API documentation. The four AsyncTask methods that are of immediate interest for our hardware-interfacing efforts are:

onPreExecute()
doInBackground()
onPostExecute()
execute()

Of these four methods, only the doInBackground() method executes within its own thread. The other three methods all execute within the context of the UI thread. Only the methods that execute within the UI thread context are able to update screen UI elements.
The thread contexts in which the HardwareTask methods and the PacktHAL functions are executed

Much like the MainActivity class of the gpio app, the HardwareTask class provides four native methods that are used to call PacktHAL JNI functions related to FRAM hardware interfacing:

public class HardwareTask extends AsyncTask<Void, Void, Boolean> {

    private native boolean openFRAM(int bus, int address);
    private native String readFRAM(int offset, int bufferSize);
    private native void writeFRAM(int offset, int bufferSize, String buffer);
    private native boolean closeFRAM();

The openFRAM() method initializes your app's access to a FRAM located on a logical I2C bus (the bus parameter) and at a particular bus address (the address parameter). Once the connection to a particular FRAM is initialized via an openFRAM() call, all readFRAM() and writeFRAM() calls will be applied to that FRAM until a closeFRAM() call is made.

The readFRAM() method will retrieve a series of bytes from the FRAM and return it as a Java String. A total of bufferSize bytes are retrieved, starting at an offset of offset bytes from the start of the FRAM. The writeFRAM() method will store a series of bytes to the FRAM. A total of bufferSize characters from the Java string buffer are stored in the FRAM, starting at an offset of offset bytes from the start of the FRAM.

In the fram app, the onClick() handlers for the Load and Save buttons in the MainActivity class each instantiate a new HardwareTask. Immediately after the instantiation of HardwareTask, either the loadFromFRAM() or saveToFRAM() method is called to begin interacting with the FRAM:

public void onClickSaveButton(View view) {
    hwTask = new HardwareTask();
    hwTask.saveToFRAM(this);
}

public void onClickLoadButton(View view) {
    hwTask = new HardwareTask();
    hwTask.loadFromFRAM(this);
}

Both the loadFromFRAM() and saveToFRAM() methods in the HardwareTask class call the base AsyncTask class execute() method to begin the new thread creation process:

public void saveToFRAM(Activity act) {
    mCallerActivity = act;
    isSave = true;
    execute();
}

public void loadFromFRAM(Activity act) {
    mCallerActivity = act;
    isSave = false;
    execute();
}

Each AsyncTask instance can only have its execute() method called once. If you need to run an AsyncTask a second time, you must instantiate a new instance of it and call the execute() method of the new instance. This is why we instantiate a new instance of HardwareTask in the onClick() handlers of the Load and Save buttons, rather than instantiating a single HardwareTask instance and then calling its execute() method many times.

The execute() method automatically calls the onPreExecute() method of the HardwareTask class. The onPreExecute() method performs any initialization that must occur prior to the start of the new thread. In the fram app, this requires disabling various UI elements and calling openFRAM() to initialize the connection to the FRAM via PacktHAL:

protected void onPreExecute() {
    // Some setup goes here
    ...
    if ( !openFRAM(2, 0x50) ) {
        Log.e("HardwareTask", "Error opening hardware");
        isDone = true;
    }
    // Disable the Buttons and TextFields while talking to the hardware
    saveText.setEnabled(false);
    saveButton.setEnabled(false);
    loadButton.setEnabled(false);
}

Disabling your UI elements

When you are performing a background operation, you might wish to keep your app's user from providing more input until the operation is complete.
During a FRAM read or write, we do not want the user to press any UI buttons or change the data held within the saveText text field. If your UI elements remain enabled all the time, the user can launch multiple AsyncTask instances simultaneously by repeatedly hitting the UI buttons. To prevent this, disable any UI elements required to restrict user input until that input is necessary.

Once the onPreExecute() method finishes, the AsyncTask base class spins a new thread and executes the doInBackground() method within that thread. The lifetime of the new thread is only for the duration of the doInBackground() method. Once doInBackground() returns, the new thread will terminate. As everything that takes place within the doInBackground() method is performed in a background thread, it is the perfect place to perform any time-consuming activities that would trigger an ANR dialog if they were executed from within the UI thread. This means that the slow readFRAM() and writeFRAM() calls that access the I2C bus and communicate with the FRAM should be made from within doInBackground():

protected Boolean doInBackground(Void... params) {
    ...
    Log.i("HardwareTask", "doInBackground: Interfacing with hardware");
    try {
        if (isSave) {
            writeFRAM(0, saveData.length(), saveData);
        } else {
            loadData = readFRAM(0, 61);
        }
    } catch (Exception e) {
        ...

The loadData and saveData string variables used in the readFRAM() and writeFRAM() calls are both class variables of HardwareTask. The saveData variable is populated with the contents of the saveEditText text field via a saveEditText.toString() call in the HardwareTask class' onPreExecute() method.

How do I update the UI from within an AsyncTask thread?

While the fram app does not make use of them in this example, the AsyncTask class provides two special methods, publishProgress() and onProgressUpdate(), that are worth mentioning. The AsyncTask thread uses these methods to communicate with the UI thread while the AsyncTask thread is running. The publishProgress() method executes within the AsyncTask thread and triggers the execution of onProgressUpdate() within the UI thread. These methods are commonly used to update progress meters (hence the name publishProgress) or other UI elements that cannot be directly updated from within the AsyncTask thread.

After doInBackground() has completed, the AsyncTask thread terminates. This triggers the calling of onPostExecute() from the UI thread. The onPostExecute() method is used for any post-thread cleanup and updating any UI elements that need to be modified. The fram app uses the closeFRAM() PacktHAL function to close the current FRAM context that it opened with openFRAM() in the onPreExecute() method:

protected void onPostExecute(Boolean result) {
    if (!closeFRAM()) {
        Log.e("HardwareTask", "Error closing hardware");
    }
    ...

The user must now be notified that the task has been completed. If the Load button was pressed, then the string displayed in the loadTextField widget is updated via the MainActivity class updateLoadedData() method. If the Save button was pressed, a Toast message is displayed to notify the user that the save was successful.
Log.i("HardwareTask", "onPostExecute: Completed."); if (isSave) {    Toast toast = Toast.makeText(mCallerActivity.getApplicationContext(),      "Data stored to FRAM", Toast.LENGTH_SHORT);    toast.show(); } else {    ((MainActivity)mCallerActivity).updateLoadedData(loadData); } Giving Toast feedback to the user The Toast class is a great way to provide quick feedback to your app's user. It pops up a small message that disappears after a configurable period of time. If you perform a hardware-related task in the background and you want to notify the user of its completion without changing any UI elements, try using a Toast message! Toast messages can only be triggered by methods that are executing from within the UI thread. An example of the Toast message Finally, the onPostExecute() method will re-enable all of the UI elements that were disabled in onPreExecute(): saveText.setEnabled(true);saveButton.setEnabled(true); loadButton.setEnabled(true); The onPostExecute() method has now finished its execution and the app is back to patiently waiting for the user to make the next fram access request by pressing either the Load or Save button. Are you ready for a challenge? Now that you have seen all of the pieces of the fram app, why not change it to add new functionality? For a challenge, try adding a counter that indicates to the user how many more characters can be entered into the saveText text field before the 60-character limit is reached. Summary The fram app in this article demonstrated how to use the AsyncTask class to perform time-intensive hardware interfacing tasks without stalling the app's UI thread and triggering the ANR dialog. Resources for Article: Further resources on this subject: Sound Recorder for Android [article] Reversing Android Applications [article] Saying Hello to Unity and Android [article]

Tour of Xcode

Packt
06 Feb 2015
13 min read
In this article, written by Jayant Varma, the author of Xcode 6 Essentials, we shall look at Xcode closely, as this is going to be the tool you will use quite a lot for all aspects of your app development for Apple devices. It is a good idea to know and be familiar with the interface, the sections, shortcut keys, and so on. (For more resources related to this topic, see here.)

Starting Xcode

Xcode, like many other Mac applications, is found in the Applications folder or the Launchpad. On starting Xcode, you will be greeted with the launch screen that offers some entry points for working with Xcode. Mostly, you will select Create a new Xcode project, or Check out an existing project if you have an existing project to continue work on. Xcode remembers what it was doing last, so if you had a project or file open, it will open up those windows again.

Creating a new project

After selecting the Create a new project option, we are guided via a wizard that helps us get started.

Selecting the project type

The first step is to select what type of project you want to create. At the moment, there are two distinct types of projects, mobile (iOS) or desktop (OS X), that you can create. Within each of those types, you can select the type of project you want. The screenshot displays a standard configuration for iOS application projects. The templates used when the selected type of project is created are self-sufficient, that is, when the Run button is pressed, the app compiles and runs. It might do nothing, as this is a minimalistic template. On selecting the type of project, we can move on to the next step.

Setting the project options

This step allows selecting the options, namely setting the application name, the organization name, identifier, language, and devices to support. In the past, the language was always set to Objective-C; however, with Xcode 6, there are two options: Objective-C and Swift.

Setting the project properties

On creation, the main screen is displayed. Here it offers the option to change other details related to the application, such as the version number and build. It also allows you to configure the team ID and certificates used for signing the application to test on a mobile device or for distribution to the App Store. It also allows you to set the compatibility for earlier versions. The orientation and app icons, splash screens, and so on are also set from this screen. If you want to set these up later on in the project, it is fine; this can be accessed at any time and does not stop you from development. It needs to be set prior to deploying it on a device or creating an App Store ready application.

Xcode overview

Let us have a look at the Xcode interface to familiarize ourselves with it, as this will help improve productivity when building your application. The top section immediately following the traffic light (window chrome) displays a Play and Stop button. This allows the project to run and stop. The breadcrumb toolbar displays the project-specific settings with respect to the product and the target. With an iOS project, it could be a particular simulator for iPhone, iPad, and so on, or a physical device (number 5 in the following screenshot). Just under this are vertical areas that are the main content area with all the files, editors, UI, and so on. These can be displayed or hidden as required and can be stacked vertically or horizontally.
The distinct areas in Xcode are as follows:

Project navigation (number 1)
Editor and assistant editor (number 2) and (number 3)
Utility/inspector (number 4)
The toolbar (number 5) and (number 6)

These sections can be switched on and off (shown or hidden) as required to make space for other sections or more screen space to work with.

Sections in Xcode

The project section

The project navigation section has three subsections, the topmost being the project toolbar that has eight icons. These can be seen in the following screenshot. The next subsection contains the project files and all the assets required for this project. The bottom-most section consists of recently edited files and filters.

You can use the keyboard shortcuts to access these areas quickly with the CMD + 1...8 keys. The eight areas available under project navigation are key, and for the beginner to Xcode, this could be a bit daunting. When you run the project, the current section might change and display another, where you might wonder how to get back to the project (file) navigator. Getting familiar with these is always helpful, and the easiest way to navigate between them is the CMD + 1...8 keys.

Project navigator (CMD + 1): This displays all of the files, folders, assets, frameworks, and so on that are part of this project. This is displayed as a hierarchical view and is the way that a majority of developers access their files, folders, and so on.
Symbol navigator (CMD + 2): This displays all of the classes, members, and methods that are available in them. This is the easiest way to navigate quickly to a method/function or attribute/property.
Search navigator (CMD + 3): This allows you to search the project for a particular match. This is quite useful to find and replace text.
Issues navigator (CMD + 4): This displays the warnings and errors that occur while typing your code or on building and running it. This also displays the results of the static analyzer.
Tests navigator (CMD + 5): This displays the tests that you have present in your code, either added by yourself or the default ones created with the project.
Debug navigator (CMD + 6): This displays the information about the application when you choose to run it. It has some amazing detailed information on CPU usage, memory usage, disk usage, threads, and so on.
Breakpoint navigator (CMD + 7): This displays all the breakpoints in your project from all files. This also allows you to create exception and symbolic breakpoints.
Log navigator (CMD + 8): This displays a log of all actions carried out, namely compiling, building, and running. This is more useful when used to determine the results of automated builds.

The editor and assistant editor sections

The second area contains the editor and assistant editor sections. These display the code, the XIB (as appropriate), storyboard files, device previews, and so on. Each of the subsections has a jump bar on the top that relates to files, allows for navigating back and forth in the files, and displays the location of the file in the workspace. To the right of this is a mini issues navigator that displays all warnings and errors. In the case of the assistant editors, it also displays two buttons: one to add a new assistant editor area and another to close it.

Source code editors

While we are looking at the interface, it is worth noting that the Xcode code editor is a very advanced editor with a lot of features, which are now seen as standard in a lot of text editors.
Some of the features that make working with Xcode easier are as follows:

Code folding: This feature helps to hide code at points such as the function declaration, loops, matching brace brackets, and so on. When a function or portion of code is folded, it hides it from view, thereby allowing you to view other areas of the code that would not be visible unless you scrolled.
Syntax highlighting: This is one of the most useful features, as it helps you, the developer, to visually, at a glance, differentiate your source code from variables, constants, and strings. Xcode has syntax highlighting for a couple of languages, as mentioned earlier.
Context help: This is one of the best features, whereby when you hover over a word in the source code with OPT pressed, it shows a dotted underline and the cursor changes to a question mark. When you click on a word with the dotted underline and the question mark cursor, it displays a popup with details about that word. It also highlights all instances of that word in the file. The popup details as much information as available. If it is a variable or a function that you have added to the code, then it will display the name of the file where it was declared. If it is a word that is contained in the Apple libraries, then it displays the description and other additional details.
Context jump: This is another cool feature that allows jumping to the point of declaration of that word. This is achieved by clicking on a word while keeping the CMD button pressed. In many cases, this is mainly helpful to know how the function is declared and what parameters it expects. It can also be useful to get information on other enumerators and constants used with that function. The jump could be in the same file as where you are editing the code, or it could be to the header files where they are declared.
Edit all in scope: This is a cool feature where you can edit all of the instances of a word together rather than using search and replace. A case scenario is if you want to change the name of a variable and ensure that all instances you are using in the file are changed, but not the ones that are text, then you can use this option to quickly change it.
Catching mistakes with fix-it: This is another cool feature in Xcode that will save you a lot of time and hassle. As you type text, Xcode keeps analyzing the code and looking for errors. If you have declared a variable and not used it in your code, Xcode immediately draws attention to it, suggesting that the variable is an unused variable. However, if it was supposed to be a pointer and you have declared it without *, Xcode immediately flags it as an error that the interface type cannot be statically allocated. It offers a fix-it solution of inserting *, and the code has a greyed * character showing where it will be added. This helps the developer fix commonly overlooked issues such as missing semicolons, missing declarations, or misspelled variable names.
Code completion: This is the bit that makes writing code so much easier: type in a few letters of the function name and Xcode pops up a list of functions, constants, methods, and so on that start with those letters and displays all of the required parameters (as applicable), including the return type. When selected, it adds the token placeholders that can be replaced with the actual parameter values. The results might vary from person to person depending on the settings and the speed of the system you run Xcode on.
The assistant editor

The assistant editor is mainly used to display the counterparts and related files to the file open in the primary editor (generally used when working with Objective-C, where the .h or .m files are the related files). The assistant editors track the contents of the editor. Xcode is quite intelligent and knows the corresponding sections and counterparts. When you click on a file, it opens up in the editor. However, by pressing OPT + Shift while clicking on the file, you will be provided with an interactive dialog to select where to open the file. The options include the primary editor or the assistant editor. You can also add assistant editors as required.

Another way to open a file quickly is to use the Open Quickly option, which has a shortcut key of CMD + Shift + O. This displays a textbox that allows accessing a file from the project.

The utility/inspector section

The last section contains the inspector and library. This section changes based on the type of file selected in the current editor. The inspector has six tabs/sections, and they are as follows:

The file inspector (CMD + OPT + 1): This displays the physical file information for the file selected. For code files, it is the text encoding, the targets that it belongs to, and the physical file path. For a storyboard, it is the physical file path, and it allows setting attributes such as auto layout and size classes (new in Xcode 6).
The quick help inspector (CMD + OPT + 2): This displays information about the class or object selected.
The identity inspector (CMD + OPT + 3): This displays the class name, ID, and others that identify the object selected.
The attributes inspector (CMD + OPT + 4): This displays the attributes for the object selected, such as whether it is the initial root view controller, whether it extends under the top bars or not, whether it has a navigation bar or not, and others. This also displays the user-defined attributes (a new feature with Xcode 6).
The size inspector (CMD + OPT + 5): This displays the size of the control selected and the associated constraints that help position it on the container.
The connections inspector (CMD + OPT + 6): This displays the connections created in the Interface Builder between the UI and the code.

The lower half of this inspector contains four options that help you work efficiently; they are as follows:

The file template library: This contains the options to create a new class or protocol; the options that are available when selecting the File | New option from the menu.
The code snippets library: This is a wonderful but not widely used option. This can hold code snippets that can help you avoid writing repetitive blocks of code in your app. You can drag and drop the snippet to your code in the editor. This also offers features such as shortcuts, scopes, platforms, and languages. So you can have a shortcut such as appDidLoad (for example) that inserts the code to create and populate a button. This is achieved simply by setting the platform as appropriate to iOS or OS X. After creating a code snippet, as soon as you type the first few characters, the code snippet shows up in the list of autocomplete options.
The object library: This is the toolbox that contains all of the controls that you need for creating your UI, be it a button, a label, a Table View, a view, a View Controller, or anything else.

Adding a code snippet is as easy as dragging the selected code from the editor onto the snippet area.
It is a little tricky because the moment you start dragging, it could break your selection highlight. You need to select the text, click (and hold), and then drag it.

The media library: This contains the list of all images and other media types that are available to this project/workspace.

Summary

In this article, you have seen a quick tour of Xcode; keep the shortcuts and tips handy as they really do help get things done faster. The code snippets are a wonderful feature that allows for quickly setting up commonly used code with shortcut keywords.

Resources for Article:

Further resources on this subject:
Introducing Xcode Tools for iPhone Development [article]
Xcode 4 iOS: Displaying Notification Messages [article]
Linking OpenCV to an iOS project [article]

Android Virtual Device Manager

Packt
06 Feb 2015
8 min read
This article, written by Belén Cruz Zapata, the author of the book Android Studio Essentials, teaches us the uses of the AVD Manager tool. It also introduces us to the Google Play services. (For more resources related to this topic, see here.)

The Android Virtual Device Manager (AVD Manager) is an Android tool accessible from Android Studio to manage the Android virtual devices that will be executed in the Android emulator. To open the AVD Manager from Android Studio, navigate to the Tools | Android | AVD Manager menu option. You can also click on the shortcut from the toolbar.

The AVD Manager displays the list of the existing virtual devices. Since we have not created any virtual device, initially the list will be empty. To create our first virtual device, click on the Create Virtual Device button to open the configuration dialog. The first step is to select the hardware configuration of the virtual device. The hardware definitions are listed on the left side of the window. Select one of them, like the Nexus 5, to examine its details on the right side, as shown in the following screenshot. Hardware definitions can be classified into one of these categories: Phone, Tablet, Wear, or TV.

We can also configure our own hardware device definitions from the AVD Manager. We can create a new definition using the New Hardware Profile button. The Clone Device button creates a duplicate of an existing device. Click on the New Hardware Profile button to examine the existing configuration parameters. The most important parameters that define a device are:

Device Name: Name of the device.
Screen size: Screen size in inches. This value determines the size category of the device. Type a value of 4.0 and notice how the Size value (on the right side) is normal. Now type a value of 7.0 and the Size field changes its value to large. This parameter, along with the screen resolution, also determines the density category.
Resolution: Screen resolution in pixels. This value determines the density category of the device. Having a screen size of 4.0 inches, type a value of 768 x 1280 and notice how the density value is 400 dpi. Change the screen size to 6.0 inches and the density value changes to hdpi. Now change the resolution to 480 x 800 and the density value is mdpi.
RAM: RAM size of the device.
Input: Indicate if the home, back, or menu buttons of the device are available via software or hardware.
Supported device states: Check the allowed states.
Cameras: Select if the device has a front camera or a back camera.
Sensors: Sensors available in the device: accelerometer, gyroscope, GPS, and proximity sensor.
Default Skin: Select additional hardware controls.

Create a new device with a screen size of 4.7 inches, a resolution of 800 x 1280, a RAM value of 500 MiB, software buttons, and both portrait and landscape states enabled. Name it My Device. Click on the Finish button. The hardware definition has been added to the list of configurations. Click on the Next button to continue the creation of a new virtual device.

The next step is to select the virtual device system image and the target Android platform. Each platform has its architecture, so the system images that are installed on your system will be listed along with the rest of the images that can be downloaded (with the Show downloadable system images box checked). Download and select one of the images of the Lollipop release and click on the Next button. Finally, the last step is to verify the configuration of the virtual device.
Enter the name of the Android virtual device in the AVD Name field. Give the virtual device a meaningful name to recognize it easily, such as AVD_nexus5_api21. Click on the Show Advanced Settings button. The settings that we can configure for the virtual device are the following:

Emulation Options: The Store a snapshot for faster startup option saves the state of the emulator in order to load it faster the next time. The Use Host GPU option tries to accelerate the GPU hardware to run the emulator faster.
Custom skin definition: Select if additional hardware controls are displayed in the emulator.
Memory and Storage: Select the memory parameters of the virtual device. Leave the default values, unless a warning message is shown; in this case, follow the instructions of the message. For example, select 1536M for the RAM memory and 64 for the VM Heap. The Internal Storage can also be configured; select, for example, 200 MiB. Select the size of the SD Card or select a file to behave as the SD card.
Device: Select one of the available device configurations. These configurations are the ones we tested in the layout editor preview. Select the Nexus 5 device to load its parameters in the dialog.
Target: Select the device Android platform. We have to create one virtual device with the minimum platform supported by our application and another virtual device with the target platform of our application. For this first virtual device, select the target platform, Android 4.4.2 - API Level 19.
CPU/ABI: Select the device architecture. The value of this field is set when we select the target platform. Each platform has its architecture, so if we do not have it installed, the following message will be shown: No system images installed for this target. To solve this, open the SDK Manager and search for one of the architectures of the target platform, ARM EABI v7a System Image or Intel x86 Atom System Image.
Keyboard: Select if a hardware keyboard is displayed in the emulator. Check it.
Skin: Select if additional hardware controls are displayed in the emulator. You can select the Skin with dynamic hardware controls option.
Front Camera: Select if the emulator has a front camera or a back camera. The camera can be emulated or can be real by the use of a webcam from the computer. Select None for both cameras.
Network: Select the speed of the simulated network and select the delay in processing data across the network.

The new virtual device is now listed in the AVD Manager. Select the recently created virtual device to enable the remaining actions:

Start: Run the virtual device.
Edit: Edit the virtual device configuration.
Duplicate: Creates a new device configuration displaying the last step of the creation process. You can change its configuration parameters and then verify the new device.
Wipe Data: Removes the user files from the virtual device.
Show on Disk: Opens the virtual device directory on your system.
View Details: Opens a dialog detailing the virtual device characteristics.
Delete: Deletes the virtual device.

Click on the Start button. The emulator will be opened as shown in the following screenshot. Wait until it is completely loaded, and then you will be able to try it. In Android Studio, open the main layout with the graphical editor and click on the list of the devices.
As the following screenshot shows, our custom device definition appears and we can select it to preview the layout.

Navigation Editor

The Navigation Editor is a tool to create and structure the layouts of the application using a graphical viewer. To open this tool, navigate to the Tools | Android | Navigation Editor menu. The tool opens a file in XML format named main.nvg.xml. This file is stored in your project at /.navigation/app/raw/. Since there is only one layout and one activity in our project, the navigation editor only shows this main layout. If you select the layout, detailed information about it is displayed on the right panel of the editor. If you double-click on the layout, the XML layout file will be opened in a new tab.

We can create a new activity by right-clicking on the editor and selecting the New Activity option. We can also add transitions from the controls of a layout by shift-clicking on a control and then dragging to the target activity. Open the main layout and create a new button with the label Open Activity:

<Button
    android:id="@+id/button_open"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:layout_below="@+id/button_accept"
    android:layout_centerHorizontal="true"
    android:text="Open Activity" />

Open the Navigation Editor and add a second activity. Now the navigation editor displays both activities, as the next screenshot shows. Now we can add the navigation between them. Shift-drag from the new button of the main activity to the second activity. A blue line and a pink circle have been added to represent the new navigation. Select the navigation relationship to see its details on the right panel, as shown in the following screenshot. The right panel shows the source activity, the destination activity, and the gesture that triggers the navigation.

Now open our main activity class and notice the new code that has been added to implement the recently created navigation. The onCreate method now contains the following code:

findViewById(R.id.button_open).setOnClickListener(
    new View.OnClickListener() {
        @Override
        public void onClick(View v) {
            MainActivity.this.startActivity(
                new Intent(MainActivity.this, Activity2.class));
        }
    });

This code sets the onClick method of the new button, from which the second activity is launched.

Summary

This article taught us about the Navigation Editor tool. It also showed how to integrate the Google Play services with a project in Android Studio. In this article, we also got acquainted with the AVD Manager tool.

Resources for Article:

Further resources on this subject:
Android Native Application API [article]
Creating User Interfaces [article]
Android 3.0 Application Development: Multimedia Management [article]

Application Connectivity and Network Events

Packt
26 Dec 2014
10 min read
In this article by Kerri Shotts, author of PhoneGap for Enterprise, we will see how an app reacts to network changes and activities. In an increasingly connected world, mobile devices aren't always connected to the network. As such, the app needs to be sensitive to changes in the device's network connectivity. It also needs to be sensitive to the type of network (for example, cellular versus wired), not to mention being sensitive to the device the app itself is running on. Given all this, we will cover the following topics:

Determining network connectivity
Getting the current network type
Detecting changes in connectivity
Handling connectivity issues

(For more resources related to this topic, see here.)

Determining network connectivity

In a perfect world, we'd never have to worry if the device was connected to the Internet or not, and if our backend was reachable. Of course, we don't live in that world, so we need to respond appropriately when the device's network connectivity changes. What's critical to remember is that having a network connection in no way determines the reachability of a host. That is to say, it's entirely possible for a device to be connected to a Wi-Fi network or a mobile hotspot and yet be unable to contact your servers. This can happen for several reasons (any of which can prevent proper communication with your backend). In short, determining the network status and being sensitive to changes in the status really tells you only one thing: whether or not it is futile to attempt communication. After all, if the device isn't connected to any network, there's no reason to attempt communication over a nonexistent network. On the other hand, if a network is available, the only way to determine if your hosts are reachable or not is to try and contact them.

The ability to determine the device's network connectivity and respond to changes in the status is not available in Cordova/PhoneGap by default. You'll need to add a plugin before you can use this particular feature. You can install the plugin as follows:

cordova plugin add org.apache.cordova.network-information

The plugin's complete documentation is available at: https://github.com/apache/cordova-plugin-network-information/blob/master/doc/index.md.

Getting the current network type

Anytime after the deviceready event fires, you can query the plugin for the status of the current network connection by querying navigator.connection.type:

var networkType = navigator.connection.type;
switch (networkType) {
    case Connection.UNKNOWN:
        console.log("Unknown connection.");
        break;
    case Connection.ETHERNET:
        console.log("Ethernet connection.");
        break;
    case Connection.WIFI:
        console.log("Wi-Fi connection.");
        break;
    case Connection.CELL_2G:
        console.log("Cellular (2G) connection.");
        break;
    case Connection.CELL_3G:
        console.log("Cellular (3G) connection.");
        break;
    case Connection.CELL_4G:
        console.log("Cellular (4G) connection.");
        break;
    case Connection.CELL:
        console.log("Cellular connection.");
        break;
    case Connection.NONE:
        console.log("No network connection.");
        break;
}

If you executed the preceding code on a typical mobile device, you'd probably see some variation of the Cellular connection or the Wi-Fi connection message. If your device was on Wi-Fi and you proceeded to disable it and rerun the app, the Wi-Fi notice will be replaced with the Cellular connection notice. Now, if you put the device into airplane mode and rerun the app, you should see No network connection.
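If you find yourself repeating this switch throughout an app, it can be convenient to wrap the two checks the following discussion cares about into small helpers. The function names below are our own, not part of the plugin:

    // Illustrative helpers built on the plugin's constants (names are not part of the plugin).
    function isOffline() {
        return navigator.connection.type === Connection.NONE;
    }

    function isCellular() {
        var t = navigator.connection.type;
        return t === Connection.CELL || t === Connection.CELL_2G ||
               t === Connection.CELL_3G || t === Connection.CELL_4G;
    }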
Based on the available network type constants, it's clear that we can use this information in various ways:

We can tell if it makes sense to attempt a network request: if the type is Connection.NONE, there's no point in trying, as there's no network to service the request.
We can tell if we are on a wired network, a Wi-Fi network, or a cellular network. Consider a streaming video app: this app can permit full-quality video on a wired/Wi-Fi network, but use a lower-quality video stream if it is running on a cellular connection.

Although tempting, there's one thing the earlier code does not tell us: the speed of the network. That is, we can't use the type of the network as a proxy for the available bandwidth, even though it feels like we can. After all, aren't Ethernet connections typically faster than Wi-Fi connections? Also, isn't a 4G cellular connection faster than a 2G connection? In ideal circumstances, you'd be right. Unfortunately, it's possible for a fast 4G cellular network to be very congested, thus resulting in poor throughput. Likewise, it is possible for an Ethernet connection to communicate over a noisy wire and interact with a heavily congested network. This can also slow throughput.

It's also important to recognize that although you can learn something about the network the device is connected to, you can't use this to learn anything about the network conditions beyond that network. The device might indicate that it is attached to a Wi-Fi network, but this Wi-Fi network might actually be a mobile hotspot. It could be connected to a satellite with high latency, or to a blazing fast fiber network. As such, the only two things we can know for sure are whether or not it makes sense to attempt a request, and whether or not we need to limit the bandwidth if the device knows it is on a cellular connection. That's it. Any other use of this information is an abuse of the plugin, and is likely to cause undesirable behavior.

Detecting changes in connectivity

Determining the type of network connection once does little good, as the device can lose the connection or join a new network at any time. This means that we need to properly respond to these events in order to provide a good user experience.

Do not rely on the following events being fired when your app starts up for the first time. On some devices, it might take several seconds for the first event to fire; in some cases, the events might never fire (specifically, if testing in a simulator).

There are two events our app needs to listen to: the online event and the offline event. Their names are indicative of their function, so chances are good you already know what they do. The online event is fired when the device connects to a network, assuming it wasn't connected to a network before. The offline event does the opposite: it is fired when the device loses a connection to a network, but only if the device was previously connected to a network. This means that you can't depend on these events to detect changes in the type of the network: a move from a Wi-Fi network to a cellular network might not elicit any events at all. In order to listen to these events, you can use the following code:

document.addEventListener("online", handleOnlineEvent, false);
document.addEventListener("offline", handleOfflineEvent, false);

The event listener doesn't receive any information, so you'll almost certainly want to check the network type when handling an online event.
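For example, a minimal pair of handlers might simply consult navigator.connection.type again once the device comes back online; the handler bodies here are illustrative, and your app would substitute its own logic:

    // Illustrative handlers: the event carries no details, so re-check the plugin.
    function handleOnlineEvent(e) {
        var type = navigator.connection.type;
        console.log("Back online; current network type: " + type);
        // An app might use this point to resume queued requests or pick
        // a bandwidth-appropriate behavior (for example, a lower-quality stream on cellular).
    }

    function handleOfflineEvent(e) {
        // The offline event always corresponds to Connection.NONE.
        console.log("Connection lost; deferring network requests.");
    }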
The offline event will always correspond to a Connection.NONE network type.

Having the ability to detect changes in connectivity status means that our app can be more intelligent about how it handles network requests, but it doesn't tell us whether a request is guaranteed to succeed.

Handling connectivity issues

As the only way to know whether a network request might succeed is to actually attempt the request, we need to know how to properly handle the errors that might arise out of such an attempt. Between the mobile and the middle tier, the following are the possible errors that you might encounter while connecting to a network:

TimeoutError: This error is thrown when the XHR times out. (The default is 30 seconds for our wrapper, but if the XHR's timeout isn't otherwise set, it will attempt to wait forever.)
HTTPError: This error is thrown when the XHR completes and receives a response other than 200 OK. This can indicate any number of problems, but it does not indicate a network connectivity issue.
JSONError: This error is thrown when the XHR completes, but the JSON response from the server cannot be parsed. Something is clearly wrong on the server, of course, but this does not indicate a connectivity issue.
XHRError: This error is thrown when an error occurs while executing the XHR. This is definitely indicative of something going very wrong (not necessarily a connectivity issue, but there's a good chance).
MaxRetryAttemptsReached: This error is thrown when the XHR wrapper has given up retrying the request. The wrapper automatically retries in the case of TimeoutError and XHRError.

In all of the earlier cases, the catch method in the promise chain is called. At this point, you can attempt to determine the type of error in order to decide what to do next:

function sendFailRequest() {
  XHR.send( "GET", "http://www.really-bad-host-name.com/this/will/fail" )
  .then( function( response ) {
    console.log( response );
  })
  .catch( function( err ) {
    if ( err instanceof XHR.XHRError ||
         err instanceof XHR.TimeoutError ||
         err instanceof XHR.MaxRetryAttemptsReached ) {
      if ( navigator.connection.type === Connection.NONE ) {
        // we could try again once we have a network connection
        var retryRequest = function() {
          sendFailRequest();
          APP.removeGlobalEventListener( "networkOnline", retryRequest );
        };
        // wait for the network to come online; we'll cover this method in a moment
        APP.addGlobalEventListener( "networkOnline", retryRequest );
      } else {
        // we have a connection, but can't get through;
        // something's going on that we can't fix.
        alert( "Notice: can't connect to the server." );
      }
    }
    if ( err instanceof XHR.HTTPError ) {
      switch ( err.HTTPStatus ) {
        case 401: // unauthorized, log the user back in
          break;
        case 403: // forbidden, user doesn't have access
          break;
        case 404: // not found
          break;
        case 500: // internal server error
          break;
        default:
          console.log( "unhandled error: ", err.HTTPStatus );
      }
    }
    if ( err instanceof XHR.JSONParseError ) {
      console.log( "Issue parsing XHR response from server." );
    }
  }).done();
}
sendFailRequest();

Once a connection error is encountered, it's largely up to you and the type of app you are building to determine what to do next, but there are several options to consider as your next course of action:

Fail loudly and let the user know that their last action failed.
It might not be terribly great for the user experience, but it might be the only sensible thing to do.
Check whether there is a network connection present, and if not, hold on to the request until an online event is received, and then send the request again. This makes sense only if the request you are sending is a request for data, not a request that changes data, as the data might have changed in the interim.

Summary

In this article, you learned how an app built using PhoneGap/Cordova reacts to changing network conditions, and how to handle the connectivity issues that you might encounter.

Resources for Article:

Further resources on this subject:

Configuring the ChildBrowser plugin [article]
Using Location Data with PhoneGap [article]
Working with the sharing plugin [article]

Sneak peek into iOS Touch ID

Packt
23 Dec 2014
4 min read
This article, created by Mayank Birani, the author of Learning iOS 8 for Enterprise, covers Touch ID authentication, a feature Apple introduced in iOS 7. Previously, there was only four-digit passcode security in iPhones; now, Apple has extended security and introduced a new security pattern in iPhones. In Touch ID authentication, our fingerprint acts as a password. After launching the Touch ID fingerprint-recognition technology in the iPhone 5S last year, Apple is now providing it to developers with iOS 8. Third-party apps will be able to use Touch ID for authentication in the new iPhone and iPad OSes. Accounting apps, and other apps that contain personal and important data, can be protected with Touch ID. Now, you can protect all your apps with your fingerprint password. (For more resources related to this topic, see here.)

There are two ways to use Touch ID as an authentication mechanism in our iOS 8 applications. They are explained in the following sections.

Touch ID through touch authentication

The Local Authentication API returns a Boolean value to accept or decline the fingerprint. If there is an error, then an error code is returned that tells us what the issue is. Certain conditions have to be met when using Local Authentication. They are as follows:

The application must be in the foreground (this doesn't work with background processes)
If you're using the straight Local Authentication method, you will be responsible for handling all the errors and properly responding with your UI to ensure that there is an alternative method to log in to your apps

Touch ID through Keychain Access

Keychain Access includes the new Touch ID integration in iOS 8. With Keychain Access, we don't have to work on the implementation details; it automatically handles the passcode implementation using the user's passcode. Individual keychain items can be set to require Touch ID to unlock the item when it is requested in code, through the use of the new Access Control Lists (ACLs). ACL is a feature of iOS 8. If Touch ID has been locked out, then it will allow the user to enter the device's passcode to proceed without any interruption. There are some features of Keychain Access that make it the best option for us. They are listed here:

Keychain Access uses Touch ID, and its attributes won't be synced by any cloud services. This makes it very safe to use.
If more than one query is overlaid, the system gets confused about the correct user, and it will pop up a dialog box reporting multiple touch issues.

How to use the Local Authentication framework

Apple provides a framework called Local Authentication to use Touch ID in our app. This framework was introduced in iOS 8. To build an app that includes Touch ID authentication, we need to import this framework in our code. It is present in Apple's framework library. Let's see how to use the Local Authentication framework:

Import the Local Authentication framework as follows:

#import <LocalAuthentication/LocalAuthentication.h>

This framework works on Xcode 6 and above.
To use this API, we have to create a Local Authentication context, as follows: LAContext *passcode = [[LAContext alloc] init]; Now, check whether Touch ID is available or not and whether it can be used for authentication: - (BOOL)canEvaluatePolicy:(LAPolicy)policy error: (NSError * __autoreleasing *)error; To display Touch ID, use the following code: - (void)evaluatePolicy:(LAPolicy)policy localizedReason: (NSString *)localizedReason    reply:(void(^)(BOOL success, NSError *error))reply; Take a look at the following example of Touch ID: LAContext *passcode = [[LAContext alloc] init]; NSError *error = nil; NSString *Reason = <#String explaining why our app needs authentication#>; if ([passcode canEvaluatePolicy: LAPolicyDeviceOwnerAuthenticationWithBiometrics error:&error]) { [passcode evaluatePolicy:      LAPolicyDeviceOwnerAuthenticationWithBiometrics    localizedReason:Reason reply:^(BOOL success, NSError      *error) {        if (success)        {          // User authenticated successfully        } else        {          // User did not authenticate successfully,            go through the error        }    }]; } else {    // could not go through policy look at error and show an appropriate message to user } Summary In this article, we focused on the Touch ID API, which was introduced in iOS 8. We also discussed how Apple has improved its security feature using this API. Resources for Article: Further resources on this subject: Sparrow iOS Game Framework - The Basics of Our Game [article] Updating data in the background [article] Physics with UIKit Dynamics [article]

Getting Ready to Launch Your PhoneGap App in the Real World

Packt
31 Oct 2014
7 min read
In this article by Yuxian, Eugene Liang, author of PhoneGap and AngularJS for Cross-platform Development, we will run through some of the stuff that you should be doing before launching your app to the world, whether it's through Apple App Store or Google Android Play Store. (For more resources related to this topic, see here.) Using phonegap.com The services on https://build.phonegap.com/ are a straightforward way for you to get your app compiled for various devices. While this is a paid service, there is a free plan if you only have one app that you want to work on. This would be fine in our case. Choose a plan from PhoneGap You will need to have an Adobe ID in order to use PhoneGap services. If not, feel free to create one. Since the process for generating compiled apps from PhoneGap may change, it's best that you visit https://build.phonegap.com/ and sign up for their services and follow their instructions. Preparing your PhoneGap app for an Android release This section generally focuses on things that are specific for the Android platform. This is by no means a comprehensive checklist, but has some of the common tasks that you should go through before releasing your app to the Android world. Testing your app on real devices It is always good to run your app on an actual handset to see how the app is working. To run your PhoneGap app on a real device, issue the following command after you plug your handset into your computer: cordova run android You will see that your app now runs on your handset. Exporting your app to install on other devices In the previous section we talked about installing your app on your device. What if you want to export the APK so that you can test the app on other devices? Here's what you can do: As usual, build your app using cordova build android Alternatively, if you can, run cordova build release The previous step will create an unsigned release APK at /path_to_your_project/platforms/android/ant-build. This app is called YourAppName-release-unsigned.apk. Now, you can simply copy YourAppName-release-unsigned.apk and install it on any android based device you want. Preparing promotional artwork for release In general, you will need to include screenshots of your app for upload to Google Play. In case your device does not allow you to take screenshots, here's what you can do: The first technique that you can use is to simply run your app in the emulator and take screenshots off it. The size of the screenshot may be substantially larger, so you can crop it using GIMP or some other online image resizer. Alternatively, use the web app version and open it in your Google Chrome Browser. Resize your browser window so that it is narrow enough to resemble the width of mobile devices. Building your app for release To build your app for release, you will need Eclipse IDE. To start your Eclipse IDE, navigate to File | New | Project. Next, navigate to Existing Code | Android | Android Project. Click on Browse and select the root directory of your app. The Project to Import window should show platforms/android. Now, select Copy projects into workspace if you want and then click on Finish. Signing the app We have previously exported the app (unsigned) so that we can test it on devices other than those plugged into our computer. However, to release your app to the Play Store, you need to sign them with keys. The steps here are the general steps that you need to follow in order to generate "signed" APK apps to upload your app to the Play Store. 
Right-click on the project that you have imported in the previous section, and then navigate to Android Tools | Export Signed Application Package. You will see the Project Checks dialog. In the Project Checks dialog, you will see if your project has any errors or not. Next, you should see the Keystore selection dialog. You will now create the key using the app name (without space) and the extension .keystore. Since this app is the first version, there is no prior original name to use. Now, you can browse to the location and save the keystore, and in the same box, give the name of the keystore. In the Keystore election dialog, add your desired password twice and click on Next. You will now see the Key Creation dialog. In the Key Creation dialog, use app_name as your alias (without any spaces) and give the password of your keystore. Feel free to enter 50 for validity (which means the password is valid for 50 years). The remaining fields such as names, organization, and so on are pretty straightforward, so you can just go ahead and fill them in. Finally, select the Destination APK file, which is the location to which you will export your .apk file. Bear in mind that the preceding steps are not a comprehensive list of instructions. For the official documentation, feel free to visit http://developer.android.com/tools/publishing/app-signing.html. Now that we are done with Android, it's time to prepare our app for iOS. iOS As you might already know, preparing your PhoneGap app for Apple App Store requires similar levels, if not more, as compared to your usual Android deployment. In this section, I will not be covering things like making sure your app is in tandem with Apple User Interface guidelines, but rather, how to improve your app before it reaches the App Store. Before we get started, there are some basic requirements: Apple Developer Membership (if you ultimately want to deploy to the App Store) Xcode Running your app on an iOS device If you already have an iOS device, all you need to do is to plug your iOS device to your computer and issue the following command: cordova run ios You should see that your PhoneGap app will build and launch on your device. Note that before running the preceding command, you will need to install the ios-deploy package. You can install it using the following command: sudo npm install –g ios-deploy Other techniques There are other ways to test and deploy your apps. These methods can be useful if you want to deploy your app to your own devices or even for external device testing. Using Xcode Now let's get started with Xcode: After starting your project using the command-line tool and after adding in iOS platform support, you may actually start developing using Xcode. You can start your Xcode and click on Open Other, as shown in the following screenshot: Once you have clicked on Open Other, you will need to browse to your ToDo app folder. Drill down until you see ToDo.xcodeproj (navigate to platforms | ios). Select and open this file. You will see your Xcode device importing the files. After it's all done, you should see something like the following screenshot: Files imported into Xcode Notice that all the files are now imported to your Xcode, and you can start working from here. You can also deploy your app either to devices or simulators: Deploy on your device or on simulators Summary In this article, we went through the basics of packaging your app before submission to the respective app stores. 
In general, you should have a good idea of how to develop AngularJS apps and apply mobile skins to them so that they can be used with PhoneGap. You should also notice that developing PhoneGap apps typically follows the pattern of creating a web app first, before converting it to a PhoneGap version. Of course, you may structure your project so that you can build a PhoneGap version from day one, but that may make testing more difficult. Anyway, I hope that you enjoyed this article, and feel free to follow me at http://www.liangeugene.com and http://growthsnippets.com. Resources for Article: Further resources on this subject: Using Location Data with PhoneGap [Article] Working with the sharing plugin [Article] Geolocation – using PhoneGap features to improve an app's functionality, write once use everywhere [Article]

Building an iPhone App Using Swift: Part 2

Ryan Loomba
29 Oct 2014
5 min read
Let’s continue on from Part 1, and add a new table view to our app. In our storyboard, let’s add a table view controller by searching in the bottom right and dragging. Next, let’s add a button to our main view controller that will link to our new table view controller. Similar to what we did with the web view, Ctrl + click on this button and drag it to the newly created table view controller.Upon release, choose push. Now, let’s make sure everything works properly. Hit the large play button and click on Table View. You should now be taken to a blank table: Let’s populate this table with some text. Go to File ->  New ->  File  and choose a Cocoa Touch Class. Let’s call this file TableViewController, and make this a subclass of UITableViewController in the Swift language. Once the file is saved, we’ll be presented with a file with some boilerplate code.  On the first line in our class file, let’s declare a constant. This constant will be an array of strings that will be inserted into our table: let tableArray: NSArray = ["Apple", "Orange", "Banana", "Grape", "Kiwi"] Let’s modify the function that has this signature: func tableView(tableView: UITableView!, numberOfRowsInSection section: Int) -> Int This function returns the number of rows in our table view. Instead of setting this to zero, let’s change this to ten. Next, let’s uncomment the function that has this signature: override func numberOfSectionsInTableView(tableView: UITableView!) -> Int This function controls how many sections we will have in our table view. Let’s modify this function to return 1.  Finally, let’s add a function that will populate our cells: override func tableView(tableView: UITableView!, cellForRowAtIndexPath indexPath: NSIndexPath!) -> UITableViewCell! { let cell: UITableViewCell = UITableViewCell(style: UITableViewCellStyle.Subtitle, reuseIdentifier: "MyTestCell") cell.textLabel.text = tableArray.objectAtIndex(indexPath.row) as NSString return cell }  This function iterates through each row in our table and sets the text value to be equal to the fruits we declared at the top of the class file. The final file should look like this: class TableViewController: UITableViewController { let tableArray: NSArray = ["Apple", "Orange", "Banana", "Grape", "Kiwi"] override func viewDidLoad() { super.viewDidLoad() } override func didReceiveMemoryWarning() { super.didReceiveMemoryWarning() // Dispose of any resources that can be recreated. } // MARK: - Table view data source override func numberOfSectionsInTableView(tableView: UITableView!) -> Int { // #warning Potentially incomplete method implementation. // Return the number of sections. return 1 } override func tableView(tableView: UITableView!, numberOfRowsInSection section: Int) -> Int { // #warning Incomplete method implementation. // Return the number of rows in the section. return tableArray.count } override func tableView(tableView: UITableView!, cellForRowAtIndexPath indexPath: NSIndexPath!) -> UITableViewCell! { let cell: UITableViewCell = UITableViewCell(style: UITableViewCellStyle.Subtitle, reuseIdentifier: "MyTestCell") cell.textLabel.text = tableArray.objectAtIndex(indexPath.row) as NSString return cell } } Finally, we need to go back to our storyboard and link to our custom table view controller class. Select the storyboard, click on the table view controller, choose the identity inspector and fill in TableViewController  for the custom class. 
If we click the play button to build our project and then click on our table view button, we should see our table populated with names of fruit: Adding a map view Click on the Sample Swift App icon in the top left of the screen and then choose Build Phases. Under Link Binary with Libraries, click the plus button and search for MapKit. Once found, click Add: In the story board, add another view controller. Search for a MKMapView and drag it into the newly created controller. In the main navigation controller, create another button named Map View, Ctrl + click + drag to the newly created view controller, and upon release choose push: Additionally, choose the Map View in the storyboard, click on the connections inspector, Ctrl + click on delegate and drag to the map view controller. Next, let’s create a custom view controller that will control our map view. Go to File -> New -> File and choose Cocoa Touch. Let’s call this file MapViewController and inherit from UIViewController. Let’s now link our map view in our storyboard to our newly created map view controller file. In the storyboard, Ctrl + click on the map view and drag to our Map View Controller to create an IBOutlet variable. It should look something like this: @IBOutlet var mapView: MKMapView! Let’s add some code to our controller that will display the map around Apple’s campus in Cupertino, CA. I’ve looked up the GPS coordinates already, so here is what the completed code should look like: import UIKit import MapKit class MapViewController: UIViewController, MKMapViewDelegate { @IBOutlet var mapView: MKMapView! override func viewDidLoad() { super.viewDidLoad() let latitude:CLLocationDegrees = 37.331789 let longitude:CLLocationDegrees = -122.029620 let latitudeDelta:CLLocationDegrees = 0.01 let longitudeDelta:CLLocationDegrees = 0.01 let span:MKCoordinateSpan = MKCoordinateSpan(latitudeDelta: latitudeDelta, longitudeDelta: longitudeDelta) let location:CLLocationCoordinate2D = CLLocationCoordinate2DMake(latitude, longitude) let region: MKCoordinateRegion = MKCoordinateRegionMake(location, span) self.mapView.setRegion(region, animated: true) // Do any additional setup after loading the view. } override func didReceiveMemoryWarning() { super.didReceiveMemoryWarning() // Dispose of any resources that can be recreated. } } This should now build, and when you click on the Map View button, you should be able to see a map showing Apple’s campus at the center of the screen.  About the Author Ryan is a software engineer and electronic dance music producer currently residing in San Francisco, CA. Ryan started up as a biomedical engineer but fell in love with web/mobile programming after building his first Android app. You can find him on GitHub @rloomba.

LiveCode: Loops and Timers

Packt
10 Sep 2014
9 min read
In this article by Dr Edward Lavieri, author of LiveCode Mobile Development Cookbook, you will learn how to use timers and loops in your mobile apps. Timers can be used for many different functions, including a basketball shot clock, car racing time, the length of time logged into a system, and so much more. Loops are useful for counting and iterating through lists. All of this will be covered in this article. (For more resources related to this topic, see here.) Implementing a countdown timer To implement a countdown timer, we will create two objects: a field to display the current timer and a button to start the countdown. We will code two handlers: one for the button and one for the timer. How to do it... Perform the following steps to create a countdown timer: Create a new main stack. Place a field on the stack's card and name it timerDisplay. Place a button on the stack's card and name it Count Down. Add the following code to the Count Down button: on mouseUp local pTime put 19 into pTime put pTime into fld "timerDisplay" countDownTimer pTime end mouseUp Add the following code to the Count Down button: on countDownTimer currentTimerValue subtract 1 from currentTimerValue put currentTimerValue into fld "timerDisplay" if currentTimerValue > 0 then send "countDownTimer" && currentTimerValue to me in 1 sec end if end countDownTimer Test the code using a mobile simulator or an actual device. How it works... To implement our timer, we created a simple callback situation where the countDownTimer method will be called each second until the timer is zero. We avoided the temptation to use a repeat loop because that would have blocked all other messages and introduced unwanted app behavior. There's more... LiveCode provides us with the send command, which allows us to transfer messages to handlers and objects immediately or at a specific time, such as this recipe's example. Implementing a count-up timer To implement a count-up timer, we will create two objects: a field to display the current timer and a button to start the upwards counting. We will code two handlers: one for the button and one for the timer. How to do it... Perform the following steps to implement a count-up timer: Create a new main stack. Place a field on the stack's card and name it timerDisplay. Place a button on the stack's card and name it Count Up. Add the following code to the Count Up button: on mouseUp local pTime put 0 into pTime put pTime into fld "timerDisplay" countUpTimer pTime end mouseUp Add the following code to the Count Up button: on countUpTimer currentTimerValue add 1 to currentTimerValue put currentTimerValue into fld "timerDisplay" if currentTimerValue < 10 then send "countUpTimer" && currentTimerValue to me in 1 sec end if end countUpTimer Test the code using a mobile simulator or an actual device. How it works... To implement our timer, we created a simple callback situation where the countUpTimer method will be called each second until the timer is at 10. We avoided the temptation to use a repeat loop because that would have blocked all other messages and introduced unwanted app behavior. There's more... Timers can be tricky, especially on mobile devices. For example, using the repeat loop control when working with timers is not recommended because repeat blocks other messages. Pausing a timer It can be important to have the ability to stop or pause a timer once it is started. The difference between stopping and pausing a timer is in keeping track of where the timer was when it was interrupted. 
In this recipe, you'll learn how to pause a timer. Of course, if you never resume the timer, then the act of pausing it has the same effect as stopping it. How to do it... Use the following steps to create a count-up timer and pause function: Create a new main stack. Place a field on the stack's card and name it timerDisplay. Place a button on the stack's card and name it Count Up. Add the following code to the Count Up button: on mouseUp local pTime put 0 into pTime put pTime into fld "timerDisplay" countUpTimer pTime end mouseUp Add the following code to the Count Up button: on countUpTimer currentTimerValue add 1 to currentTimerValue put currentTimerValue into fld "timerDisplay" if currentTimerValue < 60 then send "countUpTimer" && currentTimerValue to me in 1 sec end if end countUpTimer Add a button to the card and name it Pause. Add the following code to the Pause button: on mouseUp repeat for each line i in the pendingMessages cancel (item 1 of i) end repeat end mouseUp In LiveCode, the pendingMessages option returns a list of currently scheduled messages. These are messages that have been scheduled for delivery but are yet to be delivered. To test this, first click on the Count Up button, and then click on the Pause button before the timer reaches 60. How it works... We first created a timer that counts up from 0 to 60. Next, we created a Pause button that, when clicked, cancels all pending system messages, including the call to the countUpTimer handler. Resuming a timer If you have a timer as part of your mobile app, you will most likely want the user to be able to pause and resume a timer, either directly or through in-app actions. See previous recipes in this article to create and pause a timer. This recipe covers how to resume a timer once it is paused. How to do it... Perform the following steps to resume a timer once it is paused: Create a new main stack. Place a field on the stack's card and name it timerDisplay. Place a button on the stack's card and name it Count Up. Add the following code to the Count Up button: on mouseUp local pTime put 0 into pTime put pTime into fld "timerDisplay" countUpTimer pTime end mouseUp on countUpTimer currentTimerValue add 1 to currentTimerValue put currentTimerValue into fld "timerDisplay" if currentTimerValue < 60 then send "countUpTimer" && currentTimerValue to me in 1 sec end if end countUpTimer Add a button to the card and name it Pause. Add the following code to the Pause button: on mouseUp repeat for each line i in the pendingMessages cancel (item 1 of i) end repeat end mouseUp Place a button on the card and name it Resume. Add the following code to the Resume button: on mouseUp local pTime put the text of fld "timerDisplay" into pTime countUpTimer pTime end mouseUp on countUpTimer currentTimerValue add 1 to currentTimerValue put currentTimerValue into fld "timerDisplay" if currentTimerValue <60 then send "countUpTimer" && currentTimerValue to me in 1 sec end if end countUpTimer To test this, first, click on the Count Up button, then click on the Pause button before the timer reaches 60. Finally, click on the Resume button. How it works... We first created a timer that counts up from 0 to 60. Next, we created a Pause button that, when clicked, cancels all pending system messages, including the call to the countUpTimer handler. When the Resume button is clicked on, the current value of the timer, based on the timerDisplay button, is used to continue incrementing the timer. 
In LiveCode, pendingMessages returns a list of currently scheduled messages. These are messages that have been scheduled for delivery but are yet to be delivered. Using a loop to count There are numerous reasons why you might want to implement a counter in a mobile app. You might want to count the number of items on a screen (that is, cold pieces in a game), the number of players using your app simultaneously, and so on. One of the easiest methods of counting is to use a loop. This recipe shows you how to easily implement a loop. How to do it... Use the following steps to instantiate a loop that counts: Create a new main stack. Rename the stack's default card to MainScreen. Drag a label field to the card and name it counterDisplay. Drag five checkboxes to the card and place them anywhere. Change the names to 1, 2, 3, 4, and 5. Drag a button to the card and name it Loop to Count. Add the following code to the Loop to Count button: on mouseUp local tButtonNumber put the number of buttons on this card into tButtonNumber if tButtonNumber > 0 then repeat with tLoop = 1 to tButtonNumber set the label of btn value(tLoop) to "Changed " & tLoop end repeat put "Number of button's changed: " & tButtonNumber into fld "counterDisplay" end if end mouseUp Test the code by running it in a mobile simulator or on an actual device. How it works... In this recipe, we created several buttons on a card. Next, we created code to count the number of buttons and a repeat control structure to sequence through the buttons and change their labels. Using a loop to iterate through a list In this recipe, we will create a loop to iterate through a list of text items. Our list will be a to-do or action list. Our loop will process each line and number them on screen. This type of loop can be useful when you need to process lists of unknown lengths. How to do it... Perform the following steps to create an iterative loop: Create a new main stack. Drag a scrolling list field to the stack's card and name it myList. Change the contents of the myList field to the following, paying special attention to the upper- and lowercase values of each line: Wash Truck Write Paper Clean Garage Eat Dinner Study for Exam Drag a button to the card and name it iterate. Add the following code to the iterate button: on mouseUp local tLines put the number of lines of fld "myList" into tLines repeat with tLoop = 1 to tLines put tLoop & " - " & line tLoop of fld "myList"into line tLoop of fld "myList" end repeat end mouseUp Test the code by clicking on the iterate button. How it works... We used the repeat control structure to iterate through a list field one line at a time. This was accomplished by first determining the number of lines in that list field, and then setting the repeat control structure to sequence through the lines. Summary In this article we examined the LiveCode scripting required to implement and control count-up and countdown timers. We also learnt how to use loops to count and iterate through a list. Resources for Article:  Further resources on this subject: Introduction to Mobile Forensics [article] Working with Pentaho Mobile BI [article] Building Mobile Apps [article]

Unit and Functional Tests

Packt
21 Aug 2014
13 min read
In this article by Belén Cruz Zapata and Antonio Hernández Niñirola, authors of the book Testing and Securing Android Studio Applications, you will learn how to use unit tests that allow developers to quickly verify the state and behavior of an activity on its own. (For more resources related to this topic, see here.) Testing activities There are two possible modes of testing activities: Functional testing: In functional testing, the activity being tested is created using the system infrastructure. The test code can communicate with the Android system, send events to the UI, or launch another activity. Unit testing: In unit testing, the activity being tested is created with minimal connection to the system infrastructure. The activity is tested in isolation. In this article, we will explore the Android testing API to learn about the classes and methods that will help you test the activities of your application. The test case classes The Android testing API is based on JUnit. Android JUnit extensions are included in the android.test package. The following figure presents the main classes that are involved when testing activities: Let's learn more about these classes: TestCase: This JUnit class belongs to the junit.framework. The TestCase package represents a general test case. This class is extended by the Android API. InstrumentationTestCase: This class and its subclasses belong to the android.test package. It represents a test case that has access to instrumentation. ActivityTestCase: This class is used to test activities, but for more useful classes, you should use one of its subclasses instead of the main class. ActivityInstrumentationTestCase2: This class provides functional testing of an activity and is parameterized with the activity under test. For example, to evaluate your MainActivity, you have to create a test class named MainActivityTest that extends the ActivityInstrumentationTestCase2 class, shown as follows: public class MainActivityTest extends ActivityInstrumentationTestCase2<MainActivity> ActivityUnitTestCase: This class provides unit testing of an activity and is parameterized with the activity under test. For example, to evaluate your MainActivity, you can create a test class named MainActivityUnitTest that extends the ActivityUnitTestCase class, shown as follows: public class MainActivityUnitTest extends ActivityUnitTestCase<MainActivity> There is a new term that has emerged from the previous classes called Instrumentation. Instrumentation The execution of an application is ruled by the life cycle, which is determined by the Android system. For example, the life cycle of an activity is controlled by the invocation of some methods: onCreate(), onResume(), onDestroy(), and so on. These methods are called by the Android system and your code cannot invoke them, except while testing. The mechanism to allow your test code to invoke callback methods is known as Android instrumentation. Android instrumentation is a set of methods to control a component independent of its normal lifecycle. To invoke the callback methods from your test code, you have to use the classes that are instrumented. For example, to start the activity under test, you can use the getActivity() method that returns the activity instance. For each test method invocation, the activity will not be created until the first time this method is called. Instrumentation is necessary to test activities considering the lifecycle of an activity is based on the callback methods. 
These callback methods include the UI events as well. From an instrumented test case, you can use the getInstrumentation() method to get access to an Instrumentation object. This class provides methods related to the system interaction with the application. The complete documentation about this class can be found at: http://developer.android.com/reference/android/app/Instrumentation.html. Some of the most important methods are as follows: The addMonitor method: This method adds a monitor to get information about a particular type of Intent and can be used to look for the creation of an activity. A monitor can be created indicating IntentFilter or displaying the name of the activity to the monitor. Optionally, the monitor can block the activity start to return its canned result. You can use the following call definitions to add a monitor: ActivityMonitor addMonitor (IntentFilter filter, ActivityResult result, boolean block). ActivityMonitor addMonitor (String cls, ActivityResult result, boolean block). The following line is an example line code to add a monitor: Instrumentation.ActivityMonitor monitor = getInstrumentation().addMonitor(SecondActivity.class.getName(), null, false); The activity lifecycle methods: The methods to call the activity lifecycle methods are: callActivityOnCreate, callActivityOnDestroy, callActivityOnPause, callActivityOnRestart, callActivityOnResume, callActivityOnStart, finish, and so on. For example, you can pause an activity using the following line code: getInstrumentation().callActivityOnPause(mActivity); The getTargetContext method: This method returns the context for the application. The startActivitySync method: This method starts a new activity and waits for it to begin running. The function returns when the new activity has gone through the full initialization after the call to its onCreate method. The waitForIdleSync method: This method waits for the application to be idle synchronously. The test case methods JUnit's TestCase class provides the following protected methods that can be overridden by the subclasses: setUp(): This method is used to initialize the fixture state of the test case. It is executed before every test method is run. If you override this method, the first line of code will call the superclass. A standard setUp method should follow the given code definition: @Override protected void setUp() throws Exception { super.setUp(); // Initialize the fixture state } tearDown(): This method is used to tear down the fixture state of the test case. You should use this method to release resources. It is executed after running every test method. If you override this method, the last line of the code will call the superclass, shown as follows: @Override protected void tearDown() throws Exception { // Tear down the fixture state super.tearDown(); } The fixture state is usually implemented as a group of member variables but it can also consist of database or network connections. If you open or init connections in the setUp method, they should be closed or released in the tearDown method. When testing activities in Android, you have to initialize the activity under test in the setUp method. This can be done with the getActivity() method. The Assert class and method JUnit's TestCase class extends the Assert class, which provides a set of assert methods to check for certain conditions. When an assert method fails, AssertionFailedException is thrown. The test runner will handle the multiple assertion exceptions to present the testing results. 
Optionally, you can specify the error message that will be shown if the assert fails. You can read the Android reference of the TestCase class to examine all the available methods at http://developer.android.com/reference/junit/framework/Assert.html. The assertion methods provided by the Assert superclass are as follows: assertEquals: This method checks whether the two values provided are equal. It receives the actual and expected value that is to be compared with each other. This method is overloaded to support values of different types, such as short, String, char, int, byte, boolean, float, double, long, or Object. For example, the following assertion method throws an exception since both values are not equal: assertEquals(true, false); assertTrue or assertFalse: These methods check whether the given Boolean condition is true or false. assertNull or assertNotNull: These methods check whether an object is null or not. assertSame or assertNotSame: These methods check whether two objects refer to the same object or not. fail: This method fails a test. It can be used to make sure that a part of code is never reached, for example, if you want to test that a method throws an exception when it receives a wrong value, as shown in the following code snippet: try{ dontAcceptNullValuesMethod(null); fail("No exception was thrown"); } catch (NullPointerExceptionn e) { // OK } The Android testing API, which extends JUnit, provides additional and more powerful assertion classes: ViewAsserts and MoreAsserts. The ViewAsserts class The assertion methods offered by JUnit's Assert class are not enough if you want to test some special Android objects such as the ones related to the UI. The ViewAsserts class implements more sophisticated methods related to the Android views, that is, for the View objects. The whole list with all the assertion methods can be explored in the Android reference about this class at http://developer.android.com/reference/android/test/ViewAsserts.html. Some of them are described as follows: assertBottomAligned or assertLeftAligned or assertRightAligned or assertTopAligned(View first, View second): These methods check that the two specified View objects are bottom, left, right, or top aligned, respectively assertGroupContains or assertGroupNotContains(ViewGroup parent, View child): These methods check whether the specified ViewGroup object contains the specified child View assertHasScreenCoordinates(View origin, View view, int x, int y): This method checks that the specified View object has a particular position on the origin screen assertHorizontalCenterAligned or assertVerticalCenterAligned(View reference View view): These methods check that the specified View object is horizontally or vertically aligned with respect to the reference view assertOffScreenAbove or assertOffScreenBelow(View origin, View view): These methods check that the specified View object is above or below the visible screen assertOnScreen(View origin, View view): This method checks that the specified View object is loaded on the screen even if it is not visible The MoreAsserts class The Android API extends some of the basic assertion methods from the Assert class to present some additional methods. 
Some of the methods included in the MoreAsserts class are: assertContainsRegex(String expectedRegex, String actual): This method checks that the expected regular expression (regex) contains the actual given string assertContentsInAnyOrder(Iterable<?> actual, Object… expected): This method checks that the iterable object contains the given objects and in any order assertContentsInOrder(Iterable<?> actual, Object… expected): This method checks that the iterable object contains the given objects, but in the same order assertEmpty: This method checks if a collection is empty assertEquals: This method extends the assertEquals method from JUnit to cover collections: the Set objects, int arrays, String arrays, Object arrays, and so on assertMatchesRegex(String expectedRegex, String actual): This method checks whether the expected regex matches the given actual string exactly Opposite methods such as assertNotContainsRegex, assertNotEmpty, assertNotEquals, and assertNotMatchesRegex are included as well. All these methods are overloaded to optionally include a custom error message. The Android reference about the MoreAsserts class can be inspected to learn more about these assert methods at http://developer.android.com/reference/android/test/MoreAsserts.html. UI testing and TouchUtils The test code is executed in two different threads as the application under test, although, both the threads run in the same process. When testing the UI of an application, UI objects can be referenced from the test code, but you cannot change their properties or send events. There are two strategies to invoke methods that should run in the UI thread: Activity.runOnUiThread(): This method creates a Runnable object in the UI thread in which you can add the code in the run() method. For example, if you want to request the focus of a UI component: public void testComponent() { mActivity.runOnUiThread( new Runnable() { public void run() { mComponent.requestFocus(); } } ); … } @UiThreadTest: This annotation affects the whole method because it is executed on the UI thread. Considering the annotation refers to an entire method, statements that do not interact with the UI are not allowed in it. For example, consider the previous example using this annotation, shown as follows: @UiThreadTest public void testComponent () { mComponent.requestFocus(); … } There is also a helper class that provides methods to perform touch interactions on the view of your application: TouchUtils. The touch events are sent to the UI thread safely from the test thread; therefore, the methods of the TouchUtils class should not be invoked in the UI thread. Some of the methods provided by this helper class are as follows: The clickView method: This method simulates a click on the center of a view The drag, dragQuarterScreenDown, dragViewBy, dragViewTo, dragViewToTop methods: These methods simulate a click on an UI element and then drag it accordingly The longClickView method: This method simulates a long press click on the center of a view The scrollToTop or scrollToBottom methods: These methods scroll a ViewGroup to the top or bottom The mock object classes The Android testing API provides some classes to create mock system objects. Mock objects are fake objects that simulate the behavior of real objects but are totally controlled by the test. They allow isolation of tests from the rest of the system. Mock objects can, for example, simulate a part of the system that has not been implemented yet, or a part that is not practical to be tested. 
In Android, the following mock classes can be found: MockApplication, MockContext, MockContentProvider, MockCursor, MockDialogInterface, MockPackageManager, MockResources, and MockContentResolver. These classes are under the android.test.mock package. The methods of these objects are nonfunctional and throw an exception if they are called. You have to override the methods that you want to use. Creating an activity test In this section, we will create an example application so that we can learn how to implement the test cases to evaluate it. Some of the methods presented in the previous section will be put into practice. You can download the example code files from your account at http://www.packtpub.com. Our example is a simple alarm application that consists of two activities: MainActivity and SecondActivity. The MainActivity implements a self-built digital clock using text views and buttons. The purpose of creating a self-built digital clock is to have more code and elements to use in our tests. The layout of MainActivity is a relative one that includes two text views: one for the hour (the tvHour ID) and one for the minutes (the tvMinute ID). There are two buttons below the clock: one to subtract 10 minutes from the clock (the bMinus ID) and one to add 10 minutes to the clock (the bPlus ID). There is also an edit text field to specify the alarm name. Finally, there is a button to launch the second activity (the bValidate ID). Each button has a pertinent method that receives the click event when the button is pressed. The layout looks like the following screenshot: The SecondActivity receives the hour from the MainActivity and shows its value in a text view simulating that the alarm was saved. The objective to create this second activity is to be able to test the launch of another activity in our test case. Summary In this article, you learned how to use unit tests that allow developers to quickly verify the state and behavior of an activity on its own. Resources for Article: Further resources on this subject: Creating Dynamic UI with Android Fragments [article] Saying Hello to Unity and Android [article] Augmented Reality [article]

Configuring Your Operating System

Packt
18 Aug 2014
3 min read
In this article by William Smith, author of Learning Xamarin Studio, we will configure our operating system. (For more resources related to this topic, see here.) Configuring your Mac To configure your Mac, perform the following steps: From the Apple menu, open System Preferences. Open the Personal group. Select the Security and Privacy item. Open the Firewall tab, and ensure the Firewall is turned off. Configuring your Windows machine To configure your Windows machine, download and install the Xamarin Unified Installer. This installer includes a tool called Xamarin Bonjour Service, which runs Apple's network discovery protocol. Xamarin Bonjour Service requires administrator rights, so you may want to just run the installer as an administrator. Configuring a Windows VM within Mac There is really no difference between using the Visual Studio plugin from a Windows machine or from a VM using software, such as Parallels or VMware. However, if you are running Xamarin Studio on a Retina Macbook Pro, it is advisable to adjust the hardware video settings. Otherwise, some of the elements within Xamarin Studio will render poorly making them difficult to use. The following screenshot contains the recommended video settings: To adjust the settings in Parallels, follow these steps: If your Windows VM is running, shut it down. With your VM shut down, go to Virtual Machine | Configure…. Choose the Hardware tab. Select the Video group. Under Resolution, choose Scaled. Final installation steps Now that the necessary tools are installed and the settings have been enabled, you still need to link to your Xamarin account in Visual Studio, as well as connect Visual Studio to your Mac build machine. To connect to your Xamarin account, follow these steps: In Visual Studio, go to Tools | Xamarin Account…. Click Login to your Xamarin Account and enter your credentials. Once your credentials are verified, you will receive a confirmation message. To connect to your Mac build machine, follow these steps: On your Mac, open Spotlight and type Xamarin build host. Choose Xamarin.iOS Build Host under the Applications results group. After the Build Host utility dialog opens click the Pair button to continue. You will be provided with a PIN. Write this down. On your PC, open Xamarin Studio. Go to Tools | Options | Xamarin | iOS Settings. After the Build Host utility opens, click the Continue button. If your Mac and network are correctly configured, you will see your Mac in the list of available build machines. Choose your build machine and click the Continue button. You will be prompted to enter the PIN. Do so, then click the Pair button. Once the machines are paired, you can build, test, and deploy applications using the networked Mac. If for whatever reason you want to unpair these two machines, open the Xamarin.iOS Build Host on your Mac again, and click the Invalidate PIN button. When prompted, complete the process by clicking the Unpair button. Summary In this article, we learned how to configure our operating system. We also learned how to connect to your Mac build machine. Resources for Article: Further resources on this subject: Updating data in the background [Article] Gesture [Article] Making POIApp Location Aware [Article]

How to Convert POJO to JSON Using Gson in Android Studio

Troy Miles
01 Jul 2014
6 min read
JSON has become the defacto standard of data exchange on the web. Compared to its cousin XML, it is smaller in size and faster to both create and parse. In fact, it seems so simple that many developers roll their own code to convert plain old Java objects or POJO to and from JSON. For simple objects, it is fairly easy to write the conversion code, but as your objects grow more complex, your code's complexity grows as well. Do you really want to maintain a bunch of code whose functionality is not truly intrinsic to your app? Luckily there is no reason for you to do so. There are quite a few alternatives to writing your own Java JSON serializer/deserializer; in fact, json.org lists 25 of them. One of them, Gson, was created by Google for use on internal projects and later was open sourced. Gson is hosted on Google Code and the source code is available in an SVN repo. Create an Android app The process of converting POJO to JSON is called serialization. The reversed process is deserialization. A big reason that GSON is such a popular library is how simple it makes both processes. For both, the only thing you need is the Gson class. Let's create a simple Android app and see how simple Gson is to use. Start Android Studio and select new project Change the Application name to GsonTest. Click Next Click Next again. Click Finish At this point we have a complete Android hello world app. In past Android IDEs, we would add the Gson library at this point, but we don't do that anymore. Instead we add a Gson dependency to our build.gradle script and that will take care of everything else for us. It is super important to edit the correct Gradle file. There is one at the root directory but the one we want is at the app directory. Double-click it to open. Locate the dependencies section near the bottom of the script. After the last entry add the following line: compile 'com.google.code.gson:gson:2.2.4' After you add it, save the script and then click the Sync Project with Gradle Files icon. It is the fifth icon from the right-hand side in the toolbar. At this point, the Gson library is visible to your app. So let's build some test code. Create test code with JSON For our test we are going to use the JSON Test web service at https://www.jsontest.com/. It is a testing platform for JSON. Basically it gives us a place to send data to in order to test if we are properly serializing and deserializing data. JSON Test has a lot of services but we will use the validate service. You pass it a JSON string URL encoded as a query string and it will reply with a JSON object that indicates whether or not the JSON was encoded correctly, as well as some statistical information. The first thing we need to do is create two classes. The first class, TestPojo, is the Java class that we are going to serialize and send to JSON Test. TestPojo doesn't do anything important. It is just for our test; however, it contains several different types of objects: ints, strings, and arrays of ints. Classes that you create can easily be much more complicated, but don't worry, Gson can handle it, for example: 1 package com.tekadept.gsontest.app; 2 3 public class TestPojo { 4 private intvalue1 = 1; 5 private String value2 = "abc"; 6 private intvalues[] = {1, 2, 3, 4}; 7 private transient intvalue3 = 3; 8 9 // no argsctor 10 TestPojo() { 11 } 12 } 13 Gson will also respect the Java transient modifier, which specifies that a field should not be serialized. Any field with it will not appear in the JSON. 
The second class, JsonValidate, will hold the results of our call to JSON Test. To make it easy to parse, I've kept the field names exactly the same as those returned by the service, except for one. Gson has an annotation, @SerializedName; if you place it before a field, the Java name of that field can differ from the JSON name. For example, if we wanted to name the validate field isValid, all we would have to do is:

    package com.tekadept.gsontest.app;

    import com.google.gson.annotations.SerializedName;

    public class JsonValidate {

        public String object_or_array;
        public boolean empty;
        public long parse_time_nanoseconds;
        @SerializedName("validate")
        public boolean isValid;
        public int size;
    }

By using the @SerializedName annotation, our name for the JSON validate field becomes isValid. Just remember that you only need the annotation when you change a field's name.

To call JSON Test's validate service, we follow the best practice of not doing it on the UI thread by using an async task. An async task has four steps: onPreExecute, doInBackground, onProgressUpdate, and onPostExecute. The doInBackground method runs on another thread, which lets us wait for the JSON Test service to respond without triggering the dreaded "application not responding" error. You can see this in action in the following code:

    @Override
    protected String doInBackground(String... notUsed) {
        TestPojo tp = new TestPojo();
        Gson gson = new Gson();
        String result = null;

        try {
            String json = URLEncoder.encode(gson.toJson(tp), "UTF-8");
            String url = String.format("%s%s", Constants.JsonTestUrl, json);
            result = getStream(url);
        } catch (Exception ex) {
            Log.v(Constants.LOG_TAG, "Error: " + ex.getMessage());
        }
        return result;
    }

To encode our Java object, all we need to do is create an instance of the Gson class and call its toJson method, passing the instance of the class we wish to serialize. Deserialization is nearly as simple. In the onPostExecute method, we get the string of JSON from the web service and call the convertFromJson method, which does the conversion. It first makes sure it got a valid string, then converts it by calling Gson's fromJson method, passing the string and the class to convert to, as follows:

    @Override
    protected void onPostExecute(String result) {

        // convert JSON string to a POJO
        JsonValidate jv = convertFromJson(result);
        if (jv != null) {
            Log.v(Constants.LOG_TAG, "Conversion Succeed: " + result);
        } else {
            Log.v(Constants.LOG_TAG, "Conversion Failed");
        }
    }

    private JsonValidate convertFromJson(String result) {
        JsonValidate jv = null;
        if (result != null && result.length() > 0) {
            try {
                Gson gson = new Gson();
                jv = gson.fromJson(result, JsonValidate.class);
            } catch (Exception ex) {
                Log.v(Constants.LOG_TAG, "Error: " + ex.getMessage());
            }
        }
        return jv;
    }

Conclusion

For most developers, this is all you need to know. There is a complete guide to Gson at https://sites.google.com/site/gson/gson-user-guide. The complete source code for the test app is at https://github.com/Rockncoder/GsonTest.

Discover more Android tutorials and extra content on our Android page.

Sprites

Packt
24 Jun 2014
6 min read
The goal of this article is to learn how to work with sprites and get to know their main properties. After reading this article, you will be able to add sprites to your games. In this article, we will cover the following topics:

  - Setting up the initial project
  - Sprites and their main properties
  - Adding sprites to the scene
  - Adding sprites as a child node of another sprite
  - Manipulating sprites (moving, flipping, and so on)
  - Performance considerations when working with many sprites
  - Creating spritesheets and using the sprite batch node to optimize performance
  - Using basic animation

Creating the game project

We could create many separate mini projects, each demonstrating a single Cocos2D aspect, but that way we wouldn't learn how to make a complete game. Instead, we're going to create a game that will demonstrate every aspect of Cocos2D that we learn. The game we're going to make will be about hunting. Not that I'm a fan of hunting, but taking into account the material we need to cover and practically use in the game's code, a hunting game looks like the perfect candidate. The following is a screenshot from the game we're going to develop. It will have several levels demonstrating different aspects of Cocos2D in action.

Time for action – creating the Cocohunt Xcode project

Let's start creating this game by creating a new Xcode project using the Cocos2D template, just as we did with the HelloWorld project, using the following steps:

1. Start Xcode and navigate to File | New | Project… to open the project creation dialog.
2. Navigate to the iOS | cocos2d v3.x category on the left of the screen and select the cocos2d iOS template on the right. Click on the Next button.
3. In the next dialog, fill out the form as follows:
   Product Name: Cocohunt
   Organization Name: Packt Publishing
   Company Identifier: com.packtpub
   Device Family: iPhone
4. Click on the Next button and pick a folder where you'd like to save this project. Then, click on the Create button.
5. Build and run the project to make sure that everything works. After running the project, you should see the already familiar Hello World screen, so we won't show it here.

Make sure that you select the correct simulator version to use. This project will support iPhone, iPhone Retina (3.5-inch), iPhone Retina (4-inch), and iPhone Retina (4-inch, 64-bit) simulators, or an actual iPhone 3GS or newer device running iOS 5.0 or higher.

What just happened?

Now we have a project that we'll be working on. The project creation part should be very similar to the process of creating the HelloWorld project, so let's keep the tempo and move on.

Time for action – creating GameScene

As we're going to work on this project for some time, let's keep everything clean and tidy by performing the following steps:

1. First of all, remove the following files, as we won't need them: HelloWorldScene.h, HelloWorldScene.m, IntroScene.h, and IntroScene.m.
2. We'll use groups to separate our classes and keep things organized. To create a group in Xcode, right-click on the root project folder in Xcode (Cocohunt in our case) and select the New Group menu option (command + alt + N). Refer to the following screenshot.
3. Go ahead and create a new group and name it Scenes.
4. After the group is created, let's place our first scene in it. We're going to create a new Objective-C class called GameScene and make it a subclass of CCScene. Right-click on the Scenes group that we've just created and select the New File option.
   Right-clicking on the group and selecting New File instead of using File | New | File will place our new file in the selected group after creation.
5. Select the Cocoa Touch category on the left of the screen and the Objective-C class on the right. Then click on the Next button.
6. In the next dialog, name the class GameScene and make it a subclass of the CCScene class. Then click on the Next button.
7. Make sure that you're in the Cocohunt project folder to save the file and click on the Create button.

   You can create the Scenes folder while in the save dialog using the New Folder button and save the GameScene class there. This way, the hierarchy of groups in Xcode will match the physical folder hierarchy on disk. This is the way I'm going to do it so that you can easily find any file in the book's supporting projects. The groups and the organization of files within groups will be identical either way, so you can always just open the Cocohunt.xcodeproj project and review the code in Xcode.

   This should create the GameScene.h and GameScene.m files in the Scenes group, as you can see in the following screenshot.

8. Now, switch to the AppDelegate.m file and remove the following header imports at the top:

   #import "IntroScene.h"
   #import "HelloWorldScene.h"

   It is important to remove these #import directives, or we will get errors because we removed the files they reference.

9. Import the GameScene.h header as follows:

   #import "GameScene.h"

10. Then find the startScene: method and replace it with the following:

    -(CCScene *)startScene
    {
        return [[GameScene alloc] init];
    }

11. Build and run the game. After the splash screen, you should see the already familiar black screen.

What just happened?

We've just created another project using the Cocos2D template. Most of the steps should be familiar, as we have already done them in the past. After creating the project, we removed the unneeded files generated by the Cocos2D template, just as you will do most of the time when creating a new project, since you usually don't need those example scenes in your project. We're going to work on this project for some time, and it is best to start organizing things well right away; this is why we've created a new group to contain our game scene files. We'll add more groups to the project later. As a final step, we created our GameScene scene and displayed it on the screen at the start of the game. This is very similar to what we did in our HelloWorld project, so you shouldn't have any difficulties with it.

Writing Tag Content

Packt
10 Jun 2014
9 min read
An NDEF message is composed of one or more NDEF records, and each record contains a payload and a header in which the data length, type, and identifier are stored. In this article, we will create some examples that will allow us to work with the NDEF standard and start writing NFC applications.

Working with the NDEF record

Android provides an easy way to read and write data when it is formatted as per the NDEF standard. This format is the easiest way to work with tag data because it saves us from lots of the operations involved in reading and writing raw bytes. So, unless we need to get our hands dirty and write our own protocol, this is the way to go (you can still build on top of NDEF and achieve a custom, yet standards-based, protocol).

Getting ready

Make sure you have a working Android development environment. If you don't, the ADT Bundle is a good environment to start with (you can get it at http://developer.android.com/sdk/index.html). Make sure you have an NFC-enabled Android device or a virtual test environment. It is assumed that Eclipse is the development IDE.

How to do it...

We are going to create an application that writes any NDEF record to a tag by performing the following steps:

1. Open Eclipse and create a new Android application project named NfcBookCh3Example1 with the package name com.nfcbook.ch3.example1. Make sure the AndroidManifest.xml file is correctly configured.
2. Open the MainActivity.java file located under com.nfcbook.ch3.example1 and add the following class member:

   private NfcAdapter nfcAdapter;

3. Implement the enableForegroundDispatch method, filtering tags by using Ndef and NdefFormatable, and invoke it in the onResume method:

   private void enableForegroundDispatch() {
     Intent intent = new Intent(this, MainActivity.class).addFlags(Intent.FLAG_RECEIVER_REPLACE_PENDING);
     PendingIntent pendingIntent = PendingIntent.getActivity(this, 0, intent, 0);
     IntentFilter[] intentFilter = new IntentFilter[] {};
     String[][] techList = new String[][] {
       { android.nfc.tech.Ndef.class.getName() },
       { android.nfc.tech.NdefFormatable.class.getName() }
     };
     if (Build.DEVICE.matches(".*generic.*")) {
       // clean up the tech filter when in emulator since it doesn't work properly
       techList = null;
     }
     nfcAdapter.enableForegroundDispatch(this, pendingIntent, intentFilter, techList);
   }

4. Instantiate the nfcAdapter class field in the onCreate method:

   protected void onCreate(Bundle savedInstanceState) {
     ...
     nfcAdapter = NfcAdapter.getDefaultAdapter(this);
   }

5. Implement the formatTag method:

   private boolean formatTag(Tag tag, NdefMessage ndefMessage) {
     try {
       NdefFormatable ndefFormat = NdefFormatable.get(tag);
       if (ndefFormat != null) {
         ndefFormat.connect();
         ndefFormat.format(ndefMessage);
         ndefFormat.close();
         return true;
       }
     } catch (Exception e) {
       Log.e("formatTag", e.getMessage());
     }
     return false;
   }

6. Implement the writeNdefMessage method:

   private boolean writeNdefMessage(Tag tag, NdefMessage ndefMessage) {
     try {
       if (tag != null) {
         Ndef ndef = Ndef.get(tag);
         if (ndef == null) {
           return formatTag(tag, ndefMessage);
         } else {
           ndef.connect();
           if (ndef.isWritable()) {
             ndef.writeNdefMessage(ndefMessage);
             ndef.close();
             return true;
           }
           ndef.close();
         }
       }
     } catch (Exception e) {
       Log.e("writeNdefMessage", e.getMessage());
     }
     return false;
   }

7. Implement the isNfcIntent method:

   boolean isNfcIntent(Intent intent) {
     return intent.hasExtra(NfcAdapter.EXTRA_TAG);
   }

8. Override the onNewIntent method using the following code:

   @Override
   protected void onNewIntent(Intent intent) {
     try {
       if (isNfcIntent(intent)) {
         NdefRecord ndefEmptyRecord = new NdefRecord(NdefRecord.TNF_EMPTY, new byte[]{}, new byte[]{}, new byte[]{});
         NdefMessage ndefMessage = new NdefMessage(new NdefRecord[] { ndefEmptyRecord });
         Tag tag = intent.getParcelableExtra(NfcAdapter.EXTRA_TAG);
         if (writeNdefMessage(tag, ndefMessage)) {
           Toast.makeText(this, "Tag written!", Toast.LENGTH_SHORT).show();
         } else {
           Toast.makeText(this, "Failed to write tag", Toast.LENGTH_SHORT).show();
         }
       }
     } catch (Exception e) {
       Log.e("onNewIntent", e.getMessage());
     }
   }

9. Override the onPause method and insert the following code:

   @Override
   protected void onPause() {
     super.onPause();
     nfcAdapter.disableForegroundDispatch(this);
   }

10. Open the NFC Simulator tool and simulate a few tags. The tag should be marked as modified, as shown in the following screenshot.

In the NFC Simulator, a tag whose name ends with _LOCKED is not writable, so we won't be able to write any content to it and a "Failed to write tag" toast will appear.

How it works...

NFC intents carry an extra value with them that is a virtual representation of the tag and can be obtained using the NfcAdapter.EXTRA_TAG key. We can get information about the tag, such as the tag ID and its content type, through this object. In the onNewIntent method, we retrieve the tag instance and then use other classes provided by Android to easily read, write, and retrieve even more information about the tag. These classes are as follows:

  - android.nfc.tech.Ndef: This class provides methods to retrieve and modify the NdefMessage object on a tag
  - android.nfc.tech.NdefFormatable: This class provides methods to format tags that are capable of being formatted as NDEF

The first thing we need to do while writing a tag is to call the get(Tag tag) method of the Ndef class, which returns an instance of the same class. Then, we need to open a connection with the tag by calling the connect() method. With an open connection, we can write an NDEF message to the tag by calling the writeNdefMessage(NdefMessage msg) method. Checking whether the tag is writable or not is always a good practice to prevent unwanted exceptions; we can do this by calling the isWritable() method. Note that this method may not account for physical write protection. When everything is done, we call the close() method to release the previously opened connection.
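As a quick aside, the same Ndef class also lets you read a tag's current content, which is handy for verifying that a write actually took effect. The following is a minimal sketch of such a check; the readNdefMessage helper is not part of the original recipe, and it assumes the same Tag instance obtained from the NFC intent as above:

   private NdefMessage readNdefMessage(Tag tag) {
     try {
       // Ndef.get returns null if the tag is not NDEF formatted
       Ndef ndef = Ndef.get(tag);
       if (ndef != null) {
         ndef.connect();
         // getNdefMessage returns the message currently stored on the tag
         NdefMessage message = ndef.getNdefMessage();
         ndef.close();
         return message;
       }
     } catch (Exception e) {
       Log.e("readNdefMessage", e.getMessage());
     }
     return null;
   }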
If the get(Tag tag) method returns null, it means that the tag is not formatted as per the NDEF format, and we should try to format it correctly. To format a tag with the NDEF format, we use the NdefFormatable class in the same way as we did with the Ndef class. However, in this case, we want to format the tag and write a message, which is achieved by calling the format(NdefMessage firstMessage) method. So, we should call the get(Tag tag) method, open a connection by calling connect(), format the tag and write the message by calling the format(NdefMessage firstMessage) method, and finally close the connection with the close() method. If the NdefFormatable get(Tag tag) method returns null, it means that the Android NFC API cannot automatically format the tag to NDEF.

An NDEF message is composed of several NDEF records. Each of these records is composed of four key properties:

  - Type Name Format (TNF): This property defines how the type field should be interpreted
  - Record Type Definition (RTD): This property is used together with the TNF to help Android create the correct NDEF message and trigger the corresponding intent
  - Id: This property lets you define a custom identifier for the record
  - Payload: This property contains the content that will be transported in the record

Using combinations of the TNF and the RTD, we can create several different NDEF records to hold our data and even create our own custom types. In this recipe, we created an empty record.

The main TNF property values of a record are as follows:

  - TNF_ABSOLUTE_URI: This is a URI-type field
  - TNF_WELL_KNOWN: This is an NFC Forum-defined URN
  - TNF_EXTERNAL_TYPE: This is a URN-type field
  - TNF_MIME_MEDIA: This is a MIME type based on the type specified

The main RTD property values of a record are:

  - RTD_URI: This is the URI based on the payload
  - RTD_TEXT: This is the NFC Forum-defined text record type

Writing a URI-formatted record

A URI is probably the most common content written to NFC tags. It allows you to share a website, an online service, or a link to online content. This can be used, for example, in advertising and marketing.

How to do it...

We are going to create an application that writes URI records to a tag by performing the following steps. The URI will be hardcoded and will point to the Packt Publishing website.

1. Open Eclipse and create a new Android application project named NfcBookCh3Example2. Make sure the AndroidManifest.xml file is configured correctly. Set the minimum SDK version to 14:

   <uses-sdk android:minSdkVersion="14" />

2. Implement the enableForegroundDispatch, formatTag, writeNdefMessage, and isNfcIntent methods from the previous recipe (steps 3, 5, 6, and 7).
3. Add the following class member and instantiate it in the onCreate method:

   private NfcAdapter nfcAdapter;

   protected void onCreate(Bundle savedInstanceState) {
     ...
     nfcAdapter = NfcAdapter.getDefaultAdapter(this);
   }

4. Override the onNewIntent method and place the following code in it:

   @Override
   protected void onNewIntent(Intent intent) {
     try {
       if (isNfcIntent(intent)) {
         NdefRecord uriRecord = NdefRecord.createUri("http://www.packtpub.com");
         NdefMessage ndefMessage = new NdefMessage(new NdefRecord[] { uriRecord });
         Tag tag = intent.getParcelableExtra(NfcAdapter.EXTRA_TAG);
         boolean writeResult = writeNdefMessage(tag, ndefMessage);
         if (writeResult) {
           Toast.makeText(this, "Tag written!", Toast.LENGTH_SHORT).show();
         } else {
           Toast.makeText(this, "Tag write failed!", Toast.LENGTH_SHORT).show();
         }
       }
     } catch (Exception e) {
       Log.e("onNewIntent", e.getMessage());
     }
     super.onNewIntent(intent);
   }

5. Run the application. Tap a tag with your phone or simulate a tap in the NFC Simulator. A "Tag written!" toast should appear.

How it works...

URIs are perfect content for NFC tags because, with a relatively small amount of content, we can send users to richer and more complete resources. These records are the easiest to create: we simply call the NdefRecord.createUri method and pass the URI as the first parameter.

URIs are not necessarily URLs for a website. We can use other URIs that are well known in Android, such as the following:

  - tel:+000 000 000 000
  - sms:+000 000 000 000

If we write the tel: URI syntax, the user will be prompted to initiate a phone call.

We can always create a URI record the hard way, without using the createUri method:

   public NdefRecord createUriRecord(String uri) {
     NdefRecord rtdUriRecord = null;
     try {
       byte[] uriField;
       uriField = uri.getBytes("UTF-8");
       byte[] payload = new byte[uriField.length + 1]; // +1 for the URI prefix
       payload[0] = 0x00; // prefixes the URI
       System.arraycopy(uriField, 0, payload, 1, uriField.length);
       rtdUriRecord = new NdefRecord(NdefRecord.TNF_WELL_KNOWN, NdefRecord.RTD_URI, new byte[0], payload);
     } catch (UnsupportedEncodingException e) {
       Log.e("createUriRecord", e.getMessage());
     }
     return rtdUriRecord;
   }

The first byte in the payload indicates which prefix should be used with the URI. This way, we don't need to write the whole URI to the tag, which saves some tag space. The following list describes the recognized prefixes:

   0x00  No prepending is done
   0x01  http://www.
   0x02  https://www.
   0x03  http://
   0x04  https://
   0x05  tel:
   0x06  mailto:
   0x07  ftp://anonymous:anonymous@
   0x08  ftp://ftp.
   0x09  ftps://
   0x0A  sftp://
   0x0B  smb://
   0x0C  nfs://
   0x0D  ftp://
   0x0E  dav://
   0x0F  news:
   0x10  telnet://
   0x11  imap:
   0x12  rtsp://
   0x13  urn:
   0x14  pop:
   0x15  sip:
   0x16  sips:
   0x17  tftp:
   0x18  btspp://
   0x19  btl2cap://
   0x1A  btgoep://
   0x1B  tcpobex://
   0x1C  irdaobex://
   0x1D  file://
   0x1E  urn:epc:id:
   0x1F  urn:epc:tag:
   0x20  urn:epc:pat:
   0x21  urn:epc:raw:
   0x22  urn:epc:
   0x23  urn:nfc:
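To illustrate how the prefix byte saves space, here is a minimal sketch (not part of the original recipe) that builds a tel: record by hand: the 0x05 prefix byte stands in for the literal "tel:" string, so only the phone number itself goes into the payload. The createTelRecord helper name and the sample number are made up for this example, and the method assumes the same imports as the createUriRecord snippet above:

   public NdefRecord createTelRecord(String phoneNumber) {
     try {
       // e.g. phoneNumber = "+000000000000"; "tel:" is encoded as the single byte 0x05
       byte[] number = phoneNumber.getBytes("UTF-8");
       byte[] payload = new byte[number.length + 1];
       payload[0] = 0x05; // prefix byte meaning "tel:"
       System.arraycopy(number, 0, payload, 1, number.length);
       return new NdefRecord(NdefRecord.TNF_WELL_KNOWN, NdefRecord.RTD_URI, new byte[0], payload);
     } catch (UnsupportedEncodingException e) {
       Log.e("createTelRecord", e.getMessage());
       return null;
     }
   }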