
How-To Tutorials - Android Programming


Prerequisites for a Map Application

Packt
16 Sep 2015
10 min read
In this article by Raj Amal, author of the book Learning Android Google Maps, we will cover the following topics:

- Generating an SHA1 fingerprint in Windows, Linux, and Mac OS X
- Registering our application in the Google Developer Console
- Configuring Google Play services with our application
- Adding permissions and defining an API key

Generating the SHA1 fingerprint

Let's learn how to generate the SHA1 fingerprint on each platform, one by one.

Windows

The keytool utility usually comes with the JDK package, and we use it to generate the SHA1 fingerprint. Navigate to the bin directory in your default JDK installation location, which is what you configured in the JAVA_HOME variable, for example, C:\Program Files\Java\jdk1.7.0_71. Then, navigate to File | Open command prompt. Now, the command prompt window will open. Enter the following command, and then hit the Enter key:

keytool -list -v -keystore "%USERPROFILE%\.android\debug.keystore" -alias androiddebugkey -storepass android -keypass android

You will see output similar to what is shown here:

Valid from: Sun Nov 02 16:49:26 IST 2014 until: Tue Oct 25 16:49:26 IST 2044
Certificate fingerprints:
MD5: 55:66:D0:61:60:4D:66:B3:69:39:23:DB:84:15:AE:17
SHA1: C9:44:2E:76:C4:C2:B7:64:79:78:46:FD:9A:83:B7:90:6D:75:94:33

In the preceding output, note down the SHA1 value, which is required to register our application with the Google Developer Console.

Linux

We are going to obtain the SHA1 fingerprint from the debug.keystore file, which is present in the .android folder in your home directory. If you installed Java directly from a PPA, open the terminal and enter the following command:

keytool -list -v -keystore ~/.android/debug.keystore -alias androiddebugkey -storepass android -keypass android

This will return output similar to the one we obtained in Windows. Note down the SHA1 fingerprint, which we will use later.

If you installed Java manually, you'll need to run keytool from its installation location. You can export the Java JDK path as follows:

export JAVA_HOME={PATH to JDK}

After exporting the path, run keytool as follows:

$JAVA_HOME/bin/keytool -list -v -keystore ~/.android/debug.keystore -alias androiddebugkey -storepass android -keypass android

Mac OS X

Generating the SHA1 fingerprint in Mac OS X is similar to what you performed in Linux. Open the terminal and enter the following command. It will show output similar to what we obtained in Linux. Note down the SHA1 fingerprint, which we will use later:

keytool -list -v -keystore ~/.android/debug.keystore -alias androiddebugkey -storepass android -keypass android

Registering your application with the Google Developer Console

This is one of the most important steps in our process. Our application will not function without an API key obtained from the Google Developer Console. Follow these steps one by one to obtain the API key:

1. Open the Google Developer Console by visiting https://console.developers.google.com and click on the CREATE PROJECT button. A new dialog box appears. Give your project a name and a unique project ID. Then, click on Create.
2. As soon as your project is created, you will be redirected to the Project dashboard. On the left-hand side, under the APIs & auth section, select APIs.
3. Then, scroll down and enable Google Maps Android API v2.
4. Next, under the same APIs & auth section, select Credentials.
5. Select Create new Key under Public API access, and then select Android key in the dialog that follows.
6. In the next window, enter the SHA1 fingerprint we noted in the previous section, followed by a semicolon and the package name of the Android application we wish to register. For example, my SHA1 fingerprint value is C9:44:2E:76:C4:C2:B7:64:79:78:46:FD:9A:83:B7:90:6D:75:94:33, and the package name of the app I wish to create is com.raj.map; so, I need to enter the following:

C9:44:2E:76:C4:C2:B7:64:79:78:46:FD:9A:83:B7:90:6D:75:94:33;com.raj.map

7. Finally, click on Create. Now our Android application is registered with the Google Developer Console, and it will display a screen containing the API key. Note down the API key from the screen, which will be similar to this:

AIzaSyAdJdnEG5vfo925VV2T9sNrPQ_rGgIGnEU

Configuring Google Play services

Google Play services includes the classes required for our map application, so it needs to be set up properly. The setup differs between Eclipse with the ADT plugin and Gradle-based Android Studio. Let's see how to configure Google Play services for each separately; it is relatively simple.

Android Studio

Configuring Google Play services with Android Studio is very simple. You need to add a line of code to your build.gradle file, which contains the Gradle build script required to build our project. There are two build.gradle files; you must add the code to the inner app module's build.gradle file, which contains our app module's configuration. Add the following line to the dependencies section of that Gradle build file:

compile 'com.google.android.gms:play-services:7.5.0'

The structure should be similar to the following code:

dependencies {
    compile 'com.google.android.gms:play-services:7.5.0'
    compile 'com.android.support:appcompat-v7:21.0.3'
}

The 7.5.0 in the code is the version number of Google Play services. Change the version number according to your current version; it can be found in the values.xml file present in the res/values directory of the Google Play services library project. The newest version of Google Play services can be found at https://developers.google.com/android/guides/setup.

That's it. Now resync your project. You can sync by navigating to Tools | Android | Sync Project with Gradle Files. Google Play services will now be integrated with your project.
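For reference, here is a minimal sketch of what the complete app module build.gradle might look like at this point. The compileSdkVersion, minSdkVersion, and targetSdkVersion values are illustrative assumptions, not prescribed by the book, and the application ID reuses the example package name from the registration step:

apply plugin: 'com.android.application'

android {
    compileSdkVersion 22
    buildToolsVersion "22.0.1"

    defaultConfig {
        // Example values only; use your own application ID and SDK levels
        applicationId "com.raj.map"
        minSdkVersion 14
        targetSdkVersion 22
        versionCode 1
        versionName "1.0"
    }
}

dependencies {
    compile 'com.google.android.gms:play-services:7.5.0'
    compile 'com.android.support:appcompat-v7:21.0.3'
}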
Eclipse

Let's take a look at how to configure Google Play services in Eclipse with the ADT plugin. First, we need to import Google Play services into the workspace. Navigate to File | Import, then to Android | Existing Android Code Into Workspace, and click on Next. In the next window, browse to the sdk/extras/google/google_play_services/libproject/google-play-services_lib directory. Finally, click on Finish. Now, google-play-services_lib will be added to your workspace.

Next, let's take a look at how to configure Google Play services with our application project. Select your project, right-click on it, and select Properties. In the Library section, click on Add, choose google-play-services_lib, and then click on OK. Now, google-play-services_lib will be added as a library to our application project.

In the next section, we will see how to configure the API key and add the permissions that will help us to deploy our application.

Adding permissions and defining the API key

The permissions and the API key must be defined in the AndroidManifest.xml file, which provides essential information about the application to the operating system. The OpenGL ES version, which is required to render the map, and the Google Play services version must also be specified in the manifest file.

Adding permissions

Four permissions are required for our map application to work properly. They should be added inside the <manifest> element. The four permissions are as follows:

- INTERNET
- ACCESS_NETWORK_STATE
- WRITE_EXTERNAL_STORAGE
- READ_GSERVICES

Let's take a look at what these permissions are for.

INTERNET: This permission is required for our application to gain access to the Internet. Since Google Maps relies on real-time Internet access, it is essential.

ACCESS_NETWORK_STATE: This permission gives information about a network and whether we are connected to a particular network or not.

WRITE_EXTERNAL_STORAGE: This permission is required to write data to external storage. In our application, it is required to cache map data to external storage.

READ_GSERVICES: This permission allows you to read Google services.

The permissions are added to AndroidManifest.xml as follows:

<uses-permission android:name="android.permission.INTERNET"/>
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE"/>
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
<uses-permission android:name="com.google.android.providers.gsf.permission.READ_GSERVICES" />

There are some more permissions that are currently not required.

Specifying the Google Play services version

The Google Play services version must be specified in the manifest file for maps to function. It must be within the <application> element. Add the following code to AndroidManifest.xml:

<meta-data
    android:name="com.google.android.gms.version"
    android:value="@integer/google_play_services_version" />

Specifying OpenGL ES version 2

Android Google Maps uses OpenGL to render the map and will not work on devices that do not support version 2 of OpenGL. Hence, it is necessary to specify the version in the manifest file. It must be added within the <manifest> element, similar to the permissions. Add the following code to AndroidManifest.xml:

<uses-feature
    android:glEsVersion="0x00020000"
    android:required="true"/>

The preceding code specifies that version 2 of OpenGL is required for the functioning of our application.

Defining the API key

The Google Maps API key is required to provide authorization to the Google Maps service. It must be specified within the <application> element. Add the following code to AndroidManifest.xml:

<meta-data
    android:name="com.google.android.maps.v2.API_KEY"
    android:value="API_KEY"/>

The API_KEY value must be replaced with the API key we noted earlier from the Google Developer Console.
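As an optional sanity check, not part of the original text, you can read these manifest entries back at runtime to confirm they are in place; the "MapSetup" log tag here is just an example:

try {
    ApplicationInfo info = getPackageManager().getApplicationInfo(
            getPackageName(), PackageManager.GET_META_DATA);
    String apiKey = info.metaData.getString("com.google.android.maps.v2.API_KEY");
    int gmsVersion = info.metaData.getInt("com.google.android.gms.version");
    Log.d("MapSetup", "API key present: " + (apiKey != null)
            + ", Play services version resource: " + gmsVersion);
} catch (PackageManager.NameNotFoundException e) {
    // Should not happen when querying our own package
    e.printStackTrace();
}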
The complete AndroidManifest structure, after adding the permissions, specifying the OpenGL and Google Play services versions, and defining the API key, is as follows:

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.raj.sampleapplication"
    android:versionCode="1"
    android:versionName="1.0" >

    <uses-feature
        android:glEsVersion="0x00020000"
        android:required="true"/>

    <uses-permission android:name="android.permission.INTERNET"/>
    <uses-permission android:name="android.permission.ACCESS_NETWORK_STATE"/>
    <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
    <uses-permission android:name="com.google.android.providers.gsf.permission.READ_GSERVICES" />

    <application>
        <meta-data
            android:name="com.google.android.gms.version"
            android:value="@integer/google_play_services_version" />
        <meta-data
            android:name="com.google.android.maps.v2.API_KEY"
            android:value="AIzaSyBVMWTLk4uKcXSHBJTzrxsrPNSjfL18lk0"/>
    </application>
</manifest>

Summary

In this article, we learned how to generate the SHA1 fingerprint on different platforms, register our application in the Google Developer Console, and generate an API key. We also configured Google Play services in Android Studio and Eclipse, and added the permissions and other data to the manifest file that are essential to create a map application.

Resources for Article:

Further resources on this subject:
- Testing with the Android SDK [article]
- Signing an application in Android using Maven [article]
- Code Sharing Between iOS and Android [article]


Tracking Objects in Videos

Packt
12 Aug 2015
13 min read
In this article by Salil Kapur and Nisarg Thakkar, authors of the book Mastering OpenCV Android Application Programming, we will look at the broader aspects of object tracking in videos. Object tracking is one of the most important applications of computer vision. It can be used for many applications, some of which are as follows:

- Human-computer interaction: We might want to track the position of a person's finger and use its motion to control the cursor on our machines
- Surveillance: Street cameras can capture pedestrians' motions that can be tracked to detect suspicious activities
- Video stabilization and compression
- Statistics in sports: By tracking a player's movement in a game of football, we can provide statistics such as distance travelled, heat maps, and so on

In this article, you will learn the following topics:

- Optical flow
- Image pyramids

(For more resources related to this topic, see here.)

Optical flow

Optical flow is an algorithm that detects the pattern of the motion of objects, or edges, between consecutive frames in a video. This motion may be caused by the motion of the object or the motion of the camera. Optical flow is a vector that depicts the motion of a point from the first frame to the second. The optical flow algorithm works under two basic assumptions:

- The pixel intensities are almost constant between consecutive frames
- The neighboring pixels have the same motion as the anchor pixel

We can represent the intensity of a pixel in any frame by f(x,y,t). Here, the parameter t represents the frame in a video. Let's assume that, in the next dt time, the pixel moves by (dx,dy). Since we have assumed that the intensity doesn't change in consecutive frames, we can say:

f(x,y,t) = f(x + dx, y + dy, t + dt)

Now we take the Taylor series expansion of the RHS in the preceding equation:

f(x + dx, y + dy, t + dt) = f(x,y,t) + f_x dx + f_y dy + f_t dt

Cancelling the common term, we get:

f_x dx + f_y dy + f_t dt = 0

where f_x = df/dx, f_y = df/dy, and f_t = df/dt. Dividing both sides of the equation by dt, we get:

f_x u + f_y v + f_t = 0

where u = dx/dt and v = dy/dt. This equation is called the optical flow equation. Rearranging the equation, we get:

f_x u + f_y v = -f_t

We can see that this represents the equation of a line in the (u,v) plane. However, with only one equation available and two unknowns, this problem is under-constrained at the moment.

The Horn and Schunck method

By taking into account our assumptions, we get the following energy function to minimize:

E = integral of [ (f_x u + f_y v + f_t)^2 + a^2 (|grad u|^2 + |grad v|^2) ] over the image

We can say that the first term will be small due to our assumption that the brightness is constant between consecutive frames, so the square of this term will be even smaller. The second term corresponds to the assumption that the neighboring pixels have similar motion to the anchor pixel. We need to minimize the preceding equation. For this, we differentiate it with respect to u and v, and get the following equations:

f_x (f_x u + f_y v + f_t) = a^2 lap(u)
f_y (f_x u + f_y v + f_t) = a^2 lap(v)

Here, lap(u) and lap(v) are the Laplacians of u and v, respectively.

The Lucas and Kanade method

We start off with the optical flow equation that we derived earlier and notice that it is under-constrained, as it has one equation and two variables:

f_x u + f_y v = -f_t

To overcome this problem, we make use of the assumption that pixels in a 3x3 neighborhood have the same optical flow. For the nine pixels q1, ..., q9 in the neighborhood, this gives:

f_x(q1) u + f_y(q1) v = -f_t(q1)
f_x(q2) u + f_y(q2) v = -f_t(q2)
...
f_x(q9) u + f_y(q9) v = -f_t(q9)

We can rewrite these equations in the form of matrices, as shown here:

A U = b

Where A is the matrix whose rows are [ f_x(q_i)  f_y(q_i) ], U = [ u ; v ], and b is the column vector of the -f_t(q_i) values. As we can see, A is a 9x2 matrix, U is a 2x1 matrix, and b is a 9x1 matrix. Ideally, to solve for U, we would just multiply both sides of the equation by the inverse of A. However, this is not possible, as we can only take the inverse of square matrices.
Thus, we try to transform A into a square matrix by first multiplying the equation by A^T on both sides:

A^T A U = A^T b

Now A^T A is a square matrix of dimension 2x2. Hence, we can take its inverse:

U = (A^T A)^-1 A^T b

This method of multiplying by the transpose and then taking an inverse is called the pseudo-inverse. The same equation can also be obtained by finding the minimum of the following sum of squared errors:

E = sum over the neighborhood of (f_x u + f_y v + f_t)^2

According to the optical flow equation and our assumptions, this value should be equal to zero. Since the neighborhood pixels do not have exactly the same values as the anchor pixel, this value is very small. This method is called Least Square Error. To solve for the minimum, we differentiate this equation with respect to u and v, and equate the results to zero. We get the following equations:

sum of f_x (f_x u + f_y v + f_t) = 0
sum of f_y (f_x u + f_y v + f_t) = 0

Now we have two equations and two variables, so this system of equations can be solved. We rewrite the preceding equations as follows:

(sum of f_x^2) u + (sum of f_x f_y) v = -(sum of f_x f_t)
(sum of f_x f_y) u + (sum of f_y^2) v = -(sum of f_y f_t)

So, by arranging these equations in the form of a matrix, we get the same equation as obtained earlier:

[ sum f_x^2    sum f_x f_y ] [ u ]   [ -sum f_x f_t ]
[ sum f_x f_y  sum f_y^2   ] [ v ] = [ -sum f_y f_t ]

Since the matrix on the left is now a 2x2 matrix, it is possible to take an inverse. Solving for u and v, we get:

u = ( (sum f_x f_y)(sum f_y f_t) - (sum f_y^2)(sum f_x f_t) ) / ( (sum f_x^2)(sum f_y^2) - (sum f_x f_y)^2 )
v = ( (sum f_x f_y)(sum f_x f_t) - (sum f_x^2)(sum f_y f_t) ) / ( (sum f_x^2)(sum f_y^2) - (sum f_x f_y)^2 )

Now we have the values of all the sums of the f_x, f_y, and f_t products, so we can find the values of u and v for each pixel. When we implement this algorithm, it is observed that the optical flow is not very smooth near the edges of the objects. This is due to the brightness constraint not being satisfied. To overcome this situation, we use image pyramids.
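To make the closed-form solution concrete, here is a small illustrative Java sketch, not taken from the book, that solves the 2x2 Lucas-Kanade system for a single pixel, given the sums of the gradient products over its neighborhood:

// Solves the 2x2 Lucas-Kanade system for one pixel.
// sumFx2 is the sum of f_x * f_x over the neighborhood, sumFxFy the
// sum of f_x * f_y, and so on, matching the equations above.
static double[] solveLucasKanade(double sumFx2, double sumFy2, double sumFxFy,
                                 double sumFxFt, double sumFyFt) {
    double det = sumFx2 * sumFy2 - sumFxFy * sumFxFy;
    if (Math.abs(det) < 1e-9) {
        // A^T A is (nearly) singular: the neighborhood has too little
        // texture for the motion to be recovered.
        return null;
    }
    double u = (sumFxFy * sumFyFt - sumFy2 * sumFxFt) / det;
    double v = (sumFxFy * sumFxFt - sumFx2 * sumFyFt) / det;
    return new double[]{u, v};
}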
Checking out the optical flow on Android

To see the optical flow in action on Android, we will create a grid of points over a video feed from the camera, and then draw a line for each point that depicts its motion between frames, superimposed on the grid point. Before we begin, we will set up our project to use OpenCV and obtain the feed from the camera. We will process the frames to calculate the optical flow.

First, create a new project in Android Studio. We will set the activity name to MainActivity.java and the XML resource file to activity_main.xml. Second, we will give the app permission to access the camera. In the AndroidManifest.xml file, add the following line to the manifest tag:

<uses-permission android:name="android.permission.CAMERA" />

Make sure that your activity tag for MainActivity contains the following line as an attribute:

android:screenOrientation="landscape"

Our activity_main.xml file will contain a simple JavaCameraView. This is a custom OpenCV-defined layout that enables us to access the camera frames and process them as normal Mat objects. The XML code is shown here:

<LinearLayout
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="horizontal">

    <org.opencv.android.JavaCameraView
        android:layout_width="fill_parent"
        android:layout_height="fill_parent"
        android:id="@+id/main_activity_surface_view" />

</LinearLayout>

Now, let's work on some Java code. First, we'll define some global variables that we will use later in the code:

private static final String TAG = "com.packtpub.masteringopencvandroid.chapter5.MainActivity";

private static final int VIEW_MODE_KLT_TRACKER = 0;
private static final int VIEW_MODE_OPTICAL_FLOW = 1;

private int mViewMode;
private Mat mRgba;
private Mat mIntermediateMat;
private Mat mGray;
private Mat mPrevGray;

MatOfPoint2f prevFeatures, nextFeatures;
MatOfPoint features;

MatOfByte status;
MatOfFloat err;

private MenuItem mItemPreviewOpticalFlow, mItemPreviewKLT;

private CameraBridgeViewBase mOpenCvCameraView;

We will need to create a callback function for OpenCV, like we did earlier. In addition to the code we used earlier, we will also enable CameraView to capture frames for processing:

private BaseLoaderCallback mLoaderCallback = new BaseLoaderCallback(this) {
    @Override
    public void onManagerConnected(int status) {
        switch (status) {
            case LoaderCallbackInterface.SUCCESS:
            {
                Log.i(TAG, "OpenCV loaded successfully");
                mOpenCvCameraView.enableView();
            } break;
            default:
            {
                super.onManagerConnected(status);
            } break;
        }
    }
};

We will now check whether the OpenCV Manager, which contains the required libraries, is installed on the phone. In the onResume function, add the following line of code:

OpenCVLoader.initAsync(OpenCVLoader.OPENCV_VERSION_2_4_10, this, mLoaderCallback);

In the onCreate() function, add the following line before calling setContentView to prevent the screen from turning off while using the app:

getWindow().addFlags(WindowManager.LayoutParams.FLAG_KEEP_SCREEN_ON);

We will now initialize our JavaCameraView object. Add the following lines after setContentView has been called:

mOpenCvCameraView = (CameraBridgeViewBase) findViewById(R.id.main_activity_surface_view);
mOpenCvCameraView.setCvCameraViewListener(this);

Notice that we called setCvCameraViewListener with the this parameter. For this, we need to make our activity implement the CvCameraViewListener2 interface. So, your class definition for the MainActivity class should look like the following code:

public class MainActivity extends Activity implements CvCameraViewListener2

We will add a menu to this activity to toggle between the different examples in this article. Add the following lines to the onCreateOptionsMenu function:

mItemPreviewKLT = menu.add("KLT Tracker");
mItemPreviewOpticalFlow = menu.add("Optical Flow");

We will now add some actions to the menu items. In the onOptionsItemSelected function, add the following lines:

if (item == mItemPreviewOpticalFlow) {
    mViewMode = VIEW_MODE_OPTICAL_FLOW;
    resetVars();
} else if (item == mItemPreviewKLT){
    mViewMode = VIEW_MODE_KLT_TRACKER;
    resetVars();
}

return true;

We used a resetVars function to reset all the Mat objects.
It has been defined as follows:

private void resetVars(){
    mPrevGray = new Mat(mGray.rows(), mGray.cols(), CvType.CV_8UC1);
    features = new MatOfPoint();
    prevFeatures = new MatOfPoint2f();
    nextFeatures = new MatOfPoint2f();
    status = new MatOfByte();
    err = new MatOfFloat();
}

We will also add the code to make sure that the camera is released for use by other applications whenever our application is suspended or killed. So, add the following snippet of code to the onPause and onDestroy functions:

if (mOpenCvCameraView != null)
    mOpenCvCameraView.disableView();

After the OpenCV camera has been started, the onCameraViewStarted function is called, which is where we will add all our object initializations:

public void onCameraViewStarted(int width, int height) {
    mRgba = new Mat(height, width, CvType.CV_8UC4);
    mIntermediateMat = new Mat(height, width, CvType.CV_8UC4);
    mGray = new Mat(height, width, CvType.CV_8UC1);
    resetVars();
}

Similarly, the onCameraViewStopped function is called when we stop capturing frames. Here we will release all the objects we created when the view was started:

public void onCameraViewStopped() {
    mRgba.release();
    mGray.release();
    mIntermediateMat.release();
}

Now we will add the implementation to process each frame of the feed that we captured from the camera. OpenCV calls the onCameraFrame method for each frame, with the frame as a parameter. We will use the viewMode variable to distinguish between the optical flow and the KLT tracker, and have different case constructs for the two:

public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
    final int viewMode = mViewMode;
    switch (viewMode) {
        case VIEW_MODE_OPTICAL_FLOW:

We will use the gray() function to obtain a Mat object that contains the captured frame in a grayscale format. OpenCV also provides a similar function called rgba() to obtain a colored frame. Then we will check whether this is the first run. If it is, we will create and fill up a features array that stores the position of all the points in a grid, where we will compute the optical flow:

            mGray = inputFrame.gray();
            if(features.toArray().length==0){
                int rowStep = 50, colStep = 100;
                int nRows = mGray.rows()/rowStep, nCols = mGray.cols()/colStep;

                Point points[] = new Point[nRows*nCols];
                for(int i=0; i<nRows; i++){
                    for(int j=0; j<nCols; j++){
                        points[i*nCols+j]=new Point(j*colStep, i*rowStep);
                    }
                }

                features.fromArray(points);

                prevFeatures.fromList(features.toList());
                mPrevGray = mGray.clone();
                break;
            }

The mPrevGray object refers to the previous frame in a grayscale format. We copied the points to a prevFeatures object that we will use to calculate the optical flow, storing the corresponding points of the next frame in nextFeatures. All of the computation is carried out in the OpenCV-defined calcOpticalFlowPyrLK function.
This function takes in the grayscale version of the previous frame, the current grayscale frame, an object that contains the feature points whose optical flow needs to be calculated, and an object that will store the position of the corresponding points in the current frame:

            nextFeatures.fromArray(prevFeatures.toArray());
            Video.calcOpticalFlowPyrLK(mPrevGray, mGray,
                prevFeatures, nextFeatures, status, err);

Now we have the position of the grid of points and their position in the next frame as well. So, we will now draw a line that depicts the motion of each point on the grid:

            List<Point> prevList=features.toList(), nextList=nextFeatures.toList();
            Scalar color = new Scalar(255);

            for(int i = 0; i<prevList.size(); i++){
                Core.line(mGray, prevList.get(i), nextList.get(i), color);
            }

Before the loop ends, we have to copy the current frame to mPrevGray so that we can calculate the optical flow in the subsequent frames:

            mPrevGray = mGray.clone();
            break;
        default: mViewMode = VIEW_MODE_OPTICAL_FLOW;

After we end the switch case construct, we will return a Mat object. This is the image that will be displayed as an output to the user of the application. Since all our operations and processing were performed on the grayscale image, we will return that image:

    return mGray;

So, this is all about optical flow: the result is a set of lines drawn at various points in the camera feed, showing the motion at each point.

Image pyramids

Pyramids are multiple copies of the same image that differ in size. They are represented as layers; each level in the pyramid is obtained by reducing the rows and columns of the previous one by half. Thus, effectively, we make the image's size one quarter of its original size.

Pyramids intrinsically define two operations: reduce and expand. Reduce refers to a reduction in the image's size, whereas expand refers to an increase in its size. We will use the convention that lower levels in a pyramid mean downsized images and higher levels mean upsized images.

Gaussian pyramids

In the reduce operation, the equation that we use to successively find levels in pyramids, while using a 5x5 sliding window, is written as follows. Notice that the size of the image reduces to a quarter of its original size:

g_reduced(i, j) = sum over m = -2..2, n = -2..2 of w(m, n) * g(2i + m, 2j + n)

The elements of the weight kernel, w, should add up to 1. We use a 5x5 Gaussian kernel for this task. This operation is similar to convolution, with the exception that the resulting image doesn't have the same size as the original image.

The expand operation is the reverse process of reduce. We try to generate images of a larger size from images that belong to lower layers; thus, the resulting image is blurred and of a lower resolution. The equation we use to perform expansion is as follows, where the sum runs only over those m and n for which the source coordinates are integers:

g_expanded(i, j) = 4 * sum over m = -2..2, n = -2..2 of w(m, n) * g((i - m)/2, (j - n)/2)

The weight kernel, w, is the same as the one used to perform the reduce operation, and the weights are calculated using the Gaussian function to perform Gaussian blur.
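As a quick illustration, and not part of the original text, the reduce and expand operations are available in OpenCV's Java bindings as Imgproc.pyrDown() and Imgproc.pyrUp(), which apply the 5x5 Gaussian kernel described above:

// Build two levels of a Gaussian pyramid from the grayscale frame,
// then expand the smallest level back up one step.
Mat level0 = mGray;              // base of the pyramid
Mat level1 = new Mat();
Mat level2 = new Mat();
Imgproc.pyrDown(level0, level1); // reduce: half the rows and columns
Imgproc.pyrDown(level1, level2); // reduce again

Mat expanded = new Mat();
Imgproc.pyrUp(level2, expanded); // expand: upsized, but blurred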
Summary

In this article, we have seen how to detect local and global motion in a video, and how we can track objects. We have also learned about Gaussian pyramids, and how they can be used to improve the performance of some computer vision tasks.

Resources for Article:

Further resources on this subject:
- New functionality in OpenCV 3.0 [article]
- Seeing a Heartbeat with a Motion Amplifying Camera [article]
- Camera Calibration [article]


Sending and Syncing Data

Packt
10 Aug 2015
4 min read
This article, by Steven F. Daniel, author of the book Android Wearable Programming, will provide you with the background and understanding of how you can effectively build applications that communicate between the Android handheld device and the Android wearable.

Android Wear comes with a number of APIs that help make communicating between the handheld and the wearable a breeze. We will be learning the differences between using the MessageApi, which is sometimes referred to as a "fire and forget" type of message; the DataApi, which supports syncing of data between a handheld and a wearable; and the NodeApi, which handles events related to each of the local and connected device nodes.

(For more resources related to this topic, see here.)

Creating a wearable send and receive application

In this section, we will take a look at how to create an Android wearable application that will send an image and a message, and display these on our wearable device. In the next sections, we will take a look at the steps required to send data to the Android wearable using the DataApi, NodeApi, and MessageApi. Firstly, create a new project in Android Studio by following these simple steps:

1. Launch Android Studio, and then click on the File | New Project menu option.
2. Next, enter SendReceiveData in the Application name field. Then, provide a name for the Company Domain field. Now, choose the Project location and select where you would like to save your application code. Click on the Next button to proceed to the next step.
3. Next, we will need to specify the form factors for our phone/tablet and Android Wear devices on which our application will run. On this screen, we will need to choose the minimum SDK version for our phone/tablet and Android Wear. Click on the Phone and Tablet option and choose API 19: Android 4.4 (KitKat) for Minimum SDK. Click on the Wear option and choose API 21: Android 5.0 (Lollipop) for Minimum SDK. Click on the Next button to proceed to the next step.
4. In our next step, we will need to add a Blank Activity to our application project for the mobile section of our app. From the Add an activity to Mobile screen, choose the Blank Activity option from the list of activities shown and click on the Next button to proceed to the next step.
5. Next, we need to customize the properties for the Blank Activity so that it can be used by our application. Here we will need to specify the name of our activity, the layout information, the title, and the menu resource file. From the Customize the Activity screen, enter MobileActivity for Activity Name and click on the Next button to proceed to the next step in the wizard.
6. In the next step, we will need to add a Blank Activity to our application project for the Android wearable section of our app. From the Add an activity to Wear screen, choose the Blank Wear Activity option from the list of activities shown and click on the Next button to proceed to the next step.
7. Next, we need to customize the properties for the Blank Wear Activity so that our Android wearable can use it. Here we will need to specify the name of our activity and the layout information. From the Customize the Activity screen, enter WearActivity for Activity Name and click on the Next button to proceed to the next step in the wizard.
8. Finally, click on the Finish button and the wizard will generate your project; after a few moments, the Android Studio window will appear with your project displayed.
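Although the full implementation is covered later in the book, a minimal sketch of the "fire and forget" MessageApi pattern described above might look like the following. This is an illustration under stated assumptions, not the book's code: it assumes a GoogleApiClient built with Wearable.API, it must run off the main thread because of the blocking calls, and the "/my-message" path is just an example:

GoogleApiClient client = new GoogleApiClient.Builder(context)
        .addApi(Wearable.API)
        .build();
client.blockingConnect(30, TimeUnit.SECONDS);

// Find every connected node and send it a small payload
NodeApi.GetConnectedNodesResult nodes =
        Wearable.NodeApi.getConnectedNodes(client).await();
for (Node node : nodes.getNodes()) {
    Wearable.MessageApi.sendMessage(
            client, node.getId(), "/my-message", "Hello wear".getBytes());
}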
Summary

In this article, we learned about three new APIs, the DataApi, NodeApi, and MessageApi, and how we can use them and their associated methods to transmit information between the handheld mobile and the wearable. If, for whatever reason, the connected wearable node gets disconnected from the paired handheld device, the DataApi class is smart enough to try sending again automatically once the connection is reestablished.

Resources for Article:

Further resources on this subject:
- Speeding up Gradle builds for Android [article]
- Saying Hello to Unity and Android [article]
- Testing with the Android SDK [article]


Detecting Touchscreen Gestures

Packt
06 Aug 2015
18 min read
In this article by Kyle Mew, author of the book Android 5 Programming by Example, we will learn how to:

- Add a GestureDetector to a view
- Add an OnTouchListener and an OnGestureListener
- Detect and refine fling gestures
- Use the DDMS Logcat to observe the MotionEvent class
- Edit the Logcat filter configuration
- Simplify code with a SimpleOnGestureListener
- Add a GestureDetector to an Activity
- Edit the Manifest to control launch behavior
- Hide UI elements
- Create a splash screen
- Lock screen orientation

(For more resources related to this topic, see here.)

Adding a GestureDetector to a view

Together, view.GestureDetector and view.View.OnTouchListener are all that are required to provide our ImageView with gesture functionality. The listener contains an onTouch() callback that relays each MotionEvent to the detector. We are going to program the large ImageView so that it can display a small gallery of related pictures that can be accessed by swiping left or right on the image. There are two steps to this task because, before we implement our gesture detector, we need to provide the data for it to work on.

Adding the gallery data

As this app is for demonstration and learning purposes, and so we can progress as quickly as possible, we will only provide extra images for one or two of the ancient sites in the project. Here is how it's done:

1. Open the Ancient Britain project.
2. Open the MainData.java file.
3. Add the following arrays:

static Integer[] hengeArray = {R.drawable.henge_large, R.drawable.henge_2, R.drawable.henge_3, R.drawable.henge_4};
static Integer[] horseArray = {};
static Integer[] wallArray = {R.drawable.wall_large, R.drawable.wall_2};
static Integer[] skaraArray = {};
static Integer[] towerArray = {};
static Integer[][] galleryArray = {hengeArray, horseArray, wallArray, skaraArray, towerArray};

4. Either download the project files from the Packt website or find four of your own images (around 640 x 480 px). Name them henge_2, henge_3, henge_4, and wall_2, and place them in your res/drawable directory.

This is all very straightforward, and the accompanying code allows you to have individual arrays of any length. This is all we need to add to our gallery data. Now, we need to code our GestureDetector and OnTouchListener.

Adding the GestureDetector

Along with the OnTouchListener that we will define for our ImageView, the GestureDetector has its own listeners. Here we will use GestureDetector.OnGestureListener to detect a fling gesture and collect the MotionEvents that describe it. Follow these steps to program your ImageView to respond to fling gestures:

1. Open the DetailActivity.java file.
2. Declare the following class fields:

private static final int MIN_DISTANCE = 150;
private static final int OFF_PATH = 100;
private static final int VELOCITY_THRESHOLD = 75;
private GestureDetector detector;
View.OnTouchListener listener;
private int ImageIndex;

3. In the onCreate() method, assign both the detector and listener like this:

detector = new GestureDetector(this, new GalleryGestureDetector());
listener = new View.OnTouchListener() {
    @Override
    public boolean onTouch(View v, MotionEvent event) {
        return detector.onTouchEvent(event);
    }
};

4. Beneath this, add the following line:

ImageIndex = 0;

5. Beneath the line detailImage = (ImageView) findViewById(R.id.detail_image);, add the following line:

detailImage.setOnTouchListener(listener);

6. Create the following inner class:

class GalleryGestureDetector implements GestureDetector.OnGestureListener {
}

7. Before dealing with the errors this generates, add the following field to the class:

private int item;
{
    item = MainActivity.currentItem;
}

8. Click anywhere on the line registering the error and press Alt + Enter. Then select Implement Methods, making sure that you have the Copy JavaDoc and Insert @Override boxes checked.

9. Complete the onDown() method like this:

@Override
public boolean onDown(MotionEvent e) {
    return true;
}

10. Fill in the onShowPress() method:

@Override
public void onShowPress(MotionEvent e) {
    detailImage.setElevation(4);
}

11. Then fill in the onFling() method:

@Override
public boolean onFling(MotionEvent event1, MotionEvent event2, float velocityX, float velocityY) {
    if (Math.abs(event1.getY() - event2.getY()) > OFF_PATH)
        return false;

    if (MainData.galleryArray[item].length != 0) {
        // Swipe left
        if (event1.getX() - event2.getX() > MIN_DISTANCE
                && Math.abs(velocityX) > VELOCITY_THRESHOLD) {
            ImageIndex++;
            if (ImageIndex == MainData.galleryArray[item].length)
                ImageIndex = 0;
            detailImage.setImageResource(MainData.galleryArray[item][ImageIndex]);
        } else {
            // Swipe right
            if (event2.getX() - event1.getX() > MIN_DISTANCE
                    && Math.abs(velocityX) > VELOCITY_THRESHOLD) {
                ImageIndex--;
                if (ImageIndex < 0)
                    ImageIndex = MainData.galleryArray[item].length - 1;
                detailImage.setImageResource(MainData.galleryArray[item][ImageIndex]);
            }
        }
    }
    detailImage.setElevation(0);
    return true;
}

12. Test the project on an emulator or handset.

The process of gesture detection in the preceding code begins when the OnTouchListener's onTouch() method is called. It then passes the MotionEvent to our gesture detector class, GalleryGestureDetector, which monitors motion events, sometimes stringing them together and timing them, until one of the recognized gestures is detected. At this point, we can enter our own code to control how our app responds, as we did here with the onDown(), onShowPress(), and onFling() callbacks. It is worth taking a quick look at these methods in turn.

It may seem, at first glance, that the onDown() method is redundant; after all, it's the fling gesture that we are trying to catch. In fact, overriding the onDown() method and returning true from it is essential in all gesture detections, as all gestures begin with an onDown() event.

The purpose of the onShowPress() method may also appear unclear, as it seems to do little more than onDown(). As the JavaDoc states, this method is handy for adding some form of feedback to the user, acknowledging that their touch has been received. The Material Design guidelines strongly recommend such feedback, and here we have raised the view's elevation slightly.
Without including our own code, the onFling() method will recognize almost any movement across the bounding view that ends in the user's finger being raised, regardless of direction or speed. We do not want very small or very slow motions to result in action; furthermore, we want to be able to differentiate between vertical and horizontal movement, as well as between left and right swipes. The MIN_DISTANCE and OFF_PATH constants are in pixels and VELOCITY_THRESHOLD is in pixels per second. These values will need tweaking according to the target device and personal preference.

The first MotionEvent argument in onFling() refers to the preceding onDown() event and, like any MotionEvent, its coordinates are available through its getX() and getY() methods. The MotionEvent class contains dozens of useful methods for querying various event properties; for example, getDownTime() returns the time in milliseconds since the current onDown() event.

In this example, we used GestureDetector.OnGestureListener to capture our gesture. However, the GestureDetector has three such nested classes, the other two being SimpleOnGestureListener and OnDoubleTapListener. SimpleOnGestureListener provides a more convenient way to detect gestures, as we only need to implement those methods that relate to the gestures we are interested in capturing. We will shortly edit our Activity so that it implements the SimpleOnGestureListener instead, allowing us to tidy our code and remove the four callbacks that we do not need. The reason for taking this detour, rather than applying the simple listener to begin with, was to see all of the gestures available to us through a gesture listener and to demonstrate how useful JavaDoc comments can be, particularly if we are new to the framework.

Another very handy tool is the Dalvik Debug Monitor Server (DDMS), which allows us to see what is going on inside our apps while they are running. The workings of our gesture listener are a good place to use it, as most of its methods operate invisibly.

Viewing gesture activity with DDMS

To view the workings of our OnGestureListener with DDMS, we need to first create a tag to identify our messages and then a filter to view them. The following steps demonstrate how to do this:

1. Open the DetailActivity.java file.
2. Declare the following constant:

private static final String DEBUG_TAG = "tag";

3. Add the following line inside the onDown() method:

Log.d(DEBUG_TAG, "onDown");

4. Add the line Log.d(DEBUG_TAG, "onShowPress"); to the onShowPress() method, and do the same for each of our OnGestureListener methods.
5. Add the following lines to the appropriate clauses in onFling():

Log.d(DEBUG_TAG, "left");
Log.d(DEBUG_TAG, "right");

6. Open the Android DDMS pane from the Android tab at the bottom of the window or by pressing Alt + 6. If the Logcat is not visible, it can be opened with the icon to the right of the top-right drop-down menu.
7. Click on this drop-down menu, select Edit Filter Configuration, and complete the dialog using the tag we defined above.

You can now run the project on a handset or emulator and view, in the Logcat, which gestures are being triggered and how.
Your output should resemble the one here:

02-17 14:39:00.990 1430-1430/com.example.kyle.ancientbritain D/tag: onDown
02-17 14:39:01.039 1430-1430/com.example.kyle.ancientbritain D/tag: onSingleTapUp
02-17 14:39:03.503 1430-1430/com.example.kyle.ancientbritain D/tag: onDown
02-17 14:39:03.601 1430-1430/com.example.kyle.ancientbritain D/tag: onShowPress
02-17 14:39:04.101 1430-1430/com.example.kyle.ancientbritain D/tag: onLongPress
02-17 14:39:10.484 1430-1430/com.example.kyle.ancientbritain D/tag: onDown
02-17 14:39:10.541 1430-1430/com.example.kyle.ancientbritain D/tag: onScroll
02-17 14:39:11.091 1430-1430/com.example.kyle.ancientbritain D/tag: onScroll
02-17 14:39:11.232 1430-1430/com.example.kyle.ancientbritain D/tag: onFling
02-17 14:39:11.680 1430-1430/com.example.kyle.ancientbritain D/tag: right

DDMS is an invaluable tool when it comes to debugging our apps and seeing what is going on beneath the hood. Once a log tag has been defined in the code, we can create a filter for it so that we see only the messages we are interested in. The Log class contains several methods to report information based on its level of importance. We used Log.d, which stands for debug. All these methods work with the same two parameters: Log.[method](String tag, String message). The full list of these methods is as follows:

- Log.v: Verbose
- Log.d: Debug
- Log.i: Information
- Log.w: Warning
- Log.e: Error
- Log.wtf: Unexpected error

It is worth noting that most debug messages will be ignored during packaging for distribution, except for the verbose messages; thus, it is essential to remove them before your final build.

Having seen a little more of the inner workings of our gesture detector and listener, we can now strip our code of unused methods by implementing GestureDetector.SimpleOnGestureListener.

Implementing a SimpleOnGestureListener

It is very simple to convert our gesture detector from one class of listener to another. All we need to do is change the class declaration and delete the unwanted methods. To do this, perform the following steps:

1. Open the DetailActivity file.
2. Change the class declaration for our gesture detector class to the following:

class GalleryGestureDetector extends GestureDetector.SimpleOnGestureListener {

3. Delete the onShowPress(), onSingleTapUp(), onScroll(), and onLongPress() methods.

This is all you need to do to switch to the SimpleOnGestureListener. We have now successfully constructed and edited a gesture detector to allow the user to browse a series of images.

You will have noticed that there is no onDoubleTap() method in the gesture listener. Double-taps can, in fact, be handled with the third GestureDetector listener, OnDoubleTapListener, which operates in a very similar way to the other two. However, Google, in its UI guidelines, recommends that a long press should be used instead, whenever possible.
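If you do need double-taps, the following is a minimal sketch of what this might look like; it is not part of the original exercise, and the callback bodies are placeholders. Note that SimpleOnGestureListener already implements OnDoubleTapListener, so simply overriding onDoubleTap() in GalleryGestureDetector would also work:

detector.setOnDoubleTapListener(new GestureDetector.OnDoubleTapListener() {
    @Override
    public boolean onSingleTapConfirmed(MotionEvent e) {
        return false;
    }

    @Override
    public boolean onDoubleTap(MotionEvent e) {
        // React to the double-tap here, for example, by resetting
        // the gallery to its first image
        return true;
    }

    @Override
    public boolean onDoubleTapEvent(MotionEvent e) {
        return false;
    }
});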
Before moving on to multitouch events, we will take a look at how to attach a GestureDetector listener to an entire Activity by adding a splash screen to our project. In the process, we will also see how to create a full-screen Activity and how to edit the Manifest file so that our app launches with the splash screen.

Adding a GestureDetector to an Activity

The method we have employed so far allows us to attach a GestureDetector listener to any view or views and this, of course, applies to ViewGroups such as Layouts. There are times, however, when we may want to detect gestures applied to the whole screen. For this purpose, we will create a splash screen that can be dismissed with a long press.

There are two things we need to do before implementing the gesture detector: creating a layout, and editing the Manifest file so that the app launches with our splash screen.

Designing the splash screen layout

The main difference between processing gestures for a whole Activity and for an individual widget is that we do not need an OnTouchListener, as we can override the Activity's own onTouchEvent(). Here is how it is done:

1. Create a new Blank Activity from the Project Explorer context menu called SplashActivity.java.
2. The Activity wizard should have created an associated XML layout called activity_splash.xml. Open this and view it using the Text tab.
3. Remove all the padding properties from the root layout so that it looks similar to this:

<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context="com.example.kyle.ancientbritain.SplashActivity">

4. Here we will need an image to act as the background for our splash screen. If you have not downloaded the project files from the Packt website, find an image roughly of the size and aspect of your target device's screen, upload it to the project drawable folder, and call it splash. The file I used is 480 x 800 px.
5. Remove the TextView that the wizard placed inside the layout and replace it with this ImageView:

<ImageView
    android:id="@+id/splash_image"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:src="@drawable/splash"/>

6. Create a TextView beneath this, such as the following:

<TextView
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:layout_alignParentBottom="true"
    android:layout_centerHorizontal="true"
    android:layout_marginBottom="40dp"
    android:gravity="center_horizontal"
    android:textAppearance="?android:attr/textAppearanceLarge"
    android:textColor="#fffcfcbd"/>

7. Add the following text property:

android:text="Welcome to <b>Ancient Britain</b>\npress and hold\nanywhere on the screen\nto start"

To save time adding string resources to the strings.xml file, enter a hardcoded string such as the preceding one and heed the warning from the editor to have the string extracted for you.

There is nothing in this layout that we have not encountered before. We removed all the padding so that our splash image will fill the layout; however, you will see from the preview that this does not appear to be the case. We will deal with this in our Java code shortly, but we need to edit our Manifest first so that the app gets launched with our SplashActivity.

Editing the Manifest

It is very simple to configure the AndroidManifest file so that an app will get launched with whichever Activity we choose; the way it does so is with an intent. While we are editing the Manifest, we will also configure the display to fill the screen. Simply follow these steps:

1. Open the res/values-v21/styles.xml file and add the following style:

<style name="SplashTheme" parent="android:Theme.Material.NoActionBar.Fullscreen">
</style>

2. Open the AndroidManifest.xml file.
3. Cut-and-paste the <intent-filter> element from MainActivity to SplashActivity.
4. Include the following properties so that the entire <activity> node looks similar to this:

<activity
    android:name=".SplashActivity"
    android:theme="@style/SplashTheme"
    android:screenOrientation="portrait"
    android:configChanges="orientation|screenSize"
    android:label="Old UK" >
    <intent-filter>
        <action android:name="android.intent.action.MAIN" />
        <category android:name="android.intent.category.LAUNCHER" />
    </intent-filter>
</activity>

We have encountered themes and styles before and, here, we took advantage of a built-in theme designed for full-screen activities. In many cases, we might have designed a landscape layout here but, as is often the case with splash screens, we locked the orientation with the android:screenOrientation property.

The android:configChanges line is not actually needed here, but is included as it is useful to know about. Configuring an attribute such as this prevents the system from automatically reloading the Activity whenever the device is rotated or the screen size changes. Instead of the Activity restarting, the onConfigurationChanged() method is called. This was not needed here, as the screen size and orientation were taken care of in the previous lines of code; the line was only included as a point of interest.

Finally, we changed the value of android:label. You may have noticed that, depending on the screen size of the device you are using, the name of our app is not displayed in full on the home screen or apps drawer. In such cases, when you want to use a shortened name for your app, it can be inserted here.

With everything else in place, we can get on with adding our gesture detector. This is not dissimilar to the way we did it before but, this time, we will apply the detector to the whole screen and will be listening for a long press, rather than a fling.

Adding the GestureDetector

Along with implementing a gesture detector for the entire Activity here, we will also take the final step in configuring our splash screen so that the image fills the screen but maintains its aspect ratio. Follow these steps to complete the app splash screen:

1. Open the SplashActivity file.
2. Declare a GestureDetector as we did in the earlier exercise:

private GestureDetector detector;

3. In the onCreate() method, assign and configure our splash image and gesture detector like this:

ImageView imageView = (ImageView) findViewById(R.id.splash_image);
imageView.setScaleType(ImageView.ScaleType.CENTER_CROP);
detector = new GestureDetector(this, new SplashListener());

4. Now, override the Activity's onTouchEvent() like this:

@Override
public boolean onTouchEvent(MotionEvent event) {
    this.detector.onTouchEvent(event);
    return super.onTouchEvent(event);
}

5. Create the following SimpleOnGestureListener class:

private class SplashListener extends GestureDetector.SimpleOnGestureListener {
    @Override
    public boolean onDown(MotionEvent e) {
        return true;
    }

    @Override
    public void onLongPress(MotionEvent e) {
        startActivity(new Intent(getApplicationContext(), MainActivity.class));
    }
}

6. Build and run the app on your phone or an emulator.

The way a gesture detector is implemented across an entire Activity should be familiar by this point, as should the capturing of the long press event. The ImageView.setScaleType(ImageView.ScaleType) method is essential here; it is a very useful method in general. The CENTER_CROP constant scales the image to fill the view while maintaining the aspect ratio, cropping the edges when necessary.
There are several similar ScaleTypes, such as CENTER_INSIDE, which scales the image to the maximum size possible without cropping it, and CENTER, which does not scale the image at all. The beauty of CENTER_CROP is that it means we don't have to design a separate image for every possible aspect ratio on the numerous devices our apps will end up running on. Provided that we make allowances for very wide or very narrow screens by not placing essential information too close to the edges, we only need to provide a handful of images of varying pixel densities to maintain image quality on large, high-resolution devices. The scale type of an ImageView can also be set from within XML, with android:scaleType="centerCrop", for example.

You may have wondered why we did not use the built-in Full-Screen Activity from the wizard; we could easily have done so. The template code the wizard creates for a Full-Screen Activity provides far more features than we needed for this exercise. Nevertheless, the template is worth taking a look at, especially if you want a full screen that brings the status bar and other components into view when the user interacts with the Activity.

That brings us to the end of this article. Not only have we seen how to make our apps interact with touch events and gestures, but also how to send debug messages to the IDE and how to make a full-screen Activity.

Summary

We began this article by adding a GestureDetector to our project. We then edited it so that we could filter out meaningful touch events (swipes right and left, in this case). We went on to see how the SimpleOnGestureListener can save us a lot of time when we are only interested in catching a subset of the recognized gestures. We also saw how to use DDMS to pass debug messages during runtime and how, through a combination of XML and Java, the status and action bars can be hidden and the entire screen filled with a single view or view group.

Resources for Article:

Further resources on this subject:
- Speeding up Gradle builds for Android [article]
- Saying Hello to Unity and Android [article]
- Testing with the Android SDK [article]


Speeding up Gradle builds for Android

Packt
16 Jul 2015
7 min read
In this article by Kevin Pelgrims, the author of the book Gradle for Android, we will cover a few tips and tricks that will help speed up Gradle builds. A lot of Android developers who start using Gradle complain about the prolonged compilation time. Builds can take longer than they do with Ant, because Gradle has three phases in the build lifecycle that it goes through every time you execute a task. This makes the whole process very configurable, but also quite slow. Luckily, there are several ways to speed up Gradle builds.

Gradle properties

One way to tweak the speed of a Gradle build is to change some of the default settings. You can enable parallel builds by setting a property in a gradle.properties file placed in the root of a project. All you need to do is add the following line:

org.gradle.parallel=true

Another easy win is to enable the Gradle daemon, which starts a background process when you run a build for the first time. Any subsequent builds will then reuse that background process, thus cutting out the startup cost. The process is kept alive as long as you use Gradle and is terminated after three hours of idle time. Using the daemon is particularly useful when you use Gradle several times in a short time span. You can enable the daemon in the gradle.properties file like this:

org.gradle.daemon=true

In Android Studio, the Gradle daemon is enabled by default. This means that after the first build from inside the IDE, the next builds are a bit faster. If you build from the command-line interface, however, the Gradle daemon is disabled unless you enable it in the properties.

To speed up the compilation itself, you can tweak parameters on the Java Virtual Machine (JVM). There is a Gradle property called jvmargs that enables you to set different values for the memory allocation pool of the JVM. The two parameters that have a direct influence on your build speed are Xms and Xmx. The Xms parameter sets the initial amount of memory to be used, while the Xmx parameter sets a maximum. You can manually set these values in the gradle.properties file like this:

org.gradle.jvmargs=-Xms256m -Xmx1024m

You need to set the desired amount and a unit, which can be k for kilobytes, m for megabytes, and g for gigabytes. By default, the maximum memory allocation (Xmx) is set to 256 MB, and the starting memory allocation (Xms) is not set at all. The optimal settings depend on the capabilities of your computer.

The last property you can configure to influence build speed is org.gradle.configureondemand. This property is particularly useful if you have complex projects with several modules, as it tries to limit the time spent in the configuration phase by skipping modules that are not required for the task being executed. If you set this property to true, Gradle will try to figure out which modules have configuration changes and which do not, before it runs the configuration phase. This feature will not be very useful if you only have an Android app and a library in your project; if you have a lot of loosely coupled modules, though, it can save you a lot of build time.

System-wide Gradle properties

If you want to apply these properties system-wide to all your Gradle-based projects, you can create a gradle.properties file in the .gradle folder in your home directory. On Microsoft Windows, the full path to this directory is %UserProfile%\.gradle; on Linux and Mac OS X, it is ~/.gradle.
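Putting the settings from the previous sections together, a complete gradle.properties file might look like this; the memory values are just examples and should be tuned to your machine:

# Combined gradle.properties with the properties discussed above
org.gradle.parallel=true
org.gradle.daemon=true
org.gradle.jvmargs=-Xms256m -Xmx1024m
org.gradle.configureondemand=true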
It is a good practice to set these properties in your home directory, rather than at the project level. The reason for this is that you usually want to keep memory consumption down on build servers, where build time is of less importance.

Android Studio

The Gradle properties you can change to speed up the compilation process are also configurable in the Android Studio settings. To find the compiler settings, open the Settings dialog, and then navigate to Build, Execution, Deployment | Compiler. On that screen, you can find settings for parallel builds, JVM options, configure on demand, and so on. These settings only show up for Gradle-based Android modules. Have a look at the following screenshot:

Configuring these settings from Android Studio is easier than configuring them manually in the build configuration file, and the settings dialog makes it easy to find properties that influence the build process.

Profiling

If you want to find out which parts of the build are slowing the process down, you can profile the entire build process. You can do this by adding the --profile flag whenever you execute a Gradle task. When you provide this flag, Gradle creates a profiling report, which can tell you which parts of the build process are the most time consuming. Once you know where the bottlenecks are, you can make the necessary changes. The report is saved as an HTML file in your module in build/reports/profile. This is the report generated after executing the build task on a multimodule project:

The profiling report shows an overview of the time spent in each phase while executing the task. Below that summary is an overview of how much time Gradle spent on the configuration phase for each module. There are two more sections in the report that are not shown in the screenshot. The Dependency Resolution section shows how long it took to resolve dependencies, per module. Lastly, the Task Execution section contains an extremely detailed task execution overview. This overview has the timing for every single task, ordered by execution time from high to low.
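For example, to profile a full build you could run the following command (the task name is just an illustration; the --profile flag works with any task):

gradle clean build --profile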
Jack and Jill

If you are willing to use experimental tools, you can enable Jack and Jill to speed up builds. Jack (Java Android Compiler Kit) is a new Android build toolchain that compiles Java source code directly to the Android Dalvik executable (dex) format. It has its own .jack library format and takes care of packaging and shrinking as well. Jill (Jack Intermediate Library Linker) is a tool that can convert .aar and .jar files to .jack libraries. These tools are still quite experimental, but they were made to improve build times and to simplify the Android build process. It is not recommended to start using Jack and Jill for production versions of your projects, but they are made available so that you can try them out.

To be able to use Jack and Jill, you need to use build tools version 21.1.1 or higher, and the Android plugin for Gradle version 1.0.0 or higher. Enabling Jack and Jill is as easy as setting one property in the defaultConfig block:

android {
    buildToolsVersion '22.0.1'

    defaultConfig {
        useJack = true
    }
}

You can also enable Jack and Jill on a certain build type or product flavor. This way, you can continue using the regular build toolchain, and have an experimental build on the side:

android {
    productFlavors {
        regular {
            useJack = false
        }

        experimental {
            useJack = true
        }
    }
}

As soon as you set useJack to true, minification and obfuscation will not go through ProGuard anymore, but you can still use the ProGuard rules syntax to specify certain rules and exceptions. Use the same proguardFiles method that we mentioned before, when talking about ProGuard.

Summary

This article showed several ways to speed up builds: we first saw how to tweak the settings that configure Gradle and the JVM, then how to detect the parts that are slowing down the process, and finally we looked at the experimental Jack and Jill toolchain.

Resources for Article:

Further resources on this subject:
Android Tech Page
Android Virtual Device Manager
Apache Maven and m2eclipse
Saying Hello to Unity and Android

Writing a Fully Native Application

Packt
05 May 2015
15 min read
In this article written by Sylvain Ratabouil, author of Android NDK Beginner's Guide - Second Edition, we have breached Android NDK's surface using JNI. But there is much more to find inside! The NDK includes its own set of specific features, one of them being Native Activities. Native activities allow creating applications based only on native code, without a single line of Java. No more JNI! No more references! No more Java!

(For more resources related to this topic, see here.)

In addition to native activities, the NDK brings some APIs for native access to Android resources, such as display windows, assets, and device configuration. These APIs help in getting rid of the tortuous JNI bridge often necessary to embed native code. Although a lot is still missing, and not likely to be available (Java remains the main platform language for GUIs and most frameworks), multimedia applications are a perfect target to apply them.

Here we initiate a native C++ project developed progressively throughout this article: DroidBlaster. Based on a top-down viewpoint, this sample scrolling shooter will feature 2D graphics, and, later on, 3D graphics, sound, input, and sensor management. We will be creating its base structure and main game components.

Let's now enter the heart of the Android NDK by:

Creating a fully native activity
Handling main activity events
Accessing the display window natively
Retrieving time and calculating delays

Creating a native Activity

The NativeActivity class provides a facility to minimize the work necessary to create a native application. It lets the developer get rid of all the boilerplate code to initialize and communicate with native code, and concentrate on core functionalities. This glue Activity is the simplest way to write applications, such as games, without a line of Java code. The resulting project is provided with this book under the name DroidBlaster_Part1.

Time for action – creating a basic native Activity

We are now going to see how to create a minimal native activity that runs an event loop.

Create a new hybrid Java/C++ project:
     Name it DroidBlaster.
     Turn the project into a native project. Name the native module droidblaster.
     Remove the native source and header files that have been created by ADT.
     Remove the reference to the Java src directory in Project Properties | Java Build Path | Source. Then, remove the directory itself on disk.
     Get rid of all layouts in the res/layout directory.
     Get rid of jni/droidblaster.cpp if it has been created.

In AndroidManifest.xml, use Theme.NoTitleBar.Fullscreen as the application theme.
Declare a NativeActivity that refers to the native module named droidblaster (that is, the native library we will compile) using the meta-data property android.app.lib_name:

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.packtpub.droidblaster2d"
    android:versionCode="1"
    android:versionName="1.0">
    <uses-sdk
        android:minSdkVersion="14"
        android:targetSdkVersion="19"/>

    <application android:icon="@drawable/ic_launcher"
        android:label="@string/app_name"
        android:allowBackup="false"
        android:theme="@android:style/Theme.NoTitleBar.Fullscreen">
        <activity android:name="android.app.NativeActivity"
            android:label="@string/app_name"
            android:screenOrientation="portrait">
            <meta-data android:name="android.app.lib_name"
                android:value="droidblaster"/>
            <intent-filter>
                <action android:name="android.intent.action.MAIN"/>
                <category
                    android:name="android.intent.category.LAUNCHER"/>
            </intent-filter>
        </activity>
    </application>
</manifest>

Create the file jni/Types.hpp. This header will contain common types and the header cstdint:

#ifndef _PACKT_TYPES_HPP_
#define _PACKT_TYPES_HPP_

#include <cstdint>

#endif

Let's write a logging class to get some feedback in the Logcat.
     Create jni/Log.hpp and declare a new class Log.
     Define the packt_Log_debug macro to allow the activating or deactivating of debug messages with a simple compile flag:

#ifndef _PACKT_LOG_HPP_
#define _PACKT_LOG_HPP_

class Log {
public:
    static void error(const char* pMessage, ...);
    static void warn(const char* pMessage, ...);
    static void info(const char* pMessage, ...);
    static void debug(const char* pMessage, ...);
};

#ifndef NDEBUG
    #define packt_Log_debug(...) Log::debug(__VA_ARGS__)
#else
    #define packt_Log_debug(...)
#endif

#endif

Implement the jni/Log.cpp file and implement the info() method. To write messages to Android logs, the NDK provides a dedicated logging API in the android/log.h header, which can be used similarly to printf() or vprintf() (with varArgs) in C:

#include "Log.hpp"

#include <stdarg.h>
#include <android/log.h>

void Log::info(const char* pMessage, ...) {
    va_list varArgs;
    va_start(varArgs, pMessage);
    __android_log_vprint(ANDROID_LOG_INFO, "PACKT", pMessage,
        varArgs);
    __android_log_print(ANDROID_LOG_INFO, "PACKT", "\n");
    va_end(varArgs);
}
...

Write the other log methods, error(), warn(), and debug(), which are almost identical, except for the level macro, which is respectively ANDROID_LOG_ERROR, ANDROID_LOG_WARN, and ANDROID_LOG_DEBUG instead.

Application events in NativeActivity can be processed with an event loop. So, create jni/EventLoop.hpp to define a class with a unique method run(). Include the android_native_app_glue.h header, which defines the android_app structure. It represents what could be called an applicative context, where all the information is related to the native activity; its state, its window, its event queue, and so on:

#ifndef _PACKT_EVENTLOOP_HPP_
#define _PACKT_EVENTLOOP_HPP_

#include <android_native_app_glue.h>

class EventLoop {
public:
    EventLoop(android_app* pApplication);

    void run();

private:
    android_app* mApplication;
};
#endif

Create jni/EventLoop.cpp and implement the activity event loop in the run() method. Include a few log events to get some feedback in Android logs.
During the whole activity lifetime, the run() method loops continuously over events until it is requested to terminate. When an activity is about to be destroyed, the destroyRequested value in the android_app structure is changed internally to indicate to the client code that it must exit. Also, call app_dummy() to ensure the glue code that ties native code to NativeActivity is not stripped by the linker.

#include "EventLoop.hpp"
#include "Log.hpp"

EventLoop::EventLoop(android_app* pApplication):
        mApplication(pApplication) {}

void EventLoop::run() {
    int32_t result;
    int32_t events;
    android_poll_source* source;

    // Makes sure native glue is not stripped by the linker.
    app_dummy();

    Log::info("Starting event loop");
    while (true) {
        // Event processing loop.
        while ((result = ALooper_pollAll(-1, NULL, &events,
                (void**) &source)) >= 0) {
            // An event has to be processed.
            if (source != NULL) {
                source->process(mApplication, source);
            }
            // Application is getting destroyed.
            if (mApplication->destroyRequested) {
                Log::info("Exiting event loop");
                return;
            }
        }
    }
}

Finally, create jni/Main.cpp to define the program entry point android_main(), which runs the event loop:

#include "EventLoop.hpp"
#include "Log.hpp"

void android_main(android_app* pApplication) {
    EventLoop(pApplication).run();
}

Edit the jni/Android.mk file to define the droidblaster module (the LOCAL_MODULE directive). Describe the C++ files to compile in the LOCAL_SRC_FILES directive with the help of the LS_CPP macro. Link droidblaster with the native_app_glue module (the LOCAL_STATIC_LIBRARIES directive) and android (required by the Native App Glue module), as well as the log libraries (the LOCAL_LDLIBS directive):

LOCAL_PATH := $(call my-dir)

include $(CLEAR_VARS)

LS_CPP=$(subst $(1)/,,$(wildcard $(1)/*.cpp))
LOCAL_MODULE := droidblaster
LOCAL_SRC_FILES := $(call LS_CPP,$(LOCAL_PATH))
LOCAL_LDLIBS := -landroid -llog
LOCAL_STATIC_LIBRARIES := android_native_app_glue

include $(BUILD_SHARED_LIBRARY)

$(call import-module,android/native_app_glue)

Create jni/Application.mk to compile the native module for multiple ABIs. We will use the most basic ones, as shown in the following code:

APP_ABI := armeabi armeabi-v7a x86

What just happened?

Build and run the application. Of course, you will not see anything tremendous when starting this application. Actually, you will just see a black screen! However, if you look carefully at the LogCat view in Eclipse (or the adb logcat command), you will discover a few interesting messages that have been emitted by your native application in reaction to activity events.

We initiated a Java Android project without a single line of Java code! Instead of referencing a child of Activity in AndroidManifest, we referenced the android.app.NativeActivity class provided by the Android framework. NativeActivity is a Java class, launched like any other Android activity and interpreted by the Dalvik Virtual Machine like any other Java class. However, we never faced it directly. NativeActivity is in fact a helper class provided with the Android SDK, which contains all the necessary glue code to handle application events (lifecycle, input, sensors, and so on) and broadcasts them transparently to native code. Thus, a native activity does not eliminate the need for JNI. It just hides it under the covers!
However, the native C/C++ module run by NativeActivity is executed outside Dalvik boundaries in its own thread, entirely natively (using the Posix Thread API)! NativeActivity and native code are connected together through the native_app_glue module. The Native App Glue has the responsibility of:

Launching the native thread, which runs our own native code
Receiving events from NativeActivity
Routing these events to the native thread event loop for further processing

The Native glue module code is located in ${ANDROID_NDK}/sources/android/native_app_glue and can be analyzed, modified, or forked at will. The headers related to native APIs, such as looper.h, can be found in ${ANDROID_NDK}/platforms/<Target Platform>/<Target Architecture>/usr/include/android/. Let's see in more detail how it works.

More about the Native App Glue

Our own native code entry point is declared inside the android_main() method, which is similar to the main methods in desktop applications. It is called only once when NativeActivity is instantiated and launched. It loops over application events until NativeActivity is terminated by the user (for example, when pressing a device's back button) or until it exits by itself.

The android_main() method is not the real native application entry point. The real entry point is the ANativeActivity_onCreate() method hidden in the android_native_app_glue module. The event loop we implemented in android_main() is in fact a delegate event loop, launched in its own native thread by the glue module. This design decouples native code from the NativeActivity class, which is run on the UI thread on the Java side. Thus, even if your code takes a long time to handle an event, NativeActivity is not blocked and your Android device still remains responsive.

The delegate native event loop in android_main() is itself composed, in our example, of two nested while loops. The outer one is an infinite loop, terminated only when activity destruction is requested by the system (indicated by the destroyRequested flag). It executes an inner loop, which processes all pending application events.

...
int32_t result;
int32_t events;
android_poll_source* source;
while (true) {
    while ((result = ALooper_pollAll(-1, NULL, &events,
            (void**) &source)) >= 0) {
        if (source != NULL) {
            source->process(mApplication, source);
        }
        if (mApplication->destroyRequested) {
            return;
        }
    }
}
...

The inner while loop polls events by calling ALooper_pollAll(). This method is part of the Looper API, which can be described as a general-purpose event loop manager provided by Android. When the timeout is set to -1, as in the preceding example, ALooper_pollAll() remains blocked while waiting for events. When at least one is received, ALooper_pollAll() returns and the code flow continues. The android_poll_source structure describing the event is filled, and is then used by client code for further processing. This structure looks as follows:

struct android_poll_source {
    int32_t id; // Source identifier
    struct android_app* app; // Global android application context
    void (*process)(struct android_app* app,
            struct android_poll_source* source); // Event processor
};

The process() function pointer can be customized to process application events manually.
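For instance, the glue's activity event callback can be wired up as in the following minimal sketch (this is not part of the DroidBlaster code shown above; the callback field and command constants come from android_native_app_glue.h):

static void activityCallback(android_app* pApplication, int32_t pCommand) {
    switch (pCommand) {
    case APP_CMD_INIT_WINDOW:
        Log::info("Activity window is ready");
        break;
    case APP_CMD_TERM_WINDOW:
        Log::info("Activity window is terminated");
        break;
    default:
        break;
    }
}

// Registered before polling, for example at the beginning of EventLoop::run():
// mApplication->onAppCmd = activityCallback;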
As we saw in this part, the event loop receives an android_app structure as a parameter. This structure, described in android_native_app_glue.h, contains some contextual information, as shown in the following list:

void* userData: Pointer to any data you want. This is essential in giving some contextual information to the activity or input event callbacks.
void (*onAppCmd)(…) and int32_t (*onInputEvent)(…): These member variables represent the event callbacks triggered by the Native App Glue when an activity or an input event occurs.
ANativeActivity* activity: Describes the Java native activity (its class as a JNI object, its data directories, and so on) and gives the necessary information to retrieve a JNI context.
AConfiguration* config: Describes the current hardware and system state, such as the current language and country, the current screen orientation, density, size, and so on.
void* savedState and size_t savedStateSize: Used to save a buffer of data when an activity (and thus its native thread) is destroyed and later restored.
AInputQueue* inputQueue: Provides input events (used internally by the native glue).
ALooper* looper: Allows attaching and detaching event queues used internally by the native glue. Listeners poll and wait for events sent on a communication pipe.
ANativeWindow* window and ARect contentRect: Represents the "drawable" area on which graphics can be drawn. The ANativeWindow API, declared in native_window.h, allows retrieval of the window width, height, and pixel format, and the changing of these settings.
int activityState: Current activity state, that is, APP_CMD_START, APP_CMD_RESUME, APP_CMD_PAUSE, and so on.
int destroyRequested: When equal to 1, it indicates that the application is about to be destroyed and the native thread must be terminated immediately. This flag has to be checked in the event loop.

The android_app structure also contains some additional data for internal use only, which should not be changed. Knowing all these details is not essential to program native programs, but it can help you understand what's going on behind your back. Let's now see how to handle these activity events.

Summary

The Android NDK allows us to write fully native applications without a line of Java code. NativeActivity provides a skeleton to implement an event loop that processes application events. Associated with the Posix time management API, the NDK provides the required base to build complex multimedia applications or games.

In summary, we created NativeActivity that polls activity events to start or stop native code accordingly. We accessed the display window natively, like a bitmap, to display raw graphics. Finally, we retrieved time to make the application adapt to device speed using a monotonic clock.

Resources for Article:

Further resources on this subject:
Android Native Application API [article]
Organizing a Virtual Filesystem [article]
Android Fragmentation Management [article]

Testing with the Android SDK

Packt
26 Mar 2015
25 min read
In this article by Paul Blundell, the author of the book Learning Android Application Testing, we start digging a bit deeper to recognize the building blocks available to create more useful tests. We will be covering the following topics:

Common assertions
View assertions
Other assertion types
Helpers to test User Interfaces
Mock objects
Instrumentation
TestCase class hierarchies
Using external libraries

We will be analyzing these components and showing examples of their use when applicable. The examples in this article are intentionally split from the original Android project that contains them. This is done to let you concentrate and focus only on the subject being presented, though the complete examples in a single project can be downloaded as explained later. Right now, we are interested in the trees and not the forest.

Along with the examples presented, we will be identifying reusable common patterns that will help you in the creation of tests for your own projects.

(For more resources related to this topic, see here.)

The demonstration application

A very simple application has been created to demonstrate the use of some of the tests in this article. The source for the application can be downloaded from XXXXXXXXXXXXX. The following screenshot shows this application running:

When reading the explanation of the tests in this article, at any point, you can refer to the demo application that is provided in order to see the test in action. The simple application shown has a clickable link, a text input, a button to click, and a defined layout UI; we can test these one by one.

Assertions in depth

Assertions are methods that check for a condition that can be evaluated. If the condition is not met, the assertion method will throw an exception, thereby aborting the execution of the test.

The JUnit API includes the class Assert. This is the base class of all the TestCase classes that hold several assertion methods useful for writing tests. These inherited methods test for a variety of conditions and are overloaded to support different parameter types. They can be grouped together in the following different sets, depending on the condition checked:

assertEquals
assertTrue
assertFalse
assertNull
assertNotNull
assertSame
assertNotSame
fail

The condition tested is pretty obvious and is easily identifiable by the method name. Perhaps the ones that deserve some attention are assertEquals() and assertSame(). The former, when used on objects, asserts that both objects passed as parameters are equal, calling the objects' equals() method. The latter asserts that both parameters refer to the same object. If, in some case, equals() is not implemented by the class, then assertEquals() and assertSame() will do the same thing (a short sketch illustrating this distinction follows the next example).

When one of these assertions fails inside a test, an AssertionFailedError is thrown, and this indicates that the test has failed.

Occasionally, during the development process, you might need to create a test that you are not implementing at that precise time. However, you want to flag that the creation of the test was postponed. In such cases, you can use the fail() method, which always fails and uses a custom message that indicates the condition:

public void testNotImplementedYet() {
    fail("Not implemented yet");
}
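To make the assertEquals()/assertSame() distinction concrete, here is a minimal sketch (the values are arbitrary and not part of the demo project):

public void testEqualsVersusSame() {
    String expected = new String("Android");
    String actual = new String("Android");

    // Passes: both objects are equal according to equals().
    assertEquals(expected, actual);
    // Passes: they are equal, yet they are two distinct instances.
    assertNotSame(expected, actual);
}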
Still, there is another common use for fail() that is worth mentioning. If we need to test whether a method throws an exception, we can surround the code with a try-catch block and force a fail if the exception was not thrown. For example:

public void testShouldThrowException() {
    try {
        MyFirstProjectActivity.methodThatShouldThrowException();
        fail("Exception was not thrown");
    } catch ( Exception ex ) {
        // do nothing
    }
}

JUnit4 has the annotation @Test(expected=Exception.class), and this supersedes the need for using fail() when testing exceptions. With this annotation, the test will only pass if the expected exception is thrown.

Custom message

It is worth knowing that all assert methods provide an overloaded version including a custom String message. Should the assertion fail, this custom message will be printed by the test runner, instead of a default message. The premise behind this is that, sometimes, the generic error message does not reveal enough details, and it is not obvious how the test failed. This custom message can be extremely useful to easily identify the failure once you are looking at the test report, so it's highly recommended as a best practice to use this version.

The following is an example of a simple test that uses this recommendation:

public void testMax() {
    int a = 10;
    int b = 20;

    int actual = Math.max(a, b);

    String failMsg = "Expected: " + b + " but was: " + actual;
    assertEquals(failMsg, b, actual);
}

In the preceding example, we can see another practice that will help you organize and understand your tests easily. This is the use of explicit names for variables that hold the actual values.

There are other libraries available that have better default error messages and also a more fluid interface for testing. One of these that is worth looking at is Fest (https://code.google.com/p/fest/).

Static imports

Though basic assertion methods are inherited from the Assert base class, some other assertions need specific imports. To improve the readability of your tests, there is a pattern to statically import the assert methods from the corresponding classes. Using this pattern, instead of having:

public void testAlignment() {
    int margin = 0;
    ...
    android.test.ViewAsserts.assertRightAligned(errorMsg, editText, margin);
}

We can simplify it by adding the static import:

import static android.test.ViewAsserts.assertRightAligned;

public void testAlignment() {
    int margin = 0;
    assertRightAligned(errorMsg, editText, margin);
}

View assertions

The assertions introduced earlier handle a variety of types as parameters, but they are only intended to test simple conditions or simple objects. For example, we have assertEquals(short expected, short actual) to test short values, assertEquals(int expected, int actual) to test integer values, assertEquals(Object expected, Object actual) to test any Object instance, and so on.

Usually, while testing user interfaces in Android, you will face the problem of more sophisticated methods, which are mainly related with Views. In this respect, Android provides a class with plenty of assertions in android.test.ViewAsserts (see http://developer.android.com/reference/android/test/ViewAsserts.html for more details), which test relationships between Views and their absolute and relative positions on the screen. These methods are also overloaded to provide different conditions. Among the assertions, we can find the following:

assertBaselineAligned: This asserts that two Views are aligned on their baseline; that is, their baselines are on the same y location.
assertBottomAligned: This asserts that two views are bottom aligned; that is, their bottom edges are on the same y location.
assertGroupContains: This asserts that the specified group contains a specific child once and only once.
assertGroupIntegrity: This asserts the specified group's integrity. The child count should be >= 0 and each child should be non-null.
assertGroupNotContains: This asserts that the specified group does not contain a specific child.
assertHasScreenCoordinates: This asserts that a View has a particular x and y position on the visible screen.
assertHorizontalCenterAligned: This asserts that the test View is horizontally center aligned with respect to the reference view.
assertLeftAligned: This asserts that two Views are left aligned; that is, their left edges are on the same x location. An optional margin can also be provided.
assertOffScreenAbove: This asserts that the specified view is above the visible screen.
assertOffScreenBelow: This asserts that the specified view is below the visible screen.
assertOnScreen: This asserts that a View is on the screen.
assertRightAligned: This asserts that two Views are right-aligned; that is, their right edges are on the same x location. An optional margin can also be specified.
assertTopAligned: This asserts that two Views are top aligned; that is, their top edges are on the same y location. An optional margin can also be specified.
assertVerticalCenterAligned: This asserts that the test View is vertically center-aligned with respect to the reference View.

The following example shows how you can use ViewAsserts to test the user interface layout:

public void testUserInterfaceLayout() {
    int margin = 0;
    View origin = mActivity.getWindow().getDecorView();
    assertOnScreen(origin, editText);
    assertOnScreen(origin, button);
    assertRightAligned(editText, button, margin);
}

The assertOnScreen method uses an origin to start looking for the requested Views. In this case, we are using the top-level window decor View. If, for some reason, you don't need to go that high in the hierarchy, or if this approach is not suitable for your test, you may use another root View in the hierarchy, for example View.getRootView(), which, in our concrete example, would be editText.getRootView().

Even more assertions

If the assertions that were reviewed previously do not seem to be enough for your tests' needs, there is still another class included in the Android framework that covers other cases. This class is MoreAsserts (http://developer.android.com/reference/android/test/MoreAsserts.html). These methods are also overloaded to support different parameter types. Among the assertions, we can find the following:

assertAssignableFrom: This asserts that an object is assignable to a class.
assertContainsRegex: This asserts that an expected Regex matches any substring of the specified String. It fails with the specified message if it does not.
assertContainsInAnyOrder: This asserts that the specified Iterable contains precisely the elements expected, but in any order.
assertContainsInOrder: This asserts that the specified Iterable contains precisely the elements expected, but in the same order.
assertEmpty: This asserts that an Iterable is empty.
assertEquals: This is for some Collections not covered in JUnit asserts.
assertMatchesRegex: This asserts that the specified Regex exactly matches the String and fails with the provided message if it does not.
assertNotContainsRegex: This asserts that the specified Regex does not match any substring of the specified String, and fails with the provided message if it does.
assertNotEmpty: This asserts that some Collections not covered in JUnit asserts are not empty.
assertNotMatchesRegex: This asserts that the specified Regex does not exactly match the specified String, and fails with the provided message if it does.
checkEqualsAndHashCodeMethods: This is a utility used to test the equals() and hashCode() results at once. This tests whether equals() that is applied to both objects matches the specified result.

The following test checks for an error during the invocation of the capitalization method called via a click on the UI button:

@UiThreadTest
public void testNoErrorInCapitalization() {
    String msg = "capitalize this text";
    editText.setText(msg);

    button.performClick();

    String actual = editText.getText().toString();
    String notExpectedRegexp = "(?i:ERROR)";
    String errorMsg = "Capitalization error for " + actual;
    assertNotContainsRegex(errorMsg, notExpectedRegexp, actual);
}

If you are not familiar with regular expressions, invest some time and visit http://developer.android.com/reference/java/util/regex/package-summary.html because it will be worth it!

In this particular case, we are looking for the word ERROR contained in the result with a case-insensitive match (setting the flag i for this purpose). That is, if for some reason, capitalization doesn't work in our application, and it contains an error message, we can detect this condition with the assertion.

Note that because this is a test that modifies the user interface, we must annotate it with @UiThreadTest; otherwise, it won't be able to alter the UI from a different thread, and we will receive the following exception:

INFO/TestRunner(610): ----- begin exception -----
INFO/TestRunner(610): android.view.ViewRoot$CalledFromWrongThreadException: Only the original thread that created a view hierarchy can touch its views.
INFO/TestRunner(610):     at android.view.ViewRoot.checkThread(ViewRoot.java:2932)
[...]
INFO/TestRunner(610):     at android.app.Instrumentation$InstrumentationThread.run(Instrumentation.java:1447)
INFO/TestRunner(610): ----- end exception -----

The TouchUtils class

Sometimes, when testing UIs, it is helpful to simulate different kinds of touch events. These touch events can be generated in many different ways, but probably android.test.TouchUtils is the simplest to use. This class provides reusable methods to generate touch events in test cases that are derived from InstrumentationTestCase.

The featured methods allow a simulated interaction with the UI under test. The TouchUtils class provides the infrastructure to inject the events using the correct UI or main thread, so no special handling is needed, and you don't need to annotate the test using @UiThreadTest.
TouchUtils supports the following:

Clicking on a View and releasing it
Tapping on a View (touching it and quickly releasing)
Long-clicking on a View
Dragging the screen
Dragging Views

The following test represents a typical usage of TouchUtils:

public void testListScrolling() {
    listView.scrollTo(0, 0);

    TouchUtils.dragQuarterScreenUp(this, activity);
    int actualItemPosition = listView.getFirstVisiblePosition();

    assertTrue("Wrong position", actualItemPosition > 0);
}

This test does the following:

Repositions the list at the beginning to start from a known condition
Scrolls the list
Checks the first visible position to see that the list was correctly scrolled

Even the most complex UIs can be tested in that way, and this would help you detect a variety of conditions that could potentially affect the user experience.

Mock objects

We have seen the mock objects provided by the Android testing framework, and evaluated the concerns about not using real objects to isolate our tests from the surrounding environment. Martin Fowler calls these two styles the classical and mockist Test-driven Development dichotomy in his great article Mocks aren't stubs, which can be read online at http://www.martinfowler.com/articles/mocksArentStubs.html.

Independent of this discussion, we are introducing mock objects as one of the available building blocks because, sometimes, using mock objects in our tests is recommended, desirable, useful, or even unavoidable. The Android SDK provides the following classes in the subpackage android.test.mock to help us:

MockApplication: This is a mock implementation of the Application class. All methods are non-functional and throw UnsupportedOperationException.
MockContentProvider: This is a mock implementation of ContentProvider. All methods are non-functional and throw UnsupportedOperationException.
MockContentResolver: This is a mock implementation of the ContentResolver class that isolates the test code from the real content system. All methods are non-functional and throw UnsupportedOperationException.
MockContext: This is a mock Context class, and this can be used to inject other dependencies. All methods are non-functional and throw UnsupportedOperationException.
MockCursor: This is a mock Cursor class that isolates the test code from the real Cursor implementation. All methods are non-functional and throw UnsupportedOperationException.
MockDialogInterface: This is a mock implementation of the DialogInterface class. All methods are non-functional and throw UnsupportedOperationException.
MockPackageManager: This is a mock implementation of the PackageManager class. All methods are non-functional and throw UnsupportedOperationException.
MockResources: This is a mock Resources class.

All of these classes have non-functional methods that throw UnsupportedOperationException when used. If you need to use some of these methods, or if you detect that your test is failing with this Exception, you should extend one of these base classes and provide the required functionality.

MockContext overview

This mock can be used to inject other dependencies, mocks, or monitors into the classes under test. Extend this class to provide your desired behavior, overriding the corresponding methods. The Android SDK provides some prebuilt mock Context objects, each of which has a separate use case.
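As an illustration, a minimal sketch of such an extension might look like this (the class name is hypothetical and not part of the SDK):

// Delegates only resource lookups to a real Context; every other
// method still throws UnsupportedOperationException.
public class ResourcesOnlyMockContext extends MockContext {
    private final Context delegate;

    public ResourcesOnlyMockContext(Context delegate) {
        this.delegate = delegate;
    }

    @Override
    public Resources getResources() {
        return delegate.getResources();
    }
}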
The IsolatedContext class

In your tests, you might find the need to isolate the Activity under test from other Android components to prevent unwanted interactions. This can be a complete isolation, but sometimes, this isolation avoids interacting with other components while, for your Activity to still run correctly, some connection with the system is required. For those cases, the Android SDK provides android.test.IsolatedContext, a mock Context that not only prevents interaction with most of the underlying system but also satisfies the needs of interacting with other packages or components, such as Services or ContentProviders.

Alternate route to file and database operations

In some cases, all we need is to be able to provide an alternate route to the file and database operations. For example, if we are testing the application on a real device, we perhaps don't want to affect the existing database but use our own testing data. Such cases can take advantage of another class that is not part of the android.test.mock subpackage but is part of android.test instead, that is, RenamingDelegatingContext. This class lets us alter operations on files and databases by having a prefix that is specified in the constructor. All other operations are delegated to the delegating Context that you must specify in the constructor too.

Suppose our Activity under test uses a database we want to control, probably introducing specialized content or fixture data to drive our tests, and we don't want to use the real files. In this case, we create a RenamingDelegatingContext class that specifies a prefix, and our unchanged Activity will use this prefix to create any files. For example, if our Activity tries to access a file named birthdays.txt, and we provide a RenamingDelegatingContext class that specifies the prefix test, then this same Activity will access the file testbirthdays.txt instead when it is being tested.
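A minimal sketch of this setup in a test case could look like the following (MyActivity and the prefix are hypothetical; the constructor signature comes from android.test.RenamingDelegatingContext):

public class MyActivityFileTest extends ActivityUnitTestCase<MyActivity> {

    public MyActivityFileTest() {
        super(MyActivity.class);
    }

    @Override
    protected void setUp() throws Exception {
        super.setUp();
        // Every file or database the Activity opens is now prefixed with "test".
        RenamingDelegatingContext context = new RenamingDelegatingContext(
                getInstrumentation().getTargetContext(), "test");
        setActivityContext(context);
    }
}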
The MockContentResolver class

The MockContentResolver class implements all methods in a non-functional way and throws the exception UnsupportedOperationException if you attempt to use them. The reason for this class is to isolate tests from the real content.

Let's say your application uses a ContentProvider class to feed your Activity information. You can create unit tests for this ContentProvider using ProviderTestCase2, which we will be analyzing shortly, but when we try to produce functional or integration tests for the Activity against the ContentProvider, it's not so evident which test case to use. The most obvious choice is ActivityInstrumentationTestCase2, mainly if your functional tests simulate user experience, because you might need the sendKeys() method or similar methods, which are readily available on these tests.

The first problem you might encounter then is that it's unclear where to inject a MockContentResolver in your test to be able to use test data with your ContentProvider. There's no way to inject a MockContext either.

The TestCase base class

This is the base class of all other test cases in the JUnit framework. It implements the basic methods that we were analyzing in the previous examples (setUp()). The TestCase class also implements the junit.framework.Test interface, meaning it can be run as a JUnit test. Your Android test cases should always extend TestCase or one of its descendants.

The default constructor

All test cases require a default constructor because, sometimes, depending on the test runner used, this is the only constructor that is invoked, and it is also used for serialization. According to the documentation, this method is not intended to be used by "mere mortals" without calling setName(String name). Therefore, to appease the Gods, a common pattern is to use a default test case name in this constructor and invoke the given name constructor afterwards:

public class MyTestCase extends TestCase {
    public MyTestCase() {
        this("MyTestCase Default Name");
    }

    public MyTestCase(String name) {
        super(name);
    }
}

The given name constructor

This constructor takes a name as an argument to label the test case. It will appear in test reports and will be of much help when you try to identify where failed tests have come from.

The setName() method

There are some classes that extend TestCase that don't provide a given name constructor. In such cases, the only alternative is to call setName(String name).

The AndroidTestCase base class

This class can be used as a base class for general-purpose Android test cases. Use it when you need access to Android resources, databases, or files in the filesystem. Context is stored as a field in this class, conveniently named mContext, and can be used inside the tests if needed, or the getContext() method can be used too.

Tests based on this class can start more than one Activity using Context.startActivity(). There are various test cases in the Android SDK that extend this base class:

ApplicationTestCase<T extends Application>
ProviderTestCase2<T extends ContentProvider>
ServiceTestCase<T extends Service>

When using the AndroidTestCase Java class, you inherit some base assertion methods that can be used; let's look at these in more detail.

The assertActivityRequiresPermission() method

The signature for this method is as follows:

public void assertActivityRequiresPermission(String packageName, String className, String permission)

Description

This assertion method checks whether the launching of a particular Activity is protected by a specific permission. It takes the following three parameters:

packageName: This is a string that indicates the package name of the activity to launch
className: This is a string that indicates the class of the activity to launch
permission: This is a string with the permission to check

The Activity is launched, and then a SecurityException is expected, which mentions that the required permission is missing in the error message. The actual instantiation of an activity is not handled by this assertion, and thus an Instrumentation is not needed.

Example

This test checks that launching the MyContactsActivity Activity requires the android.Manifest.permission.CALL_PHONE permission, which is needed to initiate a phone call:

public void testActivityPermission() {
    String pkg = "com.blundell.tut";
    String activity = pkg + ".MyContactsActivity";
    String permission = android.Manifest.permission.CALL_PHONE;
    assertActivityRequiresPermission(pkg, activity, permission);
}

Always use the constants that describe the permissions from android.Manifest.permission, not the strings, so if the implementation changes, your code will still be valid.

The assertReadingContentUriRequiresPermission method

The signature for this method is as follows:

public void assertReadingContentUriRequiresPermission(Uri uri, String permission)

Description

This assertion method checks whether reading from a specific URI requires the permission provided as a parameter. It takes the following two parameters:

uri: This is the Uri that requires a permission to query
permission: This is a string that contains the permission to query

If a SecurityException class is generated, which contains the specified permission, this assertion is validated.
Example

This test tries to read contacts and verifies that the correct SecurityException is generated:

public void testReadingContacts() {
    Uri URI = ContactsContract.AUTHORITY_URI;
    String PERMISSION = android.Manifest.permission.READ_CONTACTS;
    assertReadingContentUriRequiresPermission(URI, PERMISSION);
}

The assertWritingContentUriRequiresPermission() method

The signature for this method is as follows:

public void assertWritingContentUriRequiresPermission(Uri uri, String permission)

Description

This assertion method checks whether inserting into a specific Uri requires the permission provided as a parameter. It takes the following two parameters:

uri: This is the Uri that requires a permission to query
permission: This is a string that contains the permission to query

If a SecurityException class is generated, which contains the specified permission, this assertion is validated.

Example

This test tries to write to Contacts and verifies that the correct SecurityException is generated:

public void testWritingContacts() {
    Uri uri = ContactsContract.AUTHORITY_URI;
    String permission = android.Manifest.permission.WRITE_CONTACTS;
    assertWritingContentUriRequiresPermission(uri, permission);
}

Instrumentation

Instrumentation is instantiated by the system before any of the application code is run, thereby allowing monitoring of all the interactions between the system and the application.

As with many other Android application components, instrumentation implementations are described in the AndroidManifest.xml under the tag <instrumentation>. However, with the advent of Gradle, this has now been automated for us, and we can change the properties of the instrumentation in the app's build.gradle file. The AndroidManifest file for your tests will be automatically generated:

defaultConfig {
    testApplicationId 'com.blundell.tut.tests'
    testInstrumentationRunner "android.test.InstrumentationTestRunner"
}

The values mentioned in the preceding code are also the defaults if you do not declare them, meaning that you don't have to set any of these parameters to start writing tests.

The testApplicationId attribute defines the name of the package for your tests. By default, it is your application under test's package name plus a tests suffix. You can declare a custom test runner using testInstrumentationRunner. This is handy if you want to have tests run in a custom way, for example, parallel test execution.

There are also many other parameters in development, and I would advise you to keep an eye on the Google Gradle plugin website (http://tools.android.com/tech-docs/new-build-system/user-guide).

The ActivityMonitor inner class

As mentioned earlier, the Instrumentation class is used to monitor the interaction between the system and the application or the Activities under test. The inner class Instrumentation.ActivityMonitor allows the monitoring of a single Activity within an application.
Example

Let's pretend that we have a TextView in our Activity that holds a URL and has its auto link property set:

<TextView
    android:id="@+id/link"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:text="@string/home"
    android:autoLink="web" />

If we want to verify that, when clicked, the hyperlink is correctly followed and some browser is invoked, we can create a test like this:

public void testFollowLink() {
    IntentFilter intentFilter = new IntentFilter(Intent.ACTION_VIEW);
    intentFilter.addDataScheme("http");
    intentFilter.addCategory(Intent.CATEGORY_BROWSABLE);

    Instrumentation inst = getInstrumentation();
    ActivityMonitor monitor = inst.addMonitor(intentFilter, null, false);
    TouchUtils.clickView(this, linkTextView);
    monitor.waitForActivityWithTimeout(3000);
    int monitorHits = monitor.getHits();
    inst.removeMonitor(monitor);

    assertEquals(1, monitorHits);
}

Here, we do the following:

Create an IntentFilter for intents that would open a browser.
Add a monitor to our Instrumentation based on the IntentFilter class.
Click on the hyperlink.
Wait for the activity (hopefully the browser).
Verify that the monitor hits were incremented.
Remove the monitor.

Using monitors, we can test even the most complex interactions with the system and other Activities. This is a very powerful tool to create integration tests.

The InstrumentationTestCase class

The InstrumentationTestCase class is the direct or indirect base class for various test cases that have access to Instrumentation. This is the list of the most important direct and indirect subclasses:

ActivityTestCase
ProviderTestCase2<T extends ContentProvider>
SingleLaunchActivityTestCase<T extends Activity>
SyncBaseInstrumentation
ActivityInstrumentationTestCase2<T extends Activity>
ActivityUnitTestCase<T extends Activity>

The InstrumentationTestCase class is in the android.test package, and extends junit.framework.TestCase, which extends junit.framework.Assert.

The launchActivity and launchActivityWithIntent methods

These utility methods are used to launch Activities from a test. If the Intent is not specified using the second option, a default Intent is used:

public final T launchActivity(String pkg, Class<T> activityCls, Bundle extras)

The template class parameter T is used in activityCls and as the return type, limiting its use to Activities of that type.

If you need to specify a custom Intent, you can use the following method, which also adds the intent parameter:

public final T launchActivityWithIntent(String pkg, Class<T> activityCls, Intent intent)
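A hypothetical usage sketch from inside an InstrumentationTestCase subclass could look like this (the package, Activity class, and extra are illustrative; the method itself sets the Intent's component and task flags):

Intent intent = new Intent();
intent.putExtra("start_mode", "demo");
MyActivity activity = launchActivityWithIntent(
        "com.blundell.tut", MyActivity.class, intent);
assertNotNull("Activity was not launched", activity);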
The sendKeys and sendRepeatedKeys methods

While testing Activities' UI, you will face the need to simulate interaction with qwerty-based keyboards or DPAD buttons to send keys to complete fields, select shortcuts, or navigate throughout the different components. This is what the different sendKeys and sendRepeatedKeys methods are used for.

There is one version of sendKeys that accepts integer key values. They can be obtained from constants defined in the KeyEvent class. For example, we can use the sendKeys method in this way:

public void testSendKeyInts() {
    requestMessageInputFocus();
    sendKeys(
            KeyEvent.KEYCODE_H,
            KeyEvent.KEYCODE_E,
            KeyEvent.KEYCODE_E,
            KeyEvent.KEYCODE_E,
            KeyEvent.KEYCODE_Y,
            KeyEvent.KEYCODE_DPAD_DOWN,
            KeyEvent.KEYCODE_ENTER);
    String actual = messageInput.getText().toString();

    assertEquals("HEEEY", actual);
}

Here, we are sending the H, E, and Y letter keys and then the ENTER key, using their integer representations, to the Activity under test.

Alternatively, we can create a string by concatenating the keys we desire to send, discarding the KEYCODE prefix, and separating them with spaces that are ultimately ignored:

public void testSendKeyString() {
    requestMessageInputFocus();

    sendKeys("H 3*E Y DPAD_DOWN ENTER");
    String actual = messageInput.getText().toString();

    assertEquals("HEEEY", actual);
}

Here, we did exactly the same as in the previous test, but we used the String "H 3*E Y DPAD_DOWN ENTER". Note that every key in the String can be prefixed by a repeating factor followed by * and the key to be repeated. We used 3*E in our previous example, which is the same as E E E, that is, three times the letter E.

If sending repeated keys is what we need in our tests, there is also another alternative that is precisely intended for these cases:

public void testSendRepeatedKeys() {
    requestMessageInputFocus();

    sendRepeatedKeys(
            1, KeyEvent.KEYCODE_H,
            3, KeyEvent.KEYCODE_E,
            1, KeyEvent.KEYCODE_Y,
            1, KeyEvent.KEYCODE_DPAD_DOWN,
            1, KeyEvent.KEYCODE_ENTER);
    String actual = messageInput.getText().toString();

    assertEquals("HEEEY", actual);
}

This is the same test implemented in a different manner. The repetition number precedes each key.

The runTestOnUiThread helper method

The runTestOnUiThread method is a helper method used to run portions of a test on the UI thread. We used this inside the method requestMessageInputFocus() so that we can set the focus on our EditText before waiting for the application to be idle, using Instrumentation.waitForIdleSync(). Also, the runTestOnUiThread method throws an exception, so we have to deal with this case:

private void requestMessageInputFocus() {
    try {
        runTestOnUiThread(new Runnable() {
            @Override
            public void run() {
                messageInput.requestFocus();
            }
        });
    } catch (Throwable throwable) {
        fail("Could not request focus.");
    }
    instrumentation.waitForIdleSync();
}

Alternatively, as we have discussed before, to run a test on the UI thread, we can annotate it with @UiThreadTest. However, sometimes, we need to run only parts of the test on the UI thread, because other parts of it are not suitable to run on that thread (for example, database calls), or we are using other helper methods that provide the infrastructure themselves to use the UI thread, for example the TouchUtils methods.

Summary

We investigated the most relevant building blocks and reusable patterns to create our tests.
Along this journey, we:

Understood the common assertions found in JUnit tests
Explained the specialized assertions found in the Android SDK
Explored Android mock objects and their use in Android tests

Now that we have all the building blocks, it is time to start creating more and more tests to acquire the experience needed to master the technique.

Resources for Article:

Further resources on this subject:
Android Virtual Device Manager [article]
Signing an application in Android using Maven [article]
The AsyncTask and HardwareTask Classes [article]

Signing an application in Android using Maven

Packt
18 Mar 2015
10 min read
In this article written by Patroklos Papapetrou and Jonathan LALOU, authors of the book Android Application Development with Maven, we'll learn about the different modes of digital signing, and how to use Maven to digitally sign applications. The topics that we will explore in this article are:

(For more resources related to this topic, see here.)

Signing an application

Android requires that all packages, in order to be valid for installation on devices, are digitally signed with a certificate. This certificate is used by the Android ecosystem to validate the author of the application. Thankfully, the certificate is not required to be issued by a certificate authority. It would be a total nightmare for every Android developer and it would increase the cost of developing applications. However, if you want to sign the certificate by a trusted authority, like the majority of the certificates used in web servers, you are free to do it.

Android supports two modes of signing: debug and release. Debug mode is used by default during the development of the application, and the release mode when we are ready to release and publish it. In debug mode, when building and packaging an application, the Android SDK automatically generates a certificate and signs the package. So don't worry; even though we haven't told Maven to do anything about signing, Android knows what to do and behind the scenes signs the package with the autogenerated key.

When it comes to distributing an application, debug mode is not enough; so, we need to prepare our own self-signed certificate and instruct Maven to use it instead of the default one. Before we dive into the Maven configuration, let us quickly remind you how to issue your own certificate. Open a command window, and type the following command:

keytool -genkey -v -keystore my-android-release-key.keystore -alias my-android-key -keyalg RSA -keysize 2048 -validity 10000

If the keytool command line utility is not in your path, then it's a good idea to add it. It's located under the %JAVA_HOME%/bin directory. Alternatively, you can execute the command inside this directory.

Let us explain some parameters of this command. We use the keytool command line utility to create a new keystore file, under the name my-android-release-key.keystore, inside the current directory. The -alias parameter is used to define an alias name for this key, and it will be used later on in the Maven configuration. We also specify the algorithm, RSA, and the key size, 2048; finally, we set the validity period in days. The generated key will be valid for 10,000 days, long enough for many, many new versions of our application!

After running the command, you will be prompted to answer a series of questions. First, type twice a password for the keystore file. It's a good idea to note it down because we will use it again in our Maven configuration. Type the word secret in both prompts. Then, we need to provide some identification data, like name, surname, organization details, and location. Finally, we need to set a password for the key. If we want to keep the same password as the keystore file, we can just hit RETURN.

If everything goes well, we will see the final message that informs us that the key is being stored in the keystore file with the name we just defined. After this, our key is ready to be used to sign our Android application.
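If you want to double-check the contents of the generated keystore before wiring it into Maven, you can list it with a standard keytool invocation (you will be prompted for the keystore password):

keytool -list -v -keystore my-android-release-key.keystore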
The key used in debug mode can be found in this file: ~/.android/debug.keystore. It contains the following information:

Keystore name: "debug.keystore"
Keystore password: "android"
Key alias: "androiddebugkey"
Key password: "android"
CN: "CN=Android Debug,O=Android,C=US"

Now, it's time to let Maven use the key we just generated. Before we add the necessary configuration to our pom.xml file, we need to add a Maven profile to the global Maven settings. The profiles defined in the user settings.xml file can be used by all Maven projects on the same machine. This file is usually located under this folder: %M2_HOME%/conf/settings.xml. One fundamental advantage of defining global profiles in the user's Maven settings is that this configuration is not shared through the pom.xml file with all developers who work on the application. The settings.xml file should never be kept under the Source Control Management (SCM) tool. Users can safely enter personal or critical information like passwords and keys, which is exactly the case in our example. Now, edit the settings.xml file and add the following lines inside the <profiles> element:

<profile>
  <id>release</id>
  <properties>
    <sign.keystore>/path/to/my/keystore/my-android-release-key.keystore</sign.keystore>
    <sign.alias>my-android-key</sign.alias>
    <sign.storepass>secret</sign.storepass>
    <sign.keypass>secret</sign.keypass>
  </properties>
</profile>

Keep in mind that the keystore name, the alias name, the keystore password, and the key password should be the ones we used when we created the keystore file. Clearly, storing passwords in plain text, even in a file that is normally protected from other users, is not a very good practice. A quick way to make the password slightly less easy to read is to use XML entities to write the value. Some sites on the Internet, like http://coderstoolbox.net/string/#!encoding=xml&action=encode&charset=none, provide such encodings. The value will be resolved as plain text when the file is loaded, so Maven won't even notice it. In this case, the password would become:

<sign.storepass>&#115;&#101;&#99;&#114;&#101;&#116;</sign.storepass>

We have prepared our global profile and the corresponding properties, and so we can now edit the pom.xml file of the parent project and do the proper configuration. Adding common configuration to the parent file for all Maven submodules is a good practice in our case because at some point, we would like to release both free and paid versions, and it's preferable to avoid duplicating the same configuration in two files. We want to create a new profile and add all the necessary settings there, because the release process is not something that runs every day during the development phase. It should run only at a final stage, when we are ready to deploy our application. Our first priority is to tell Maven to disable debug mode. Then, we need to specify a new Maven plugin: maven-jarsigner-plugin, which is responsible for driving the verification and signing process for custom/private certificates.
You can find the complete release profile as follows:

<profiles>
  <profile>
    <id>release</id>
    <build>
      <plugins>
        <plugin>
          <groupId>com.jayway.maven.plugins.android.generation2</groupId>
          <artifactId>android-maven-plugin</artifactId>
          <extensions>true</extensions>
          <configuration>
            <sdk>
              <platform>19</platform>
            </sdk>
            <sign>
              <debug>false</debug>
            </sign>
          </configuration>
        </plugin>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-jarsigner-plugin</artifactId>
          <executions>
            <execution>
              <id>signing</id>
              <phase>package</phase>
              <goals>
                <goal>sign</goal>
                <goal>verify</goal>
              </goals>
              <inherited>true</inherited>
              <configuration>
                <removeExistingSignatures>true</removeExistingSignatures>
                <archiveDirectory />
                <includes>
                  <include>${project.build.directory}/${project.artifactId}.apk</include>
                </includes>
                <keystore>${sign.keystore}</keystore>
                <alias>${sign.alias}</alias>
                <storepass>${sign.storepass}</storepass>
                <keypass>${sign.keypass}</keypass>
                <verbose>true</verbose>
              </configuration>
            </execution>
          </executions>
        </plugin>
      </plugins>
    </build>
  </profile>
</profiles>

We instruct the JAR signer plugin to be triggered during the package phase and run the goals of verification and signing. Furthermore, we tell the plugin to remove any existing signatures from the package and use the variable values we have defined in our global profile: ${sign.keystore}, ${sign.alias}, ${sign.storepass}, and ${sign.keypass}. The verbose setting is used here to verify that the private key is used instead of the debug key.

Before we run our new profile, for comparison purposes, let's package our application without using the signing capability. Open a terminal window, and type the following Maven command:

mvn clean package

When the command finishes, navigate to the paid version target directory, /PaidVersion/target, and take a look at its contents. You will notice that there are two package files: PaidVersion.jar (size 14KB) and PaidVersion.apk (size 46KB). Since we haven't yet discussed releasing an application, we can just run the following command in a terminal window and see how the private key is used for signing the package:

mvn clean package -Prelease

You have probably noticed that we use only one profile name, and that is the beauty of Maven. Profiles with the same ID are merged together, and so it's easier to understand and maintain the build scripts. If you want to double-check that the package is signed with your private certificate, you can monitor the Maven output, and at some point you will see something similar to the following image:

This output verifies that the classes have been properly signed through the execution of the Maven JAR signer plugin. To better understand how signing and optimization affect the generated packages, we can navigate again to the /PaidVersion/target directory and take a look at the files created. You will be surprised to see that the same packages exist again but they have different sizes.
The PaidVersion.jar file has a size of 18KB, which is greater than the file generated without signing. However, the PaidVersion.apk is smaller (size 44KB) than the first version. These differences happen because the .jar file is signed with the new certificate, so its size grows slightly. But what about the .apk file? Shouldn't it also be bigger, since every file is signed with the certificate? The answer can be easily found if we open both the .apk files and compare them. They are compressed files, so any well-known tool that opens compressed files can do this.

If you take a closer look at the contents of the .apk files, you will notice that the contents of the .apk file that was generated using the private certificate are slightly larger, except the resources.arsc file. This file, in the case of custom signing, is compressed, whereas in the debug signing mode it is in raw format. This explains why the signed version of the .apk file is smaller than the original one.

There's also one last thing that verifies the correct completion of signing. Keep the compressed .apk files opened and navigate to the META-INF directory. This directory contains a couple of different files. The package signed with our personal certificate contains the key files named with the alias we used when we created the certificate, and the package signed in debug mode contains the default certificate used by Android.

Summary

We know that Android developers struggle when it comes to proper packaging and release of an application to the public. We have analyzed in detail the necessary steps and Maven configuration for correct and complete packaging. After reading this article, you should have a basic knowledge of digitally signing packages, with and without the help of Maven, in Android.

Resources for Article: Further resources on this subject: Best Practices [article] Installing Apache Karaf [article] Apache Maven and m2eclipse [article]

The AsyncTask and HardwareTask Classes

Packt
17 Feb 2015
10 min read
This article is written by Andrew Henderson, the author of Android for the BeagleBone Black. This article will cover the usage of the AsyncTask and HardwareTask classes. (For more resources related to this topic, see here.)

Understanding the AsyncTask class

HardwareTask extends the AsyncTask class, and using it provides a major advantage over the way hardware interfacing is implemented in the gpio app. AsyncTask allows you to perform complex and time-consuming hardware-interfacing tasks without your app becoming unresponsive while the tasks are executed. Each instance of an AsyncTask class can create a new thread of execution within Android. This is similar to how multithreaded programs found on other OSes spin new threads to handle file and network I/O, manage UIs, and perform parallel processing.

The gpio app only used a single thread during its execution. This thread is the main UI thread that is part of all Android apps. The UI thread is designed to handle UI events as quickly as possible. When you interact with a UI element, that element's handler method is called by the UI thread. For example, clicking a button causes the UI thread to invoke the button's onClick() handler. The onClick() handler then executes a piece of code and returns to the UI thread. Android is constantly monitoring the execution of the UI thread. If a handler takes too long to finish its execution, Android shows an Application Not Responding (ANR) dialog to the user. You never want an ANR dialog to appear to the user. It is a sign that your app is running inefficiently (or even not at all!) by spending too much time in handlers within the UI thread.

The Application Not Responding dialog in Android

The gpio app performed reads and writes of the GPIO states very quickly from within the UI thread, so the risk of triggering the ANR was very small. Interfacing with the FRAM is a much slower process. With the BBB's I2C bus clocked at its maximum speed of 400 KHz, it takes approximately 25 microseconds to read or write a byte of data when using the FRAM. While this is not a major concern for small writes, reading or writing the entire 32,768 bytes of the FRAM can take close to a full second to execute! Multiple reads and writes of the full FRAM can easily trigger the ANR dialog, so it is necessary to move these time-consuming activities out of the UI thread.

By placing your hardware interfacing into its own AsyncTask class, you decouple the execution of these time-intensive tasks from the execution of the UI thread. This prevents your hardware interfacing from potentially triggering the ANR dialog.

Learning the details of the HardwareTask class

The AsyncTask base class of HardwareTask provides many different methods, which you can further explore by referring to the Android API documentation. The four AsyncTask methods that are of immediate interest for our hardware-interfacing efforts are:

onPreExecute()
doInBackground()
onPostExecute()
execute()

Of these four methods, only the doInBackground() method executes within its own thread. The other three methods all execute within the context of the UI thread. Only the methods that execute within the UI thread context are able to update screen UI elements.
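To make this division of labor concrete, here is a minimal sketch of an AsyncTask subclass (the class name and comments are illustrative, not taken from the fram app) that shows which of these methods runs where:

// Illustrative sketch only: a bare-bones AsyncTask showing the thread
// boundaries. Only doInBackground() runs on the background thread.
private class SlowIoTask extends AsyncTask<Void, Void, Boolean> {

    @Override
    protected void onPreExecute() {
        // Runs on the UI thread before the background work starts:
        // disable buttons, open hardware connections, and so on.
    }

    @Override
    protected Boolean doInBackground(Void... params) {
        // Runs on its own background thread: the safe place for slow
        // I2C reads and writes that would otherwise trigger the ANR.
        return true;
    }

    @Override
    protected void onPostExecute(Boolean result) {
        // Runs on the UI thread after doInBackground() returns:
        // re-enable buttons and report the result to the user.
    }
}

Such a task would be started from the UI thread with new SlowIoTask().execute();, which is the same pattern HardwareTask follows in the code that we examine next.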
The thread contexts in which the HardwareTask methods and the PacktHAL functions are executed

Much like the MainActivity class of the gpio app, the HardwareTask class provides four native methods that are used to call PacktHAL JNI functions related to FRAM hardware interfacing:

public class HardwareTask extends AsyncTask<Void, Void, Boolean> {

  private native boolean openFRAM(int bus, int address);
  private native String readFRAM(int offset, int bufferSize);
  private native void writeFRAM(int offset, int bufferSize, String buffer);
  private native boolean closeFRAM();

The openFRAM() method initializes your app's access to a FRAM located on a logical I2C bus (the bus parameter) and at a particular bus address (the address parameter). Once the connection to a particular FRAM is initialized via an openFRAM() call, all readFRAM() and writeFRAM() calls will be applied to that FRAM until a closeFRAM() call is made.

The readFRAM() method will retrieve a series of bytes from the FRAM and return it as a Java String. A total of bufferSize bytes are retrieved, starting at an offset of offset bytes from the start of the FRAM. The writeFRAM() method will store a series of bytes to the FRAM. A total of bufferSize characters from the Java string buffer are stored in the FRAM, starting at an offset of offset bytes from the start of the FRAM.

In the fram app, the onClick() handlers for the Load and Save buttons in the MainActivity class each instantiate a new HardwareTask. Immediately after the instantiation of HardwareTask, either the loadFromFRAM() or saveToFRAM() method is called to begin interacting with the FRAM:

public void onClickSaveButton(View view) {
   hwTask = new HardwareTask();
   hwTask.saveToFRAM(this);
}

public void onClickLoadButton(View view) {
   hwTask = new HardwareTask();
   hwTask.loadFromFRAM(this);
}

Both the loadFromFRAM() and saveToFRAM() methods in the HardwareTask class call the base AsyncTask class execute() method to begin the new thread creation process:

public void saveToFRAM(Activity act) {
   mCallerActivity = act;
   isSave = true;
   execute();
}

public void loadFromFRAM(Activity act) {
   mCallerActivity = act;
   isSave = false;
   execute();
}

Each AsyncTask instance can only have its execute() method called once. If you need to run an AsyncTask a second time, you must instantiate a new instance of it and call the execute() method of the new instance. This is why we instantiate a new instance of HardwareTask in the onClick() handlers of the Load and Save buttons, rather than instantiating a single HardwareTask instance and then calling its execute() method many times.

The execute() method automatically calls the onPreExecute() method of the HardwareTask class. The onPreExecute() method performs any initialization that must occur prior to the start of the new thread. In the fram app, this requires disabling various UI elements and calling openFRAM() to initialize the connection to the FRAM via PacktHAL:

protected void onPreExecute() {
   // Some setup goes here
   ...
   if ( !openFRAM(2, 0x50) ) {
      Log.e("HardwareTask", "Error opening hardware");
      isDone = true;
   }
   // Disable the Buttons and TextFields while talking to the hardware
   saveText.setEnabled(false);
   saveButton.setEnabled(false);
   loadButton.setEnabled(false);
}

Disabling your UI elements

When you are performing a background operation, you might wish to keep your app's user from providing more input until the operation is complete.
During a FRAM read or write, we do not want the user to press any UI buttons or change the data held within the saveText text field. If your UI elements remain enabled all the time, the user can launch multiple AsyncTask instances simultaneously by repeatedly hitting the UI buttons. To prevent this, disable any UI elements required to restrict user input until that input is necessary.

Once the onPreExecute() method finishes, the AsyncTask base class spins a new thread and executes the doInBackground() method within that thread. The lifetime of the new thread is only for the duration of the doInBackground() method. Once doInBackground() returns, the new thread will terminate. As everything that takes place within the doInBackground() method is performed in a background thread, it is the perfect place to perform any time-consuming activities that would trigger an ANR dialog if they were executed from within the UI thread. This means that the slow readFRAM() and writeFRAM() calls that access the I2C bus and communicate with the FRAM should be made from within doInBackground():

protected Boolean doInBackground(Void... params) {
   ...
   Log.i("HardwareTask", "doInBackground: Interfacing with hardware");
   try {
      if (isSave) {
         writeFRAM(0, saveData.length(), saveData);
      } else {
         loadData = readFRAM(0, 61);
      }
   } catch (Exception e) {
      ...

The loadData and saveData string variables used in the readFRAM() and writeFRAM() calls are both class variables of HardwareTask. The saveData variable is populated with the contents of the saveEditText text field via a saveEditText.getText().toString() call in the HardwareTask class' onPreExecute() method.

How do I update the UI from within an AsyncTask thread?

While the fram app does not make use of them in this example, the AsyncTask class provides two special methods, publishProgress() and onProgressUpdate(), that are worth mentioning. The AsyncTask thread uses these methods to communicate with the UI thread while the AsyncTask thread is running. The publishProgress() method executes within the AsyncTask thread and triggers the execution of onProgressUpdate() within the UI thread. These methods are commonly used to update progress meters (hence the name publishProgress) or other UI elements that cannot be directly updated from within the AsyncTask thread.

After doInBackground() has completed, the AsyncTask thread terminates. This triggers the calling of onPostExecute() from the UI thread. The onPostExecute() method is used for any post-thread cleanup and updating any UI elements that need to be modified. The fram app uses the closeFRAM() PacktHAL function to close the current FRAM context that it opened with openFRAM() in the onPreExecute() method.

protected void onPostExecute(Boolean result) {
   if (!closeFRAM()) {
      Log.e("HardwareTask", "Error closing hardware");
   }
   ...

The user must now be notified that the task has been completed. If the Load button was pressed, then the string displayed in the loadTextField widget is updated via the MainActivity class updateLoadedData() method. If the Save button was pressed, a Toast message is displayed to notify the user that the save was successful.
Log.i("HardwareTask", "onPostExecute: Completed."); if (isSave) {    Toast toast = Toast.makeText(mCallerActivity.getApplicationContext(),      "Data stored to FRAM", Toast.LENGTH_SHORT);    toast.show(); } else {    ((MainActivity)mCallerActivity).updateLoadedData(loadData); } Giving Toast feedback to the user The Toast class is a great way to provide quick feedback to your app's user. It pops up a small message that disappears after a configurable period of time. If you perform a hardware-related task in the background and you want to notify the user of its completion without changing any UI elements, try using a Toast message! Toast messages can only be triggered by methods that are executing from within the UI thread. An example of the Toast message Finally, the onPostExecute() method will re-enable all of the UI elements that were disabled in onPreExecute(): saveText.setEnabled(true);saveButton.setEnabled(true); loadButton.setEnabled(true); The onPostExecute() method has now finished its execution and the app is back to patiently waiting for the user to make the next fram access request by pressing either the Load or Save button. Are you ready for a challenge? Now that you have seen all of the pieces of the fram app, why not change it to add new functionality? For a challenge, try adding a counter that indicates to the user how many more characters can be entered into the saveText text field before the 60-character limit is reached. Summary The fram app in this article demonstrated how to use the AsyncTask class to perform time-intensive hardware interfacing tasks without stalling the app's UI thread and triggering the ANR dialog. Resources for Article: Further resources on this subject: Sound Recorder for Android [article] Reversing Android Applications [article] Saying Hello to Unity and Android [article]

Android Virtual Device Manager

Packt
06 Feb 2015
8 min read
This article, written by Belén Cruz Zapata, the author of the book Android Studio Essentials, teaches us the uses of the AVD Manager tool. It also introduces us to the Google Play services. (For more resources related to this topic, see here.)

The Android Virtual Device Manager (AVD Manager) is an Android tool accessible from Android Studio to manage the Android virtual devices that will be executed in the Android emulator. To open the AVD Manager from Android Studio, navigate to the Tools | Android | AVD Manager menu option. You can also click on the shortcut from the toolbar.

The AVD Manager displays the list of the existing virtual devices. Since we have not created any virtual device yet, initially the list will be empty. To create our first virtual device, click on the Create Virtual Device button to open the configuration dialog. The first step is to select the hardware configuration of the virtual device. The hardware definitions are listed on the left side of the window. Select one of them, like the Nexus 5, to examine its details on the right side, as shown in the following screenshot. Hardware definitions can be classified into one of these categories: Phone, Tablet, Wear, or TV.

We can also configure our own hardware device definitions from the AVD Manager. We can create a new definition using the New Hardware Profile button. The Clone Device button creates a duplicate of an existing device. Click on the New Hardware Profile button to examine the existing configuration parameters. The most important parameters that define a device are:

Device Name: The name of the device.
Screen size: The screen size in inches. This value determines the size category of the device. Type a value of 4.0 and notice how the Size value (on the right side) is normal. Now type a value of 7.0 and the Size field changes its value to large. This parameter, along with the screen resolution, also determines the density category.
Resolution: The screen resolution in pixels. This value determines the density category of the device. Having a screen size of 4.0 inches, type a value of 768 x 1280 and notice how the density value is 400 dpi. Change the screen size to 6.0 inches and the density value changes to hdpi. Now change the resolution to 480 x 800 and the density value is mdpi.
RAM: The RAM memory size of the device.
Input: Indicates whether the home, back, or menu buttons of the device are available via software or hardware.
Supported device states: Check the allowed states.
Cameras: Select whether the device has a front camera or a back camera.
Sensors: The sensors available in the device: accelerometer, gyroscope, GPS, and proximity sensor.
Default Skin: Select additional hardware controls.

Create a new device with a screen size of 4.7 inches, a resolution of 800 x 1280, a RAM value of 500 MiB, software buttons, and both portrait and landscape states enabled. Name it My Device. Click on the Finish button. The hardware definition has been added to the list of configurations.

Click on the Next button to continue the creation of the new virtual device. The next step is to select the virtual device system image and the target Android platform. Each platform has its own architecture, so the system images that are installed on your system will be listed along with the rest of the images that can be downloaded (with the Show downloadable system images box checked). Download and select one of the images of the Lollipop release and click on the Next button. Finally, the last step is to verify the configuration of the virtual device.
Enter the name of the Android Virtual Device in the AVD Name field. Give the virtual device a meaningful name to recognize it easily, such as AVD_nexus5_api21. Click on the Show Advanced Settings button. The settings that we can configure for the virtual device are the following:

Emulation Options: The Store a snapshot for faster startup option saves the state of the emulator in order to load it faster the next time. The Use Host GPU option tries to use the host's GPU hardware acceleration to run the emulator faster.
Custom skin definition: Select whether additional hardware controls are displayed in the emulator.
Memory and Storage: Select the memory parameters of the virtual device. Leave the default values, unless a warning message is shown; in this case, follow the instructions of the message. For example, select 1536M for the RAM memory and 64 for the VM Heap. The Internal Storage can also be configured; select, for example, 200 MiB. Select the size of the SD Card, or select a file to behave as the SD card.
Device: Select one of the available device configurations. These configurations are the ones we tested in the layout editor preview. Select the Nexus 5 device to load its parameters in the dialog.
Target: Select the device's Android platform. We have to create one virtual device with the minimum platform supported by our application, and another virtual device with the target platform of our application. For this first virtual device, select the target platform, Android 4.4.2 - API Level 19.
CPU/ABI: Select the device architecture. The value of this field is set when we select the target platform. Each platform has its own architecture, so if we do not have it installed, the following message will be shown: No system images installed for this target. To solve this, open the SDK Manager and search for one of the architectures of the target platform, ARM EABI v7a System Image or Intel x86 Atom System Image.
Keyboard: Select whether a hardware keyboard is displayed in the emulator. Check it.
Skin: Select whether additional hardware controls are displayed in the emulator. You can select the Skin with dynamic hardware controls option.
Front Camera: Select whether the emulator has a front camera or a back camera. The camera can be emulated, or it can be real via the use of a webcam from the computer. Select None for both cameras.
Network: Select the speed of the simulated network, and select the delay in processing data across the network.

The new virtual device is now listed in the AVD Manager. Select the recently created virtual device to enable the remaining actions:

Start: Run the virtual device.
Edit: Edit the virtual device configuration.
Duplicate: Creates a new device configuration displaying the last step of the creation process. You can change its configuration parameters and then verify the new device.
Wipe Data: Removes the user files from the virtual device.
Show on Disk: Opens the virtual device directory on your system.
View Details: Opens a dialog detailing the virtual device characteristics.
Delete: Delete the virtual device.

Click on the Start button. The emulator will be opened, as shown in the following screenshot. Wait until it is completely loaded, and then you will be able to try it. In Android Studio, open the main layout with the graphical editor and click on the list of devices.
As the following screenshot shows, our custom device definition appears and we can select it to preview the layout:

Navigation Editor

The Navigation Editor is a tool to create and structure the layouts of the application using a graphical viewer. To open this tool, navigate to the Tools | Android | Navigation Editor menu. The tool opens a file in XML format named main.nvg.xml. This file is stored in your project at /.navigation/app/raw/. Since there is only one layout and one activity in our project, the navigation editor only shows this main layout. If you select the layout, detailed information about it is displayed on the right panel of the editor. If you double-click on the layout, the XML layout file will be opened in a new tab.

We can create a new activity by right-clicking in the editor and selecting the New Activity option. We can also add transitions from the controls of a layout by Shift-clicking on a control and then dragging it to the target activity. Open the main layout and create a new button with the label Open Activity:

<Button
    android:id="@+id/button_open"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:layout_below="@+id/button_accept"
    android:layout_centerHorizontal="true"
    android:text="Open Activity" />

Open the Navigation Editor and add a second activity. Now the navigation editor displays both activities, as the next screenshot shows. Now we can add the navigation between them. Shift-drag from the new button of the main activity to the second activity. A blue line and a pink circle have been added to represent the new navigation. Select the navigation relationship to see its details on the right panel, as shown in the following screenshot. The right panel shows the source activity, the destination activity, and the gesture that triggers the navigation.

Now open our main activity class and notice the new code that has been added to implement the recently created navigation. The onCreate method now contains the following code:

findViewById(R.id.button_open).setOnClickListener(
    new View.OnClickListener() {
        @Override
        public void onClick(View v) {
            MainActivity.this.startActivity(
                new Intent(MainActivity.this, Activity2.class));
        }
    });

This code sets the onClick method of the new button, from where the second activity is launched.

Summary

This article taught us about the Navigation Editor tool and got us acquainted with the AVD Manager tool. It also showed how to integrate the Google Play services with a project in Android Studio.

Resources for Article: Further resources on this subject: Android Native Application API [article] Creating User Interfaces [article] Android 3.0 Application Development: Multimedia Management [article]

Unit and Functional Tests

Packt
21 Aug 2014
13 min read
In this article by Belén Cruz Zapata and Antonio Hernández Niñirola, authors of the book Testing and Securing Android Studio Applications, you will learn how to use unit tests that allow developers to quickly verify the state and behavior of an activity on its own. (For more resources related to this topic, see here.)

Testing activities

There are two possible modes of testing activities:

Functional testing: In functional testing, the activity being tested is created using the system infrastructure. The test code can communicate with the Android system, send events to the UI, or launch another activity.
Unit testing: In unit testing, the activity being tested is created with minimal connection to the system infrastructure. The activity is tested in isolation.

In this article, we will explore the Android testing API to learn about the classes and methods that will help you test the activities of your application.

The test case classes

The Android testing API is based on JUnit. Android JUnit extensions are included in the android.test package. The following figure presents the main classes that are involved when testing activities:

Let's learn more about these classes:

TestCase: This JUnit class belongs to the junit.framework package and represents a general test case. This class is extended by the Android API.
InstrumentationTestCase: This class and its subclasses belong to the android.test package. It represents a test case that has access to instrumentation.
ActivityTestCase: This class is used to test activities, but you should use one of its more specific subclasses instead of the main class.
ActivityInstrumentationTestCase2: This class provides functional testing of an activity and is parameterized with the activity under test. For example, to evaluate your MainActivity, you have to create a test class named MainActivityTest that extends the ActivityInstrumentationTestCase2 class, shown as follows:

public class MainActivityTest extends ActivityInstrumentationTestCase2<MainActivity>

ActivityUnitTestCase: This class provides unit testing of an activity and is parameterized with the activity under test. For example, to evaluate your MainActivity, you can create a test class named MainActivityUnitTest that extends the ActivityUnitTestCase class, shown as follows:

public class MainActivityUnitTest extends ActivityUnitTestCase<MainActivity>

There is a new term that has emerged from the previous classes: instrumentation.

Instrumentation

The execution of an application is ruled by its lifecycle, which is determined by the Android system. For example, the lifecycle of an activity is controlled by the invocation of some methods: onCreate(), onResume(), onDestroy(), and so on. These methods are called by the Android system and your code cannot invoke them, except while testing. The mechanism that allows your test code to invoke callback methods is known as Android instrumentation.

Android instrumentation is a set of methods to control a component independently of its normal lifecycle. To invoke the callback methods from your test code, you have to use the classes that are instrumented. For example, to start the activity under test, you can use the getActivity() method that returns the activity instance. For each test method invocation, the activity will not be created until the first time this method is called. Instrumentation is necessary to test activities, considering that the lifecycle of an activity is based on its callback methods.
These callback methods include the UI events as well. From an instrumented test case, you can use the getInstrumentation() method to get access to an Instrumentation object. This class provides methods related to the system's interaction with the application. The complete documentation about this class can be found at http://developer.android.com/reference/android/app/Instrumentation.html. Some of the most important methods are as follows:

The addMonitor method: This method adds a monitor to get information about a particular type of Intent and can be used to look for the creation of an activity. A monitor can be created by indicating an IntentFilter or by giving the name of the activity class to the monitor. Optionally, the monitor can block the activity start and return its canned result instead. You can use the following call definitions to add a monitor:

ActivityMonitor addMonitor (IntentFilter filter, ActivityResult result, boolean block)
ActivityMonitor addMonitor (String cls, ActivityResult result, boolean block)

The following line is an example code line to add a monitor:

Instrumentation.ActivityMonitor monitor = getInstrumentation().addMonitor(SecondActivity.class.getName(), null, false);

The activity lifecycle methods: The methods to call the activity lifecycle methods are: callActivityOnCreate, callActivityOnDestroy, callActivityOnPause, callActivityOnRestart, callActivityOnResume, callActivityOnStart, finish, and so on. For example, you can pause an activity using the following line of code:

getInstrumentation().callActivityOnPause(mActivity);

The getTargetContext method: This method returns the context for the application.
The startActivitySync method: This method starts a new activity and waits for it to begin running. The function returns when the new activity has gone through the full initialization after the call to its onCreate method.
The waitForIdleSync method: This method waits for the application to be idle synchronously.

The test case methods

JUnit's TestCase class provides the following protected methods that can be overridden by the subclasses:

setUp(): This method is used to initialize the fixture state of the test case. It is executed before every test method is run. If you override this method, the first line of code should call the superclass. A standard setUp method should follow the given code definition:

@Override
protected void setUp() throws Exception {
  super.setUp();
  // Initialize the fixture state
}

tearDown(): This method is used to tear down the fixture state of the test case. You should use this method to release resources. It is executed after running every test method. If you override this method, the last line of the code should call the superclass, shown as follows:

@Override
protected void tearDown() throws Exception {
  // Tear down the fixture state
  super.tearDown();
}

The fixture state is usually implemented as a group of member variables, but it can also consist of database or network connections. If you open or initialize connections in the setUp method, they should be closed or released in the tearDown method. When testing activities in Android, you have to initialize the activity under test in the setUp method. This can be done with the getActivity() method.

The Assert class and methods

JUnit's TestCase class extends the Assert class, which provides a set of assert methods to check for certain conditions. When an assert method fails, AssertionFailedError is thrown. The test runner will handle the multiple assertion failures to present the testing results. A minimal test case putting setUp(), getActivity(), and a simple assertion together is sketched after this paragraph.
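The following is a bare-bones sketch (the test method name and failure message are illustrative) of how these pieces combine for the MainActivity example introduced earlier:

public class MainActivityTest extends ActivityInstrumentationTestCase2<MainActivity> {

  private MainActivity mActivity;

  public MainActivityTest() {
    super(MainActivity.class);
  }

  @Override
  protected void setUp() throws Exception {
    super.setUp();
    // getActivity() launches MainActivity the first time it is called
    mActivity = getActivity();
  }

  public void testActivityLaunches() {
    assertNotNull("MainActivity could not be launched", mActivity);
  }
}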
Optionally, you can specify the error message that will be shown if the assert fails. You can read the Android reference for the Assert class to examine all the available methods at http://developer.android.com/reference/junit/framework/Assert.html. The assertion methods provided by the Assert superclass are as follows:

assertEquals: This method checks whether the two values provided are equal. It receives the expected and actual values that are to be compared with each other. This method is overloaded to support values of different types, such as short, String, char, int, byte, boolean, float, double, long, or Object. For example, the following assertion method throws an exception since both values are not equal:

assertEquals(true, false);

assertTrue or assertFalse: These methods check whether the given Boolean condition is true or false.
assertNull or assertNotNull: These methods check whether an object is null or not.
assertSame or assertNotSame: These methods check whether two references point to the same object or not.
fail: This method fails a test. It can be used to make sure that a part of the code is never reached, for example, if you want to test that a method throws an exception when it receives a wrong value, as shown in the following code snippet:

try {
  dontAcceptNullValuesMethod(null);
  fail("No exception was thrown");
} catch (NullPointerException e) {
  // OK
}

The Android testing API, which extends JUnit, provides additional and more powerful assertion classes: ViewAsserts and MoreAsserts.

The ViewAsserts class

The assertion methods offered by JUnit's Assert class are not enough if you want to test some special Android objects such as the ones related to the UI. The ViewAsserts class implements more sophisticated methods related to the Android views, that is, for the View objects. The whole list with all the assertion methods can be explored in the Android reference about this class at http://developer.android.com/reference/android/test/ViewAsserts.html. Some of them are described as follows:

assertBottomAligned or assertLeftAligned or assertRightAligned or assertTopAligned(View first, View second): These methods check that the two specified View objects are bottom, left, right, or top aligned, respectively
assertGroupContains or assertGroupNotContains(ViewGroup parent, View child): These methods check whether the specified ViewGroup object contains the specified child View
assertHasScreenCoordinates(View origin, View view, int x, int y): This method checks that the specified View object has a particular position on the origin screen
assertHorizontalCenterAligned or assertVerticalCenterAligned(View reference, View view): These methods check that the specified View object is horizontally or vertically aligned with respect to the reference view
assertOffScreenAbove or assertOffScreenBelow(View origin, View view): These methods check that the specified View object is above or below the visible screen
assertOnScreen(View origin, View view): This method checks that the specified View object is loaded on the screen even if it is not visible

The MoreAsserts class

The Android API extends some of the basic assertion methods from the Assert class to present some additional methods.
Some of the methods included in the MoreAsserts class are:

assertContainsRegex(String expectedRegex, String actual): This method checks that the actual given string contains a match of the expected regular expression (regex)
assertContentsInAnyOrder(Iterable<?> actual, Object… expected): This method checks that the iterable object contains the given objects in any order
assertContentsInOrder(Iterable<?> actual, Object… expected): This method checks that the iterable object contains the given objects, but in the same order
assertEmpty: This method checks whether a collection is empty
assertEquals: This method extends the assertEquals method from JUnit to cover collections: Set objects, int arrays, String arrays, Object arrays, and so on
assertMatchesRegex(String expectedRegex, String actual): This method checks whether the expected regex matches the given actual string exactly

Opposite methods such as assertNotContainsRegex, assertNotEmpty, assertNotEquals, and assertNotMatchesRegex are included as well. All these methods are overloaded to optionally include a custom error message. The Android reference about the MoreAsserts class can be inspected to learn more about these assert methods at http://developer.android.com/reference/android/test/MoreAsserts.html.

UI testing and TouchUtils

The test code is executed in a different thread than the application under test, although both threads run in the same process. When testing the UI of an application, UI objects can be referenced from the test code, but you cannot change their properties or send events. There are two strategies to invoke methods that should run in the UI thread:

Activity.runOnUiThread(): This method runs a Runnable object on the UI thread, in whose run() method you can add the code to execute. For example, if you want to request the focus of a UI component:

public void testComponent() {
  mActivity.runOnUiThread(
    new Runnable() {
      public void run() {
        mComponent.requestFocus();
      }
    }
  );
  …
}

@UiThreadTest: This annotation affects the whole method because it is executed on the UI thread. Considering that the annotation refers to an entire method, statements that do not interact with the UI are not allowed in it. For example, consider the previous example using this annotation, shown as follows:

@UiThreadTest
public void testComponent () {
  mComponent.requestFocus();
  …
}

There is also a helper class that provides methods to perform touch interactions on the view of your application: TouchUtils. The touch events are sent to the UI thread safely from the test thread; therefore, the methods of the TouchUtils class should not be invoked in the UI thread. Some of the methods provided by this helper class are as follows:

The clickView method: This method simulates a click on the center of a view
The drag, dragQuarterScreenDown, dragViewBy, dragViewTo, dragViewToTop methods: These methods simulate a click on a UI element and then drag it accordingly
The longClickView method: This method simulates a long press click on the center of a view
The scrollToTop or scrollToBottom methods: These methods scroll a ViewGroup to the top or bottom

The mock object classes

The Android testing API provides some classes to create mock system objects. Mock objects are fake objects that simulate the behavior of real objects but are totally controlled by the test. They allow isolation of tests from the rest of the system. Mock objects can, for example, simulate a part of the system that has not been implemented yet, or a part that is not practical to be tested.
In Android, the following mock classes can be found: MockApplication, MockContext, MockContentProvider, MockCursor, MockDialogInterface, MockPackageManager, MockResources, and MockContentResolver. These classes are under the android.test.mock package. The methods of these objects are nonfunctional and throw an exception if they are called. You have to override the methods that you want to use.

Creating an activity test

In this section, we will create an example application so that we can learn how to implement the test cases to evaluate it. Some of the methods presented in the previous section will be put into practice. You can download the example code files from your account at http://www.packtpub.com.

Our example is a simple alarm application that consists of two activities: MainActivity and SecondActivity. The MainActivity implements a self-built digital clock using text views and buttons. The purpose of creating a self-built digital clock is to have more code and elements to use in our tests. The layout of MainActivity is a relative layout that includes two text views: one for the hour (the tvHour ID) and one for the minutes (the tvMinute ID). There are two buttons below the clock: one to subtract 10 minutes from the clock (the bMinus ID) and one to add 10 minutes to the clock (the bPlus ID). There is also an edit text field to specify the alarm name. Finally, there is a button to launch the second activity (the bValidate ID). Each button has a pertinent method that receives the click event when the button is pressed. The layout looks like the following screenshot:

The SecondActivity receives the hour from the MainActivity and shows its value in a text view, simulating that the alarm was saved. The objective of creating this second activity is to be able to test the launch of another activity in our test case.

Summary

In this article, you learned how to use unit tests that allow developers to quickly verify the state and behavior of an activity on its own.

Resources for Article: Further resources on this subject: Creating Dynamic UI with Android Fragments [article] Saying Hello to Unity and Android [article] Augmented Reality [article]

How to Convert POJO to JSON Using Gson in Android Studio

Troy Miles
01 Jul 2014
6 min read
JSON has become the de facto standard of data exchange on the web. Compared to its cousin XML, it is smaller in size and faster to both create and parse. In fact, it seems so simple that many developers roll their own code to convert plain old Java objects (POJOs) to and from JSON. For simple objects, it is fairly easy to write the conversion code, but as your objects grow more complex, your code's complexity grows as well. Do you really want to maintain a bunch of code whose functionality is not truly intrinsic to your app?

Luckily, there is no reason for you to do so. There are quite a few alternatives to writing your own Java JSON serializer/deserializer; in fact, json.org lists 25 of them. One of them, Gson, was created by Google for use on internal projects and was later open sourced. Gson is hosted on Google Code and the source code is available in an SVN repo.

Create an Android app

The process of converting a POJO to JSON is called serialization. The reversed process is deserialization. A big reason that Gson is such a popular library is how simple it makes both processes. For both, the only thing you need is the Gson class. Let's create a simple Android app and see how simple Gson is to use.

Start Android Studio and select New Project. Change the Application name to GsonTest. Click Next. Click Next again. Click Finish.

At this point we have a complete Android hello world app. In past Android IDEs, we would add the Gson library at this point, but we don't do that anymore. Instead, we add a Gson dependency to our build.gradle script and that will take care of everything else for us. It is super important to edit the correct Gradle file. There is one at the root directory, but the one we want is in the app directory. Double-click it to open. Locate the dependencies section near the bottom of the script. After the last entry add the following line:

compile 'com.google.code.gson:gson:2.2.4'

After you add it, save the script and then click the Sync Project with Gradle Files icon. It is the fifth icon from the right-hand side in the toolbar. At this point, the Gson library is visible to your app. So let's build some test code.

Create test code with JSON

For our test we are going to use the JSON Test web service at https://www.jsontest.com/. It is a testing platform for JSON. Basically, it gives us a place to send data in order to test whether we are properly serializing and deserializing data. JSON Test has a lot of services, but we will use the validate service. You pass it a JSON string URL-encoded as a query string, and it will reply with a JSON object that indicates whether or not the JSON was encoded correctly, as well as some statistical information.

The first thing we need to do is create two classes. The first class, TestPojo, is the Java class that we are going to serialize and send to JSON Test. TestPojo doesn't do anything important. It is just for our test; however, it contains several different types of objects: ints, strings, and arrays of ints. Classes that you create can easily be much more complicated, but don't worry, Gson can handle it, for example:

package com.tekadept.gsontest.app;

public class TestPojo {
    private int value1 = 1;
    private String value2 = "abc";
    private int values[] = {1, 2, 3, 4};
    private transient int value3 = 3;

    // no-args ctor
    TestPojo() {
    }
}

Gson will also respect the Java transient modifier, which specifies that a field should not be serialized. Any field with it will not appear in the JSON.
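As a quick sketch of what this buys us (the log tag here is illustrative), serializing an instance of TestPojo takes just two lines, and the transient field is silently dropped:

Gson gson = new Gson();
String json = gson.toJson(new TestPojo());
// json now holds: {"value1":1,"value2":"abc","values":[1,2,3,4]}
// value3 does not appear because it is declared transient
Log.v("GsonTest", json);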
The second class, JsonValidate, will hold the results of our call to JSON Test. In order to make it easy to parse, I've kept the field names exactly the same as those returned by the service, except for one. Gson has an annotation, @SerializedName; if you place it before a field name, you can have the class version of a field named differently than the JSON name. For example, if we wanted to name the validate field isValid, all we would have to do is:

package com.tekadept.gsontest.app;

import com.google.gson.annotations.SerializedName;

public class JsonValidate {

    public String object_or_array;
    public boolean empty;
    public long parse_time_nanoseconds;
    @SerializedName("validate")
    public boolean isValid;
    public int size;
}

By using the @SerializedName annotation, our name for the JSON field validate becomes isValid. Just remember that you only need to use the annotation when you change the field's name.

In order to call JSON Test's validate service, we follow the best practice of not doing it on the UI thread by using an async task. An async task has four steps: onPreExecute, doInBackground, onProgressUpdate, and onPostExecute. The doInBackground method happens on another thread. It allows us to wait for the JSON Test service to respond to us without triggering the dreaded Application Not Responding error. You can see this in action in the following code:

@Override
protected String doInBackground(String... notUsed) {
    TestPojo tp = new TestPojo();
    Gson gson = new Gson();
    String result = null;

    try {
        String json = URLEncoder.encode(gson.toJson(tp), "UTF-8");
        String url = String.format("%s%s", Constants.JsonTestUrl, json);
        result = getStream(url);
    } catch (Exception ex){
        Log.v(Constants.LOG_TAG, "Error: " + ex.getMessage());
    }
    return result;
}

To encode our Java object, all we need to do is create an instance of the Gson class, then call its toJson method, passing an instance of the class we wish to serialize. Deserialization is nearly as simple. In the onPostExecute method, we get the string of JSON from the web service. We then call the convertFromJson method that does the conversion. First it makes sure that it got a valid string, then it does the conversion by calling Gson's fromJson method, passing the string and the class, as follows:

@Override
protected void onPostExecute(String result) {

    // convert JSON string to a POJO
    JsonValidate jv = convertFromJson(result);
    if (jv != null) {
        Log.v(Constants.LOG_TAG, "Conversion Succeeded: " + result);
    } else {
        Log.v(Constants.LOG_TAG, "Conversion Failed");
    }
}

private JsonValidate convertFromJson(String result) {
    JsonValidate jv = null;
    if (result != null && result.length() > 0) {
        try {
            Gson gson = new Gson();
            jv = gson.fromJson(result, JsonValidate.class);
        } catch (Exception ex) {
            Log.v(Constants.LOG_TAG, "Error: " + ex.getMessage());
        }
    }
    return jv;
}

Conclusion

For most developers this is all you need to know. There is a complete guide to Gson at https://sites.google.com/site/gson/gson-user-guide. The complete source code for the test app is at https://github.com/Rockncoder/GsonTest. Discover more Android tutorials and extra content on our Android page.

Writing Tag Content

Packt
10 Jun 2014
9 min read
(For more resources related to this topic, see here.)

The NDEF message is composed of one or more NDEF records, and each record contains the payload and a header in which the data length, type, and identifier are stored. In this article, we will create some examples that will allow us to work with the NDEF standard and start writing NFC applications.

Working with the NDEF record

Android provides an easy way to read and write data when it is formatted as per the NDEF standard. This format is the easiest way for us to work with tag data because it saves us from performing lots of operations and processes of reading and writing raw bytes. So, unless we need to get our hands dirty and write our own protocol, this is the way to go (you can still build it on top of NDEF and achieve a custom, yet standards-based, protocol).

Getting ready

Make sure you have a working Android development environment. If you don't, the ADT Bundle is a good environment to start with (you can access it by navigating to http://developer.android.com/sdk/index.html). Make sure you have an NFC-enabled Android device or a virtual test environment. It will be assumed that Eclipse is the development IDE.

How to do it...

We are going to create an application that writes any NDEF record to a tag by performing the following steps:

Open Eclipse and create a new Android application project named NfcBookCh3Example1 with the package name nfcbook.ch3.example1. Make sure the AndroidManifest.xml file is correctly configured.

Open the MainActivity.java file located under com.nfcbook.ch3.example1 and add the following class member:

private NfcAdapter nfcAdapter;

Implement the enableForegroundDispatch method and filter tags by using Ndef and NdefFormatable. Invoke it in the onResume method:

private void enableForegroundDispatch() {
  Intent intent = new Intent(this, MainActivity.class).addFlags(Intent.FLAG_RECEIVER_REPLACE_PENDING);
  PendingIntent pendingIntent = PendingIntent.getActivity(this, 0, intent, 0);
  IntentFilter[] intentFilter = new IntentFilter[] {};
  String[][] techList = new String[][] {
    { android.nfc.tech.Ndef.class.getName() },
    { android.nfc.tech.NdefFormatable.class.getName() }
  };
  if ( Build.DEVICE.matches(".*generic.*") ) {
    // clean up the tech filter when in the emulator since it doesn't work properly
    techList = null;
  }
  nfcAdapter.enableForegroundDispatch(this, pendingIntent, intentFilter, techList);
}

Instantiate the nfcAdapter class field in the onCreate method:

protected void onCreate(Bundle savedInstanceState) {
  ...
  nfcAdapter = NfcAdapter.getDefaultAdapter(this);
}

Implement the formatTag method:

private boolean formatTag(Tag tag, NdefMessage ndefMessage) {
  try {
    NdefFormatable ndefFormat = NdefFormatable.get(tag);
    if (ndefFormat != null) {
      ndefFormat.connect();
      ndefFormat.format(ndefMessage);
      ndefFormat.close();
      return true;
    }
  } catch (Exception e) {
    Log.e("formatTag", e.getMessage());
  }
  return false;
}

Implement the writeNdefMessage method:

private boolean writeNdefMessage(Tag tag, NdefMessage ndefMessage) {
  try {
    if (tag != null) {
      Ndef ndef = Ndef.get(tag);
      if (ndef == null) {
        return formatTag(tag, ndefMessage);
      } else {
        ndef.connect();
        if (ndef.isWritable()) {
          ndef.writeNdefMessage(ndefMessage);
          ndef.close();
          return true;
        }
        ndef.close();
      }
    }
  } catch (Exception e) {
    Log.e("writeNdefMessage", e.getMessage());
  }
  return false;
}

Implement the isNfcIntent method:

boolean isNfcIntent(Intent intent) {
  return intent.hasExtra(NfcAdapter.EXTRA_TAG);
}

Override the onNewIntent method using the following code:

@Override
protected void onNewIntent(Intent intent) {
  try {
    if (isNfcIntent(intent)) {
      NdefRecord ndefEmptyRecord = new NdefRecord(NdefRecord.TNF_EMPTY, new byte[]{}, new byte[]{}, new byte[]{});
      NdefMessage ndefMessage = new NdefMessage(new NdefRecord[] { ndefEmptyRecord });
      Tag tag = intent.getParcelableExtra(NfcAdapter.EXTRA_TAG);
      if (writeNdefMessage(tag, ndefMessage)) {
        Toast.makeText(this, "Tag written!", Toast.LENGTH_SHORT).show();
      } else {
        Toast.makeText(this, "Failed to write tag", Toast.LENGTH_SHORT).show();
      }
    }
  } catch (Exception e) {
    Log.e("onNewIntent", e.getMessage());
  }
}

Override the onPause method and insert the following code:

@Override
protected void onPause() {
  super.onPause();
  nfcAdapter.disableForegroundDispatch(this);
}

Open the NFC Simulator tool and simulate a few tags. The tag should be marked as modified, as shown in the following screenshot:

In the NFC Simulator, a tag name that ends with _LOCKED is not writable, so we won't be able to write any content to the tag and, therefore, a Failed to write tag toast will appear.

How it works...

NFC intents carry an extra value with them that is a virtual representation of the tag and can be obtained using the NfcAdapter.EXTRA_TAG key. We can get information about the tag, such as the tag ID and its content type, through this object. In the onNewIntent method, we retrieve the tag instance and then use other classes provided by Android to easily read, write, and retrieve even more information about the tag. These classes are as follows:

android.nfc.tech.Ndef: This class provides methods to retrieve and modify the NdefMessage object on a tag
android.nfc.tech.NdefFormatable: This class provides methods to format tags that are capable of being formatted as NDEF

The first thing we need to do while writing a tag is to call the get(Tag tag) method from the Ndef class, which will return an instance of the same class. Then, we need to open a connection with the tag by calling the connect() method. With an open connection, we can now write an NDEF message to the tag by calling the writeNdefMessage(NdefMessage msg) method. Checking whether the tag is writable or not is always a good practice to prevent unwanted exceptions. We can do this by calling the isWritable() method. Note that this method may not account for physical write protection. When everything is done, we call the close() method to release the previously opened connection.
If the get(Tag tag) method returns null, it means that the tag is not formatted as per the NDEF standard, and we should try to format it correctly. For this, we use the NdefFormatable class in the same way as we did with the Ndef class; however, in this case we format the tag and write a message in a single step by calling the format(NdefMessage firstMessage) method. So, we call the get(Tag tag) method, open a connection by calling connect(), format the tag and write the message by calling format(NdefMessage firstMessage), and finally close the connection with the close() method. If NdefFormatable.get(Tag tag) returns null, it means that the Android NFC API cannot automatically format the tag to NDEF.

An NDEF message is composed of several NDEF records. Each of these records is composed of four key properties:

- Type Name Format (TNF): This property defines how the type field should be interpreted
- Record Type Definition (RTD): This property is used together with the TNF to help Android create the correct NDEF message and trigger the corresponding intent
- Id: This property lets you define a custom identifier for the record
- Payload: This property contains the content that will be transported in the record

Using combinations of the TNF and the RTD, we can create several different NDEF records to hold our data and even create our own custom types. In this recipe, we created an empty record. The main TNF property values of a record are as follows:

- TNF_ABSOLUTE_URI: This is a URI-type field
- TNF_WELL_KNOWN: This is an NFC Forum-defined URN
- TNF_EXTERNAL_TYPE: This is a URN-type field
- TNF_MIME_MEDIA: This is a MIME type based on the type specified

The main RTD property values of a record are:

- RTD_URI: This indicates that the payload is a URI
- RTD_TEXT: This is the NFC Forum-defined record type for plain text
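As an illustration of one such combination, here is a minimal sketch of building a plain-text record by hand from TNF_WELL_KNOWN and RTD_TEXT; the createTextRecord helper name and the hardcoded English language code are our own assumptions, not part of the recipe:

    private NdefRecord createTextRecord(String text) throws UnsupportedEncodingException {
      byte[] language = "en".getBytes("US-ASCII");
      byte[] textBytes = text.getBytes("UTF-8");
      // Payload layout: status byte, ISO language code, then the text itself
      byte[] payload = new byte[1 + language.length + textBytes.length];
      payload[0] = (byte) language.length; // UTF-8 flag clear; low bits hold the language-code length
      System.arraycopy(language, 0, payload, 1, language.length);
      System.arraycopy(textBytes, 0, payload, 1 + language.length, textBytes.length);
      return new NdefRecord(NdefRecord.TNF_WELL_KNOWN, NdefRecord.RTD_TEXT,
          new byte[0], payload);
    }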
Writing a URI-formatted record

URI is probably the most common content written to NFC tags. It allows you to share a website, an online service, or a link to online content. This can be used, for example, in advertising and marketing.

How to do it...

We are going to create an application that writes URI records to a tag by performing the following steps. The URI will be hardcoded and will point to the Packt Publishing website.

1. Open Eclipse and create a new Android application project named NfcBookCh3Example2. Make sure the AndroidManifest.xml file is configured correctly.

2. Set the minimum SDK version to 14:

    <uses-sdk android:minSdkVersion="14" />

3. Implement the enableForegroundDispatch, isNfcIntent, formatTag, and writeNdefMessage methods from the previous recipe (steps 3, 7, 5, and 6).

4. Add the following class member and instantiate it in the onCreate method:

    private NfcAdapter nfcAdapter;

    protected void onCreate(Bundle savedInstanceState) {
      ...
      nfcAdapter = NfcAdapter.getDefaultAdapter(this);
    }

5. Override the onNewIntent method and place the following code in it:

    @Override
    protected void onNewIntent(Intent intent) {
      try {
        if (isNfcIntent(intent)) {
          NdefRecord uriRecord = NdefRecord.createUri("http://www.packtpub.com");
          NdefMessage ndefMessage = new NdefMessage(new NdefRecord[] { uriRecord });
          Tag tag = intent.getParcelableExtra(NfcAdapter.EXTRA_TAG);
          boolean writeResult = writeNdefMessage(tag, ndefMessage);
          if (writeResult) {
            Toast.makeText(this, "Tag written!", Toast.LENGTH_SHORT).show();
          } else {
            Toast.makeText(this, "Tag write failed!", Toast.LENGTH_SHORT).show();
          }
        }
      } catch (Exception e) {
        Log.e("onNewIntent", e.getMessage());
      }
      super.onNewIntent(intent);
    }

6. Run the application. Tap a tag with your phone or simulate a tap in the NFC Simulator. A Tag written! toast should appear.

How it works...

URIs are perfect content for NFC tags because, with a relatively small amount of content, we can send users to richer and more complete resources. These records are the easiest to create: we simply call the NdefRecord.createUri method and pass the URI as the first parameter.

URIs are not necessarily URLs for a website. We can use other URIs that are well known in Android, such as the following:

- tel:+000 000 000 000
- sms:+000 000 000 000

If we write a tel: URI to a tag, the user will be prompted to initiate a phone call when the tag is read.

We can always create a URI record the hard way, without using the createUri method:

    public NdefRecord createUriRecord(String uri) {
      NdefRecord rtdUriRecord = null;
      try {
        byte[] uriField = uri.getBytes("UTF-8");
        byte[] payload = new byte[uriField.length + 1]; // +1 for the URI prefix byte
        payload[0] = 0x00; // 0x00 means no prefix is prepended to the URI
        System.arraycopy(uriField, 0, payload, 1, uriField.length);
        rtdUriRecord = new NdefRecord(NdefRecord.TNF_WELL_KNOWN, NdefRecord.RTD_URI,
            new byte[0], payload);
      } catch (UnsupportedEncodingException e) {
        Log.e("createUriRecord", e.getMessage());
      }
      return rtdUriRecord;
    }

The first byte in the payload indicates which prefix should be used with the URI. This way, we don't need to write the whole URI to the tag, which saves some tag space. The recognized prefixes are as follows:

    0x00  No prefix is prepended
    0x01  http://www.
    0x02  https://www.
    0x03  http://
    0x04  https://
    0x05  tel:
    0x06  mailto:
    0x07  ftp://anonymous:anonymous@
    0x08  ftp://ftp.
    0x09  ftps://
    0x0A  sftp://
    0x0B  smb://
    0x0C  nfs://
    0x0D  ftp://
    0x0E  dav://
    0x0F  news:
    0x10  telnet://
    0x11  imap:
    0x12  rtsp://
    0x13  urn:
    0x14  pop:
    0x15  sip:
    0x16  sips:
    0x17  tftp:
    0x18  btspp://
    0x19  btl2cap://
    0x1A  btgoep://
    0x1B  tcpobex://
    0x1C  irdaobex://
    0x1D  file://
    0x1E  urn:epc:id:
    0x1F  urn:epc:tag:
    0x20  urn:epc:pat:
    0x21  urn:epc:raw:
    0x22  urn:epc:
    0x23  urn:nfc:
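Going in the opposite direction, the following sketch (again our own illustration, mapping only part of the table for brevity) expands the prefix byte back into the full URI when reading such a record:

    // The index in this array corresponds to the prefix byte; see the table above
    private static final String[] URI_PREFIXES = {
      "", "http://www.", "https://www.", "http://", "https://", "tel:", "mailto:"
    };

    private String decodeUriPayload(byte[] payload) throws UnsupportedEncodingException {
      int prefixCode = payload[0] & 0xFF;
      String prefix = (prefixCode < URI_PREFIXES.length) ? URI_PREFIXES[prefixCode] : "";
      // The rest of the payload is the URI with its prefix stripped
      return prefix + new String(payload, 1, payload.length - 1, "UTF-8");
    }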

Common AsyncTask issues

Packt
20 Dec 2013
7 min read
(For more resources related to this topic, see here.)

Fragmentation issues

AsyncTask has evolved with new releases of the Android platform, resulting in behavior that varies with the platform of the device running the task; this is part of the wider issue of fragmentation. The simple fact is that if we target a broad range of API levels, the execution characteristics of our AsyncTasks, and therefore the behavior of our apps, can vary considerably across devices. So what can we do to reduce the likelihood of encountering AsyncTask issues due to fragmentation?

The most obvious approach is to deliberately target devices running at least Honeycomb, by setting a minSdkVersion of 11 in the Android manifest file. This neatly puts us in the category of devices that, by default, execute AsyncTasks serially and, therefore, much more predictably. However, it significantly reduces the market reach of our apps. At the time of writing in September 2013, more than 34 percent of Android devices in the wild run a version of Android in the danger zone between API levels 4 and 10.

A second option is to design our code carefully and test exhaustively on a range of devices, which is always commendable practice. But as we've seen, concurrent programming is hard enough without the added complexity of fragmentation, and invariably subtle bugs will remain.

A third solution that has been suggested by the Android development community is to reimplement AsyncTask in a package within your own project, and then extend your own AsyncTask class instead of the SDK version. In this way, you are no longer at the mercy of the user's device platform and can regain control of your AsyncTasks. Since the source code for AsyncTask is readily available, this is not difficult to do.
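A related option, where the minimum API level is 11 or higher, is to stop relying on the default executor altogether and choose one explicitly with executeOnExecutor. This is a minimal sketch of that idea, not one of the approaches discussed above; task and params stand in for your own AsyncTask instance and arguments:

    // Pin the execution policy instead of relying on the platform default
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.HONEYCOMB) {
      // Serial, predictable ordering on all post-Honeycomb devices:
      task.executeOnExecutor(AsyncTask.SERIAL_EXECUTOR, params);
      // ...or deliberate parallelism:
      // task.executeOnExecutor(AsyncTask.THREAD_POOL_EXECUTOR, params);
    } else {
      task.execute(params); // older platforms keep their default behavior
    }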
Activity lifecycle issues

Having deliberately moved any long-running tasks off the main thread, we've made our applications nice and responsive: the main thread is free to respond very quickly to any user interaction. Unfortunately, we have also created a potential problem for ourselves, because the main thread is able to finish the Activity before our background tasks complete. An Activity might finish for many reasons, including configuration changes caused by the user rotating the device (the default behavior on an orientation change is to restart the Activity with an entirely new instance).

If we continue processing a background task after the Activity has finished, we are probably doing unnecessary work, and therefore wasting CPU and other resources (including battery life) that could be put to better use. Also, any object references held by the AsyncTask will not be eligible for garbage collection until the task explicitly nulls those references, or completes and is itself eligible for GC (garbage collection). Since our AsyncTask probably references the Activity or parts of the View hierarchy, we can easily leak a significant amount of memory in this way. A common usage of AsyncTask is to declare it as an anonymous inner class of the host Activity, which creates an implicit reference to the Activity and an even bigger memory leak.

There are two approaches for preventing these resource wastage problems.

Handling lifecycle issues with early cancellation

First, and most obviously, we can synchronize the AsyncTask lifecycle with that of the Activity by canceling running tasks when our Activity is finishing. When an Activity finishes, its lifecycle callback methods are invoked on the main thread. We can check why the lifecycle method is being called and, if the Activity is finishing, cancel the background tasks. The most appropriate Activity lifecycle method for this is onPause, which is guaranteed to be called before the Activity finishes:

    protected void onPause() {
      super.onPause();
      if ((task != null) && (isFinishing()))
        task.cancel(false);
    }

If the Activity is not finishing, say because it has started another Activity and is still on the back stack, we might simply allow our background task to continue to completion.

Handling lifecycle issues with retained headless fragments

If the Activity is finishing because of a configuration change, it may still be useful to complete the background task and display the results in the restarted Activity. One pattern for achieving this is through the use of retained Fragments.

Fragments were introduced to Android at API level 11, but are available through a support library to applications targeting earlier API levels. All of the downloadable examples use the support library and target API levels 7 through 19. To use Fragments, our Activity must extend the FragmentActivity class.

The Fragment lifecycle is closely bound to that of the host Activity, and a fragment is normally disposed of when the activity restarts. However, we can explicitly prevent this by invoking setRetainInstance(true) on our Fragment so that it survives across Activity restarts.

Typically, a Fragment is responsible for creating and managing at least a portion of the user interface of an Activity, but this is not mandatory. A Fragment that does not manage a view of its own is known as a headless Fragment. Isolating our AsyncTask in a retained headless Fragment makes it less likely that we will accidentally leak references to objects such as the View hierarchy, because the AsyncTask no longer directly interacts with the user interface.

To demonstrate this, we'll start by defining an interface that our Activity will implement:

    public interface AsyncListener<Progress, Result> {
      void onPreExecute();
      void onProgressUpdate(Progress... progress);
      void onPostExecute(Result result);
      void onCancelled(Result result);
    }

Next, we'll create a retained headless Fragment that wraps our AsyncTask. For brevity, doInBackground is omitted, as it is unchanged from the previous examples; see the downloadable samples for the complete code.

    public class PrimesFragment extends Fragment {
      private AsyncListener<Integer,BigInteger> listener;
      private PrimesTask task;

      public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setRetainInstance(true);
        task = new PrimesTask();
        task.execute(2000);
      }

      public void onAttach(Activity activity) {
        super.onAttach(activity);
        listener = (AsyncListener<Integer,BigInteger>) activity;
      }

      public void onDetach() {
        super.onDetach();
        listener = null;
      }

      class PrimesTask extends AsyncTask<Integer, Integer, BigInteger> {
        protected void onPreExecute() {
          if (listener != null) listener.onPreExecute();
        }
        protected void onProgressUpdate(Integer... values) {
          if (listener != null) listener.onProgressUpdate(values);
        }
        protected void onPostExecute(BigInteger result) {
          if (listener != null) listener.onPostExecute(result);
        }
        protected void onCancelled(BigInteger result) {
          if (listener != null) listener.onCancelled(result);
        }
        // ... doInBackground elided for brevity ...
      }
    }
We're using the Fragment lifecycle methods (onAttach and onDetach) to add or remove the current Activity as a listener, and PrimesTask delegates directly to it from all of its main-thread callbacks.

Now, all we need is the host Activity that implements AsyncListener and uses PrimesFragment to implement its long-running task. The full source code is available for download from the Packt Publishing website, so we'll just take a look at the highlights.

First, the code in the button's OnClickListener now checks whether the Fragment already exists, and only creates one if it is missing:

    FragmentManager fm = getSupportFragmentManager();
    PrimesFragment primes = (PrimesFragment) fm.findFragmentByTag("primes");
    if (primes == null) {
      primes = new PrimesFragment();
      FragmentTransaction transaction = fm.beginTransaction();
      transaction.add(primes, "primes").commit();
    }

If our Activity has been restarted, it will need to re-display the progress dialog when a progress update callback is received, so we check and show it, if necessary, before updating the progress:

    public void onProgressUpdate(Integer... progress) {
      if (dialog == null)
        prepareProgressDialog();
      dialog.setProgress(progress[0]);
    }

Finally, the Activity will need to implement the onPostExecute and onCancelled callbacks defined by AsyncListener. Both methods update resultView as in the previous examples, and then do a little cleanup: dismissing the dialog and removing the Fragment, as its work is now done:

    public void onPostExecute(BigInteger result) {
      resultView.setText(result.toString());
      cleanUp();
    }

    public void onCancelled(BigInteger result) {
      resultView.setText("cancelled at " + result);
      cleanUp();
    }

    private void cleanUp() {
      dialog.dismiss();
      dialog = null;
      FragmentManager fm = getSupportFragmentManager();
      Fragment primes = fm.findFragmentByTag("primes");
      fm.beginTransaction().remove(primes).commit();
    }

Summary

In this article, we took a look at AsyncTask and how to use it to write responsive applications that perform operations without blocking the main thread.

Resources for Article:

Further resources on this subject:

Android Native Application API [Article]
So, what is Spring for Android? [Article]
Creating Dynamic UI with Android Fragments [Article]

Developing apps with the Google Speech APIs

Packt
21 Nov 2013
6 min read
(For more resources related to this topic, see here.)

Speech technology has come of age. If you own a smartphone or tablet, you can perform many tasks on your device using voice. For example, you can send a text message, update your calendar, set an alarm, and do many other things with a single spoken command that would take multiple steps to complete using traditional methods such as tapping and selecting. You can also ask the sorts of queries that you would previously have typed into your Google search box and get a spoken response.

For those who wish to develop their own speech-based apps, Google provides APIs for the basic technologies of text-to-speech synthesis (TTS) and automated speech recognition (ASR). Using these APIs, developers can create useful interactive voice applications. This article provides a brief overview of the Google APIs and then goes on to show some examples of voice-based apps built around them.

Using the Google text-to-speech synthesis API

TTS has been available on Android devices since Android 1.6 (API level 4). The components of the Google TTS API (package android.speech.tts) are documented at http://developer.android.com/reference/android/speech/tts/package-summary.html, where interfaces and classes are listed and further details can be obtained by clicking on them.

Starting the TTS engine involves creating an instance of the TextToSpeech class, along with the method that will be executed when the TTS engine is initialized. Checking that TTS has been initialized is done through an interface called OnInitListener; when initialization is complete, its onInit method is invoked. If TTS has been initialized correctly, the speak method can be invoked to speak out some words (here, tts is a class field of type TextToSpeech, so that it is visible inside the listener):

    tts = new TextToSpeech(this, new OnInitListener() {
      public void onInit(int status) {
        if (status == TextToSpeech.SUCCESS)
          tts.speak("Hello world", TextToSpeech.QUEUE_ADD, null);
      }
    });

Due to limited storage on some devices, not all supported languages may actually be installed on a particular device, so it is important to check whether a particular language is available before creating the TextToSpeech object. This makes it possible to download and install the required language-specific resource files if necessary. The check is done by sending an Intent with the action ACTION_CHECK_TTS_DATA, which is part of the TextToSpeech.Engine class:

    Intent intent = new Intent(TextToSpeech.Engine.ACTION_CHECK_TTS_DATA);
    startActivityForResult(intent, TTS_DATA_CHECK);

If the language data has been correctly installed, the onActivityResult handler will receive the result CHECK_VOICE_DATA_PASS. If the data is not available, an intent with the action ACTION_INSTALL_TTS_DATA should be started:

    Intent installData = new Intent(TextToSpeech.Engine.ACTION_INSTALL_TTS_DATA);
    startActivity(installData);
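Putting the check and install intents together, a minimal sketch of the corresponding onActivityResult handler might look as follows; TTS_DATA_CHECK is simply a request code of our own choosing:

    private static final int TTS_DATA_CHECK = 1;

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
      super.onActivityResult(requestCode, resultCode, data);
      if (requestCode == TTS_DATA_CHECK) {
        if (resultCode == TextToSpeech.Engine.CHECK_VOICE_DATA_PASS) {
          // Voice data is installed; it is safe to create the TextToSpeech instance
        } else {
          // Prompt the user to install the missing language data
          Intent installData = new Intent(TextToSpeech.Engine.ACTION_INSTALL_TTS_DATA);
          startActivity(installData);
        }
      }
    }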
The next figure shows an example of an app using the TTS API. A potential use case for this type of app is when the user accesses some text on the Web, for example, a news item, email, or sports report. This is useful if the user's eyes and hands are busy, or if there are problems reading the text on the screen. In this example, the app retrieves some text and the user presses the Speak button to hear it. A Stop button is provided in case the user does not wish to hear all of the text.

Using the Google speech recognition API

The components of the Google Speech API (package android.speech) are documented at http://developer.android.com/reference/android/speech/package-summary.html, where interfaces and classes are listed and further details can be obtained by clicking on them.

There are two ways in which speech recognition can be carried out on an Android device: based solely on a RecognizerIntent, or by creating an instance of SpeechRecognizer. The following code shows how to start an activity to recognize speech using the first approach:

    Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
    // Specify the language model
    intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, languageModel);
    // Specify how many results to receive; results are listed in order of confidence
    intent.putExtra(RecognizerIntent.EXTRA_MAX_RESULTS, numberRecoResults);
    // Start listening
    startActivityForResult(intent, ASR_CODE);

The app shown below illustrates the following:

- The user selects the parameters for speech recognition.
- The user presses a button and says some words.
- The recognized words are displayed in a list, along with their confidence scores.

Multilingual apps

It is important to be able to develop apps in languages other than English. The TTS and ASR engines can be configured for a wide range of languages. However, we cannot expect all languages to be available, or to be supported on a particular device. Thus, before selecting a language, it is necessary to check whether it is one of the supported languages and, if not, to set the currently preferred language. In order to do this, a RecognizerIntent.ACTION_GET_LANGUAGE_DETAILS ordered broadcast is sent; it returns a Bundle from which the preferred language (RecognizerIntent.EXTRA_LANGUAGE_PREFERENCE) and the list of supported languages (RecognizerIntent.EXTRA_SUPPORTED_LANGUAGES) can be extracted. For speech recognition, this introduces a new intent parameter specifying the language that will be used for recognition, as shown in the following line of code:

    intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE, language);

As shown in the next figure, the user is asked for a language, and then the app recognizes what the user says and plays back the best recognition result in the selected language.

Creating a Virtual Personal Assistant (VPA)

Many voice-based apps need to do more than simply speak and understand speech. For example, a VPA also needs the ability to engage in dialog with the user and to perform operations such as connecting to web services and activating device functions. One way to enable these additional capabilities is to make use of chatbot technology (see, for example, the Pandorabots web service at http://www.pandorabots.com/). The following figure shows two VPAs, Jack and Derek, that have been developed in this way. Jack is a general-purpose VPA, while Derek is a specialized VPA that can answer questions about Type 2 diabetes, covering topics such as symptoms, causes, treatment, risks to children, and complications.

Summary

The Google Speech APIs can be used in countless ways to develop interesting and useful voice-based apps, and this article has shown some examples. By building on these, you will be able to bring the power of voice to your Android apps, making them smarter and more intuitive, and boosting your users' mobile experience.

Resources for Article:

Further resources on this subject:

Introducing an Android platform [Article]
Building Android (Must know) [Article]
Top 5 Must-have Android Applications [Article]