
How-To Tutorials - Mobile

213 Articles

Styling the User Interface

Packt
22 Nov 2013
8 min read
(For more resources related to this topic, see here.)

Styling components versus themes

Before we get into this article, it's important to have a good understanding of the difference between styling an individual component and creating a theme. Almost every display component in Sencha Touch has the option to set its own style. For example, a panel component can use a style in this way:

```js
{
    xtype: 'panel',
    style: 'border: none; font: 12px Arial black',
    html: 'Hello World'
}
```

The style can also be set as an object:

```js
{
    xtype: 'panel',
    style: {
        'border': 'none',
        'font': '12px Arial black',
        'border-left': '1px solid black'
    },
    html: 'Hello World'
}
```

You will notice that inside the style block, we have quoted both sides of each configuration setting. This is valid JavaScript syntax and a very good habit to get into when using style blocks, because a number of standard CSS styles use a dash as part of their name. If we do not add quotes to border-left, JavaScript will read it as border minus left and promptly collapse in a pile of errors.

We can also set a style class for a component and use an external CSS file to define the class as follows:

```js
{
    xtype: 'panel',
    cls: 'myStyle',
    html: 'Hello World'
}
```

Your external CSS file could then control the style of the component in the following manner:

```css
.myStyle {
    border: none;
    font: 12px Arial black;
}
```

This class-based control of the display is considered a best practice, as it separates the style logic from the display logic. When you need to change a border color, it can be done in one file instead of hunting through multiple files for individual style settings.

These styling options are very useful for controlling the display of individual components. There are also certain style elements, such as border, padding, and margin, that can be set directly in a component's configuration:

```js
{
    xtype: 'panel',
    bodyMargin: '10 5 5 5',
    bodyBorder: '1px solid black',
    bodyPadding: 5,
    html: 'Hello World'
}
```

These configurations accept either a number, which is applied to all sides, or a CSS string value such as 1px solid black or 10 5 5 5. The number should be entered without quotes, but CSS string values need to be within quotes.

These kinds of small changes can be helpful in styling your application, but what if you need to do something a bit bigger? What if you want to change the color or appearance of the entire application? What if you want to create your own default style for your buttons? This is where themes and UI styles come into play.

UI styling for toolbars and buttons

Let's do a quick review of the basic MVC application we created in Creating a Simple Application, and use it to start our exploration of styles with toolbars and buttons. To begin, we are going to add a few things to the first panel, which has our titlebar, toolbar, and Hello World text.

Adding the toolbar

In app/views, you'll find Main.js. Go ahead and open that in your editor and take a look at the first panel in our items list:

```js
items: [
    {
        title: 'Hello',
        iconCls: 'home',
        xtype: 'panel',
        html: 'Hello World',
        items: [
            {
                xtype: 'titlebar',
                docked: 'top',
                title: 'About TouchStart'
            }
        ]
    }...
```

We're going to add a second toolbar on top of the existing one. Locate the items section, and after the curly braces for our first toolbar, add the second toolbar in the following manner:

```js
{
    xtype: 'titlebar',
    docked: 'top',
    title: 'About TouchStart'
},
{
    docked: 'top',
    xtype: 'toolbar',
    items: [
        { text: 'My Button' }
    ]
}
```

Don't forget to add a comma between the two toolbars.
Extra or missing commas

While working in Sencha Touch, one of the most common causes of parse errors is an extra or missing comma. When you are moving code around, always make sure you have accounted for any stray or missing commas. Fortunately for us, the Safari Error Console will usually give us a pretty good idea of the line number to look at for these types of parse errors. A more detailed list of common errors can be found at http://javascript.about.com/od/reference/a/error.htm.

Now when you take a look at the first tab, you should see our new toolbar with our button to the left. Since the toolbars both have the same background, they are a bit difficult to tell apart. So, we are going to change the appearance of the bottom bar using the ui configuration option:

```js
{
    docked: 'top',
    xtype: 'toolbar',
    ui: 'light',
    items: [
        { text: 'My Button' }
    ]
}
```

The ui configuration is shorthand for a particular set of styles in Sencha Touch. Several ui styles are included with Sencha Touch, and later on, we will show you how to make your own.

Styling buttons

Buttons can also use the ui configuration setting, which offers several different options:

- normal: the default button
- back: a button with the left side narrowed to a point
- round: a more drastically rounded button
- small: a smaller button
- action: a brighter version of the default button (the color varies according to the active color of the theme, which we will see later)
- forward: a button with the right side narrowed to a point

Buttons also have some color options built into the ui option: confirm and decline. These options are combined with the shape options using a hyphen; for example, confirm-small or decline-round (a short sketch of this appears at the end of this section).

Let's add some new buttons and see how this looks on our screen. Locate the items list with our button in the second toolbar:

```js
items: [
    { text: 'My Button' }
]
```

Replace that old items list with the following new one:

```js
items: [
    { text: 'Back', ui: 'back' },
    { text: 'Round', ui: 'round' },
    { text: 'Small', ui: 'small' },
    { text: 'Normal', ui: 'normal' },
    { text: 'Action', ui: 'action' },
    { text: 'Forward', ui: 'forward' }
]
```

This will produce a series of buttons across the top of our toolbar. As you may notice, all of our buttons are aligned to the left. You can move buttons to the right by adding a spacer xtype in front of the buttons you want pushed to the right. Try this by adding the following between our Action and Forward buttons:

```js
{ xtype: 'spacer' },
```

This will make the Forward button move over to the right-hand side of the toolbar.

Since buttons can actually be used anywhere, we can add some to our title bar and use the align property to control where they appear. Modify the titlebar for our first panel and add an items section, as shown in the following code:

```js
{
    xtype: 'titlebar',
    docked: 'top',
    title: 'About TouchStart',
    items: [
        {
            xtype: 'button',
            text: 'Left',
            align: 'left'
        },
        {
            xtype: 'button',
            text: 'Right',
            align: 'right'
        }
    ]
}
```

Now we should have two buttons in our title bar, one on either side of the title.
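As promised, here is a quick sketch of the hyphenated ui combinations mentioned earlier. This is an illustration of the syntax, not part of the TouchStart example:

```js
// A sketch: color options (confirm, decline) combined with
// shape options (round, small) via a hyphen in the ui value.
items: [
    { text: 'Save', ui: 'confirm-round' },
    { text: 'Cancel', ui: 'decline-small' }
]
```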
Let's also add some buttons to the panel container so we can see what the ui options confirm and decline look like. Locate the end of the items section of our HelloPanel container and add the following after the second toolbar:

```js
{
    xtype: 'button',
    text: 'Confirm',
    ui: 'confirm',
    width: 100
},
{
    xtype: 'button',
    text: 'Decline',
    ui: 'decline',
    width: 100
}
```

There are two things you may notice that differentiate our panel buttons from our toolbar buttons. The first is that we declare xtype: 'button' in our panel but not in our toolbar. This is because the toolbar assumes it will contain buttons, so xtype only has to be declared if you use something other than a button. The panel does not set a default xtype attribute, so every item in the panel must declare one.

The second difference is that we declare a width for the buttons. If we don't declare a width when we use a button in a panel, it will expand to the full width of the panel. On the toolbar, the button auto-sizes itself to fit the text.

You will also see that our two buttons in the panel are mashed together. You can separate them by adding margin: 5 to each of the button configuration sections. These simple styling options can help make your application easier to navigate and provide the user with visual clues for important or potentially destructive actions.

The tab bar

The tab bar at the bottom also understands the ui configuration option. In this case, the available options are light and dark. The tab bar also changes the icon appearance based on the ui option; a light tab bar will have dark icons and a dark tab bar will have light icons.

These icons are actually part of a special font called Pictos. Sencha Touch started using the Pictos font in Version 2.2 instead of image icons because of compatibility issues on some mobile devices. The icon mask from previous versions of Sencha Touch is still available but has been deprecated as of Version 2.2. You can see some of the icons available in the documentation for the Ext.Button component: http://docs.sencha.com/touch/2.2.0/#!/api/Ext.Button. If you're curious about the Pictos font, you can learn more about it at http://pictos.cc/.
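This article doesn't restyle the tab bar in TouchStart itself, but as a rough sketch (an assumption based on the standard Ext.TabPanel configuration, not code from this example), the ui option can be applied through the tabBar config:

```js
// A minimal sketch: applying the ui option to a tab panel's tab bar.
Ext.create('Ext.TabPanel', {
    fullscreen: true,
    tabBarPosition: 'bottom',
    tabBar: {
        ui: 'dark' // or 'light'; the icon colors invert to match
    },
    items: [
        { title: 'Home', iconCls: 'home', html: 'Home tab' },
        { title: 'Info', iconCls: 'info', html: 'Info tab' }
    ]
});
```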


Drawing and Drawables in Android Canvas

Packt
21 Nov 2013
8 min read
In this article by Mir Nauman Tahir, the author of the book Learning Android Canvas, our goal is to learn about the following:

- Drawing on a Canvas
- Drawing on a View
- Drawing on a SurfaceView
- Drawables
- Drawables from resource images
- Drawables from resource XML
- Shape Drawables

(For more resources related to this topic, see here.)

Android provides us with 2D drawing APIs that enable us to draw our custom graphics on the Canvas. When working with 2D drawings, we will either draw on a View or directly on the surface or Canvas. When using a View for our graphics, the drawing is handled by the system's normal View hierarchy drawing process; we only define our graphics to be inserted in the View, and the rest is done automatically by the system. When drawing directly on the Canvas, we have to manually call the suitable drawing methods, such as onDraw() or createBitmap(). This approach requires more effort and coding and is a bit more complicated, but everything is under our control: the animation, the size and location of the drawing, the colors, and the ability to move the drawing from its current location to another location through code. The implementation of the onDraw() method can be seen in the Drawing on a View section, and the code for createBitmap() is shown in the Drawing on a Canvas section.

We will use the drawing-on-a-View method if we are dealing with static graphics (static graphics do not change dynamically during the execution of the application) or with graphics that are not resource hungry, as we don't wish to put our application's performance at stake. Drawing on a View can be used for designing eye-catching, simple applications with static graphics and simple functionality, such as attractive backgrounds and buttons. It's perfectly okay to draw on a View using the main UI thread, as these graphics are not a threat to the overall performance of our application.

The drawing-on-a-Canvas method should be used when working with heavy graphics that change dynamically, like those in games. In this scenario, the Canvas will continuously redraw itself to keep the graphics updated. We can draw on a Canvas using the main UI thread, but with heavy, resource-hungry, dynamically changing graphics, the application continuously redraws itself, and it is better to use a separate thread to draw such graphics. Keeping such graphics on the main UI thread could push the application into the not-responding mode, and after working so hard we certainly won't like this. So this choice should be made very carefully.

Drawing on a Canvas

A Canvas is an interface, a medium, that enables us to actually access the surface we will use to draw our graphics. The Canvas contains all the necessary drawing methods needed to draw our graphics. The internal mechanism of drawing on a Canvas is that whenever anything needs to be drawn, it's actually drawn on an underlying blank bitmap image. By default, this bitmap is provided for us automatically. But if we want to use a new Canvas, then we need to create a new bitmap image and then a new Canvas object, providing the already created bitmap to the constructor of the Canvas class. A sample code is explained as follows. Initially, the bitmap is drawn, but not on the screen; it's actually drawn in the background on an internal Canvas.
To bring it to the front, we need to create a new Canvas object and provide the already created bitmap to it to be painted on the screen:

```java
Bitmap ourNewBitmap = Bitmap.createBitmap(100, 100, Bitmap.Config.ARGB_8888);
Canvas ourNewCanvas = new Canvas(ourNewBitmap);
```

Drawing on a View

If our application does not require heavy system resources or fast frame rates, we should use View.onDraw(). The benefit in this case is that the system will automatically give the Canvas its underlying bitmap as well. All we need to do is make our drawing calls.

We will create our class by extending the View class and will define the onDraw() method in it. The onDraw() method is where we define whatever we want to draw on our Canvas. The Android framework will call the onDraw() method to ask our View to draw itself, on a need basis. We have to call the invalidate() method whenever we want our view to redraw itself: whenever we want our application's view to be redrawn, we call invalidate(), and the Android framework calls onDraw() for us. Let's say we want to draw a line; the code would be something like this:

```java
class DrawView extends View {
    Paint paint = new Paint();

    public DrawView(Context context) {
        super(context);
        paint.setColor(Color.BLUE);
    }

    @Override
    public void onDraw(Canvas canvas) {
        super.onDraw(canvas);
        canvas.drawLine(10, 10, 90, 10, paint);
    }
}
```

Inside the onDraw() method, we can use all the facilities provided by the Canvas class, such as its various drawing methods, and we can use drawing methods from other classes as well. The Android framework will draw a bitmap on the Canvas for us once our onDraw() method completes. If we are using the main UI thread, we call the invalidate() method; from another thread, we call postInvalidate() instead.

Drawing on a SurfaceView

The View class provides a subclass, SurfaceView, that provides a dedicated drawing surface within the hierarchy of the View. The goal is to draw using a secondary thread, so that the application won't wait for resources to be free and ready to redraw. The secondary thread has access to the SurfaceView object, which has the ability to draw on its own Canvas with its own redraw frequency.

We will start by creating a class that extends the SurfaceView class. We should implement the SurfaceHolder.Callback interface. This interface is important in the sense that it provides us with information about when a surface is created, changed, or destroyed. With timely information about the creation, change, or destruction of a surface, we can make a better decision on when to start drawing and when to stop. The secondary thread class that will perform all the drawing on our Canvas can also be defined in the SurfaceView class.

To get information, the Surface object should be handled through a SurfaceHolder, not directly. To do this, we get the holder by calling the getHolder() method when the SurfaceView is initialized. We then tell the SurfaceHolder object that we want to receive all the callbacks; to do this, we call addCallback(). After this, we override all the methods inside the SurfaceView class to get our job done according to our functionality.
The next step is to draw on the surface's Canvas from inside the second thread; to do this, we pass our SurfaceHolder object to the thread object and get the Canvas using the lockCanvas() method. This gets the Canvas for us and locks it for drawing from the current thread only. We need to do this because we don't want an open Canvas that can be drawn on by another thread; if that happened, it would disturb all our graphics and drawings on the Canvas. When we are done drawing our graphics on the Canvas, we unlock it by calling the unlockCanvasAndPost() method, passing our Canvas object. For a successful drawing, we need repeated redraws, so we repeat this locking and unlocking as needed and the surface draws the Canvas.

To have uniform and smooth graphic animation, we need the previous state of the Canvas, so we retrieve the Canvas from the SurfaceHolder object every time, and the whole surface should be redrawn each time. If we don't do so, for instance by not painting the whole surface, the drawing from the previous Canvas will persist and destroy the whole look of our graphics-intensive application. A sample code would be the following:

```java
class OurGameView extends SurfaceView implements SurfaceHolder.Callback, Runnable {
    Thread thread = null;
    SurfaceHolder surfaceHolder;
    volatile boolean running = false;

    public OurGameView(Context context) {
        super(context);
        surfaceHolder = getHolder();
        surfaceHolder.addCallback(this);
    }

    public void onResumeOurGameView() {
        running = true;
        thread = new Thread(this);
        thread.start();
    }

    public void onPauseOurGameView() {
        boolean retry = true;
        running = false;
        while (retry) {
            try {
                thread.join();
                retry = false;
            } catch (InterruptedException e) {
                // keep retrying until the drawing thread has shut down
            }
        }
    }

    public void run() {
        while (running) {
            if (surfaceHolder.getSurface().isValid()) {
                Canvas canvas = surfaceHolder.lockCanvas();
                // ... actual drawing on canvas
                surfaceHolder.unlockCanvasAndPost(canvas);
            }
        }
    }

    // SurfaceHolder.Callback methods; the surface lifecycle events
    // tell us when it is safe to start and stop drawing.
    public void surfaceCreated(SurfaceHolder holder) { }
    public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) { }
    public void surfaceDestroyed(SurfaceHolder holder) { }
}
```


Developing apps with the Google Speech APIs

Packt
21 Nov 2013
6 min read
(For more resources related to this topic, see here.)

Speech technology has come of age. If you own a smartphone or tablet, you can perform many tasks on your device using voice. For example, you can send a text message, update your calendar, set an alarm, and do many other things with a single spoken command that would take multiple steps to complete using traditional methods such as tapping and selecting. You can also ask the sorts of queries that you would previously have typed into your Google search box and get a spoken response.

For those who wish to develop their own speech-based apps, Google provides APIs for the basic technologies of text-to-speech synthesis (TTS) and automated speech recognition (ASR). Using these APIs, developers can create useful interactive voice applications. This article provides a brief overview of the Google APIs and then goes on to show some examples of voice-based apps built around them.

Using the Google text-to-speech synthesis API

TTS has been available on Android devices since Android 1.6 (API Level 4). The components of the Google TTS API (package android.speech.tts) are documented at http://developer.android.com/reference/android/speech/tts/package-summary.html. Interfaces and classes are listed there, and further details can be obtained by clicking on them.

Starting the TTS engine involves creating an instance of the TextToSpeech class along with the method that will be executed when the TTS engine is initialized. Checking that TTS has been initialized is done through an interface called OnInitListener. When TTS initialization completes, the onInit method is invoked. If TTS has been initialized correctly, the speak method is invoked to speak out some words:

```java
TextToSpeech tts = new TextToSpeech(this, new OnInitListener() {
    public void onInit(int status) {
        if (status == TextToSpeech.SUCCESS)
            speak("Hello world", TextToSpeech.QUEUE_ADD, null);
    }
});
```

Due to limited storage on some devices, not all supported languages may actually be installed on a particular device, so it is important to check whether a particular language is available before creating the TextToSpeech object. This way, it is possible to download and install the required language-specific resource files if necessary. This is done by sending an Intent with the action ACTION_CHECK_TTS_DATA, which is part of the TextToSpeech.Engine class:

```java
Intent intent = new Intent(TextToSpeech.Engine.ACTION_CHECK_TTS_DATA);
startActivityForResult(intent, TTS_DATA_CHECK);
```

If the language data has been correctly installed, the onActivityResult handler will receive CHECK_VOICE_DATA_PASS. If the data is not available, the action ACTION_INSTALL_TTS_DATA will be executed:

```java
Intent installData = new Intent(Engine.ACTION_INSTALL_TTS_DATA);
startActivity(installData);
```

The next figure shows an example of an app using the TTS API. A potential use case for this type of app is when the user accesses some text on the Web, for example, a news item, email, or a sports report. This is useful if the user's eyes and hands are busy, or if there are problems reading the text on the screen. In this example, the app retrieves some text and the user presses the Speak button to hear it. A Stop button is provided in case the user does not wish to hear all of the text.

Using the Google speech recognition API

The components of the Google Speech API (package android.speech) are documented at http://developer.android.com/reference/android/speech/package-summary.html.
Interfaces and classes are listed there, and further details can be obtained by clicking on them. There are two ways in which speech recognition can be carried out on an Android device: based solely on a RecognizerIntent, or by creating an instance of SpeechRecognizer. The following code shows how to start an activity to recognize speech using the first approach:

```java
Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
// Specify language model
intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, languageModel);
// Specify how many results to receive. Results are listed in order of confidence
intent.putExtra(RecognizerIntent.EXTRA_MAX_RESULTS, numberRecoResults);
// Start listening
startActivityForResult(intent, ASR_CODE);
```

The app shown below illustrates the following:

- The user selects the parameters for speech recognition.
- The user presses a button and says some words.
- The words recognized are displayed in a list along with their confidence scores.

Multilingual apps

It is important to be able to develop apps in languages other than English. The TTS and ASR engines can be configured to a wide range of languages. However, we cannot expect that all languages will be available or supported on a particular device. Thus, before selecting a language it is necessary to check whether it is one of the supported languages, and if not, to set the currently preferred language. In order to do this, a RecognizerIntent.ACTION_GET_LANGUAGE_DETAILS ordered broadcast is sent that returns a Bundle from which the information about the preferred language (RecognizerIntent.EXTRA_LANGUAGE_PREFERENCE) and the list of supported languages (RecognizerIntent.EXTRA_SUPPORTED_LANGUAGES) can be extracted. For speech recognition, this introduces a new parameter for the intent in which the language to be used for recognition is specified, as shown in the following line:

```java
intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE, language);
```

As shown in the next figure, the user is asked for a language, and then the app recognizes what the user says and plays back the best recognition result in the selected language.

Creating a Virtual Personal Assistant (VPA)

Many voice-based apps need to do more than simply speak and understand speech. For example, a VPA also needs the ability to engage in dialog with the user and to perform operations such as connecting to web services and activating device functions. One way to enable these additional capabilities is to make use of chatbot technology (see, for example, the Pandorabots web service: http://www.pandorabots.com/). The following figure shows two VPAs, Jack and Derek, that have been developed in this way. Jack is a general-purpose VPA, while Derek is a specialized VPA that can answer questions about Type 2 diabetes, such as symptoms, causes, treatment, risks to children, and complications.

Summary

The Google Speech APIs can be used in countless ways to develop interesting and useful voice-based apps. This article has shown some examples. By building on these, you will be able to bring the power of voice to your Android apps, making them smarter and more intuitive, and boosting your users' mobile experience.

Resources for Article:

Further resources on this subject:

- Introducing an Android platform [Article]
- Building Android (Must know) [Article]
- Top 5 Must-have Android Applications [Article]


Using Location Data with PhoneGap

Packt
30 Oct 2013
11 min read
(For more resources related to this topic, see here.)

An introduction to Geolocation

The term geolocation refers to the process of identifying the real-world geographic location of an object. Devices that are able to detect the user's position are becoming more common each day, and we are now used to getting content based on our location (geotargeting). Using the Global Positioning System (GPS), a space-based satellite navigation system that provides location and time information consistently across the globe, you can now get the accurate location of a device.

During the early 1970s, the US military created Navstar, a defense navigation satellite system. Navstar was the system that created the basis for the GPS infrastructure used today by billions of devices. Since 1978, more than 60 GPS satellites have been successfully placed in orbit around the Earth (refer to http://en.wikipedia.org/wiki/List_of_GPS_satellite_launches for a detailed report on past and planned launches).

The location of a device is represented through a point comprised of two components: latitude and longitude. There are many methods for modern devices to determine the location information; these include:

- Global Positioning System (GPS)
- IP address
- GSM/CDMA cell IDs
- Wi-Fi and Bluetooth MAC addresses

Each approach delivers the same information; what changes is the accuracy of the device's position. The GPS satellites continuously transmit information that a receiver can parse, for example, the general health of the GPS array, roughly where all of the satellites are in orbit, information on the precise orbit or path of the transmitting satellite, and the time of the transmission. The receiver calculates its own position by timing the signals sent by any of the satellites in the array that are visible. The process of measuring the distance from a point to a group of satellites to locate a position is known as trilateration. The distance is determined using the speed of light as a constant along with the time that the signal left the satellites.

The emerging trend in mobile development is GPS-based "people discovery" apps such as Highlight, Sonar, Banjo, and Foursquare. Each app has different features and has been built for different purposes, but all of them share the same killer feature: using location as a piece of metadata in order to filter information according to the user's needs.

The PhoneGap Geolocation API

The Geolocation API is not part of the HTML5 specification, but it is tightly integrated with mobile development. The PhoneGap Geolocation API and the W3C Geolocation API mirror each other; both define the same methods and relative arguments. Several devices already implement the W3C Geolocation API; for those devices you can use the native support instead of the PhoneGap API.

As per the HTML specification, the user has to explicitly allow the website or the app to use the device's current position. The Geolocation API is exposed through the geolocation object, a child of the navigator object, and consists of the following three methods:

- getCurrentPosition() returns the device's position.
- watchPosition() watches for changes in the device's position.
- clearWatch() stops the watcher for the device's position changes.

The watchPosition() and clearWatch() methods work in the same way that the setInterval() and clearInterval() methods work; in fact, the first one returns an identifier that is passed to the second one.
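As a quick illustration of that identifier-based pattern, here is a minimal sketch (not from this article's sample app; the handler names match the examples that follow):

```js
// Watch the device position, then release the watcher later.
var watchId = navigator.geolocation.watchPosition(onSuccess, onFailure, {
    maximumAge: 3000,
    timeout: 5000,
    enableHighAccuracy: true
});

// Later, when position updates are no longer needed:
navigator.geolocation.clearWatch(watchId);
```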
The getCurrentPosition() and watchPosition() methods mirror each other and take the same arguments: a success and a failure callback function, and an optional configuration object. The configuration object is used to specify the maximum age of a cached value of the device's position, to set a timeout after which the method will fail, and to specify whether the application requires only accurate results:

```js
var options = { maximumAge: 3000, timeout: 5000, enableHighAccuracy: true };
navigator.geolocation.watchPosition(onSuccess, onFailure, options);
```

Only the first argument is mandatory, but it's recommended to always handle the failure use case.

The success handler function receives a Position object as its argument. Accessing its properties, you can read the device's coordinates and the creation timestamp of the object that stores them:

```js
function onSuccess(position) {
    console.log('Coordinates: ' + position.coords);
    console.log('Timestamp: ' + position.timestamp);
}
```

The coords property of the Position object contains a Coordinates object; by far the most important properties of this object are longitude and latitude. Using those properties, it's possible to start integrating positioning information as relevant metadata in your app.

The failure handler receives a PositionError object as its argument. Using the code and message properties of this object, you can gracefully handle every possible error:

```js
function onFailure(error) {
    console.log('message: ' + error.message);
    console.log('code: ' + error.code);
}
```

The message property returns a detailed description of the error; the code property returns an integer whose possible values are represented through the following pseudo constants:

- PositionError.PERMISSION_DENIED: the user denied the app the use of the device's current position.
- PositionError.POSITION_UNAVAILABLE: the position of the device cannot be determined. If you want to recover the last available position when this error is returned, you have to write a custom plugin that uses the platform-specific API; both Android and iOS have this feature. You can find a detailed example at http://stackoverflow.com/questions/10897081/retrieving-last-known-geolocation-phonegap.
- PositionError.TIMEOUT: the specified timeout elapsed before the implementation could successfully acquire a new Position object.

JavaScript doesn't support constants the way Java and other object-oriented programming languages do. By the term "pseudo constants", I refer to those values that should never change in a JavaScript app.

One of the most common tasks to perform with the device position information is to show the device location on a map. You can quickly perform this task by integrating Google Maps in your app; the only requirement is a valid API key. To get the key, use the following steps:

1. Visit the APIs console at https://code.google.com/apis/console and log in with your Google account.
2. Click the Services link on the left-hand menu.
3. Activate the Google Maps API v3 service.
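Before moving on to the map example, it can help to see the pseudo constants in use. The following sketch (an assumption, not code from this article) expands the failure handler above to branch on each error code; the alert wording is illustrative only:

```js
// A minimal sketch: mapping each PositionError pseudo constant
// to a user-facing message.
function onFailure(error) {
    switch (error.code) {
        case PositionError.PERMISSION_DENIED:
            alert('Please allow the app to access your position.');
            break;
        case PositionError.POSITION_UNAVAILABLE:
            alert('Your position is currently unavailable.');
            break;
        case PositionError.TIMEOUT:
            alert('Finding your position took too long. Please try again.');
            break;
        default:
            alert('Unknown error: ' + error.message);
    }
}
```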
Time for action – showing device position with Google Maps

Get ready to add a map renderer to the PhoneGap default app template. Refer to the following steps:

1. Open the command-line tool and create a new PhoneGap project named MapSample.

```
$ cordova create ~/the/path/to/your/source/mapsample com.gnstudio.pg.MapSample MapSample
```

2. Add the Geolocation API plugin using the command line.

```
$ cordova plugin add https://git-wip-us.apache.org/repos/asf/cordova-plugin-geolocation.git
```

3. Go to the www folder, open the index.html file, and add a div element with the id value #map inside the main div of the app, below the #deviceready one.

```html
<div id='map'></div>
```

4. Add a new script tag to include the Google Maps JavaScript library.

```html
<script type="text/javascript"
        src="https://maps.googleapis.com/maps/api/js?key=YOUR_API_KEY&sensor=true">
</script>
```

5. Go to the css folder and define a new rule inside the index.css file to give the div element and its content an appropriate size.

```css
#map {
    width: 280px;
    height: 230px;
    display: block;
    margin: 5px auto;
    position: relative;
}
```

6. Go to the js folder, open the index.js file, and define a new function named initMap.

```js
initMap: function(lat, long) {
    // The code needed to show the map and the
    // device position will be added here
}
```

7. In the body of the function, define an options object in order to specify how the map has to be rendered.

```js
var options = {
    zoom: 8,
    center: new google.maps.LatLng(lat, long),
    mapTypeId: google.maps.MapTypeId.ROADMAP
};
```

8. Add to the body of the initMap function the code to initialize the rendering of the map and to show a marker representing the current device's position over it.

```js
var map = new google.maps.Map(document.getElementById('map'), options);
var markerPoint = new google.maps.LatLng(lat, long);
var marker = new google.maps.Marker({
    position: markerPoint,
    map: map,
    title: "Device's Location"
});
```

9. Define a function to use as the success handler and call the previously defined initMap function from its body.

```js
onSuccess: function(position) {
    var coords = position.coords;
    app.initMap(coords.latitude, coords.longitude);
}
```

10. Define another function in order to have a failure handler able to notify the user that something went wrong.

```js
onFailure: function(error) {
    navigator.notification.alert(error.message, null);
}
```

11. Go into the deviceready function and add, as the last statement, the call to the Geolocation API needed to recover the device's position.

```js
navigator.geolocation.getCurrentPosition(app.onSuccess, app.onFailure,
    { timeout: 5000, enableHighAccuracy: false });
```

12. Open the command-line tool, build the app, and then run it on your testing devices.

```
$ cordova build
$ cordova run android
```

What just happened?

You integrated Google Maps inside an app. The map is an interactive map most users are familiar with; the most common gestures are already working and the Google Street View controls are already enabled.

To successfully load the Google Maps API on iOS, it's mandatory to whitelist the googleapis.com and gstatic.com domains. Open the .plist file of the project as source code (right-click on the file and then Open As | Source Code) and add the following array of domains:

```xml
<key>ExternalHosts</key>
<array>
    <string>*.googleapis.com</string>
    <string>*.gstatic.com</string>
</array>
```

Other Geolocation data

In the previous example, you only used the latitude and longitude properties of the position object that you received. There are other attributes that can be accessed as properties of the Coordinates object:

- altitude: the height of the device, in meters, above sea level.
- accuracy: the accuracy level of the latitude and longitude, in meters; it can be used to show a radius of accuracy when mapping the device's position.
- altitudeAccuracy: the accuracy of the altitude, in meters.
- heading: the direction of the device in degrees clockwise from true north.
- speed: the current ground speed of the device, in meters per second.
Latitude and longitude are the best supported of these properties, and the ones that will be most useful when communicating with remote APIs. The other properties are mainly useful if you're developing an application for which geolocation is a core component of its standard functionality, such as apps that use this data to create a flow of information contextualized to the geolocation data. The accuracy property is the most important of these additional features, because as an application developer you typically won't know which particular sensor is giving you the location, and you can use the accuracy property as a range in your queries to external services.

There are several APIs that allow you to discover interesting data related to a place; among these, the most interesting are the Google Places API and the Foursquare API. The Google Places and Foursquare online documentation is very well organized and is the right place to start if you want to dig deeper into these topics. You can access the Google Places docs at https://developers.google.com/maps/documentation/javascript/places and Foursquare at https://developer.foursquare.com/. The itinero reference app for this article implements both APIs.

In the next example, you will look at how to integrate Google Places inside the RequireJS app. In order to include the Google Places API inside an app, all you have to do is add the libraries parameter to the Google Maps API call. The resulting URL should look similar to http://maps.google.com/maps/api/js?key=SECRET_KEY&sensor=true&libraries=places.

The itinero app lets users create and plan a trip with friends. Once the user provides the name of the trip, the name of the country to be visited, and the trip mates and dates, it's time to start selecting the travel, eat, and sleep options. When the user selects the Eat option, the Google Places data provider will return bakeries, take-out places, groceries, and so on, close to the trip's destination. The app will show on the screen a list of possible places the user can select to plan the trip. For a complete list of the types of place searches supported by the Google API, refer to the online documentation at https://developers.google.com/places/documentation/supported_types.
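Tying the accuracy property and the Places library together, the following sketch (an assumption based on the standard Maps JavaScript API, not code from the itinero app) performs a nearby search around the device, using the reported accuracy as a lower bound for the search radius. It assumes the map and position objects from the earlier examples and that the API was loaded with libraries=places:

```js
// A minimal sketch: searching for restaurants near the device.
var service = new google.maps.places.PlacesService(map);
service.nearbySearch({
    location: new google.maps.LatLng(position.coords.latitude,
                                     position.coords.longitude),
    // Use the position accuracy (in meters) as a minimum search radius.
    radius: Math.max(position.coords.accuracy, 500),
    types: ['restaurant']
}, function(results, status) {
    if (status === google.maps.places.PlacesServiceStatus.OK) {
        results.forEach(function(place) {
            console.log(place.name + ': ' + place.vicinity);
        });
    }
});
```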


Building a Hello World application

Packt
29 Oct 2013
6 min read
(For more resources related to this topic, see here.)

The Hello World application

In the previous sections, we saw how to set up environments for Sencha Touch development. Now let's start with the Hello World application.

First of all, create a new folder in your web server and name it sencha-touch-start. Create a subfolder lib inside this folder; in it we will store our Sencha Touch resources. Create two more subfolders inside the lib folder and name them js and css respectively. Copy the sencha-touch-all.js file from the SDK we downloaded to the lib/js folder, and copy the sencha-touch.css file from the SDK to the lib/css folder. Now, create a new file in the sencha-touch-start folder, name it index.html, and add the following code snippet to it:

```html
<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8">
    <title>Hello World</title>
    <script src="lib/js/sencha-touch-all.js" type="text/javascript"></script>
    <link href="lib/css/sencha-touch.css" rel="stylesheet" type="text/css" />
</head>
<body>
</body>
</html>
```

Now create a new file in the sencha-touch-start folder, name it app.js, and add the following code snippet to it:

```js
Ext.application({
    name: 'HelloWorld',
    launch: function () {
        var panel = Ext.create('Ext.Panel', {
            fullscreen: true,
            html: 'Welcome to Sencha Touch'
        });
        Ext.Viewport.add(panel);
    }
});
```

Add a link to the app.js file in the index.html page, just as we did for sencha-touch-all.js and sencha-touch.css:

```html
<script src="app.js" type="text/javascript"></script>
```

Here in the code, Ext.application({..}) creates an instance of the Ext.Application class and initializes our application. The name property defines the name of our application. The launch property defines what an application should do when it starts; this property should always be set to a function inside which we add our code to initialize the application. In this function, we create a panel with Ext.create and add it to Ext.Viewport. Ext.Viewport is automatically created and initialized by the Sencha Touch framework, and acts as a base container that holds the other components of the application.

At this point, your application folder structure should look like this: Now run the application in the browser and you should see the following screen: If your application does not work, please check your web server; it should be turned on, and the steps mentioned earlier should be repeated.

Now we will go through some of the most important features and configurations of Sencha Touch. These are required to build real-world Sencha Touch applications.

Introduction to layouts

Layouts give a developer a number of options to arrange components inside the application. Sencha Touch offers the following four basic layouts (a quick sketch of the card layout follows the hbox example below):

- fit
- hbox
- vbox
- card

hbox and vbox layouts arrange items horizontally and vertically, respectively. Let's modify our previous example by adding hbox and vbox layouts to it. Modify the code in the launch function of app.js as follows:

```js
var panel = Ext.create('Ext.Panel', {
    fullscreen: true,
    layout: 'hbox',
    items: [
        {
            xtype: 'panel',
            html: 'BOX1',
            flex: 1,
            style: { 'background-color': 'blue' }
        },
        {
            xtype: 'panel',
            html: 'BOX2',
            flex: 1,
            style: { 'background-color': 'red' }
        },
        {
            xtype: 'panel',
            html: 'BOX3',
            flex: 1,
            style: { 'background-color': 'green' }
        }
    ]
});
Ext.Viewport.add(panel);
```

In the preceding code snippet, we specified the layout for the panel by setting the layout: 'hbox' property and added three items to it.
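The fit and card layouts from the list above don't get a full example in this article, so here is a brief sketch of the card layout (an assumption based on the standard Sencha Touch API, not code from this tutorial):

```js
// A minimal sketch: a card layout stacks its items and shows one
// at a time; setActiveItem switches between them by index or reference.
var cards = Ext.create('Ext.Panel', {
    fullscreen: true,
    layout: 'card',
    items: [
        { xtype: 'panel', html: 'Card 1' },
        { xtype: 'panel', html: 'Card 2' }
    ]
});
Ext.Viewport.add(cards);
cards.setActiveItem(1); // show the second card
```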
Another important configuration to note in the hbox snippet is flex. The flex configuration is unique to the hbox and vbox layouts. It controls how much space the component will take up, proportionally, in the overall layout. Here, we have specified flex: 1 for all the child containers; that means the available space of the main container will be divided equally in a 1:1:1 ratio among the three children. For example, with a vbox layout and a main container 150 px tall, each child container would be 50 px tall. The size of the main container depends on the browser window, so it adjusts itself automatically. This is how the Sencha Touch adaptive layout works; we will see this in detail in later sections. If you run the preceding code example in your browser, you should see the following screen: Also, we can change the layout to vbox by setting layout: 'vbox', and you should see the following screen:

When we specify a fit layout, a single item will automatically expand to cover the whole space of the container. If we add more than one item and specify only the fit layout, only the first item will be visible at a time. The card layout arranges items in a stack of cards, and only one item is visible at a time, but we can switch between items using the setActiveItem function. We will see this in detail in a later section.

Panel – a basic container

We have already mentioned the word "panel" in previous examples. It's the basic container component of the Sencha Touch framework. It's basically used to hold items and arrange them in a proper layout by adding the layout configuration. Besides this, it is also used for overlays. Overlays are containers that float over your application and can be positioned relative to other components.

Create another folder in your web server, name it panel-demo, and copy all the files and folders from the sencha-touch-start folder of the previous example. Modify the title in the index.html file:

```html
<title>Panel Demo</title>
```

Modify app.js as follows:

```js
Ext.application({
    name: 'PanelDemo',
    launch: function () {
        var panel = Ext.create('Ext.Panel', {
            fullscreen: true,
            items: [
                {
                    xtype: 'button',
                    text: 'Show Overlay',
                    listeners: {
                        tap: function(button) {
                            var overlay = Ext.create('Ext.Panel', {
                                height: 100,
                                width: 300,
                                html: 'Panel as Overlay'
                            });
                            overlay.showBy(button);
                        }
                    }
                }
            ]
        });
        Ext.Viewport.add(panel);
    }
});
```

In the preceding code snippet, we added a button as the item in the panel and added listeners for it. We bind the tap event of the button to a function: on tap, we create another panel as an overlay and show it using overlay.showBy(button).

Summary

This article provided details on building a Hello World application, which gave you an introduction to the most used components and features of Sencha Touch in real-world applications.

Resources for Article:

Further resources on this subject:

- Creating a Simple Application in Sencha Touch [Article]
- Sencha Touch: Layouts Revisited [Article]
- Sencha Touch: Catering Form Related Needs [Article]


Social Networks

Packt
29 Oct 2013
15 min read
(For more resources related to this topic, see here.)

One window to rule them all

Since we want to give our users the ability to post their messages on multiple social networks at once, it makes perfect sense to keep our whole application in a single window. It will be comprised of the following sections:

- The top section of the window will contain labels and a text area for message input. The text area will be limited to 140 characters in order to comply with Twitter's message limitation. As the user types his or her message, a label showing the number of characters will be updated.
- The bottom section of the window will use an encapsulating view that will contain multiple image views. Each image view will represent a social network to which the application will post the messages (in our case, Twitter and Facebook), but we can easily add more networks that will have their representation in this section. Each image view acts as a toggle button to select whether the message will be sent to a particular social network or not.
- Finally, in the middle of those two sections, we will add a single button that will be used to publish the message from the text area.

Code while it is hot!

With our design in hand, we can now move on to create our user interface. Our entire application will be contained in a single window, so this is the first thing we will create. We will give it a title as well as a linear background gradient that changes from purple at the top to black at the bottom of the screen. Since we will add other components to it, we will keep its reference in the win variable:

```js
var win = Ti.UI.createWindow({
    title: 'Unified Status',
    backgroundGradient: {
        type: 'linear',
        startPoint: { x: '0%', y: '0%' },
        endPoint: { x: '0%', y: '100%' },
        colors: [
            { color: '#813eba' },
            { color: '#000' }
        ]
    }
});
```

The top section

We will then add a new white label at the very top of the window. It will span 90% of the window's width, have a bigger font than the other labels, and have its text aligned to the left. Since this label will never be accessed later on, we will invoke our createLabel function right inside the add function of the window object:

```js
win.add(Ti.UI.createLabel({
    text: 'Post a message',
    color: '#fff',
    top: 4,
    width: '90%',
    textAlign: Ti.UI.TEXT_ALIGNMENT_LEFT,
    font: { fontSize: '22sp' }
}));
```

We will now move on to create the text area where our users will enter their messages. It will be placed right under the label previously created, occupy 90% of the screen's width, have a slightly bigger font, and have a thick, rounded, dark border. It will also be limited to 140 characters. It is important that we don't forget to add this newly created object to our main window:

```js
var txtStatus = Ti.UI.createTextArea({
    top: 37,
    width: '90%',
    height: 100,
    color: '#000',
    maxLength: 140,
    borderWidth: 3,
    borderRadius: 4,
    borderColor: '#401b60',
    font: { fontSize: '16sp' }
});
win.add(txtStatus);
```

The second label that complements our text area will be used to indicate how many characters are currently present in the user's message. So, we will now create a label just underneath the text area and assign it a default text value. It will span 90% of the screen's width and have its text aligned to the right. Since this label will be updated dynamically when the text area's value changes, we will keep its reference in a variable named lblCount.
As we did with our previous UI components, we will add our label to our main window:

```js
var lblCount = Ti.UI.createLabel({
    text: '0/140',
    top: 134,
    width: '90%',
    color: '#fff',
    textAlign: Ti.UI.TEXT_ALIGNMENT_RIGHT
});
win.add(lblCount);
```

The last control in the top section will be our Post button. It will be placed right under the text area and centered horizontally:

```js
var btnPost = Ti.UI.createButton({
    title: 'Post',
    top: 140,
    width: 150
});
win.add(btnPost);
```

By not specifying any left or right property, the component is automatically centered. This is pretty useful to keep in mind while designing our user interfaces, as it frees us from having to do calculations in order to center something on the screen.

Staying within the limits

Even though the text area's maxLength property will ensure that the message length does not exceed the limit we have set, we need to give our users feedback as they are typing their message. To achieve this, we will add an event listener on the change event of our text area. Every time the text area's content changes, we will update the lblCount label with the number of characters the message contains:

```js
txtStatus.addEventListener('change', function(e) {
    lblCount.text = e.value.length + '/140';
```

We will also add a condition to check whether our user's message is close to reaching its limit. If that is the case, we will change the label's color to red; if not, it will return to its original color:

```js
    if (e.value.length > 120) {
        lblCount.color = 'red';
    } else {
        lblCount.color = 'white';
    }
```

Our last conditional check enables the Post button only if there is actually a message to be posted. This prevents our users from posting empty messages:

```js
    btnPost.enabled = !(e.value.length === 0);
});
```

Setting up our Post button

Probably the most essential component in our application is the Post button. We won't be able to post anything online (yet) by clicking on it, as there are things we still need to add. We will add an event listener for the click event on the Post button. If the on-screen keyboard is displayed, we will call the blur function of the text area in order to hide it:

```js
btnPost.addEventListener('click', function() {
    txtStatus.blur();
```

Also, we will reset the value of the text area and the character count on the label, so that the interface is ready for a new message:

```js
    txtStatus.value = '';
    lblCount.text = '0/140';
});
```

The bottom section

With all our message input mechanisms in place, we will now move on to our bottom section. All components from this section will be contained in a view placed at the bottom of the screen. It will span 90% of the screen's width, and its height will adapt to its content. We will store its reference in a variable named bottomView for later use:

```js
var bottomView = Ti.UI.createView({
    bottom: 4,
    width: '90%',
    height: Ti.UI.SIZE
});
```

We will then create our toggle switches for each social network that our application interacts with. Since we want something sexier than regular switch components, we will create our own switch using a regular image view. Our first image view will be used to toggle the use of Facebook. It will have a dark blue background (similar to Facebook's logo) and a background image showing Facebook's logo against a gray background, so that it appears disabled. It will be positioned to the left of its parent view and have a borderRadius value of 4 in order to give it a rounded aspect.
We will keep its reference in the fbView variable and then add this image view to our bottom view:

```js
var fbView = Ti.UI.createImageView({
    backgroundColor: '#3B5998',
    image: 'images/fb-logo-disabled.png',
    borderRadius: 4,
    width: 100,
    left: 10,
    height: 100
});
bottomView.add(fbView);
```

We will create a similar image view (in almost every way), but this time for Twitter, so the background color and the image will be different. Also, it will be positioned to the right of our container view. We will store its reference in the twitView variable and then add it to our bottom view:

```js
var twitView = Ti.UI.createImageView({
    backgroundColor: '#9AE4E8',
    image: 'images/twitter-logo-disabled.png',
    borderRadius: 4,
    width: 100,
    right: 10,
    height: 100
});
bottomView.add(twitView);
```

Last but not least, it is imperative that we do not forget to add our bottomView container object to our window. Also, we will open the window so that our users can interact with it:

```js
win.add(bottomView);
win.open();
```

What if the user rotates the device?

At this stage, if our user were to rotate the device (to landscape), nothing would happen on the screen. The reason is that we have not taken any action to make our application compatible with landscape mode. In many cases, this would require changes to how the user interface is created depending on the orientation. But since our application is fairly simple, and most of our layout relies on percentages, we can activate landscape mode without any modification to our code.

To activate landscape mode, we will update the orientations section of our tiapp.xml configuration file. It is mandatory to have at least one orientation present in this section (it doesn't matter which one it is). We want our users to be able to use the application no matter how they hold their device:

```xml
<iphone>
    <orientations device="iphone">
        <orientation>Ti.UI.PORTRAIT</orientation>
        <orientation>Ti.UI.UPSIDE_PORTRAIT</orientation>
        <orientation>Ti.UI.LANDSCAPE_LEFT</orientation>
        <orientation>Ti.UI.LANDSCAPE_RIGHT</orientation>
    </orientations>
</iphone>
```

By default, no changes are required for Android applications, since the default behavior supports orientation changes. There are, of course, ways to limit orientation in the manifest section, but this subject falls outside the scope of this article.

See it in action

We have now implemented the basic user interface and can test whether it behaves as we anticipated. We will click on the Run button from the App Explorer tab, as we did many times before. We now have our text area at the top with our two big image views at the bottom, each with a social network's logo. We can already test the message entry (with the character counter incrementing) and the Post button.

Now if we rotate the iOS simulator (or the Android emulator), we can see that our layout adapts well to landscape mode. To rotate the iOS simulator, use the Hardware menu, or Cmd-Left and Cmd-Right on the keyboard. The Android emulator has no such menu, but you can change the orientation using Ctrl + F12.

The reason this all fits so well is that most of our dimensions are expressed as percentages. That means our components adapt to and use the available space on the screen. Also, we positioned the bottom view using the bottom property, which means it sticks to the bottom of the screen no matter how tall the screen is.
There is a module for that

Since we don't want to interact with the social network through manual asynchronous HTTP requests, we will leverage the Facebook native module provided with Titanium. Since it comes with the Titanium SDK, there is no need to download or copy any file. All we need to do is add a reference (one for each target platform) in the modules section of our tiapp.xml file as follows:

```xml
<modules>
    <module platform="android">facebook</module>
    <module platform="iphone">facebook</module>
</modules>
```

Also, we need to add the following property at the end of our configuration file, replacing the FACEBOOK_APPID parameter with the application ID that was provided when we created our app online:

```xml
<property name="ti.facebook.appid">[FACEBOOK_APPID]</property>
```

Why do we have to reference a module even though it comes bundled with the Titanium framework? Mostly, it is to avoid framework bloat; it is safe to assume that most applications developed using Titanium won't require interaction with Facebook. This is the reason for having it in a separate module that can be loaded on demand.

Linking our mobile app to our Facebook app

With our Facebook module loaded, we will now populate the necessary properties before making any call to the network. We need to set the appid property in our code; since we have already defined it as a property in our tiapp.xml file, we can access it through the Properties API. This is a neat way to externalize application parameters, preventing us from hardcoding them in our JavaScript code:

```js
fb.appid = Ti.App.Properties.getString('ti.facebook.appid');
```

We will also set the permissions that we will need while interacting with the server (in this case, we only want to publish messages on the user's wall):

```js
fb.permissions = ['publish_actions'];
```

It is important to set the appid and permissions properties before calling the authorize function. This makes sense, since we want Facebook to authorize our application with a defined set of permissions from the get-go.

Allowing our user to log in and log out at the click of a button

We want our users to be able to connect to (and disconnect from) a social network at will, just by pressing the same view on the screen. To achieve this, we will create a function called toggleFacebook that has a single parameter:

```js
function toggleFacebook(isActive) {
```

If we want the function to make the service active, we verify whether the user is already logged in to Facebook; if not, we ask Facebook to authorize the application using the function of the same name. If the parameter indicates that we want to make the service inactive, we log out from Facebook altogether:

```js
    if (isActive) {
        if (!fb.loggedIn) {
            fb.authorize();
        }
    } else {
        fb.logout();
    }
}
```

Now, all we need to do is create an event listener on the click event of our Facebook image view and simply toggle between the two states depending on whether the user is logged in or not:

```js
fbView.addEventListener('click', function() {
    toggleFacebook(!fb.loggedIn);
});
```

The authorize function prompts the user to log in (if he or she is not already logged in) and authorize the application. It makes sense that Facebook requires user validation before delegating the right to post something on the user's behalf.

Handling responses from Facebook

We have now completely implemented the Facebook login/logout mechanism in our application, but we still need to provide some feedback to our users.
The Facebook module provides two event listeners that allow us to track when our user has logged in or out.

In the login event listener, we will check whether the user logged in successfully. If he or she did, we will update the image view's image property with the colored logo. If there was any error during authentication, or if the operation was simply cancelled, we will show an alert, as given in the following code:

fb.addEventListener('login', function(e) {
    if (e.success) {
        fbView.image = 'images/fb-logo.png';
    } else if (e.error) {
        alert(e.error);
    } else if (e.cancelled) {
        alert("Canceled");
    }
});

In the logout event listener, we will update the image view's image property with the grayed out Facebook logo, using the following code:

fb.addEventListener('logout', function(e) {
    fbView.image = 'images/fb-logo-disabled.png';
});

Posting our message on Facebook

Since we are now connected to Facebook, and our application is authorized to post, we can post our messages on this particular social network. To do this, we will create a new function named postFacebookMessage, with a single parameter: the message string to be posted:

function postFacebookMessage(msg) {

Inside this function, we will call the requestWithGraphPath function from the Facebook native module. This function can look fairly complex at first glance, but we will go over each parameter in detail. The parameters are as follows:

- The Graph API path requested (my feed)
- A dictionary object containing all of the properties required by the call (just the message)
- The HTTP method used for this call (POST)
- The callback function invoked when the request completes (this function simply checks the result from the call; in case of error, an alert is displayed)

    fb.requestWithGraphPath('me/feed', { message: msg }, "POST",
        function(e) {
            if (e.success) {
                Ti.API.info("Success! " + e.result);
            } else {
                if (e.error) {
                    alert(e.error);
                } else {
                    alert("Unknown result");
                }
            }
        }
    );
}

We will then update the click event handler for the Post button and call the postFacebookMessage function if the user is logged in to Facebook:

btnPost.addEventListener('click', function() {
    if (fb.loggedIn) {
        postFacebookMessage(txtStatus.value);
    }
    ...
});

With this, our application can post messages on our user's Facebook wall.

Summary

In this article, we learned how to create server-side applications on two popular social networking websites. We learned how to interact with those networks in terms of API and authentication. We also learned how to handle device rotation, as well as how to use the native platform settings windows. Finally, we covered Titanium menus and activities.

Resources for Article:

Further resources on this subject:

Appcelerator Titanium: Creating Animations, Transformations, and Understanding Drag-and-drop [Article]
augmentedTi: The application architecture [Article]
Basic Security Approaches [Article]

Windows Phone 8: Principles for UI/UX and Bindings

Packt
28 Oct 2013
12 min read
(For more resources related to this topic, see here.)

Principles for UI/UX

There are many dos and don'ts in Windows Phone UI development. It is good to remember that we have a screen of limited size, where the elements should be placed comfortably.

Fast and fluid

This is the main goal of the Windows Phone UX. Starting with the phone menu, we can switch between tiles and application lists; if we touch an element, its action is performed very fast. While creating our own software, we have to remember to keep it responsive to user interaction; even if we need to load some data or connect to a server, the user has to be informed about the loading. Going through our application, the user should feel that he or she is in control and can touch every element. Many controls provide animation out of the box, and we should use it; we want to impress our users with motion in our application. The application has to be alive!

Grid system

The following design pattern provides consistency and a high-quality UX to Windows Phone users. It helps in organizing the application elements. A grid system provides unity across applications and will make our application look similar to Windows Store applications. By a grid layout, we mean arranging the content along grid lines. As we can see, the grid is where the application design starts. Each square is 25 x 25 pixels and is located 12 pixels from the next square. In order to provide the best user experience in our application, we need to ensure that the elements are arranged appropriately. Looking at the left-hand side of the image, we find that each big tile contains 6 x 6 squares with 12 pixel margins.

Windows is one

When using a grid system and other design patterns for Windows Phone, porting to the Windows Store will be possible and can be done more easily. Unfortunately, at the moment there are no common markets for Windows Phone applications and the Windows Store (applications for Windows 8 and Windows RT). The current solution is to create the application in a way that allows portability to other platforms, such as separating the application's logic from the UI. Even then, some specific changes have to be done.
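One common way to isolate those platform-specific changes in shared logic is conditional compilation. The following is a minimal sketch, not a prescribed pattern from this article: the ShareHelper class and ShareMessage method are names invented for this example, while WINDOWS_PHONE is the conditional compilation symbol defined by default in Windows Phone projects:

public static class ShareHelper
{
    // Shared application logic lives in one file; only this
    // branch differs per build target.
    public static void ShareMessage(string message)
    {
#if WINDOWS_PHONE
        // Windows Phone: launch the built-in share task.
        var task = new Microsoft.Phone.Tasks.ShareStatusTask();
        task.Status = message;
        task.Show();
#else
        // Windows 8/RT: open the Share charm instead (the data itself
        // is supplied through the DataTransferManager.DataRequested event).
        Windows.ApplicationModel.DataTransfer.DataTransferManager.ShowShareUI();
#endif
    }
}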
Controls design best practices

Button

- Use a maximum of two words in the Button content; it can be set dynamically, but should never be more than two words
- Use the system font to define the content
- Use a minimum height of 40 pixels

CheckBox

- A CheckBox text should present a clear choice to the user
- If there are many choices, use the ScrollViewer and StackPanel controls
- It is recommended to use a single line, or a maximum of two lines, of content text
- If the meaning of the checkbox is not clear, use RadioButton or ListBox instead

HyperlinkButton

- Don't place two hyperlinks close to each other, because it will make it difficult to select either one
- Hyperlink text should not be longer than two words
- If some action can be reached by tapping text content, use HyperlinkButton instead of Button

Image

- The size (resolution) of the picture has to be really close or identical to the Image control size
- Use gestures if possible
- The Image control displays JPEG or PNG images
- Provide good quality, non-pixelated images

ListBox

- Use ListBox for long lists of items; for smaller lists, use RadioButton
- ListBox should be responsive, so that it provides an immediate visual reaction to the user's action
- Use a sans-serif font of at least 12 pixels

MediaElement

- Do not show too many controls on one page
- Do not allow a situation where more than one media source (audio or video) is playing simultaneously

Panorama

- The Panorama control handles only portrait orientation
- If you are using ApplicationBar with Panorama, make sure that the ApplicationBar control is in minimized mode
- Do not use too many sections (PanoramaItems); use a maximum of 5 sections to avoid torturing the user with swiping
- A Panorama title can contain an image and/or text
- Use the system font; an exception is when your brand uses a custom font
- The first PanoramaItem should contain a left-aligned title
- For a section title, use plain text
- Do not use Pivot in a PanoramaItem or Panorama in a PivotItem
- Horizontal scrolling can be difficult because of the default behavior of Panorama; use vertical scrolling instead
- Good quality background photography makes a very good impression on users, but ensure that all content remains readable
- The background should be low contrast and dark
- The background image can be at most 1024 x 800 pixels and at least 480 x 800 pixels to provide good performance

Pivot

- Do not place a Pivot in a PanoramaItem
- Use a maximum of 4 Pivot items
- PivotItems should not contain completely different actions; their functionality should be related
- Do not use input controls, because they disrupt the Pivot flick; if you want to create a form with editing controls, use a separate page that contains the form
- Do not use controls that need swipe or scroll inside a Pivot, such as Map, ToggleButton, or Slider, because the Pivot's default gesture handling can make it difficult for the user to use them

ProgressBar

- It is highly recommended to use a label that describes the state of the ProgressBar control: loading, launching, or failed

RadioButton

- Avoid customization
- Provide 2 short words as content text, with a font of at least 12 pixels
- Use ListBox if this control has more than 10 decisions to offer
- Bring the user's attention to the selected RadioButton control by making the unselected ones inactive

ScrollViewer

- Avoid creating a situation where the user has too much content to scroll

Slider

- Make sure that the Slider control is of an appropriate size for comfortable use
- Do not use Slider as a progress indicator
- Do not put Slider too close to the edge of the screen, because it can make it difficult to select a value
- It shouldn't be used within Panorama or Pivot, because Slider uses dragging gestures, which are difficult to use in those containers

TextBlock

- Make sure that the text in the TextBlock control looks clear and is readable

TextBox

- To improve the user's experience, update the application content as the user enters text into the TextBox, for example, while filtering data
- It is not possible to scroll rich text inputs
- When it is likely that the user will enter a new value into the TextBox, select all its content on focus

LongListSelector

- LongListSelector is used for a large number of data items
- Use a sans-serif font of at least 12 pixels

Fonts

From the GUI and UX point of view, it is critical to make text clear and legible. All text should be properly contrasted with the background. The font family used in Windows Phone is sans serif, which is widely used in web and mobile applications. The sans-serif family contains many fonts, so which one should we use? There is no simple answer to that; it depends on the control and the context in which we use the text. However, it is recommended to use the Segoe UI, Calibri, or Cambria font types:

- Segoe UI: This is best used in UI elements such as buttons, checkboxes, datepickers, and others like these
- Calibri: This is best used for input and content text, such as e-mail messages. When using this font, it is good to set the font size to 13
- Cambria: This font type is recommended for articles or any other big pieces of text; depending on the content section, we can use 9, 11, or 20 pt Cambria font

An example of each of these fonts is as follows:

11 pt Segoe UI font
13 pt Calibri font
11 pt Cambria font

Tiles and notifications

A friend of mine who owns a company that creates mobile applications once said to me that 50% of an application's success depends on how nice and good looking the icon/tile is. After thinking about it, I imagine he was right. The tile is the first thing that a user sees when starting to use our application (remember what I said about first impressions?). A Windows Phone tile is not only a static icon that represents our application; it can be live and show some simplified content as well. A default tile is a static icon; it can display some information, such as a message count, after an update, and it returns to the default tile only after another update. An application tile can be pinned to the Start screen and set in one of 3 sizes:

- A small tile is built from 3 x 3 grid units. It is half the width and height of a medium tile. It was introduced in Windows Phone 8 and the 7.8 update of Windows Phone 7.
- Only a medium-size tile was available in Windows Phone 7. Every application pinned to the Start screen starts with a medium tile.
- Along with the small tile, a large tile was introduced in Windows Phone 8.

Why should we use a tile that updates (a live tile)? Because the tile is the front door of our application, and if it gives some notification to the user, it really says, "Hey! Come here and see what I've got for you!". An updated tile assures the users that our application is fresh and active, even if it has not been run for some time. If we want to allow the user to pin our application as a large/wide tile, we should provide a wide image, point to it in the application manifest, and mark the Supports wide size option.

There are some good practices that are worth following. If our application has content that is refreshed at least once every few days and can be interesting for the user, we should enable our application to use the wide tile; if not, it is not necessary. A short sketch of updating a live tile from code follows.
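The following is a minimal sketch using the standard Microsoft.Phone.Shell API; the title and count values are placeholders invented for this example:

using System.Linq;
using Microsoft.Phone.Shell;

// The first tile in ActiveTiles is always the application's default tile.
ShellTile tile = ShellTile.ActiveTiles.First();

FlipTileData data = new FlipTileData
{
    Title = "My App",            // placeholder title
    Count = 3,                   // badge number shown on the front of the tile
    BackTitle = "Latest",        // shown when the tile flips over
    BackContent = "3 new items"  // placeholder back-side text
};

tile.Update(data);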
Forget about the wide tile if the application doesn't use notifications. There are three available tile templates that will help us to create tiles:

- The Iconic template, which is mainly used for e-mail, messaging, RSS, and social networking apps
- The Flip template, which is used in applications that give the user a lot of information, such as a weather application
- The Cycle template for galleries and photo applications, which cycles 1 to 9 images in a tile; applicable only for medium and wide tiles

Bindings

Basically, binding is the mechanism that the whole MVVM pattern relies on. It is very powerful and is the "glue" between the view and the exposed fields and commands. The functionality that we want to implement is shown in the following diagram:

Model

We are going to use the Model class for storing data about users:

public class UserModel : INotifyPropertyChanged
{
    private string name;

    public string Name
    {
        get { return name; }
        set
        {
            name = value;
            RaisePropertyChanged();
        }
    }

    public string LastName { get; set { (…) } }
    public string Gender { get; set { (…) } }
    public DateTime DateOfBirth { get; set { (…) } }

    public event PropertyChangedEventHandler PropertyChanged;
    (…)
}

One thing that we still have to implement is raising the property changed event for all fields within this UserModel class.

ViewModel

Now, UserModel will be wrapped into UserViewModel. Our ViewModel will also implement the INotifyPropertyChanged interface for updating the View when the ViewModel's objects change:

public class UserViewModel : INotifyPropertyChanged
{
    public UserModel CurrentUser { get; set; }

    public string FullName
    {
        get
        {
            if (this.CurrentUser != null)
            {
                return string.Format("{0} {1}",
                    this.CurrentUser.Name, this.CurrentUser.LastName);
            }
            return "";
        }
    }

    // Bindings require a property rather than a public field.
    private readonly List<string> listOfPossibleGenders =
        new List<string>() { "Male", "Female" };
    public List<string> ListOfPossibleGenders
    {
        get { return listOfPossibleGenders; }
    }
    (…)
}

As we can see, UserModel is wrapped in two ways: the first is wrapping the entire Model, and the second is transforming some of its properties, as with FullName. The FullName property combines the Name and LastName values that come from the UserModel object.

View

I'm going to create a view, piece by piece, to show how things should be done. At the beginning, we should create a new UserControl object called UserView in the Views folder. We will do almost everything now in XAML, so we have to open the UserView.xaml file and add a few things. In the root node, we have to add a namespace for our ViewModels folder, for example, xmlns:models="clr-namespace:MyApp.ViewModels" (adjust the namespace to match your project). Because of this line, our ViewModels will be accessible in the XAML code:

<UserControl.DataContext>
    <models:UserViewModel/>
</UserControl.DataContext>

This sets the DataContext of our view in XAML and has its equivalent in code. If we wish to set DataContext in the code behind, we have to go to the constructor in the .cs file:

public UserView()
{
    InitializeComponent();
    var viewModel = new UserViewModel();
    this.DataContext = viewModel;
}

A better approach is to define the ViewModel instance in XAML, because we then have IntelliSense support in binding expressions for fields that are public in the ViewModel and the exposed model. As we can see, the ViewModel contains the CurrentUser property that will store and expose the user data to the view. It has to be public, and the thing that is missing here is the change notification in the CurrentUser setter. However, we already know how to do that, so it is not described.
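For completeness, here is a minimal sketch of what that missing notification could look like, using the standard CallerMemberName pattern available in Windows Phone 8 projects (it assumes using directives for System.ComponentModel and System.Runtime.CompilerServices; the backing field name is our own choice):

private UserModel currentUser;

public UserModel CurrentUser
{
    get { return currentUser; }
    set
    {
        currentUser = value;
        RaisePropertyChanged();            // raises "CurrentUser"
        RaisePropertyChanged("FullName");  // FullName depends on CurrentUser
    }
}

private void RaisePropertyChanged([CallerMemberName] string name = null)
{
    var handler = PropertyChanged;
    if (handler != null)
    {
        handler(this, new PropertyChangedEventArgs(name));
    }
}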
Now we can lay out the controls and bind them to these properties:

<StackPanel>
    <TextBlock Text="Name"></TextBlock>
    <TextBox Name="txtName"
             Text="{Binding CurrentUser.Name, Mode=TwoWay}">
    </TextBox>
    <TextBlock Text="Lastname"></TextBlock>
    <TextBox Name="txtLastName"
             Text="{Binding CurrentUser.LastName, Mode=TwoWay}">
    </TextBox>
    <TextBlock Text="Gender"></TextBlock>
    <ListBox Name="lstGender"
             ItemsSource="{Binding ListOfPossibleGenders}"
             SelectedItem="{Binding CurrentUser.Gender, Mode=TwoWay}">
    </ListBox>
</StackPanel>

The preceding example shows how to set a binding using the exposed model, as well as a list property that was implemented directly in the ViewModel. We did all of this without a single line of code behind! This is really good, because our sample is testable and it is very easy to write unit tests for it. We use the TwoWay binding mode, which automatically populates the control value; when the user edits the control value, the ViewModel property also gets updated.

Summary

Creating complex applications that are intuitive and simple to use is really hard without the knowledge of the UI and UX principles that were described in this article. We also covered a huge part of XAML: bindings.

Resources for Article:

Further resources on this subject:

Windows 8 with VMware View [Article]
How to Build a RSS Reader for Windows Phone 7 [Article]
Introducing the Windows Store [Article]

Gesture

Packt
25 Oct 2013
5 min read
(For more resources related to this topic, see here.)

Gestures

While there are ongoing arguments in the courts of America at the time of writing over who invented the likes of dragging images, it is without a doubt that a key feature of iOS is the ability to use gestures. To put it simply, when you tap the screen to start an app, or select a part of an image to enlarge it, or anything like that, you are using gestures. A gesture (in terms of iOS) is any touch interaction between the UI and the device. With iOS 6, there are six gestures the user has the ability to use. These gestures, along with brief explanations, are listed in the following table:

Class                        | Name and type                     | Gesture
UIPanGestureRecognizer       | PanGesture; Continuous type       | Pan images or over-sized views by dragging across the screen
UISwipeGestureRecognizer     | SwipeGesture; Continuous type     | Similar to panning, except it is a swipe
UITapGestureRecognizer       | TapGesture; Discrete type         | Tap the screen a number of times (configurable)
UILongPressGestureRecognizer | LongPressGesture; Discrete type   | Hold the finger down on the screen
UIPinchGestureRecognizer     | PinchGesture; Continuous type     | Zoom by pinching an area and moving your fingers in or out
UIRotationGestureRecognizer  | RotationGesture; Continuous type  | Rotate by moving your fingers in opposite directions

Gestures can be added by programming or via Xcode. The available gestures are listed in the following screenshot with the rest of the widgets on the right-hand side of the designer:

To add a gesture, drag the gesture you want to use under the view on the View bar (shown in the following screenshot):

Design the UI as you want and, while pressing the Ctrl key, drag the gesture to what you want to recognize. In my example, the object you want to recognize is anywhere on the screen. Once you have connected the gesture to what you want to recognize, you will see the configurable options of the gesture. The Taps field is the number of taps required before the Recognizer is triggered, and the Touches field is the number of points onscreen required to be touched for the Recognizer to be triggered. When you come to connect up the UI, the gesture must also be added.

Gesture code

When using Xcode, it is simple to code gestures. The class defined in the Xcode design for the tapping gesture is called tapGesture and is used in the following code:

private int tapped = 0;

public override void ViewDidLoad()
{
    base.ViewDidLoad();
    tapGesture.AddTarget(this, new Selector("screenTapped"));
    View.AddGestureRecognizer(tapGesture);
}

[Export("screenTapped")]
public void SingleTap(UIGestureRecognizer s)
{
    tapped++;
    lblCounter.Text = tapped.ToString();
}

There is nothing really amazing about the code; it just displays how many times the screen has been tapped. The Selector method is called by the code when the tap has been seen. The method name doesn't make any difference, as long as the Selector and Export names are the same.

Types

When the gesture types were originally described, they were given a type. The type reflects the number of messages sent to the Selector method. A discrete one generates a single message. A continuous one generates multiple messages, which requires the Selector method to be more complex. The complexity is added by the Selector method having to check the State of the gesture to decide what to do with which message, and whether the gesture has been completed.

Adding a gesture in code

It is not a requirement that Xcode be used to add a gesture.
To perform the same task in code as my preceding code did in Xcode is easy. The code will be as follows:

UITapGestureRecognizer tapGesture = new UITapGestureRecognizer()
{
    NumberOfTapsRequired = 1
};

The rest of the code from AddTarget onwards can then be used.

Continuous types

The following code, a Pinch Recognizer, shows a simple rescaling. There are a couple of other states that I'll explain after the code. The only difference in the designer code is that I have a UIImageView instead of a label, and a UIPinchGestureRecognizer class instead of a UITapGestureRecognizer class.

public override void ViewDidLoad()
{
    base.ViewDidLoad();
    uiImageView.Image = UIImage.FromFile("graphics/image.jpg")
        .Scale(new SizeF(160f, 160f));
    pinchGesture.AddTarget(this, new Selector("screenTapped"));
    uiImageView.AddGestureRecognizer(pinchGesture);
}

[Export("screenTapped")]
public void SingleTap(UIGestureRecognizer s)
{
    UIPinchGestureRecognizer pinch = (UIPinchGestureRecognizer)s;
    float scale = 0f;
    PointF location;
    switch (s.State)
    {
        case UIGestureRecognizerState.Began:
            Console.WriteLine("Pinch begun");
            location = s.LocationInView(s.View);
            break;
        case UIGestureRecognizerState.Changed:
            Console.WriteLine("Pinch value changed");
            scale = pinch.Scale;
            uiImageView.Image = UIImage.FromFile("graphics/image.jpg")
                .Scale(new SizeF(160f, 160f), scale);
            break;
        case UIGestureRecognizerState.Cancelled:
            Console.WriteLine("Pinch cancelled");
            uiImageView.Image = UIImage.FromFile("graphics/image.jpg")
                .Scale(new SizeF(160f, 160f));
            scale = 0f;
            break;
        case UIGestureRecognizerState.Recognized:
            Console.WriteLine("Pinch recognized");
            break;
    }
}

Other UIGestureRecognizerState values

The following table gives a list of other Recognizer states:

State       | Description                                   | Notes
Possible    | Default state; gesture hasn't been recognized | Used by all gestures
Failed      | Gesture failed                                | No messages sent for this state
Translation | Direction of pan                              | Used in the pan gesture
Velocity    | Speed of pan                                  | Used in the pan gesture

In addition to these, it should be noted that discrete types only use the Possible and Recognized states.

Summary

Gestures certainly can add a lot to your apps. They can enable the user to speed around an image, move about a map, enlarge and reduce, as well as select areas of anything on a view. Their flexibility underpins why iOS is recognized as being an extremely versatile device for users to manipulate images, video, and anything else on-screen.

Resources for Article:

Further resources on this subject:

Creating and configuring a basic mobile application [Article]
So, what is XenMobile? [Article]
Building HTML5 Pages from Scratch [Article]

Creating Quizzes

Packt
24 Oct 2013
9 min read
(For more resources related to this topic, see here.) Creating a short-answer question For this task, we will create a card to host an interface for a short-answer question. This type of question allows the user to input their answers via the keyboard. Evaluating this type of answer can be especially challenging, since there could be several correct answers and users are prone to make spelling mistakes. Engage Thrusters Create a new card and name it SA. Copy the Title label from the Main card and paste it onto the new SA card. This will ensure the title label field has a consistent format and location. Copy the Question label from the TF card and paste it onto the new SA card. Copy the Progress label from the TF card and paste it onto the new SA card. Copy the Submit button from the Sequencing card and paste it onto the new SA card. Drag a text entry field onto the card and make the following modifications: Change the name to answer. Set the size to 362 by 46. Set the location to 237, 185. Change the text size to 14. We are now ready to program our interface. Enter the following code at the card level: on preOpenCard global qNbr, qArray # Section 1 put 1 into qNbr # Section 2 put "" into fld "question" put "" into fld "answer" put "" into fld "progress" # Section 3 put "What farm animal eats shrubs, can be eaten, and are smaller than cows?" into qArray["1"]["question"] put "goat" into qArray["1"]["answer"] -- put "What is used in pencils for writing?" into qArray["2"]["question"] put "lead" into qArray["2"]["answer"] -- put "What programming language are you learning" into qArray["3"] ["question"] put "livecode" into qArray["3"]["answer"] end preOpenCard In section 1 of this code, we reset the question counter (qNbr) variable to 1. Section 2 contains the code to clear the question, answer, and progress fields. Section 3 populates the question/answer array (qArray). As you can see, this is the simplest array we have used. It only contains a question and answer pairing for each row. Our last step for the short answer question interface is to program the Submit button. Here is the code for that button: on mouseUp global qNbr, qArray local tResult # Section 1 if the text of fld "answer" contains qArray[qNbr]["answer"] then put "correct" into tResult else put "incorrect" into tResult end if #Section 2 switch tResult case "correct" if qNbr < 3 then answer "Very Good." with "Next" titled "Correct" else answer "Very Good." with "Okay" titled "Correct" end if break case "incorrect" if qNbr < 3 then answer "The correct answer is: " & qArray[qNbr]["answer"] & "." with "Next" titled "Wrong Answer" else answer "The correct answer is: " & qArray[qNbr]["answer"] & "." with "Okay" titled "Wrong Answer" end if break end switch # Section 3 if qNbr < 3 then add 1 to qNbr nextQuestion else go to card "Main" end if end mouseUp Our Submit button script is divided into three sections. The first section (section 1) checks to see if the answer contained in the array (qArray) is part of the answer the user entered. This is a simple string comparison and is not case sensitive. Section 2 of this button's code contains a switch statement based on the local variable tResult. Here, we provide the user with the actual answer if they do not get it right on their own. The final section (section 3) navigates to the next question or to the main card, depending upon which question set the user is on. Objective Complete - Mini Debriefing We have successfully coded our short answer quiz card. 
Our approach was to use a simple question and data input design with a Submit button. Your user interface should resemble the following screenshot: Creating a picture question card Using pictures as part of a quiz, poll, or other interface can be fun for the user. It might also be more appropriate than simply using text. Let's create a card that uses pictures as part of a quiz. Engage Thrusters Create a new card and name it Pictures. Copy the Title label from the Main card and paste it onto the new Pictures card. This will ensure the title label field has a consistent format and location. Copy the Question label from the TF card and paste it onto the new Pictures card. Copy the Progress label from the TF card and paste it onto the new Pictures card. Drag a Rectangle Button onto the card and make the following customizations: Change the name to picture1. Set the size to 120 x 120. Set the location to 128, 196. Drag a second Rectangle Button onto the card and make the following customizations: Change the name to picture2. Set the size to 120 x 120. Set the location to 336, 196. Upload the following listed files into your mobile application's Image Library. This LiveCode function is available by selecting the Development pull-down menu, then selecting Image Library. Near the bottom of the Image Library dialog is an Import File button. Once your files are uploaded, take note of the ID numbers assigned by LiveCode: q1a1.png q1a2.png q2a1.png q2a2.png q3a1.png q3a2.png With our interface fully constructed, we are now ready to add LiveCode script to the card. Here is the code you will enter at the card level: on preOpenCard global qNbr, qArray # Section 1 put 1 into qNbr set the icon of btn "picture1" to empty set the icon of btn "picture2" to empty # Section 2 put "" into fld "question" put "" into fld "progress" # Section 3 put "Which puppy is real?" into qArray["1"]["question"] put "2175" into qArray["1"]["pic1"] put "2176" into qArray["1"]["pic2"] put "q1a1" into qArray["1"]["answer"] -- put "Which puppy looks bigger?" into qArray["2"]["question"] put "2177" into qArray["2"]["pic1"] put "2178" into qArray["2"]["pic2"] put "q2a2" into qArray["2"]["answer"] -- put "Which scene is likely to make her owner more upset?" into qArray["3"]["question"] put "2179" into qArray["3"]["pic1"] put "2180" into qArray["3"]["pic2"] put "q3a1" into qArray["3"]["answer"] end preOpenCard In section 1 of this code, we set the qNbr to 1. This is our question counter. We also ensure that there is no image visible in the two buttons. We do this by setting the icon of the buttons to empty. In section 2, we empty the contents of the two onscreen fields (Question and Progress). In the third section, we populate the question set array (qArray). Each question has an answer that corresponds with the filename of the images you added to your stack in the previous step. The ID numbers of the six images you uploaded are also added to the array, so you will need to refer to your notes from step 7. Our next step is to program the picture1 and picture2 buttons. Here is the code for the picture1 button: on mouseUp global qNbr, qArray # Section 1 if qArray[qNbr]["answer"] contains "a1" then if qNbr < 3 then answer "Very Good." with "Next" titled "Correct" else answer "Very Good." with "Okay" titled "Correct" end if else if qNbr < 3 then answer "That is not correct." with "Next" titled "Wrong Answer" else answer "That is not correct." 
with "Okay" titled "Wrong Answer" end if end if # Section 2 if qNbr < 3 then add 1 to qNbr nextQuestion else go to card "Main" end if end mouseUp In section 1 of our code, we check to see if the answer from the array contains a1, which indicates that the picture on the left is the correct answer. Based on the answer evaluation, one of two text feedbacks is provided to the user. The name of the button on the feedback dialog is either Next or Okay, depending upon which question set the user is currently on. The second section of this code routes the user to either the main card (if they finished all three questions) or to the next question. Copy the code you entered in the picture1 button and paste it onto the picture2 button. Only one piece of code needs to change. On the first line of the section 1 code, change the string from a1 to a2. That line of code should be as follows: if qArray[qNbr]["answer"] contains "a2" then Objective Complete - Mini Debriefing In just 9 easy steps, we created a picture-based question type that uses images we uploaded to our stack's image library and a question set array. Your final interface should look similar to the following screenshot: Adding navigational scripting In this task, we will add scripts to the interface buttons on the Main card. Engage Thrusters Navigate to the Main card. Add the following script to the true-false button: on mouseUp set the disabled of me to true go to card "TF" end mouseUp Add the following script to the m-choice button: on mouseUp set the disabled of me to true go to card "MC" end mouseUp Add the following script to the sequence button: on mouseUp set the disabled of me to true go to card "Sequencing" end mouseUp Add the following script to the short-answer button: on mouseUp set the disabled of me to true go to card "SA" end mouseUp Add the following script to the pictures button: on mouseUp set the disabled of me to true go to card "Pictures" end mouseUp The last step in this task is to program the Reset button. Here is the code for that button: on mouseUp global theScore, totalQuestions, totalCorrect # Section 1 set the disabled of btn "true-false" to false set the disabled of btn "m-choice" to false set the disabled of btn "sequence" to false set the disabled of btn "short-answer" to false set the disabled of btn "pictures" to false # Section 2 set the backgroundColor of grc "progress1" to empty set the backgroundColor of grc "progress2" to empty set the backgroundColor of grc "progress3" to empty set the backgroundColor of grc "progress4" to empty set the backgroundColor of grc "progress5" to empty # Section3 put 0 into theScore put 0 into totalQuestions put 0 into totalCorrect put theScore & "%" into fld "Score" end mouseUp There are three sections to this code. In section 1, we are enabling each of the buttons. In the second section, we are clearing out the background color of each of the five progress circles in the bottom-center of the screen. In the final section, section 3, we reset the score and the score display. Objective Complete - Mini Debriefing That is all there was to this task, seven easy steps. There are no visible changes to the mobile application's interface. Summary In this article, we saw how to create a couple of quiz apps for mobile such as short-answer questions and picture card questions. Resources for Article: Further resources on this subject: Creating and configuring a basic mobile application [Article] Creating mobile friendly themes [Article] So, what is XenMobile? [Article]

Creating User Interfaces

Packt
15 Oct 2013
6 min read
When creating an Android application, we have to be aware of the existence of multiple screen sizes and screen resolutions. It is important to check how our layouts are displayed in different screen configurations. To accomplish this, Android Studio provides functionality to change the layout preview when we are in design mode. We can find this functionality in the toolbar; the device definition used in the preview is Nexus 4 by default. Click on it to open the list of available device definitions. Try some of them. The difference between a tablet device and a device like the Nexus 4 is very noticeable. We should adapt the views to all the screen configurations our application supports, to ensure they are displayed optimally. The device definitions indicate the screen size in inches, the resolution, and the screen density. Android classifies screen densities as ldpi, mdpi, hdpi, xhdpi, and even xxhdpi:

- ldpi (low-density dots per inch): About 120 dpi
- mdpi (medium-density dots per inch): About 160 dpi
- hdpi (high-density dots per inch): About 240 dpi
- xhdpi (extra-high-density dots per inch): About 320 dpi
- xxhdpi (extra-extra-high-density dots per inch): About 480 dpi

The last dashboards published by Google show that most devices have high-density screens (34.3 percent), followed by xhdpi (23.7 percent) and mdpi (23.5 percent). Therefore, we can cover 81.5 percent of devices by testing our application using these three screen densities. Official Android dashboards are available at http://developer.android.com/about/dashboards.

Another issue to keep in mind is device orientation. Do we want to support landscape mode in our application? If the answer is yes, we have to test our layouts in landscape orientation. On the toolbar, click on the layout state option to change the mode from portrait to landscape, or from landscape to portrait. In case our application supports landscape mode and the layout does not display as expected in this orientation, we may want to create a variation of the layout. Click on the first icon of the toolbar, that is, the configuration option, and select the option Create Landscape Variation. A new layout will be opened in the editor. This layout has been created in the resources folder, under the directory layout-land, using the same name as the portrait layout: /src/main/res/layout-land/activity_main.xml. Now we can edit the new layout variation, perfectly conformed to landscape mode.

Similarly, we can create a variation of the layout for xlarge screens. Select the option Create layout-xlarge Variation. The new layout will be created in the layout-xlarge folder: /src/main/res/layout-xlarge/activity_main.xml. Android classifies actual screen sizes as small, normal, large, and xlarge:

- small: Screens classified in this category are at least 426 dp x 320 dp
- normal: Screens classified in this category are at least 470 dp x 320 dp
- large: Screens classified in this category are at least 640 dp x 480 dp
- xlarge: Screens classified in this category are at least 960 dp x 720 dp

A dp is a density-independent pixel, equivalent to one physical pixel on a 160 dpi screen. The last dashboards published by Google show that most devices have a normal screen size (79.6 percent). If you want to cover a bigger percentage of devices, test your application also using a small screen (9.5 percent), so the coverage will be 89.1 percent of devices.
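Besides providing alternative resource folders, the current screen configuration can also be inspected at runtime using the standard Android APIs. The following is a minimal sketch, for example inside an Activity:

import android.content.res.Configuration;
import android.util.DisplayMetrics;

// Inside an Activity (or anywhere a Context is available):
Configuration config = getResources().getConfiguration();

// Orientation: portrait or landscape.
boolean landscape =
        config.orientation == Configuration.ORIENTATION_LANDSCAPE;

// Size bucket: small, normal, large, or xlarge.
int sizeBucket = config.screenLayout & Configuration.SCREENLAYOUT_SIZE_MASK;
boolean xlarge = sizeBucket == Configuration.SCREENLAYOUT_SIZE_XLARGE;

// Density scale factor: 0.75 at ldpi, 1.0 at mdpi, 1.5 at hdpi, 2.0 at xhdpi.
DisplayMetrics metrics = getResources().getDisplayMetrics();
float density = metrics.density;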
To display multiple device configurations at the same time, click on the configuration option in the toolbar and select the option Preview All Screen Sizes, or click on Preview Representative Sample to open just the most important screen sizes. We can also delete any of the samples by clicking on it with the right mouse button and selecting the Delete option from the menu. Another useful action in this menu is the Save screenshot option, which allows us to take a screenshot of the layout preview. If we create some layout variations, we can preview all of them by selecting the option Preview Layout Versions.

Changing the UI theme

Layouts and widgets are created using the default UI theme of our project. We can change the appearance of the elements of the UI by creating styles. Styles can be grouped to create a theme, and a theme can be applied to a whole activity or application. Some themes are provided by default, such as the Holo style. Styles and themes are created as resources under the /src/res/values folder.

Open the main layout using the graphical editor. The selected theme for our layout is indicated in the toolbar: AppTheme. This theme was created for our project and can be found in the styles file (/src/res/values/styles.xml). Open the styles file and notice that this theme is an extension of another theme (Theme.Light). To customize our theme, edit the styles file. For example, add the next line in the AppTheme definition to change the window background color:

<style name="AppTheme" parent="AppBaseTheme">
    <item name="android:windowBackground">#dddddd</item>
</style>

Save the file and switch to the layout tab. The background is now light gray. This background color will be applied to all our layouts, due to the fact that we configured it in the theme and not just in the layout.

To completely change the layout theme, click on the theme option from the toolbar in the graphical editor. The theme selector dialog is now opened, displaying a list of the available themes. The themes created in our own project are listed in the Project Themes section. The section Manifest Themes shows the theme configured in the application manifest file (/src/main/AndroidManifest.xml). The All section lists all the available themes.

Summary

In this article, we have seen how to develop and build Android applications with this new IDE. It also shows how to support multiple screens and change their properties using the SDK. The changing of UI themes for the devices is also discussed, along with its properties. The main focus was on the creation of the user interfaces using both the graphical view and the text-based view.

Resources for Article:

Further resources on this subject:

Creating Dynamic UI with Android Fragments
Building Android (Must know)
Android Fragmentation Management

So, what is XenMobile?

Packt
08 Oct 2013
7 min read
(For more resources related to this topic, see here.)

XenMobile is the next generation of mobile device management (MDM) from Citrix. XenMobile gives organizations the ability to automate most of the administrative tasks on mobile devices, for both corporate and highly secured environments. In addition, it can also help organizations in managing bring your own device (BYOD) environments.

XenMobile MDM allows administrators to configure role-based management for device provisioning and security, for both corporate and employee-owned BYOD devices. When a user enrolls with their mobile device, an organization can provision policies and apps to devices automatically, blacklist or whitelist apps, detect and protect against jailbroken devices, and wipe or selectively wipe a device that is lost, stolen, or out of compliance. A selective wipe means that only corporate or sensitive data is deleted, but personal data stays intact on a user's device. XenMobile supports every major mobile OS that is being used today, giving users the freedom to choose and use a device of their choice.

Citrix has done an excellent job recognizing that organizations need MDM as a key component of a secure mobile ecosystem. Citrix XenMobile adds other features, such as secure @WorkMail, @WorkWeb, and ShareFile integration, so that organizations can securely and safely access e-mail and the Internet, and exchange documents in a secure manner. There are other popular solutions on the market that make similar claims. Unlike XenMobile, however, they rely on container-based solutions, which limit native applications. Container-based solutions are applications that embed corporate data, e-mail, contact, and calendar data. Unfortunately, in many cases these solutions break the user experience by limiting how users can use native applications. XenMobile manages devices without compromising the user experience, allowing the secure applications to exist and share the same calendar, contacts, and other key integration points on the mobile device. Citrix was the only vendor, at the time of writing this article, that had a single management platform providing MDM features with secure storage, integrated VDI, multitenancy, and application load balancing, which we believe are some of the differentiators between XenMobile and its competitors.

Citrix XenMobile MDM Architecture

Mobile application stores (MAS) and mobile application management (MAM) are the concepts under which you manage and secure access to individual applications on a mobile device, but leave the rest of the mobile device unmanaged. In some scenarios, people consider this a great way of managing BYOD environments, because organizations only need to worry about the applications and the data they manage. XenMobile has support for mobile application management and supports individual application policies, in addition to the holistic device policies found on other competing products.

In this article, you will gain a deep understanding of XenMobile and its key features. You will learn how to install, configure, and use XenMobile in your environment to manage corporate and BYOD environments. We will then explore how to get started with XenMobile, configure policies and security, and deploy XenMobile in our organization. Next, we will look at some of the advanced features in XenMobile, how and when to use them, how to manage compliance breaches, and other top features. Finally, we will explore what to do next when you have XenMobile configured.
Welcome to the world of XenMobile MDM. Let's get started. Mobile device management (MDM) is a software solution that helps the organizations to manage, provision, and secure the lifecycle of a mobile device. MDM systems allow enterprises to mass deploy policies, settings, and applications to mobile devices. These features can include provisioning the mobile devices for Wi-Fi access, corporate e-mail, develop in-house applications, tracking locations, and remote wipe. Mobile device management solutions for enterprise corporations provide these capabilities over the air and for multiple mobile operating systems. Blackberry can be considered as the world's first real mobile enterprise solution with their product Blackberry Enterprise Server (BES). BES is still considered as a very capable and well-respected MDM solution. Blackberry devices were one of the first devices that provided organizations an accurate control of their users' mobile devices. The Blackberry device was essentially a dumb device until it was connected to a BES server. Once connected to a BES server, the Blackberry device would download policies, which would govern what features the device could use. This included everything from voice roaming, Internet usage, and even camera and storage policies. Because of its detailed configurability, Blackberry devices became the standard for most corporations wanting to use mobile devices and securing them. Apple and Google have made the smartphone a mainstream device and the tablet the computing platform of choice. People ended up waiting days in line to buy the latest gadget, and once they had it, you better believe they wanted to use it all the time. All of a sudden, organizations were getting hundreds of people wanting to connect their personal devices to the corporate network in order to work more efficiently with a device they enjoyed. The revolution of consumerization of IT had begun. In addition to Apple and Google devices, XenMobile supports Blackberry, Windows Phone, and other well-known mobile operating systems. Many vendors rushed to bring solutions to organizations to help them manage their Apple and mobile devices in enterprise architectures. Vendors tried to give organizations the same management and security that Blackberry had provided them with previous BES features. Over the years, Apple and Google both recognized the need for mobile management and started building mobile device management features in their operating system, so that MDM solutions could provide better granular management and security control for enterprise organizations. Today organizations are replacing older mobile devices in favor of Apple and Google devices. They feel comfortable in having these devices connected to corporate networks because they believe that they can manage them and secure them with MDM solutions. MDM solutions are the platform for organizations to ensure that mobile devices meet the technical, legal, and business compliance needed for their users to use devices of their choice, that are modern, and in many cases more productive than their legacy counterparts. MDM vendors have chosen to be container-based solutions, or device-based management. Container-based solutions provide segmentation of device data and allow organizations to completely ignore the rest of the device since all corporate data is self-contained. A good analogy for container-based solutions is Outlook Web Access. Outlook Web Access allows any computer to access Exchange email through a web browser. 
Computer software and applications are completely agnostic to corporate e-mail. Container-based solutions are similar, since they are indifferent to the mobile device data and other configuration components when being used to access an organization's resources, for example, e-mail on a mobile phone. Device-based management solutions allow organizations to manage device and application settings, but can only enforce security policies based on the features made available to them by device manufacturers. XenMobile is a device-based management solution, however, it has many of the features found in container-based solutions giving organizations the best of both worlds. Summary This article briefs about the functionalities of XenMobile and it covers XenMobile's features, gives an idea to the user regarding XenMobile. Resources for Article: Further resources on this subject: Creating mobile friendly themes [Article] Creating and configuring a basic mobile application [Article] Mobiles First – How and Why [Article]

AR experience using Vuforia and features definition

Packt
01 Oct 2013
4 min read
(For more resources related to this topic, see here.)

What decides trackable score?

Trackables are the foundation of the AR experience using Vuforia. It is paramount to understand and create a suitable trackable for the experience to be robust and useful. The score attributed to the trackable in the Target Manager is our indication of how robustly the target image is going to perform, but what decides that score? The best way of understanding this is by understanding how Vuforia tracks images. The idea is simple: it looks for the positions of contrasting edges, in clusters, all around the image. Those edges are tracked, and based on the map of positions stored in the dataset, Vuforia can tell the relative position of the trackable in the real world and accordingly render the 3D content on top of it. This means that tracking the image is not a function of its color or what really is in it, as much as how many contrasting edges there are in the image, and how well they are distributed across it.

To better understand this, we can look at the edges that are currently recognizable in the image we have just uploaded. To do that, simply click on the Show Features link at the top left of the webpage. The following image shows features in the image target of stones:

Once the Show Features link has been clicked, the Target Manager layers over the target image an overlay of where it detects a recognizable edge that it can track in a Vuforia image target. Notice that it is only tracking the dark edges between the stones and nothing else in the image. It is even tracking only the high-contrast edges between the stones, while ignoring some of the lighter ones. Also notice that the number of edges found in the image is large, and they are evenly distributed all around the image. This is a great factor in what makes this image great for tracking.

To contrast this image's result, let's try an image that will yield a 1-star score when tried on the Target Manager. The following image shows a landscape image added as a target image:

Before adding this image, we might intuitively think that it is suitable for tracking. It certainly has a lot of detail of a wide-angle landscape. But this image yielded a shocking 1-star result when added to the Target Manager. The main reason for the low score is the fact that the entire image is a shade of green, which greatly diminishes the contrasting edges in the image. If we click on the Show Features link at the top, we will be able to see what the Target Manager detected in the image. The following image shows the features in the mountain landscape image:

Immediately, we notice the considerably lower number of features detected in the image compared to the stones image. It only detected the edges created by the shadows of the objects in the image, which is clearly not enough to award it any score above 1 star.

Features definition

To help us get a higher score, we must understand what features the Target Manager is looking for. We now know that the main thing the Target Manager looks for in an image is edges, but what kind of edges specifically? To understand that, we need the definition of features. A feature is a sharp and spiked detail in the image, like the corner of an edge. Features must be very contrasting to be found, and they have to be distributed evenly across the image, in a random manner.
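Vuforia's scoring algorithm is proprietary, so there is no way to reproduce the stars exactly, but a rough self-check of how corner-rich an image is can be done locally with a generic corner detector before uploading. The following is a minimal sketch using OpenCV's FAST detector via the OpenCV 3.x Java bindings (the file name is a placeholder, and the OpenCV native library must already be loaded); treat the resulting count only as a loose indicator, not a predicted score:

import org.opencv.core.Mat;
import org.opencv.core.MatOfKeyPoint;
import org.opencv.features2d.FastFeatureDetector;
import org.opencv.imgcodecs.Imgcodecs;

// Load the candidate target image in grayscale.
Mat image = Imgcodecs.imread("target.jpg", Imgcodecs.IMREAD_GRAYSCALE);

// Detect high-contrast corners with the default threshold.
FastFeatureDetector detector = FastFeatureDetector.create();
MatOfKeyPoint keypoints = new MatOfKeyPoint();
detector.detect(image, keypoints);

// Many keypoints, spread evenly across the image, suggest a target that
// is likely to track well; few or clustered keypoints suggest a poor one.
System.out.println("Corner count: " + keypoints.total());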
The following image shows shapes and the features recognized in them:

In the shapes illustrated above, we can see the yellow crosses representing the features recognizable in each shape. The representation is as follows:

- Shape 1: It is a perfect circle without any corners at all, and as such, no features are recognizable in it.
- Shape 2: It has an edge to the left with two recognizable corners. That yields two features recognizable in the shape.
- Shape 3: It is a square with four edges and four corners. This yields four recognizable features in the shape.

This means that any curved object yields few features, or none at all. In particular, humans and animals make very poor trackables due to their curved nature.

Summary

Thus, in this article, we learned how an image is tracked and which features are recognizable in an image.

Resources for Article:

Further resources on this subject:

Interface Designing for Games in iOS [Article]
Unity Game Development: Welcome to the 3D world [Article]
Unity Game Development: Interactions (Part 1) [Article]

Making subtle color shifts with curves

Packt
23 Sep 2013
7 min read
(For more resources related to this topic, see here.) When looking at a scene, we may pick up subtle cues from the way colors shift between different image regions. For example, outdoors on a clear day, shadows have a slightly blue tint due to the ambient light reflected from the blue sky, while highlights have a slightly yellow tint because they are in direct sunlight. When we see bluish shadows and yellowish highlights in a photograph, we may get a "warm and sunny" feeling. This effect may be natural, or it may be exaggerated by a filter. Curve filters are useful for this type of manipulation. A curve filter is parameterized by sets of control points. For example, there might be one set of control points for each color channel. Each control point is a pair of numbers representing the input and output values for the given channel. For example, the pair (128, 180) means that a value of 128 in the given color channel is brightened to become a value of 180. Values between the control points are interpolated along a curve (hence the name, curve filter). In Gimp, a curve with the control points (0, 0), (128, 180), and (255, 255) is visualized as shown in the following screenshot: The x axis shows the input values ranging from 0 to 255, while the y axis shows the output values over the same range. Besides showing the curve, the graph shows the line y = x (no change) for comparison. Curvilinear interpolation helps to ensure that color transitions are smooth, not abrupt. Thus, a curve filter makes it relatively easy to create subtle, natural-looking effects. We may define an RGB curve filter in pseudocode as follows: dst.b = funcB(src.b) where funcB interpolates pointsB dst.g = funcG(src.g) where funcG interpolates pointsG dst.r = funcR(src.r) where funcR interpolates pointsR For now, we will work with RGB and RGBA curve filters, and with channel values that range from 0 to 255. If we want such a curve filter to produce natural-looking results, we should use the following rules of thumb: Every set of control points should include (0, 0) and (255, 255). This way, black remains black, white remains white, and the image does not appear to have an overall tint. As the input value increases, the output value should always increase too. (Their relationship should be monotonically increasing.) This way, shadows remain shadows, highlights remain highlights, and the image does not appear to have inconsistent lighting or contrast. OpenCV does not provide curvilinear interpolation functions but the Apache Commons Math library does. (See Adding files to the project, earlier in this chapter, for instructions on setting up Apache Commons Math.) This library provides interfaces called UnivariateInterpolator and UnivariateFunction, which have implementations including LinearInterpolator, SplineInterpolator, LinearFunction, and PolynomialSplineFunction. (Splines are a type of curve.) UnivariateInterpolator has an instance method, interpolate(double[] xval, double[] yval), which takes arrays of input and output values for the control points and returns a UnivariateFunction object. The UnivariateFunction object can provide interpolated values via the method value(double x). API documentation for Apache Commons Math is available at http://commons.apache.org/proper/commons-math/apidocs/. These interpolation functions are computationally expensive. We do not want to run them again and again for every channel of every pixel and every frame. Fortunately, we do not have to. 
There are only 256 possible input values per channel, so it is practical to precompute all possible output values and store them in a lookup table. For OpenCV's purposes, a lookup table is a Mat object whose indices represent input values and whose elements represent output values. The lookup can be performed using the static method Core.LUT(Mat src, Mat lut, Mat dst). In pseudocode, dst = lut[src]. The number of elements in lut should match the range of values in src, and the number of channels in lut should match the number of channels in src.

Now, using Apache Commons Math and OpenCV, let's implement a curve filter for RGBA images with channel values ranging from 0 to 255. Open CurveFilter.java and write the following code:

public class CurveFilter implements Filter {

    // The lookup table.
    private final Mat mLUT = new MatOfInt();

    public CurveFilter(
            final double[] vValIn, final double[] vValOut,
            final double[] rValIn, final double[] rValOut,
            final double[] gValIn, final double[] gValOut,
            final double[] bValIn, final double[] bValOut) {

        // Create the interpolation functions.
        UnivariateFunction vFunc = newFunc(vValIn, vValOut);
        UnivariateFunction rFunc = newFunc(rValIn, rValOut);
        UnivariateFunction gFunc = newFunc(gValIn, gValOut);
        UnivariateFunction bFunc = newFunc(bValIn, bValOut);

        // Create and populate the lookup table.
        mLUT.create(256, 1, CvType.CV_8UC4);
        for (int i = 0; i < 256; i++) {
            final double v = vFunc.value(i);
            final double r = rFunc.value(v);
            final double g = gFunc.value(v);
            final double b = bFunc.value(v);
            mLUT.put(i, 0, r, g, b, i); // alpha is unchanged
        }
    }

    @Override
    public void apply(final Mat src, final Mat dst) {
        // Apply the lookup table.
        Core.LUT(src, mLUT, dst);
    }

    private UnivariateFunction newFunc(final double[] valIn,
            final double[] valOut) {
        UnivariateInterpolator interpolator;
        if (valIn.length > 2) {
            interpolator = new SplineInterpolator();
        } else {
            interpolator = new LinearInterpolator();
        }
        return interpolator.interpolate(valIn, valOut);
    }
}

CurveFilter stores the lookup table in a member variable. The constructor populates the lookup table based on the four sets of control points that are taken as arguments. As well as a set of control points for each of the RGB channels, the constructor also takes a set of control points for the image's overall brightness, just for convenience. A helper method, newFunc, creates an appropriate interpolation function (linear or spline) for each set of control points. Then, we iterate over the possible input values and populate the lookup table.

The apply method is a one-liner. It simply uses the precomputed lookup table with the given source and destination matrices.
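Before subclassing, it may help to see how CurveFilter would be used on its own. The following sketch is ours, not from the chapter: the WarmthDemo class name and its control points are made up for illustration, and an RGBA Mat (for example, a camera frame) is assumed as input. Note the design point in the comments: construct the filter once and reuse it.

// A usage sketch; WarmthDemo and its control points are illustrative only.
public class WarmthDemo {

    // Construct once and reuse: the constructor runs the interpolators 256
    // times to fill the lookup table, but apply() is then a single LUT call.
    private static final Filter FILTER = new CurveFilter(
            new double[] { 0, 255 },      new double[] { 0, 255 },       // v: identity
            new double[] { 0, 128, 255 }, new double[] { 0, 140, 255 },  // r: nudged up
            new double[] { 0, 255 },      new double[] { 0, 255 },       // g: identity
            new double[] { 0, 128, 255 }, new double[] { 0, 116, 255 }); // b: nudged down

    // Applies the curve to an RGBA image, such as a camera frame.
    public static void warm(final Mat rgbaSrc, final Mat rgbaDst) {
        FILTER.apply(rgbaSrc, rgbaDst);
    }
}

Both curves keep (0, 0) and (255, 255) and increase monotonically, so they respect the rules of thumb given earlier.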
CurveFilter can be subclassed to define a filter with a specific set of control points. For example, let's open PortraCurveFilter.java and write the following code:

public class PortraCurveFilter extends CurveFilter {
    public PortraCurveFilter() {
        super(
                new double[] { 0, 23, 157, 255 },  // vValIn
                new double[] { 0, 20, 173, 255 },  // vValOut
                new double[] { 0, 69, 213, 255 },  // rValIn
                new double[] { 0, 69, 218, 255 },  // rValOut
                new double[] { 0, 52, 189, 255 },  // gValIn
                new double[] { 0, 47, 196, 255 },  // gValOut
                new double[] { 0, 41, 231, 255 },  // bValIn
                new double[] { 0, 46, 228, 255 }); // bValOut
    }
}

This filter brightens the image, makes shadows cooler (more blue), and makes highlights warmer (more yellow). It produces flattering skin tones and tends to make things look sunnier and cleaner. It resembles the color characteristics of a brand of photo film called Kodak Portra, which was often used for portraits.

The code for our other three curve filters is similar. The ProviaCurveFilter class uses the following arguments for its control points:

new double[] { 0, 255 },          // vValIn
new double[] { 0, 255 },          // vValOut
new double[] { 0, 59, 202, 255 }, // rValIn
new double[] { 0, 54, 210, 255 }, // rValOut
new double[] { 0, 27, 196, 255 }, // gValIn
new double[] { 0, 21, 207, 255 }, // gValOut
new double[] { 0, 35, 205, 255 }, // bValIn
new double[] { 0, 25, 227, 255 }); // bValOut

The effect is a strong, blue or greenish-blue tint in shadows and a strong, yellow or greenish-yellow tint in highlights. It resembles a film processing technique called cross-processing, which was sometimes used to produce grungy-looking photos of fashion models, pop stars, and so on.

For a good discussion of how to emulate various brands of photo film, see Petteri Sulonen's blog at http://www.prime-junta.net/pont/How_to/100_Curves_and_Films/_Curves_and_films.html. The control points that we use are based on examples given in this article.

Curve filters are a convenient tool for manipulating color and contrast, but they are limited insofar as each destination pixel is affected by only a single input pixel. Next, we will examine a more flexible family of filters, which enable each destination pixel to be affected by a neighborhood of input pixels.

Summary

In this article we learned how to make subtle color shifts with curves.

Resources for Article:

Further resources on this subject:

Linking OpenCV to an iOS project [Article]
A quick start – OpenCV fundamentals [Article]
OpenCV: Image Processing using Morphological Filters [Article]
Basic Security Approaches

Packt
19 Sep 2013
10 min read
(For more resources related to this topic, see here.)

Security, privacy, and safeguards for intellectual property are at the front of the minds of those of us building Titanium Enterprise apps. Titanium allows you to combine the underlying platform tools and third-party JavaScript libraries to help meet your security requirements.

This article provides a series of approaches for leveraging JavaScript, Titanium modules, and the underlying platform so you can create a layered security approach that supports your organization's overall secure development goals. Each recipe is designed to provide building blocks to help you implement your industry's existing security and privacy standards.

Implementing iOS data protection in Titanium

Starting with iOS 4, Apple introduced the ability for apps to use the data protection feature to add an additional level of security for data stored on disk. Data protection uses the built-in hardware encryption to encrypt files stored on the device. This feature is available when the user's device is locked and protected with a passcode lock. During this time, all files are protected and inaccessible until the user explicitly unlocks the device. When the device is locked, no app can access protected files. This even applies to the app that created the file.

Getting ready

This recipe uses the Securely native module for enhanced security functionality. This module and other code assets can be downloaded from the source provided by the book. Installing these in your project is straightforward. Simply copy the modules folder into your project as shown in the following screenshot:

After copying the mentioned folder, you will need to click on your tiapp.xml file in Titanium Studio and add a reference to the bencoding.securely module as shown in the following screenshot:

Enabling data protection

This recipe requires your iOS device to have data protection enabled. You will need a physical device, as the simulator does not support data protection. The following steps cover how to enable this feature on your device:

1. Go to Settings | General | Passcode.
2. Follow the prompts to set up a passcode.
3. After adding a passcode, scroll to the bottom of the screen and verify that the text Data protection is enabled is visible, as shown in the following screenshot:

iOS device browser

A third-party iOS device browser is needed to verify that data protection for the example recipe app has successfully been enabled. This recipe discusses how to verify data protection using the popular iExplorer app. An evaluation version of the iExplorer app can be used to follow along with this recipe. For more information and to download iExplorer, please visit http://www.macroplant.com/iexplorer.

How to do it...

To enable iOS data protection, the DataProtectionClass and com.apple.developer.default-data-protection keys need to be added to your tiapp.xml, as demonstrated in the following code snippet:

1. First, add the ios configuration node if your project does not already contain this element.

<ios>
    <plist>
        <dict>

2. Then, at the top of the dict node, add the following keys:

<key>DataProtectionClass</key>
<string>NSFileProtectionComplete</string>
<key>com.apple.developer.default-data-protection</key>
<string>NSFileProtectionComplete</string>
        </dict>
    </plist>
</ios>
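Putting the two fragments together, the assembled ios section of tiapp.xml looks like the following (a minimal sketch; a real project's ios element may contain additional keys):

<ios>
    <plist>
        <dict>
            <key>DataProtectionClass</key>
            <string>NSFileProtectionComplete</string>
            <key>com.apple.developer.default-data-protection</key>
            <string>NSFileProtectionComplete</string>
        </dict>
    </plist>
</ios>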
After saving the updates to your tiapp.xml, you must clean your Titanium project in order for the updates to take effect. This can be done in Titanium Studio by selecting Project | Clean.

Creating the namespace and imports

Once you have added the Securely module and made the tiapp.xml updates to your project, you need to create your application namespace in the app.js file and use require to import the module into your code, as the following code snippet demonstrates:

//Create our application namespace
var my = {
    secure : require('bencoding.securely')
};

Creating the recipe UI

The following steps outline how to create the UI used in this recipe:

1. First, a Ti.UI.Window is created to attach all UI elements.

var win = Ti.UI.createWindow({
    backgroundColor: '#fff',
    title: 'Data Protection Example',
    barColor: '#000',
    layout: 'vertical'
});

2. Next, a Ti.UI.Button is added to the Ti.UI.Window. This will be used to trigger our example.

var button1 = Ti.UI.createButton({
    title: 'Create Test File',
    top: 25, height: 45, left: 5, right: 5
});
win.add(button1);

Creating a file to verify data protection

To verify that data protection is enabled in the app, the recipe creates a time-stamped file in the Ti.Filesystem.applicationDataDirectory directory. Using an iOS device browser, we can then check whether the test file is protected while the device is locked. The following steps describe how the recipe creates this test file:

1. The click event for button1 creates a time-stamped file that allows us to verify that data protection has been correctly enabled for the app.

button1.addEventListener('click', function(e){

2. Next, the isProtectedDataAvailable method is called on Securely. This returns a Boolean indicating whether data protection currently allows the app to read from or write to the filesystem.

    if(!my.secure.isProtectedDataAvailable()){
        alert('Protected data is not yet available.');
        return;
    }

3. To ensure there is a unique identifier in the file, a token is created using the current date and time. This token is then added to the following message template:

    var timeToken = String.formatDate(new Date(), "medium") +
                    String.formatTime(new Date());
    var msg = "When device is locked you will not be able";
    msg += " to read this file. Your time token is ";
    msg += timeToken;

4. The message created in step 3 is then written to the test.txt file located in the Ti.Filesystem.applicationDataDirectory directory. If the file already exists, it is removed so that the latest message will be available for testing.

    var testfile = Ti.Filesystem.getFile(
        Ti.Filesystem.applicationDataDirectory, 'test.txt');
    if(testfile.exists()){
        testfile.deleteFile();
    }
    testfile.write(msg);
    testfile = null;

5. Once the test.txt file is written to the device, a message is displayed notifying the user to lock the device and use an iOS device browser to confirm that data protection is enabled. The event listener is then closed.

    var alertMsg = "Please lock your device. ";
    alertMsg += "Then open an iOS device browser. ";
    alertMsg += "The time token you are looking for is ";
    alertMsg += timeToken;
    alert(alertMsg);
});
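To observe the protection from inside the app itself, you could add a companion button that reads the file back; this sketch is ours, not part of the original recipe. The read succeeds while the device is unlocked; once locked, iOS denies access to files protected with NSFileProtectionComplete.

//Hypothetical companion button (not in the original recipe)
var button2 = Ti.UI.createButton({
    title: 'Read Test File',
    top: 10, height: 45, left: 5, right: 5
});
win.add(button2);

button2.addEventListener('click', function(e){
    var testfile = Ti.Filesystem.getFile(
        Ti.Filesystem.applicationDataDirectory, 'test.txt');
    if(!testfile.exists()){
        alert('No test file yet. Tap Create Test File first.');
        return;
    }
    //While unlocked this returns the file contents; when the device is
    //locked, iOS blocks access and the read will fail.
    alert(testfile.read().text);
});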
How it works...

After the DataProtectionClass and com.apple.developer.default-data-protection keys have been added to your tiapp.xml, the iOS device handles protecting your files when the device is locked. The following steps discuss how to test that this recipe has correctly implemented data protection:

1. The first step in the validation process is to build and deploy the recipe app to your iOS device.
2. Once the app has been loaded onto your device, open the app and press the Create Test File button.
3. Once you have received an alert message indicating the test file has been created, press the home button and lock your device.
4. Plug your device into a computer with iExplorer installed.
5. Open iExplorer and navigate so that you can view the apps on your device.
6. Select the DataProtection app, as marked with the red box in the following screenshot. Then right-click on the test.txt file located in the Documents folder and select Quick Look, as marked with the green box in the following screenshot:
7. After Quick Look is selected, iExplorer will try to open the test.txt file. Since the file is protected, it cannot be opened, and Quick Look will show the progress indicator until a timeout is reached.
8. You can then unlock your device and repeat the preceding steps to open the file in Quick Look.

AES encryption using JavaScript

The Advanced Encryption Standard (AES) is a specification for the encryption of electronic data, established by the U.S. NIST in 2001. This encryption algorithm is used for securing sensitive but unclassified material by U.S. Government agencies. AES has been widely adopted by enterprises and has become a de facto encryption standard for many commercially sensitive transactions. This recipe discusses how AES can be implemented in JavaScript and incorporated into your Titanium Enterprise app.

Getting ready

This recipe uses the Ti.SlowAES CommonJS module as a wrapper around the SlowAES open source project. Installing it in your project is straightforward. Simply copy the SlowAES folder into the Resources folder of your project as shown in the following screenshot:

How to do it...

Once you have added the SlowAES folder to your project, you need to create your application namespace in the app.js file and use require to import the module into your code, as the following code snippet demonstrates:

//Create our application namespace
var my = {
    mod : require('SlowAES/Ti.SlowAES')
};

Creating the recipe UI

This recipe demonstrates the usage of the Ti.SlowAES CommonJS module through a sample app using two Ti.UI.TextField controls for input.

1. First, a Ti.UI.Window is created to attach all UI elements.

var win = Ti.UI.createWindow({
    backgroundColor: '#fff',
    title: 'AES Crypto Example',
    barColor: '#000',
    layout: 'vertical',
    fullscreen: false
});

2. Next, a Ti.UI.TextField control is added to the Ti.UI.Window to gather the secret from the user.

var txtSecret = Ti.UI.createTextField({
    value: 'DoNotTell',
    hintText: 'Enter Secret',
    height: 45, left: 5, right: 5,
    borderStyle: Ti.UI.INPUT_BORDERSTYLE_ROUNDED
});
win.add(txtSecret);

3. Another Ti.UI.TextField is added to the Ti.UI.Window to gather the string to encrypt from the user.

var txtToEncrypt = Ti.UI.createTextField({
    value: 'some information we want to encrypt',
    hintText: 'Enter information to encrypt',
    height: 45, left: 5, right: 5,
    borderStyle: Ti.UI.INPUT_BORDERSTYLE_ROUNDED
});
win.add(txtToEncrypt);

4. Next, a Ti.UI.Label is added to the Ti.UI.Window. This Ti.UI.Label will be used to display the encrypted value to the user.

var encryptedLabel = Ti.UI.createLabel({
    top: 10, height: 65, left: 5, right: 5,
    color: '#000',
    textAlign: 'left',
    font: { fontSize: 14 }
});
win.add(encryptedLabel);

5. Finally, a Ti.UI.Button is added to the Ti.UI.Window. This Ti.UI.Button will be used later in the recipe to perform the encryption test.

var btnEncrypt = Ti.UI.createButton({
    title: 'Run Encryption Test',
    top: 25, height: 45, left: 5, right: 5
});
win.add(btnEncrypt);
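As in the previous recipe, the window boilerplate is omitted; assuming a classic (non-Alloy) Titanium project, you would open the window once the UI is assembled:

//Assumed boilerplate (not shown in the recipe): display the recipe UI
win.open();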
Encrypting and decrypting values

This section demonstrates how to use the Ti.SlowAES module with the secret entered in the txtSecret Ti.UI.TextField to encrypt the contents of the txtToEncrypt Ti.UI.TextField. Once completed, the encrypted value is then decrypted and compared against the original input. The results are displayed to the user in an alert message, as shown in the following screenshots:

1. The encryption test is performed when the click event for the btnEncrypt control is fired, as shown in the following code snippet:

btnEncrypt.addEventListener('click', function(x){

2. The first step in the encryption process is to create a new instance of the SlowAES module, as shown in this code snippet:

    var crypto = new my.mod();

3. Next, using the encrypt function, the secret provided in the txtSecret control is used to encrypt the value in the txtToEncrypt control. The encrypted result is then returned to encryptedValue, as demonstrated in the following statement:

    var encryptedValue =
        crypto.encrypt(txtToEncrypt.value, txtSecret.value);

4. The encryptedLabel.text property is then updated to display the encrypted value to the user.

    encryptedLabel.text = 'Encrypted:' + encryptedValue;

5. Next, the decrypt method is used to demonstrate how to decrypt the string value encrypted earlier. This method requires the encrypted string value and the secret, as shown in the following snippet:

    var decryptedValue =
        crypto.decrypt(encryptedValue, txtSecret.value);

6. Finally, the original input value is compared against the decrypted value to ensure our encryption test was successful. The results of this test are then displayed to the user through an alert message and the Titanium Studio console.

    alert((txtToEncrypt.value === decryptedValue) ?
        'Encryption test ran successfully; check the console for details.' :
        'Test failed; please check the console for details.');
});
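If you want to reuse the encryption outside of this UI test, the module calls can be wrapped in small helpers. This sketch is ours and assumes the same Ti.SlowAES encrypt/decrypt signatures used above; keep in mind that SlowAES is a pure JavaScript implementation, so a native crypto module may be a better fit for large payloads.

//A reusable wrapper around the recipe's Ti.SlowAES calls
var mod = require('SlowAES/Ti.SlowAES');

function encryptWithSecret(plainText, secret){
    var crypto = new mod();
    return crypto.encrypt(plainText, secret);
}

function decryptWithSecret(cipherText, secret){
    var crypto = new mod();
    return crypto.decrypt(cipherText, secret);
}

//Usage:
var cipher = encryptWithSecret('top secret note', 'DoNotTell');
Ti.API.info('Encrypted: ' + cipher);
Ti.API.info('Decrypted: ' + decryptWithSecret(cipher, 'DoNotTell'));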
Kendo UI Mobile – Exploring Mobile Widgets

Packt
17 Sep 2013
6 min read
(For more resources related to this topic, see here.)

Kendo Mobile widgets basics

All Kendo Mobile widgets inherit from the base class kendo.mobile.ui.Widget, which is inherited from the base class of all Kendo widgets (both Web and Mobile), kendo.ui.Widget. The complete inheritance chain of the mobile widget class is shown in the following figure:

kendo.Class acts as the base class for most of the Kendo UI objects, while the kendo.Observable object contains methods for events. kendo.data.ObservableObject, which is the building block of Kendo MVVM, is inherited from kendo.Observable.

Mobile widget base methods

From the inheritance chain, all Kendo Mobile widgets inherit a set of common methods. A thorough understanding of these methods is required while building highly performing, complex mobile apps.

bind

The bind() method, defined in the kendo.Observable class, attaches a handler to an event. Using this method we can attach custom methods to any mobile widget. The bind() method takes the following two input parameters:

eventName: The name of the event
handler: The function to be fired when the event is raised

The following example shows how to create a new mobile widget and attach a custom event to the widget:

//create a new mobile widget
var mobileWidget = new kendo.mobile.ui.Widget();

//attach a custom event
mobileWidget.bind("customEvent", function(e) {
    // mobileWidget object can be accessed inside this function
    // as 'e.sender' and 'this'.
    console.log('customEvent fired');
});

The event data is available in the object e. The object which raised the event is accessible inside the function as e.sender or using the this keyword.

trigger

The trigger() method executes all event handlers attached to the fired event. This method has two input parameters:

eventName: The name of the event to be triggered
eventData (optional): The event-specific data to be passed into the event handler

Let's see how trigger works by modifying the code sample provided for bind:

//create a mobile widget
var mobileWidget = new kendo.mobile.ui.Widget();

//attach a custom event
mobileWidget.bind("customEvent", function(e) {
    // mobileWidget object can be accessed inside this function
    // as 'e.sender' and 'this'.
    console.log('customEvent fired');

    //read event specific data if it exists
    if(e.eventData !== undefined){
        console.log('customEvent fired with data: ' + e.eventData);
    }
});

//trigger the event with some data
mobileWidget.trigger("customEvent", { eventData: 'Kendo UI is cool!' });

Here we are triggering the custom event which is attached using the bind() method and sending some data along. This data is read inside the event handler and written to the console. When this code is run, we can see the following output in the console:

customEvent fired
customEvent fired with data: Kendo UI is cool!
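Because bind() and trigger() come from kendo.Observable, the same pattern works without a widget at all. Here is a small sketch of ours that uses kendo.Observable as a lightweight event bus; the event name and payload are made up for illustration:

//create a plain observable to act as an event bus
var bus = new kendo.Observable();

//attach a handler, exactly as we did on the widget
bus.bind("login", function(e) {
    console.log("User logged in: " + e.userName);
});

//trigger the event with some data
bus.trigger("login", { userName: "alice" });

//Output: User logged in: alice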
unbind

The unbind() method detaches a previously attached event handler from the widget. It takes the following input parameters:

eventName: The name of the event to be detached. If an event name is not specified, all handlers of all events will be detached.
handler: The handler function to be detached. If a function is not specified, all functions attached to the event will be detached.

The following code attaches an event to a widget and detaches it when the event is triggered:

//create a mobile widget
var mobileWidget = new kendo.mobile.ui.Widget();

//attach a custom event
mobileWidget.bind("customEvent", function(e) {
    console.log('customEvent fired');
    this.unbind("customEvent");
});

//trigger the event first time
mobileWidget.trigger("customEvent");

//trigger the event second time
mobileWidget.trigger("customEvent");

Output:

customEvent fired

As seen from the output, even though we trigger the event twice, the event handler is invoked only the first time.

one

The one() method is identical to the bind() method with one exception: the handler is unbound after its first invocation, so it will be fired only once. To see this method in action, let's add a count variable to the existing sample code and track the number of times the handler is invoked. For this, we will bind the event handler with one() and then trigger the event twice, as shown in the following code:

//create a mobile widget
var mobileWidget = new kendo.mobile.ui.Widget();
var count = 0;

//attach a custom event
mobileWidget.one("customEvent", function(e) {
    count++;
    console.log('customEvent fired. count: ' + count);
});

//trigger the event first time
mobileWidget.trigger("customEvent");

//trigger the event second time
mobileWidget.trigger("customEvent");

Output:

customEvent fired. count: 1

If you replace the one() method with the bind() method, you can see that the handler will be invoked twice.

destroy

The destroy() method is inherited from the kendo.ui.Widget base object. It detaches all event handler attachments and removes the widget object from the jquery.data() attribute, so that the widget can be safely removed from the DOM without memory leaks. If a child widget is available, the destroy() method of the child widget will also be invoked.

Let's see how the destroy() method works using the Kendo Mobile Button widget and your browser's developer tools' console. Create an HTML file, add the following code along with the Kendo UI Mobile file references, and open it in your browser:

<div data-role="view">
    <a class="button" data-role="button" id="btnHome"
       data-click="buttonClick">Home</a>
</div>
<script>
    var app = new kendo.mobile.Application(document.body);

    function buttonClick(e){
        console.log('Inside button click event handler...');
        $("#btnHome").data().kendoMobileButton.destroy();
    }
</script>

In this code block, we created a Kendo Button widget, and on the click event, we invoke the destroy() method of the button. Now open up your browser's developer tools' Console window, type $("#btnHome").data() and press Enter. If you click on the Object link shown in the earlier screenshot, a detailed view of all properties can be seen.

Now click on the button once, then again in the Console type $("#btnHome").data() and hit Enter. We can see that the kendoMobileButton object is removed from the object list.

Even though the data object is gone, the button stays in the DOM without any data or events associated with it.
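A common follow-up, sketched here by us rather than taken from the article, is to pair destroy() with removal of the element itself, which is exactly the leak-free teardown that the method enables:

function removeHomeButton(){
    // Retrieve the widget instance from jQuery data, destroy it, and
    // then remove the element; destroy() has already detached the
    // handlers and data, so no leaks are left behind.
    var button = $("#btnHome").data("kendoMobileButton");
    if (button) {
        button.destroy();
        $("#btnHome").remove();
    }
}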
view

The view() method is specific to mobile widgets; it returns the view object in which the widget is loaded. In the previous example, we can assign an ID, mainView, to the view and then retrieve it in the button's click event using this.view().id, as shown in the following code snippet:

<div data-role="view" id="mainView">
    <a class="button" data-role="button" id="btnHome"
       data-click="buttonClick">Home</a>
</div>
<script>
    var app = new kendo.mobile.Application(document.body);

    function buttonClick(e){
        console.log("View id: " + this.view().id);
    }
</script>

Output:

View id: #mainView
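One practical use of view(), sketched here as our own example, is scoping DOM queries to the widget's view; this assumes the standard Kendo Mobile view object, whose element property is the jQuery object wrapping the view:

function buttonClick(e){
    var view = this.view();
    // view.element is the jQuery-wrapped view container, so this query
    // only matches buttons inside the current view.
    var buttons = view.element.find("[data-role=button]");
    console.log("View " + view.id + " contains " + buttons.length + " button(s).");
}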