
How-To Tutorials - Mobile

213 Articles

Working with the sharing plugin

Packt
23 May 2014
11 min read
Now that we've dealt with the device events, let's get to the real meat of the project: let's add the sharing plugin and see how to use it.

Getting ready

Before continuing, be sure to add the plugin to your project:

    cordova plugin add https://github.com/leecrossley/cordova-plugin-social-message.git

Getting on with it

This particular plugin is one of many social-network plugins. Each one has its benefits and its problems, and the available plugins are changing rapidly. This particular plugin is very easy to use and supports a reasonable number of social networks. On iOS, Facebook, Twitter, Mail, and Flickr are supported. On Android, any installed app that registers the share intent is supported.

The full documentation is available at https://github.com/leecrossley/cordova-plugin-social-message at the time of writing. It is easy to follow if you need to know more than what we cover here.

To show a sharing sheet (the appearance varies based on platform and operating system), all we have to do is this:

    window.socialmessage.send ( message );

message is an object that contains any of the following properties:

- text: This is the main content of the message.
- subject: This is the subject of the message. This is only applicable when sending e-mails; most other social networks will ignore this value.
- url: This is a link to attach to the message.
- image: This is an absolute path to an image to attach to the message. It must begin with file:/// and the path should be properly escaped (that is, spaces should become %20, and so on).
- activityTypes (iOS only): This selects the supported activities on the sharing sheet. Valid values are PostToFacebook, PostToTwitter, PostToWeibo, Message, Mail, Print, CopyToPasteboard, AssignToContact, and SaveToCameraRoll.

In order to create a simple message to share, we can use the following code:

    var message = {
        text: "something to send"
    };
    window.socialmessage.send ( message );

To add an image, we can go a step further, shown as follows:

    var message = {
        text: "the caption",
        image: "file:///var/mobile/…/image.png"
    };
    window.socialmessage.send ( message );

Once this method is called, the sharing sheet will appear. On iOS 7, you'll see something like the following screenshot: On Android, you will see something like the following screenshot:

What did we do?

In this section, we installed the sharing plugin and learned how to use it. In the next sections, we'll cover the modifications required to use this plugin.
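One practical wrinkle worth noting: the plugin's window.socialmessage object only exists after Cordova fires the deviceready event. The following is a minimal defensive sketch; the helper name shareText is our own invention, not part of the plugin's API:

    // Hedged sketch: a defensive wrapper around the sharing plugin.
    // Only window.socialmessage.send comes from the plugin; the
    // helper name `shareText` is ours.
    function shareText(text, subject) {
        // The plugin's API doesn't exist until `deviceready` has fired.
        if (!window.socialmessage) {
            console.log("Sharing plugin is not available yet.");
            return;
        }
        var message = { text: text };
        if (subject) {
            message.subject = subject; // honored by e-mail; ignored by most networks
        }
        window.socialmessage.send(message);
    }

    // Usage, once `deviceready` has fired:
    document.addEventListener("deviceready", function () {
        shareText("something to send", "A subject for e-mail targets");
    }, false);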
Modifying the text note edit view

We've dispatched most of the typical sections in this project; there's not really any user interface to design, nor are there any changes to the actual note models. All we need to do is modify the HTML template a little to include a share button and add the code to use the plugin.

Getting on with it

First, let's alter the template in www/html/textNoteEditView.html. I've highlighted the changes:

    <html>
      <body>
        <div class="ui-navigation-bar">
          <div class="ui-title" contenteditable="true">%NOTE_NAME%</div>
          <div class="ui-bar-button-group ui-align-left">
            <div class="ui-bar-button ui-tint-color ui-back-button">%BACK%</div>
          </div>
          <div class="ui-bar-button-group ui-align-right">
            <div class="ui-bar-button ui-destructive-color">%DELETE_NOTE%</div>
          </div>
        </div>
        <div class="ui-scroll-container ui-avoid-navigation-bar ui-avoid-tool-bar">
          <textarea class="ui-text-box">%NOTE_CONTENTS%</textarea>
        </div>
        <div class="ui-tool-bar">
          <div class="ui-bar-button-group ui-align-left"></div>
          <div class="ui-bar-button-group ui-align-center"></div>
          <div class="ui-bar-button-group ui-align-right">
            <div class="ui-bar-button ui-background-tint-color ui-glyph ui-glyph-share share-button"></div>
          </div>
        </div>
      </body>
    </html>

Now, let's make the modifications to the view in www/js/app/views/textNoteEditView.js. First, we need to add an internal property that references the share button:

    self._shareButton = null;

Next, we need to add code to renderToElement so that we can attach an event handler to the share button. We do a little checking here to see whether we've found the icon, because we don't support sharing of videos and sounds and don't include that asset in those views. Without the null check, those views would fail to work. Consider the following code snippet:

    self.renderToElement = function () {
      …
      self._shareButton = self.element.querySelector ( ".share-button" );
      if (self._shareButton !== null) {
        Hammer ( self._shareButton ).on("tap", self.shareNote);
      }
      …
    }

Finally, we need to add the method that actually shares the note. Note that we save the note before we share it, since that's how the data in the DOM gets transmitted to the note model. Consider the following code snippet:

    self.shareNote = function () {
      self.saveNote();
      var message = {
        subject: self._note.name,
        text: self._note.textContents
      };
      window.socialmessage.send ( message );
    }

What did we do?

First, we added a toolbar to the view that looks like the following screenshot; note the new sharing icon: Then, we added the code that shares the note and attached it to the Share button. Here's an example of us sending a tweet from a note on iOS:

What else do I need to know?

Don't forget that social networks often have size limits. For example, Twitter only supports 140 characters, so a note sent using Twitter needs to be very short. On iOS, we could prevent Twitter from being offered as an option, but there's no way to prevent this on Android. Even so, there's no real reason to remove Twitter as an option; the user just needs to be familiar enough with the social network to know that they'll have to edit the content before posting it.

Also, don't forget that the subject of a message only applies to mail; most other social networks will ignore it. If something is critical, be sure to include it in the text of the message, not only in the subject.

Modifying the image note edit view

The image note edit view presents an additional difficulty: we can't put the Share button in a toolbar, because doing so would cause positioning difficulties with the TEXTAREA and the toolbar when the soft keyboard is visible. Instead, we'll put it in the lower-right corner of the image.
This is done by using the same technique we used to outline the camera button.

Getting on with it

Let's edit the template in www/html/imageNoteEditView.html; again, I've highlighted the changes:

    <html>
      <body>
        <div class="ui-navigation-bar">
          <div class="ui-title" contenteditable="true">%NOTE_NAME%</div>
          <div class="ui-bar-button-group ui-align-left">
            <div class="ui-bar-button ui-tint-color ui-back-button">%BACK%</div>
          </div>
          <div class="ui-bar-button-group ui-align-right">
            <div class="ui-bar-button ui-destructive-color">%DELETE_NOTE%</div>
          </div>
        </div>
        <div class="ui-scroll-container ui-avoid-navigation-bar">
          <div class="image-container">
            <div class="ui-glyph ui-background-tint-color ui-glyph-camera outline"></div>
            <div class="ui-glyph ui-background-tint-color ui-glyph-camera non-outline"></div>
            <div class="ui-glyph ui-background-tint-color ui-glyph-share outline"></div>
            <div class="ui-glyph ui-background-tint-color ui-glyph-share non-outline share-button"></div>
          </div>
          <textarea class="ui-text-box"
            onblur="this.classList.remove('editing');"
            onfocus="this.classList.add('editing');">%NOTE_CONTENTS%</textarea>
        </div>
      </body>
    </html>

Because sharing an image requires a little additional code, we need to override shareNote (which we inherit from the prior task) in www/js/app/views/imageNoteEditView.js:

    self.shareNote = function () {
      var fm = noteStorageSingleton.fileManager;
      var nativePath = fm.getNativeURL ( self._note.mediaContents );
      self.saveNote();
      var message = {
        subject: self._note.name,
        text: self._note.textContents
      };
      if (self._note.unitValue > 0) {
        message.image = nativePath;
      }
      window.socialmessage.send ( message );
    }

Finally, we need to add the following styles to www/css/style.css:

    div.ui-glyph.ui-background-tint-color.ui-glyph-share.outline,
    div.ui-glyph.ui-background-tint-color.ui-glyph-share.non-outline {
      left: inherit;
      width: 50px;
      top: inherit;
      height: 50px;
    }

    div.ui-glyph.ui-background-tint-color.ui-glyph-share.outline {
      -webkit-mask-position: 15px 16px;
      mask-position: 15px 16px;
    }

    div.ui-glyph.ui-background-tint-color.ui-glyph-share.non-outline {
      -webkit-mask-position: 15px 15px;
      mask-position: 15px 15px;
    }

What did we do?

As in the previous task, we first modified the template to add the share icon. Then, we added the shareNote code to the view (note that we don't have to add anything to find the button, because we inherit that from the text note edit view). Finally, we modified the style sheet to reposition the Share button appropriately so that it looks like the following screenshot:

What else do I need to know?

The image needs to be a valid image, or the plugin may crash. This is why we check the value of unitValue in shareNote: it ensures the image is large enough to attach to the message. If not, we share only the text.

Game Over... Wrapping it up

And that's it! You've learned how to respond to device events, and you've also added sharing to text and image notes by using a third-party plugin.

Can you take the HEAT? The Hotshot Challenge

There are several ways to improve the project. Why don't you try a few?

- Implement the ability to save the note when the app receives a pause event, and then restore the note when the app is resumed. (A starting sketch appears after the Summary at the end of this article.)
- Remember which note is visible when the app is paused, and restore it when the app is resumed. (Hint: localStorage may come in handy.)
- Add video or audio sharing. You'll probably have to alter the sharing plugin or find another (or an additional) plugin. You'll probably also need to upload the data to an external server so that it can be linked via the social network. For example, it's often customary to link to a video on Twitter by using a link shortener. The File Transfer plugin might come in handy for this challenge (https://github.com/apache/cordova-plugin-file-transfer/blob/dev/doc/index.md).

Summary

This article introduced you to a third-party plugin that provides access to e-mail and various social networks.
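As a hedged starting point for the first two challenges, here is a sketch built only on the standard Cordova pause and resume events and localStorage; the window.currentView global and the storage key name are our own assumptions, not part of the project as written:

    // Sketch only: assumes `window.currentView` points at the active
    // edit view, which exposes the saveNote method shown earlier.
    document.addEventListener("pause", function () {
        if (window.currentView && typeof window.currentView.saveNote === "function") {
            window.currentView.saveNote();
            // Remember which note was visible; the key name is arbitrary.
            localStorage.setItem("lastVisibleNote", window.currentView._note.name);
        }
    }, false);

    document.addEventListener("resume", function () {
        var lastNote = localStorage.getItem("lastVisibleNote");
        if (lastNote !== null) {
            // Navigate back to this note's edit view here.
            console.log("Restoring note: " + lastNote);
        }
    }, false);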


Updating data in the background

Packt
20 May 2014
4 min read
Getting ready

Create a new Single View Application in Xamarin Studio and name it BackgroundFetchApp. Add a label to the controller.

How to do it...

Perform the following steps:

1. We need access to the label from outside the scope of the BackgroundFetchAppViewController class, so create a public property for it as follows:

    public UILabel LabelStatus {
      get { return this.lblStatus; }
    }

2. Open the Info.plist file and, under the Source tab, add the UIBackgroundModes key (Required background modes) with the string value fetch. The following screenshot shows the editor after it has been set:

3. In the FinishedLaunching method of the AppDelegate class, enter the following line:

    UIApplication.SharedApplication.SetMinimumBackgroundFetchInterval(UIApplication.BackgroundFetchIntervalMinimum);

4. Enter the following code, again in the AppDelegate class:

    private int updateCount;

    public override void PerformFetch (UIApplication application, Action<UIBackgroundFetchResult> completionHandler)
    {
      try {
        HttpWebRequest request = WebRequest.Create("http://software.tavlikos.com") as HttpWebRequest;
        using (StreamReader sr = new StreamReader(request.GetResponse().GetResponseStream())) {
          Console.WriteLine("Received response: {0}", sr.ReadToEnd());
        }
        this.viewController.LabelStatus.Text =
          string.Format("Update count: {0}\n{1}", ++updateCount, DateTime.Now);
        completionHandler(UIBackgroundFetchResult.NewData);
      } catch {
        this.viewController.LabelStatus.Text =
          string.Format("Update {0} failed at {1}!", ++updateCount, DateTime.Now);
        completionHandler(UIBackgroundFetchResult.Failed);
      }
    }

5. Compile and run the app on the simulator or on the device. Press the home button (or Command + Shift + H) to move the app to the background and wait for an output. This might take a while, though.

How it works...

The UIBackgroundModes key with the fetch value enables the background fetch functionality for our app. Without setting it, the app will not wake up in the background.

After setting the key in Info.plist, we override the PerformFetch method in the AppDelegate class, as follows:

    public override void PerformFetch (UIApplication application, Action<UIBackgroundFetchResult> completionHandler)

This method is called whenever the system wakes up the app. Inside this method, we can connect to a server and retrieve the data we need. An important thing to note here is that we do not have to use iOS-specific APIs to connect to a server. In this example, a simple HttpWebRequest method is used to fetch the contents of this blog: http://software.tavlikos.com.

After we have received the data we need, we must call the callback that is passed to the method, as follows:

    completionHandler(UIBackgroundFetchResult.NewData);

We also need to pass the result of the fetch. In this example, we pass UIBackgroundFetchResult.NewData if the update is successful and UIBackgroundFetchResult.Failed if an exception occurs. If we do not call the callback in the specified amount of time, the app will be terminated. Furthermore, it might get fewer opportunities to fetch data in the future.

Lastly, to make sure that everything works correctly, we have to set the interval at which the app will be woken up, as follows:

    UIApplication.SharedApplication.SetMinimumBackgroundFetchInterval(UIApplication.BackgroundFetchIntervalMinimum);

The default interval is UIApplication.BackgroundFetchIntervalNever, so if we do not set an interval, the background fetch will never be triggered.
There's more

Apart from the functionality we added in this project, the background fetch is completely managed by the system. The interval we set is merely an indication, and the only guarantee we have is that the fetch will not be triggered sooner than the interval. In general, the system monitors the usage of all apps and will trigger the background fetch according to how often each app is used. Apart from the predefined values, we can pass whatever value we want, in seconds.

UI updates

We can update the UI in the PerformFetch method. iOS allows this so that the app's screenshot is updated while the app is in the background. However, note that we need to keep UI updates to the absolute minimum.

Summary

Thus, this article covered the things to keep in mind to make use of iOS 7's background fetch feature.


Physics with UIKit Dynamics

Packt
14 May 2014
8 min read
Motion and physics in UIKit

With the introduction of iOS 7, Apple completely removed the skeuomorphic design that had been used since the introduction of the iPhone and iOS. In its place is a new and refreshing flat design that features muted gradients and minimal interface elements. Apple has strongly encouraged developers to move away from skeuomorphic, real-world-based designs in favor of these flat designs. Although we are guided away from a real-world look, Apple also strongly encourages that your user interface have a real-world feel. Some may think this is a contradiction; however, the goal is to give users a deeper connection to the user interface. UI elements that respond to touch, gestures, and changes in orientation are examples of how to apply this new design paradigm. To assist in this new design approach, Apple has introduced two very nifty APIs: UIKit Dynamics and Motion Effects.

UIKit Dynamics

To put it simply, iOS 7 has a fully featured physics engine built into UIKit. You can manipulate specific properties to give a more real-world feel to your interface, including gravity, springs, elasticity, bounce, and force, to name a few. Each interface item has its own properties, and the dynamics engine will abide by them.

Motion effects

One of the coolest features of iOS 7 on our devices is the parallax effect found on the home screen: tilting the device in any direction pans the background image to emphasize depth. Using motion effects, we can monitor the data supplied by the device's accelerometer to adjust our interface based on movement and orientation. By combining these two features, you can create great-looking interfaces with a realistic feel that brings them to life.

To demonstrate UIKit Dynamics, we will be adding some code to our FoodDetailViewController.m file to create some nice effects and animations.

Adding gravity

Open FoodDetailViewController.m and add the following instance variables to the view controller:

    UIDynamicAnimator* animator;
    UIGravityBehavior* gravity;

Scroll to viewDidLoad and add the following code to the bottom of the method:

    animator = [[UIDynamicAnimator alloc] initWithReferenceView:self.view];
    gravity = [[UIGravityBehavior alloc] initWithItems:@[self.foodImageView]];
    [animator addBehavior:gravity];

Run the application, open the My Foods view, select a food item from the table view, and watch what happens. The food image should accelerate towards the bottom of the screen until it eventually falls off the screen, as shown in the following set of screenshots:

Let's go over the code, specifically the two new classes that were just introduced: UIDynamicAnimator and UIGravityBehavior.

UIDynamicAnimator

This is the core component of UIKit Dynamics. It is safe to say that the dynamic animator is the physics engine itself, wrapped in a convenient and easy-to-use class. The animator does nothing on its own; instead, it keeps track of the behaviors assigned to it. Each behavior interacts inside this physics engine.

UIGravityBehavior

Behaviors are the core building blocks of UIKit Dynamics animations. Each behavior defines an individual response to the physics environment. This particular behavior mimics the effect of gravity by applying force. Each behavior is associated with a view (or views) when created. Because you explicitly define this association, you can control which views will perform the behavior.
Behavior properties

Almost all behaviors have multiple properties that can be adjusted to achieve the desired effect. A good example is the gravity behavior, for which we can adjust the angle and magnitude. Add the following code before adding the behavior to the animator:

    gravity.magnitude = 0.1f;

Run the application and test it to see what happens. The picture view will start to fall; however, this time it will be at a much slower rate. Replace the preceding line with the following line:

    gravity.magnitude = 10.0f;

Run the application, and this time you will notice that the image falls much faster. Feel free to play with these properties and get a feel for each value.

Creating boundaries

When dealing with gravity, UIKit Dynamics does not conform to the boundaries of the screen. Although it is not visible, the food image continues to fall after it has passed the edge of the screen, and it will continue to fall unless we set boundaries that contain the image view. At the top of the file, create another instance variable:

    UICollisionBehavior *collision;

Now, in our viewDidLoad method, add the following code below our gravity code:

    collision = [[UICollisionBehavior alloc] initWithItems:@[self.foodImageView]];
    collision.translatesReferenceBoundsIntoBoundary = YES;
    [animator addBehavior:collision];

Here we are creating an instance of a new class (which is a behavior), UICollisionBehavior. Just like our gravity behavior, we associate this behavior with our food image view. Rather than explicitly defining the coordinates for the boundary, we use the convenient translatesReferenceBoundsIntoBoundary property on our collision behavior. By setting this property to YES, the boundary is defined by the bounds of the reference view that we set when allocating our dynamic animator. Because the reference view is self.view, the boundary is now the visible space of our view. Run the application and watch how the image falls but stops once it reaches the bottom of the screen, as shown in the following screenshot:

Collisions

With our image view responding to gravity and our screen bounds, we can start detecting collisions. You may have noticed that when the image view falls, it falls right through the two labels below it. This is because UIKit Dynamics will only respond to UIView elements that have been assigned behaviors. Each behavior can be assigned to multiple objects, and each object can have multiple behaviors. Because our labels have no behaviors associated with them, the UIKit Dynamics physics engine simply ignores them.

Let's make the food image view collide with the date label. To do this, we simply need to add the label to the collision behavior allocation call. Here is what the new code looks like:

    collision = [[UICollisionBehavior alloc] initWithItems:@[self.foodImageView, self.foodDateLabel]];

As you can see, all we have done is add self.foodDateLabel to the initWithItems array property. As mentioned before, any single behavior can be associated with multiple items. Run your code and see what happens. When the image falls, it hits the date label but continues to fall, pushing the date label with it. Because we didn't associate the gravity behavior with the label, it does not fall immediately. Although it does not respond to gravity, the label is still moved, because it is a physics object after all. This approach is not ideal, so let's use another awesome feature of UIKit Dynamics: invisible boundaries.
Creating invisible boundaries

We are going to take a slightly different approach to this problem. Our label is only a point of reference for where we want to add a boundary that will stop our food image view. Because of this, the label does not need to be associated with any UIKit Dynamics behaviors. Remove self.foodDateLabel from the following code:

    collision = [[UICollisionBehavior alloc] initWithItems:@[self.foodImageView, self.foodDateLabel]];

Instead, add the following code to the bottom of viewDidLoad, but before we add our collision behavior to the animator:

    // Add a boundary to the top edge
    CGPoint topEdge = CGPointMake(self.foodDateLabel.frame.origin.x + self.foodDateLabel.frame.size.width,
                                  self.foodDateLabel.frame.origin.y);
    [collision addBoundaryWithIdentifier:@"barrier"
                               fromPoint:self.foodDateLabel.frame.origin
                                 toPoint:topEdge];

Here we add a boundary to the collision behavior and pass some parameters. First, we define an identifier, which we can use later; then we pass the food date label's origin as the fromPoint parameter. The toPoint parameter is set to the CGPoint we created using the food date label's frame. Go ahead and run the application, and you will see that the food image now stops at the invisible boundary we defined. The label is still visible to the user, but the dynamic animator ignores it. Instead, the animator sees the barrier we defined and responds accordingly, even though the barrier is invisible to the user. Here is a side-by-side comparison of the before and after:

Dynamic items

When using UIKit Dynamics, it is important to understand what UIKit Dynamics items are. Rather than referencing dynamics as views, they are referenced as items, which adhere to the UIDynamicItem protocol. This protocol defines the center, transform, and bounds of any object that adheres to it. UIView is the most common class that adheres to the UIDynamicItem protocol. Another example of a class that conforms to this protocol is the UICollectionViewLayoutAttributes class.

Summary

In this article, we covered some basics of how UIKit Dynamics manages your application's behaviors, enabling us to create some really unique interface effects.


User Interactivity – Mini Golf

Packt
23 Apr 2014
7 min read
Using user input and touch events

A touch event occurs each time a user touches the screen, drags, or releases the screen, and also during an interruption. Touch events begin with a user touching the screen. The touches are all handled by the UIResponder class, which causes a UIEvent object to be generated; your code is then passed a UITouch object for each finger touching the screen. For most of the code you work on, you will be concerned with the basic information: Where did the user start to touch the screen? Did they drag their finger across the screen? And where did they let go? You will also want to know if the touch event got interrupted or canceled by another event. All of these situations can be handled in your view controller code using the following four methods (you can probably figure out what each one does):

    - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
    - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
    - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
    - (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event

These methods are structured pretty simply, and we will show you how to implement them a little later in the article.

When a touch event occurs on an object, let's say a button, that object will maintain the touch event until it has ended. So, if you touch a button and drag your finger outside the button, the touch-ended event is still sent to the button, as a touch up outside event. This is very important to know when you are setting up touches for your code. If you have a set of touch events associated with your background and another set associated with a button in the foreground, you need to understand that if touchesBegan begins on the button and touchesEnded ends on the background, touchesEnded is still passed to your button.

In this article, we will be creating a simple Mini Golf game. We start with the ball in a standard start position on the screen. The user can touch anywhere on the screen, drag their finger in any direction, and release. The ball will move in the direction the player dragged their finger, and the longer the distance from the original position, the more powerful the shot will be. In the Mini Golf game, we put the touch events on the background and not on the ball for one simple reason: when the user's ball stops close to the edge of the screen, we still want them to be able to drag a long distance for more power.

Using gestures in iOS apps

There are special types of touch events known as gestures. Gestures are used all over in iOS apps, but most notable is their use in maps. The simplest gesture is the tap, which is used for selecting items; however, gestures such as pinch and rotate are used for scaling and, well, rotating. The most common gestures have been put together in the Gesture Recognizer classes from Apple, and they work on all iOS devices. These gestures are tap, pinch, rotate, swipe, pan, and long press. You will see all of the following items listed in your Object Library:

- Tap: This is a simple touch and release on the screen. A tap may also include multiple taps.
- Pinch: This is a gesture involving a two-finger touch and dragging of your fingers together or apart.
- Rotate: This is a gesture involving a two-finger touch and rotation of your fingers in a circle.
- Swipe: This gesture involves holding a touch down and then dragging to a release point.
- Pan: This gesture involves holding a touch and dragging. A pan does not have an end point like the swipe, but instead recognizes direction.
- Long press: This gesture involves a simple touch and hold for a prespecified minimum amount of time.

The following screenshot displays the Object Library:

Let's get started on our new game as we show you how to integrate a gesture into your game:

1. We'll start by making our standard Single View Application project. Let's call this one MiniGolf and save it to a new directory, as shown in the following screenshot:
2. Once your project has been created, select it and uncheck Landscape Left and Landscape Right in your Deployment Info.
3. Create a new Objective-C class file for our game view controller with a subclass of UIViewController, which we will call MiniGolfViewController so that we don't confuse it with our other game view controllers.
4. Next, you will need to drag it into the Resources folder for the Mini Golf game.
5. Select your Main.storyboard, and let's get our initial menu set up. Drag in a new View Controller object from your Object Library. We are only going to be working with two screens for this game.
6. For the new View Controller object, set the custom class to your new MiniGolfViewController.
7. Let's add in a background and a button to segue into our new Mini Golf View Controller scene. Drag out an Image View object from the Object Library and set it to full screen. Drag out a Button object and name it Play Game.
8. Press control, click on the Play Game button, and drag over to the Mini Golf View Controller scene to create a modal segue, as shown in the next screenshot. We wouldn't normally do this with a game, but let's use gestures to create a segue from the main menu to start the game.
9. In the ViewController.m file, add in your unwind code:

    - (IBAction)unwindToThisView:(UIStoryboardSegue *)unwindSegue {
    }

10. In your ViewController.h file, add in the corresponding action declaration:

    - (IBAction)unwindToThisView:(UIStoryboardSegue *)unwindSegue;

11. In the Mini Golf View Controller, drag out an Image View object, set it to full screen, and set the image to gameBackground.png. Drag out a Button object and call it Exit Game, and a Label object with Strokes: 0 in its text area. It should look similar to the following screenshot:
12. Press control and click-and-drag your Exit Game button to the Exit unwind segue button, as shown in the following screenshot:

If you are planning on running this game in multiple screen sizes, you should pin your images and buttons in place. If you are just running your code in the iPhone 4 simulator, you should be fine. That should just about do it for the usual prep work. Now, let's get into some gestures.

If you run your game right now, you should be able to go into your game area and exit back out. But what if we wanted to set it up so that when you touch the screen and swipe right, a different segue is used to enter the gameplay area? The actions performed on the gestures are as follows:

1. Select your view and drag out Swipe Gesture Recognizer from your Object Library onto your view.
2. Press control and click-and-drag your Swipe Gesture Recognizer to your Mini Golf View Controller, just like you did with your Play Game button. Choose a modal segue and you are done.

For each of the different types of gestures that are offered in the Object Library, there are unique settings in the Attributes Inspector. For the swipe gesture, you can choose the swipe direction and the number of touches required to run.
For example, you could set up a two-finger left swipe just by changing a few settings, as shown in the following screenshot:

You could just as easily press control and click-and-drag from your Swipe Gesture Recognizer to your header file to add an IBAction associated with your swipe; this way, you can control what happens when you swipe. You will need to set Type to UISwipeGestureRecognizer. Once you have done this, a new IBAction function will be added to both your header file and your implementation file:

    - (IBAction)SwipeDetected:(UISwipeGestureRecognizer *)sender;

and

    - (IBAction)SwipeDetected:(UISwipeGestureRecognizer *)sender {
      //Code for when a user swipes
    }

This works the same way for each of the gestures in the Object Library.


Mobile application development with IBM Worklight

Packt
18 Feb 2014
13 min read
The mobile industry is evolving rapidly, with an increasing number of mobile devices such as smartphones and tablets. People are accessing more services from mobile devices than ever before. Mobile solutions directly impact businesses, organizations, and their growing numbers of customers and partners. Even employees now expect to access services from their mobile devices.

Several approaches currently exist for mobile application development, which include:

- Web development: This uses open web (HTML5, JavaScript) client programming models.
- Hybrid development: The app source code consists of web code executed within a native container that is provided by Worklight and consists of native libraries.
- Hybrid mixed: The developer augments the web code with native language to create unique features and access native APIs that are not yet available via JavaScript, such as AR, NFC, and others.
- Native development: In this approach, the application is developed using native languages, or transcoded into a native language via MAP tools, giving a native appearance, access to device capabilities, and native performance. Achieving a similar application on different platforms requires a different level of expertise for each, increasing the cost, time, and complexity.

The preceding list outlines the major development approaches; reviewing it can help you choose which approach is correct for your particular mobile application.

The IBM Worklight solution

In 2012, IBM acquired its very first set of mobile development and integration tools, called IBM Worklight, which allows organizations to transform their business and deliver mobile solutions to their customers. IBM Worklight provides a truly open approach for developers to build an application and run it across multiple mobile platforms without having to port it for each environment, that is, Apple iOS, Google Android, BlackBerry, and Microsoft Windows Phone. IBM Worklight also makes the developer's life easier by using standard technologies such as HTML5 and JavaScript, with extensions for popular libraries such as jQuery Mobile, the Dojo Toolkit, and Sencha Touch.

IBM Worklight is a mobile application platform containing all of the tools needed to develop a mobile application. Its components fit together along a common baseline for hybrid mobile application development, with each component providing a bundle of functionality and support. Here are the components involved in the lifecycle of mobile application development:

- Worklight Studio: IBM Worklight provides a robust, Eclipse-based development environment called Worklight Studio that allows developers to quickly construct mobile applications for multiple operating platforms.
- Worklight Server: This component is a runtime server that enables secure data transmission through centralized back-end connectivity with adapters, offline encrypted storage, unified push notifications, and much more.
- Worklight device runtime: The device runtime provides a rich set of cross-platform APIs that offer easy access to the services provided by the IBM Worklight Server.
- Worklight Console: This is a web-based interface for real-time analytics, push notification management, and mobile version management, dedicated to the ongoing administration of the Worklight Server and its deployed apps, adapters, and push notification services.
- Worklight Application Center: This is a cross-platform mobile application store that caters to the specific needs of a mobile application development team.

There is a big advantage to using Worklight for creating the user interface, and it shows in development on both the client side and the server side. With other products, developers typically face problems during the development and support of a hybrid app: defining the use cases, debugging, and preview testing an enterprise application are not straightforward. Using Worklight, a developer can keep the architecture simple and use its enhanced, improved structures to build the mobile application.

Creating a simple IBM Worklight application

Let's start by creating a simple HelloWorld Worklight project. The steps described for creating an app are similar for IBM Worklight Studio and the Eclipse IDE. The following is what you'll need to do:

1. Start IBM Worklight Studio.
2. Navigate to File | New and select Worklight Project, as shown in the following screenshot (Create New Worklight Project).
3. In the dialog that is displayed (asking for the Worklight project name and project type), select Hybrid Application as the type of application defined in the project templates, enter HelloWorld as the name of the first mobile project, and click on Next.
4. You will see another dialog for the hybrid application (defining the Worklight app name). In Application name, provide HelloWorld as the name of the application. Leave the checkboxes unchecked for now; these are used to extend supported JavaScript libraries into the app. Click on Finish.
5. After clicking on Finish, you will see that your project has been created in the Project Explorer, from the design perspective, as shown in the following screenshot.

Adding an environment

We have covered IBM Worklight Studio's features and what they offer developers. It's time to see how this tool and its plugins make your life even easier. The cross-platform development feature is a great deal to implement: it provides you with the means to target multiple environments without any hurdles, with just a few clicks in its efficient interface.

To add an environment for Android, iPhone, or any other platform, right-click on the Apps folder next to the adapters and navigate to New | Worklight Environment. You will see a dialog box with checkboxes for the currently supported environments for which you need to create the application. The following screenshot illustrates this feature (the Worklight Environment selection window); we're adding an Android environment for this application.

The IBM Worklight client-side API

In this article, you will learn how the IBM Worklight client-side API can improve mobile application development, and how the IBM Worklight server-side API improves client/server integration and communication between mobile applications and back-end systems.

The IBM Worklight client-side API allows mobile applications to access most of the features that are available in IBM Worklight during runtime, through defined libraries that are bundled into the mobile application. These libraries integrate your mobile application with the Worklight Server through predefined communication interfaces. They also offer unified access to native device features, which streamlines application development. The IBM Worklight client-side API contains hybrid, native, mixed hybrid, and web-based APIs.
Besides, it extends those APIs to support every mobile development framework. The client-side API modules also improve security, providing custom and built-in authentication mechanisms for IBM Worklight, and they provide a semantic connection between web technologies, such as HTML5, CSS3, and JavaScript, and the native functions that are available on different mobile platforms.

Exploring Dojo Mobile

Regarding the Dojo UI framework, you'll learn about Dojo Mobile in detail. Dojo Mobile, an extension of the Dojo Toolkit, provides a series of widgets, or components, optimized for use on a mobile device such as a smartphone or tablet. The Dojo framework is an extension of JavaScript and provides a built-in library that contains custom components such as text fields, validation menus, and image galleries. The components are modeled on their native counterparts and will look and feel native to those familiar with smartphone applications. The components are completely customizable using themes that let you make various customizations, such as pushing different sets of styles to iOS and Android users.

Authentication and security modules

Worklight has a built-in authentication framework that allows a developer to configure and use it with very little effort. A Worklight project has an authentication configuration file, which is used to declare and enforce security on mobile applications, adapters, data, and web resources, and which consists of the security entities described below. We will talk about the various predefined authentication realms and security tests that Worklight provides out of the box.

To see the importance of mobile security, consider that in today's life we keep our personal and business data on mobile devices. The data and the applications are both important to us, and both should be protected against unauthorized access, particularly if they contain sensitive information or transmit it over the network. There are a number of ways in which a device can be compromised and leak data to malicious users.

Worklight security principles, concepts, and terminology

IBM Worklight provides various security roles to protect applications, adapter procedures, and static resources from unauthorized access. Each role can be defined by a security test that comprises one or more authentication realms. The authentication realm defines a process that will be used to authenticate the users, and it has the following parts:

- Challenge handler: This is a component on the device side
- Authenticator and login module: These are components on the server

One authentication realm can be used to protect multiple resources. We will look into each component in detail.

Device request flow: The following diagram shows a device that makes a request to access a protected resource, for example, an adapter function, on the server. In response to the request, the server sends back an authentication challenge to the device to submit its authenticity (diagram: request/response flow between a Worklight application and an enterprise server).

Push notification

Mobile OS vendors such as Apple, Google, Microsoft, and others provide a free-of-cost feature through which a message can be delivered to any device running the respective OS. The OS vendors send a message, commonly known as a push message, to a device for a particular app. The app does not need to be running in order to receive a push message.
A push message can contain the following:

- Alerts: These appear in the form of text messages, in the notification center (on iOS) or the notification bar (on Android)
- Badges: These are small, circular marks on the app icon
- Sounds: These are audio alerts

IBM Worklight provides a unified push notification architecture that simplifies sending push messages across multiple devices running on different platforms. It provides a central management console to manage mobile vendor services, for example, APNS and GCM, in the background. Worklight provides the following push notification benefits:

- Easy to use: Users can easily subscribe and unsubscribe to a push service
- Quick message delivery: The push message gets delivered to a user's device even if the app is currently not running on the device
- Message feedback: It is possible to send feedback whenever a user receives and reads a push message

Cordova plugins

Apache Cordova is an open source, cross-platform mobile development architecture that allows the creation of multiplatform-deployable mobile apps. These apps can access the native features of devices using an API built on web technologies such as HTML5, JavaScript, and CSS3. Apache Cordova plugins are integrated into IBM Worklight Android and iOS projects. In this article, we will describe how Apache Cordova merges the JavaScript interface, as a wrapper on the web side in a native container, with the device's native interface on the mobile platform.

The most critical aspect of Cordova plugins is dealing with native functionality, such as the camera, barcode scanning, contact lists, and many other native features, across multiple platforms. JavaScript alone doesn't provide the extensibility to reach these native device features. In order to access a native feature, we provide a library corresponding to the device's native feature so that JavaScript can communicate through it. When the need arises for a web page to execute native functionality, the following points of access are available:

- The scenario can be implemented in a platform-specific manner, for example, for Android, iOS, or any other device
- Requests and responses between web pages and native pages can be handled through an encrypted communication channel between the web and native sides

By selecting the first option from the preceding list, we would find ourselves implementing and developing platform-dependent mobile applications. As we need to implement mobile applications for cross-platform use, and because that first option leads to cost-ineffective solutions, it is not a wise choice for enterprise mobile development. It is also poorly extensible for future enhancements and needs.

Encrypted Offline Cache

Encrypted Offline Cache (EOC) is the mechanism used for storing frequently used and highly sensitive data in the client application. It permits a flexible on-device data storage procedure for Android, iOS, BlackBerry, and Windows. This gives the user a way to store manipulated data, or responses fetched through adapters, while offline, and to synchronize the data with the server later, pushing modifications that were made while offline or without Internet connectivity.
If you are creating a mobile application dedicated to multiple platforms, such as iOS and Android, consider using JSONStore rather than EOC. It is much more practical to implement and is considered an IBM best practice. JSONStore provides a mechanism that eases the cryptographic procedures for encrypting forms and implementing security: PBKDF2 is a key derivation function used to derive, from a password provided by the user, the key that protects access to the encrypted data. The HTML5 cache can be used by EOC, but it is not guaranteed to be persistent and is not a proper solution for future updated versions of iOS.

Storage JSONStore

A JSONStore is a local replica of your data. IBM Worklight delivers a JavaScript API for working with a JSON store through the WL.JSONStore class. You can build an application that maintains a local copy of manipulated data and pushes local updates to a back-end service. Nearly every method in the API works against the synchronized local copy of the data that is kept on the device, in the client application. Using the JSONStore API, you can extend the existing adapter connectivity model to store data locally and push modifications from the client to a server. You can search the local data store and update or delete data within it, and the local store can be protected using password-based encryption.

Summary

In this article, we discussed modern mobile development techniques using IBM Worklight, which enables easy, integrated, and secure enterprise mobile applications while saving time and development effort. Most of the key functional areas have been covered, including IBM Worklight components, cross-platform environment handling, authentication, push notifications, the Dojo Mobile framework, and Encrypted Offline Cache for offline storage. IBM Worklight offers diverse mechanisms for enhancing a mobile application's functionality in an optimal and efficient way. This article thus concludes the mobile application development techniques and features with which enterprise mobile app development will no longer be a worry.
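To make the JSONStore discussion concrete, here is a minimal sketch along the lines of the WL.JSONStore API of this Worklight generation; the collection name, search fields, and sample document are our own illustrative choices:

    // Hedged sketch of WL.JSONStore usage; collection name, search
    // fields, and the sample document are illustrative only.
    var collections = {
        notes: {
            searchFields: { title: "string", priority: "integer" }
        }
    };

    WL.JSONStore.init(collections)
        .then(function () {
            // Add a document to the local, on-device store.
            return WL.JSONStore.get("notes").add({ title: "hello", priority: 1 });
        })
        .then(function (numberOfDocumentsAdded) {
            console.log("Documents added: " + numberOfDocumentsAdded);
            // Search the local store by one of the declared search fields.
            return WL.JSONStore.get("notes").find({ title: "hello" });
        })
        .then(function (results) {
            console.log("Found: " + JSON.stringify(results));
        })
        .fail(function (error) {
            console.log("JSONStore error: " + error.toString());
        });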


XenMobile™ Solutions Bundle

Packt
18 Feb 2014
5 min read
Introduction to the XenMobile™ Solution

The XenMobile Solution makes it possible to manage mobile devices, the applications on those devices, and the data in those applications. It enables users to access their apps, which may be mobile, SaaS, web, or Windows-based, from a universal app store. It provides administrators with granular control over devices, managing them by implementing multiple security policies. It gives admins options to securely deliver productivity apps, such as e-mail or intranet websites, to end users, and it permits applications to be securely wrapped before deployment without compromising application security or productivity. With more and more enterprises welcoming the Bring Your Own Device (BYOD) concept, a scenario where employees are allowed to bring their own devices to work, XenMobile components allow admins to securely manage these devices without hampering the end-user device experience.

In this section, we will introduce our readers to the following XenMobile Solution components and their roles in the XenMobile Solution:

- NetScaler Gateway: This is a secure access-control management solution allowing users to securely access internal resources. It also provides administrators with granular control policies to manage how devices function once they are connected to internal resources. These internal resources can be an intranet portal, corporate e-mail, or in-house apps.
- XenMobile Device Manager: The XenMobile Device Manager allows administrators to manage devices and users, enroll devices, deploy applications and files, and set policies. XenMobile Device Manager also offers Active Directory integration and detailed reporting features.
- App Controller: App Controller allows users to access web and SaaS-based applications, iOS and Android apps, and integrated ShareFile apps on their devices from anywhere on an internal network. When integrated with NetScaler Gateway, the XenMobile Solution provides users with access to these resources from an external network. Administrators have granular security policies to implement on devices connecting from either an internal or an external network.
- MDX Toolkit: The MDX Toolkit is software that must be installed on Mac OS to wrap iOS or Android-based apps; it ensures the apps are secure and compliant when installed on end-user devices. Administrators can also define a set of default policies while wrapping the app to limit how it works.
- Worx Apps: These are client-based apps that communicate with App Controller and allow users to access internal resources from anywhere. They comprise Worx Home for user enrollment, Worx Web for accessing web-based resources, and WorxMail for accessing corporate e-mail.
- ShareFile: This is a cloud-based file-sharing service that enables users to securely share documents from different apps or access shared desktop resources from mobile devices. ShareFile data can be accessed as an app, as a web resource, or through integration with Outlook as an add-in.

The XenMobile Solution, with its components, creates a highly secure and enterprise-compliant solution. The following diagram is a detailed network diagram for the XenMobile Solution provided by Citrix:

© Citrix Systems, Inc. All Rights Reserved.

XenMobile™ Solution features

XenMobile contains some of the most sought-after features when compared to its competitors.
In this section, we will list some of the features available in XenMobile, as follows:

- Configuring, provisioning, and managing mobile devices on the Windows Mobile, Symbian, iOS, and Android platforms
- Mobile content management using SharePoint and network-drive integration
- A secure mobile web browser
- App-specific micro VPNs
- Integration of Windows apps
- A unified app store
- Secure document sharing, syncing, and editing

The deployment flowchart

While implementing a Mobile Device Management (MDM) solution, it's very important to have a deployment pattern. This helps in understanding which components are required, or are not suitable, as per the environment's needs. This brings in the requirement of a detailed flowchart of the solution deployment. The following diagram shows the Citrix-recommended best-practice deployment flowchart for the XenMobile Solution:

© Citrix Systems, Inc. All Rights Reserved.

Explanation

In this section, we will break down the deployment flowchart to understand the component selection phases. The flowchart is based upon our requirements and will vary from one scenario to another.

Phase 1

- Requirement: Do we want an MDM solution to manage the enrolled devices?
- Decision: If an MDM solution is required, then we proceed with the XenMobile Device Manager installation; alternatively, we move to the next requirement.

Phase 2

- Requirement: Is application and content management required?
- Decision: If application and content integration is required, then we can deploy the XenMobile Solutions Bundle; alternatively, we move to the next requirement.

Phase 3

- Requirement: Will there be users accessing the integrated applications and data from the public Internet?
- Decision: If yes, then we move ahead with the NetScaler Gateway deployment; alternatively, we move to the next requirement.

Phase 4

- Requirement: Is access to XenApp or XenDesktop required?
- Decision: If yes, then we connect using StoreFront.

Summary

This article provided a brief overview of the XenMobile Solution and each of its components. We also covered many of the features that make it unique, and the network architecture of the solution. Additionally, we addressed the best-practice deployment flowchart of the XenMobile Solution, as recommended by Citrix.
Adding Graphics to the Map
Packt
13 Feb 2014
4 min read
(For more resources related to this topic, see here.) Graphics are points, lines, or polygons that are drawn on top of your map in a layer that is independent of any other data layer associated with a map service. Most people associate a graphic object with the symbol that is displayed on a map to represent the graphic. However, each graphic in ArcGIS Server can be composed of up to four objects: the geometry of the graphic, the symbology associated with the graphic, attributes that describe the graphic, and an info template that defines the format of the info window that appears when a graphic is clicked on. Although a graphic can be composed of up to four objects, it is not always necessary to define all four. The objects you choose to associate with your graphic will be dependent on the needs of the application that you are building. For example, in an application that displays GPS coordinates on a map, you may not need to associate attributes or display an info window for the graphic. However, in most cases, you will be defining the geometry and symbology for a graphic. Graphics are temporary objects stored in a separate layer on the map. They are displayed while an application is in use and are removed when the session is complete. The separate layer, called the graphics layer, stores all the graphics associated with your map. Just as with the other types of layers, GraphicsLayer also inherits from the Layer class. Therefore, all the properties, methods, and events found in the Layer class will also be present in GraphicsLayer. Graphics are displayed on top of any other layers that are present in your application. An example of point and polygon graphics is provided in the following screenshot. These graphics can be created by users or drawn by the application in response to the tasks that have been submitted. For example, a business analysis application might provide a tool that allows the user to draw a freehand polygon to represent a potential trade area. The polygon graphic would be displayed on top of the map, and could then be used as an input to a geoprocessing task that pulls demographic information pertaining to the potential trade area. Many ArcGIS Server tasks return their results as graphics. The QueryTask object can perform both attribute and spatial queries. The results of a query are then returned to the application in the form of a FeatureSet object, which is simply an array of features. You can then access each of these features as graphics and plot them on the map using a looping structure. Perhaps you'd like to find and display all land parcels that intersect the 100-year flood plain. A QueryTask object could perform the spatial query and then return the results to your application, where they would then be displayed as polygon graphics on the map. In this article, we will cover the following topics: The four parts of a graphic Creating geometry for graphics Symbolizing graphics Assigning attributes to graphics Displaying graphic attributes in an info window Creating graphics Adding graphics to the graphics layer The four parts of a graphic A graphic is composed of four items: Geometry, Symbol, Attributes, and InfoTemplate, as shown in the following diagram: A graphic has a geometric representation that describes where it is located. The geometry, along with a symbol, defines how the graphic is displayed. A graphic can also have attributes that provide descriptive information about the graphic. Attributes are defined as a set of name-value pairs.
For example, a graphic depicting a wildfire location could have attributes that describe the name of the fire along with the number of acres burned. The info template defines which attributes should be displayed in the info window that appears when the graphic is clicked, along with how they should be displayed. After their creation, the graphic objects must be stored inside a GraphicsLayer object before they can be displayed on the map. This GraphicsLayer object functions as a container for all the graphics that will be displayed. All the elements of a graphic are optional. However, the geometry and symbology of a graphic are almost always assigned. Without these two items, there would be nothing to display on the map, and there isn't much point in having a graphic unless you're going to display it. The following figure shows the typical process of creating a graphic and adding it to the graphics layer; a short code sketch of the same process follows. In this case, we are applying the geometry of the graphic as well as a symbol to depict the graphic. However, we haven't specifically assigned attributes or an info template to this graphic.
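To make the process concrete, here is a minimal sketch using the ArcGIS API for JavaScript. Treat it as an illustration only: the "mapDiv" element ID and the wildfire attribute names and values are invented for this example, and the geometry is simply taken from the center of the current map extent so that it lands in the map's spatial reference.

require(["esri/map", "esri/graphic", "esri/symbols/SimpleMarkerSymbol",
    "esri/InfoTemplate", "dojo/domReady!"],
  function (Map, Graphic, SimpleMarkerSymbol, InfoTemplate) {
    var map = new Map("mapDiv", { basemap: "streets", center: [-120.0, 38.0], zoom: 10 });
    map.on("load", function () {
      // 1. Geometry: where the graphic is located (the map center, for simplicity)
      var point = map.extent.getCenter();
      // 2. Symbol: how the graphic is drawn on screen
      var symbol = new SimpleMarkerSymbol();
      // 3. Attributes: descriptive name-value pairs (hypothetical wildfire data)
      var attributes = { FIRE_NAME: "Rim Fire", ACRES_BURNED: 257314 };
      // 4. Info template: formats the info window shown when the graphic is clicked
      var infoTemplate = new InfoTemplate("${FIRE_NAME}", "Acres burned: ${ACRES_BURNED}");
      // Assemble the graphic and add it to the map's built-in graphics layer
      var graphic = new Graphic(point, symbol, attributes, infoTemplate);
      map.graphics.add(graphic);
    });
  });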

XamChat – a Cross-platform App
Packt
10 Feb 2014
8 min read
(For more resources related to this topic, see here.) Describing our sample application concept The concept is simple: a chat application that uses a standard Internet connection as an alternative to sending text messages. There are several popular applications like this in the Apple App Store, probably due to the cost of text messaging and support for devices such as the iPod Touch or iPad. This should be a neat real-world example that could be useful for users, and will cover specific topics in developing applications for iOS and Android. Before starting with the development, let's list the set of screens that we'll need: Login / sign up: This screen will include a standard login and sign-up process for the user List of conversations: This screen will include a button to start a new conversation List of friends: This screen will provide a way to add new friends when we start a new conversation Conversation: This screen will have a list of messages between you and another user, and an option to reply A quick wireframe layout would give us a better understanding of the app's layout. The following figure shows the set of screens to be included in your app: Developing our model layer Since we have a good idea of what the application is, the next step is to develop the business objects or model layer of this application. Let's start out by defining a few classes that will contain the data to be used throughout the app. It is recommended, for the sake of organization, to add these to a Models folder in your project. Let's begin with a class representing a user. The class can be created as follows: public class User {   public int Id { get; set; }   public string Username { get; set; }   public string Password { get; set; } } Pretty straightforward so far; let's move on to create classes representing a conversation and a message as follows: public class Conversation {   public int Id { get; set; }   public int UserId { get; set; }   public string Username { get; set; } } public class Message {   public int Id { get; set; }   public int ConversationId { get; set; }   public int UserId { get; set; }   public string Username { get; set; }   public string Text { get; set; } } Notice that we are using integers as identifiers for the various objects. UserId is the value the application sets to indicate the user that the object is associated with. Now let's go ahead and set up our solution by performing the following steps: Start by creating a new solution and a new C# Library project. Name the project XamChat.Core and the solution XamChat. Next, let's set the library to a Mono / .NET 4.5 project. This setting is found in the project options dialog under Build | General | Target Framework. You could also choose to use a Portable Class Library for this project. Writing a mock web service Often when developing a mobile application, you may need to begin development before the real backend web service is available. To prevent the development from halting entirely, a good approach would be to develop a mock version of the service. First, let's break down the operations our app will perform against a web server. The operations are as follows: Log in with a username and password. Register a new account. Get the user's list of friends. Add friends by their usernames. Get a list of the existing conversations for the user. Get a list of messages in a conversation. Send a message.
Now let's define an interface that offers a method for each scenario. The interface is as follows: public interface IWebService {   Task<User> Login(string username, string password);   Task<User> Register(User user);   Task<User[]> GetFriends(int userId);   Task<User> AddFriend(int userId, string username);   Task<Conversation[]> GetConversations(int userId);   Task<Message[]> GetMessages(int conversationId);   Task<Message> SendMessage(Message message); } As you can see, we're using asynchronous communication based on the TPL (Task Parallel Library). Since communicating with a web service can be a lengthy process, it is always a good idea to use the Task<T> class for these operations. Otherwise, you could inadvertently run a lengthy task on the user interface thread, which would block user input during the operation. Task is definitely needed for web requests, since users could easily be using a cellular Internet connection on iOS and Android, and it will give us the ability to use the async and await keywords down the road. Now let's implement a fake service that implements this interface. Place classes such as FakeWebService in the Fakes folder of the project. Let's start with the class declaration and the first method of the interface: public class FakeWebService : IWebService {   public int SleepDuration { get; set; }   public FakeWebService()   {     SleepDuration = 1;   }   private Task Sleep()   {     return Task.Delay(SleepDuration);   }   public async Task<User> Login(     string username, string password)   {     await Sleep();     return new User { Id = 1, Username = username };   } } We started off with a SleepDuration property to store a number in milliseconds. This is used to simulate an interaction with a web server, which can take some time. It is also useful to change the SleepDuration value in different situations. For example, you might want to set this to a small number when writing unit tests so that the tests execute quickly. Next, we implemented a simple Sleep method that returns a task delaying for that number of milliseconds. This method will be used throughout the fake service to cause a delay on each operation. Finally, the Login method merely awaits the Sleep method and returns a new User object with the appropriate Username. For now, any username or password combination will work; however, you may wish to write some code here to check specific credentials. Now, let's implement a few more methods to continue our FakeWebService class as follows: public async Task<User> Register(User user) {   await Sleep();   return user; } public async Task<User[]> GetFriends(int userId) {   await Sleep();   return new[]   {     new User { Id = 2, Username = "bobama" },     new User { Id = 3, Username = "bobloblaw" },     new User { Id = 4, Username = "gmichael" },   }; } public async Task<User> AddFriend(   int userId, string username) {   await Sleep();   return new User { Id = 5, Username = username }; } For each of these methods, we follow exactly the same pattern as the Login method: each one delays and then returns some sample data. Feel free to mix in your own values.
Now, let's implement the GetConversations method required by the interface as follows: public async Task<Conversation[]> GetConversations(int userId) {   await Sleep();   return new[]   {     new Conversation { Id = 1, UserId = 2 },     new Conversation { Id = 2, UserId = 3 },     new Conversation { Id = 3, UserId = 4 },   }; } Basically, we just create a new array of Conversation objects with arbitrary IDs. We also make sure to match up the UserId values with the IDs we've used on the User objects so far. Next, let's implement GetMessages to retrieve a list of messages as follows: public async Task<Message[]> GetMessages(int conversationId) {   await Sleep();   return new[]   {     new Message     {       Id = 1,       ConversationId = conversationId,       UserId = 2,       Text = "Hey",     },     new Message     {       Id = 2,       ConversationId = conversationId,       UserId = 1,       Text = "What's Up?",     },     new Message     {       Id = 3,       ConversationId = conversationId,       UserId = 2,       Text = "Have you seen that new movie?",     },     new Message     {       Id = 4,       ConversationId = conversationId,       UserId = 1,       Text = "It's great!",     },   }; } Once again, we are adding some arbitrary data here, and mainly making sure that UserId and ConversationId match our existing data so far. And finally, we will write one more method to send a message as follows: public async Task<Message> SendMessage(Message message) {   await Sleep();   return message; } Most of these methods are very straightforward. Note that the service doesn't have to work perfectly; it should merely complete each operation successfully with a delay. Each method should also return test data of some kind to be displayed in the UI. This will give us the ability to implement our iOS and Android applications while filling in the web service later. Next, we need to implement a simple interface for persisting application settings. Let's define an interface named ISettings as follows: public interface ISettings {   User User { get; set; }   void Save(); } Note that you might want to set up the Save method to be asynchronous and return Task if you plan on storing settings in the cloud. We don't really need this with our application since we will only be saving our settings locally. Later on, we'll implement this interface on each platform using Android and iOS APIs. For now, let's just implement a fake version that will be used later when we write unit tests. The fake version is created by the following lines of code: public class FakeSettings : ISettings {   public User User { get; set; }   public void Save() { } } Note that the fake version doesn't actually need to do anything; we just need to provide a class that will implement the interface and not throw any unexpected errors. This completes the Model layer of the application. Here is a final class diagram of what we have implemented so far; to see the pieces working together at runtime, a short usage sketch follows.
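This sketch is not part of the XamChat code; it is a minimal illustration, assuming the classes above, of how client code might exercise the fake service with async and await. The credentials and the RunDemo wrapper are invented for this example.

using System;
using System.Threading.Tasks;

public static class ModelLayerDemo
{
    public static async Task RunDemo()
    {
        // A short delay keeps the demo snappy while still exercising the async path
        IWebService service = new FakeWebService { SleepDuration = 100 };
        ISettings settings = new FakeSettings();

        // Log in and persist the user, just as the real app will
        settings.User = await service.Login("chucknorris", "password");
        settings.Save();

        // Pull the conversation list, then the messages of the first conversation
        Conversation[] conversations = await service.GetConversations(settings.User.Id);
        Message[] messages = await service.GetMessages(conversations[0].Id);
        foreach (Message message in messages)
        {
            Console.WriteLine("{0}: {1}", message.UserId, message.Text);
        }
    }
}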

Intents for Mobile Components
Packt
20 Jan 2014
7 min read
(For more resources related to this topic, see here.) Common mobile components Due to the open source nature of the Android operating system, many different companies such as HTC and Samsung have ported the Android OS to their devices, with many different functionalities and styles. Each Android phone is unique in some way or the other, with features and components that differ from other brands and phones. But there are some components that are common to all Android phones. We are using two key terms here: components and features. A component is the hardware part of an Android phone, such as the camera, Bluetooth, and so on, while a feature is the software part of an Android phone, such as the SMS or e-mail feature. This article is all about hardware components, their access, and their use through intents. These common components can generally be used and implemented independently of any mobile phone or model, and there is no doubt that intents are the best asynchronous messages to activate these Android components. These intents are used to trigger the Android OS when some event occurs and some action should be taken. Android, on the basis of the data received, determines the receiver for the intent and triggers it. Here are a few common components found in each Android phone: The Wi-Fi component Each Android phone comes with complete support for Wi-Fi connectivity. Newer Android phones running Android version 4.1 and above also support the Wi-Fi Direct feature. This allows the user to connect to nearby devices without the need to connect with a hotspot or network access point. The Bluetooth component An Android phone includes Bluetooth network support that allows Android phone users to exchange data wirelessly over short ranges with other devices. The Android application framework provides developers with access to Bluetooth functionality through the Android Bluetooth APIs. The Cellular component No mobile phone is complete without a cellular component. Each Android phone has a cellular component for mobile communication through SMS, calls, and so on. The Android system provides very flexible APIs to utilize the telephony and cellular components to create very interesting and innovative apps. Global Positioning System (GPS) and geo-location GPS is a very useful but battery-consuming component in any Android phone. It is used for developing location-based apps for Android users. Google Maps is the best-known feature related to GPS and geo-location; developers have built many innovative apps and games using Google Maps and the GPS components in Android. The Geomagnetic field component A geomagnetic field component is found in most Android phones. This component is used to estimate the magnetic field of an Android phone at a given point on the Earth and, in particular, to compute the magnetic declination from the North. The geomagnetic field component uses the World Magnetic Model produced by the United States National Geospatial-Intelligence Agency. The current model that is being used for the geomagnetic field is valid until 2015; newer Android phones will use a newer version of the model. Sensor components Most Android devices have built-in sensors that measure motion, orientation, environmental conditions, and so on. These sensors sometimes act as the brains of the app. For example, they take actions on the basis of the phone's surroundings (for example, the weather) and give users an automatic interaction with the app.
These sensors provide raw data with high precision and accuracy. For example, the gravity sensor can be used to track gestures and motions, such as tilt, shake, and so on, in any app or game. Similarly, a temperature sensor can be used to detect the device's temperature, or a geomagnetic sensor (as introduced in the previous section) can be used in any travel application to track the compass bearing. Broadly, there are three categories of sensors in Android: motion, position, and environmental sensors. The following subsections discuss these types of sensors briefly. Motion sensors Motion sensors let the Android user monitor the motion of the device. There are both hardware-based sensors, such as the accelerometer and gyroscope, and software-based sensors, such as the gravity, linear acceleration, and rotation vector sensors. Motion sensors are used to detect a device's motion, including tilt, shake, rotation, swing, and so on. If used properly, these effects can make any app or game very interesting and flexible, and can provide a great user experience. Position sensors The two position sensors, the geomagnetic sensor and the orientation sensor, are used to determine the position of the mobile device. Another sensor, the proximity sensor, lets the user determine how close the face of a device is to an object. For example, when we get a call on an Android phone, placing the phone on the ear shuts off the screen, and when we hold the phone back in our hands, the screen display appears automatically. This simple application uses the proximity sensor to detect the ear (the object) near the face of the device (the screen). Environmental sensors These sensors are not used much in Android apps, but are used widely by the Android system to detect a lot of little things. For example, the temperature sensor is used to detect the temperature of the phone, and can help preserve battery and device life. At the time of writing this article, the Samsung Galaxy S4 Android phone has been launched. The phone has shown a great use of environmental gestures by allowing users to perform actions such as making calls with no-touch gestures, such as moving a hand or face in front of the phone. Components and intents Android phones contain a large number of components and features. This benefits both Android developers and users. Android developers can use these mobile components and features to customize the user experience. For most components, developers get two options: either they extend the components and customize them according to their application requirements, or they use the built-in interfaces provided by the Android system. We won't cover the first choice of extending components, as it is beyond the scope of this article. However, we will study the other option of using built-in interfaces for mobile components. Generally, to use any mobile component from our Android app, developers send intents to the Android system, and Android then acts accordingly, calling the respective component. Intents are asynchronous messages sent to the Android OS to perform some functionality. Most of the mobile components can be triggered by intents with just a few lines of code and can be utilized fully by developers in their apps. In the following sections of this article, we will see a few components and how they are used and triggered by intents, with practical examples.
We have divided the components into three categories: communication components, media components, and motion components. Now, let's discuss these components in the following sections. Communication components Any mobile phone's core purpose is communication, although Android phones provide a lot of features beyond it. Android phones contain SMS/MMS, Wi-Fi, and Bluetooth for communication purposes. This article focuses on the hardware components, so we will discuss only Wi-Fi and Bluetooth. The Android system provides built-in APIs to manage and use Bluetooth devices, settings, discoverability, and much more. It offers full network APIs not only for Bluetooth but also for Wi-Fi, hotspots, configuring settings, Internet connectivity, and much more. More importantly, these APIs and components can be used very easily by writing a few lines of code through intents. We will start by discussing Bluetooth, and how we can use Bluetooth through intents, in the next section; a minimal sketch of this pattern follows.
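As a first taste of triggering a component through intents, here is a minimal sketch, assuming a plain Activity, that asks the Android system to switch Bluetooth on and make the device discoverable. The request codes are arbitrary values invented for this example, and the BLUETOOTH permission must be declared in the manifest.

import android.app.Activity;
import android.bluetooth.BluetoothAdapter;
import android.content.Intent;
import android.os.Bundle;

public class BluetoothDemoActivity extends Activity {

    // Arbitrary request codes used to identify the results in onActivityResult()
    private static final int REQUEST_ENABLE_BT = 1;
    private static final int REQUEST_DISCOVERABLE = 2;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        BluetoothAdapter adapter = BluetoothAdapter.getDefaultAdapter();
        if (adapter == null) {
            return; // This device has no Bluetooth component
        }

        if (!adapter.isEnabled()) {
            // Ask the system to switch Bluetooth on; the OS shows the dialog for us
            Intent enableIntent = new Intent(BluetoothAdapter.ACTION_REQUEST_ENABLE);
            startActivityForResult(enableIntent, REQUEST_ENABLE_BT);
        }

        // Ask the system to make us discoverable to nearby devices for 120 seconds
        Intent discoverableIntent = new Intent(BluetoothAdapter.ACTION_REQUEST_DISCOVERABLE);
        discoverableIntent.putExtra(BluetoothAdapter.EXTRA_DISCOVERABLE_DURATION, 120);
        startActivityForResult(discoverableIntent, REQUEST_DISCOVERABLE);
    }
}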

What is NGUI?
Packt
20 Jan 2014
8 min read
(For more resources related to this topic, see here.) The Next-Gen User Interface kit is a plugin for Unity 3D. It has the great advantage of being easy to use, very powerful, and optimized compared to Unity's built-in GUI system, UnityGUI. Since it is written in C#, it is easily understandable and you may tweak it or add your own features, if necessary. The NGUI Standard License costs $95. With this, you will have useful example scenes included. I recommend this license to start comfortably; a free evaluation version is available, but it is limited, outdated, and not recommended. The NGUI Professional License, priced at $200, gives you access to NGUI's GIT repository to access the latest beta features and releases in advance. A $2000 Site License is available for an unlimited number of developers within the same studio. Let's have an overview of the main features of this plugin and see how they work. UnityGUI versus NGUI With Unity's GUI, you must create the entire UI in code by adding lines that display labels, textures, or any other UI element on the screen. These lines have to be written inside a special function, OnGUI(), that is called for every frame. This is no longer necessary; with NGUI, UI elements are simple GameObjects! You can create widgets (this is what NGUI calls labels, sprites, input fields, and so on), move them, rotate them, and change their dimensions using handles or the Inspector. Copying, pasting, creating prefabs, and every other useful feature of Unity's workflow is also available. These widgets are viewed by a camera and rendered on a layer that you can specify. Most of the parameters are accessible through Unity's Inspector, and you can see what your UI looks like directly in the Game window, without having to hit the Play button. Atlases Sprites and fonts are all contained in a large texture called an atlas. With only a few clicks, you can easily create and edit your atlases. If you don't have any images to create your own UI assets, simple default atlases come with the plugin. That system means that for a complex UI window composed of different textures and fonts, the same material and texture will be used when rendering. This results in only one draw call for the entire window. This, along with other optimizations, makes NGUI the perfect tool to work on mobile platforms. Events NGUI also comes with an easy-to-use event framework that is written in C#. The plugin comes with a large number of additional components that you can attach to GameObjects. These components can perform advanced tasks depending on which events are triggered: hover, click, input, and so on. Therefore, you may enhance your UI experience while keeping it simple to configure. Code less, get more! Localization NGUI comes with its own localization system, enabling you to easily set up and change your UI's language with the push of a button. All your strings are located in .txt files: one file per language. Shaders Lighting, normal mapping, and refraction shaders are supported in NGUI, which can give you beautiful results. Clipping is also a shader-controlled feature with NGUI, used for showing or hiding specific areas of your UI. We've now covered what NGUI's main features are, and how it can be useful to us as a plugin, and now it's time to import it into Unity. Importing NGUI After buying the product from the Asset Store or getting the evaluation version, you have to download it. Perform the following steps to do so: Create a new Unity project. Navigate to Window | Asset Store.
Select your downloads library. Click on the Download button next to NGUI: Next-Gen UI. When the download completes, click on the NGUI icon / product name in the library to access the product page. Click on the Import button and wait for a pop-up window to appear. Check the checkbox for the NGUI v.3.0.2.unity package and click on Import. In the Project view, navigate to Assets | NGUI and double-click on NGUI v.3.0.2. A new import pop-up window will appear. Click on Import again. Click any button on the toolbar to refresh it. The NGUI tray will appear! The NGUI tray will look like the following screenshot: You have now successfully imported NGUI into your project. Let's create your first 2D UI. Creating your UI We will now create our first 2D user interface with NGUI's UI Wizard. This wizard will add all the elements needed for NGUI to work. Before we continue, please save your scene as Menu.unity. UI Wizard Create your UI by opening the UI Wizard, navigating to NGUI | Open | UIWizard from the toolbar. Let's now take a look at the UI Wizard window and its parameters. Window You should now have the following pop-up window with two parameters: Parameters The two parameters are as follows: Layer: This is the layer on which your UI will be displayed Camera: This will decide if the UI will have a camera, and its drop-down options are as follows: None: No camera will be created Simple 2D: Uses a camera with orthographic projection Advanced 3D: Uses a camera with perspective projection Separate UI Layer I recommend that you separate your UI from other usual layers. We should do it as shown in the following steps: Click on the drop-down menu next to the Layer parameter. Select Add Layer. Create a new layer and name it GUI2D. Go back to the UI Wizard window and select this new GUI2D layer for your UI. You can now click on the Create Your UI button. Your first 2D UI has been created! Your UI structure The wizard has created four new GameObjects on the scene for us: UI Root (2D) Camera Anchor Panel Let's now review each in detail. UI Root (2D) The UIRoot component scales widgets down to keep them at a manageable size. It is also responsible for the Scaling Style: it will either scale UI elements to remain pixel perfect or to occupy the same percentage of the screen, depending on the parameters you specify. Select the UI Root (2D) GameObject in the Hierarchy. It has the UIRoot.cs script attached to it. This script adjusts the scale of the GameObject it's attached to in order to let you specify widget coordinates in pixels, instead of Unity units, as shown in the following screenshot: Parameters The UIRoot component has four parameters: Scaling Style: The following are the available scaling styles: PixelPerfect: This will ensure that your UI will always try to remain at the same size in pixels, no matter what the resolution. In this scaling mode, a 300 x 200 window will be huge on a 320 x 240 screen and tiny on a 1920 x 1080 screen. That also means that if you have a smaller resolution than your UI, it will be cropped. FixedSize: This will ensure that your UI will be proportionally resized depending on the screen's height. The result is that your UI will not be pixel perfect but will scale to fit the current screen size. FixedSizeOnMobiles: This will ensure fixed size on mobiles and pixel perfect everywhere else. Manual Height: With the FixedSize scaling style, the scale will be based on this height.
If your screen's height goes over or under this value, it will be resized to be displayed identically while maintaining the aspect ratio (width/height proportional relationship). Minimum Height: With the PixelPerfect scaling style, this parameter defines the minimum height for the screen. If your screen height goes below this value, your UI will resize. It will be as if the Scaling Style parameter was set to FixedSize with Manual Height set to this value. Maximum Height: With the PixelPerfect scaling style, this parameter defines the maximum height for the screen. If your screen height goes over this value, your UI will resize. It will be as if the Scaling Style parameter was set to FixedSize with Manual Height set to this value. Please set the Scaling Style parameter to FixedSize with a Manual Height value of 1080. This will allow us to have the same UI on any screen size up to 1920 x 1080. Even though the UI will look the same on different resolutions, the aspect ratio is still a problem since the rescale is based on the screen's height only. If you want to cover both 4:3 and 16:9 screens, your UI should not be too large; try to keep it square. Otherwise, your UI might be cropped on certain screen resolutions. On the other hand, if you want a 16:9 UI, I recommend you force this aspect ratio only. Let's do it now for this project by performing the following steps: Navigate to Edit | Project Settings | Player. In the Inspector option, unfold the Resolution and Presentation group. Unfold the Supported Aspect Ratios group. Check only the 16:9 box. Summary In this article, we discussed NGUI's basic workflow: it works with GameObjects, uses atlases to combine multiple textures in one large texture, has an event system, can use shaders, and has a localization system. After importing the NGUI plugin, we created our first 2D UI with the UI Wizard, reviewed its parameters, and created our own GUI 2D layer for our UI to reside on. Resources for Article: Further resources on this subject: Unity 3D Game Development: Don't Be a Clock Blocker [Article] Component-based approach of Unity [Article] Unity 3: Building a Rocket Launcher [Article]
BSD Socket Library
Packt
17 Jan 2014
9 min read
(For more resources related to this topic, see here.) Introduction The Berkeley Socket API (where API stands for Application Programming Interface) is a set of standard functions used for inter-process network communications. Other socket APIs also exist; however, the Berkeley Socket API is generally regarded as the standard. The Berkeley Socket API was originally introduced in 1983 when 4.2 BSD was released. The API has evolved with very few modifications into a part of the Portable Operating System Interface for Unix (POSIX) specification. All modern operating systems have some implementation of the Berkeley Socket Interface for connecting devices to the Internet. Even Winsock, which is MS Windows' socket implementation, closely follows the Berkeley standards. BSD sockets generally rely on client/server architecture when they establish their connections. Client/server architecture is a networking approach where a device is assigned one of the two following roles: Server: A server is a device that selectively shares resources with other devices on the network Client: A client is a device that connects to a server to make use of the shared resources Web pages are a great example of the client/server architecture. When you open a web page in your favorite browser, for example https://www.packtpub.com, your browser (and therefore your computer) becomes the client, and Packt Publishing's web servers become the servers. One very important concept to keep in mind is that any device can be a server, a client, or both. For example, you may be visiting the Packt Publishing website, which makes you a client, and at the same time you have file sharing enabled, which also makes your device a server. The Socket API generally uses one of the following two core protocols: Transmission Control Protocol (TCP): TCP provides a reliable, ordered, and error-checked delivery of a stream of data between two devices on the same network. TCP is generally used when you need to ensure that all packets are correctly received and are in the correct order (for example, web pages). User Datagram Protocol (UDP): UDP does not provide any of the error-checking or reliability features of TCP, but offers much less overhead. UDP is generally used when providing information to the client quickly is more important than receiving every packet (for example, streaming video). Darwin, which is an open source POSIX-compliant operating system, forms the core set of components upon which Mac OS X and iOS are based. This means that both OS X and iOS contain the BSD Socket Library. The last paragraph is very important to understand when you begin thinking about creating network applications for the iOS platform, because almost any code example that uses the BSD Socket Library will work on the iOS platform. The biggest difference between using the BSD Socket API on any standard Unix platform and the iOS platform is that the iOS platform does not support forking of processes. You will need to use multiple threads rather than multiple processes. The BSD Socket API can be used to build both client and server applications. There are a few networking concepts that you should understand: IP address: Any device on an Internet Protocol (IP) network, whether it is a client or server, has a unique identifier known as an IP address. The IP address serves two basic purposes: host identification and location identification. There are currently two IP address formats: IPv4: This is currently the standard for the Internet and most internal intranets.
This is an example of an IPv4 address: 83.166.169.231. IPv6: This is the latest revision of the Internet Protocol (IP). It was developed to eventually replace IPv4 and to address the long-anticipated problem of running out of IPv4 addresses. This is an example of an IPv6 address: 2001:0db8:0000:0000:0000:ff00:0042:8329. An IPv6 address can be shortened by replacing a run of consecutive zero fields with two colons. The previous address could be rewritten as 2001:0db8::ff00:0042:8329. Ports: A port is an application- or process-specific software construct serving as a communications endpoint on a device connected to an IP network, where the IP address identifies the device to connect to, and the port number identifies the application to connect to. The best way to think of network addressing is to think about how you mail a letter. For a letter to reach its destination, you must put the complete address on the envelope. For example, if you were going to send a letter to a friend who lived at the following address: Your Friend 123 Main St Apt. 223 San Francisco CA, 94123 If I were to translate that into network addressing, the IP address would be equal to the street address, city, state, and zip code (123 Main St, San Francisco CA, 94123), and the apartment number would be equal to the port number (223). So the IP address gets you to the exact location, and the port number will tell you which door to knock on. A device has 65,536 available ports, with the first 1024 being reserved for common protocols such as HTTP, HTTPS, SSH, and SMTP. Fully Qualified Domain Name (FQDN): As humans, we are not very good at remembering numbers; for example, if your friend tells you that he found a really excellent website for books and the address was 83.166.169.231, you probably would not remember that two minutes from now. However, if he tells you that the address was www.packtpub.com, you would probably remember it. An FQDN is a name that can be used to refer to a device on an IP network. So now you may be asking yourself, how does the name get translated to the IP address? A Domain Name System (DNS) server does that. Domain Name System Servers: A Domain Name System server translates a fully qualified domain name to an IP address. When you use an FQDN of www.packtpub.com, your computer must get the IP address of the device from the DNS configured in your system. To find out what the primary DNS is for your machine, open a terminal window and type the following command: cat /etc/resolv.conf Byte order: As humans, when we look at a number, we put the most significant digit first and the least significant digit last; for example, in the number 123, 1 represents 100, so it is the most significant digit, while 3 is the least significant. For computers, the byte order refers to the order in which data (not only integers) is stored in memory. Some computers store the most significant bytes first (at the lowest byte address), while others store the most significant bytes last. If a device stores the most significant bytes first, it is known as big-endian, while a device that stores the most significant bytes last is known as little-endian. The order in which data is stored in memory is of great importance when developing network applications, where you may have two devices that use different byte ordering communicating with each other. You will need to account for this by using the Network-to-Host and Host-to-Network functions to convert between the byte order of your device and the byte order of the network.
The byte order of the device is commonly referred to as the host byte order, and the byte order of the network is commonly referred to as the network byte order. Finding the byte order of your device One of the concepts that was briefly discussed in this article was how devices store information in memory (byte order). After that discussion, you may be wondering what the byte order of your device is. The byte order of a device depends on the microprocessor architecture being used by the device. You can pretty easily go on to the Internet and search for "Mac OS X i386 byte order" and find out what the byte order is, but where is the fun in that? We are developers, so let's see if we can figure it out with code. We can determine the byte order of our devices with a few lines of C code; however, like most of the code in this article, we will put the C code within an Objective-C wrapper to make it easy to port to your projects. How to do it… Let's get started by defining an ENUM in our header file; this ENUM will be used to identify the byte order of the system, as shown in the following code: typedef NS_ENUM(NSUInteger, EndianType) { ENDIAN_UNKNOWN, ENDIAN_LITTLE, ENDIAN_BIG }; To determine the byte order of the device, we will use the byteOrder method as shown in the following code: - (EndianType)byteOrder { union { short sNum; char cNum[sizeof(short)]; } un; un.sNum = 0x0102; if (sizeof(short) == 2) { if (un.cNum[0] == 1 && un.cNum[1] == 2) return ENDIAN_BIG; else if (un.cNum[0] == 2 && un.cNum[1] == 1) return ENDIAN_LITTLE; else return ENDIAN_UNKNOWN; } else return ENDIAN_UNKNOWN; } How it works… In the ByteOrder header file, we defined an ENUM with three constants. The constants are as follows: ENDIAN_UNKNOWN: We are unable to determine the byte order of the device ENDIAN_LITTLE: This specifies that the most significant bytes are last (little-endian) ENDIAN_BIG: This specifies that the most significant bytes are first (big-endian) The byteOrder method determines the byte order of our device and returns an integer that can be translated using the constants defined in the header file. To determine the byte order of our device, we begin by creating a union of short int and char[]. We then store the value 0x0102 in the union. Finally, we look at the character array to determine the order in which the bytes were stored. If the number one was stored first, it means that the device uses big-endian; if the number two was stored first, it means that the device uses little-endian. A short usage sketch follows.
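The following is a minimal usage sketch, not code from the article: it assumes the byteOrder method above is wrapped in a hypothetical ByteOrder class declared in ByteOrder.h, and it also shows htons() from <arpa/inet.h> converting a value to network byte order.

#import <Foundation/Foundation.h>
#import <arpa/inet.h>
#import "ByteOrder.h" // hypothetical wrapper class exposing -byteOrder

int main(int argc, char *argv[]) {
    @autoreleasepool {
        ByteOrder *detector = [[ByteOrder alloc] init];
        switch ([detector byteOrder]) {
            case ENDIAN_BIG:
                NSLog(@"This device is big-endian");
                break;
            case ENDIAN_LITTLE:
                NSLog(@"This device is little-endian");
                break;
            default:
                NSLog(@"Unable to determine the byte order");
                break;
        }
        // Whatever the host byte order is, htons() converts a 16-bit value
        // to network (big-endian) byte order before it goes over a socket
        uint16_t port = 8080;
        NSLog(@"Port %u in network byte order: 0x%04x", port, htons(port));
    }
    return 0;
}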

Making POIApp Location Aware
Packt
10 Jan 2014
4 min read
(For more resources related to this topic, see here.) Location services While working with location services on the Android platform, you will primarily work with an instance of LocationManager. The process is fairly straightforward, as follows: Obtain a reference to an instance of LocationManager. Use the instance of LocationManager to request location change notifications, either ongoing or a single notification. Process OnLocationChanged() callbacks. Android devices generally provide two different means for determining a location: GPS and Network. When requesting location change notifications, you must specify the provider you wish to receive updates from. The Android platform defines a set of string constants for the following providers: GPS_PROVIDER (gps): This provider determines a location using satellites. Depending on conditions, this provider may take a while to return a location fix. It requires the ACCESS_FINE_LOCATION permission. NETWORK_PROVIDER (network): This provider determines a location based on the availability of cell towers and Wi-Fi access points. Its results are retrieved by means of a network lookup. PASSIVE_PROVIDER (passive): This provider can be used to passively receive location updates when other applications or services request them, without actually having to request the locations yourself. It requires the ACCESS_FINE_LOCATION permission, although if the GPS is not enabled, this provider might only return coarse fixes. You will notice that the provider descriptions mention specific permissions, which must be set on an app before it can use that provider. Setting app permissions App permissions are specified in the AndroidManifest.xml file. To set the appropriate permissions, perform the following steps: Double-click on Properties/AndroidManifest.xml in the Solution pad. The file will be opened in the manifest editor. There are two tabs at the bottom of the screen, Application and Source, which can be used to toggle between viewing a form for editing the file or the raw XML, as follows: In the Required permissions list, check AccessCoarseLocation, AccessFineLocation, and Internet. Select File | Save. Switch to the Source view to view the XML as follows: Configuring the emulator To use an emulator for development, this article will require the emulator to be configured with Google APIs so that the address lookup and navigation to the map app work. To install and configure Google APIs, perform the following steps: From the main menu, select Tools | Open Android SDK Manager. Select the platform version you are using, check Google APIs, and click on Install 1 package…, as seen in the following screenshot: After the installation is complete, close the Android SDK Manager and, from the main menu, select Tools | Open Android Emulator Manager. Select the emulator you want to configure and click on Edit. For Target, select the Google APIs entry for the API level you want to work with. Click on OK to save. Obtaining an instance of LocationManager The LocationManager class is a system service that provides access to the location and bearing of a device, if the device supports these services. You do not explicitly create an instance of LocationManager; instead, you request an instance from a Context object using the GetSystemService() method. In most cases, the Context object is a subtype of Activity. The following code depicts declaring a reference to a LocationManager class and requesting an instance: LocationManager _locMgr; . . .
_locMgr = GetSystemService (Context.LocationService) as LocationManager; Requesting location change notifications The LocationManager class provides a series of overloaded methods that can be used to request location update notifications. If you simply need a single update, you can call RequestSingleUpdate(); to receive ongoing updates, call RequestLocationUpdates(). Prior to requesting location updates, you must identify the location provider that should be used. In our case, we simply want to use the most accurate provider available at the time. This can be accomplished by specifying the criteria for the desired provider using an instance of Android.Location.Criteria. The following code example shows how to specify the minimum criteria: Criteria criteria = new Criteria(); criteria.Accuracy = Accuracy.NoRequirement; criteria.PowerRequirement = Power.NoRequirement; Now that we have the criteria, we are ready to request updates as follows: _locMgr.RequestSingleUpdate (criteria, this, null); Summary In this article, we stepped through integrating POIApp with location services and the Google map app. We dealt with the various options that developers have to make their apps location aware, and walked through adding logic to determine a device's location and the address of that location, and displaying the location within the map app. Resources for Article: Further resources on this subject: Creating and configuring a basic mobile application [Article] Creating Dynamic UI with Android Fragments [Article] So, what is Spring for Android? [Article]

Geolocation – using PhoneGap features to improve an app's functionality, write once use everywhere
Packt
31 Dec 2013
5 min read
(For more resources related to this topic, see here.) Step 1 – define global variables We include the following variables and functions in our file: var map; var latitud; var longitud; var xmlDoc = loadXml("puntos.xml"); var marker; var markersArray = []; function loadXml(xmlUrl) { var xmlhttp; if (window.XMLHttpRequest) { xmlhttp = new XMLHttpRequest(); } else { xmlhttp = new ActiveXObject("Microsoft.XMLHTTP"); } xmlhttp.open("GET", xmlUrl, false); xmlhttp.send(); xmlDoc = xmlhttp.responseXML; return xmlDoc; } With this code, we build our first function, called loadXml, which just loads the information from the XML file into the smartphone's memory. Step 2 – get current position We will build the following functions that we need to show our current position in Google Maps: getCurrentPosition: This function receives an object with the latitude and longitude. These values are set to two global variables with the same names. This function receives three parameters, which are as follows: A function to manage the current latitude and longitude A function to manage errors if they occur when the device is trying to get the position Options to configure: maximumAge: This is the time our position is kept in the cache, in milliseconds timeout: This is the maximum time to wait for an answer from the getCurrentPosition function, in milliseconds enableHighAccuracy: This provides a more accurate location; it can be set to either true or false The following is the code for the preceding function: navigator.geolocation.getCurrentPosition(coordinates, errors, { maximumAge : 3000, timeout : 15000, enableHighAccuracy : true }); The following is the code for the coordinates and errors functions: function coordinates(position) { latitud = position.coords.latitude; /* saving latitude */ longitud = position.coords.longitude; /* saving longitude*/ loadMap(); }// end function function errors(err) { /* Managing errors */ if (err.code == 0) { alert("Oops! Something is wrong"); } if (err.code == 1) { alert("Oops! Please agree to share your position with us."); } if (err.code == 2) { alert("Oops! We can't get your current position."); } if (err.code == 3) { alert("Oops! Timeout!"); } }// end errors locateMe: This function checks whether the device supports the PhoneGap geolocalization API or not. If the device supports the API, this function calls getCurrentPosition, explained previously, to obtain the current position; otherwise, it warns the user with an alert. The complete code for the locateMe function is as follows: function locateMe() { if (navigator.geolocation) { /* The browser has geolocalization */ navigator.geolocation.getCurrentPosition(coordinates, errors, { maximumAge : 3000, timeout : 15000, enableHighAccuracy : true }); } else { alert("Oops! Your browser doesn't support geolocalization"); } } Step 3 – showing the current position using Google Maps To do this, we use a function that we called loadMap. This function is responsible for loading a map using the Google Maps API with the current position. Also, we use a JavaScript object called myOptions with three properties: zoom to define the zoom, center to define where the map will be centered, and mapTypeId to indicate the map style (for example, Satellite). Following the code, we find two lines: the first one initializes the actualHeight variable according to the value that is returned by the getActualContentHeight function.
This function returns the height that the map needs to have according to the screen size. In the second line, we change the height of the "map_canvas" div through jQuery Mobile's css method. In the following code file, we set the map variable with the google.maps.Map object, which receives two parameters: the HTML element and the myOptions object. We use an additional event to resize the map when the screen size changes. Now, we create a new marker object, google.maps.Marker, with four properties: position for the place of the marker, map, which is the global variable, icon to define the image that we want to use as a marker, and title for the tooltip. We have two functions: the first one, createMarkers, reads the points from the XML file loaded in memory and then places the markers on the map using our second function, addMarker, which receives four parameters (the global map variable, the point name, the address and phone, and a google.maps.LatLng object with the point coordinates) to build the marker. We add a click event to show all the information about the place. Finally, we use the removePoints function, which clears the map by deleting the points loaded. This is the code for the JS file: function loadMap() { var latlon = new google.maps.LatLng(latitud, longitud); var myOptions = { zoom : 17, center : latlon, mapTypeId : google.maps.MapTypeId.ROADMAP }; map = new google.maps.Map(document.getElementById('map_canvas'), myOptions); var mapDiv = document.getElementById('map_canvas'); google.maps.event.addDomListener(mapDiv, 'resize', function(){ google.maps.event.trigger(map,'resize'); }); var coorMarker = new google.maps.LatLng(latitud, longitud); marker = new google.maps.Marker({/* Create the marker */ position : coorMarker, map : map, icon : 'images/green-dot.png', title : "Where am I?" }); } Step 4 – edit the HTML file To implement the preceding functions, we need to add some lines of code to our HTML file. We need to add the script line to include the new JS file, and add a script block in the HTML head tag to execute the locateMe function once the device is ready. Another PhoneGap function, watchPosition, then updates the position as the user moves. Summary This article has shown us how to implement geolocation in our app, using the API provided by PhoneGap. The code for pointing out our current location was also discussed. Resources for Article: Further resources on this subject: Using Location Data with PhoneGap [Article] Configuring the ChildBrowser plugin [Article] Creating and configuring a basic mobile application [Article]
Common AsyncTask issues
Packt
20 Dec 2013
7 min read
(For more resources related to this topic, see here.) Fragmentation issues AsyncTask has evolved with new releases of the Android platform, resulting in behavior that varies with the platform of the device running the task, which is a part of the wider issue of fragmentation. The simple fact is that if we target a broad range of API levels, the execution characteristics of our AsyncTasks, and therefore the behavior of our apps, can vary considerably on different devices. So what can we do to reduce the likelihood of encountering AsyncTask issues due to fragmentation? The most obvious approach is to deliberately target devices running at least Honeycomb, by setting a minSdkVersion of 11 in the Android Manifest file. This neatly puts us in the category of devices that, by default, execute AsyncTasks serially, and therefore much more predictably. However, this significantly reduces the market reach of our apps. At the time of writing in September 2013, more than 34 percent of Android devices in the wild run a version of Android in the danger zone between API levels 4 and 10. A second option is to design our code carefully and test exhaustively on a range of devices, always commendable practices of course, but as we've seen, concurrent programming is hard enough without the added complexity of fragmentation, and invariably, subtle bugs will remain. A third solution that has been suggested by the Android development community is to reimplement AsyncTask in a package within your own project, then extend your own AsyncTask class instead of the SDK version. In this way, you are no longer at the mercy of the user's device platform, and can regain control of your AsyncTasks. Since the source code for AsyncTask is readily available, this is not difficult to do. Activity lifecycle issues Having deliberately moved any long-running tasks off the main thread, we've made our applications nice and responsive: the main thread is free to respond very quickly to any user interaction. Unfortunately, we have also created a potential problem for ourselves, because the main thread is able to finish the Activity before our background tasks complete. Activity might finish for many reasons, including configuration changes caused by the user rotating the device (the default behavior of Activity on a change in orientation is to restart with an entirely new instance of the activity). If we continue processing a background task after the Activity has finished, we are probably doing unnecessary work, and therefore wasting CPU and other resources (including battery life), which could be put to better use. Also, any object references held by the AsyncTask will not be eligible for garbage collection until the task explicitly nulls those references or completes and is itself eligible for GC (garbage collection). Since our AsyncTask probably references the Activity or parts of the View hierarchy, we can easily leak a significant amount of memory in this way. A common usage of AsyncTask is to declare it as an anonymous inner class of the host Activity, which creates an implicit reference to the Activity and an even bigger memory leak. There are two approaches for preventing these resource wastage problems. Handling lifecycle issues with early cancellation First, and most obviously, we can synchronize our AsyncTask lifecycle with that of the Activity by canceling running tasks when our Activity is finishing. When an Activity finishes, its lifecycle callback methods are invoked on the main thread.
Activity lifecycle issues

Having deliberately moved any long-running tasks off the main thread, we've made our applications nice and responsive; the main thread is free to respond very quickly to any user interaction. Unfortunately, we have also created a potential problem for ourselves, because the main thread is able to finish the Activity before our background tasks complete. An Activity might finish for many reasons, including a configuration change caused by the user rotating the device (the default behavior of an Activity on a change in orientation is to restart with an entirely new instance of the activity).

If we continue processing a background task after the Activity has finished, we are probably doing unnecessary work, and therefore wasting CPU and other resources (including battery life), which could be put to better use. Also, any object references held by the AsyncTask will not be eligible for garbage collection until the task explicitly nulls those references, or completes and is itself eligible for GC (garbage collection). Since our AsyncTask probably references the Activity or parts of the View hierarchy, we can easily leak a significant amount of memory in this way. A common usage of AsyncTask is to declare it as an anonymous inner class of the host Activity, which creates an implicit reference to the Activity and an even bigger memory leak.

There are two approaches for preventing these resource wastage problems.

Handling lifecycle issues with early cancellation

First, and most obviously, we can synchronize our AsyncTask lifecycle with that of the Activity by canceling running tasks when our Activity is finishing. When an Activity finishes, its lifecycle callback methods are invoked on the main thread. We can check to see why the lifecycle method is being called, and if the Activity is finishing, cancel the background tasks. The most appropriate Activity lifecycle method for this is onPause, which is guaranteed to be called before the Activity finishes.

protected void onPause() {
    super.onPause();
    if ((task != null) && (isFinishing()))
        task.cancel(false);
}

If the Activity is not finishing (say, because it has started another Activity and is still on the back stack), we might simply allow our background task to continue to completion.
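A caveat on cancellation: calling task.cancel(false) merely sets a flag, and the background work only ends early if doInBackground cooperates by polling isCancelled. The doInBackground below is purely illustrative (the article's own implementation lives in the downloadable samples), but it shows the shape of a cooperative loop:

protected BigInteger doInBackground(Integer... params) {
    // Hypothetical prime search, included only to illustrate polling
    int iterations = params[0];
    BigInteger prime = BigInteger.ONE;
    for (int i = 0; i < iterations; i++) {
        if (isCancelled()) {
            // Stop early; this value is delivered to onCancelled(Result)
            return prime;
        }
        prime = prime.nextProbablePrime();
        publishProgress((i * 100) / iterations);
    }
    return prime;
}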
Handling lifecycle issues with retained headless fragments

If the Activity is finishing because of a configuration change, it may still be useful to complete the background task and display the results in the restarted Activity. One pattern for achieving this is through the use of retained Fragments.

Fragments were introduced to Android at API level 11, but are available through a support library to applications targeting earlier API levels. All of the downloadable examples use the support library and target API levels 7 through 19. To use Fragments, our Activity must extend the FragmentActivity class.

The Fragment lifecycle is closely bound to that of the host Activity, and a fragment will normally be disposed of when the activity restarts. However, we can explicitly prevent this by invoking setRetainInstance(true) on our Fragment so that it survives across Activity restarts.

Typically, a Fragment is responsible for creating and managing at least a portion of the user interface of an Activity, but this is not mandatory. A Fragment that does not manage a view of its own is known as a headless Fragment. Isolating our AsyncTask in a retained headless Fragment makes it less likely that we will accidentally leak references to objects such as the View hierarchy, because the AsyncTask will no longer directly interact with the user interface.

To demonstrate this, we'll start by defining an interface that our Activity will implement:

public interface AsyncListener<Progress, Result> {
    void onPreExecute();
    void onProgressUpdate(Progress... progress);
    void onPostExecute(Result result);
    void onCancelled(Result result);
}

Next, we'll create a retained headless Fragment, which wraps our AsyncTask. For brevity, doInBackground is omitted, as it is unchanged from the previous examples; see the downloadable samples for the complete code.

public class PrimesFragment extends Fragment {

    private AsyncListener<Integer, BigInteger> listener;
    private PrimesTask task;

    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setRetainInstance(true);
        task = new PrimesTask();
        task.execute(2000);
    }

    public void onAttach(Activity activity) {
        super.onAttach(activity);
        listener = (AsyncListener<Integer, BigInteger>) activity;
    }

    public void onDetach() {
        super.onDetach();
        listener = null;
    }

    class PrimesTask extends AsyncTask<Integer, Integer, BigInteger> {
        protected void onPreExecute() {
            if (listener != null) listener.onPreExecute();
        }
        protected void onProgressUpdate(Integer... values) {
            if (listener != null) listener.onProgressUpdate(values);
        }
        protected void onPostExecute(BigInteger result) {
            if (listener != null) listener.onPostExecute(result);
        }
        protected void onCancelled(BigInteger result) {
            if (listener != null) listener.onCancelled(result);
        }
        // … doInBackground elided for brevity …
    }
}

We're using the Fragment lifecycle methods (onAttach and onDetach) to add or remove the current Activity as a listener, and PrimesTask delegates directly to it from all of its main-thread callbacks. Now, all we need is a host Activity that implements AsyncListener and uses PrimesFragment to implement its long-running task. The full source code is available to download from the Packt Publishing website, so we'll just take a look at the highlights.

First, the code in the button's OnClickListener now checks to see if the Fragment already exists, and only creates one if it is missing:

FragmentManager fm = getSupportFragmentManager();
PrimesFragment primes = (PrimesFragment) fm.findFragmentByTag("primes");
if (primes == null) {
    primes = new PrimesFragment();
    FragmentTransaction transaction = fm.beginTransaction();
    transaction.add(primes, "primes").commit();
}

If our Activity has been restarted, it will need to re-display the progress dialog when a progress update callback is received, so we check and show it, if necessary, before updating the progress bar:

public void onProgressUpdate(Integer... progress) {
    if (dialog == null)
        prepareProgressDialog();
    dialog.setProgress(progress[0]);
}

Finally, the Activity will need to implement the onPostExecute and onCancelled callbacks defined by AsyncListener. Both methods will update resultView as in the previous examples, then do a little cleanup, dismissing the dialog and removing the Fragment as its work is now done:

public void onPostExecute(BigInteger result) {
    resultView.setText(result.toString());
    cleanUp();
}

public void onCancelled(BigInteger result) {
    resultView.setText("cancelled at " + result);
    cleanUp();
}

private void cleanUp() {
    dialog.dismiss();
    dialog = null;
    FragmentManager fm = getSupportFragmentManager();
    Fragment primes = fm.findFragmentByTag("primes");
    fm.beginTransaction().remove(primes).commit();
}

Summary

In this article, we've taken a look at AsyncTask and how to use it to write responsive applications that perform operations without blocking the main thread.

Resources for Article:

Further resources on this subject:
Android Native Application API [Article]
So, what is Spring for Android? [Article]
Creating Dynamic UI with Android Fragments [Article]
Styling the User Interface

Packt
22 Nov 2013
8 min read
(For more resources related to this topic, see here.)

Styling components versus themes

Before we get into this article, it's important to have a good understanding of the difference between styling an individual component and creating a theme. Almost every display component in Sencha Touch has the option to set its own style. For example, a panel component can use a style in this way:

{
    xtype: 'panel',
    style: 'border: none; font: 12px Arial black',
    html: 'Hello World'
}

The style can also be set as an object:

{
    xtype: 'panel',
    style: {
        'border': 'none',
        'font': '12px Arial black',
        'border-left': '1px solid black'
    },
    html: 'Hello World'
}

You will notice that inside the style block, we have quoted both sides of each configuration setting. This is still correct syntax for JavaScript and a very good habit to get into when using style blocks. This is because a number of standard CSS styles use a dash as part of their name. If we do not add quotes to border-left, JavaScript will read this as border minus left and promptly collapse in a pile of errors.

We can also set a style class for a component and use an external CSS file to define the class as follows:

{
    xtype: 'panel',
    cls: 'myStyle',
    html: 'Hello World'
}

Your external CSS file could then control the style of the component in the following manner:

.myStyle {
    border: none;
    font: 12px Arial black;
}

This class-based control of display is considered a best practice, as it separates the style logic from the display logic. This means that when you need to change a border color, it can be done in one file instead of hunting through multiple files for individual style settings.

These styling options are very useful for controlling the display of individual components. There are also certain style elements, such as border, padding, and margin, that can be set directly in a component's configuration:

{
    xtype: 'panel',
    bodyMargin: '10 5 5 5',
    bodyBorder: '1px solid black',
    bodyPadding: 5,
    html: 'Hello World'
}

These configurations can accept either a number, to be applied to all sides, or a CSS string value, such as '1px solid black' or '10 5 5 5'. The number should be entered without quotes, but the CSS string values need to be within quotes.

These kinds of small changes can be helpful in styling your application, but what if you need to do something a bit bigger? What if you want to change the color or appearance of the entire application? What if you want to create your own default style for your buttons? This is where themes and UI styles come into play.

UI styling for toolbars and buttons

Let's do a quick review of the basic MVC application we created in Creating a Simple Application, and use it to start our exploration of styles with toolbars and buttons. To begin, we are going to add a few things to the first panel, which has our titlebar, toolbar, and Hello World text.

Adding the toolbar

In app/views, you'll find Main.js. Go ahead and open that in your editor and take a look at the first panel in our items list:

items: [
    {
        title: 'Hello',
        iconCls: 'home',
        xtype: 'panel',
        html: 'Hello World',
        items: [
            {
                xtype: 'titlebar',
                docked: 'top',
                title: 'About TouchStart'
            }
        ]
    }...

We're going to add a second toolbar on top of the existing one. Locate the items section, and after the curly braces for our first toolbar, add the second toolbar in the following manner:

{
    xtype: 'titlebar',
    docked: 'top',
    title: 'About TouchStart'
},
{
    docked: 'top',
    xtype: 'toolbar',
    items: [
        { text: 'My Button' }
    ]
}

Don't forget to add a comma between the two toolbars, as the combined sketch below shows.
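To make the comma's position unambiguous, here is a minimal sketch of the nested items array with both docked bars in place; it simply merges the two snippets above, with nothing new added:

items: [
    {
        xtype: 'titlebar',
        docked: 'top',
        title: 'About TouchStart'
    },  // this comma separates the two docked bars
    {
        docked: 'top',
        xtype: 'toolbar',
        items: [
            { text: 'My Button' }
        ]
    }
]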
Extra or missing commas

While working in Sencha Touch, one of the most common causes of parse errors is an extra or missing comma. When you are moving code around, always make sure you have accounted for any stray or missing commas. Fortunately for us, the Safari Error Console will usually give us a pretty good idea of the line number to look at for these types of parse errors. A more detailed list of common errors can be found at http://javascript.about.com/od/reference/a/error.htm.

Now when you take a look at the first tab, you should see our new toolbar with our button to the left. Since the toolbars both have the same background, they are a bit difficult to differentiate. So, we are going to change the appearance of the bottom bar using the ui configuration option:

{
    docked: 'top',
    xtype: 'toolbar',
    ui: 'light',
    items: [
        { text: 'My Button' }
    ]
}

The ui configuration is shorthand for a particular set of styles in Sencha Touch. There are several ui styles included with Sencha Touch, and later on, we will show you how to make your own.

Styling buttons

Buttons can also use the ui configuration setting, for which they offer several different options:

normal: This is the default button
back: This is a button with the left side narrowed to a point
round: This is a more drastically rounded button
small: This is a smaller button
action: This is a brighter version of the default button (the color varies according to the active color of the theme, which we will see later)
forward: This is a button with the right side narrowed to a point

Buttons also have some color options built into the ui option. These color options are confirm and decline. These options are combined with the previous shape options using a hyphen; for example, confirm-small or decline-round (a short sketch follows at the end of this section).

Let's add some new buttons and see how this looks on our screen. Locate the items list with our button in the second toolbar:

items: [
    { text: 'My Button' }
]

Replace that old items list with the following new items list:

items: [
    {
        text: 'Back',
        ui: 'back'
    },
    {
        text: 'Round',
        ui: 'round'
    },
    {
        text: 'Small',
        ui: 'small'
    },
    {
        text: 'Normal',
        ui: 'normal'
    },
    {
        text: 'Action',
        ui: 'action'
    },
    {
        text: 'Forward',
        ui: 'forward'
    }
]

This will produce a series of buttons across the top of our toolbar. As you may notice, all of our buttons are aligned to the left. You can move buttons to the right by adding a spacer xtype in front of the buttons you want pushed to the right. Try this by adding the following between our Forward and Action buttons:

{ xtype: 'spacer' },

This will make the Forward button move over to the right-hand side of the toolbar.

Since buttons can actually be used anywhere, we can add some to our title bar and use the align property to control where they appear. Modify the titlebar for our first panel and add an items section, as shown in the following code:

{
    xtype: 'titlebar',
    docked: 'top',
    title: 'About TouchStart',
    items: [
        {
            xtype: 'button',
            text: 'Left',
            align: 'left'
        },
        {
            xtype: 'button',
            text: 'Right',
            align: 'right'
        }
    ]
}

Now we should have two buttons in our title bar, one on either side of the title.
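As promised, here is a minimal sketch of the hyphenated color-and-shape values in use. The ui values are the two examples mentioned earlier; the button labels are our own placeholders, not part of the example app:

items: [
    {
        text: 'Save',
        ui: 'confirm-small'    // the confirm color in the small shape
    },
    {
        text: 'Delete',
        ui: 'decline-round'    // the decline color in the round shape
    }
]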
Let's also add some buttons to the panel container so we can see what the ui options confirm and decline look like. Locate the end of the items section of our HelloPanel container and add the following after the second toolbar:

{
    xtype: 'button',
    text: 'Confirm',
    ui: 'confirm',
    width: 100
},
{
    xtype: 'button',
    text: 'Decline',
    ui: 'decline',
    width: 100
}

There are two things you may notice that differentiate our panel buttons from our toolbar buttons. The first is that we declare xtype: 'button' in our panel, but we don't in our toolbar. This is because the toolbar assumes it will contain buttons, so xtype only has to be declared if you use something other than a button. The panel does not set a default xtype attribute, so every item in the panel must declare one.

The second difference is that we declare width for the buttons. If we don't declare width when we use a button in a panel, it will expand to the full width of the panel. On the toolbar, the button auto-sizes itself to fit the text.

You will also see that our two buttons in the panel are mashed together. You can separate them by adding margin: 5 to each of the button configuration sections.

These simple styling options can help make your application easier to navigate and provide the user with visual clues for important or potentially destructive actions.

The tab bar

The tab bar at the bottom also understands the ui configuration option. In this case, the available options are light and dark. The tab bar also changes the icon appearance based on the ui option; a light toolbar will have dark icons and a dark toolbar will have light icons (a brief sketch of this configuration follows at the end of the article).

These icons are actually part of a special font called Pictos. Sencha Touch started using the Pictos font in Version 2.2, instead of image icons, because of compatibility issues on some mobile devices. The icon mask from previous versions of Sencha Touch is still available, but has been deprecated as of Version 2.2.

You can see some of the icons available in the documentation for the Ext.Button component at http://docs.sencha.com/touch/2.2.0/#!/api/Ext.Button.

If you're curious about the Pictos font, you can learn more about it at http://pictos.cc/.
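As mentioned in the tab bar section, a minimal sketch of setting the tab bar's ui on a tab panel might look like the following. The tabBar config object is the usual Sencha Touch 2 way to pass settings through to the bar; the class name here is only an assumption based on our TouchStart example, so adapt it to your own view:

Ext.define('TouchStart.view.Main', {
    extend: 'Ext.tab.Panel',
    config: {
        tabBarPosition: 'bottom',
        tabBar: {
            ui: 'dark'   // a dark bar, so the icons render light
        },
        items: [
            // ... the Hello panel and the rest of the tabs go here ...
        ]
    }
});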