
Tech Guides - Mobile

49 Articles

AI on mobile: How AI is taking over the mobile devices marketspace

Sugandha Lahoti
19 Apr 2018
4 min read
If you look at current trends in the mobile market, many phone manufacturers now position artificial intelligence as the headline feature of their devices. According to an Evans Data survey, the total number of developers building for mobile is expected to hit the 14 million mark by 2020. With that level of competition, developers are turning to artificial intelligence to make their apps, and manufacturers their devices, stand out. AI on mobile is the next big thing, and it arrives in multiple forms: in hardware, such as the AI chip in Apple's iPhone X, and in software, such as Google's TensorFlow for Mobile. Let's look at how smartphone manufacturers and mobile developers are leveraging the power of AI on both fronts.

Embedded chips and in-device AI
Mobile handsets are now equipped with specialized AI chips, embedded alongside the CPU to handle the heavy lifting and bring AI on mobile. These built-in AI engines not only respond to your commands but also make decisions about what they believe is best for you. When you take a picture, the smartphone software, leveraging the AI hardware, correctly identifies the person, object, or location being photographed, and compensates for low-resolution captures by predicting the missing pixels. For battery life, AI allocates power to the functions that need it, eliminating unnecessary drain. In-device AI also reduces dependency on cloud-based processing, saving energy, time, and associated costs.

The past few months have seen AI-focused silicon popping up everywhere. The trend began with Apple's neural engine, part of the A11 processor that powers the iPhone X; it runs the machine learning algorithms that recognize faces and transfer facial expressions onto animated emoji. Competing head-on with Apple, Samsung revealed the Exynos 9 Series 9810, featuring an upgraded processor with neural network capacity for AI-powered apps. Huawei joined the party with the Kirin 970 processor, whose dedicated Neural Processing Unit (NPU) processed 2,000 images per minute in a benchmark image recognition test. Google announced the open beta of its second-generation Tensor Processing Unit, and ARM announced its own AI hardware, Project Trillium, a mobile machine learning processor. Amazon is also working on a dedicated AI chip for its Echo smart speaker. Google's Pixel 2, meanwhile, features a Visual Core co-processor for AI: it offers an AI song recognizer, superior imaging capabilities, and even helps the Google Assistant understand user commands and questions better.

The arrival of AI APIs for mobile
Beyond in-device hardware, smartphones have also witnessed the arrival of AI APIs. These add to a smartphone's capabilities by offering personalization, efficient search, accurate image and video recognition, and advanced data mining. Let's look at a few powerful machine learning APIs and libraries aimed squarely at mobile devices.

It all began with Facebook announcing Caffe2Go in 2016, a version of Caffe designed to run deep learning models on mobile devices. It condensed the size of image- and video-processing AI models by 100x to run neural networks with high efficiency on both iOS and Android, and became the core of Style Transfer, Facebook's real-time photo stylization tool.

Then came Google's TensorFlow Lite, announced at the 2017 Google I/O conference: a lightweight version of TensorFlow for mobile and embedded devices, designed to be small, fast, and cross-platform, with a runtime tailor-made to run on various platforms, starting with Android and iOS. TensorFlow Lite also supports the Android Neural Networks API, which runs computationally intensive machine learning operations on mobile devices.

Following TensorFlow Lite came Apple's Core ML, a framework designed to make it easier to run machine learning models on iOS. Core ML supports Vision for image analysis, Foundation for natural language processing, and GameplayKit for evaluating learned decision trees. It makes it easier for apps to process data locally using machine learning without sending user information to the cloud, and it optimizes models for Apple mobile devices, reducing RAM and power consumption.

Artificial intelligence is finding its way into every aspect of the mobile device, whether through hardware with dedicated AI chips or through APIs for running AI-enabled services on handhelds. And this is just the beginning: in the near future, AI on mobile will play a decisive role in driving smartphone innovation, possibly becoming the deciding factor for consumers buying a new device.


Unity plugins for augmented reality application development

Sugandha Lahoti
10 Apr 2018
4 min read
Augmented reality is the powerhouse behind the next set of magic tricks headed to our mobile devices: it combines real-world objects with digital information. Heard of Pokémon Go? Apple first showcased its augmented reality framework, ARKit, at WWDC 2017, and Niantic has since used it to power the game's AR mode on iOS. Following the widespread success of Pokémon Go, a large number of companies are eager to invest in AR technology.

Unity is one of the dominant players in the industry when it comes to creating desktop, console, and mobile games. Augmented reality has excited game developers for quite some time now, and following this excitement Unity has released prominent tools for developers to experiment with AR apps. Bear in mind that Unity is not designed exclusively for augmented reality, so developers access additional functionality by importing extensions; these extensions also provide pre-designed game components such as characters and props. Let us briefly look at three prominent Unity tools and extensions for AR development.

Unity ARKit plugin
The Unity ARKit plugin exposes the functionality of the ARKit SDK within Unity projects. It gives Unity developers access to features such as motion tracking, vertical and horizontal plane finding, live video rendering, hit testing, raw point cloud data, ambient light estimation, and more for their AR projects, and it makes it easy to integrate AR features into existing Unity projects. A companion tool, the Unity ARKit Remote, speeds up iteration by allowing developers to make real-time changes to the scene and debug scripts in the Unity Editor. The latest ARKit update, version 1.5, gives developers more tools to power more immersive AR experiences.

Google ARCore
Google ARCore for Unity brings mobile AR experiences to Android without the need for additional hardware. The latest major version, ARCore 1.0, enables AR applications to track a phone's motion in the real world, detect planes in the environment, and understand lighting in the camera scene. ARCore 1.0 also introduces oriented feature points, which help in the placement of anchors on textured surfaces and enhance the app's understanding of the environment. So ARCore is not limited to horizontal and vertical planes like ARKit; it can place AR content on any suitably textured surface. ARCore 1.0 is supported by the Android Emulator in Android Studio 3.1 Beta and is available on a range of supported Android devices.

Vuforia integration with Unity
Vuforia allows developers to build cross-platform AR apps directly from the Unity editor, providing augmented reality support for Android, iOS, and UWP devices through a single API. It attaches digital content to different types of objects and environments using Model Targets and Ground Plane, across a broad range of devices and operating systems. Ground Plane attaches digital content to horizontal surfaces, while Model Targets provide object recognition capabilities. Other target types include Image Targets (to put AR content on flat objects) and Cloud Targets (to manage large collections of Image Targets from your own CMS). Vuforia also includes Device Tracking, an inside-out device tracker for rotational head and hand tracking, and provides APIs to create immersive experiences that transition between AR and VR.

You can browse the many AR projects from the Unity community to help you get started with your next big AR idea, and to choose the toolkit best suited to you.
Read more:
  • Leap Motion open sources its $100 augmented reality headset, North Star
  • Unity and Unreal comparison
  • Types of Augmented Reality targets
  • Create Your First Augmented Reality Experience: The Tools and Terms You Need to Understand


Types of Augmented Reality targets

Aarthi Kumaraswamy
08 Apr 2018
6 min read
The essence of augmented reality is that your device recognizes objects in the real world and renders computer graphics registered to the same 3D space, providing the illusion that the virtual objects are in the same physical space with you. Since augmented reality was first invented decades ago, the types of targets the software can recognize have progressed from very simple markers, through image and natural feature tracking, to full spatial map meshes. There are many AR development toolkits available; some are more capable than others of supporting a range of targets. The following is a survey of the various augmented reality target types. We will go into more detail in later chapters, as we use different targets in different projects.

Marker
The most basic target is a simple marker with a wide border. The advantage of marker targets is that they're readily recognized by the software with very little processing overhead, and they minimize the risk of the app not working due to, for example, inconsistent ambient lighting or other environmental conditions. The Hiro marker used in ARToolkit's example projects is a classic instance.

Coded markers
Taking simple markers to the next level, areas within the border can be reserved for 2D barcode patterns. This way, a single family of markers can be reused to pop up many different virtual objects by changing the encoded pattern. For example, a children's book may have an AR pop-up on each page, using the same marker shape while the barcode directs the app to show only the objects relevant to that page. ARToolkit ships with a set of very simple coded markers. Vuforia includes a powerful marker system called VuMark that makes it very easy to create branded markers: while the marker styles vary for specific marketing purposes, they share common characteristics, including a reserved area within an outer border for the 2D code.

Images
The ability to recognize and track arbitrary images is a tremendous boost for AR applications, as it avoids the requirement of creating and distributing custom markers paired with specific apps. Image tracking falls into the category of natural feature tracking (NFT). Several characteristics make a good target image: a well-defined border (preferably eight percent of the image width), irregular asymmetrical patterns, and good contrast. When an image is incorporated into your AR app, it's first analyzed, and a feature map (a 2D node mesh) is stored and used to match real-world image captures, say, in frames of video from your phone.

Multi-targets
It is worth noting that apps may be set up to see not just one marker in view but multiple markers. With multi-targets, you can have virtual objects pop up for each marker in the scene simultaneously. Similarly, markers can be printed and folded or pasted onto geometric objects, such as product labels or toys; a cereal box is a typical example.

Text recognition
If a marker can include a 2D barcode, then why not just read text? Some AR SDKs allow you to configure (train) your app to read text in specified fonts. Vuforia goes further, with a word list library and the ability to add your own words.

Simple shapes
Your AR app can be configured to recognize basic shapes such as a cuboid or a cylinder with specific relative dimensions. It's not just the shape but its measurements that may distinguish one target from another: a Rubik's Cube versus a shoe box, for example. A cuboid has width, height, and length; a cylinder has a length and top and bottom diameters (which may differ, as in a cone). In Vuforia's implementation of basic shapes, the texture patterns on the shaped object are not considered; anything with a similar shape will match. But when you point your app at a real-world object with that shape, it should have enough textured surface for good edge detection; a solid white cube would not be easily recognized.

Object recognition
The ability to recognize and track complex 3D objects is similar to, but goes beyond, 2D image recognition. While planar images are appropriate for flat surfaces, books, or simple product packaging, you may need object recognition for toys or consumer products without their packaging. Vuforia, for example, offers the Vuforia Object Scanner to create object data files, such as a scan of a toy car, that can be used as targets in your app.

Spatial maps
Earlier, we introduced spatial maps and dynamic spatial location via SLAM. SDKs that support spatial maps may implement their own solutions and/or expose access to a device's own support. The HoloLens SDK Unity package, for example, supports its native spatial maps. Vuforia's spatial maps (called Smart Terrain) do not use depth sensing like HoloLens; rather, they use the visible-light camera to construct the environment mesh using photogrammetry. Apple ARKit and Google ARCore also map your environment by fusing camera video with other sensor data.

Geolocation
A bit of an outlier, but worth mentioning: AR apps can also use just the device's GPS sensor to identify its location in the environment and use that information to annotate what is in view. I use the word annotate because GPS tracking is not as accurate as any of the techniques we have mentioned, so it wouldn't work for close-up views of objects. But it can work just fine, say, standing atop a mountain and holding your phone up to see the names of other peaks within view, or walking down a street to look up Yelp reviews of restaurants within range. You can even use it for locating and capturing Pokémon.

[Note: You read an excerpt from the book Augmented Reality for Developers, by Jonathan Linowes and Krystian Babilinski. To learn how to use these targets and to build a variety of AR apps, check out the book.]
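As a taste of how little the geolocation approach needs, annotating a distant peak comes down to great-circle math on two GPS fixes. Below is a minimal, framework-free sketch in Java using the standard haversine formula; the class name and coordinates are illustrative, not taken from any AR SDK:

```java
// Geolocation-based AR annotates what's in view using only GPS math.
// Minimal sketch: great-circle (haversine) distance between the device
// and a labeled point of interest. Names and coordinates are illustrative.
public class GeoAnnotator {
    static final double EARTH_RADIUS_M = 6_371_000.0;

    // Great-circle distance in meters between two lat/lon pairs (degrees).
    static double distanceMeters(double lat1, double lon1, double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(a));
    }

    public static void main(String[] args) {
        // Device on a summit; a neighboring peak roughly 1 degree of latitude away.
        double d = distanceMeters(46.0, 7.0, 47.0, 7.0);
        System.out.printf("Peak is %.0f km away%n", d / 1000); // about 111 km
    }
}
```

A real app would combine this distance with the compass bearing to decide which labels fall inside the camera's current field of view.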


Why are Android developers switching from Java to Kotlin?

Hari Vignesh
23 Jan 2018
4 min read
When we talk about Android app development, the first programming language that comes to mind is Java. However, Java isn't the only language you can use for Android programming: you can use any language that compiles to the JVM. Recently, a new language has caught the attention of the Android community: Kotlin. Kotlin has actually been around since 2011, but it was only in May 2017 that Google announced the language would become officially supported on Android. This is one of the many reasons why Kotlin's adoption has been so dramatic; the Realm report, published at the end of 2017, suggests that Kotlin is likely to overtake Java in Android usage within the next couple of years. The choice of language matters, too, because the right one can save significant development time and money. There are many reasons why mobile developers are choosing to switch from Java to Kotlin. Below are some of the most significant.

Kotlin is easy for anyone who knows Java to learn
Similarities in typing and syntax make Kotlin very easy to master for anyone who's already working with Java. If you're worried about a steep learning curve, you'll be pleasantly surprised by how easy it is for developers to dive into coding in Kotlin. Kotlin is also evolving with a lot of support from the developer community: many contributors are freelancers who work across a wide range of smaller projects with varied needs, while others are larger companies and industry giants like Google.

Kotlin needs around 20 percent less code than Java
Java carries a lot of legacy: every new release has to support features included in previous versions, which keeps verbose patterns alive. If you compare a class written in Java with the same class written in Kotlin, the Kotlin version will be much more compact.

Kotlin has Android Studio support
Because Kotlin is built by JetBrains, it's unsurprising that Android Studio (also a JetBrains product) has excellent support for Kotlin. Android Studio makes it incredibly easy to configure Kotlin in your project; it's as straightforward as opening a few menus. Once you have set up Kotlin for Android Studio, your IDE will have no problem understanding, compiling, and running Kotlin code, and you can even convert an entire Java source file into a Kotlin file. The fact that Kotlin is Java compatible makes it uniquely useful: it can leverage the JVM while being used to update and improve enterprise-level solutions with enormous codebases written in Java.

Kotlin is great for procedural programming
Every programming paradigm has its own set of strengths and weaknesses, and there will always be scenarios where one is more effective than another. One thing that's so appealing about Kotlin is that it combines the strengths of two different approaches: procedural and functional. True, this can be the most challenging aspect of the language when you first start to get to grips with it, but the level of control such an approach gives you is well worth the investment of your time.

Kotlin makes development more efficient and your life easier
This follows on nicely from the point above. While certain aspects of Kotlin require patience and concentration to master, in the long run less code means far fewer errors and bugs. That saves you time and makes coding much more enjoyable, rather than an administrative nightmare of spaghetti code. There are plenty of features in Kotlin that make it a practical solution to today's programming challenges. Where JetBrains takes the language next remains to be seen: we could perhaps see Kotlin make a move towards iOS development, and since it can also compile to JavaScript, we may begin to see it used more and more within web development. Of course, this will largely come down to JetBrains' goals and just how much they want Kotlin to dominate the developer landscape.

Hari Vignesh Jayapalan is a Google Certified Android app developer, IDF Certified UI & UX Professional, street magician, fitness freak, technology enthusiast, and wannabe entrepreneur. He can be found on Twitter @HariofSpades.


Android O: What's new and why it's been introduced

Raka Mahesa
07 May 2017
5 min read
Eclair, Froyo, Gingerbread, Honeycomb, Ice Cream Sandwich, Jelly Bean, KitKat, Lollipop, Marshmallow, and Nougat. If you thought that was just a list of various sweet treats, well, you're not wrong, but it's also a list of Android version names. And if you guessed that the next version of Android starts with O, you're exactly right, because Google has announced Android O, the latest version of Android. So, what's new in the O version of Android? Let's find out.

Notifications have always been one of Android's biggest strengths. Notifications on Android are informative, versatile, and customizable, so they fit their users' needs. Google clearly understands this and has kept improving the notification system: overhauling how notifications look, making them more interactive, and giving users a way to manage the importance of each notification. So, of course, this version of Android adds even more features to the notification system.

The biggest feature added to the notification system in Android O is the Notification Channel. Basically, Notification Channel is an API that allows developers to define categories for notifications from their apps. App users are then able to control the settings for each category of notifications, fine-tuning applications so they only show the notifications the user thinks are important. For example, let's say you have a chat application with two notification channels: the first for notifying users when a new chat message arrives, and the second for when the user is added to someone else's friend list. Some users may only care about new chat messages, so they can turn off that second type of notification instead of turning off all notifications from the app.

Other features added to the Android O notification system are Notification Snoozing and Notification Timeout. Just like an alarm, Notification Snoozing allows the user to snooze a notification and let it reappear later when the user has time. Meanwhile, Notification Timeout allows developers to set a timeout duration for a notification. Imagine that you want to notify a user about a flash sale that only runs for two hours: by adding a timeout, the notification can remove itself when the event is over. Okay, enough about notifications. What else is new in Android O?

Autofill Framework
One of the newest things introduced with Android O is the Autofill Framework. You know how browsers can remember your full name, email address, home address, and other details, and automatically fill in a registration form with that data? Well, the same capability is coming to Android apps via the Autofill Framework. An app can also register itself as an Autofill Service. For example, if you made a social media app, you can let other apps use the user's account data from your app to help users fill in their forms.

Account data
Speaking of account data: with Android O, Google has removed the ability for developers to get a user's account data using the GET_ACCOUNTS permission, forcing developers to use the account chooser dialog instead. So with Android O, developers can no longer automatically fill in a text field with the user's email address and name, and have to let users pick accounts on their own.

And it's not just form filling that gets reworked. In an effort to improve battery life and phone performance, Android O adds a number of limitations on background processes. For example, on Android O, apps running in the background (that is, apps that don't have any of their interface visible to users) will not be able to get location updates as frequently as before. Apps in the background also can no longer freely create and use background processes. Do keep in mind that some of these limitations impact any application running on Android O, not just apps built with the O version of the SDK. So if you have an app that relies on background processes, you may want to check that it works fine on Android O.

App icons
Let's talk about something more visual: app icons. You know how manufacturers add custom skins to their phones to differentiate their products from competitors? Well, some time ago they also changed the shape of all app icons to fit the overall UI of their phones, and this broke some carefully designed icons. Fortunately, with the Adaptive Icon feature introduced in Android O, developers will be able to design an icon that can adjust to a variety of shapes.

We've covered a lot, but there are still tons of other features added to Android O that we haven't discussed, including multi-display support, a new native audio API, keyboard navigation, new APIs to manage WebView, new Java 8 APIs, and more. Do check out the official documentation for those. That being said, we're still missing the most important thing: what is going to be the full name for Android O? I can only think of Oreo at the moment. What about you?

About the author
Raka Mahesa is a game developer at Chocoarts (chocoarts.com), who is interested in digital technology in general. Outside of work hours, he likes to work on his own projects, with Corridoom VR being his latest released game. Raka also regularly tweets as @legacy99.
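The Notification Channel idea described in this article, per-category toggles rather than all-or-nothing notifications, can be modeled without the Android framework at all. The sketch below is plain Java capturing just that behavior; on a device you would use the real android.app.NotificationChannel and NotificationManager APIs, and the channel ids here (CHAT, FRIEND_REQUESTS) are purely illustrative:

```java
import java.util.HashMap;
import java.util.Map;

// Framework-free model of the Notification Channel concept: the user can
// silence one category of notifications while keeping the others.
public class ChannelDemo {
    static class NotificationChannels {
        private final Map<String, Boolean> enabled = new HashMap<>();

        // Channels start enabled when first created, matching user expectations.
        void createChannel(String id) { enabled.putIfAbsent(id, true); }
        void setEnabled(String id, boolean on) { enabled.put(id, on); }

        // A notification is shown only if its channel exists and is enabled.
        boolean shouldShow(String channelId) {
            return enabled.getOrDefault(channelId, false);
        }
    }

    public static void main(String[] args) {
        NotificationChannels channels = new NotificationChannels();
        channels.createChannel("CHAT");            // new chat messages
        channels.createChannel("FRIEND_REQUESTS"); // added to a friend list

        // The user cares about chats but mutes friend-request noise.
        channels.setEnabled("FRIEND_REQUESTS", false);

        System.out.println(channels.shouldShow("CHAT"));            // true
        System.out.println(channels.shouldShow("FRIEND_REQUESTS")); // false
    }
}
```

The real API works the same way in spirit: the app declares channels once, and the user, not the developer, decides which ones stay audible.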


MVP for Android

Hari Vignesh Jayapalan
04 Apr 2017
6 min read
The Android framework does not encourage any specific way to design an application. In a way, that makes the framework more powerful and vulnerable at the same time. You may be asking yourself things like, "Why should I know about this? I'm provided with Activity and I can write my entire implementation using a few Activities and Fragments, right?” Based on my experience, I have realized that solving a problem or implementing a feature at that point of time is not enough. Over time, our apps will go through a lot of change cycles and feature management. Maintaining these over a period of time will create havoc in our application if not designed properly with separation of concerns. That’s why developers have come up with architectural design patterns for better code crafting. How has it evolved? Most developers started creating an Android app with Activity at the center and capable of deciding what to do and how to fetch data. Activity code over a period of time started to grow and became a collection of non-reusable components.Then developers started packaging those components and the Activity could use them through the exposed APIs of these components. Then they started to take pride and began breaking codes into bits and pieces as much as possible. After that, they found themselves in an ocean of components with hard-to-trace dependencies and usage. Also, later we were introduced to the concept of testability and found that regression is much safer if it’s written with tests. Developers realized that the jumbled code that they developed in the above process is very tightly coupled with the Android APIs, preventing JVM tests and also hindering an easy design of test cases. This is the classic MVC with Activity or Fragment acting as a Controller. SOLID principles SOLID principles are object-oriented design principles, thanks to dear Robert C. Martin. 
According to the SOLID article on Wikipedia, it stands for: S (SRP): Single responsibility principle This principle means that a class must have only one responsibility and do only the task for which it has been designed. Otherwise, if our class assumes more than one responsibility we will have a high coupling causing our code to be fragile with any changes. O (OCP): Open/closed principle According to this principle, a software entity must be easily extensible with new features without having to modify its existing code in use. Open for extension: new behavior can be added to satisfy the new requirements. Close for modification: extending the new behavior is not required to modify the existing code. If we apply this principle, we will get extensible systems that will be less prone to errors whenever the requirements are changed. We can use abstraction and polymorphism to help us apply this principle. L (LSP): Liskov substitution principle This principle was defined by Barbara Liskov and says that objects must be replaceable by instances of their subtypes without altering the correct functioning of our system. Applying this principle, we can validate that our abstractions are correct. I (ISP): Interface segregation principle This principle defines that a class should never implement an interface that does not go to use. Failure to comply with this principle means that in our implementations we will have dependencies on methods that we do not need but that we are obliged to define. Therefore, implementing a specific interface is better than implementing a general-purpose interface. An interface is defined by the client that will use it; so it should not have methods that the client will not implement. D (DIP): Dependency inversion principle The dependency inversion principle means that a particular class should not depend directly on another class, but on an abstraction (interface) of this class. 
When we apply this principle we will reduce dependency on specific implementations and thus make our code more reusable. MVP somehow tries to follow (not 100% completely) all of these five principles. You can try looking up clean architecture for pure SOLID implementation. What is an MVP design pattern? An MVP design pattern is a set of guidelines that if followed, decouples the code for reusability and testability. It divides the application components based on its role, called separation of concerns. MVP divides the application into three basic components: Model: The Model represents a set of classes that describes the business logic and data. It also defines business rules for data, which means how the data can be changed and manipulated. In other words, it is responsible for handling the data part of the application. View: The View represents the UI components. It is only responsible for displaying the data that is received from the presenter as the result. This also transforms the model(s) into UI. In other words, it is responsible for laying out the views with specific data on the screen. Presenter: The Presenter is responsible for handling all UI events on behalf of the view. This receives input from users via the View, then processes the user’s data with the help of Model, and passes the results back to the View. Unlike view and controller, view and presenter are completely decoupled from each other and communicates to each other by an interface. Also, Presenter does not manage the incoming request traffic as Controller. In other words, it is a bridge that connects a Model and a View. It also acts as an instructor to the View. MVP lays down a few ground rules for the abovementioned components, as listed below: A View’s sole responsibility is to draw a UI as instructed by the Presenter. It is a dumb part of the application. The View delegates all the user interactions to its Presenter. The View never communicates with Model directly. 
- The Presenter is responsible for delegating the View's requirements to the Model and instructing the View with actions for specific events.
- The Model is responsible for fetching data from the server, database, and file system.

MVP projects for getting started

Every developer has his or her own way of implementing MVP. I'm listing a few projects below. Migrating to MVP will not be quick and it will take some time, so please take your time and get your hands dirty with MVP:

https://github.com/mmirhoseini/marvel
https://github.com/saulmm/Material-Movies
https://fernandocejas.com/2014/09/03/architecting-android-the-clean-way/

About the author

HariVigneshJayapalan is a Google-certified Android app developer, IDF-certified UI & UX Professional, street magician, fitness freak, technology enthusiast, and wannabe entrepreneur.
HariVigneshJayapalan
06 Mar 2017
6 min read

Benefits of using Kotlin Java for Android

Kotlin is a statically typed programming language for the JVM, Android, and the browser. Kotlin is a new programming language from JetBrains, the maker of the world's best IDEs.

Why Kotlin?

Before we jump into the benefits of Kotlin, we need to understand how Kotlin originated and evolved. We already have many programming languages, but how has Kotlin emerged to capture programmers' hearts? A 2013 study showed that language features matter little compared with ecosystem issues when developers evaluate programming languages.

Kotlin compiles to JVM bytecode or JavaScript. It is not a language you will write a kernel in. It is of the greatest interest to people who work with Java today, although it could appeal to all programmers who use a garbage-collected runtime, including people who currently use Scala, Go, Python, Ruby, and JavaScript.

Kotlin comes from industry, not academia. It solves problems faced by working programmers and developers today. As an example, the type system helps you avoid null pointer exceptions. Research languages tend to just not have null at all, but this is of no use to people working with large codebases and APIs that do.

Kotlin costs nothing to adopt! It's open source, but that's not the point. It means that there's a high-quality, one-click Java-to-Kotlin converter tool (available in Android Studio) and a strong focus on Java binary compatibility. You can convert an existing Java project one file at a time, and everything will still compile, even for complex programs that run up to millions of lines of code.

Kotlin programs can use all existing Java frameworks and libraries, even advanced frameworks that rely on annotation processing. The interop is seamless and does not require wrappers or adapter layers. It integrates with Maven, Gradle, and other build systems.

It is approachable and can be learned in a few hours by simply reading the language reference. The syntax is clean and intuitive. Kotlin looks a lot like Scala, but it's simpler.
The language balances terseness and readability well. It also enforces no particular philosophy of programming, such as an overly functional or OOP style. Combined with the appearance of frameworks like Anko and Kovenant, this resource lightness means Kotlin has become popular among Android developers. You can read a report written by a developer at Square on their experience with Kotlin and Android.

Kotlin features

Let's summarize why it's the right time to jump from native Java to Kotlin:

- Concise: Drastically reduces the amount of boilerplate code you need to write.
- Safe: Avoids entire classes of errors, such as null pointer exceptions.
- Versatile: Build server-side applications, Android apps, or frontend code running in the browser.
- Interoperable: Leverage existing frameworks and libraries of the JVM with 100% Java interoperability.

Brief discussion

Let's discuss a few important features in detail.

Functional programming support

Functional programming is not easy, at least in the beginning, until it becomes fun. There are zero-overhead lambdas and the ability to do mapping, folding, and so on over standard Java collections. The Kotlin type system distinguishes between mutable and immutable views over collections.

Function purity

The concept of a pure function (a function that does not have side effects) is the most important functional concept. It allows us to greatly reduce code complexity and get rid of most mutable state.

Higher-order functions

Higher-order functions take functions as parameters, return functions, or both. Higher-order functions are everywhere. You just pass functions to collections to make the code easy to read. titles.map { it.toUpperCase() } reads like plain English. Isn't it beautiful?

Immutability

Immutability makes it easier to write, use, and reason about the code (a class invariant is established once and then unchanged). The internal state of your app components will be more consistent.
Kotlin enforces immutability by introducing the val keyword, as well as Kotlin collections, which are immutable by default. Once a val or a collection is initialized, you can be sure about its validity.

Null safety

Kotlin's type system is aimed at eliminating the danger of null references from code, also known as The Billion Dollar Mistake. One of the most common pitfalls in many programming languages, including Java, is accessing a member of a null reference, resulting in a null reference exception. In Java, this is a NullPointerException, or NPE for short. In Kotlin, the type system distinguishes between references that can hold null (nullable references) and those that can't (non-null references). For example, a regular variable of type String can't hold null:

var a: String = "abc"
a = null // compilation error

To allow nulls, you can declare a variable as a nullable string, written String?:

var b: String? = "abc"
b = null // ok

Anko DSL for Android

Anko DSL for Android is a great library that significantly simplifies working with views, threads, and the Android lifecycle. The GitHub description states that Anko is for "Pleasant Android application development", and it has truly proven to be so.

Removing the ButterKnife dependency

In Kotlin, you can just reference a view property by its @id XML parameter; these properties have the same names as declared in your XML file. More info can be found in the official docs.

Smart casting

// Java
if (node instanceof Tree) {
    return ((Tree) node).symbol;
}

// Kotlin
if (node is Tree) {
    return node.symbol // smart cast, no explicit cast needed
}

if (document is Payable && document.pay()) { // smart cast
    println("Payable document ${document.title} was paid for.")
}

Kotlin short-circuits the && operator just as Java does, so if the document were not a Payable, the second part would not be evaluated in the first place. Hence, if it is evaluated, Kotlin knows that the document is a Payable and uses a smart cast.

Try it now!
Like many modern languages, Kotlin has a way to try it out via your web browser. Unlike those other languages, Kotlin's tryout site is practically a full-blown IDE that features fast autocompletion, real-time background compilation, and even online static analysis!

About the author

HariVigneshJayapalan is a Google-certified Android app developer, IDF-certified UI & UX Professional, street magician, fitness freak, technology enthusiast, and a wannabe entrepreneur.
Shawn Major
27 Jan 2017
3 min read

Shift to Swift in 2017

It's a great time to be a Swift developer because this modern programming language has a lot of momentum and community support behind it and a big future ahead of it. Swift became a real contender when it went open source in December 2015, giving developers the power to build their own tools and port it into the environments in which they work. The release of Swift 3 in September 2016 really shook things up by enabling broad-scale adoption across multiple platforms, including portability to Linux/x86, Raspberry Pi, and Android. Swift 3 is the "spring cleaning" release that, while not being backwards compatible, has resulted in a massively cleaner language and ensured sound and consistent language fundamentals that will carry across to future releases. If you're a developer using Swift, the best thing you can do is get on board with Swift 3, as the next release promises to deliver stability from 3.0 onwards. Swift 4 is expected to be released in late 2017 with the goals of providing source stability for Swift 3 code and ABI stability for the Swift standard library. Despite the shake-up that occurred with the new release, developers are still enthusiastic about Swift: it was one of the "most loved" programming languages in Stack Overflow's 2015 and 2016 Developer Surveys. Swift was also one of the top three trending techs in 2016, as it has been stealing market share from Objective-C. The keen interest that developers have in Swift is reflected by the 35,000+ stars it has amassed on GitHub and the impressive amount of ongoing collaboration between its core team and the wider community. Rumour has it that Google is considering making Swift a "first class" language and that Facebook and Uber are looking to make Swift more central to their operations.
Lyft's migration of its iOS app to Swift in 2015 shows that the lightness, leanness, and maintainability of the code are worth it, and services like the web server and toolkit Perfect are proof that server-side Swift is ready. People are starting to do some cool and surprising things with Swift, including:

- Shaping the language itself. Apple has made a repository on GitHub called swift-evolution that houses proposals for enhancements and changes to the Swift language.
- Bringing Swift 3 to as many ARM-based systems as possible. For example, you can get Swift 3 for all the Raspberry Pi boards, or you can program a robot in Swift on a BeagleBone.
- IBM has adopted Swift as the core language for its cloud platform. This opens the door to radically simpler app development. Developers will be able to build the next generation of apps in native Swift from end to end, deploy applications with both server and client components, and build microservice APIs on the cloud.
- The Swift Sandbox lets developers of any level of experience actively build server-based code. Since launching, it has had over 2 million code runs from over 100 countries.

We think there are going to be a lot of exciting opportunities for developers to work with Swift in the near future. The iOS Developer Skill Plan on Mapt is perfect for diving into Swift, and we have plenty of Swift 3 books and videos if you have more specific projects in mind. The large community of developers using iOS/OS X and making libraries, combined with the growing popularity of Swift as a general-purpose language, makes jumping into Swift a worthwhile venture.

Interested in what other developers have been up to across the tech landscape? Find out in our free Skill Up: Developer Talk report on the state of software in 2017.
Sam Wood
09 Sep 2016
3 min read

5 New Features That Will Make Developers Love Android 7

Android Nougat is here, and it's looking pretty tasty. We've been told about the benefits to end users, but what are some of the most exciting features for developers to dive into? We've got five that we think you'll love.

1. Data Saver

If your app is a hungry, hungry data devourer, then you could be losing users as you burn through their allowance of cellular data. Android 7's new Data Saver feature can help with that. It throttles background data usage and signals to foreground apps to use less data. Worried that will make your app less useful? Don't worry: users can 'whitelist' applications to consume their full data desires.

2. Multi-tasking

It's the big flagship feature of Android 7: the ability to run two apps on the screen at once. As phones keep getting bigger (and more and more people opt for Android tablets over an iPad), having the option to run two apps alongside each other makes a lot more sense. What does this mean for developers? Well, first, you'll want to tweak your app to make sure it's multi-window ready. But what's even more exciting is the potential for drag-and-drop functionality between apps, dragging text and images from one pane to another. Ever miss being able to just drag files to attach them to an email like on a desktop? With Android N, that's coming to mobile, and devs should consider updating accordingly.

3. Vulkan API

Nougat brings a new option to Android game developers in the form of the Vulkan graphics API. No longer restricted to just OpenGL ES, developers will find that Vulkan provides them with more direct control over hardware, which should lead to improved game performance. Vulkan can also be used across OSes, including Windows and SteamOS (Valve is a big backer). By adopting Vulkan, Google has really opened up the possibility for high-performance games to make it onto Android.

4. Just-In-Time Compiler

Android 7 has added a JIT (Just-In-Time) compiler, which will work to constantly improve the performance of Android apps as they run. The performance of your app will improve, but the device won't consume too much memory. Say goodbye to freezes and non-responsive devices, and hello to faster installation and updates! This means users installing more and more apps, which means more downloads for you!

5. Notification Enhancements

Android 7 changes the way notifications work on your device. Rather than just popping up at the top of the device, notifications in Nougat will have the option for a direct reply without opening the app, will be bundled together with related notifications, and can even be viewed as a 'heads-up' notification displayed to the user when the device is active. These heads-up notifications are also customizable by app developers, so you'd better start getting creative! How will this option affect your app's UX and UI?

There's plenty more...

These are just some of the features of Android 7 we're most excited about; there's plenty more to explore! So dive right in to Android development, and start building for Nougat today!
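As a quick illustration of the multi-window opt-in mentioned in the multi-tasking section above, here is a sketch of the relevant manifest declaration (the attribute comes from the Android N documentation; the activity name is a placeholder, not from any real project):

```xml
<!-- Declares that MainActivity (placeholder name) supports
     split-screen multi-window mode on Android N. Activities in apps
     targeting API 24 are resizeable by default unless this is set
     to false. -->
<activity
    android:name=".MainActivity"
    android:resizeableActivity="true" />
```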
Raka Mahesa
07 Jul 2016
6 min read

OpenCV and Android: Making Your Apps See

Computer vision might sound like an exotic term, but it's actually a piece of technology that you can easily find in your daily life. You know how Facebook can automatically tag your friends in a photo? That's computer vision. Have you ever tried Google Image Search? That's computer vision too. Even the QR code reader app in your phone employs some sort of computer vision technology.

Fortunately, you don't have to conduct your own research to implement computer vision, since that technology is easily accessible in the form of SDKs and libraries. OpenCV is one of those libraries, and it's open source too. OpenCV focuses on real-time computer vision, so it feels very natural when the library is extended to Android, a device that usually has a camera built in. However, if you're looking to implement OpenCV in your app, you will find the official documentation for the Android version lagging a bit behind the ever-evolving Android development environment. But don't worry; this post will help you with that. Together we're going to add the OpenCV Android library and use some of its basic functions in your app.

Requirements

Before you get started, let's make sure you have all of the following requirements:

- Android Studio v1.2 or above
- Android 4.4 (API 19) SDK or above
- OpenCV for Android library v3.1 or above
- An Android device with a camera

Importing the OpenCV Library

All right, let's get started. Once you have downloaded the OpenCV library, extract it and you will find a folder named "sdk" in it. This "sdk" folder should contain folders called "java" and "native". Remember the location of these two folders, since we will get back to them soon enough.

Now create a new project with a blank activity in Android Studio. Make sure to set the minimum required SDK to API 19, which is the lowest version that's compatible with the library. Next, import the OpenCV library. Open the File > New > Import Module... menu and point it to the "java" folder mentioned earlier, which will automatically copy the Java library to your project folder.

Now that you have added the library as a module, you need to link the Android project to the module. Open the File > Project Structure... menu and select app. On the Dependencies tab, press the + button, choose Module Dependency, and select the OpenCV module in the list that pops up. Next, you need to make sure that the module will be built with the same settings as your app. Open the build.gradle scripts for both the app and the OpenCV module, and copy the SDK version and tools version values from the app gradle script to the OpenCV gradle script. Once that's done, sync the gradle scripts and rebuild the project. Here are the values from my gradle script, but your script may differ based on the SDK version you used:

compileSdkVersion 23
buildToolsVersion "23.0.0 rc2"

defaultConfig {
    minSdkVersion 19
    targetSdkVersion 23
}

To finish importing OpenCV, you need to add the C++ libraries to the project. Remember the "native" folder mentioned earlier? There should be a folder named "libs" inside it. Copy the "libs" folder to the <project-name>/OpenCVLibrary/src/main folder and rename it to "jniLibs" so that Android Studio will know that the files inside that folder are C++ libraries. Sync the project again, and now OpenCV should be imported properly into your project.

Accessing the Camera

Now that you're done importing the library, it's time for the next step: accessing the device's camera. The OpenCV library has its own camera UI that you can use to easily access the camera data, so let's use that. To do that, simply replace the layout XML file for your main activity with this one. Then you'll need to ask permission from the user to access the camera. Add the following line to the app manifest.
<uses-permission android:name="android.permission.CAMERA"/>

And if you're building for Android 6.0 (API 23), you will need to ask for permission inside the app. Add the following line to the onCreate() function of your main activity to ask for permission:

requestPermissions(new String[] { Manifest.permission.CAMERA }, 1);

There are two things to note about the camera UI from the library. First, by default it will not show anything unless it's activated in the app by calling the enableView() function. And second, in portrait orientation the camera will display a rotated view. Fixing this last issue is quite a hassle, so let's just lock the app to landscape orientation.

Using the OpenCV Library

With the preparation out of the way, let's start actually using the library. Here's the code for the app's main activity if you want to see how the final version works. To use the library, initialize it by calling the OpenCVLoader.initAsync() method in the activity's onResume() method. This way the activity will always check whether the OpenCV library has been initialized every time the app is going to use it.

// Create callback
protected LoaderCallbackInterface mCallback = new BaseLoaderCallback(this) {
    @Override
    public void onManagerConnected(int status) {
        // If not success, call the base method
        if (status != LoaderCallbackInterface.SUCCESS)
            super.onManagerConnected(status);
        else {
            // Enable camera if connected to library
            if (mCamera != null)
                mCamera.enableView();
        }
    }
};

@Override
protected void onResume() {
    super.onResume();

    // Try to init
    OpenCVLoader.initAsync(OpenCVLoader.OPENCV_VERSION_2_4_10, this, mCallback);
}

The initialization process will check if your phone already has the full OpenCV library. If it doesn't, it will automatically open the Google Play page for the OpenCV Manager app and ask the user to install it. And if OpenCV has been initialized, it simply activates the camera for further use.

If you noticed, the activity implements the CvCameraViewListener2 interface. This interface gives you access to the onCameraFrame() method, a callback that lets you read the image the camera is capturing and return the image that the interface should be showing. Let's try some simple image processing and show the result on the screen.

@Override
public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
    // Get edges from the image
    Mat result = new Mat();
    Imgproc.Canny(inputFrame.rgba(), result, 70, 100);

    // Return result
    return result;
}

Imgproc.Canny() is an OpenCV function that performs Canny edge detection, a process that detects all edges in a picture. As you can see, it's pretty simple; you put the image from the camera (inputFrame.rgba()) into the function and it returns another image that shows only the edges. Here's what the app's display will look like.

And that's it! You've implemented a pretty basic feature from the OpenCV library in an Android app. There are still many image processing features that the library has, so check out this exhaustive list of features for more. Good luck!

About the author

Raka Mahesa is a game developer at Chocoarts who is interested in digital technology in general. Outside of work hours, he likes to work on his own projects, with Corridoom VR being his latest released game. Raka also regularly tweets as @legacy99.
Samrat Shaw
20 May 2016
5 min read

iOS 9: Up to Speed

iOS 9 is the biggest iOS release to date. The new OS introduced intricate new features and refined existing ones. The biggest focus is on intelligence and proactivity, allowing iOS devices to learn user habits and act on that information. While it isn't a groundbreaking change like iOS 7, there is a lot of new functionality for developers to learn. Along with iOS 9 and Xcode 7, Apple also announced major changes to the Swift language (Swift 2.0) and announced open source plans. In this post, I will discuss some of my favorite changes and additions in iOS 9.

1 List of new features

Let's examine the new features.

1.1 Search Extensibility

Spotlight search in iOS now includes searching within third-party apps. This allows you to deep link from Search in iOS 9. You can allow users to supply relevant information that they can then navigate directly to. When a user clicks on any of the search results, the app will be opened and you can be redirected to the location where the search keyword is present. The new enhancements to the Search API include the NSUserActivity APIs, Core Spotlight APIs, and web markup.

1.2 App Thinning

App thinning optimizes the install size of apps to use the lowest amount of storage space while retaining critical functionality. Thus, users will only download the parts of the binary that are relevant to them. The app's resources are now split, so that if a user installs an app on an iPhone 6, they do not download iPad code or other assets that are used to make an app universal. App thinning has three main aspects, namely app slicing, on-demand resources, and bitcode. Faster downloads and more space for other apps and content provide a better user experience.

1.3 3D Touch

The iPhone 6s and 6s Plus added a whole new dimension to UI interactions. A user can now press the Home screen icon to immediately access functionality provided by an app.
Within the app, a user can now press views to see previews of additional content and gain accelerated access to features. 3D Touch works by detecting the amount of pressure that you are applying to the phone's screen in order to perform different actions. In addition to the UITouch APIs, Apple has also provided two new sets of classes that add 3D Touch functionality to apps: UIPreviewAction and UIApplicationShortcutItem. This unlocks a whole new paradigm of iOS device interaction and will enable a new generation of innovation in upcoming iOS apps.

1.4 App Transport Security (ATS)

With the introduction of App Transport Security, Apple is leading by example to improve the security of its operating system. Apple expects developers to adopt App Transport Security in their applications. With App Transport Security enabled, network requests are automatically made over HTTPS instead of HTTP. App Transport Security requires TLS 1.2 or higher. Developers also have the option to disable ATS, either selectively or as a whole, by specifying this in the Info.plist of their applications.

1.5 UIStackView

The newly introduced UIStackView is similar to Android's LinearLayout. Developers embed views in the UIStackView (either horizontally or vertically) without the need to specify auto layout constraints. The constraints are inserted by UIKit at runtime, thus making life easier for developers. Developers have the option to specify the spacing between the subviews. It is important to note that UIStackViews don't scroll; they just act as containers that automatically fit their content.

1.6 SFSafariViewController

With SFSafariViewController, developers can use nearly all of the benefits of viewing web content inside Safari without forcing users to leave an app. It saves developers a lot of time, since they no longer need to create their own custom browsing experiences.
For users too, it is more convenient, since they will have their passwords pre-filled, will not have to leave the app, will have their browsing history available, and more. The controller also comes with a built-in reader mode.

1.7 Multitasking for iPad

Apple has introduced Slide Over, Split View, and Picture-in-Picture for iPad, thus allowing certain models to use the much larger screen space for more tasks. From the developer's point of view, this can be supported by using iOS Auto Layout and Size Classes. If the code base already uses these, then the app will automatically respond to the new multitasking setup. Starting from Xcode 7, each iOS app template is preconfigured to support Slide Over and Split View.

1.8 The Contacts Framework

Apple has introduced a brand new framework, Contacts. This replaces the function-based AddressBook framework. The Contacts framework provides an object-oriented approach to working with the user's contact information. It also provides an Objective-C API that works well with Swift too. This is a big improvement over the previous method of accessing a user's contacts with the AddressBook framework.

As you can see from this post, there are a lot of exciting new features and capabilities in iOS 9 that developers can tap into, thus providing new and exciting apps for the millions of Apple users around the world.

About the author

Samrat Shaw is a graduate student (software engineering) at the National University of Singapore and an iOS intern at Massive Infinity.
Richard Gall
21 Mar 2016
2 min read

Android: Your Mobile Platform of Choice

It's been a long week of argument and debate, strong words and opinions, and that's just in the Packt office. But now that the votes have been counted, we can announce that Android is the Packt customer's mobile platform of choice. Across our website poll and our Twitter poll, Android was the clear winner. Throughout the week, it also proved to be the most popular platform with customers, with sales of our Android eBooks exceeding those for iOS. As you can see, our Twitter poll delivered a particularly significant win for Android.

Clearly there was a lot of love for Android. But what we really loved about the week was hearing some interesting perspectives from mobile developers around the world. This tweet in particular summed up why we think Android dominated the vote: fundamentally, it's all about customization. With Android you have more freedom as a developer, which, for many developers, is central to the sheer pleasure of the development experience.

Of course, the freedom you get with Android is only a certain type of freedom, and there are, of course, trade-offs if you want the openness of such a platform. This article from October 2015 suggested that Android development is '30% more expensive than iOS development' due to the longer amount of time Android projects take; the writers estimate that, on average, you write 40% more code when working with Android than with iOS. But with new tools on the horizon likely to make Android development even more efficient (after all, think about what it was like to build for Android back in 2013!), it's unsurprising that it should prove so popular with many developers.

We're celebrating Android's win with an additional week of offers, which means you've now got another week to pick up our very best Android titles and get ready for a bright and exciting future in the mobile development world!
Milton Moura
16 Mar 2016
6 min read

Reactive Programming in Swift

In this post we will learn how to use some of Swift's functional features to write more concise and expressive code using RxSwift, a reactive programming framework, to manage application states and concurrent tasks. Swift and its functional features Swift can be described as a modern object-oriented language with native support for generic programming. Although it is not a functional language, it has some features that allows us to program using a functional approach, like closures, functions as first-class types, and immutable value types. Nevertheless, Cocoa Touch is an object-oriented framework and bares the constraints that this paradigm enforces. Typical issues that arise in software development projects include managing shared application state and concurrent asynchronous tasks that compete for the data that resides there. Functional programming solves these problems by privileging the immutable state and defining application logic as expressions that do not change during the application's lifecycle. By defining self-contained functions, computations can be easily parallelized and concurrency issues minimized. The Reactive Model The reactive programming model has its roots in FRP (functional reactive programming), which shifts the paradigm from discrete, imperative, command-driven programming to a series of transformations that can be applied to a stream of inputs continously over time. While that might sound like a mouthful, there's nothing quite like a simple example to get a feel for what this means. Expressing a relationship between variables Let's say you have two variables (A and B) whose value changes over the running time of an application, and a third one (C) that derives its own value based on the previous two. 1. var A = 10 2. var B = 20 3. let C = A * 2 + B 4. 5. // Current Values 6. // A = 10, B = 20, C = 40 7. 8. A = 0 9. 10. // Current Values 11. 
// A = 0, B = 20, C = 40 The definition of C with regards to A and B is evaluated only once, when the assignment operation is executed. The relationship between them is lost immediatly after that. Changing A or B from then on will have no effect on the value of C. At any given moment, to evaluate that expression you must reassign the value of C and calculate it once again, based on the current values of A and B. How would we do this in a reactive programming approach? In the reactive model, we would create two streams that propagate changes in the values of either A or B over time. Each value change is represented as a signal in its corresponding stream. We then combine both streams and assign a transformation that we want to perform on each signal emitted, thus creating a new stream that will emit only transformed values. The usual way to demonstrate this is using Marbles Diagrams, where each line represents the continuity of time and each marble an event that occurs at a determined point in time: Reacting in Cocoa Touch To address this in Cocoa Touch, you could use Key-Value Observing to add observers to the changing variables and handle them when the KVO system notifies you: self.addObserver(self, forKeyPath:"valueA", options: .New, context: nil) self.addObserver(self, forKeyPath:"valueB", options: .New, context: nil) override func observeValueForKeyPath(keyPath: String?, ofObject object: AnyObject?, change: [String : AnyObject]?, context: UnsafeMutablePointer<Void>) { let C = valueA * 2 + valueB } If your variables are tied to the user interface, in UIKit you could define a handler that is invoked when change events are triggered: sliderA.addTarget(self, action: "update", forControlEvents: UIControlEvents.ValueChanged) sliderB.addTarget(self, action: "update", forControlEvents: UIControlEvents.ValueChanged) func update() { let C = sliderA.value * 2 + sliderB.value } But none of these approaches define a persistent and explicit relationship between the variables 
involved, their lifecycle, and the events that change their value. We can overcome this with a reactive programming model. A couple of different implementations are currently available for OS X and iOS development, such as RxSwift and ReactiveCocoa. We will focus on RxSwift, but the basic concepts we address are similar in both frameworks.

RxSwift

RxSwift extends the Observer pattern to simulate asynchronous streams of data flowing out of your Cocoa Touch objects as if they were typical collections. By extending some of Cocoa Touch's classes with observable streams, you are able to subscribe to their output and use them with composable operations, such as filter(), merge(), map(), reduce(), and others.

Returning to our previous example, let's say we have an iOS application with two sliders (sliderA and sliderB) and we wish to continuously update a label (labelC) with the same expression we used before (A * 2 + B):

1. combineLatest(sliderA.rx_value, sliderB.rx_value) {
2.     $0 * 2 + $1
3. }.map {
4.     "Sum of slider values is \($0)"
5. }.bindTo(labelC.rx_text)

We take advantage of the rx_value extension of the UISlider class, which transforms the slider's value property into an observable type that emits an item whenever its value changes. By applying the combineLatest() operation to both sliders' observables, we create a new observable type that emits an item whenever either of its source streams emits one. The resulting emission is a tuple with both sliders' values, which we transform in the operation's callback (line 2). We then map the transformed value into an informative string (line 4) and bind it to our label (line 5). By composing three independent operations (combineLatest(), map() and bindTo()) we concisely express a relationship between three objects and continuously update our application's UI, reacting to changes in the application state.

What's next?

We are only scratching the surface of what you can do with RxSwift.
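If you are curious what the machinery behind an operation like combineLatest() might look like, here is a tiny hand-rolled sketch of the idea. This is not RxSwift's actual implementation; the Stream type and function names are purely illustrative, using Swift 2-era syntax to match the article:

```swift
// A minimal "stream" that pushes values to its subscribers over time.
class Stream<T> {
    private var subscribers: [(T) -> Void] = []
    func subscribe(handler: (T) -> Void) {
        subscribers.append(handler)
    }
    func emit(value: T) {
        for subscriber in subscribers { subscriber(value) }
    }
}

// Combine the latest values of two streams, in the spirit of combineLatest():
// the handler fires whenever either stream emits, once both have a value.
func combineLatest(a: Stream<Int>, _ b: Stream<Int>, handler: (Int, Int) -> Void) {
    var latestA: Int?
    var latestB: Int?
    a.subscribe { value in
        latestA = value
        if let x = latestA, y = latestB { handler(x, y) }
    }
    b.subscribe { value in
        latestB = value
        if let x = latestA, y = latestB { handler(x, y) }
    }
}

let valueA = Stream<Int>()
let valueB = Stream<Int>()
combineLatest(valueA, valueB) { a, b in
    print("C = \(a * 2 + b)")   // the persistent relationship: C = A * 2 + B
}
valueA.emit(10)   // no output yet: B has not emitted a value
valueB.emit(20)   // prints "C = 40"
valueA.emit(0)    // prints "C = 20" — C now tracks changes automatically
```

Unlike the KVO and target-action versions above, the relationship between the values is declared once and holds for the lifetime of the streams.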
In the sample source code, you will find an example of how to download online resources using chainable asynchronous tasks. Be sure to check it out if this article sparked your curiosity. Then take some time to read the documentation and learn about the several other API extensions that will help you develop iOS apps in a more functional and expressive way. Discover how patterns in Swift can help you deal with a large number of similar objects in our article Using the Flyweight Pattern.

About the author

Milton Moura (@mgcm) is a freelance iOS developer based in Portugal. He has worked professionally in several industries, from aviation to telecommunications and energy, and is now fully dedicated to creating amazing applications using Apple technologies. With a passion for design and user interaction, he is also very interested in new approaches to software development. You can find out more at http://defaultbreak.com
Swift in 2016

Owen Roberts
16 Mar 2016
4 min read
It’s only been two years since Swift was first released to the public, and it’s amazing how quickly it has been adopted by iOS developers everywhere. Seen by many as a great jumping-off point and a perfect alternative to Objective-C, it has some of the best modern language features built in, like tuples and generics; being open source is the icing on the cake for tinker-happy devs looking to make the language their own. Swift is in an interesting position though; despite being one of the fastest-growing languages right now, do you know how many apps made by Apple actually use it in iOS 9.2? Only one: Calculator. It’s not a huge surprise when you think about it – the language is new and constantly evolving, and we can safely assume that Calculator’s use of Swift is to test the water as the features and workings of the language settle down. Maybe in the next 2-3 years Apple will have finally moved to a pure Swift world, but other developers? They’re really jumping into the language. IBM, for example, uses Swift for all its iOS apps. What does this mean for you? It means that, as a developer, you have the chance to help shape a young language, an opportunity that rarely comes along. So here are a few reasons you should take the plunge and get deeper into Swift in 2016, and if you haven’t started yet, then there’s no better time!

Swift 3 is coming

What better time to get even deeper into the language than when it’s about to add a host of great new features? Swift 3.0 is currently scheduled to launch around the tail end of 2016, and Apple isn’t keeping what it wants to include close to its chest. The biggest additions look to be a stabilized ABI, further refinement of the language with added resilience to change, and increased portability. All these changes have been on the wishlists of Swift devs for ages, and now that we’re finally going to get them, there are sure to be more professional projects written purely in Swift.
3.0 looks to be the edition of Swift that you can use for your customers without worry, so if you haven’t gotten into the language yet, this is the version you should be prepping for!

It’s no longer an iOS-only language

Probably the biggest change to happen to Swift since it became open source is that the language is now officially available on Ubuntu, while dedicated fans are even creating an Android port. What does this mean for you as a developer? Well, the number of platforms your apps can potentially be deployed on has grown, and one of the main complaints about Swift, that it’s an iOS-only language, is rendered moot.

It’s getting easier to learn and use

In the last two years we’ve seen a variety of different tools and package managers appear for those looking to get more out of Swift. If you’re already using Swift, you’re most likely writing apps in Xcode. However, if you’re looking to try something new, or just don’t like Xcode, there is now a host of options for you. Testing frameworks like Quick are starting to appear on the market, and alternatives such as AppCode look to build on the feedback the community gives to Xcode and fill in the gaps where it falls short. Suggestions as you type and decent project monitoring are becoming commonplace in these new environments, and there are more around if you look, so why not try a few and see which one suits your style of development?

The Swift job market is expanding

Last year the Swift job market expanded by an incredible 600%, and that was in its first year alone. With Apple giving Swift its full support and the community having grown so quickly, companies are beginning to take notice.
Many companies that produce iOS apps are looking for the benefits Swift offers over Objective-C, and having the language in your skillset is beginning to set iOS developers apart from one another. With everything happening with Swift this year, it looks to be one of the best times to jump on board or dig deeper into the language. If you’re looking to build your Swift skills, be sure to check out our iOS tech page; it has all our most popular iOS books for you to explore, along with the list of upcoming titles available to preorder, Swift included.
Swift: Missing Pieces & Surviving Change

Nicholas Maccharoli
14 Mar 2016
5 min read
Change

Swift is still a young language compared to languages like C, C++, Objective-C, Ruby, and Python. It is therefore subject to major changes that often break code for operations as simple as calculating the length of a string. Packaging functionality that is prone to change into operators, functions, or computed properties can make these transitions easier to deal with, and reduces the number of lines of code that need to be repaired every time Swift undergoes an update.

Case study: String Length

A great example of something breaking between language updates is the task of getting a string’s character length. In versions of Swift prior to 1.2, the way to calculate the length of a native string was countElements(myString), but in version 1.2 it became just count(myString). Later, at WWDC 2015, Apple announced that many previously global functions, such as count, were now implemented as protocol extensions. This resulted in once again having to rewrite parts of existing code as myString.characters.count. So how can one make these code repairs between updates more manageable? With a little help from our friends, computed properties, of course!

Say we were to write a line like this every time we wanted to get the length of a string:

let length = count(myString)

And then all of a sudden this function becomes invalid in the next major release, and we have unfortunately calculated the length of our strings this way in, say, over fifty places. Fixing this would require a code change in all fifty places. But could this have been mitigated? Yes, we could have used a computed property on the string called length right from the start:

extension String {
    var length: Int {
        return self.characters.count
    }
}

Had our Swift code originally been written like this, all that would be required is a one-line change, because the other fifty call sites would still receive a valid Int from myString.length.
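As a quick illustration of the payoff, the version-sensitive code lives in exactly one place, while every call site stays untouched across Swift updates (the string and call site below are hypothetical; Swift 2 syntax, matching the article):

```swift
extension String {
    var length: Int {
        return self.characters.count   // the only line to update when Swift changes
    }
}

// Call sites throughout the codebase never need to change:
let title = "Reactive Swift"
if title.length > 10 {
    print("Long title: \(title.length) characters")
}
```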
Missing Pieces

Swift has some great shorthand and built-in operators for things like combining strings (let fileName = fileName + ".txt") and appending to arrays (waveForms += ["Triangle", "Sawtooth"]). So what about adding one dictionary to another?

// Won't work
let languageBirthdays = ["C": 1972, "Objective-C": 1983] + ["python": 1991, "ruby": 1995]

But it works out of the box in Ruby:

compiled = { "C" => 1972, "Objective-C" => 1983 }
interpreted = { "Ruby" => 1995, "Python" => 1991 }
programming_languages = compiled.merge(interpreted)

And Python does not put up much of a fuss either (update merges in place, so we read the result from compiled):

compiled = {"C": 1972, "Objective-C": 1983}
interpreted = {"Ruby": 1995, "Python": 1991}
compiled.update(interpreted)
programming_languages = compiled

So how can we make appending one dictionary to another go as smoothly as it does for other container types, like arrays, in Swift? By overloading the + and += operators to work with dictionaries, of course!

func + <Key, Value> (var lhs: Dictionary<Key, Value>, rhs: Dictionary<Key, Value>) -> Dictionary<Key, Value> {
    rhs.forEach { lhs[$0] = $1 }
    return lhs
}

func += <Key, Value> (inout lhs: Dictionary<Key, Value>, rhs: Dictionary<Key, Value>) {
    lhs = lhs + rhs
}

With a light application of generics and operator overloading, we can make the syntax for dictionary addition the same as the syntax for array addition.

Operators FTW: Regex Shorthand

One thing you may have encountered during your time with Swift is the lack of support for regular expressions. At the time of writing, Swift is at version 2.1.1 and there is no regular expression support in the Swift Standard Library. The next best thing is to rely on a third-party library or Foundation's NSRegularExpression. The issue is that writing code against NSRegularExpression to find a simple match is a bit long-winded every time you wish to check for a match.
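For instance, a one-off check might look something like this sketch, built on the same Swift 2-era Foundation APIs the article relies on (the input string and pattern are illustrative):

```swift
import Foundation

let input = "All tests passed: TEST SUCCEEDED"

// Verbose boilerplate just to answer "does this string match?"
if let regex = try? NSRegularExpression(pattern: "test succeeded",
                                        options: .CaseInsensitive) {
    let range = NSMakeRange(0, input.characters.count)
    let matches = regex.matchesInString(input, options: [], range: range)
    if matches.count > 0 {
        print("Matched!")
    }
}
```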
Putting it into a function is not a bad idea either, but defining an operator may make our code a bit more compact. Taking inspiration from Ruby's =~ regex operator, let’s make a simple version that returns a Bool indicating whether there was a match:

infix operator =~ { associativity left precedence 140 }

func =~ (lhs: String, rhs: String) -> Bool {
    if let regex = try? NSRegularExpression(pattern: rhs, options: NSRegularExpressionOptions.CaseInsensitive) {
        let matches = regex.matchesInString(lhs, options: NSMatchingOptions.ReportCompletion, range: NSMakeRange(0, lhs.length))
        return matches.count > 0
    } else {
        return false
    }
}

(Take note of our trusty length computed property springing into action.) As of Swift 2.1 there is no built-in operator called =~, so we first need to declare the symbol, telling the Swift compiler that it is an infix operator taking operands on the left and right, with a precedence of 140 and left associativity. Associativity and precedence only matter when multiple operators are chained together, but I imagine most uses of this operator looking something like:

guard testStatus =~ "TEST SUCCEEDED" else {
    reportFailure()
    return
}

Have fun but be courteous

It would be wise to observe the Law of the Instrument and not treat everything as a nail just because you have a hammer within arm’s reach. When making the decision to wrap functionality into an operator, or to use a computed property in place of the canonical way of coding something explicitly, first ask yourself whether this really improves readability. It could be that you’re just reducing the amount of typing; think about how easily the next person reading your code could adapt. If you want to create even better Swift apps then check out our article on making the most of the Flyweight pattern in Swift, perfect when you need a large number of similar objects!
About the author Nick Maccharoli is an iOS / Backend developer and Open Source enthusiast working at a startup in Tokyo and enjoying the current development scene. You can see what he is up to at @din0sr or github.com/nirma