
Tech Guides

852 Articles

Why is data science important?

Richard Gall
24 Apr 2018
3 min read
Is data science important? It's a term that's talked about a lot but often misunderstood. Because it's a buzzword, it's easy to dismiss - but data science is important. Behind the term lies a very specific set of activities - and skills - that businesses can leverage to their advantage. Data science allows businesses to use the data at their disposal, whether that's customer data, financial data or otherwise, in an intelligent manner. Its results should be a key driver of growth.

However, although it's not wrong to see data science as a real game changer for business, that doesn't mean it's easy to do well. In fact, it's pretty easy to do data science badly. A number of reports suggest that a large proportion of analytics projects fail to deliver results. That means a huge number of organizations are doing data science wrong. Key to these failures is a misunderstanding of how to properly utilize data science. You see it so many times - buzzwords like data science are often like hammers: they make all your problems look like nails. Not properly understanding the business problems you're trying to solve is where things go wrong.

What is data science?

But what is data science exactly? Quite simply, it's about using data to solve problems. The scope of these problems is huge. Here are a few ways data science can be used:

- Improving customer retention by finding out what the triggers of churn might be
- Improving internal product development processes by looking at points where faults are most likely to happen
- Targeting customers with the right sales messages at the right time
- Informing product development by looking at how people use your products
- Analyzing customer sentiment on social media
- Financial modeling

As you can see, data science is a field that can impact every department. From marketing to product management to finance, data science isn't just a buzzword - it's a shift in mindset about how we work.

Data science is about solving business problems

To anyone still asking whether data science is important, the answer is actually quite straightforward. It's important because it solves business problems. Once you - and management - recognise that fact, you're on the right track. Too often, businesses want machine learning or big data projects without thinking about what they're really trying to do. If you want your data scientists to be successful, present them with the problems - let them create the solutions. They won't want to be told to simply build a machine learning project. It's crucial to know what the end goal is. A line often attributed to W. Edwards Deming puts it well: "In God we trust... everyone else must bring data." But data science didn't really exist then - if it did, the advice could be much simpler: trust your data scientists.


Things to Consider When Migrating to the Cloud

Kristen Hardwick
01 Jul 2014
5 min read
After the decision is made to make use of a cloud solution like Amazon Web Services or Microsoft Azure, there is one main question that needs to be answered: "What's next?" There are many factors to consider when migrating to the cloud, and this post will discuss the major steps for completing the transition.

Gather background information

Before getting started, it's important to have a clear picture of what is meant to be accomplished in order to call the transition a success. Keeping the following questions at the forefront during the planning stages will help guide your process and ensure the success of the migration.

What are the reasons for moving to the cloud?

There are many benefits of moving to the cloud, and it is important to know what the focus of the transition should be. If cost savings are the primary driver, vendor choice may be important. Prices between vendors vary, as do the support services that are offered - that might make a difference in future iterations. In other cases, the elasticity of hardware may be the main appeal. It will be important to ensure that the customization options are available at the desired level.

Which applications are being moved?

When beginning the migration process, it is important to make sure that the scope of the effort is clear. Consider the option of moving data and applications to the cloud selectively in order to ease the transition. Once the organization has completed a successful small-scale migration into the cloud, a second iteration of the process can take care of additional applications.

What is the anticipated cost?

A cloud solution will have variable costs associated with it, but it is important to have some estimation of what is expected. This will help when selecting vendors, and it will allow for guidance in configuring the system.

What is the long-term plan?

Is the new environment intended to eventually replace the legacy system? To work alongside it? Begin to think about the plan beyond the initial migration. Ensure that the selected vendor provides service guarantees that may become requirements in the future, like disaster recovery options or automatic backup services.

Determine your actual cloud needs

One important way to maximize the benefits of the cloud is to ensure that your resources are sufficient for your needs. Cloud computing services are billed based on actual usage, including processing power, storage, and network bandwidth. Configuring too few nodes will limit the ability to support the required applications, and too many nodes will inflate costs.

Determine the list of applications and features that need to be present in the selected cloud vendor. Some vendors include backup services or disaster recovery options as add-on services that will impact the cost, so it is important to decide whether or not these services are necessary. A benefit with most vendors is that these services are extremely configurable, so subscriptions can be modified. However, it is important to choose a vendor with packages that make sense for your current and future needs as much as possible, since transitioning between vendors is not typically desirable.

Implement security policies

Since the data and applications in the cloud are accessed over the Internet, it is of the utmost importance to ensure that all available vendor security policies are implemented correctly. In addition to the main access policies, determine if data security is a concern. Sensitive data such as PII or PCI may be subject to regulations that impact data encryption rules, especially when being accessed through the cloud. Ensure that the selected vendor is reliable in order to safeguard this information properly.

In some cases, applications that are being migrated will need to be refactored so that they will work in the cloud. Sometimes this means making adjustments to connection information or networking protocols. In other cases, this means adjusting access policies or opening ports. In all cases, a detailed plan needs to be made at the networking, software, and data levels in order to make the transition smooth.

Let's get to work!

Once all of the decisions have been made and the security policies have been established and implemented, the data appropriate for the project can be uploaded to the cloud. After the data is transferred, it is important to ensure that everything was successful by performing data validation and testing of data access policies (a minimal sketch of such a check appears below). At this point, everything will be configured and any application-specific refactoring or testing can begin.

In order to ensure the success of the project, consider hiring a consulting firm with cloud experience that can help guide the process. In any case, the vendor, virtual machine specifications, configured applications and services, and privacy settings must be carefully considered in order to ensure that the cloud services provide the solution necessary for the project. Once the initial migration is complete, the plan can be revised in order to facilitate the migration of additional datasets or processes into the cloud environment.
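Here is the validation sketch referenced above - a minimal example in Python, not tied to any particular vendor. It assumes both the source system and the migrated copy can export each table to CSV; the file names are placeholders:

```python
import csv
import hashlib

def table_fingerprint(path):
    """Row count plus an order-insensitive digest of a CSV export,
    cheap enough to run against both the source and the migrated copy."""
    count, digest = 0, 0
    with open(path, newline="") as f:
        for row in csv.reader(f):
            count += 1
            # XOR-combine per-row hashes so row order doesn't matter
            digest ^= int(hashlib.sha256(",".join(row).encode()).hexdigest(), 16)
    return count, digest

# Placeholder file names: exports taken before and after the migration
source = table_fingerprint("export_from_source.csv")
migrated = table_fingerprint("export_from_cloud.csv")
assert source == migrated, "Migrated data does not match the source"
print("Validation passed: %d rows, digests match" % source[0])
```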
About the author

Kristen Hardwick has been gaining professional experience with software development in parallel computing environments in the private, public, and government sectors since 2007. She has interfaced with several different parallel paradigms, including Grid, Cluster, and Cloud. She started her software development career with Dynetics in Huntsville, AL, and then moved to Baltimore, MD, to work for Dynamics Research Corporation. She now works at Spry, where her focus is on designing and developing big data analytics for the Hadoop ecosystem.


Top 7 modern Virtual Reality hardware systems

Sugandha Lahoti
20 Apr 2018
7 min read
Since its early inception, virtual reality has offered an escape. Donning a headset can transport you to a brand new world, full of wonderment and excitement. Or it can let you explore a location too dangerous for human existence. Or it can even just present the real world to you in a new manner. And now that we have moved past the era of bulky goggles and clumsy helmets, the hardware is making the aim of unfettered escapism a reality.

In this article, we present a roundup of modern VR hardware systems. Each product is presented with an overview of the device and its price as of February 2018. Use this information to compare systems and find the device which best suits your personal needs.

There has been an explosion of VR hardware in the last three years, ranging from cheaply made housings around a pair of lenses to full headsets with embedded screens creating a 110-degree field of view. Each device offers distinct advantages and use cases. Many have even dropped significantly in price over the past 12 months, making them more accessible to a wider audience of users. Following is a brief overview of each device, ranked in terms of price and complexity.

Google Cardboard

Cardboard VR is compatible with a wide range of contemporary smartphones. Google Cardboard's biggest advantages are its low cost, broad hardware support, and portability. As a bonus, it is wireless. Using the phone's gyroscopes, VR applications can track the user in 360 degrees of rotation. While modern phones are very powerful, they are not as powerful as desktop PCs. But the user is untethered and the systems are lightweight.

Cost: $5-20 (plus an iOS or Android smartphone)

Check out this post to Build Virtual Reality Solar System in Unity for Google Cardboard.

Google Daydream

Rather than plastic, the Daydream is built from a fabric-like material and is bundled with a Wii-like motion controller with a trackpad and buttons. It has superior optics compared to a Cardboard but is not as nice as the higher-end VR systems. Just as with the Gear VR, it works only with a very specific list of phones.

Cost: $79 (plus a Google or Android smartphone)

Gear VR

Gear VR is part of the Oculus ecosystem. While it still uses a smartphone (Samsung only), the Gear VR Head-Mounted Display (HMD) includes some of the same circuitry from the Oculus Rift PC solution. This results in far more responsive and superior tracking compared to Google Cardboard, although it still only tracks rotation.

Cost: $99 (plus a Samsung Android smartphone)

Oculus Rift

The Oculus Rift is the platform that reignited the VR renaissance through its successful Kickstarter campaign. The Rift uses a PC and external cameras that allow not only rotational tracking but also positional tracking, giving the user a full VR experience. The Samsung relationship allows Oculus to use Samsung screens in its HMDs. While the Oculus no longer demands that the user remain seated, it does want the user to move within a smaller 3 m x 3 m area. The Rift HMD is wired to the PC. The user can interact with the VR world with the included Xbox gamepad, mouse and keyboard, a one-button clicker, or proprietary wireless controllers.

Cost: $399, plus $800 for a VR-ready PC

Vive

The HTC Vive, developed by HTC and Valve, uses smartphone panels from HTC. The Vive has its own proprietary wireless controllers, of a different design than Oculus (but it can also work with gamepads, joysticks, and mouse/keyboard). Its most distinguishing characteristic is that the Vive encourages users to explore and walk within a 4 m x 4 m, or larger, cube.

Cost: $599, plus an $800 VR-ready PC

Sony PSVR

While there are persistent rumors of an Xbox VR HMD, Sony's is currently the only video game console with a VR HMD. It is easier to install and set up than a PC-based VR system, and while the library of titles is much smaller, the quality of the titles is higher on average. It is also the most affordable of the positional-tracking VR options. But it is also the only one that cannot be developed on by the average hobbyist developer.

Cost: $400, plus a Sony PlayStation 4 console

Microsoft HoloLens

Microsoft's HoloLens provides a unique AR experience in several ways. The user is not blocked off from the real world; they can still see the world around them (other people, desks, chairs, and so on) through the HMD's semitransparent optics. The HoloLens scans the user's environment and creates a 3D representation of that space. This allows the holograms from the HoloLens to interact with objects in the room. Holographic characters can sit on couches in the room, fish can avoid table legs, screens can be placed on walls, and so on.

The system is completely wireless - it's the only commercially available positional-tracking device that is. The computer is built into the HMD, with processing power that sits between a smartphone and a VR-ready PC. The user can walk, untethered, in areas as large as 30 m x 30 m. While an Xbox controller and a proprietary single-button controller can be used, the main interaction with the HoloLens is through voice commands and two gestures from the user's hand (Select and Go back). The final difference is that the holograms only appear in a relatively narrow field of view. Because the user can still see other people, whether sharing the same holographic projections or not, users can interact with each other in a more natural manner.

Cost: Development Edition: $3,000; Commercial Suite: $5,000

Headset costs and comparison across various features

The following chart is a sampling of VR headset prices, accurate as of February 1, 2018. VR/AR hardware is rapidly advancing, and prices and specs are going to change annually, sometimes quarterly. As of this writing, the price of the Oculus has dropped by $200:

| | Google Cardboard | Gear VR | Google Daydream | Oculus Rift | HTC Vive | Sony PS VR | HoloLens |
|---|---|---|---|---|---|---|---|
| Complete cost for HMD, trackers, default controllers | $5 | $99 | $79 | $399 | $599 | $299 | $3,000 |
| Total cost with CPU (phone, PC, PS4) | $200 | $650 | $650 | $1,400 | $1,500 | $600 | $3,000 |
| Built-in headphones | No | No | No | Yes | No | No | Yes |
| Platform | Apple/Android | Samsung Galaxy | Google Pixel | PC | PC | Sony PS4 | Proprietary PC |
| Enhanced rotational tracking | No | Yes | No | Yes | Yes | Yes | Yes |
| Positional tracking | No | No | No | Yes | Yes | Yes | Yes |
| Built-in touch panel | No* | Yes | No | No | No | No | No |
| Motion controls | No | No | No | Yes | Yes | Yes | Yes |
| Tracking system | No | No | No | Optical | Lighthouse | Optical | Laser |
| True 360 tracking | No | No | No | Yes | Yes | No | Yes |
| Room scale and size | No | No | No | Yes, 2m x 2m | Yes, 4m x 4m | Yes, 3m x 3m | Yes, 10m x 10m |
| Remote | No | No | Yes | Yes | No | No | Yes |
| Gamepad support | No | Yes | No | Yes | Yes | Yes | Yes |
| Resolution per eye | Varies | 1440 x 1280 | 1440 x 1280 | 1200 x 1080 | 1200 x 1080 | 1080 x 960 | 1268 x 720 |
| Field of view | Varies | 100 | 90 | 110 | 110 | 100 | 30 |
| Refresh rate | 60 Hz | 60 Hz | 60 Hz | 90 Hz | 90 Hz | 90-120 Hz | 60 Hz |
| Wireless | Yes | Yes | Yes | No | No | No | Yes |
| Optics adjustment | No | Focus | No | IPD | IPD | IPD | IPD |
| Operating system | iOS/Android | Android/Oculus | Android/Daydream | Win 10/Oculus | Win 10/Steam | Sony PS4 | Win 10 |
| Built-in camera | Yes | Yes | Yes* | No | Yes* | No | Yes |
| AR/VR | VR* | VR* | VR | VR | VR* | VR | AR |
| Natural user interface | No | No | No | No | No | No | Yes |

Choosing which HMD to support comes down to a wide range of issues: cost, access to hardware, use cases, image fidelity/processing power, and more. The chart above is provided to help the user understand the strengths and weaknesses of each platform. There are many HMDs not included in this overview. Some are not commercially available at the time of this writing (Magic Leap, the Win 10 HMD licensed from Microsoft, the Starbreeze/IMAX HMD, and others), and some are not yet widely available or differentiated enough, such as Razer's open source HMD.

You enjoyed an excerpt from the book Virtual Reality Blueprints, written by Charles Palmer and John Williamson. In this book, you will learn how to create immersive 3D games and applications with Cardboard VR, Gear VR, OculusVR, and HTC Vive.

Read more:
- The hype behind Magic Leap's New Augmented Reality Headsets
- Create Your First Augmented Reality Experience Using the Programming Language You Already Know
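One way to put the comparison chart above to work is shown in the Python sketch below (an editor's illustration, not from the book). It encodes three of the chart's rows - total cost with CPU, positional tracking, and wireless - and filters the headsets against a budget and feature requirements:

```python
# Values transcribed from the comparison chart above
headsets = [
    {"name": "Google Cardboard", "total_cost": 200,  "positional": False, "wireless": True},
    {"name": "Gear VR",          "total_cost": 650,  "positional": False, "wireless": True},
    {"name": "Google Daydream",  "total_cost": 650,  "positional": False, "wireless": True},
    {"name": "Oculus Rift",      "total_cost": 1400, "positional": True,  "wireless": False},
    {"name": "HTC Vive",         "total_cost": 1500, "positional": True,  "wireless": False},
    {"name": "Sony PS VR",       "total_cost": 600,  "positional": True,  "wireless": False},
    {"name": "HoloLens",         "total_cost": 3000, "positional": True,  "wireless": True},
]

def candidates(budget, need_positional=False, need_wireless=False):
    """Return headsets within budget that satisfy the requirements, cheapest first."""
    picks = [h for h in headsets
             if h["total_cost"] <= budget
             and (h["positional"] or not need_positional)
             and (h["wireless"] or not need_wireless)]
    return sorted(picks, key=lambda h: h["total_cost"])

print(candidates(budget=1500, need_positional=True))
# -> Sony PS VR ($600), then Oculus Rift ($1,400), then HTC Vive ($1,500)
```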


Introducing Woz, a Progressive WebAssembly Application (PWA + Web Assembly) generator written entirely in Rust

Sugandha Lahoti
04 Sep 2019
5 min read
Progressive Web Apps are already being deployed at a massive scale, evidenced by their presence on most websites now. But what's next for PWAs? Alex Kehayis, a developer at Stripe, thinks it's the merging of WebAssembly and PWAs. According to him, the adoption of WebAssembly and the ease of distribution on the web create compelling new opportunities for application development. He has created what he calls Progressive WebAssembly Applications (PWAAs), built entirely using Rust. In his talk at the WebAssembly San Francisco Meetup, Alex walks through the creation of Woz, a PWA toolchain for Rust. Woz is a progressive WebAssembly app (PWAA) generator for Rust, and it makes distributing your app as simple as sharing a hyperlink.

Read also: Fastly CTO Tyler McMullen on Lucet and the future of WebAssembly and Rust [Interview]

Web content has become efficient

Alex begins his talk by pointing out how web content has become massively efficient; this is because it solves three problems:

- Distribution: actually serving content to your users
- Unification: write once and run it everywhere
- Experience: consume content in a low-friction environment

Mobile applications vs web applications

Applications are a kind of elevated form of content. They tend to be more experiential, dynamic, and interactive. Alex points to the definition of 'application' from Wikipedia, which states that applications are software designed to perform a group of coordinated functions, tasks, and activities for the benefit of users. Despite all progress, mobile apps are still hugely inefficient to create, distribute, and use. Their distribution is generally in the hands of the duopoly, Apple and Google. Unification is generally handled through third-party frameworks such as React Native or Xamarin. The user experience on mobile apps, although performant, involves high friction: a user generally has to switch between apps and wait for them to install, load, and so on.

Web-based applications, on the other hand, are quite efficient to create, distribute, and use. Anybody who's got an internet connection and a browser can use a web application. For web applications, unification happens through standards rather than frameworks, which is more efficient. The user experience is also quite dynamic and fast; you jump right in and don't necessarily have to install anything.

Should everybody just use web apps instead of mobile apps?

Although mobile applications are a bit inefficient, they bring certain advantages:

- A native application has better performance than a web-based app
- Encapsulation (e.g. home screen, self-contained experience)
- Mobile apps are offline by default
- Mobile apps use hardware/sensors
- Native apps typically consume less battery than web apps

In order to get the best of both worlds, Alex suggests the following steps:

- Bring web applications to mobile. This has already been implemented; these are called progressive web applications.
- Improve the state of performance and access. Alex says that WebAssembly is a viable choice for achieving this, as WebAssembly is highly performant when it's paired with a language like Rust. The result: Progressive WebAssembly Applications.

Woz, a Progressive WebAssembly Application generator

Alex proceeds to talk about Woz, a progressive WebAssembly application generator. It combines all the good things of a PWA and WebAssembly and works as a toolchain for building and deploying performant mobile apps with Rust. You can distribute your app as simply as sharing a hyperlink. Woz brings distribution via browsers, unification via web standards, and experience via hyperlinks. Woz uses wasm-bindgen to generate the interop calls between WebAssembly and JavaScript. This allows you to write the entire application in Rust, including rendering to the DOM. It will soon come with 'managed charging' for your apps, and will even provide multiple copies your users can share, all with a hyperlink. Unlike a PWA, which needs an SSL certificate, a PWA manifest, a splash screen, home screen icons, and a service worker, a PWAA requires only the JS bindings to WebAssembly and code to fetch, compile, and run the wasm.

His talk also covered some popular Rust-based frontend frameworks:

- Yew: "Yew is a modern Rust framework inspired by Elm and React for creating multi-threaded frontend apps with WebAssembly."
- Sauron: "Sauron is an html web framework for building web-apps. It is heavily inspired by elm."
- Percy: "A modular toolkit for building isomorphic web apps with Rust + WebAssembly"
- Seed: "A Rust framework for creating web apps"

Read also: "Rust is the future of systems programming, C is the new Assembly": Intel principal engineer Josh Triplett

With Woz, the goal, Alex says, was to stay in Rust and create a PWA that can be installed to your home screen. The sample app that he created weighs only about 300 KB. Alex says, "In order to actually write the app, you really only need one entry point - it's a public method render that's decorated with wasm_bindgen. The rest will kind of figure itself out. You don't necessarily need to go create your own JavaScript file." He then proceeded to show a quick demo of what it looks like.

What's next?

WebAssembly will continue to evolve, and more languages and ecosystems will be able to target it. Progressive web apps will also continue to evolve. PWAAs are an interesting proposition. As Alex puts it, we should really be liberating mobile apps and bringing them to the web, and WebAssembly is something of a missing link in making that happen.

Watch Alex Kehayis's full talk on YouTube. Slides are available here. https://www.youtube.com/watch?v=0ySua0-c4jg

Other news in Tech

- Wasmer's first Postgres extension to run WebAssembly is here!
- Mozilla proposes WebAssembly Interface Types to enable language interoperability
- Wasmer introduces WebAssembly Interfaces for validating the imports and exports of a Wasm module


20 ways to describe programming in 5 words

Richard Gall
25 Apr 2018
3 min read
How would you describe programming? Can you describe programming in 5 words? It's pretty difficult. Even explaining it in a basic and straightforward way can be challenging. You type stuff... and then it turns into something else or makes something happen. Or, as is often the case, something doesn't happen.

Twitter account @abstractionscon asked its followers "what 5 words best describe programming?" The results didn't disappoint. There was a mix of funny, slightly tragic, and even poetic evocations and descriptions of what programming is and what it feels like. It turns out that more often than not, it simply feels frustrating. Things go wrong a lot.

One of the most interesting aspects of the conversation was how it brings to light just how challenging it is to put programming into language. That's reflected in many of the responses to the original tweet. One of the conclusions we can probably draw from this is that not only is describing programming pretty hard, it's also pretty funny. And from that, perhaps it's also true that programming is generally a pretty funny thing to do. But then why would that be surprising? You learn from an early age that getting a computer to do what you want is difficult, so why should writing software be any different?

Take a look at some of the best attempts to describe programming below. Which is your favourite? And how would you describe programming?

- https://twitter.com/alicegoldfuss/status/988818057219854336
- https://twitter.com/jennschiffer/status/988849269552578560
- https://twitter.com/lindseybieda/status/988941397544890368
- https://twitter.com/sarahmei/status/988600171075268608
- https://twitter.com/tef_ebooks/status/988752549552578560
- https://twitter.com/jckarter/status/988828156386684928
- https://twitter.com/cassidoo/status/988920470907961344
- https://twitter.com/kelseyhightower/status/988646191679209472
- https://twitter.com/francesc/status/988653691669446658
- https://twitter.com/shanselman/status/988919759377915904
- https://twitter.com/chriseng/status/988674723516207104
- https://twitter.com/EricaJoy/status/988649667914186755
- https://twitter.com/brianleroux/status/988628362355773440
- https://twitter.com/ftrain/status/988759827731148800
- https://twitter.com/jbeda/status/988634633087545344
- https://twitter.com/kamal/status/988749873347375104
- https://twitter.com/fatih/status/988695353171030016
- https://twitter.com/innesmck/status/989067129432498176
- https://twitter.com/franckverrot/status/988611564168036352
- https://twitter.com/dewitt/status/988609620536053760

Thank you Twitter for your insights and jokes. It does make you feel better to know that there are millions of people out there with the same frustrations and software-induced high blood pressure. The next time something goes wrong, remember: you're really just meat teaching sand to think. Hopefully that should put everything into perspective.

Read more: Slow down to learn how to code faster


What is the Reactive Manifesto?

Packt Editorial Staff
17 Apr 2018
3 min read
The Reactive Manifesto is a document that defines the core principles of reactive programming. It was first released in 2013 by a group of developers led by Jonas Bonér (you can find him on Twitter: @jboner). Jonas wrote this in a blog post explaining the reasons behind the manifesto:

"Application requirements have changed dramatically in recent years. Both from a runtime environment perspective, with multicore and cloud computing architectures nowadays being the norm, as well as from a user requirements perspective, with tighter SLAs in terms of lower latency, higher throughput, availability and close to linear scalability. This all demands writing applications in a fundamentally different way than what most programmers are used to."

A number of high-profile programmers signed the Reactive Manifesto. Some of the names behind it include Erik Meijer, Martin Odersky, Greg Young, Martin Thompson, and Roland Kuhn. A second, updated version of the Reactive Manifesto was released in 2014; to date, more than 22,000 people have signed it.

The Reactive Manifesto underpins the principles of reactive programming

You can think of it as the map to the treasure of reactive programming, or the bible for programmers of the reactive programming religion. Everyone starting with reactive programming should read the manifesto to understand what reactive programming is all about and what its principles are.

The 4 principles of the Reactive Manifesto

1. Reactive systems must be responsive. The system should respond in a timely manner. Responsive systems focus on providing rapid and consistent response times, so they deliver a consistent quality of service.

2. Reactive systems must be resilient. If the system faces any failure, it should stay responsive. Resilience is achieved by replication, containment, isolation, and delegation. Failures are contained within each component, isolating components from each other, so when a failure occurs in one component, it will not affect the other components or the system as a whole.

3. Reactive systems must be elastic. Reactive systems can react to changes and stay responsive under varying workload. They achieve elasticity in a cost-effective way on commodity hardware and software platforms.

4. Reactive systems must be message driven. In order to establish the resilience principle, reactive systems need to establish a boundary between components by relying on asynchronous message passing.

Those are the core principles behind reactive programming put forward by the manifesto. But there's something else that supports the thinking behind reactive programming: the standard specification on Reactive Streams.

Reactive Streams standard specification

Everything in the reactive world is accomplished with the help of Reactive Streams. In 2013, Netflix, Pivotal, and Lightbend (previously known as Typesafe) felt the need for a standard specification for Reactive Streams, as reactive programming was beginning to spread and more frameworks for reactive programming were starting to emerge. They started the initiative that resulted in the Reactive Streams standard specification, which is now being implemented across various frameworks and platforms. You can take a look at the Reactive Streams standard specification here.

This post has been adapted from Reactive Programming in Kotlin. Find it on the Packt store here.
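The message-driven principle above is the easiest one to demonstrate in code. The book this post is adapted from works in Kotlin, but as a language-neutral editor's illustration, here is a minimal sketch using Python's asyncio: two components share nothing except a bounded mailbox, so they stay decoupled, and the bounded queue applies back-pressure under load:

```python
import asyncio

async def producer(queue):
    # Components communicate only through asynchronous messages...
    for i in range(3):
        await queue.put(f"event-{i}")
    await queue.put(None)  # sentinel: no more messages

async def consumer(queue):
    # ...so a slow or failing consumer never blocks the producer directly.
    while (msg := await queue.get()) is not None:
        print("processed", msg)

async def main():
    queue = asyncio.Queue(maxsize=10)  # bounded mailbox applies back-pressure
    await asyncio.gather(producer(queue), consumer(queue))

asyncio.run(main())
```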

What role does Linux play in securing Android devices?

Sugandha Lahoti
07 Oct 2018
9 min read
In this article, we will talk about the Android model, particularly the Linux kernel layer, over which Android is built. We will also talk about Android's security features and offerings and the role Linux plays in securing Android OS. This article is taken from the book Practical Mobile Forensics - Third Edition by Rohit Tamma et al. In this book, you will investigate, analyze, and report on iOS, Android, and Windows devices.

The Android architecture

Android is open source and the code is released under the Apache license. Practically, this means anyone (especially device manufacturers) can access it, freely modify it, and use the software according to the requirements of any device. This is one of the primary reasons for its wide acceptance. Notable players that use Android include Samsung, HTC, Sony, and LG. As with any other platform, Android consists of a stack of layers running one above the other. To understand the Android ecosystem, it's essential to have a basic understanding of what these layers are and what they do. The following figure summarizes the various layers involved in the Android software stack:

(Figure: Android architecture)

Each of these layers performs several operations that support specific operating system functions. Each layer provides services to the layers lying on top of it.

The Linux kernel layer

Android OS is built on top of the Linux kernel, with some architectural changes made by Google. There are several reasons for choosing the Linux kernel. Most importantly, Linux is a portable platform that can be compiled easily on different hardware. The kernel acts as an abstraction layer between the software and hardware present on the device.

Consider the case of a camera click. What happens when you take a photo using the camera button on your device? At some point, the hardware instruction (pressing a button) has to be converted to a software instruction (to take a picture and store it in the gallery). The kernel contains drivers to facilitate this process. When the user presses the button, the instruction goes to the corresponding camera driver in the kernel, which sends the necessary commands to the camera hardware, similar to what occurs when a key is pressed on a keyboard. In simple words, the drivers in the kernel control the underlying hardware.

The Linux kernel is responsible for managing the core functionality of Android, such as process management, memory management, security, and networking. Linux is a proven platform when it comes to security and process management. Android has taken advantage of the existing Linux open source OS to build a solid foundation for its ecosystem. Each version of Android uses a different version of the underlying Linux kernel. The Marshmallow Android version is known to use Linux kernel 3.18.10, whereas the Nougat version is known to use Linux kernel 4.4.1.

Android security

Android was designed with a specific focus on security. Android as a platform offers and enforces certain features that safeguard the user data present on the mobile through multi-layered security. There are certain safe defaults that will protect the user, and certain offerings that can be leveraged by the development community to build secure applications. The following issues are to be kept in mind while incorporating Android security controls:

- Protecting user-related data
- Safeguarding the system resources
- Making sure that one application cannot access the data of another application

The next few sections will help us understand more about Android's security features and offerings.

Secure kernel

Linux has evolved as a trusted platform over the years, and Android has leveraged this fact by using it as its kernel. The user-based permission model of Linux has, in fact, worked well for Android. As mentioned earlier, there is a lot of specific code built into the Linux kernel. With each Android version release, the kernel version has also changed. The following table shows Android versions and their corresponding kernel versions:

| Android version | Linux kernel version |
|---|---|
| 1.0 | 2.6.25 |
| 1.5 | 2.6.27 |
| 1.6 | 2.6.29 |
| 2.2 | 2.6.32 |
| 2.3 | 2.6.35 |
| 3.0 | 2.6.36 |
| 4.0 | 3.0.1 |
| 4.1 | 3.0.31 |
| 4.2 | 3.4.0 |
| 4.3 | 3.4.39 |
| 4.4 | 3.8 |
| 5.0 | 3.16.1 |
| 6.0 | 3.18.1 |
| 7.0 | 4.4.1 |

The permission model

As shown in the following screenshot, any Android application must be granted permissions by the user to access sensitive functionality, such as the internet, the dialer, and so on. This provides an opportunity for the user to know in advance which functions on the device are being accessed by the application. Simply put, the user's permission is required before an app can perform anything potentially malicious (stealing data, compromising the system, and so on). This model helps the user to prevent attacks, but if the user is unaware and gives away a lot of permissions, it leaves them in trouble (remember, when it comes to installing malware on any device, the weakest link is always the user).

Until Android 6.0, users needed to grant permissions at install time: they had to either accept all the permissions or not install the application. Starting from Android 6.0, users grant permissions to apps while the app is running. This new permission system also gives the user more control over the app's functionality by allowing selective permissions. For example, a user can deny a particular app access to their location but provide access to the internet. The user can revoke permissions at any time from the app's Settings screen.

Application sandbox

In Linux systems, each user is assigned a unique user ID (UID), and users are segregated so that one user cannot access the data of another user. However, all applications under a particular user run with the same privileges. Similarly, in Android, each application runs as a unique user. In other words, a UID is assigned to each application, and it runs as a separate process. This concept ensures an application sandbox at the kernel level. The kernel manages the security restrictions between applications by making use of existing Linux concepts such as UIDs and GIDs. If an application attempts to do something malicious, say read the data of another application, this is not permitted, as the application does not have the required user privileges. Hence, the operating system protects an application from accessing the data of another application.
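To make the sandbox idea concrete, here is a minimal sketch of the discretionary owner/group/other test the Linux kernel applies when one UID tries to read another UID's file. This is an editor's illustration of the underlying Linux check in Python, not Android code:

```python
import os
import stat

def may_read(path):
    """Approximate the kernel's discretionary access check that Android reuses:
    each app has its own UID, so files owned by one app are unreadable by
    another unless explicitly shared."""
    st = os.stat(path)
    if os.getuid() == st.st_uid:
        return bool(st.st_mode & stat.S_IRUSR)   # owner bit
    if st.st_gid in os.getgroups():
        return bool(st.st_mode & stat.S_IRGRP)   # group bit
    return bool(st.st_mode & stat.S_IROTH)       # "other" bit

# On a typical Linux system (paths may differ elsewhere):
print(may_read("/etc/shadow"))    # False for a normal, non-root user
print(may_read("/etc/hostname"))  # True: world-readable
```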
Secure inter-process communication

Android offers secure inter-process communication, through which an activity in one application can send messages to another activity in the same application or a different application. To achieve this, Android provides inter-process communication (IPC) mechanisms: intents, services, content providers, and so on.

Application signing

It is mandatory that all installed applications be digitally signed. Developers can place their applications in Google's Play Store only after signing them. The private key with which the application is signed is held by the developer. Using the same key, a developer can provide updates to their application, share data between their applications, and so on.

Security-Enhanced Linux

Security-Enhanced Linux (SELinux) is a security feature that was introduced in Android 4.3 and fully enforced in Android 5.0. Until this addition, Android security was based on Discretionary Access Control (DAC), which means applications can ask for permissions, and users can grant or deny those permissions. Thus, malware can create havoc on phones by gaining those permissions. But SE Android uses Mandatory Access Control (MAC), which ensures that applications work in isolated environments. Hence, even if a user installs a malware app, the malware cannot access the OS and corrupt the device. SELinux is used to enforce MAC over all processes, including those running with root privileges. SELinux operates on the principle of default denial: anything that is not explicitly allowed is denied. SELinux can operate in one of two global modes: permissive mode, in which permission denials are logged but not enforced, and enforcing mode, in which denials are both logged and enforced.

Full Disk Encryption

With Android 6.0 Marshmallow, Google has mandated Full Disk Encryption (FDE) for most devices, provided that the hardware meets certain minimum standards. Encryption is the process of converting data into ciphertext using a secret key. On Android devices, full disk encryption refers to the process of encrypting all user data using a secret key. This key is then encrypted by the lock screen PIN/pattern/password before being securely stored in a trusted location. Once a device is encrypted, all user-created data is automatically encrypted before being written to disk, and all reads automatically decrypt data before returning it to the calling process. Full disk encryption in Android works only with an Embedded Multimedia Card (eMMC) and similar flash devices that present themselves to the kernel as block devices.

Starting from Android 7.x, Google decided to shift from full-disk encryption to file-based encryption. In file-based encryption, different files are encrypted with different keys, so they can be unlocked independently without requiring an entire partition to be decrypted at once. As a result, the system can now decrypt and use the files needed to boot, and show notifications, without having to wait until the user unlocks the phone.

Trusted Execution Environment

A Trusted Execution Environment (TEE) is an isolated area (typically a separate microprocessor) intended to guarantee the security of data stored inside it, and to execute code with integrity. The main processor on mobile devices is considered untrusted and cannot be used to store secret data (such as cryptographic keys). Hence, the TEE is used specifically to perform such operations, and the software running on the main processor delegates any operations that require the use of secret data to the TEE processor.
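Android implements FDE in the kernel (via dm-crypt), but the key-wrapping idea described above can be sketched in a few lines of Python. The following toy example (using the third-party cryptography package; not Android's actual implementation) encrypts data with a random data key, then wraps that data key with a key derived from the user's PIN:

```python
# pip install cryptography
import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.hashes import SHA256
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

# 1. A random data-encryption key protects the user data on disk.
data_key = Fernet.generate_key()
ciphertext = Fernet(data_key).encrypt(b"user data written to disk")

# 2. The data key itself is wrapped with a key derived from the PIN,
#    so only the PIN (or password/pattern) can unlock the disk key.
salt = os.urandom(16)
kdf = PBKDF2HMAC(algorithm=SHA256(), length=32, salt=salt, iterations=480_000)
pin_key = base64.urlsafe_b64encode(kdf.derive(b"1234"))
wrapped_data_key = Fernet(pin_key).encrypt(data_key)

# Unlock: derive the PIN key again, unwrap the data key, decrypt the data.
kdf2 = PBKDF2HMAC(algorithm=SHA256(), length=32, salt=salt, iterations=480_000)
unwrap_key = base64.urlsafe_b64encode(kdf2.derive(b"1234"))
recovered = Fernet(Fernet(unwrap_key).decrypt(wrapped_data_key)).decrypt(ciphertext)
assert recovered == b"user data written to disk"
```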
Thus, we talked about the Linux kernel layer, over which Android is built, as well as Android's security features and offerings and the role Linux plays in securing Android OS. To learn more about methods for accessing the data stored on Android devices, read our book Practical Mobile Forensics - Third Edition.

Read more:
- The kernel community attempting to make Linux more secure
- Google open sources Filament – a physically based rendering engine for Android, Windows, Linux and macOS
- Google becomes a new platinum member of the Linux Foundation


What is Seaborn and why should you use it for data visualization?

Erik Kappelman
30 Jan 2018
6 min read
Seaborn is a Python library created for enhanced data visualization. It's a very timely and relevant tool for data professionals working today, precisely because effective data visualization - and communication in general - is a particularly essential skill. Being able to bridge the gap between data and insight is hugely valuable, and Seaborn is a tool that fits comfortably in the toolchain of anyone interested in doing just that. There are, of course, a huge range of data visualization libraries out there - but if you're wondering why you should use Seaborn, put simply, it brings some serious power to the table that other tools can't quite match. Follow this Seaborn tutorial and you'll find out what makes Seaborn such a good data visualization library.

How to get started with Seaborn

To get started, I recommend becoming familiar with Anaconda, if you are not already. I find that using Anaconda and its various tools makes coding in Python, especially package and library management, a whole lot easier. So, let's load the packages we are going to need. (I am assuming you have already downloaded and set up Seaborn.)

```python
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
```

Now that we have our packages on board, let's just make a basic plot. The function below creates a series of sine functions and then graphs all of them; take a look:

```python
np.random.seed(sum(map(ord, "aesthetics")))

def sinplot(flip=1):
    x = np.linspace(0, 14, 100)
    for i in range(1, 7):
        plt.plot(x, np.sin(x + i * .5) * (7 - i) * flip)

sinplot()
plt.savefig("sin.png")
```

It's a pretty basic set of sine curves, and while it looks pretty professional and clean, it doesn't really tell us much about what makes Seaborn unique.

So what makes Seaborn different? What are the benefits of Seaborn?

Well, let's take a look at what Seaborn refers to as 'joint plots'. These plots pair a scatter plot with the distribution of each variable in the scatter plot on the axes. Let's look at the code for the next two graphs and then discuss why they matter:

```python
# The original post assumes a DataFrame `df` with columns x and y;
# here is one way to create such data
df = pd.DataFrame(np.random.randn(500, 2), columns=["x", "y"])

join1 = sns.jointplot(x="x", y="y", data=df)
join1.savefig("join1.png")

join2 = sns.jointplot(x="x", y="y", data=df, kind="kde")
join2.savefig("join2.png")
plt.clf()
```

This kind of plot isn't unique to Seaborn - I've created very similar plots in R. Here, however, the plot took one single line of code; in R, at the very least you're looking at five or six lines, and you're going to have to use the default plotting package, because I've never been able to figure out marginal plots in ggplot2. Graphs like this really show us a lot about the data we are examining. We can simultaneously see that the two sets of data are correlated and that they are both somewhat skewed and non-normal, although the y variable could probably pass as normal. If marginal plots were this easy in R, I would leverage them a whole lot more, because they are informative.

The second plot, however, is different. In fact, I hadn't really seen something like it before I learned about Seaborn. This plot uses a kernel density plot instead of a scatter plot, and the distributions are estimated smoothly instead of using histograms. This could be a helpful graph if you were specifically interested in densities and correlations as well as the distributions of the data. This could be quite beneficial in various spatial analysis applications, as well as traditional statistical fields.

The third joint plot includes a regression line in the scatter plot as well as an assessment of the fit of the linear model used. The code used to produce this plot is below:

```python
tips = sns.load_dataset('tips')
sns.jointplot(x="total_bill", y="tip", data=tips, kind="reg")
plt.savefig('join3.png')
```

The inclusion of error fields around the line helps you to better visualize the accuracy of the linear regression. Additionally, the distribution of the data is available in the margins. Normally, it would take three separate graphs to convey all of this information. Seaborn makes this much simpler: with a single line of code, we are able to create a graph that covers all of the relevant information related to this linear regression.

Another somewhat novel graph type that's available in Seaborn is the violin plot. Again, we can create this complex graph with the simple code shown below:

```python
iris = sns.load_dataset("iris")
sns.violinplot(x=iris.species, y=iris.sepal_length, data=iris)
plt.savefig("violin.png")
```

This is data from the famous Iris data set. The violin plot is essentially an amalgamation of a box plot and a kernel density estimate of a distribution. Both box plots and graphs of univariate distributions are very helpful when first beginning analysis of a dataset. Again, Seaborn takes a lot of the work out of this process by making it easy to produce single graphs that would normally take multiple graphs using other analysis tools.

The final chart I would like to show is really useful: it summarizes the results of a univariate logistic regression graphically. This is a tough thing to display, and until I came across Seaborn I had really never seen an example I would consider good. The chart is created with the code below:

```python
tips['big_tip'] = tips['tip']/tips['total_bill'] >= 0.2
sns.lmplot(x="total_bill", y="big_tip", data=tips, logistic=True, y_jitter=.03)
plt.savefig("tiplogit.png")
```

The chart displays the results of regressing a binary indicator - whether a tip was larger than 20 percent, i.e. 'big' - against the total cost of the meal. It illustrates very clearly that people are not tipping as much, at least in terms of proportions, when their meals are more expensive. Summarizing the results of logistic regressions is always challenging, but as you can see, thanks to Seaborn, you can do a pretty good job with just one line of code.

Seaborn is simply a really great library that's worth your time exploring - I hope this post has convinced you and inspired you to go and try it for yourself if you haven't already. There is always room for improvement when it comes to data visualization. Seaborn might be the improvement you need. I know I'll be using it.


ECMAScript 7 - What to expect?

Soham Kamani
22 Jun 2016
5 min read
Now that ES6 has been officially accepted, it's time to look forward to the next iteration of JavaScript: ECMAScript 7. There are many new and exciting features in ES7.

Support for asynchronous programming

Of all the new features in ES7, the most exciting one, in my view, is the addition of async and await for asynchronous programming, which occurs quite often, especially when you're trying to build applications using Node.js. To explain async and await, it's better you first see an example. Let's say you have three asynchronous operations, each one dependent on the result returned by the previous one. There are multiple ways you could do that. The most common way is to utilize callbacks. Let's take a look at the code:

```javascript
myFirstOperation(function(err, firstResult){
  mySecondOperation(firstResult, function(err, secondResult){
    myThirdOperation(secondResult, function(err, thirdResult){
      /* Do something with the third result */
    });
  });
});
```

The obvious flaw with this approach is that it leads to a situation known as callback hell. The introduction of promises simplified async programming greatly, so let's see how the code would look using promises (which were introduced with ES6):

```javascript
myFirstPromise()
  .then(firstResult => mySecondPromise(firstResult))
  .then(secondResult => myThirdPromise(secondResult))
  .then(thirdResult => {
    /* Do something with the third result */
  }, err => {
    /* Handle error */
  });
```

Now, let's see how to handle these operations using async and await:

```javascript
async function myOperations(){
  const firstResult = await myFirstOperation();
  const secondResult = await mySecondOperation(firstResult);
  const thirdResult = await myThirdOperation(secondResult);
  /* Do something with the third result */
}

try {
  myOperations();
} catch (err) {
  /* Handle error */
}
```

This looks just like synchronous code? What? Exactly! The use of async and await makes life much simpler by making async functions seem as if they are synchronous code. Under the hood, though, all of these functions execute in a nonblocking fashion, so you have the benefit of nonblocking async functions with the simplicity and readability of synchronous code. Brilliant!

Object rest and object spread

In ES6, we saw the introduction of array rest and spread operations. These new additions make it easier for you to combine and decompose arrays. ES7 takes this one level further by providing similar functionality for objects.

Object rest

This is an extension to the existing ES6 destructuring operation. On assignment of the properties during destructuring, if there is an additional ...rest parameter, all the remaining keys and values are assigned to it as another object. For example:

```javascript
const myObject = {
  lorem : 'ipsum',
  dolor : 'sit',
  amet : 'foo',
  bar : 'baz'
};

const { lorem, dolor, ...others } = myObject;
// lorem === 'ipsum'
// dolor === 'sit'
// others === { amet : 'foo', bar : 'baz' }
```

Object spread

This is similar to object rest, but is used for constructing objects instead of destructuring them:

```javascript
const obj1 = {
  amet : 'foo',
  bar : 'baz'
};

const myObject = {
  lorem : 'ipsum',
  dolor : 'sit',
  ...obj1
};

/*
myObject === {
  lorem : 'ipsum',
  dolor : 'sit',
  amet : 'foo',
  bar : 'baz'
};
*/
```

This is an alternative way of expressing the Object.assign function already present in ES6. In the preceding code, myObject is a new object, constructed using some properties of obj1 (there is no reference to obj1). The equivalent way of doing this in ES6 would be:

```javascript
const myObject = Object.assign({
  lorem : 'ipsum',
  dolor : 'sit'
}, obj1);
```

Of course, the object spread notation is much more readable, and it is the recommended way of assigning new objects if you choose to adopt it.

Observables

The Object.observe function is a great new addition for asynchronously monitoring changes made to objects. Using this feature, you will be able to handle any sort of change made to an object, along with seeing how and when that change was made. Let's look at an example of how Object.observe will work:

```javascript
const myObject = {};

Object.observe(myObject, (changes) => {
  const [{ name, object, type, oldValue }] = changes;
  console.log(`You tried to ${type} the ${name} property`);
});

myObject.foo = 'bar';
// You tried to add the foo property
```

Caveat

Although this is a good feature, as of this writing Object.observe is being tagged as obsolete, which means that this feature could be removed at any time in the future. While it's still OK to play around and experiment with it, it is recommended not to use it in production systems and larger applications.

Additional utility methods

Methods have been added to the String and Array prototypes:

Array.prototype.includes checks whether an array includes an element or not:

```javascript
[1,2,3].includes(1); // true
```

String.prototype.padLeft and String.prototype.padRight:

```javascript
'abc'.padLeft(10);  // "abc       "
'abc'.padRight(10); // "       abc"
```

String.prototype.trimLeft and String.prototype.trimRight:

```javascript
'\n\t abc \n\t'.trimLeft();  // "abc \n\t"
'\n\t abc \n\t'.trimRight(); // "\n\t abc"
```

Working with ES7 today

Many of the features mentioned here are still in the proposal phase, but you can still get started using them in your JavaScript application today! The most common tool used to get started is Babel. In case you want to make a browser application, Babel is perfect for compiling all of your code to regular ES5. Alternatively, you can use the many Babel plugins already available to use Babel with your favorite toolbelt or build system. In case you have trouble setting up your project, there are many Yeoman generators to help you get started. If you are planning to use ES7 to build a Node module or an application in Node, there is a Yeoman generator available for that as well.

About the author

Soham Kamani is a full-stack web developer and electronics hobbyist. He is especially interested in JavaScript, Python, and IoT. He can be found on Twitter at @sohamkamani and at sohamkamani.com.


Top 7 tools for virtual reality game developers

Natasha Mathur
31 Oct 2018
12 min read
According to Statista, the virtual reality software market is booming. It is projected to reach a value of around 24.5 billion U.S. dollars by 2020. Also, the estimated revenue of the virtual reality market in the year 2021 is3.56 billion U.S. dollars. This would be a huge increase from a very respectable 3.06 billion U.S. dollars back in 2016 This makes virtual reality a potentially lucrative opportunity if you’re a game developer. But it’s also one that’s a lot of fun, with plenty of creative opportunities, and which doesn’t require a load of money up front. Thanks to technological advancements in the VR space, it’s not easier than ever to build a VR game from scratch. But with so many virtual reality tools out there, it can be hard to know where to start. It leaves you stranded with plenty of options but no sense of direction. To help you out, we’ve consolidated a list of what we think are the top 7 tools to help you get started. 1.Unity 3d: the leading game engine at the cutting edge of the industry Developer: Unity Technologies Release date: 2005 Why choose Unity for virtual reality game development? In a nutshell:  it is the easiest way to get started with Virtual Reality development and doesn’t compromise on the quality of the developed game. Unity offers a huge 3D asset store, which is an online marketplace by Unity. In this asset store, you can easily find the 2D, 3D models, SDKs, templates, as well as different virtual reality tools that you can download and import directly to your game. One of the most popular tools that you can find in the Unity asset store is the VR toolkit. So for times, when you don’t want to spend time on building a character model from scratch, you can simply pick one from the asset store. This helps jump-start the game development process. Some of these assets are free, and for some, you have to pay one-time. Moreover, the documentation in Unity consists of vivid examples ( eg; Introduction to VR best practices), video tutorials, as well as live training sessions (eg; VR essentials pack demo). This is not only great news for the experienced game developer but the newbies too as unity makes it easy for you to quickly learn to build games, including the AAA quality virtual reality games. It also has an ever-growing community. So, for times when you get stuck somewhere during the game development process, a solid community will be there to offer you advice on resolving a wide range of issues. Languages Supported: Unity supports three development languages namely, c#, Boo, and UnityScript. Platforms supported: Unity supports all the platforms such as mobile, PC, web and console platforms. The free version supports Mac OS X, Android, iOS, Windows and among other mobile platforms. The paid version further supports  Nintendo Wii, Xbox 360 and PlayStation. The free version, however, is more than enough to dive right into the development process. Unity also supports all the major HMDs such as Oculus Rift, Steam VR/Vive, Playstation VR, Gear VR, Microsoft HoloLens, and Google’s Daydream View. Price: Unity has three versions, namely,  personal, plus and pro version. The personal version is completely free, Unity 3D plus is $35 per seat per month, and pro is $125 per seat per month. However, the personal version is more than enough to dive right into the development process. Learning curve: Unity 3d has a flat learning curve. It can be used with ease by both beginners and professionals alike. 
Learning resources: Unity Virtual Reality Projects - Second Edition
Unity Virtual Reality - Volume 1 [Video]
Unity Virtual Reality - Volume 2 [Video]

2. Unreal Engine 4: a free game engine with exceptional graphics and capabilities for virtual reality

Developer: Epic Games
Release date: 1998

Why choose Unreal Engine for virtual reality gaming? Unreal Engine has powered games with some of the most exceptional graphics and features in the industry, so it naturally comes with features catered towards advanced game development. For virtual reality, Unreal Engine offers an advanced cinematics system, advanced lighting capabilities, a rendering pipeline offering 90 Hz stereo framerates or faster at high resolutions, as well as tools scaling from simple to detailed scenes, environments, and characters.

Similar to Unity, Unreal Engine 4 also comes with an asset store, an online marketplace run by Unreal offering animations, blueprints, code plugins, props, environments, as well as architectural visualization. Again, just like Unity's asset store, some of the assets are paid and some are free. The documentation provided by Unreal Engine is not as rich as the one offered by Unity, and comes with basic guides and live training streams on virtual reality development. Unreal Engine 4 also has a strong community to guide you through your game development journey.

Languages supported: Unreal Engine 4 offers only C++ as its development language.

Platforms supported: UE4 supports all the latest HMDs, such as Oculus Rift, HTC Vive, Samsung Gear VR, Google VR, and Leap Motion, among others. Unreal Engine 4 lets you deploy your VR game projects to Windows PC, PlayStation 4, Xbox One, Mac OS X, iOS, Android, AR, VR, Linux, SteamOS, and HTML5. You can run the Unreal Editor on Windows, Mac OS X, and Linux. Moreover, Xbox One, PlayStation 4, and Nintendo Switch console tools and code are also available at no additional cost to registered developers for their respective platform(s).

Price: The great thing about UE4 is that it is very cost-effective for all the game nerds out there, as it's free to use, with a 5% royalty on gross product revenue after the first $3,000 per game per calendar quarter from commercial products.

Learning curve: Unreal Engine 4 has a steep learning curve and is suited mostly for professionals.

Learning resources: Exploring Unreal Engine 4 VR Editor and Essentials of VR [Video]
Unreal Engine 4: The Complete Beginner's Course [Video]

3. CryEngine: a game engine with a powerful range of assets for virtual reality games

Developer: Crytek
Release date: 2002

Why choose CryEngine for virtual reality game development? Similar to Unity and Unreal Engine, CryEngine also offers an asset store, offering tools and assets across different domains such as 3D modeling, scripts, sounds, and animations. The documentation offered by CryEngine is not as rich as Unity's, which makes it harder for beginners to approach. However, it does have an online forum which can guide experienced developers during their virtual reality game development journey. CryEngine also includes the CE# framework, a new Sandbox Editor, improved profiling, a reworked low-overhead renderer, DirectX 12 support, an advanced volumetric cloud system, a new particle system, FMOD Studio support, and Visual Studio 2015 support, all of which collectively can amp up the virtual reality game development process.
Languages supported: It supports C++, Flash, ActionScript, and Lua.

Platforms supported: CryEngine supports Windows, Linux, PlayStation 4, Xbox One, Oculus Rift, OSVR, PSVR, and HTC Vive. Mobile support is currently under development.

Price: CryEngine is free, but takes five percent of the revenues generated by each game built with CryEngine after the revenues have passed $5,000.

Learning curve: CryEngine has a steep learning curve, as for anything other than basic games you need to have a strong command of languages such as C++, Flash, ActionScript, and Lua.

Learning resources: CryENGINE Game Programming with C++, C#, and Lua
CryENGINE SDK Game Programming Essentials [Video]

4. Blender: an accessible tool for building exceptional graphics and animations

Developer: Blender Foundation
Release date: 1998

Why choose Blender for virtual reality? Blender, a modern 3D graphics package, is not only great for 3D modeling but supports the entire 3D pipeline: rigging, animation, simulation, rendering, motion tracking, video editing, and game creation. It also comes with a powerful built-in path-tracing engine called Cycles that offers stunning ultra-realistic rendering, real-time viewport preview, PBR shaders and HDR lighting support, as well as VR rendering support. It also has a solid community of developers and offers tutorials, workshops, and courses on character modeling, character animation, and Blender fundamentals.

Blender comes with add-ons for VR such as BlenderVR, which supports CAVE/VideoWall, head-mounted displays (HMDs), and external rendering modality engines. It helps with the cross-platform development of virtual reality applications, as well as the porting of scenes from one VR platform configuration to another without any need to edit the actual scene.

Platforms supported: Blender supports Windows, Mac OS, and Linux.

Price: Blender is free to use.

Learning curve: Blender has a gentle learning curve and can be used with ease by both beginners and professionals alike.

Learning resources: Building a Character using Blender 3D [Video]
Blender 3D Basics

5. Amazon Lumberyard: an accessible and fast tool for building virtual reality games

Developer: Amazon
Release date: 2015

Why choose Amazon Lumberyard for virtual reality game development? Based on CryEngine's architecture, Amazon Lumberyard is a powerful cross-platform game engine comprising tools that help you create the highest-quality games, connect your games to the vast storage of the AWS Cloud, and engage fans on Twitch.

Lumberyard's professional tools, such as its virtual reality system, use Lumberyard's Gems: self-contained packages of assets and features that can be added within your game. In fact, these Gems act as templates for you to build your own Gems, and they support all the major VR devices without requiring any engine code editing. Lumberyard is also integrated with Amazon GameLift, an AWS service for deploying, operating, and scaling dedicated game servers for session-based multiplayer games.

Lumberyard also speeds up virtual reality development with the new VR Preview function. This full VR preview function lives in the editor, and you can click it to see your scene in VR right away. This lets game developers make VR-specific adjustments to level designs right in the editor, which is quite convenient and saves a lot of time.
Platforms supported: Lumberyard supports HMDs such as Oculus Rift, HTC Vive, and Open Source Virtual Reality (OSVR). It offers support for PC, Xbox One, PlayStation 4, iOS (iPhone 5S+ and iOS 7.0+), and Android (Nexus 5 and equivalents with support for OpenGL 3.0+). Lumberyard also offers support for dedicated servers on Windows and Linux.

Price: Amazon Lumberyard is free, with no seat licenses, royalties, or subscriptions required. You only need to pay the standard AWS fees for the AWS services that you choose to use.

Learning curve: Lumberyard has a gentle learning curve and is easy to use for both novices and professionals.

Learning resources: Learning AWS Lumberyard Game Development

6. AppGameKit-VR (AGK): an easy way to build games for beginners

Developer: The Game Creators
Release date: 2017

Why choose AppGameKit-VR for virtual reality game development? AppGameKit-VR lets anyone quickly code and build apps for multiple platforms with the help of AGK's BASIC scripting system. It adds easy-to-use VR commands to the core AppGameKit script language, which delivers immersive VR experiences. It also allows full development control for SteamVR-supported head-mounted displays, touch devices, and Leap Motion hand tracking. AGK does the majority of the work for you, making it super easy to code, compile, and export apps to each platform; you mainly need to focus on your game or app idea.

AGK-VR offers 60 VR commands, ranging from diagnostic checks on the hardware and SteamVR, initialising the HMD, and creating standing or seated VR experiences, to rendering a 3D scene to the HMD. AGK also offers demos on how to get started with using these commands in your games. It also has an online forum where you can ask questions, learn, and interact with other users. The AGK script is also fully documented.

Platforms supported: AGK-VR offers support for Windows, Mac, Linux, iOS, Android (inc. Google, Amazon & Ouya), HTML5, and Raspberry Pi (free from the TGC website).

Price: AGK is available for $29.99.

Learning curve: AppGameKit-VR has a gentle learning curve, which is ideal for beginners and makes VR game development quick for the experienced.

7. Oculus Medium 2.0: software designed with virtual reality in mind

Developer: Oculus VR
Release date: 2016

Why choose Oculus Medium for building virtual reality games? Oculus Medium is a great tool that brings sculpting, modeling, painting, and creating objects for the virtual reality world together in a single package. It's a very handy tool to have during the character design process. It lets you sculpt and create a variety of 3D objects to include within your VR game, with the help of the Oculus Touch controllers alongside the Oculus Rift. It comes with features such as grid snapping, an increased layer limit, multiple lights, and 300 prefabricated stamps. It is quite simple to use, and anyone, be it a newbie or an experienced game developer, can pick it up.

The rendering engine in Oculus Medium uses Vulkan, which results in smoother frame rates and better memory management when building higher-resolution sculpts. Other than that, Oculus Medium offers tutorials for you to quickly get the hang of the tool's different features. It also has an online forum where VR artisans and developers discuss and share tips, information, and videos.

Price: Oculus Medium 2.0 is available for $30, which is quite affordable for novices and professionals alike.
Learning curve: Oculus Medium has a gentle learning curve, as it's pretty approachable for novices as well as professionals.

Each of the tools mentioned above brings something unique in terms of its abilities and features. However, keep in mind that selecting a tool solely based on its technical features is not the best idea. Rather, figure out what works best for you, depending on your experience and requirements. So which tool (or tools) are you planning to use for VR game development? Is there any tool we missed? Let us know!

Game developers say Virtual Reality is here to stay
What's new in VR Haptics?
Top 7 modern Virtual Reality hardware systems

What is a micro frontend?

Amit Kothari
08 Oct 2017
6 min read
The microservice architecture enables us to write scalable and agile backend systems. Writing independent, self-contained services gives us the flexibility to quickly add a new feature or easily change an existing one without affecting the whole system. Independently deployable services also allow us to scale our services as per demand. In this post, we will show you how you can use a similar approach for frontend applications. You will learn about micro frontend architecture, its benefits, and a strategy to break down a monolith web app into micro frontends.

What is micro frontend architecture?

Micro frontend architecture is an approach to developing a web application as a composition of small frontend apps. Instead of writing a large monolith frontend application, the application is broken down into domain-specific micro frontends, which are self-contained and can be developed and deployed independently.

Advantages of using micro frontends

Micro frontends bring the concept and benefits of microservices to frontend applications. Each micro frontend is self-contained, which allows faster delivery, as multiple teams can work on different parts of the application without affecting each other. This also gives each team the freedom to choose a different technology as required. Since micro frontends are highly decoupled, they have a lower impact on other parts of the application and can be enhanced and deployed independently.

Design considerations

Let's say we want to build an online shopping website using micro frontend architecture. Instead of developing the site as one large application, we can split the website into micro frontends. For example, the pages that display lists of products and product details can be one micro frontend, and the pages that show a user's order history can be another. The user interface is made up of multiple micro frontends, but we do not want our users to feel that different pages are parts of different apps. Here are some of the practices we can use to decompose a frontend application into smaller micro frontends without compromising user experience.

Single responsibility

The first thing to consider is how to split an application into smaller apps so that each app can be developed and deployed independently. When teams are working on different micro frontends, we want the apps to be highly decoupled, so that a change in one app does not affect the others. This can be achieved by building domain-specific micro frontends with a single responsibility and a well-defined bounded context. Just like our code, we want our micro frontends to have high cohesion and low coupling, i.e., all related code should be close together and less dependent on other modules. If we take the example of our online shopping site again, we want all the product-related UI components in the product micro frontend and all the order-related functionality in the order micro frontend. Let's say we have a user dashboard screen where users can see information from different domains: their pending orders, and also products which are on special. Instead of creating a dashboard micro frontend, it is recommended to have the pending-orders UI component as part of the order micro frontend and the product-related components as part of the product micro frontend. This will allow us to split our system vertically and have domain-specific frontend and backend services.

Common interface for communication and data exchange

For micro frontends to work harmoniously as a single web application, they need a common and consistent way to communicate with each other. Even if they are highly independent, they still need to talk to each other. One of the common approaches is to have an application that works as an integration layer. The app can work as a container to render the different micro frontends and also facilitate communication between them. For example, in our online shopping website, once a user submits an order through the shopping cart micro frontend, we want to take the user to their order list screen. Since both the order and shopping cart micro frontends are highly decoupled and do not know about each other, we can use the container app as the orchestration layer. On receiving an order submission event from the shopping cart micro frontend, the container app will navigate the user to the order micro frontend. The container app can also be used to handle cross-cutting concerns like user session management, analytics, etc. This approach works well with existing monolith frontends, where the existing monolith application can work as the container, and any new feature can be independently developed as a micro frontend and integrated into the existing app. Existing functionality can also be extracted and rewritten as micro frontends as required.
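The orchestration pattern just described is language-agnostic. As a minimal, hedged sketch of the idea (written in Kotlin purely for illustration; in a real browser this layer would typically be JavaScript or TypeScript), here is one way a container might register micro frontends and route events between them. All of the names here, such as MicroFrontend, AppEvent, and the order:submitted event, are invented for this example and do not come from any specific micro frontend library:

// A minimal contract every micro frontend agrees to implement.
interface MicroFrontend {
    val name: String
    // Called by the container; the app publishes domain events through `publish`.
    fun mount(publish: (AppEvent) -> Unit)
}

data class AppEvent(val type: String, val payload: Any? = null)

// The container app: renders the micro frontends and orchestrates communication.
class Container {
    private val apps = mutableListOf<MicroFrontend>()

    fun register(app: MicroFrontend) {
        apps += app
        app.mount { event -> route(event) }
    }

    // Cross-cutting orchestration lives here, not inside the individual apps.
    private fun route(event: AppEvent) {
        when (event.type) {
            "order:submitted" -> println("navigating to the order micro frontend")
        }
    }
}

fun main() {
    val cart = object : MicroFrontend {
        override val name = "shopping-cart"
        override fun mount(publish: (AppEvent) -> Unit) {
            // The cart only publishes the event; it never knows the order app exists.
            publish(AppEvent("order:submitted", mapOf("orderId" to 42)))
        }
    }
    Container().register(cart)
}

The key design point is that the shopping cart never references the order micro frontend directly; the container alone decides what happens after an event, which is what keeps the two apps decoupled.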
Consistent look and feel

Although our user interface is divided into multiple micro frontends, we still want our users to feel as if they are interacting with a single application. We want our apps to have a consistent look and feel, and also the ability to make UI changes easily across multiple apps. For example, we should be able to change the font or the primary colors across multiple micro frontends. This can be done by sharing CSS and assets like images, fonts, and icons. We also want the apps to use the same UI components; for example, if we have a date picker on multiple screens, we want all the date pickers to look the same. This can be achieved by creating a common library of UI components which can be shared by the micro frontends. Using shared assets and a UI component library will allow us to make changes easily, instead of having to update multiple micro frontends.

In this post, we discussed micro frontends, their benefits, and things to consider before migrating to a micro frontend architecture. To deliver faster, we want the ability to build, test, and deploy features independently, and this can be achieved by using micro frontends and microservices. Implementing micro frontends may present its own challenges, and there will be technical hurdles to overcome, but the benefits outweigh the complexity. If you are using micro frontend architecture, please share your experience with us.

About the author

Amit Kothari is a full stack software developer based in Melbourne, Australia. He has 10+ years of experience in designing and implementing software, mainly in Java/JEE. His recent experience is in building web applications using JavaScript frameworks like React and AngularJS, and backend microservices/REST APIs in Java. He is passionate about lean software development and continuous delivery.

5 things you need to learn to become a server-side web developer

Amarabha Banerjee
19 Jun 2018
6 min read
The profession of back-end web developer is in loud demand, and companies seek qualified server-side developers for their teams. The back-end specialist's comprehensive set of knowledge and skills helps them realize their potential in versatile web development projects. Before diving into what it takes to succeed at back-end development as a profession, let's look at what it's about. In simple words, the back end is that invisible part of any application that activates all its internal elements. If the front end answers the question of "how does it look", then the back end, or server-side web development, deals with "how does it work". A back-end developer is the one who deals with the administrative part of the web application, the internal content of the system, and server-side technologies such as the database, architecture, and software logic. If you intend to become a professional server-side developer, there are a few basic steps which will ease your journey. In this article we have listed five aspects of server-side development: servers, databases, networks, queues, and frameworks, which you must master to become a successful server-side web developer.

Servers and databases: At the heart of server-side development are servers, which are simply remote computers and storage devices that your browser reaches over an internet connection. Every time you ask your browser to load a web page, the data stored on the servers is accessed and sent to the browser in a certain format. The bigger the application, the larger the amount of data stored on the server side; and the larger the data, the higher the possibility of lag and slow performance. Databases are the particular formats in which the data is stored. There are two different types of databases: relational and non-relational. Both have their own pros and cons. Some of the popular databases which you can learn to take your skills up to the next level are SQL Server and MySQL (relational), and MongoDB and DynamoDB (non-relational).

Static and dynamic servers: Static servers are physical hard drives where application data, CSS and HTML files, pictures, and images are stored. Dynamic servers actually signify another layer between the server and the browser; they are often known as application servers. The primary function of these application servers is to process the data and format it as per the web page when a data-fetching operation is initiated from the browser. This makes saving data much easier, and the process of data loading becomes much faster. For example, Wikipedia's servers are filled with huge amounts of data, but the pages are not stored as HTML; rather, they are stored as raw data. When they are queried by the browser, the application server processes the data, formats it into HTML, and then sends it to the browser. This makes the process a whole lot faster and saves physical data storage. If you want to go a step ahead and think futuristic, the latest trend is moving your servers to the cloud. This means the server-side tasks are performed by different cloud-based services like Amazon AWS and Microsoft Azure. This makes your task much simpler as a back-end developer, since you simply need to decide which services you require to best run your application, and the rest is taken care of by the cloud service providers. Another aspect of server-side development that's generating a lot of interest among developers is serverless development. This is based on the concept that the cloud service providers allocate server space depending on your need, so you don't have to take care of backend resources and requirements. In a way, the name serverless is a misnomer, because the servers are still there; it's just that they are in the cloud and you don't have to bother about them. The primary role of a back-end developer in a serverless system would be to figure out the best possible services, optimize the running cost on the cloud, and deploy and monitor the system for non-stop, robust performance.

The communication protocol: The protocol which defines the data transfer between the client side and the server side is called the Hypertext Transfer Protocol (HTTP). When a search request is typed in the browser, an HTTP request with a URL is sent to the server, and the server then sends a response message indicating either that the request succeeded or that the web page was not found. When an HTML page is returned for a search query, it is rendered by the web browser. While processing the response, the browser may discover links to other resources (e.g., an HTML page usually references JavaScript and CSS files), and send separate HTTP requests to download these files. Both static and dynamic websites use exactly the same communication protocols and patterns. As we have progressed quite a long way from the initial communication protocols, newer technologies like SSL, TLS, and IPv6 have taken over the web communication domain. Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL), which is now deprecated by the Internet Engineering Task Force (IETF), are cryptographic protocols that provide communications security over a computer network. The primary reason these protocols were introduced was to protect user data and provide increased security. Similarly, newer addressing protocols had to be introduced around the late 90s to cater to the increasing number of internet users: every server needs a unique address on the network, and the initial addressing protocol, IPv4, is currently being substituted by IPv6, which has the capability to provide 2^128 (about 3.4×10^38) addresses.

Message queuing: This is one of the most important aspects of creating fast and dynamic web applications. Message queuing is the stage where data is queued as per the different responses and then delivered. This process is asynchronous, which means that the sender and the receiver need not interact with the message queue at the same time. There are some popular message queuing tools, like RabbitMQ, MQTT, and ActiveMQ, which provide real-time message queuing functionality.
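To tie the request/response cycle described above to something concrete, here is a minimal sketch of a dynamic "application server" using the JDK's built-in com.sun.net.httpserver module, written in Kotlin. The route, port, and response body are made up for illustration; it simply builds a response at request time instead of serving a pre-built static file:

import com.sun.net.httpserver.HttpServer
import java.net.InetSocketAddress

fun main() {
    // Create a server bound to port 8080 (backlog 0 = system default).
    val server = HttpServer.create(InetSocketAddress(8080), 0)
    server.createContext("/health") { exchange ->
        // The response is generated dynamically, per request.
        val body = """{"status":"ok"}""".toByteArray()
        exchange.responseHeaders.add("Content-Type", "application/json")
        exchange.sendResponseHeaders(200, body.size.toLong())
        exchange.responseBody.use { it.write(body) }
    }
    server.start()
    println("Listening on http://localhost:8080")
}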
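And as a taste of the asynchronous message queuing just described, here is a hedged sketch using RabbitMQ's official Java client from Kotlin. The queue name and message are hypothetical, and the example assumes a broker running locally on the default port; note how the producer enqueues and moves on without waiting for the consumer:

import com.rabbitmq.client.CancelCallback
import com.rabbitmq.client.ConnectionFactory
import com.rabbitmq.client.DeliverCallback

fun main() {
    // Assumes a RabbitMQ broker running locally on the default port.
    val factory = ConnectionFactory().apply { host = "localhost" }
    val connection = factory.newConnection()
    val channel = connection.createChannel()
    val queue = "orders" // hypothetical queue name
    channel.queueDeclare(queue, false, false, false, null)

    // Producer side: enqueue a message; nobody waits for the consumer.
    channel.basicPublish("", queue, null, """{"orderId":42}""".toByteArray())

    // Consumer side: handles messages asynchronously, whenever it gets to them.
    val onDeliver = DeliverCallback { _, delivery ->
        println("Received: " + String(delivery.body))
    }
    channel.basicConsume(queue, true, onDeliver, CancelCallback { })
    // The connection is left open so the asynchronous consumer keeps running.
}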
Server-side frameworks and languages: Now comes the last, but one of the most important, pointers. If you are a developer with a particular choice of language in mind, you can use a language-based framework to add functionality to your application easily; this also makes development more efficient. Some of the popular server-side frameworks are Node.js for JavaScript, Django for Python, Laravel for PHP, Spring for Java, and so on. Using these frameworks will, however, need some amount of experience in the respective languages.

Now that you have a broad understanding of what server-side web development is and what its components are, you can jump right into server-side development, databases, and protocol management to progress into a successful professional back-end web developer.

The best backend tools in web development
Preparing the Spring Web Development Environment
Is novelty ruining web development?

Uses of Machine Learning in Gaming

Natasha Mathur
22 Oct 2018
5 min read
All around us, our perception of learning and intellect is being challenged daily by the advent of new and emerging technologies. From self-driving cars and playing Go and chess, to computers being able to beat humans at classic Atari games, the advent of the group of technologies we colloquially call machine learning has come to define a new era of technological growth, one that has been compared in importance to the discovery of electricity and has already been categorized as the next human technological age.

Games and simulations are no strangers to AI technologies, and there are numerous assets available to the Unity developer for providing simulated machine intelligence. These include Behavior Trees, Finite State Machines, navigation meshes, A*, and other heuristic ways game developers use to simulate intelligence. So, why machine learning, and why now? The reason is due in large part to the OpenAI initiative, which encourages research across academia and industry to share ideas and research on AI and ML. This has resulted in an explosion of growth in new ideas, methods, and areas for research. For games and simulations, this means that we no longer have to fake or simulate intelligence. Now, we can build agents that learn from their environment and even learn to beat their human builders.

This article is an excerpt taken from the book 'Learn Unity ML-Agents – Fundamentals of Unity Machine Learning' by Micheal Lanham. In this article, we look at the role that machine learning plays in game development.

Machine learning is an implementation of artificial intelligence. It is a way for a computer to assimilate data or state and provide a learned solution or response. We often think of AI now as a broader term reflecting a "smart" system. A full game AI system, for instance, may incorporate ML tools combined with more classic AIs, like Behavior Trees, in order to simulate a richer, more unpredictable AI. We will use AI to describe a system and ML to describe the implementation.

How machine learning is useful in gaming

Game engines have embraced the idea of incorporating ML into all aspects of their products, and not just for use as a game AI. While most developers may try to use ML for gaming, it certainly helps game development in the following areas:

Map/Level Generation: There are already plenty of examples where developers have used ML to auto-generate everything from dungeons to realistic terrain. Getting this right can give a game endless replayability, but it can be some of the most challenging ML to develop.

Texture/Shader Generation: Another area that is getting the attention of ML is texture and shader generation. These technologies are getting a boost from the attention on advanced generative adversarial networks, or GANs. There are plenty of great and fun examples of this tech in action; just search for DEEP FAKES in your favorite search engine.

Model Generation: There are a few projects coming to fruition in this area that could greatly simplify 3D object construction through enhanced scanning and/or auto-generation. Imagine being able to textually describe a simple model and having ML build it for you, in real time, in a game or other AR/VR/MR app, for example.

Audio Generation: Being able to generate audio sound effects or music on the fly is already being worked on for other areas, not just games.
Yet, just imagine being able to have a custom-designed soundtrack for your game developed by ML.

Artificial Players: This encompasses many uses, from gamers using ML to play the game on their behalf, to developers using artificial players as enhanced test agents or as a way to engage players during low activity. If your game is simple enough, this could also be a way of auto-testing levels.

NPCs or Game AI: Currently, there are better patterns out there to model basic behavioral intelligence, in the form of Behavior Trees. While it's unlikely that BTs or other similar patterns will go away any time soon, imagine being able to model an NPC that may actually exhibit unpredictable but rather cool behavior. This opens all sorts of possibilities that excite not only developers but players as well.

So, we learned about the different areas of the gaming world, such as model generation, artificial players, NPCs, and level generation, where machine learning can be used extensively. If you found this post useful, be sure to check out the book 'Learn Unity ML-Agents – Fundamentals of Unity Machine Learning' to learn more machine learning concepts in gaming.

5 Ways Artificial Intelligence is Transforming the Gaming Industry
How should web developers learn machine learning?
Deep Learning in games – Neural Networks set to design virtual worlds

Why Neo4j is the most popular graph database

Amey Varangaonkar
02 Aug 2018
7 min read
Neo4j is an open source, distributed data store used to model graph problems. It departs from the traditional nomenclature of database technologies: entities are stored in schema-less structures called nodes, which are connected to other nodes via relationships, or edges. In this article, we are going to discuss the different features and use cases of Neo4j. This article is an excerpt taken from the book 'Seven NoSQL Databases in a Week' written by Aaron Ploetz et al.

Neo4j's best features

Aside from its support of the property graph model, Neo4j has several other features that make it a desirable data store. Here, we will examine some of those features and discuss how they can be utilized in a successful Neo4j cluster.

Clustering

Enterprise Neo4j offers horizontal scaling through two types of clustering. The first is the typical high-availability clustering, in which several slave servers process data overseen by an elected master. In the event that one of the instances should fail, a new master is chosen.

The second type of clustering is known as causal clustering. This option provides additional features, such as disposable read replicas and built-in load balancing, that help abstract the distributed nature of the clustered database from the developer. It also supports causal consistency, which aims to support Atomicity, Consistency, Isolation, and Durability (ACID)-compliant consistency in use cases where eventual consistency becomes problematic. Essentially, causal consistency is delivered with a distributed transaction algorithm that ensures that a user will be able to immediately read their own writes, regardless of which instance handles the request.

Neo4j Browser

Neo4j ships with Neo4j Browser, a web-based application that can be used for database management, operations, and the execution of Cypher queries. In addition to monitoring the instance on which it runs, Neo4j Browser also comes with a few built-in learning tools designed to help new users acclimate themselves to Neo4j and graph databases. Neo4j Browser is a huge step up from the command-line tools that dominate the NoSQL landscape.

Cache sharding

In most clustered Neo4j configurations, a single instance contains a complete copy of the data. At the moment, true sharding is not available, but Neo4j does have a feature known as cache sharding. This feature involves directing queries to instances that only have certain parts of the cache preloaded, so that read requests for extremely large data sets can be adequately served.

Help for beginners

One of the things that Neo4j does better than most NoSQL data stores is the amount of documentation and tutorials that it has made available for new users. The Neo4j website provides a few links to get started with in-person or online training, as well as meetups and conferences to become acclimated to the community. The Neo4j documentation is very well done and kept up to date, complete with well-written manuals on development, operations, and data modeling. The blogs and videos by the Neo4j, Inc. engineers are also quite helpful in getting beginners started on the right path.

Additionally, when first connecting to your instance/cluster with Neo4j Browser, the first thing that is shown is a list of links aimed at beginners. These links direct the user to information about the Neo4j product, graph modeling and use cases, and interactive examples. In fact, executing the play movies command brings up a tutorial that loads a database of movies.
This database consists of various nodes and edges designed to illustrate the relationships between actors and their roles in various films.

Neo4j's versatility demonstrated in its wide use cases

Because of Neo4j's focus on node/edge traversal, it is a good fit for use cases requiring the analysis and examination of relationships. The property graph model helps to define those relationships in meaningful ways, enabling the user to make informed decisions. Bearing that in mind, there are several use cases for Neo4j (and other graph databases) that seem to fit naturally.

Social networks

Social networks seem to be a natural fit for graph databases. Individuals have friends, attend events, check in to geographical locations, create posts, and send messages. All of these different aspects can be tracked and managed with a graph database such as Neo4j. Who can see a certain person's posts? Friends? Friends of friends? Who will be attending a certain event? How is a person connected to others attending the same event? In small numbers, these problems could be solved with a number of data stores. But what about an event with several thousand people attending, where each person has a network of 500 friends? Neo4j can help to solve a multitude of problems in this domain and appropriately scale to meet increasing levels of operational complexity.

Matchmaking

Like social networks, Neo4j is also a good fit for solving the problems presented by matchmaking or dating sites. In this way, a person's interests, goals, and other properties can be traversed and matched to profiles that share certain levels of similarity. Additionally, the underlying model can also be applied to prevent certain matches or block specific contacts, which can be useful for this type of application.

Network management

Working with an enterprise-grade network can be quite complicated. Devices are typically broken up into different domains, sometimes have physical and logical layers, and tend to share a delicate relationship of dependencies with each other. In addition, networks might be very dynamic because of hardware failure/replacement and organization and personnel changes. The property graph model can be applied to adequately work with the complexity of such networks. In a use case study with Enterprise Management Associates (EMA), this type of problem was reported as an excellent fit for capturing and modeling the interdependencies that can help to diagnose failures. For instance, if a particular device needs to be shut down for maintenance, you would need to be aware of the other devices and domains that depend on it, in a multitude of directions. Neo4j allows you to capture that easily and naturally, without having to define a whole mess of linear relationships between each device. The path of relationships can then be easily traversed at query time to provide the necessary results.

Analytics

Many scalable data store technologies are not particularly suitable for business analysis or online analytical processing (OLAP) uses. When working with large amounts of data, coalescing desired data can be tricky with relational database management systems (RDBMS). Some enterprises will even duplicate their RDBMS into a separate system for OLAP, so as not to interfere with their online transaction processing (OLTP) workloads. Neo4j can scale to present meaningful data about relationships between different enterprise-marketing entities, which is crucial for businesses.
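As a concrete taste of what a traversal query looks like, here is a hedged sketch in Kotlin using the official Neo4j Java driver against the built-in movies tutorial dataset mentioned earlier. The connection details (URI and credentials) are placeholder assumptions for a local instance, and the co-actor query is our own illustration, not an example from the book:

import org.neo4j.driver.AuthTokens
import org.neo4j.driver.GraphDatabase
import org.neo4j.driver.Values

fun main() {
    // Placeholder connection details for a local Neo4j instance.
    val driver = GraphDatabase.driver(
        "bolt://localhost:7687",
        AuthTokens.basic("neo4j", "password")
    )
    driver.use {
        it.session().use { session ->
            // Traverse ACTED_IN relationships to find everyone who
            // appeared in a film alongside the given actor.
            val query = "MATCH (a:Person {name: \$name})-[:ACTED_IN]->(m:Movie)" +
                "<-[:ACTED_IN]-(co:Person) " +
                "RETURN DISTINCT co.name AS coActor, m.title AS movie"
            val result = session.run(query, Values.parameters("name", "Tom Hanks"))
            result.forEach { record ->
                println(record.get("coActor").asString() + " - " + record.get("movie").asString())
            }
        }
    }
}

Notice that the query expresses the relationship path directly; there are no join tables, which is exactly the node/edge traversal strength the article describes.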
Recommendation engines

Many brick-and-mortar and online retailers collect data about their customers' shopping habits. However, many of them fail to properly utilize this data to their advantage. Graph databases such as Neo4j can help assemble the bigger picture of customer habits for searching and purchasing, and even take trends in geographic areas into consideration. For example, purchasing data may contain patterns indicating that certain customers tend to buy certain beverages on Friday evenings. Based on the relationships of other customers to products in that area, the engine could also suggest things such as cups, mugs, or glassware. Is the customer also a male in his thirties from a sports-obsessed area? Perhaps suggesting a mug supporting the local football team may spark an additional sale. An engine backed by Neo4j may be able to help a retailer uncover these small troves of insight.

To summarize, we saw that Neo4j is widely used across enterprises and businesses, primarily due to its speed, efficiency, and accuracy. Check out the book Seven NoSQL Databases in a Week to learn more about Neo4j and other popularly used NoSQL databases such as Redis, HBase, and MongoDB.

Read more:
Top 5 programming languages for crunching Big Data effectively
Top 5 NoSQL Databases
Is Apache Spark today's Hadoop?

Forget C and Java. Learn Kotlin: the next universal programming language

Sugandha Lahoti
11 May 2018
14 min read
Kotlin is fast moving towards becoming the universal programming language. What is a universal programming language? From a simplistic view, the expectation could be that one language is used for all types of programming. While that may be far-fetched in today's complex world, the expectation could be adjusted to one language becoming the dominant programming language. Most certainly, it is the single most important language to master.

This article is an excerpt from the book Kotlin Blueprints, written by Ashish Belagali, Hardik Trivedi, and Akshay Chordiya. With this book, you will learn how to design and prototype professional-grade applications using various features of Kotlin.

Historically, different languages have used strategies appropriate for their times to become the universal programming language:

In the 1970s, C became the universal programming language. Prior to C, the programming languages of the world were divided between low-level and high-level languages, the former being the languages that were close to machine code and the latter being ones that were more concise and worked better for human understanding. The C programming language was developed as a single language that could work as both a low-level and a high-level language. The Unix operating system was showcased as one that was built ground-up entirely on C, without needing another low-level language.

In the 1990s, Java became the universal programming language with the Write Once Run Anywhere strategy. Prior to Java, developers needed to create different programs to run on different platforms (different operating systems running on different hardware needed different programs to run). However, with Java, programs could be written targeting a single platform, namely the Java Virtual Machine (JVM). The JVM is available on all the popular platforms and takes care of all platform-specific nuances. The Java language became the universal language by being the language in which to write programs for the JVM.

Another two decades have passed, and the stage is all set to welcome the next universal language. Let's examine Kotlin's strategy to become that. Why can Kotlin be described as a better Java than any other language? How does Kotlin address areas beyond the Java world? What is Kotlin's winning strategy? What does this all mean for a smart developer?

Why Kotlin vs Java?

Why is being a better Java important for a language? For over a decade, Java has consistently been the world's most widely used programming language. Therefore, a language that gets crowned as being a better Java should automatically attract the attention of the world's single largest community of programmers: the Java programmers.

The TIOBE index is widely referred to as a gauge of the popularity of programming languages. As of August 2017, it shows an interesting point: while Java has been the #1 programming language in the world for the last 15 years or so, it has been in a steady state of decline for many years now. Many new languages have kept coming, and existing ones have kept improving, chipping steadily into Java's developer base; however, none of them have managed to take the #1 position from Java so far. Today, Kotlin is poised to become the most serious challenger for the better Java crown, and subsequently, to take the first place, for reasons that we will see shortly.
Presently in 41st place, Kotlin is marching ahead at a fast pace. In May 2017, Google announced Kotlin to be an officially supported language for Android development, in league with Java. This has turned out to be a major boost for Kotlin, and the rate of its adoption has accelerated ever since.

Why not other languages?

Many languages prior to Kotlin have tried to become a better Java. Let's see why they could never become one. Every language attracts the programmer community by giving them the ability to do something that was cumbersome before. Adoption is directly driven by how much value the promise has for programmers and how much faith the community can put into that promise. All languages or frameworks that claimed to be a better Java and offered something worthwhile beyond what Java offers also took something back in turn. Here are a few examples:

The .NET framework has been the longtime rival of Java and has supported multiple languages from day one. Based on the lessons learned from Java, the .NET designers came up with better language constructs. However, the biggest hurdle for .NET was that it was a proprietary technology, and that created an impediment to its adoption. Also, .NET was more aggressive in adding newer language constructs. While the framework evolved quickly as a result of that, it broke its backward compatibility many times.

Ruby (and Python) offered shortened code, enticing programming constructs, and greater expressiveness as opposed to the boring Java; however, they took away static typing support (which helps to make robust programs) and made the programs slower.

Scala offered shortened code and advanced programming constructs without sacrificing type safety. However, Scala is complex and has a substantially high learning curve. It supports multiple coding styles, so there is a danger that Scala code written by one developer may not be understood easily by another. These are risk factors for any project that includes a team of developers and where the application is expected to be supported over a long period, which is true of most applications anyway.

Why Kotlin?

Unlike other languages, Kotlin offers a lot of power over Java while not taking anything away:

Kotlin is interoperable with Java. It is possible to write applications containing both Java and Kotlin code, calling one from the other. Calling Java code from Kotlin is simpler, as opposed to the other way around, but the former will be the case most of the time anyway, where new Kotlin code is added on top of legacy Java code. Kotlin is interoperable and can use all the Java libraries and legacy code without any code conversion. It is possible to inject Kotlin into a Java project without boiling the ocean.

Concise yet expressive code

While being interoperable, Kotlin code is far superior to Java code. Like Scala, Kotlin uses type inference to cut down on a lot of boilerplate code and make it concise. (Type inference is a better feature than dynamic typing, as it reduces the code without sacrificing the robustness of the end product.) However, unlike Scala, Kotlin code is easy to read and understand, even for someone who may not know Kotlin.
Kotlin's data class construct is the most prominent example of this conciseness:

data class Employee(val id: Long, var name: String)

Compared to its Java counterpart, the preceding line has packed into it the class definition, member variables, constructor, getter-setter methods, and also utility methods such as equals() and hashCode(). This would easily take 15-20 lines of Java code. The data class construct is not an isolated example. There are many others where the syntax is concise and expressive. Consider the following as additional examples:

Kotlin's default values for function parameters save the need to overload functions.
Kotlin's extension functions can be used to add domain-specific functionality to existing classes, making it easy for someone from the domain to understand.

Enhanced robustness

Statically typed languages have a built-in safety net because of the assurance that the compiler will catch any incorrect type cast. Both Java and Kotlin support static typing. With Java generics, introduced in Java 1.5, they both fare better than the Java releases prior to 1.5. However, Kotlin takes a big step further in addressing the null pointer error. This error causes a lot of checks in Java programs:

String s = someOperation();
if (s != null) {
    ...
}

One can see that the null check is not needed if someOperation() never returns null. On the other hand, it is possible for a programmer to omit the null check while someOperation() returning null is a valid case. With Kotlin, the definition of someOperation() itself will return either String or String?, and then there are implications on the subsequent code, so the developer just cannot go wrong. Refer to the following:

fun someOperation(): String    // not nullable
fun someOperation(): String?   // nullable

val s = someOperation()
if (s != null) {               // null check not needed - editor warning
    ...
}

val s = someOperation()
val n = s.length               // error, null check imposed
val n = s?.length ?: 0         // handling the null condition

One may point out that Java developers can use the @Nullable and @NotNull annotations or the Optional class; however, these were added quite late, most developers are not aware of them, and they can always get away with not using them, as the code does not break. Finally, they are not as elegant as putting a question mark. There is also a subtle point here. If a Kotlin developer is careless, he would write just the type name, which would automatically become a non-nullable declaration. If he wanted to make it nullable, he would have to key in that extra question mark deliberately. Thus, you are always on the side of caution, and that is as far as keeping the code robust is concerned.

Another example of this robustness is found in the var/val declarations. Seasoned programmers know that most variables get a value assigned to them only once. In Kotlin, while declaring the variable, you choose val for such a variable. At the time of variable declaration, the programmer has to select between val and var, and so he puts some thought into it. On the other hand, in Java, you can get away with just declaring the type with its name, and you will rarely find any Java code that defines a variable with the final keyword, which is Java's way of declaring that the variable can be assigned a value only once. Basically, with the same maturity level of programmers, you can expect relatively more robust code in Kotlin as opposed to Java, and that's a big win from the business perspective.
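As a quick illustration of the constructs mentioned above (default parameter values, extension functions, and val/var), here is a small self-contained sketch. All of the names in it are invented for illustration and do not come from the book:

// Default parameter values remove the need for overloaded variants.
fun connect(host: String, port: Int = 5432, useTls: Boolean = true) =
    println("connecting to $host:$port (tls=$useTls)")

// An extension function adds domain-specific behavior to an existing class.
fun String.toSlug(): String = lowercase().replace(' ', '-')

fun main() {
    connect("db.example.com")               // defaults fill in port and useTls
    connect("db.example.com", port = 5433)  // named arguments override just one

    println("Kotlin Blueprints".toSlug())   // prints "kotlin-blueprints"

    // val: assigned once; var: reassignable.
    val id = 42        // id = 43 would not compile
    var count = 0
    count += 1
    println("$id $count")
}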
Excellent IDE support from day one

Kotlin comes from JetBrains, who also develop a well-known Java integrated development environment (IDE): IntelliJ IDEA. JetBrains developers made sure that Kotlin has first-class support in IDEA. Not only that, they also developed a Kotlin plugin for Eclipse, the most widely used Java IDE.

Contrast this with the situation when Java appeared on the scene roughly two decades ago. There was no good IDE support. Programmers were asked to use simple text editors. Coding Java was hard, with no safety net provided by an IDE, until the Eclipse editor was open sourced. In the case of Kotlin, the editor's suggestions being available from day one means that developers can learn the language faster, make fewer mistakes, and write good quality, compilable code with relative ease. Clearly, Kotlin does not want to waste any time in climbing up the ladder of popularity.

Beyond being a better Java

We saw that on the JVM platform, Kotlin is neat and quite superior. However, Kotlin has set its eyes beyond the JVM. Its strategy is to win based on its superior and modern feature set. Before we go ahead, let's list the top five appeals of Kotlin:

Static typing (like in C or Java) means that there is built-in type safety. The compiler catches any incorrect type assignments. This makes programs robust.
Kotlin is concise and expressive. Being concise implies that there is less to read and maintain. Being expressive implies better maintainability.
Being a JVM language, Kotlin programs can take advantage of the features built into the JVM, such as its cross-platform nature, memory management, high performance, and sandbox security.
Kotlin has inbuilt null safety. Null references are famous as the billion-dollar mistake, as admitted by their inventor Tony Hoare, and cost a great deal of unnecessary null checks in programs. Kotlin eliminates those and makes programs more robust.
Kotlin is easy to learn, especially for Java developers. Its syntax is clean and therefore easy to understand, because of which Kotlin programs are fun for developers to code and easy for their peers to understand and enhance. From a business angle, they are more robust and easy to maintain.

Kotlin is in the winning camp

The features of Kotlin have good validation when one considers that other languages with similar features are also growing in popularity:

The Crystal language attracts Ruby programmers by adding static typing support. Similarly, TypeScript adds static typing support to JavaScript and has become the preferred language for some JavaScript frameworks.
Scala and F# add functional programming support to traditional non-functional paradigms without sacrificing type safety and, hence, are more attractive. Kotlin uses functional programming just enough to ease out programming in a lot of cases, but not so much as to make it complex.
Like Kotlin, Swift and Rust also have inbuilt null safety. Kotlin and Swift are often compared, as their syntaxes resemble one another a lot.
For server-side languages designed after the emergence of parallel computing, providing inbuilt constructs that ease the programmer's work became one of the chief requirements. One can find such constructs in both Kotlin (coroutines) and Rust.

Go native strategy

The Kotlin developers figured that the same strategy that is used on the JVM platform could be used on other platforms too.
On no platform does Kotlin disrupt the platform's existing technology:

The JVM works with Java bytecode, and Kotlin gives an alternative to Java for generating the same bytecode. (By no means is Kotlin the first alternative, as there are already 200+ languages that work with the JVM, but it is the most elegant one for all the reasons that we have seen previously.)
On modern browsers, where JavaScript is the de facto standard, Kotlin can work by transpiling to JavaScript. Again, this means that Kotlin is friendly with existing browsers without making any special effort.
On the Node.js platform, where JavaScript is used on the server side, your Kotlin code transpiles into JavaScript, and hence there are no changes needed in the Node.js framework for Kotlin to run.
In a similar way, Kotlin/Native plans to work with other technologies in a native way.

Since the platform's technology is not disrupted, zero changes are needed at the platform level to adopt Kotlin. Kotlin's compatibility with a given platform can be taken for granted from day one. This eliminates a big business risk.

Kotlin's winning strategy

Kotlin's winning strategy is the sum of the various factors that we have seen previously. It has a two-pronged strategy: win over developers with the coolness of the language and the ease of working with it, and win over business users with its business benefits. The benefits of using Kotlin also include:

The growing popularity of the language
Endorsement from Google to make Kotlin an officially supported language in May 2017
Kotlin-specific development frameworks emerging
Leading Java frameworks, such as Spring, offering Kotlin-specific improvements
The growing number of applications being tried out with Kotlin
The user groups spread across Kotlin developer hubs
The growing number of technology companies using Kotlin

With this in mind, the winning strategy for smart programmers is to master Kotlin and learn to work with Kotlin on various platforms. Being ahead of the curve, as opposed to following the world after Kotlin is already big, will be a quick path to being recognized as a leader. Further chapters of this book will help you in exactly this mission. Apart from going through this book, we strongly suggest you join the community:

Join the Kotlin weekly mailing list at http://kotlinweekly.net.
Join the nearest Kotlin user group at http://kotlinlang.org/community/user-groups.html.
Kotlin's community on Slack is at https://kotlinlang.slack.com/.

We saw how Kotlin is well positioned to take off as the universal programming language. It offers an opportunity for smart programmers to establish themselves at the forefront of this rising tide. This article was taken from the book Kotlin Blueprints. If you liked reading this piece, check out the book to build comprehensive applications using Kotlin features.

Getting started with Kotlin programming
Build your first Android app with Kotlin
How to convert Java code into Kotlin