
How-To Tutorials - Mobile

213 Articles

Understanding the Foundation of Protocol-oriented Design

Expert Network
30 Jun 2021
7 min read
When Apple announced Swift 2 at the World Wide Developers Conference (WWDC) in 2015, they also declared that Swift was the world's first protocol-oriented programming (POP) language. From its name, we might assume that POP is all about protocols; however, that would be a wrong assumption. POP is about much more than protocols; it is a new way of not only writing applications but also thinking about programming. This article is an excerpt from the book Mastering Swift 5.3, 6th Edition by Jon Hoffman. In this article, we will discuss protocol-oriented design and how we can use protocols and protocol extensions to replace superclasses. We will look at how to define animal types for a video game in a protocol-oriented way.

Requirements

When we develop applications, we usually have a set of requirements to develop against. With that in mind, let's define the requirements for the animal types that we will be creating in this article:

- We will have three categories of animals: land, sea, and air.
- Animals may be members of multiple categories. For example, an alligator can be a member of both the land and sea categories.
- Animals may attack and/or move when they are on a tile that matches the categories they are in.
- Animals will start off with a certain number of hit points, and if those hit points reach 0 or less, they will be considered dead.

POP Design

We will start off by looking at how we would design the animal types needed and the relationships between them. Figure 1 shows our protocol-oriented design:

[Figure 1: Protocol-oriented design]

In this design, we use three techniques: protocol inheritance, protocol composition, and protocol extensions.

Protocol inheritance

Protocol inheritance is where one protocol can inherit the requirements from one or more additional protocols. Unlike a class in Swift, which can have only one superclass, a protocol can inherit requirements from multiple protocols. Protocol inheritance is extremely powerful because we can define several smaller protocols and mix and match them to create larger ones. You will want to be careful not to create protocols that are too granular, because they will become hard to maintain and manage.

Protocol composition

Protocol composition allows types to conform to more than one protocol. With protocol-oriented design, we are encouraged to create multiple smaller protocols with very specific requirements. Let's look at how protocol composition works.
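To make these two techniques concrete before we build the full design, here is a minimal sketch; it is not from the book, and the Nameable, Ageable, NamedAged, and Person types are invented purely for illustration:

```swift
protocol Nameable {
    var name: String { get }
}

protocol Ageable {
    var age: Int { get }
}

// Protocol inheritance: NamedAged inherits the requirements of both
// Nameable and Ageable, something a class could not do with superclasses.
protocol NamedAged: Nameable, Ageable { }

// Protocol composition: the & syntax requires conformance to several
// protocols at once, right at the point of use.
func describe(_ value: Nameable & Ageable) {
    print("\(value.name) is \(value.age)")
}

struct Person: NamedAged {
    let name = "Jon"
    let age = 50
}

describe(Person())  // Prints "Jon is 50"
```

Note that Person only declares conformance to NamedAged, yet it satisfies the composed requirement in describe(), because NamedAged carries both sets of requirements.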
Protocol inheritance and composition are really powerful features, but they can also cause problems if used incorrectly. On their own they may not seem that powerful; however, when we combine them with protocol extensions, we get a very powerful programming paradigm. Let's look at how powerful this paradigm is.

Protocol-oriented design: putting it all together

We will begin by writing the Animal superclass as a protocol:

```swift
protocol Animal {
    var hitPoints: Int { get set }
}
```

In the Animal protocol, the only item that we are defining is the hitPoints property. If we were putting in all the requirements for an animal in a video game, this protocol would contain all the requirements that would be common to every animal; for our example, we only need to add the hitPoints property. Next, we need to add an Animal protocol extension, which will contain the functionality that is common to all types that conform to the protocol. Our Animal protocol extension contains the following code:

```swift
extension Animal {
    mutating func takeHit(amount: Int) {
        hitPoints -= amount
    }

    func hitPointsRemaining() -> Int {
        return hitPoints
    }

    func isAlive() -> Bool {
        return hitPoints > 0
    }
}
```

The Animal protocol extension contains the takeHit(), hitPointsRemaining(), and isAlive() methods. Any type that conforms to the Animal protocol will automatically inherit these three methods. Now let's define our LandAnimal, SeaAnimal, and AirAnimal protocols. These protocols will define the requirements for the land, sea, and air animals, respectively:

```swift
protocol LandAnimal: Animal {
    var landAttack: Bool { get }
    var landMovement: Bool { get }

    func doLandAttack()
    func doLandMovement()
}

protocol SeaAnimal: Animal {
    var seaAttack: Bool { get }
    var seaMovement: Bool { get }

    func doSeaAttack()
    func doSeaMovement()
}

protocol AirAnimal: Animal {
    var airAttack: Bool { get }
    var airMovement: Bool { get }

    func doAirAttack()
    func doAirMovement()
}
```

These three protocols contain only the functionality needed for their particular type of animal, and each contains only four requirements. This makes our protocol design much easier to read and manage. The design is also much safer because the functionality for each animal type is isolated in its own protocol rather than being embedded in a giant superclass. We are also able to avoid the use of flags to define the animal category: instead, the protocols a type conforms to define its category. In a full design, we would probably need to add some protocol extensions for each of the animal types, but we do not need them for our example here. Now, let's look at how we would create our Lion and Alligator types using protocol-oriented design:

```swift
struct Lion: LandAnimal {
    var hitPoints = 20
    let landAttack = true
    let landMovement = true

    func doLandAttack() { print("Lion Attack") }
    func doLandMovement() { print("Lion Move") }
}

struct Alligator: LandAnimal, SeaAnimal {
    var hitPoints = 35
    let landAttack = true
    let landMovement = true
    let seaAttack = true
    let seaMovement = true

    func doLandAttack() { print("Alligator Land Attack") }
    func doLandMovement() { print("Alligator Land Move") }
    func doSeaAttack() { print("Alligator Sea Attack") }
    func doSeaMovement() { print("Alligator Sea Move") }
}
```

Notice that we specify that the Lion type conforms to the LandAnimal protocol, while the Alligator type conforms to both the LandAnimal and SeaAnimal protocols. As we saw previously, having a single type conform to multiple protocols is called protocol composition, and it is what allows us to use smaller protocols rather than one giant, monolithic superclass. Both the Lion and Alligator types originate from the Animal protocol; therefore, they inherit the functionality added by the Animal protocol extension. If our animal-type protocols also had extensions, the types would inherit those functions as well. With protocol inheritance, composition, and extensions, our concrete types contain only the functionality needed by the particular animal types they conform to.
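Because both concrete types pick up takeHit(), hitPointsRemaining(), and isAlive() from the Animal protocol extension, we can exercise those methods right away. A quick sketch (the hit amounts are arbitrary):

```swift
var lion = Lion()
lion.takeHit(amount: 15)
print(lion.hitPointsRemaining())   // Prints 5 (20 - 15)
lion.takeHit(amount: 10)
print(lion.isAlive())              // Prints false (hit points are now -5)
```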
Since the Lion and Alligator types originate from the Animal protocol, we can use polymorphism. Let's look at how this works:

```swift
var animals = [Animal]()
animals.append(Alligator())
animals.append(Alligator())
animals.append(Lion())

for (index, animal) in animals.enumerated() {
    if let _ = animal as? AirAnimal {
        print("Animal at \(index) is Air")
    }
    if let _ = animal as? LandAnimal {
        print("Animal at \(index) is Land")
    }
    if let _ = animal as? SeaAnimal {
        print("Animal at \(index) is Sea")
    }
}
```

In this example, we create an array named animals that holds Animal types. We then create two instances of the Alligator type and one instance of the Lion type and add them to the animals array. Finally, we use a for-in loop to loop through the array and print out each animal's type based on the protocols the instance conforms to.

Upgrade your knowledge and become an expert in the latest version of the Swift programming language with Mastering Swift 5.3, 6th Edition by Jon Hoffman.

About

Jon Hoffman has over 25 years of experience in the field of information technology. He has worked in the areas of system administration, network administration, network security, application development, and architecture. Currently, Jon works as an Enterprise Software Manager for Syn-Tech Systems.


How to implement data validation with Xamarin.Forms

Packt Editorial Staff
03 Feb 2020
8 min read
In software, data validation is a process that ensures the validity and integrity of user input; it usually involves checking that the data is in the correct format and contains an acceptable value. In this Xamarin tutorial, you'll learn how to implement it with Xamarin.Forms. This article is an excerpt from the book Mastering Xamarin.Forms, Third Edition by Ed Snider. The book walks you through the creation of a simple app, explaining at every step why you're doing the things you're doing, so that you gain the skills you need to use Xamarin.Forms to create your own high-quality apps.

Types of data validation in mobile application development

There are typically two types of validation when building apps: server-side and client-side. Both play an important role in the lifecycle of an app's data. Server-side validation is critical when it comes to security, making sure malicious data or code doesn't make its way into the server or backend infrastructure. Client-side validation is usually more about user experience than security. A mobile app should always validate its data before sending it to a backend (such as a web API) for several reasons, including the following:

- To provide real-time feedback to the user about any issues, instead of waiting on a response from the backend.
- To support saving data in offline scenarios where the backend is not available.
- To prevent encoding issues when sending the data to the backend.

Just as a backend server should never assume all incoming data has been validated client-side before being received, a mobile app should also never assume the backend will do its own server-side validation, even though server-side validation is a good security practice. For this reason, mobile apps should perform as much client-side validation as possible. When adding validation to a mobile app, the actual validation logic can go in a few areas of the app architecture: directly in the UI code (the View layer of a Model-View-ViewModel (MVVM) architecture), in the business logic or controller code (the ViewModel layer of an MVVM architecture), or even in the HTTP code. In most cases when implementing the MVVM pattern, it makes the most sense to include validation in the ViewModels, for the following reasons:

- The validation rules can be checked as the individual properties of the ViewModel are changed.
- The validation rules are often part of, or dependent on, business logic that exists in the ViewModel.
- Most importantly, having the validation rules implemented in the ViewModel makes them easy to test.

Adding a base validation ViewModel in Xamarin.Forms

Since validation makes the most sense in the ViewModel, we will start by creating a new base ViewModel that provides some base-level methods, properties, and events for subclassed ViewModels to leverage. This new base ViewModel will be called BaseValidationViewModel and will subclass BaseViewModel. It will also implement an interface called INotifyDataErrorInfo from the System.ComponentModel namespace. INotifyDataErrorInfo works a lot like INotifyPropertyChanged: it specifies some properties about what errors have occurred, as well as an event that is raised when the error state of a particular property changes.
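For reference, here is the shape of INotifyDataErrorInfo as declared in the System.ComponentModel namespace (the comments are ours; EventHandler lives in System and IEnumerable in System.Collections):

```csharp
public interface INotifyDataErrorInfo
{
    // True if the object as a whole currently has validation errors.
    bool HasErrors { get; }

    // Raised whenever the validation errors for a property
    // (or for the entire object) have changed.
    event EventHandler<DataErrorsChangedEventArgs> ErrorsChanged;

    // Returns the errors for the given property, or for the entire
    // object when propertyName is null or empty.
    IEnumerable GetErrors(string propertyName);
}
```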
1. Create a new class in the ViewModels folder named BaseValidationViewModel that subclasses BaseViewModel:

```csharp
public class BaseValidationViewModel : BaseViewModel
{
    public BaseValidationViewModel(INavService navService)
        : base(navService)
    {
    }
}
```

2. Update BaseValidationViewModel to implement INotifyDataErrorInfo as follows:

```csharp
public class BaseValidationViewModel : BaseViewModel, INotifyDataErrorInfo
{
    readonly IDictionary<string, List<string>> _errors =
        new Dictionary<string, List<string>>();

    public BaseValidationViewModel(INavService navService)
        : base(navService)
    {
    }

    public event EventHandler<DataErrorsChangedEventArgs> ErrorsChanged;

    public bool HasErrors =>
        _errors?.Any(x => x.Value?.Any() == true) == true;

    public IEnumerable GetErrors(string propertyName)
    {
        if (string.IsNullOrWhiteSpace(propertyName))
        {
            return _errors.SelectMany(x => x.Value);
        }

        if (_errors.ContainsKey(propertyName) && _errors[propertyName].Any())
        {
            return _errors[propertyName];
        }

        return new List<string>();
    }
}
```

3. In addition to implementing the required members of INotifyDataErrorInfo – ErrorsChanged, HasErrors, and GetErrors() – we also need to add a method that actually handles validating ViewModel properties. This method needs a validation rule parameter in the form of a Func<bool> and an error message to be used if the validation rule fails. Add a protected method named Validate to BaseValidationViewModel as follows:

```csharp
public class BaseValidationViewModel : BaseViewModel, INotifyDataErrorInfo
{
    // ...

    protected void Validate(Func<bool> rule, string error,
        [CallerMemberName] string propertyName = "")
    {
        if (string.IsNullOrWhiteSpace(propertyName))
            return;

        if (_errors.ContainsKey(propertyName))
        {
            _errors.Remove(propertyName);
        }

        if (rule() == false)
        {
            _errors.Add(propertyName, new List<string> { error });
        }

        OnPropertyChanged(nameof(HasErrors));
        ErrorsChanged?.Invoke(this,
            new DataErrorsChangedEventArgs(propertyName));
    }
}
```

If the validation rule Func<bool> returns false, the provided error message is added to a private list of errors – used by HasErrors and GetErrors() – mapped to the specific property that called into this Validate() method. Lastly, the Validate() method invokes the ErrorsChanged event with the caller property's name included in the event arguments. Now any ViewModel that needs to perform validation can subclass BaseValidationViewModel and call the Validate() method to check whether individual properties are valid. In the next section, we will use BaseValidationViewModel to add validation to the new entry page and its supporting ViewModel.

Adding validation to the new entry page in Xamarin.Forms

In this section we will add some simple client-side validation to a couple of the entry fields on the new entry page:

1. First, update NewEntryViewModel to subclass BaseValidationViewModel instead of BaseViewModel:

```csharp
public class NewEntryViewModel : BaseValidationViewModel
{
    // ...
}
```

Because BaseValidationViewModel subclasses BaseViewModel, NewEntryViewModel is still able to leverage everything in BaseViewModel as well.

2. Next, add a call to Validate() in the Title property setter that includes a validation rule specifying that the field cannot be left blank:

```csharp
public string Title
{
    get => _title;
    set
    {
        _title = value;
        Validate(() => !string.IsNullOrWhiteSpace(_title),
            "Title must be provided.");
        OnPropertyChanged();
        SaveCommand.ChangeCanExecute();
    }
}
```
3. Next, add a call to Validate() in the Rating property setter that includes a validation rule specifying that the field's value must be between 1 and 5:

```csharp
public int Rating
{
    get => _rating;
    set
    {
        _rating = value;
        Validate(() => _rating >= 1 && _rating <= 5,
            "Rating must be between 1 and 5.");
        OnPropertyChanged();
        SaveCommand.ChangeCanExecute();
    }
}
```

Notice we also added SaveCommand.ChangeCanExecute() to this setter as well. This is because we want to update the SaveCommand's canExecute value when this value has changed, since it now impacts the return value of CanSave(), which we will update in the next step.

4. Next, update CanSave() – the method used for the SaveCommand's canExecute function – to prevent saving if the ViewModel has any errors:

```csharp
bool CanSave() => !string.IsNullOrWhiteSpace(Title) && !HasErrors;
```

5. Finally, update the new entry page to reflect any errors by highlighting the field's label in red.

NewEntryPage.xaml:

```xml
<EntryCell x:Name="title"
           Label="Title"
           Text="{Binding Title}" />
<!-- ... -->
<EntryCell x:Name="rating"
           Label="Rating"
           Keyboard="Numeric"
           Text="{Binding Rating}" />
```

NewEntryPage.xaml.cs:

```csharp
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Linq;
using Xamarin.Forms;
using TripLog.ViewModels;

public partial class NewEntryPage : ContentPage
{
    NewEntryViewModel ViewModel => BindingContext as NewEntryViewModel;

    public NewEntryPage()
    {
        InitializeComponent();

        BindingContextChanged += Page_BindingContextChanged;
        BindingContext = new NewEntryViewModel();
    }

    void Page_BindingContextChanged(object sender, EventArgs e)
    {
        ViewModel.ErrorsChanged += ViewModel_ErrorsChanged;
    }

    void ViewModel_ErrorsChanged(object sender, DataErrorsChangedEventArgs e)
    {
        var propHasErrors =
            (ViewModel.GetErrors(e.PropertyName) as List<string>)?.Any() == true;

        switch (e.PropertyName)
        {
            case nameof(ViewModel.Title):
                title.LabelColor = propHasErrors ? Color.Red : Color.Black;
                break;
            case nameof(ViewModel.Rating):
                rating.LabelColor = propHasErrors ? Color.Red : Color.Black;
                break;
            default:
                break;
        }
    }
}
```

[Screenshot: The TripLog new entry page with client-side validation]

Now when we run the app, navigate to the new entry page, and enter an invalid value in either the Title or Rating field, the field label turns red and the Save button is disabled. Once the error has been corrected, the field label color returns to black and the Save button is re-enabled.

Learn more about mobile application development with Xamarin and the open source Xamarin.Forms toolkit with the third edition of Mastering Xamarin.Forms.

About Ed Snider

Ed Snider is a senior software developer, speaker, author, and Microsoft MVP based in the Washington D.C./Northern Virginia area. He has a passion for mobile design and development and regularly speaks about Xamarin and Windows app development in the community. Ed works at InfernoRed Technology, where his primary role is working with clients and partners to build mobile and media focused products on iOS, Android, and Windows. He started working with .NET in 2005 when .NET 2.0 came out and has been building mobile apps with .NET since 2011. Ed was recognized as a Xamarin MVP in 2015 and as a Microsoft MVP in 2017. Find him on Twitter: @edsnider


Why should you use Unreal Engine 4 to build Augmented and Virtual Reality projects

Guest Contributor
20 Dec 2019
6 min read
This is an exciting time to be a game developer. New technologies like Virtual Reality (VR) and Augmented Reality (AR) are here and growing in popularity, and a whole new generation of game consoles is just around the corner. Right now everyone wants to jump onto these bandwagons and create successful games using AR, VR and other technologies (for more detailed information see Chapter 15, Virtual Reality and Beyond, of my book, Learning C++ by Building Games with Unreal Engine 4 – Second Edition). But no one really wants to create everything from scratch (reinventing the wheel is just too much work). Fortunately, you don't have to. Unreal Engine 4 (UE4) can help!

Not only does Epic Games use their engine to develop their own games (and keep it constantly updated for that purpose), but many other game companies, both AAA and indie, also use the engine, and Epic is constantly adding new features for them too. Licensees can also update the engine themselves and make some of those changes available to the general public. UE4 also has a robust system for addons and plugins that many other developers contribute to. Some are free, while other, more advanced ones are available for a price. These can be extremely specialized, and their developers may release regular updates that adjust to changes in Unreal and add new features that could make your life even easier. So how does UE4 help with new technologies? Here are some examples:

Unreal Engine 4 for Virtual Reality

Virtual Reality (VR) is one of the most exciting technologies around, and many people are trying to get into that particular door. VR headsets from companies like Oculus, HTC, and Sony are becoming cheaper, more common, and more powerful. If you were creating a game yourself from scratch, you would need an extremely powerful graphics engine; fortunately, UE4 already has one with VR functionality. If you already have a project you want to convert to VR, UE4 makes this easy for you. If you have an Oculus Rift or HTC Vive installed on your computer, viewing your game in VR is as easy as launching it in VR Preview mode and viewing it in your headset. While controls might take more work, UE4 has a Motion Controller component you can add to help you get started quickly. You can even edit your project in VR Mode, allowing you to see the editor view in your VR headset, which can help with positioning things in your game. If you're starting a new project, UE4 now has VR-specific templates for new projects. You also have plenty of online documentation and a large community of other users working with VR in Unreal Engine 4 who can help you out.

Unreal Engine 4 for Augmented Reality

Augmented Reality (AR) is another new technology that's extremely popular right now. Pokemon Go is extremely popular, and many companies are trying to do something similar. There are also AR headsets and possibly other new ways to view AR information. Every platform has its own way of handling Augmented Reality right now. On mobile devices, iOS has ARKit to support AR programming and Android has ARCore. Fortunately, the Unreal website has a whole section on AR and how to support these in UE4 to develop AR games at https://docs.unrealengine.com/en-US/Platforms/AR/index.html. It also has information on using Magic Leap, Microsoft HoloLens, and Microsoft HoloLens 2. So by using UE4, you get a big head start on this type of development.
Working with Other New Technologies

If you want to use a new technology, chances are UE4 supports it (and if not, just wait and it will). Whether you're trying to do procedural programming or just use the latest AI techniques (for more information see chapters 11 and 12 of my book, Learning C++ by Building Games with Unreal Engine 4 – Second Edition), chances are you can find something that already works in UE4 to give you a head start with that technology. And with so many people using the engine, it is likely to continue to be a great way to get support for new technologies.

Support for New Platforms

UE4 already supports numerous platforms such as PC, Mac, mobile, web, Xbox One, PS4, Switch, and probably any other recent platform you can think of. With the next-gen consoles coming out in 2020, chances are Epic is already working on support for them. For the consoles, you do generally need to be a registered developer with Microsoft, Sony, and/or Nintendo to have access to the tools to develop for those platforms (and you need expensive devkits). But as more indie games show up on these platforms, you no longer necessarily have to be working at a AAA studio to do this. What is amazing when you develop in UE4 is that publishing for another platform should basically just work. You may need to change the controls and the screen size, and an AAA 3D title might be too slow to be playable if you just run it on a mobile device without any changes, but the basic game functionality will be there and you can make changes from that point.

The Future

It's hard to tell what new technologies may come in the future, as new devices, game types, and methods of programming are developed. Regardless of what the future holds, there's a strong chance that UE4 will support them, so learning UE4 now is a great investment of your time. If you're interested in learning more, see my book, Learning C++ by Building Games with Unreal Engine 4 – Second Edition.

Author Bio

Sharan Volin has been programming games for more than a decade. She has worked on AAA titles for Behavior Interactive, Blind Squirrel Games, Sony Online Entertainment/Daybreak Games, Electronic Arts (Danger Close Games), 7 Studios (Activision), and more, as well as numerous smaller games. She has primarily been a UI Programmer but is also interested in Audio, AI, and other areas. She also taught Game Programming for a year at the Art Institute of California and is the author of Learning C++ by Building Games with Unreal Engine 4 – Second Edition.


App and web development in 2019: What we loved and what mattered

Richard Gall
17 Dec 2019
10 min read
For app and web developers, the world at the end of the decade is very different to the one that began it. Sure, change is inevitable, but the way the discipline(s) have evolved in just a matter of years (arguably the most significant changes came in the latter half of the decade) is a mark of how technologies, business needs, customer expectations, and harsh economic realities have conspired to shape and remold our notion of what software development actually looks like. Full-stack, cloud-native, DevOps (and maybe even 'NoOps'): all these things have been shaping the way app and web developers work over the last ten years. And in 2019 it feels like that new world is beginning to settle into a specific pattern. Many of the trends and technologies that really defined 2019 are, in truth, trends that have been nascent and emerging for a number of years.

Cloud and microservices

When cloud first emerged - at some point much earlier this decade - it was largely just about resource efficiency. The idea was to ditch your on-premises servers and move instead to a model whereby you rent server space from big vendors. Okay, perhaps that's a somewhat crude summation; but it's nevertheless the case that cloud was primarily a field dealt with by administrators and IT professionals, rather than developers. Today, of course, cloud is having a very real impact on the way developers work, giving them a degree of agility and flexibility in how software is deployed and managed. With cloud partnering nicely with microservices - which allow developers to break down an application into constituent parts - it's easy to see how these two trends are getting many app and web developers excited. They shorten the development lifecycle and allow developers to get closer to their code as it runs in production.

Learn cloud development - explore Packt's range of cloud bundles. Pick up 5 for $25 throughout our $5 campaign. An essential resource for microservices development: Microservices Development Cookbook. $5 for the rest of December and into January.

Go and Rust

The growth of Go and Rust throughout 2019 (okay, and a bit before that too) is directly related to the increasing importance of cloud and microservices in software development. Although JavaScript has been taken beyond the browser, it isn't the best programming language for building high-performance applications; that's where the likes of Go and Rust have been taking over a not insignificant slice of the collective developer imagination. Both languages share a similar history (as this article nicely details); at a fundamental level, moreover, both also aim to build on C++, but with accessibility and safety in mind (C++ has long had a reputation for being both complicated and sometimes vulnerable to bugs and security issues). Go is likely to continue to grow at a faster rate than Rust: it's a lot easier to use, so for web and app developers with experience in Java or JavaScript, it's a much gentler learning curve. But this isn't to say that Rust won't remain a fixture for developers. Consistently ranked the 'most loved' language in Stack Overflow surveys, as developers seek relentless improvements to performance alongside watertight reliability and security, Rust will remain an important language in a fast-changing development world.

Search Packt's extensive selection of Go eBooks and videos - $5 throughout December and into the new year. Visit the Packt store. Learn Rust with Rust Programming Cookbook.
WebAssembly

It's impossible to talk about web and application development without mentioning WebAssembly. Arguably the full implications of WebAssembly are yet to be realised (indeed, at ReactConf 2019, Richard Feldman suggested that it was unlikely to initiate a wholesale transformation of the web - that, he believes, will take a few more years), but 2019 has been a year when it has properly started to make many developers sit up and take notice. But why is WebAssembly so exciting? Essentially, it allows you to run code on the web using multiple languages at a speed that's almost akin to native applications. Indeed, WebAssembly is making languages like Rust more attractive to web developers. If WebAssembly is a bridge between Rust and JavaScript, Rust immediately becomes more attractive to developers who previously would have paid very little attention to it. If 2019 was the year more developers decided to take note of WebAssembly, 2020 will be the year when we start to see increased adoption.

Learn WebAssembly is $5 throughout this year's $5 campaign. Get it here.

State management: Redux, Flux, Vuex…

For many years, MVC (Model-View-Controller) was the dominant model for managing application state. However, as applications have grown in complexity, it has become more and more difficult to establish a 'single source of truth' inside our apps. That can impact performance and can also make apps harder to maintain on the development side. To tackle this, we've started to see a number of different patterns and frameworks emerge to help us manage application state. The growth of React has been instrumental here - as a very lightweight library it gives developers the freedom to manage application state however they choose - and it's worth noting that the Flux architecture was developed by Facebook to complement the library.

Watch: Why do React developers love Redux for state management? https://www.youtube.com/watch?v=7YzgZA_hA48&feature=emb_title

Following Flux we've also had Redux and Vuex - all of them, each with subtly different approaches, have become an essential aspect of modern web and app development. And while they might not have first emerged in 2019, it feels as though the state management discourse has hit heights that it previously had not. If you haven't yet had time to dive into this topic, it's well worth making sure you commit to it in 2020.

Learning React with Redux and Flux [Video] is $5 - purchase it here on the Packt store. Learn Vuex with Vuex Quick Start Guide.

Functional programming

Functional programming is on the rise. This doesn't, however, mean that purely functional languages like Haskell and Lisp are dominating the programming language landscape - in fact, it's been said that JavaScript is now the language most used for functional programming (even though it isn't a functional language). Functional programming is popular because it can help minimize complexity and make it easier to test and reuse code. When you're dealing with a dense codebase that grows and grows as your application scales, this is immensely valuable. It's also worth placing functional programming in the context of managing application state. Insofar as functional programming allows you to be specific in determining how different parts of a component should interact with one another, the function is a theoretical abstraction that makes it easier to get to grips with managing the state of a complex and dynamic application.
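To tie the functional programming and state management threads together, here is a minimal, framework-free sketch of a Redux-style reducer: a pure function from the previous state and an action to the next state. The cart state shape and the action names are invented purely for illustration:

```javascript
// A pure reducer: no mutation, no side effects; the same inputs
// always produce the same output.
function cartReducer(state = { items: [] }, action) {
  switch (action.type) {
    case 'ADD_ITEM':
      // Return a new state object rather than mutating the old one.
      return { ...state, items: [...state.items, action.item] };
    case 'CLEAR':
      return { items: [] };
    default:
      return state;
  }
}

let state = cartReducer(undefined, { type: 'CLEAR' });
state = cartReducer(state, { type: 'ADD_ITEM', item: 'book' });
console.log(state.items); // ['book']
```

Because the reducer never mutates its inputs, the state at any moment is fully determined by the sequence of actions applied, which is precisely the 'single source of truth' idea discussed above.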
Get to grips with functional programming and discover how to leverage its power. Read Mastering Functional Programming.

The new JavaScript framework boom

I'm not sure whether JavaScript fatigue is over. On the one hand, the space has coalesced around a handful of core tools and frameworks - React, GraphQL, Node.js, among a couple of others - but on the other hand, the last year (and a bit) has been characterized by many other small projects developed to support these core tools. So, while it's maybe a little bit easier to parse the JavaScript ecosystem at a pretty high level of abstraction than it was in the past, at a deeper level you have a range of tools that are designed for very specific purposes or to be used alongside some of those frameworks and tools just mentioned. Tools ranging from Koa.js (for Node), to Polymer, Nuxt, Next, Gatsby, Hugo, Vuelidate (to name just a random assortment) are all vying for developer mindshare. You could say that many of these tools are 'second-order' frameworks and libraries - they don't fundamentally change the way you think about development but instead make it easier to do specific things. It's for this reason that I'm reluctant to suggest that JavaScript fatigue will return to its former glory - this new JavaScript framework boom is very much geared towards productivity and immediate gains rather than overhauling the way you build applications because of some principled belief in the 'right' or 'best' way to do things.

Learn Nuxt: pick up Build a News Feed with Nuxt 2 and Firestore [Video] for $5 before the end of the year. Get to grips with Next.js with Next.js Quick Start Guide. Learn Koa with Hands-on Server-Side Development with Koa.js [Video]. Learn Gatsby with GatsbyJS: Build a PWA Blog with GraphQL, React, and WordPress [Video].

GraphQL

Much of this decade has been dominated by REST when it comes to APIs. But just as the so-called 'API economy' has gone into overdrive, GraphQL has come on the scene. Adoption has been rapid, with many developers turning to it because it allows them to handle more complex and sophisticated requests at scale without writing long and confusing lines of code. This isn't to say, of course, that GraphQL has all but killed REST. Instead, it's more the case that GraphQL has been found to be a better tool for managing APIs in specific domains than REST. If you're dealing with APIs that are complex in terms of the number of entities and the relationships between them, then GraphQL can prove immensely useful.

Find out how to put GraphQL to use. Pick up GraphQL Projects for $5 for the rest of December and into January.

React Hooks (and Vue Hooks)

Launched with React 16.8, React Hooks "let you use state and other React features without writing a class" (that's from the project's site). That's a good thing because building components with a class can sometimes be somewhat inelegant. For a better explanation of the 'point' of React Hooks you could do a lot worse than this article. Vue Hooks is part of Vue 3.0 - this won't be officially released until early next year. But the fact that both leading front-end frameworks are taking similar approaches to improve the developer experience demonstrates that they're responding to a need for more flexibility and control over large projects. That means 2019 has been the year that both tools have hit maturity in the web development space.

Learn how React Hooks work with Packt's new React Hooks video.
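For a flavor of what Hooks look like in practice, here is the canonical counter sketch (generic example code, not taken from the video above):

```jsx
import React, { useState } from 'react';

// A function component with local state; no class required.
function Counter() {
  // useState returns the current value and an updater function.
  const [count, setCount] = useState(0);
  return (
    <button onClick={() => setCount(count + 1)}>
      Clicked {count} times
    </button>
  );
}

export default Counter;
```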
Conclusion

The web and app development world is becoming difficult to parse. A few years ago, discussion and debate really centered on frameworks; today it feels like there are many other elements to consider. Part of this is symptomatic of a slow DevOps revolution - the gap between build and production is smaller than it has ever been, and developers now have a significant degree of accountability and responsibility for things that were once the preserve of different breeds of engineers and IT professionals. Perhaps that story is a bit of a simplification; however, it's hard to dispute that the web and app developer skill set is incredibly diverse. That means there is an array of options and opportunities out there for developers looking to push their careers forward, but it also means they'll need to do some serious decision making about what they want to do and how they want to do it.


Bitbucket to no longer support Mercurial, users must migrate to Git by May 2020

Fatema Patrawala
21 Aug 2019
6 min read
Yesterday marked the end of an era for Mercurial users, as Bitbucket announced that it will no longer support Mercurial repositories after May 2020. Bitbucket, owned by Atlassian, is a web-based version control repository hosting service for source code and development projects. It has supported Mercurial since its beginning in 2008, and Git since October 2011. Now, almost ten years into that shared journey, the Bitbucket team has decided to remove Mercurial support from Bitbucket Cloud and its API.

The official announcement reads, "Mercurial features and repositories will be officially removed from Bitbucket and its API on June 1, 2020."

The Bitbucket team also communicated the timeline for sunsetting the Mercurial functionality. After February 1, 2020, users will no longer be able to create new Mercurial repositories. After June 1, 2020, users will not be able to use Mercurial features in Bitbucket or via its API, and all Mercurial repositories will be removed. All current Mercurial functionality in Bitbucket will remain available through May 31, 2020.

The team said the decision was not an easy one for them and that Mercurial held a special place in their heart. But according to a Stack Overflow Developer Survey, almost 90% of developers use Git, while Mercurial is the least popular version control system with only about 3% developer adoption. On top of this, Mercurial usage on Bitbucket has seen a steady decline, and the percentage of new Bitbucket users choosing Mercurial has fallen to less than 1%. Hence the decision to remove the Mercurial repos.

How can users migrate and export their Mercurial repos

The Bitbucket team recommends that users migrate their existing Mercurial repos to Git. They have also extended support for migration and kept the available options open for discussion in a dedicated Community thread, where users can discuss conversion tools, migration, tips, and troubleshooting. If users prefer to continue using Mercurial, there are a number of free and paid Mercurial hosting services available to them. The Bitbucket team has also created a Git tutorial that covers everything from the basics of creating pull requests to rebasing and Git hooks.
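As a rough illustration of what a conversion can look like, here is one common route using the open-source fast-export tool. Treat this strictly as a sketch: the paths and remote URL are placeholders, and the exact invocation may vary with your setup and the tool's version:

```sh
# Convert a Mercurial repository to Git with fast-export
# (https://github.com/frej/fast-export); paths are placeholders.
git clone https://github.com/frej/fast-export.git
git init converted-repo
cd converted-repo
../fast-export/hg-fast-export.sh -r /path/to/mercurial-repo
git checkout HEAD

# Push the converted history to a new Git remote.
git remote add origin git@bitbucket.org:myteam/converted-repo.git
git push -u origin --all
```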
Community shows anger and sadness over decision to discontinue Mercurial support

There is outrage among Mercurial users, who are extremely unhappy and saddened by this decision. They have expressed their anger not just on one platform but across multiple forums and community discussions. Users feel that Bitbucket's decision to stop offering Mercurial support is bad, but the decision to also delete the repos is worse. On Hacker News, users speculated that the decision was driven by market potential rather than by technically superior architecture and ease of use. They feel GitHub has successfully marketed Git, and that is how the two have become synonymous in the developer community. One of them comments:

"It's very sad to see bitbucket dropping mercurial support. Now only Facebook and volunteers are keeping mercurial alive. Sometimes technically better architecture and user interface lose to a non user friendly hard solutions due to inertia of mass adoption. So a lesson in Software development is similar to betamax and VHS, so marketing is still a winner over technically superior architecture and ease of use. GitHub successfully marketed git, so git and GitHub are synonymous for most developers. Now majority of open source projects are reliant on a single proprietary solution Github by Microsoft, for managing code and project. Can understand the difficulty of bitbucket, when Python language itself moved out of mercurial due to the same inertia. Hopefully gitlab can come out with mercurial support to migrate projects using it from bitbucket."

Another user comments that Mercurial support was their only reason to use Bitbucket, since GitHub is otherwise miles ahead; with Mercurial support ending, they expect Bitbucket to decline:

"Mercurial support was the one reason for me to still use Bitbucket: there is no other Bitbucket feature I can think of that Github doesn't already have, while Github's community is miles ahead since everyone and their dog is already there. More importantly, Bitbucket leaves the migration to you (if I read the article correctly). Once I download my repo and convert it to git, why would I stay with the company that just made me go through an annoying (and often painful) process, when I can migrate to Github with the exact same command? And why isn't there a "migrate this repo to git" button right there? I want to believe that Bitbucket has smart people and that this choice is a good one. But I'm with you there - to me, this definitely looks like Bitbucket will die."

On Reddit, programmers see this as a big change, since Bitbucket was the major Mercurial hosting provider, and feel that the announcement came on pretty short notice, leaving too little time for migration. Users have also expressed their displeasure on the Atlassian community blog. A team of scientists commented:

"Let's get this straight : Bitbucket (offering hosting support for Mercurial projects) was acquired by Atlassian in September 2010. Nine years later Atlassian decides to drop Mercurial support and delete all Mercurial repositories. Atlassian, I hate you :-) The image you have for me is that of a harmful predator. We are a team of scientists working in a university. We don't have computer scientists, we managed to use a version control simple as Mercurial, and it was a hard work to make all scientists in our team to use a version control system (even as simple as Mercurial). We don't have the time nor the energy to switch to another version control system. But we will, forced and obliged. I really don't want to check out Github or something else to migrate our projects there, but we will, forced and obliged."

- Atlassian Bitbucket, GitHub, and GitLab take collective steps against the Git ransomware attack
- Attackers wiped many GitHub, GitLab, and Bitbucket repos with 'compromised' valid credentials leaving behind a ransom note
- BitBucket goes down for over an hour


Are you looking at transitioning from being a developer to manager? Here are some leadership roles to consider

Packt Editorial Staff
04 Jul 2019
6 min read
What does the phrase "a manager" really mean anyway? It means different things to different people and is often overused, sometimes for positions that really match an analyst-level profile. The term, although common, is worth defining, especially in the context of software development. This article is an excerpt from the book The Successful Software Manager, written by Herman Fung, an internationally experienced IT manager. The book is a comprehensive and practical guide to managing software developers and software customers, and it explores the process of deciding what software needs to be built, not how to build it. In this article, we'll look into aspects you must be aware of before making the move to become a manager in the software industry.

A simple distinction I once used to illustrate the difference between an analyst and a manager is that while an analyst identifies, collects, and analyzes information, a manager uses this analysis to make decisions, or more accurately, is responsible and accountable for the decisions they make. The structure of software companies is now enormously diverse and varies a lot from one to another, which has an obvious impact on how the manager's role and responsibilities are defined; these will be unique to each company. Even within the same company, they are subject to change from time to time as the company itself changes. Broadly speaking, a manager within software development can be classified into three categories, as we will now discuss.

Team Leader/Manager

This role is often a lead developer who also doubles up as the team spokesperson and single point of contact. They'll typically be the most senior and knowledgeable member of a small group of developers who work on the same project, product, and technology. There is often a direct link between each developer in the team and their code, which means the team manager has a direct responsibility to ensure the product as a whole works. Usually, the team manager is also asked to fulfill people management duties, such as performance reviews and appraisals, and day-to-day HR responsibilities.

Development/Delivery Manager

This person could be either a techie or a non-techie. They will have a good understanding of the requirements, design, code, and end product, and will manage running workshops and huddles to facilitate better overall teamwork and delivery. This role may include setting up visual aids, such as team/project charts or boards. In a matrix management model, where developers and other experts are temporarily asked to work in project teams, the development manager will not be responsible for HR and people management duties.

Project Manager

This person is most probably a non-techie, but there are exceptions, and technical knowledge could be a distinct advantage on certain projects. Most importantly, a project manager will be process-focused and output-driven and will focus on distributing tasks to individuals. They are not expected to jump in to solve technical problems, but they are responsible for ensuring that the proper resources are available while managing expectations. Specifically, they take part in managing the project budget, timeline, and risks. They should also be aware of the political landscape and management agenda within the organization to be able to navigate through them. The project manager ensures the project follows the required methodology or process framework mandated by the Project Management Office (PMO).
They will not have people-management responsibilities for project team members.

Agile practitioner

As with all roles in today's world of tech, these categories vary and overlap. They can even be held by the same person, which is becoming increasingly common, and they are constantly evolving, which exemplifies the need to learn and grow continually, regardless of your role or position. If you are a true Agile practitioner, you may take issue with these generalized categories (Team Leader, Development Manager, and Project Manager), and you'd be right to do so! These categories are most applicable to an organization that practises the traditional Waterfall model. Without diving into the everlasting Waterfall vs Agile debate, let's just say that these are categories that transcend any methodology: even if they're not referred to by these names, they are roles that need to be performed, to varying degrees, at various times. For completeness, it is worth noting one role specific to Agile: the scrum master.

Scrum master

A scrum master is a role often compared – rightly or wrongly – with that of the project manager. The key difference is that their focus is on facilitation and coaching instead of organizing and control. This difference is as much a mindset as it is a strict practice, and these are often referred to as attributes of Servant Leadership. I believe a good scrum master will show traits of a good project manager at various times, and vice versa. This is especially true in ensuring that there is clear communication at all times and that the team stays focused on delivering together.

Yet, as we look back at all these roles, it's worth remembering that with the advent of new disciplines such as big data, blockchain, artificial intelligence, and machine learning, there are new categories and opportunities to move from a developer role into a management position, for example, as an algorithm manager or data manager. Transitioning, growing, progressing, or simply changing from a developer to a manager is a wonderfully rewarding journey that is unique to everyone. After clarifying what being a "modern manager" really means, and the broad categories applicable in software development (Team / Development / Project / Agile), the overarching and often key consideration for developers is whether the move means they will be managing people and writing less code.

In this article, we looked into the different leadership roles that are available to developers in their career progression plans. Develop crucial skills to enhance your performance and advance your career with The Successful Software Manager written by Herman Fung.

- "Developers don't belong on a pedestal, they're doing a job like everyone else" – April Wensel on toxic tech culture and Compassionate Coding [Interview]
- Curl's lead developer announces Google's "plan to reimplement curl in Libcrurl"
- 'I code in my dreams too', say developers in Jetbrains State of Developer Ecosystem 2019 Survey

React Native VS Xamarin: Which is the better cross-platform mobile development framework?

Guest Contributor
25 May 2019
10 min read
One of the most debated topics in the mobile industry today is the battle between the two giant app development platforms, Xamarin and React Native. Central to the buzz around this battle, and to the increasing popularity of these two platforms, are the communities of app developers built around them. Both of these open-source app development platforms are preferred by the app development community for creating highly efficient applications while saving developers time and effort. Both React Native and Xamarin have their merits and demerits, which makes selecting the better of the two a bit difficult. When it comes to selecting the appropriate mobile application platform, it boils down to the nature, needs, and overall objectives of the business, and to the features and characteristics of each technology, which make it either the best fit for one project or the worst approach for another. With that said, let's compare the two to find out the major differences, explore the key considerations, and determine the winner of this unending platform battle.

An overview of Xamarin

An open-source cross-platform framework for mobile application development, Xamarin can be used to build applications for Android, iOS, and wearable devices. Offered as a high-tech enterprise app development tool within the Microsoft Visual Studio IDE, Xamarin has become one of the top mobile app development platforms used by various businesses and enterprises. Apart from being free, it facilitates the development of mobile applications using a single programming language, namely C#, for both the Android and iOS versions.

Key features

Xamarin has used C# since the day of its introduction. C# is a popular programming language in the Microsoft community, and with great features like metaprogramming, functional programming, and portability, it is widely preferred by many developers. Xamarin makes it easy for C# developers to shift from web development to cross-platform mobile app development. Features like portable class libraries, code sharing, testing clouds and insights, and compatibility with the Mac IDE and Visual Studio IDE make Xamarin a great development tool with no additional costs.

Development environment

Xamarin provides app developers with a comprehensive app development toolkit and software package. The package includes highly compatible IDEs (for both Mac and VS), distribution and analytics tools such as HockeyApp, and testing tools such as Xamarin Test Cloud. With Xamarin, developers no longer have to invest time and money in incorporating third-party tools. It uses the Mono execution environment for both platforms, i.e. Android and iOS.

Framework

C# has matured from its infancy, and the Xamarin framework now provides strong type safety, which prevents unexpected code behavior. Since C# supports the .NET framework, the language can be used with numerous .NET features like async, LINQ, and lambdas.

Compilation

Xamarin.iOS and Xamarin.Android are the two major products offered by this platform. For iOS, the platform follows Ahead-of-Time (AOT) compilation, whereas on Android a Just-in-Time (JIT) compilation approach is followed. The compilation process is fully automated and is equipped with features to tackle and resolve issues like memory allocation and garbage collection.
App working principles

Xamarin has an MVVM architecture coupled with two-way data binding, which provides great support for collaborative work among different departments. If your development approach doesn't follow a strict performance-oriented approach, go for Xamarin, as it provides high process flexibility.

How exactly does it work?

Not only does C# form the basis of this platform, it also provides developers with access to the underlying native APIs. This enables Xamarin to use universal backend code combined with a UI built on each platform's native SDK.

An overview of React Native

Created by Facebook, React Native is one of the most widely used app development platforms. From enabling mobile developers to build highly efficient apps to ensuring great quality and increased sustainability, the demand for React Native apps is sure to increase over time.

Key features

React Native apps are written in JavaScript; under the hood, the native building blocks use Java on Android and Objective-C or Swift on iOS. The platform provides numerous built-in tools, libraries, and frameworks. Its standout hot reloading feature enables developers to make amendments to the code without sitting through a full recompile.

Development environment

The React Native app development platform requires developers to follow a wide array of actions and processes to build a UI. The platform supports easy and fast iterations while enabling execution of different code even while the application is running. Since React Native doesn't provide 64-bit support, this can impact run-time speed on iOS.

Architecture

React Native supports a modular architecture. This means that developers can categorize the code into different functional and independent blocks of code. This characteristic of the React Native platform provides process flexibility and eases upgrades and application updates.

Compilation

The React Native app development platform follows and supports Just-in-Time compilation for Android applications. For iOS applications, Just-in-Time compilation is not available, as it might slow down the code execution procedure.

App working principles

This platform follows a one-way data binding approach, which helps in boosting the overall performance of the application. However, a two-way data binding approach can be implemented manually, which is useful for introducing code coherence and reducing complex errors.

How does it actually work?

React Native enables developers to build applications using React and JavaScript. The working of a React Native application can be described as thread-based interaction. One thread handles the UI and user gestures, while the other is React Native-specific and deals with the application's business logic; it also determines the structure and functionality of the overall user interface. The interaction can be asynchronous, batched, or serializable.

Learning curves of Xamarin and React Native

To master Xamarin, one has to be skilled in .NET. Xamarin provides you with easy and complete access to the platform SDK capabilities through the Xamarin.iOS and Xamarin.Android libraries. Xamarin provides a complete package, which reduces the need to integrate third-party tools and libraries, so to become a professional in Xamarin app development, all you need is skill and expertise in C#, .NET, and some basic working knowledge of the native platform classes.
Mastering React Native, on the other hand, requires thorough knowledge and expertise of JavaScript. Since the platform doesn't offer as many well-integrated libraries and tools, knowledge of third-party sources and tools is of core importance.

Key differences between Xamarin and React Native

While Trello, Slack, and GitHub use Xamarin, other successful companies like Facebook, Walmart, and Instagram have React Native-based mobile applications. While React Native applications offer strong performance, not every company can afford to develop a separate app for each platform; cross-platform frameworks like Xamarin are a solid alternative, as they offer higher development flexibility. Where Xamarin offers multiple platform support, cost-effectiveness, and time savings, React Native allows faster development and increased efficiency. Since Xamarin provides complete hardware support, hardware compatibility issues are reduced. React Native, on the other hand, provides ready-made components that reduce the need to write the entire code from scratch. In React Native, after integrating and investing in third-party libraries and plugins, the need for WebView functions is eliminated, which in turn reduces memory requirements. Xamarin, by contrast, provides a comprehensive toolkit with zero investment in additional plugins and third-party sources; however, it offers restricted access to open-source technologies. A good-quality React Native application requires more than a few weeks to develop, which increases not only the development time but also the app's complexity. If time consumption is one of the drawbacks of React Native, then the additional optimization needed to support larger applications counts as a limitation of Xamarin. And while frequent updates contribute to shrinking the customer base of React Native apps, stability complaints and app crashes are common issues with Xamarin applications.

When to go for Xamarin?

Case #1: The foremost advantage of Xamarin is that all you need is command over C# and .NET.

Case #2: One of the most exciting trends in the mobile development industry is the Internet of Things. Considering the rapid increase in demand for IoT, if you are developing a product that involves multiple hardware capacities and user devices, make Xamarin your first priority. Xamarin is compatible with numerous IoT devices, which eliminates the need for third-party sources to implement functionality.

Case #3: If you are budget-constrained and time-bound, then Xamarin is the solution to your app development worries. Since the backend code for Android and iOS is shared, development time and effort are reduced, making it budget-friendly.

Case #4: The test cloud is probably the best part of Xamarin. Even though Test Cloud might take up a fraction of your budget, the expense is worth it: it not only recreates the activity of actual users, but also ensures that your application works well on a variety of devices and is accessible to the maximum number of users.

When to go for React Native?

Case #1: When it comes to game app development, Xamarin is not a wise choice. Because of its C# framework and AOT compilation, getting fast rendering and speedy results is difficult with Xamarin.
A gaming application is updated dynamically, is highly interactive, and has high-performance graphics; Xamarin's poor fit for heavy graphics makes it a weak choice for game development. For these reasons, many developers choose React Native for high-performing gaming applications.

Case #2: The size of an application is an indirect indicator of its success among targeted users. Since many smartphone users have their phones' memory stuffed with photos and videos, there is often barely any storage left for an additional application. Xamarin-based apps are relatively heavier and occupy more space than their React Native counterparts.

Wondering which framework to choose?

Xamarin and React Native are two major players in the mobile app development industry, so it's entirely up to you whether to proceed with React Native or Xamarin. However, your decision should be based on the type of application, its requirements, and the development cost. If you want a faster development process, go for Xamarin; if you are developing a game, e-commerce, or social app, go for React Native.

Author Bio

Khalid Durrani is an inbound marketing expert and content strategist. He likes to cover topics related to design, the latest tech, startups, IoT, artificial intelligence, big data, AR/VR, UI/UX, and much more. Currently, he is the global marketing manager of LogoVerge, an AI-based design agency.

The Ionic team announces the release of Ionic React Beta
React Native 0.59 RC0 is now out with React Hooks, and more
Changes made to React Native Community's GitHub organization in 2018 for driving better collaboration
JavaScript mobile frameworks comparison: React Native vs Ionic vs NativeScript

Bhagyashree R
03 Nov 2018
11 min read
Previously, when you wanted to build for both web and mobile, you would have to invest in separate teams with separate development workflows. Isn't that annoying? JavaScript-driven frameworks have changed this equation. You can now build mobile apps without having to learn a completely new language such as Kotlin, Java, or Objective-C, or a new development approach, and instead use your existing web development skills. One of the first technologies to do this was Cordova, which enabled web developers to package their web apps into a native binary and to access device APIs through plugins. Since then, developers have created a variety of alternative approaches to using JavaScript to drive native iOS and Android applications. In this article we will talk about three of these frameworks: React Native, Ionic, and NativeScript. After introducing these frameworks, we will move on to compare them and try to find out which one is best for which scenarios.

What exactly are native and hybrid applications?

Before we start with the comparison, let's answer this simple question, as we are going to use these terms a lot in this article.

What are native applications?

Native applications are built for a particular platform and are written in a particular language. For example, Android apps are written in Java or Kotlin, and iOS apps are written in Objective-C or Swift. The word "native" here refers to a platform such as Android, iOS, or Windows Phone. Designed for a specific platform, these apps are considered more efficient in terms of performance, as well as more reliable. The downside of native applications is that a separate version of the app must be developed for each platform. As each version is written in a completely different programming language, you can't reuse any code across platform versions. That's why native app development is considered more time-consuming and expensive than hybrid development, at least in theory.

What are hybrid applications?

Unlike native applications, hybrid applications are cross-platform. They are written in languages such as C# or JavaScript and compiled to be executed on each platform. For device-specific interactions, hybrid applications rely on plugins. Developing them is faster and simpler, and they are less expensive, as you only have to develop one app instead of multiple native apps for different platforms. The major challenge with hybrid apps is that they run in a WebView, which means they depend on the native browser. Because of this, hybrid apps aren't as fast as native apps. You can also face serious challenges if the app requires complex interaction with the device; after all, there's a limit to what plugins can achieve on this front. And because all the rendering is done using web tech, we can't produce a truly native user experience.

Let's now move on to the overview of the three frameworks.

What is React Native?

(Image source: React Native)

The story of React Native started in the summer of 2013 as Facebook's internal hackathon project, and it was later open sourced in 2015. React Native is a JavaScript framework used to build native mobile applications. As you might have already guessed from its name, React Native is based on React, a JavaScript library for building user interfaces. The reason it is called "native" is that the UI built with React Native consists of native UI widgets that look and feel consistent with apps built using native languages.

How does React Native work?
Under the hood, React Native translates your UI definition, written in JavaScript/JSX, into a hierarchy of native views appropriate for the target platform. For example, if we are building an iOS app, it will translate the Text primitive to a native iOS UIView; on Android, it will produce a native TextView. So, even though we are writing a JavaScript application, we do not get a web app embedded inside the shell of a mobile one; we get a "real" native app. But how does this "translation" take place? React Native runs on JavaScriptCore, the JavaScript engine, on iOS and Android, and then renders native components. React components return markup from their render function, which describes how they should look. With React for the web, this translates directly to the browser's DOM. For React Native, the markup is translated to suit the host platform, so a <Text> might become an Android-specific TextView. (A short sketch of such a component appears at the end of this article.)

Applications built with React Native

All the recent features in the Facebook app, such as Blood Donations, Crisis Response, Privacy Shortcuts, and Wellness Checks, were built with React Native. Other companies and products that use this framework include Instagram, Bloomberg, Pinterest, Skype, Tesla, Uber, Walmart, Wix, Discord, Gyroscope, SoundCloud Pulse, Tencent QQ, Vogue, and many more.

What is the Ionic framework?

(Image source: Ionic Framework)

The Ionic framework was created by Drifty Co. and was initially released in 2013. It is an open source, frontend SDK for developing hybrid mobile apps with familiar web technologies such as HTML5, CSS, and JavaScript. With Ionic, you can build and deploy apps that work across multiple platforms: native iOS, Android, desktop, and the web as a Progressive Web App.

How does the Ionic framework work?

Ionic is mainly focused on an application's look and feel, or the UI interaction. This tells us that it's not meant to replace Cordova or your favorite JavaScript framework; in fact, it still needs a native wrapper like Cordova to run your app as a mobile app. It uses these wrappers to gain access to host operating system features such as the camera, GPS, and flashlight. Ionic apps run in a low-level browser shell, such as UIWebView on iOS or WebView on Android, which is wrapped by tools like Cordova/PhoneGap. Currently, the Ionic framework has official integration with Angular, and support for Vue and React is in development. The team recently released the Ionic 4 beta, which comes with better support for Angular. This version supports the new Angular tooling and features, ensuring that Ionic apps follow Angular standards and conventions.

Applications built with Ionic

Some of the apps that use the Ionic framework are MarketWatch, Pacifica, Sworkit, Vertfolio, and many more. You can view the full list of applications built with the Ionic framework on their website.

What is NativeScript?

(Image source: NativeScript)

NativeScript is developed by Telerik (a subsidiary of Progress) and was first released in 2014. It's an open source framework that helps you build apps using JavaScript or any other language that transpiles to JavaScript, for example, TypeScript. It directly supports the Angular framework and supports the Vue framework via a community-developed plugin. Mobile applications built with NativeScript are fully native apps, which use the same APIs as if they were developed in Xcode or Android Studio.
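To make that last point concrete, here is a minimal, hedged sketch of NativeScript's direct native API access on Android. The java.io.File class below is the standard Android SDK class, surfaced to JavaScript by the NativeScript runtime; the function name is our own illustrative choice, not part of any official API:

// Sketch: calling an Android SDK class directly from NativeScript JavaScript.
// No wrapper plugin is involved; the runtime marshals the call into the JVM.
export function nativeFileExists(path) {
    // `java.io.File` is the real Android class, not a JavaScript shim.
    const file = new java.io.File(path);
    return file.exists();
}

On iOS, the same mechanism surfaces Objective-C classes (for example, NSFileManager) to JavaScript in a similar way.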
NativeScript also allows software developers to re-purpose third-party libraries from CocoaPods, Android Arsenal, Maven, and npm.js in their mobile applications, without the need for wrappers.

How does NativeScript work?

Since the applications are built in JavaScript, there is a need for some proxy mechanism to translate JavaScript code to the corresponding native APIs. This is done by the runtime parts of NativeScript, which act as a "bridge" between the JavaScript and the native world (Android and iOS). The runtimes facilitate calling APIs in the Android and iOS frameworks using JavaScript code. To do that, JavaScript virtual machines are used: Google's V8 for Android and WebKit's JavaScriptCore implementation, distributed with iOS 7.0+.

Applications built with NativeScript

Some of the applications built with NativeScript are Strudel, BitPoints Wallet, Regelneef, and Dwitch.

React Native vs Ionic vs NativeScript

Now that we've introduced all three frameworks, let's tackle the difficult question: which framework is better?

#1 Learning curve

The time needed to learn any technology depends on the knowledge you already have. If you are a web developer familiar with HTML5, CSS, and JavaScript, it will be fairly easy to get started with any of the three frameworks. But if you are coming from a mobile development background, the learning curve will be somewhat steep for all three. Among the three, the Ionic framework is the easiest to learn and implement, and it also has great documentation.

#2 Community support

Going by the GitHub stats, React Native is well ahead of the other two frameworks, both in repository popularity and in the number of contributors. This year's GitHub Octoverse report also highlighted React Native as one of the most active open source projects currently. The following table shows the stats at the time of writing:

Framework       Stars    Forks    Contributors
React Native    70150    15712    1767
Ionic           35664    12205    272
NativeScript    15200    1129     119

(Source: GitHub)

Comparing the three frameworks by weekly package downloads from the npm website also indicates that React Native is the most popular framework of the three. (Chart omitted; source: npm trends.)

#3 Performance

Ionic apps, as mentioned earlier, are hybrid apps, which means they run in a WebView. Hybrid applications, as discussed at the beginning, are arguably slower than JavaScript-driven native applications, as their speed depends on the WebView. This also makes Ionic unsuitable for high-performance or UI-intensive apps, such as games. React Native, in turn, provides faster application speed. Since React works separately from the main UI thread, your application can maintain high performance without sacrificing capability. Additionally, the introduction of the React Fiber algorithm, implemented with the goal of accelerating visual rendering, adds to its performance. In NativeScript's case, rendering slows the application down. Also, applications built with NativeScript for the Android platform are larger in size, and this larger size negatively influences performance.

#4 Marketplace

The marketplace for Ionic is great. The tool lists many starter apps, themes, and plugins, ranging from a date picker to Google Maps. Similarly, NativeScript has its official marketplace, listing 923 plugins in total. React Native, on the other hand, does not have a dedicated marketplace from Facebook. However, there are some companies that do provide React Native plugins and starter apps.
#5 Reusability of the codebase

Because Ionic is a framework for developing "wrapped applications," it wins the code reusability contest hands down. Essentially, the very concept of Ionic is "write once, run everywhere." NativeScript isn't far behind Ionic in terms of code reusability. In August this year, the Progress team announced that they are working on a code-sharing project. To realize this code-sharing dream, the Angular and NativeScript teams have together created nativescript-schematics, a schematic that enables you to build both web and mobile apps from a single project. In the case of React Native, you will be able to reuse the logic and structure of the components; however, you will have to rewrite the UI used in them. React Native follows a different approach: "learn once, write everywhere." This means that the same team of developers who built the iOS version will understand enough to build the Android version, but they still need some knowledge of Android. With React Native you end up with two separate projects. That's fine, because they are for two different platforms, but their internal structure will still be very similar.

So, which JavaScript mobile framework is best?

All three mobile frameworks come with their pros and cons. They are meant for the same objective but different project requirements. Choosing any one of them depends on your project, your user requirements, and the skills of your team. While Ionic comes with the benefit of a single codebase, it's not suitable for graphics-intensive applications. React Native provides better performance than the other two, but adds the overhead of creating a native shell for each platform. The best thing about NativeScript is that it supports Vue, which is one of the fastest-growing JavaScript frameworks, but its downside is that it makes the app size large. In the future, we will see more such frameworks that help developers quickly prototype, develop, and ship cross-platform applications. One of them is Flutter by Google, which is already making waves.

Nativescript 4.1 has been released
React Native 0.57 released with major improvements in accessibility APIs, WKWebView-backed implementation, and more!
Ionic framework announces Ionic 4 Beta
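Finally, here is the short React Native sketch promised earlier: a minimal component whose JSX markup never touches a DOM. React Native translates the View and Text primitives below into platform-appropriate native widgets; the component name and text are purely illustrative:

import React from 'react';
import { StyleSheet, Text, View } from 'react-native';

const styles = StyleSheet.create({
    container: { flex: 1, alignItems: 'center', justifyContent: 'center' },
});

// On Android the <Text> renders as a native TextView; on iOS it is backed
// by a native UIKit text view. There is no WebView anywhere in this path.
export default function Greeting() {
    return (
        <View style={styles.container}>
            <Text>Rendered by native widgets</Text>
        </View>
    );
}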
A decade of Android: Slayer of Blackberry, challenger of iPhone, mother of the modern mobile ecosystem

Sandesh Deshpande
06 Oct 2018
6 min read
If someone says Eclair, Honeycomb, Ice Cream Sandwich, or Jelly Bean, then apart from getting a sugar rush, you will probably think of Android OS. From being a newly launched OS surrounded by apprehension to being the biggest and most loved operating system in history, Android has seen it all. The OS that powers our phones and makes our everyday lives simpler recently celebrated its 10th anniversary.

Android's rise from the ashes

The journey to becoming the most popular mobile OS since its launch in 2008 was not easy for Android. Back then, it competed with iOS and Blackberry, which were considered the go-to smartphone platforms of the time. Google's idea was to give users a Blackberry-like experience, as the G1 had a full-sized physical QWERTY keypad, just like a Blackberry. But the G1 had limitations: it could only play videos on YouTube, as it didn't have a built-in video player app, and the Android Market (now Google Play) had just a handful of apps. Though the idea of giving users a Blackberry-like experience seemed spot on, it was not a hit with users, as by then Apple had made touchscreens all the rage with the iPhone. But one thing Google did right with Android OS, which its competitors didn't offer, was customization, and that's where Google scored a home run. Blackberry and the iPhone were great, and users loved them, but both operating systems tied users into their ecosystems. Motorola saw the potential for customization and adopted Android to launch the Motorola Droid in 2009. This is when Android OS came of age and started competing with Apple's iOS. With Android OS, people could customize their phones, and with its open source platform, developers could tweak the base OS to their liking. This gave users options to choose themes, wallpapers, and launchers, and it pioneered a demand for customization that was later adopted in the iPhone as well. By virtue of being an open platform, and thanks to regular updates from Google, there was a huge surge in Android adoption, and mobile manufacturers like Motorola, HTC, and Samsung launched devices powered by Android OS. Because of this rapid adoption by a large number of manufacturers, Android became the most popular mobile platform, beating Nokia's Symbian OS by the end of 2010. This Android phenomenon saved manufacturers like HTC, Motorola, Samsung, and Sony from losing significant market share to the then market leaders: Nokia, Blackberry, and Apple. They sensed the change in user preferences and adopted Android OS. Nokia, on the other hand, didn't adopt Android and stuck to its Symbian OS, which cost it customers and market share.

Android: sugar, and spice, and everything nice

In the subsequent years, Google launched Android versions named Cupcake, Donut, Eclair, Froyo, Gingerbread, Ice Cream Sandwich, Jelly Bean, KitKat, Lollipop, and Marshmallow. The Android team sure loves its sugar, as is evident from all the Android versions being named after desserts. It's not new for tech companies to pick themed names for their software versions; for instance, Apple has named its OS releases after cats, like Tiger, Leopard, and Snow Leopard. But Google has never officially revealed why its OS versions are named after desserts. Just in case that wasn't nerdy enough, Google put these sugary names in alphabetical order. Each update came with some cool features. Here's a quick list of some popular features with their respective versions.
Eclair (2009): Phones that shipped with Eclair had digital zoom and a camera flash for photos for the first time.
Honeycomb (2011): Honeycomb was built to run on tablets without any major glitches.
Ice Cream Sandwich (2011): Probably not as sophisticated as today's, but Ice Cream Sandwich had facial recognition and also a feature to take screenshots.
Lollipop (2014): With Android Lollipop, rounded icons were introduced in Android for the first time.
Nougat (2016): With the Nougat update, Google introduced more natural-looking emojis, including skin tone modifiers, Unicode 9 emojis, and the removal of previously gender-neutral characters.
Pie (2018): The latest Android update, Android Pie, also comes with a bunch of cool features. The standout feature in this release is indoor navigation, which enables indoor GPS-style tracking by determining your location within a building and providing turn-by-turn directions to help you navigate indoors.

Android's greatest strength is probably its large open platform community, which helps developers build apps for Android. Though developers can write Android apps in any JVM-compatible programming language, Google's primary language for writing Android apps has been Java (besides C++). At Google I/O 2017, Google announced that it would officially support Kotlin on Android as a "first-class" language. Kotlin is a relatively new programming language built by JetBrains, which also develops the IntelliJ platform that Android Studio is built on. Apart from rich features and a strong open platform community, Google also enhanced security with the newer Android versions, which made it hard to beat. Manufacturers like Samsung leveraged the power of Android with their Galaxy S series, making them one of the leading mobile manufacturers. Today, Google has proven itself a strong player in the mobile market, not only with Android OS but also with its flagship phones like the Pixel series, which receive updates before any other Android smartphone.

Android today: love it, hate it, but you can't escape it

Today, with a staggering 2 billion active devices, Android is by far the market leader in mobile OS platforms. A decade ago, no one anticipated that one mobile OS could achieve such dominance. Google has developed the OS for televisions, smartwatches, smart home devices, and VR headsets, and has even developed Android Auto for cars. As Google showcased at Google I/O 2018 with machine learning features such as Smart Compose for Gmail and Google Duplex for Google Assistant, and with Google Assistant now shipping on almost all the latest Android phones, Android is more powerful than ever. However, all is not sunshine and rainbows in the Android nation. In July this year, the EU slapped Google with a $5 billion fine as an outcome of its antitrust investigations around Android. Google was found guilty of imposing illegal restrictions on Android device manufacturers and network operators since 2011, in an attempt to funnel all the traffic from these devices to the Google search engine. It is ironic that the very restrictive, locked-in ecosystems that Android rebelled against in its early days are something it is now increasingly endorsing. Furthermore, as interfaces become less text- and screen-based and more touch-, voice-, and gesture-based, Google does seem to realize Android's limitations to some extent.
Google has been investing a lot into Project Fuchsia lately, which many believe could be Android's replacement in the future. With the tech landscape changing more rapidly than ever, it will be interesting to see what the future holds for Android, but for now, Android is here to stay.

6 common challenges faced by Android App developers
Entry level phones to taste the Go edition of the Android 9.0 Pie version
Android 9 Pie's Smart Linkify: How Android's new machine learning based feature works
Xamarin: How to add a MVVM pattern to an app [Tutorial]

Sugandha Lahoti
22 Jun 2018
13 min read
In our previous tutorial, we created a basic travel app using Xamarin.Forms. In this post, we will look at adding the Model-View-ViewModel (MVVM) pattern to our travel app. The MVVM elements are offered with the Xamarin.Forms toolkit, and we can expand on them to truly take advantage of the power of the pattern. As we dig into MVVM, we will apply what we have learned to the TripLog app that we started building in our previous tutorial. This article is an excerpt from the book Mastering Xamarin.Forms by Ed Snider.

Understanding the MVVM pattern

At its core, MVVM is a presentation pattern designed to control the separation between user interfaces and the rest of an application. The key elements of the MVVM pattern are as follows:

Models: Models represent the business entities of an application. When responses come back from an API, they are typically deserialized to models.
Views: Views represent the actual pages or screens of an application, along with all of the elements that make them up, including custom controls. Views are very platform-specific and depend heavily on platform APIs to render the application's user interface (UI).
ViewModels: ViewModels control and manipulate the Views by serving as their data context. ViewModels are made up of a series of properties represented by Models. These properties are part of what is bound to the Views to provide the data that is displayed to users, or to collect the data that is entered or selected by users. In addition to model-backed properties, ViewModels can also contain commands, which are action-backed properties that bind the actual functionality and execution to events that occur in the Views, such as button taps or list item selections.
Data binding: Data binding is the concept of connecting data properties and actions in a ViewModel with the user interface elements in a View. The actual implementation of how data binding happens can vary and, in most cases, is provided by a framework, toolkit, or library. In Windows app development, data binding is provided declaratively in XAML. In traditional (non-Xamarin.Forms) Xamarin app development, data binding is either a manual process or dependent on a framework such as MvvmCross (https://github.com/MvvmCross/MvvmCross), a popular framework in the .NET mobile development community. Data binding in Xamarin.Forms follows a very similar approach to Windows app development.

Adding MVVM to the app

The first step of introducing MVVM into an app is to set up the structure by adding folders that will represent the core tenets of the pattern: Models, ViewModels, and Views. Traditionally, the Models and ViewModels live in a core library (usually a portable class library or .NET Standard library), whereas the Views live in a platform-specific library. Thanks to the power of the Xamarin.Forms toolkit and its abstraction of platform-specific UI APIs, the Views in a Xamarin.Forms app can also live in the core library. Just because the Views can live in the core library with the ViewModels and Models doesn't mean that separation between the user interface and the app logic isn't important. When implementing a specific structure to support a design pattern, it is helpful to have your application namespaces organized in a similar structure. This is not a requirement, but it is something that can be useful.
By default, Visual Studio for Mac will associate namespaces with directory names.

Setting up the app structure

For the TripLog app, we will let the Views, ViewModels, and Models all live in the same core portable class library. In our solution, this is the project called TripLog. We have already added a Models folder in our previous tutorial, so we just need to add a ViewModels folder and a Views folder to the project to complete the MVVM structure. In order to set up the app structure, perform the following steps:

1. Add a new folder named ViewModels to the root of the TripLog project.
2. Add a new folder named Views to the root of the TripLog project.
3. Move the existing XAML page files (MainPage.xaml, DetailPage.xaml, and NewEntryPage.xaml, plus their .cs code-behind files) into the Views folder that we have just created.
4. Update the namespace of each Page from TripLog to TripLog.Views.
5. Update the x:Class attribute of each Page's root ContentPage from TripLog.MainPage, TripLog.DetailPage, and TripLog.NewEntryPage to TripLog.Views.MainPage, TripLog.Views.DetailPage, and TripLog.Views.NewEntryPage, respectively.
6. Update the using statements on any class that references the Pages. Currently, this should only be in the App class in App.xaml.cs, where MainPage is instantiated.

Once the MVVM structure has been added, the solution's folder structure should contain Models, ViewModels, and Views folders.

In MVVM, the term View is used to describe a screen. Xamarin.Forms uses the term View to describe controls, such as buttons or labels, and uses the term Page to describe a screen. In order to avoid confusion, I will stick with the Xamarin.Forms terminology and refer to screens as Pages, and will only use the term Views for the folder where the Pages live, in order to stick with the MVVM pattern.

Adding ViewModels

In most cases, Views (Pages) and ViewModels have a one-to-one relationship. However, it is possible for a View (Page) to contain multiple ViewModels, or for a ViewModel to be used by multiple Views (Pages). For now, we will simply have a single ViewModel for each Page. Before we create our ViewModels, we will start by creating a base ViewModel class: an abstract class containing the basic functionality that each of our ViewModels will inherit. Initially, the base ViewModel abstract class will only contain a couple of members and will implement INotifyPropertyChanged; we will add to this class as we continue to build upon the TripLog app throughout this book. In order to create a base ViewModel, perform the following steps:

1. Create a new abstract class named BaseViewModel in the ViewModels folder using the following code:

public abstract class BaseViewModel
{
    protected BaseViewModel()
    {
    }
}

2. Update BaseViewModel to implement INotifyPropertyChanged:

public abstract class BaseViewModel : INotifyPropertyChanged
{
    protected BaseViewModel()
    {
    }

    public event PropertyChangedEventHandler PropertyChanged;

    protected virtual void OnPropertyChanged(
        [CallerMemberName] string propertyName = null)
    {
        PropertyChanged?.Invoke(this,
            new PropertyChangedEventArgs(propertyName));
    }
}

The implementation of INotifyPropertyChanged is key to the behavior and role of the ViewModels and data binding. It allows a Page to be notified when the properties of its ViewModel have changed. Now that we have created a base ViewModel, we can start adding the actual ViewModels that will serve as the data context for each of our Pages.
We will start by creating a ViewModel for MainPage.

Adding MainViewModel

The main purpose of a ViewModel is to separate the business logic (for example, data access and data manipulation) from the user interface logic. Right now, our MainPage directly defines the list of data that it is displaying. This data will eventually be loaded dynamically from an API, but for now, we will move this initial static data definition to its ViewModel so that it can be data bound to the user interface. In order to create the ViewModel for MainPage, perform the following steps:

1. Create a new class file in the ViewModels folder and name it MainViewModel.
2. Update the MainViewModel class to inherit from BaseViewModel:

public class MainViewModel : BaseViewModel
{
    // ...
}

3. Add an ObservableCollection<T> property to the MainViewModel class and name it LogEntries. This property will be used to bind to the ItemsSource property of the ListView element on MainPage.xaml:

public class MainViewModel : BaseViewModel
{
    ObservableCollection<TripLogEntry> _logEntries;
    public ObservableCollection<TripLogEntry> LogEntries
    {
        get { return _logEntries; }
        set
        {
            _logEntries = value;
            OnPropertyChanged();
        }
    }

    // ...
}

4. Next, remove the List<TripLogEntry> that populates the ListView element on MainPage.xaml and repurpose that logic in the MainViewModel; we will put it in the constructor for now:

public MainViewModel()
{
    LogEntries = new ObservableCollection<TripLogEntry>();

    LogEntries.Add(new TripLogEntry
    {
        Title = "Washington Monument",
        Notes = "Amazing!",
        Rating = 3,
        Date = new DateTime(2017, 2, 5),
        Latitude = 38.8895,
        Longitude = -77.0352
    });
    LogEntries.Add(new TripLogEntry
    {
        Title = "Statue of Liberty",
        Notes = "Inspiring!",
        Rating = 4,
        Date = new DateTime(2017, 4, 13),
        Latitude = 40.6892,
        Longitude = -74.0444
    });
    LogEntries.Add(new TripLogEntry
    {
        Title = "Golden Gate Bridge",
        Notes = "Foggy, but beautiful.",
        Rating = 5,
        Date = new DateTime(2017, 4, 26),
        Latitude = 37.8268,
        Longitude = -122.4798
    });
}

5. Set MainViewModel as the BindingContext property for MainPage. Do this by simply setting the BindingContext property of MainPage in its code-behind file to a new instance of MainViewModel. The BindingContext property comes from the Xamarin.Forms.ContentPage base class:

public MainPage()
{
    InitializeComponent();
    BindingContext = new MainViewModel();
}

6. Finally, update how the ListView element on MainPage.xaml gets its items. Currently, its ItemsSource property is being set directly in the Page's code-behind. Remove this, and instead update the ListView element's tag in MainPage.xaml to bind to the MainViewModel LogEntries property:

<ListView ... ItemsSource="{Binding LogEntries}">

Adding DetailViewModel

Next, we will add another ViewModel to serve as the data context for DetailPage, as follows:

1. Create a new class file in the ViewModels folder and name it DetailViewModel.
2. Update the DetailViewModel class to inherit from the BaseViewModel abstract class:

public class DetailViewModel : BaseViewModel
{
    // ...
}

3. Add a TripLogEntry property to the class and name it Entry. This property will be used to bind details about an entry to the various labels on DetailPage:

public class DetailViewModel : BaseViewModel
{
    TripLogEntry _entry;
    public TripLogEntry Entry
    {
        get { return _entry; }
        set
        {
            _entry = value;
            OnPropertyChanged();
        }
    }

    // ...
}

4. Update the DetailViewModel constructor to take a TripLogEntry parameter named entry.
Use this constructor parameter to populate the public Entry property created in the previous step:

public class DetailViewModel : BaseViewModel
{
    // ...

    public DetailViewModel(TripLogEntry entry)
    {
        Entry = entry;
    }
}

5. Set DetailViewModel as the BindingContext for DetailPage, and pass in the TripLogEntry object that is being passed to DetailPage:

public DetailPage(TripLogEntry entry)
{
    InitializeComponent();
    BindingContext = new DetailViewModel(entry);
    // ...
}

6. Next, remove the code at the end of the DetailPage constructor that directly sets the Text properties of the Label elements:

public DetailPage(TripLogEntry entry)
{
    // ...

    // Remove these lines of code:
    //title.Text = entry.Title;
    //date.Text = entry.Date.ToString("M");
    //rating.Text = $"{entry.Rating} star rating";
    //notes.Text = entry.Notes;
}

7. Next, update the Label element tags in DetailPage.xaml to bind their Text properties to the DetailViewModel Entry property:

<Label ... Text="{Binding Entry.Title}" />
<Label ... Text="{Binding Entry.Date, StringFormat='{0:M}'}" />
<Label ... Text="{Binding Entry.Rating, StringFormat='{0} star rating'}" />
<Label ... Text="{Binding Entry.Notes}" />

8. Finally, update the map to get the values it plots from the ViewModel. Since the Xamarin.Forms Map control does not have bindable properties, the values have to be set directly from the ViewModel properties. The easiest way to do this is to add a private field to the page that returns the value of the page's BindingContext, and then use that field to set the values on the map:

public partial class DetailPage : ContentPage
{
    DetailViewModel _vm
    {
        get { return BindingContext as DetailViewModel; }
    }

    public DetailPage(TripLogEntry entry)
    {
        InitializeComponent();
        BindingContext = new DetailViewModel(entry);

        TripMap.MoveToRegion(MapSpan.FromCenterAndRadius(
            new Position(_vm.Entry.Latitude, _vm.Entry.Longitude),
            Distance.FromMiles(.5)));

        TripMap.Pins.Add(new Pin
        {
            Type = PinType.Place,
            Label = _vm.Entry.Title,
            Position = new Position(_vm.Entry.Latitude, _vm.Entry.Longitude)
        });
    }
}

Adding NewEntryViewModel

Finally, we will need to add a ViewModel for NewEntryPage, as follows:

1. Create a new class file in the ViewModels folder and name it NewEntryViewModel.
2. Update the NewEntryViewModel class to inherit from BaseViewModel:

public class NewEntryViewModel : BaseViewModel
{
    // ...
}

3. Add public properties to the NewEntryViewModel class that will be used to bind it to the values entered into the EntryCell elements in NewEntryPage.xaml:

public class NewEntryViewModel : BaseViewModel
{
    string _title;
    public string Title
    {
        get { return _title; }
        set { _title = value; OnPropertyChanged(); }
    }

    double _latitude;
    public double Latitude
    {
        get { return _latitude; }
        set { _latitude = value; OnPropertyChanged(); }
    }

    double _longitude;
    public double Longitude
    {
        get { return _longitude; }
        set { _longitude = value; OnPropertyChanged(); }
    }

    DateTime _date;
    public DateTime Date
    {
        get { return _date; }
        set { _date = value; OnPropertyChanged(); }
    }

    int _rating;
    public int Rating
    {
        get { return _rating; }
        set { _rating = value; OnPropertyChanged(); }
    }

    string _notes;
    public string Notes
    {
        get { return _notes; }
        set { _notes = value; OnPropertyChanged(); }
    }

    // ...
}

4. Update the NewEntryViewModel constructor to initialize the Date and Rating properties:

public NewEntryViewModel()
{
    Date = DateTime.Today;
    Rating = 1;
}

5. Add a public Command property to NewEntryViewModel and name it SaveCommand. This property will be used to bind to the Save ToolbarItem in NewEntryPage.xaml.
The Xamarin.Forms Command type implements System.Windows.Input.ICommand to provide an Action to run when the command is executed, and a Func to determine whether the command can be executed:

public class NewEntryViewModel : BaseViewModel
{
    // ...

    Command _saveCommand;
    public Command SaveCommand
    {
        get
        {
            return _saveCommand
                ?? (_saveCommand = new Command(ExecuteSaveCommand, CanSave));
        }
    }

    void ExecuteSaveCommand()
    {
        var newItem = new TripLogEntry
        {
            Title = Title,
            Latitude = Latitude,
            Longitude = Longitude,
            Date = Date,
            Rating = Rating,
            Notes = Notes
        };
    }

    bool CanSave()
    {
        return !string.IsNullOrWhiteSpace(Title);
    }
}

In order to keep the CanExecute function of the SaveCommand up to date, we will need to call the SaveCommand.ChangeCanExecute() method in any property setters that impact the results of that CanExecute function. In our case, this is only the Title property:

public string Title
{
    get { return _title; }
    set
    {
        _title = value;
        OnPropertyChanged();
        SaveCommand.ChangeCanExecute();
    }
}

The CanExecute function is not required, but by providing it, you can automatically manipulate the state of the control in the UI that is bound to the Command, so that it is disabled until all of the required criteria are met, at which point it becomes enabled.

6. Next, set NewEntryViewModel as the BindingContext for NewEntryPage:

public NewEntryPage()
{
    InitializeComponent();
    BindingContext = new NewEntryViewModel();
    // ...
}

7. Next, update the EntryCell elements in NewEntryPage.xaml to bind to the NewEntryViewModel properties:

<EntryCell Label="Title" Text="{Binding Title}" />
<EntryCell Label="Latitude" Text="{Binding Latitude}" ... />
<EntryCell Label="Longitude" Text="{Binding Longitude}" ... />
<EntryCell Label="Date" Text="{Binding Date, StringFormat='{0:d}'}" />
<EntryCell Label="Rating" Text="{Binding Rating}" ... />
<EntryCell Label="Notes" Text="{Binding Notes}" />

8. Finally, we will need to update the Save ToolbarItem element in NewEntryPage.xaml to bind to the NewEntryViewModel SaveCommand property:

<ToolbarItem Text="Save" Command="{Binding SaveCommand}" />

Now, when we run the app and navigate to the new entry page, we can see the data binding in action. Notice how the Save button is disabled until the title field contains a value.

To summarize, we updated the app that we created in the previous article, where we built a basic travel app using Xamarin.Forms. We removed data and data-related logic from the Pages, offloading it to a series of ViewModels, and then bound the Pages to those ViewModels. If you liked this tutorial, read our book, Mastering Xamarin.Forms, to create an architecture-rich mobile application with good design patterns and best practices using Xamarin.Forms.

Xamarin Forms 3, the popular cross-platform UI Toolkit, is here!
Five reasons why Xamarin will change mobile development
Creating Hello World in Xamarin.Forms
Create a travel app with Xamarin [Tutorial]

Sugandha Lahoti
20 Jun 2018
14 min read
Just like the beginning of many new mobile projects, we will start with an idea. In this tutorial, we will create a new Xamarin.Forms mobile app named TripLog, with an initial app structure and user interface. As the name suggests, it will be an app that allows its users to log their travel adventures. Although the app might not solve any real-world problems, it will have features that will require us to solve real-world architecture and coding problems. This article is an excerpt from the book Mastering Xamarin.Forms by Ed Snider.

Defining features

Before we get started, it is important to understand the requirements and features of the TripLog app. We will do this by quickly defining some of the high-level things this app will allow its users to do:

View existing log entries (online and offline)
Add new log entries with the following data: title, location using GPS, date, notes, and rating
Sign into the app

Creating the initial app

To start off the new TripLog mobile app project, we will need to create the initial solution architecture. We can also create the core shell of our app's user interface by creating the initial screens, based on the basic features we have just defined.

Setting up the solution

We will start things off by creating a brand new, blank Xamarin.Forms solution within Visual Studio by performing the following steps:

1. In Visual Studio, click on File | New Solution. This will bring up a series of dialog screens that will walk you through creating a new Xamarin.Forms solution. On the first dialog, click on App on the left-hand side, under the Multiplatform section, and then select Blank Forms App.
2. On the next dialog screen, enter the name of the app, TripLog, ensure that Use Portable Class Library is selected for the Shared Code option, and that the Use XAML for user interface files option is checked.

The Xamarin.Forms project template in Visual Studio for Windows will use a .NET Standard library instead of a Portable Class Library for its core library project. As of the writing of this book, the Visual Studio for Mac templates still use a Portable Class Library.

3. On the final dialog screen, simply click on the Create button.

After creating the new Xamarin.Forms solution, you will have several projects created within it: a single portable class library project and two platform-specific projects, as follows:

TripLog: This is a portable class library project that will serve as the core layer of the solution architecture. This is the layer that will include all our business logic, data objects, Xamarin.Forms pages, and other non-platform-specific code. The code in this project is common and not specific to a platform, and can therefore be shared across the platform projects.
TripLog.iOS: This is the iOS platform-specific project containing all the code and assets required to build and deploy the iOS app from this solution. By default, it has a reference to the TripLog core project.
TripLog.Droid: This is the Android platform-specific project containing all the code and assets required to build and deploy the Android app from this solution. By default, it has a reference to the TripLog core project.

If you are using Visual Studio for Mac, you will only get an iOS and an Android project when you create a new Xamarin.Forms solution.
To include a Windows (UWP) app in your Xamarin.Forms solution, you will need to use Visual Studio for Windows. 
Although the screenshots and samples used throughout this book are demonstrated using Visual Studio for Mac, the code and concepts will also work in Visual Studio for Windows. Refer to the Preface of this book for further details on the software and hardware requirements that need to be met to follow along with the concepts in this book.

You'll notice a file in the core library named App.xaml, which includes a code-behind class in App.xaml.cs named App that inherits from Xamarin.Forms.Application. Initially, the App constructor sets the MainPage property to a new instance of a ContentPage named TripLogPage that simply displays some default text. The first thing we will do in our TripLog app is build the initial views, or screens, required for our UI, and then update that MainPage property of the App class in App.xaml.cs.

Updating the Xamarin.Forms packages

If you expand the Packages folder within each of the projects in the solution, you will see that Xamarin.Forms is a NuGet package that is automatically included when we select the Xamarin.Forms project template. It is possible that the included NuGet packages need to be updated. Ensure that you update them in each of the projects within the solution so that you are using the latest version of Xamarin.Forms.

Creating the main page

The main page of the app will serve as the entry point into the app and will display a list of existing trip log entries. Our trip log entries will be represented by a data model named TripLogEntry. Models are a key pillar in the Model-View-ViewModel (MVVM) pattern and data binding, which we will explore more in our tutorial, How to add MVVM pattern and data binding to our Travel app. For now, we will create a simple class that will represent the TripLogEntry model. Let us now start creating the main page by performing the following steps:

1. First, add a new Xamarin.Forms XAML ContentPage to the core project and name it MainPage.
2. Next, update the MainPage property of the App class in App.xaml.cs to a new instance of Xamarin.Forms.NavigationPage whose root is a new instance of TripLog.MainPage that we just created:

public App()
{
    InitializeComponent();
    MainPage = new NavigationPage(new MainPage());
}

3. Delete TripLogPage.xaml from the core project, as it is no longer needed.
4. Create a new folder in the core project named Models.
5. Create a new empty class file in the Models folder named TripLogEntry.
6. Update the TripLogEntry class with auto-implemented properties representing the attributes of an entry:

public class TripLogEntry
{
    public string Title { get; set; }
    public double Latitude { get; set; }
    public double Longitude { get; set; }
    public DateTime Date { get; set; }
    public int Rating { get; set; }
    public string Notes { get; set; }
}

Now that we have a model to represent our trip log entries, we can use it to display some trips on the main page using a ListView control.
We will use a DataTemplate to describe how the model data should be displayed in each of the rows in the ListView, using the following XAML in the ContentPage.Content tag in MainPage.xaml:

<ContentPage xmlns="http://xamarin.com/schemas/2014/forms"
             xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
             x:Class="TripLog.MainPage"
             Title="TripLog">
    <ContentPage.Content>
        <ListView x:Name="trips">
            <ListView.ItemTemplate>
                <DataTemplate>
                    <TextCell Text="{Binding Title}" Detail="{Binding Notes}" />
                </DataTemplate>
            </ListView.ItemTemplate>
        </ListView>
    </ContentPage.Content>
</ContentPage>

In the main page's code-behind, MainPage.xaml.cs, we will populate the ListView ItemsSource with a hard-coded collection of TripLogEntry objects:

public partial class MainPage : ContentPage
{
    public MainPage()
    {
        InitializeComponent();

        var items = new List<TripLogEntry>
        {
            new TripLogEntry
            {
                Title = "Washington Monument",
                Notes = "Amazing!",
                Rating = 3,
                Date = new DateTime(2017, 2, 5),
                Latitude = 38.8895,
                Longitude = -77.0352
            },
            new TripLogEntry
            {
                Title = "Statue of Liberty",
                Notes = "Inspiring!",
                Rating = 4,
                Date = new DateTime(2017, 4, 13),
                Latitude = 40.6892,
                Longitude = -74.0444
            },
            new TripLogEntry
            {
                Title = "Golden Gate Bridge",
                Notes = "Foggy, but beautiful.",
                Rating = 5,
                Date = new DateTime(2017, 4, 26),
                Latitude = 37.8268,
                Longitude = -122.4798
            }
        };

        trips.ItemsSource = items;
    }
}

At this point, we have a single page that is displayed as the app's main page. If we debug the app and run it in a simulator, emulator, or on a physical device, we should see the main page showing the list of log entries we hard-coded into the view.

Creating the new entry page

The new entry page of the app will give the user a way to add a new log entry by presenting a series of fields to collect the log entry details. There are several ways to build a form to collect data in Xamarin.Forms. You can simply use a StackLayout and present a stack of Label and Entry controls on the screen, or you can use a TableView with various types of ViewCell elements. In most cases, a TableView will give you a very nice default, platform-specific look and feel. However, if your design calls for a more customized aesthetic, you might be better off leveraging the other layout options available in Xamarin.Forms. For the purpose of this app, we will use a TableView. There are some key data points we need to collect when our users log new entries with the app, such as title, location, date, rating, and notes. For now, we will use a regular EntryCell element for each of these fields. We will update, customize, and add things to these fields later in this book. For example, we will wire the location fields to a geolocation service that will automatically determine the location, and we will update the date field to use an actual platform-specific date picker control. For now, we will just focus on building the basic app shell. In order to create the new entry page that contains a TableView, perform the following steps:

1. First, add a new Xamarin.Forms XAML ContentPage to the core project and name it NewEntryPage.
2. Update the new entry page using the following XAML to build the TableView that will represent the data entry form on the page:

<ContentPage xmlns="http://xamarin.com/schemas/2014/forms"
             xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
             x:Class="TripLog.NewEntryPage"
             Title="New Entry">
    <ContentPage.Content>
        <TableView Intent="Form">
            <TableView.Root>
                <TableSection>
                    <EntryCell Label="Title" />
                    <EntryCell Label="Latitude" Keyboard="Numeric" />
                    <EntryCell Label="Longitude" Keyboard="Numeric" />
                    <EntryCell Label="Date" />
                    <EntryCell Label="Rating" Keyboard="Numeric" />
                    <EntryCell Label="Notes" />
                </TableSection>
            </TableView.Root>
        </TableView>
    </ContentPage.Content>
</ContentPage>

Now that we have created the new entry page, we need to add a way for users to get to this new screen from the main page. We will do this by adding a New button to the main page's toolbar. In Xamarin.Forms, this is accomplished by adding a ToolbarItem to the ContentPage.ToolbarItems collection and wiring up the ToolbarItem.Clicked event to navigate to the new entry page, as shown in the following XAML and code-behind:

<!-- MainPage.xaml -->
<ContentPage>
    <ContentPage.ToolbarItems>
        <ToolbarItem Text="New" Clicked="New_Clicked" />
    </ContentPage.ToolbarItems>
</ContentPage>

// MainPage.xaml.cs
public partial class MainPage : ContentPage
{
    // ...

    void New_Clicked(object sender, EventArgs e)
    {
        Navigation.PushAsync(new NewEntryPage());
    }
}

To handle navigation between pages, we will use the default Xamarin.Forms navigation mechanism. When we run the app, we will see a New button on the toolbar of the main page. Clicking on the New button should bring us to the new entry page. We will also need to add a Save button to the new entry page toolbar so that we can save new items. The Save button is added to the new entry page toolbar in the same way the New button was added to the main page toolbar. Update the XAML in NewEntryPage.xaml to include a new ToolbarItem, as shown in the following code:

<ContentPage>
    <ContentPage.ToolbarItems>
        <ToolbarItem Text="Save" />
    </ContentPage.ToolbarItems>
    <!-- ... -->
</ContentPage>

When we run the app again and navigate to the new entry page, we should now see the Save button on the toolbar.

Creating the entry detail page

When a user clicks on one of the log entry items on the main page, we want to take them to a page that displays more details about that particular item, including a map that plots the item's location. Along with additional details and a more in-depth view of the item, a detail page is also a common area where actions on that item might take place, such as editing the item or sharing it on social media. The detail page will take an instance of a TripLogEntry model as a constructor parameter, which we will use in the rest of the page to display the entry details to the user. In order to create the entry detail page, perform the following steps:

1. First, add a new Xamarin.Forms XAML ContentPage to the project and name it DetailPage.
2. Update the constructor of the DetailPage class in DetailPage.xaml.cs to take a TripLogEntry parameter named entry, as shown in the following code:

public partial class DetailPage : ContentPage
{
    public DetailPage(TripLogEntry entry)
    {
        // ...
    }
}

3. Add the Xamarin.Forms.Maps NuGet package to the core project and to each of the platform-specific projects. This separate NuGet package is required in order to use the Xamarin.Forms Map control in the next step.
4. Update the XAML in DetailPage.xaml to include a Grid layout that displays a Map control and some Label controls showing the trip's details, as shown in the following code:

<ContentPage xmlns="http://xamarin.com/schemas/2014/forms"
             xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
             xmlns:maps="clr-namespace:Xamarin.Forms.Maps;assembly=Xamarin.Forms.Maps"
             x:Class="TripLog.DetailPage">
    <ContentPage.Content>
        <Grid>
            <Grid.RowDefinitions>
                <RowDefinition Height="4*" />
                <RowDefinition Height="Auto" />
                <RowDefinition Height="1*" />
            </Grid.RowDefinitions>
            <maps:Map x:Name="map" Grid.RowSpan="3" />
            <BoxView Grid.Row="1" BackgroundColor="White" Opacity=".8" />
            <StackLayout Padding="10" Grid.Row="1">
                <Label x:Name="title" HorizontalOptions="Center" />
                <Label x:Name="date" HorizontalOptions="Center" />
                <Label x:Name="rating" HorizontalOptions="Center" />
                <Label x:Name="notes" HorizontalOptions="Center" />
            </StackLayout>
        </Grid>
    </ContentPage.Content>
</ContentPage>

5. Update the detail page's code-behind, DetailPage.xaml.cs, to center the map and plot the trip's location. We also need to update the Label controls on the detail page with the properties of the entry constructor parameter:

public DetailPage(TripLogEntry entry)
{
    InitializeComponent();

    map.MoveToRegion(MapSpan.FromCenterAndRadius(
        new Position(entry.Latitude, entry.Longitude),
        Distance.FromMiles(.5)));

    map.Pins.Add(new Pin
    {
        Type = PinType.Place,
        Label = entry.Title,
        Position = new Position(entry.Latitude, entry.Longitude)
    });

    title.Text = entry.Title;
    date.Text = entry.Date.ToString("M");
    rating.Text = $"{entry.Rating} star rating";
    notes.Text = entry.Notes;
}

6. Next, we need to wire up the ItemTapped event of the ListView on the main page to pass the tapped item over to the entry detail page that we have just created, as shown in the following code:

<!-- MainPage.xaml -->
<ListView x:Name="trips" ItemTapped="Trips_ItemTapped">
    <!-- ... -->
</ListView>

// MainPage.xaml.cs
async void Trips_ItemTapped(object sender, ItemTappedEventArgs e)
{
    var trip = (TripLogEntry)e.Item;
    await Navigation.PushAsync(new DetailPage(trip));

    // Clear selection
    trips.SelectedItem = null;
}

7. Finally, we will need to initialize the Xamarin.Forms.Maps library in each platform-specific startup class (AppDelegate for iOS and MainActivity for Android), using the following code:

// in iOS AppDelegate
global::Xamarin.Forms.Forms.Init();
Xamarin.FormsMaps.Init();
LoadApplication(new App());

// in Android MainActivity
global::Xamarin.Forms.Forms.Init(this, bundle);
Xamarin.FormsMaps.Init(this, bundle);
LoadApplication(new App());

Now, when we run the app and tap on one of the log entries on the main page, it will navigate us to the details page to see more detail about that particular log entry. We built a simple three-page app with static data, leveraging the most basic concepts of the Xamarin.Forms toolkit, and used the default Xamarin.Forms navigation APIs to move between the three pages. Stay tuned for our next post to learn about adding the MVVM pattern and data binding to this travel app. If you enjoyed reading this excerpt, check out the book Mastering Xamarin.Forms.

Xamarin Forms 3, the popular cross-platform UI Toolkit, is here!
Five reasons why Xamarin will change mobile development
Creating Hello World in Xamarin.Forms
Building VR experiences with React VR 2.0: How to create a maze that's new every time you play

Sunith Shetty
12 Jun 2018
16 min read
In today’s tutorial, we will examine the functionality required to build a simple maze. There are a few ways we could build a maze. The most straightforward way would be to fire up our 3D modeler package (say, Blender) and create a labyrinth out of polygons. This would work fine and could be very detailed. However, it would also be very boring. Why? The first time we get through the maze will be exciting, but after a few tries, you'll know the way through. When we construct VR experiences, you usually want people to visit often and have fun every time. This tutorial is an excerpt from a book written by John Gwinner titled Getting Started with React VR. In this book, you will learn how to create amazing 360 and virtual reality content that runs directly in your browsers. A modeled labyrinth would be boring. Life is too short to do boring things. So, we want to generate a Maze randomly. This way, you can change the Maze every time so that it'll be fresh and different. The way to do that is through random numbers to ensure that the Maze doesn't shift around us, so we want to actually do it with pseudo-random numbers. To start doing that, we'll need a basic application created. Please go to your VR directory and create an application called 'WalkInAMaze': react-vr init WalkInAMaze Almost random–pseudo random number generators To have a chance of replaying value or being able to compare scores between people, we really need a pseudo-random number generator. The basic JavaScript Math.random() is not a pseudo-random generator; it really gives you a totally random number every time. We need a pseudo-random number generator that takes a seed value. If you give the same seed to the random number generator, it will generate the same sequence of random numbers. (They aren't completely random but are very close.) Random number generators are a complex topic; for example, they are used in cryptography, and if your random number generator isn't completely random, someone could break your code. We aren't so worried about that, we just want repeatability. Although the UI for this may be a bit beyond the scope of this book, creating the Maze in a way that clicking on Refresh won't generate a totally different Maze is really a good thing and will avoid frustration on the part of the user. This will also allow two users to compare scores; we could persist a board number for the Maze and show this. This may be out of scope for our book; however, having a predictable Maze will help immensely during development. If it wasn't for this, you might get lost while working on your world. (Well, probably not, but it makes testing easier.) Including library code from other projects Up to this point, I've shown you how to create components in React VR (or React). JavaScript interestingly has a historical issue with include. With C++, Java, or C#, you can include a file in another file or make a reference to a file in a project. After doing that, everything in those other files, such as functions, classes, and global properties (variables), are then usable from the file that you've issued the include statement in. With a browser, the concept of "including" JavaScript is a little different. With Node.js, we use package.json to indicate what packages we need. 
To bring those packages into our code, we will use the following syntax in your .js files: var MersenneTwister = require('mersenne-twister'); Then, instead of using Math.random(), we will create a new random number generator and pass a seed, as follows: var rng = new MersenneTwister(this.props.Seed); From this point on, you just call rng.random() instead of Math.random(). We can just use npm install <package> and the require statement for properly formatted packages. Much of this can be done for you by executing the npm command: npm install mersenne-twister --save Remember, the --save command to update our manifest in the project. While we are at it, we can install another package we'll need later: npm install react-vr-gaze-button --save Now that we have a good random number generator, let's use it to complicate our world. The Maze render() How do we build a Maze? I wanted to develop some code that dynamically generates the Maze; anyone could model it in a package, but a VR world should be living. Having code that can dynamically build Maze in any size (to a point) will allow a repeat playing of your world. There are a number of JavaScript packages out there for printing mazes. I took one that seemed to be everywhere, in the public domain, on GitHub and modified it for HTML. This app consists of two parts: Maze.html and makeMaze.JS. Neither is React, but it is JavaScript. It works fairly well, although the numbers don't really represent exactly how wide it is. First, I made sure that only one x was displaying, both vertically and horizontally. This will not print well (lines are usually taller than wide), but we are building a virtually real Maze, not a paper Maze. The Maze that we generate with the files at Maze.html (localhost:8081/vr/maze.html) and the JavaScript file—makeMaze.js—will now look like this: x1xxxxxxx x x x xxx x x x x x x x x xxxxx x x x x x x x x x x x x 2 xxxxxxxxx It is a little hard to read, but you can count the squares vs. xs. Don't worry, it's going to look a lot fancier. Now that we have the HTML version of a Maze working, we'll start building the hedges. This is a slightly larger piece of code than I expected, so I broke it into pieces and loaded the Maze object onto GitHub rather than pasting the entire code here, as it's long. You can find a link for the source at: http://bit.ly/VR_Chap11 Adding the floors and type checking One of the things that look odd with a 360 Pano background, as we've talked about before, is that you can seem to "float" against the ground. One fix, other than fixing the original image, is to simply add a floor. This is what we did with the Space Gallery, and it looks pretty good as we were assuming we were floating in space anyway. For this version, let's import a ground square. We could use a large square that would encompass the entire Maze; we'd then have to resize it if the size of the Maze changes. I decided to use a smaller cube and alter it so that it's "underneath" every cell of the Maze. This would allow us some leeway in the future to rotate the squares for worn paths, water traps, or whatever. To make the floor, we will use a simple cube object that I altered slightly and is UV mapped. I used Blender for this. We also import a Hedge model, and a Gem, which will represent where we can teleport to. 
Inside 'Maze.js' we added the following code: import Hedge from './Hedge.js'; import Floor from './Hedge.js'; import Gem from './Gem.js'; Then, inside the Maze.js we could instantiate our floor with the code: <Floor X={-2} Y={-4}/> Notice that we don't use 'vr/components/Hedge.js' when we do the import; we're inside Maze.js. However, in index.vr.js to include the Maze, we do need: import Maze from './vr/components/Maze.js'; It's slightly more complicated though. In our code, the Maze builds the data structures when props have changed; when moving, if the maze needs rendering again, it simply loops through the data structure and builds a collection (mazeHedges) with all of the floors, teleport targets, and hedges in it. Given this, to create the floors, the line in Maze.js is actually: mazeHedges.push(<Floor {...cellLoc} />); Here is where I ran into two big problems, and I'll show you what happened so that you can avoid these issues. Initially, I was bashing my head against the wall trying to figure out why my floors looked like hedges. This one is pretty easy—we imported Floor from the Hedge.js file. The floors will look like hedges (did you notice this in my preceding code? If so, I did this on purpose as a learning experience. Honest). This is an easy fix. Make sure that you code import Floor from './floor.js'; note that Floor not type-checked. (It is, after all, JavaScript.) I thought this was odd, as the hedge.js file exports a Hedge object, not a Floor object, but be aware you can rename the objects as you import them. The second problem I had was more of a simple goof that is easy to occur if you aren't really thinking in React. You may run into this. JavaScript is a lovely language, but sometimes I miss a strongly typed language. Here is what I did: <Maze SizeX='4' SizeZ='4' CellSpacing='2.1' Seed='7' /> Inside the maze.js file, I had code like this: for (var j = 0; j < this.props.SizeX + 2; j++) { After some debugging, I found out that the value of j was going from 0 to 42. Why did it get 42 instead of 6? The reason was simple. We need to fully understand JavaScript to program complex apps. The mistake was in initializing SizeX to be '4' ; this makes it a string variable. When calculating j from 0 (an integer), React/JavaScript takes 2, adds it to a string of '4', and gets the 42 string, then converts it to an integer and assigns this to j. When this is done, very weird things happened. When we were building the Space Gallery, we could easily use the '5.1' values for the input to the box: <Pedestal MyX='0.0' MyZ='-5.1'/> Then, later use the transform statement below inside the class: transform: [ { translate: [ this.props.MyX, -1.7, this.props.MyZ] } ] React/JavaScript will put the string values into This.Props.MyX, then realize it needs an integer, and then quietly do the conversion. However, when you get more complicated objects, such as our Maze generation, you won't get away with this. Remember that your code isn't "really" JavaScript. It's processed. At the heart, this processing is fairly simple, but the implications can be a killer. Pay attention to what you code. With a loosely typed language such as JavaScript, with React on top, any mistakes you make will be quietly converted to something you didn't intend. You are the programmer. Program correctly. So, back to the Maze. The Hedge and Floor are straightforward copies of the initial Gem code. 
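Before we look at the Gem code, here is that type-coercion bug reduced to plain JavaScript, outside of React entirely (a standalone sketch, not from the book's source):

const SizeX = '4';               // a string, as in <Maze SizeX='4' />
console.log(SizeX + 2);          // "42" -- + concatenates when one operand is a string
console.log(Number(SizeX) + 2);  // 6   -- the value that was actually intended
// In a loop condition such as j < SizeX + 2, j therefore runs to 42 instead of 6.
// Passing the prop as a number avoids the problem entirely: <Maze SizeX={4} />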
Let's take a look at our starting Gem, although note it gets a lot more complicated later (and in your source files): import React, { Component } from 'react'; import { asset, Box, Model, Text, View } from 'react-vr'; export default class Gem extends Component { constructor() { super(); this.state = { Height: -3 }; } render() { return ( <Model source={{ gltf2: asset('TeleportGem.gltf'), }} style={{ transform: [{ translate: [this.props.X, this.state.Height, this.props.Z] }] }} /> ); } } The Hedge and Floor are essentially the same thing. (We could have made a prop be the file loaded, but we want a different behavior for the Gem, so we will edit this file extensively.) To run this sample, first, we should have created a directory as you have before, called WalkInAMaze. Once you do this, download the files from the Git source for this part of the article (http://bit.ly/VR_Chap11). Once you've created the app, copied the files, and fired it up, (go to the WalkInAMaze directory and type npm start), and you should see something like this once you look around - except, there is a bug. This is what the maze should look like (if you use the file  'MazeHedges2DoubleSided.gltf' in Hedge.js, in the <Model> statement):> Now, how did we get those neat-looking hedges in the game? (OK, they are pretty low poly, but it is still pushing it.) One of the nice things about the pace of improvement on web standards is their new features. Instead of just .obj file format, React VR now has the capability to load glTF files. Using the glTF file format for models glTF files are a new file format that works pretty naturally with WebGL. There are exporters for many different CAD packages. The reason I like glTF files is that getting a proper export is fairly straightforward. Lightwave OBJ files are an industry standard, but in the case of React, not all of the options are imported. One major one is transparency. The OBJ file format allows that, but at of the time of writing this book, it wasn't an option. Many other graphics shaders that modern hardware can handle can't be described with the OBJ file format. This is why glTF files are the next best alternative for WebVR. It is a modern and evolving format, and work is being done to enhance the capabilities and make a fairly good match between what WebGL can display and what glTF can export. This is however on interacting with the world, so I'll give a brief mention on how to export glTF files and provide the objects, especially the Hedge, as glTF models. The nice thing with glTF from the modeling side is that if you use their material specifications, for example, for Blender, then you don't have to worry that the export won't be quite right. Today's physically Based Rendering (PBR) tends to use the metallic/roughness model, and these import better than trying to figure out how to convert PBR materials into the OBJ file's specular lighting model. Here is the metallic-looking Gem that I'm using as the gaze point: Using the glTF Metallic Roughness model, we can assign the texture maps that programs, such as Substance Designer, calculate and import easily. The resulting figures look metallic where they are supposed to be metallic and dull where the paint still holds on. I didn't use Ambient Occlusion here, as this is a very convex model; something with more surface depressions would look fantastic with Ambient Occlusion. It would also look great with architectural models, for example, furniture. 
To convert your models, there is user documentation at http://bit.ly/glTFExporting. You will need to download and install the Blender glTF exporter. Or, you can just download the files I have already converted. If you do the export, in brief, you do the following steps: Download the files from http://bit.ly/gLTFFiles. You will need the gltf2_Principled.blend file, assuming that you are on a newer version of Blender. In Blender, open your file, then link to the new materials. Go to File->Link, then choose the gltf2_Principled.blend file. Once you do that, drill into "NodeTree" and choose either glTF Metallic Roughness (for metal), or glTF specular glossiness for other materials. Choose the object you are going to export; make sure that you choose the Cycles renderer. Open the Node Editor in a window. Scroll down to the bottom of the Node Editor window, and make sure that the box Use Nodes is checked. Add the node via the nodal menu, Add->Group->glTF Specular Glossiness or Metallic Roughness. Once the node is added, go to Add->Texture->Image texture. Add as many image textures as you have image maps, then wire them up. You should end up with something similar to this diagram. To export the models, I recommend that you disable camera export and combine the buffers unless you think you will be exporting several models that share geometry or materials. The Export options I used are as follows: Now, to include the exported glTF object, use the <Model> component as you would with an OBJ file, except you have no MTL file. The materials are all described inside the .glTF file. To include the exported glTF object, you just put the filename as a gltf2 prop in the <Model: <Model source={{ gltf2: asset('TeleportGem2.gltf'),}} ... To find out more about these options and processes, you can go to the glTF export web site. This site also includes tutorials on major CAD packages and the all-important glTF shaders (for example, the Blender model I showed earlier). I have loaded several .OBJ files and .glTF files so you can experiment with different combinations of low poly and transparency. When glTF support was added in React VR version 2.0.0, I was very excited as transparency maps are very important for a lot of VR models, especially vegetation; just like our hedges. However, it turns out there is a bug in WebGL or three.js that does not render the transparency properly. As a result, I have gone with a low polygon version in the files on the GitHub site; the pictures, above, were with the file MazeHedges2DoubleSided.gltf in the Hedges.js file (in vr/components). If you get 404 errors, check the paths in the glTF file. It depends on which exporter you use—if you are working with Blender, the gltf2 exporter from the Khronos group calculates the path correctly, but the one from Kupoman has options, and you could export the wrong paths. We discussed important mechanics of props, state, and events. We also discussed how to create a maze using pseudo-random number generators to make sure that our props and state didn't change chaotically. To know more about how to create, move around in, and make worlds react to us in a Virtual Reality world, including basic teleport mechanics, do check out this book Getting Started with React VR.  Read More: Google Daydream powered Lenovo Mirage solo hits the market Google open sources Seurat to bring high precision graphics to Mobile VR Oculus Go, the first stand alone VR headset arrives!
Upgrading, packaging, and publishing your React VR app

Sunith Shetty
08 Jun 2018
19 min read
It is fun to develop and experience virtual worlds at home. Eventually, though, you want the world to see your creation. To do that, we need to package and publish our app. In the course of development, upgrades to React may come along; before publishing, you will need to decide whether you need to "code freeze" and ship with a stable version, or upgrade to a new version. This is a design decision. In today’s tutorial, we will learn to upgrade React VR and bundle the code in order to publish on the web. This article is an excerpt from a book written by John Gwinner titled Getting Started with React VR. This book will get you well-versed with Virtual Reality (VR) and React VR components to create your own VR apps. One of the neat things, although it can be frustrating, is that web projects are frequently updated.  There are a couple of different ways to do an upgrade: You can install/create a new app with the same name You will then go to your old app and copy everything over This is a facelift upgrade or Rip and Replace Do an update. Mostly, this is an update to package.json, and then delete node_modules and rebuild it. This is an upgrade in place. It is up to you which method you use, but the major difference is that an upgrade in place is somewhat easier—no source code to modify and copy—but it may or may not work. A Facelift upgrade also relies on you using the correct react-vr-cli. There is a notice that runs whenever you run React VR from the Command Prompt that will tell you whether it's old: The error or warning that comes up about an upgrade when you run React VR from a Command Prompt may fly by quickly. It takes a while to run, so you may go away for a cup of coffee. Pay attention to red lines, seriously. To do an upgrade in place, you will typically get an update notification from Git if you have subscribed to the project. If you haven't, you should go to: http://bit.ly/ReactVR, create an account (if you don't have one already), and click on the eyeball icon to join the watch list. Then, you will get an email every time there is an upgrade. We will cover the most straightforward way to do an upgrade—upgrade in place, first. Upgrading in place How do you know what version of React you have installed? From a Node.js prompt, type this: npm list react-vr Also, check the version of react-vr-web: npm list react-vr-web Check the version of react-vr-cli (the command-line interface, really only for creating the hello world app). npm list react-vr-cli Check the version of ovrui (open VR's user interface): npm list ovrui You can check these against the versions on the documentation. If you've subscribed to React VR on GitHub (and you should!), then you will get an email telling you that there is an upgrade. Note that the CLI will also tell you if it is out of date, although this only applies when you are creating a new application (folder/website). The release notes are at: http://bit.ly/VRReleases . There, you will find instructions to upgrade. The upgrade instructions usually have you do the following: Delete your node_modules directory. Open your package.json file. Update react-vr, react-vr-web, and ovrui to "New version number" for example, 2.0.0. Update react to "a.b.c". Update react-native to "~d.e.f". Update three to "^g.h.k". Run npm install or yarn. Note the ~ and ^ symbols; ~version means approximately equivalent to version and ^version means compatible with version. 
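As an illustration of how npm interprets those prefixes (this is standard semver behavior, not anything specific to React VR):

"react-vr": "~2.0.0"    matches any 2.0.x patch release (>= 2.0.0, < 2.1.0)
"three":    "^0.87.0"   matches any 0.87.x release (for 0.y.z versions, ^ floats only the patch)
"react":    "16.0.0"    matches exactly 16.0.0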
This is a help, as you may have other packages that may want other versions of react-native and three, specifically. To get the values of {a...k}, refer to the release notes. I have also found that you may need to include these modules in the devDependencies section of package.json: "react-devtools": "^2.5.2", "react-test-renderer": "16.0.0", You may see this error: module.js:529 throw err; ^ Error: Cannot find module './node_modules/react-native/packager/blacklist' If you do, make the following changes in your projects root folder in the rncli.config.js file. Replace the var blacklist = require('./node_modules/react-native/packager/blacklist'); line with var blacklist = require('./node_modules/metro-bundler/src/blacklist');. Third-party dependencies If you have been experimenting and adding modules with npm install <something>, you may find, after an upgrade, that things do not work. The package.json file also needs to know about all the additional packages you installed during experimentation. This is the project way (npm way) to ensure that Node.js knows we need a particular piece of software. If you have this issue, you'll need to either repeat the install with the—save parameter, or edit the dependencies section in your package.json file. { "name": "WalkInAMaze", "version": "0.0.1", "private": true, "scripts": { "start": "node -e "console.log('open browser at http://localhost:8081/vr/\n\n');" && node node_modules/react-native/local-cli/cli.js start", "bundle": "node node_modules/react-vr/scripts/bundle.js", "open": "node -e "require('xopen')('http://localhost:8081/vr/')"", "devtools": "react-devtools", "test": "jest" }, "dependencies": { "ovrui": "~2.0.0", "react": "16.0.0", "react-native": "~0.48.0", "three": "^0.87.0", "react-vr": "~2.0.0", "react-vr-web": "~2.0.0", "mersenne-twister": "^1.1.0" }, "devDependencies": { "babel-jest": "^19.0.0", "babel-preset-react-native": "^1.9.1", "jest": "^19.0.2", "react-devtools": "^2.5.2", "react-test-renderer": "16.0.0", "xopen": "1.0.0" }, "jest": { "preset": "react-vr" } } Again, this is the manual way; a better way is to use npm install <package> -save. The -s qualifier saves the new package you've installed in package.json. The manual edits can be handy to ensure that you've got the right versions if you get a version mismatch. If you mess around with installing and removing enough packages, you will eventually mess up your modules. If you get errors even after removing node_modules, issue these commands: npm cache clean --force npm start -- --reset-cache The cache clean won't do it by itself; you need the reset-cache, otherwise, the problem packages will still be saved, even if they don't physically exist! Really broken upgrades – rip and replace If, however, after all that work, your upgrade still does not work, all is not lost. We can do a rip and replace upgrade. Note that this is sort of a "last resort", but it does work fairly well. Follow these steps: Ensure that your react-vr-cli package is up to date, globally: [F:ReactVR]npm install react-vr-cli -g C:UsersJohnAppDataRoamingnpmreact-vr -> C:UsersJohnAppDataRoamingnpmnode_modulesreact-vr-cliindex.js + react-vr-cli@0.3.6 updated 8 packages in 2.83s This is important, as when there is a new version of React, you may not have the most up-to-date react-vr-cli. It will tell you when you use it that there is a newer version out, but that line frequently scrolls by; if you get bored and don't note, you can spend a lot of time trying to install an updated version, to no avail. 
An npm generates a lot of verbiage, but it is important to read what it says, especially red formatted lines. Ensure that all CLI (DOS) windows, editing sessions, Node.js running CLIs, and so on, are closed. (You shouldn't need to reboot, however; just close everything using the old directory). Rename the old code to MyAppName140 (add a version number to the end of the old react-vr directory). Create the application, using react-vr init MyAppName, in other words, the original app name. The next step is easiest using a diff program (refer to http://bit.ly/WinDiff). I use Beyond Compare, but there are other ones too. Choose one and install it, if needed. Compare the two directories, .MyAppName (new) and .MyAppName140, and see what files have changed. Move over any new files from your old app, including assets (you can probably copy over the entire static_assets folder). Merge any files that have changed, except package.json. Generally, you will need to merge these files: index.vr.js client.js (if you changed it) For package.json, see what lines have been added, and install those packages in the new app via npm install <missed package> --save, or start the app and see what is missing. Remove any files seeded by the hello world app, such as chess-world.jpg (unless you are using that background, of course). Usually, you don't change the rn-cli.config.js file (unless you modified the seeded version). Most code will move directly over. Ensure that you change the application name if you changed the directory name, but with the preceding directions, you won't have to. The preceding list of upgrade steps may be slightly easier if there are massive changes to React VR; it will require some picking through source files. The source is pretty straightforward, so this should be easy in practice. I found that these techniques will work best if the automatic upgrade did not work. As mentioned earlier, the time to do a major upgrade probably is not right before publishing the app, unless there is some new feature you need. You want to adequately test your app to ensure that there aren't any bugs. I'm including the upgrade steps here, though, but not because you should do it right before publishing. Getting your code ready to publish Honestly, you should never put off organizing your clothes until, oh, wait, we're talking about code. You should never put off organizing your code until the night you want to ship it. Even the code you think is throw away may end up in production. Learn good coding habits and style from the beginning. Good code organization Good code, from the very start, is very important for many reasons: If your code uses sloppy indentation, it's more difficult to read. Many code editors, such as Visual Studio Code, Atom, and Webstorm, will format code for you, but don't rely on these tools. Poor naming conventions can hide problems. An improper case on variables can hide problems, such as using this.State instead of this.state. Most of the time spent coding, as much as 80%, is in maintenance. If you can't read the code, you can't maintain it. When you're a starting out programmer, you frequently think you'll always be able to read your own code, but when you pick up a piece years later and say "Who wrote this junk?" and then realize it was you, you will quit doing things like a, b, c, d variable names and the like. Most software at some point is maintained, read, copied, or used by someone other than the author. 
Most programmers think code standards are for "the other guy," yet complain when they have to code well. Who then does? Most programmers will immediately ask for the code documentation and roll their eyes when they don't find it. I usually ask to see the documentation they wrote for their last project. Every programmer I've hired usually gives me a deer in the headlights look. This is why I usually require good comments in the code. A good comment is not something like this: //count from 99 to 1 for (i=99; i>0; i--) ... A good comment is this: //we are counting bottles of beer for (i=99; i>0; i--) ... Cleaning the lint trap (checking code standards) When you wash clothes, the lint builds up and will eventually clog your washing machine or dryer, or cause a fire. In the PC world, old code, poorly typed names, and all can also build up. Refactoring is one way to clean up the code. I highly recommend that you use some form of version control, such as Git or bitbucket to check your code; while refactoring, it's quite possible to totally mess up your code and if you don't use version control, you may lose a lot of work. A great way to do a code review of your work, before you publish, is to use a linter. Linters go through your code and point out problems (crud), improper syntax, things that may work differently than you intend, and generally try to pick up your room after you, like your mom does. While you might not like it if your mom does that, these tools are invaluable. Computers are, after all, very picky and why not use the machines against each other? One of the most common ways to let software check your software for JavaScript is a program called ESLint. You can read about it at: http://bit.ly/JSLinter. To install ESLint, you can do it via npm like most packages—npm install eslint --save-dev. The --save-dev option puts a requirement in your project while you are developing. Once you've published your app, you won't need to pack the ESLint information with your project! There are a number of other things you need to get ESLint to work properly; read the configuration pages and go through the tutorials. A lot depends on what IDE you use. You can use ESLint with Visual Studio, for example. Once you've installed ESLint, you need to configure a local configuration file. Do this with eslint --init. The --init command will display a prompt that will ask you how to configure the rules it will follow. It will ask a series of questions, and ask what style to use. AirBNB is fairly common, although you can use others; there's no wrong choice. If you are working for a company, they may already have standards, so check with management. One of the prompts will ask if you need React. React VR coding style Coding style can be nearly religious, but in the JavaScript and React world, some standards are very common. AirBNB has one good, fairly well–regarded style guide at: http://bit.ly/JStyle. For React VR, some style options to consider are as follows: Use lowercase for the first letter of a variable name. In other words, this.props.currentX, not this.props.CurrentX, and don't use underscores (this is called camelCase). Use PascalCase only when naming constructors or classes. As you're using PascalCase for files, make the filename match the class, so   import MyClass from './MyClass'. Be careful about 0 vs {0}. In general, learn JavaScript and React. Always use const or let to declare variables to avoid polluting the global namespace. Avoid using ++ and --. 
This one was hard for me, being a C++ programmer. Hopefully, by the time you've read this, I've fixed it in the source examples. If not, do as I say, not as I do! Learn the difference between == and ===, and use them properly, another thing that is new for C++ and C# programmers. In general, I highly recommend that you pour over these coding styles and use a linter when you write your code: Third-party dependencies For your published website/application to really work reliably, we also need to update package.json; this is sort of the "project" way to ensure that Node.js knows we need a particular piece of software. We will edit the "dependencies" section to add the last line,(bold emphasis mine, bold won't show up in a text editor, obviously!): { "name": "WalkInAMaze", "version": "0.0.1", "private": true, "scripts": { "start": "node -e "console.log('open browser at http://localhost:8081/vr/\n\n');" && node node_modules/react-native/local-cli/cli.js start", "bundle": "node node_modules/react-vr/scripts/bundle.js", "open": "node -e "require('xopen')('http://localhost:8081/vr/')"", "devtools": "react-devtools", "test": "jest" }, "dependencies": { "ovrui": "~2.0.0", "react": "16.0.0", "react-native": "~0.48.0", "three": "^0.87.0", "react-vr": "~2.0.0", "react-vr-web": "~2.0.0", "mersenne-twister": "^1.1.0" }, "devDependencies": { "babel-jest": "^19.0.0", "babel-preset-react-native": "^1.9.1", "jest": "^19.0.2", "react-devtools": "^2.5.2", "react-test-renderer": "16.0.0", "xopen": "1.0.0" }, "jest": { "preset": "react-vr" } } This is the manual way; a better way is to use npm install <package> -s. The -s qualifier saves the new package you've installed in package.json. The manual edits can be handy to ensure that you've got the right versions, if you get a version mismatch. If you mess around with installing and removing enough packages, you will eventually mess up your modules. If you get errors, even after removing node_modules, issue these commands: npm start -- --reset-cache npm cache clean --force The cache clean won't do it by itself; you need the reset–cache, otherwise the problem packages will still be saved, even if they don't physically exist! Bundling for publishing on the web Assuming that you have your project dependencies set up correctly to get your project to run from a web server, typically through an ISP or service provider, you need to "bundle" it. React VR has a script that will package up everything into just a few files. Note, of course, that your desktop machine counts as a "web server", although I wouldn't recommend that you expose your development machine to the web. The better way to have other people experience your new Virtual Reality is to bundle it and put it on a commercial web service. Packaging React VR for release on a website The basic process is easy with the React VR provided script: Go to the VR directory where you normally run npm start, and run the npm run bundle command: You will then go to your website the same way you normally upload files, and create a directory called vr. In your project directory, in our case f:ReactVRWalkInAMaze, find the following files in .VRBuild: client.bundle.js index.bundle.js Copy those to your website. Make a directory called static_assets. Copy all of your files (that your app uses) from AppNamestatic_assets to the new static_assets folder. Ensure that you have MIME mapping set up for all of your content; in particular, .obj, .mtl, and .gltf files may need new mappings. 
Check with your web server documentation: For gltf files, use model/gltf-binary Any .bin files used by gltf should be application/octet-stream For .obj files, I've used application/octet-stream The official list is at http://bit.ly/MimeTypes Very generally, application/octet-stream will send the files "exactly" as they are on the server, so this is sort of a general purpose "catch all" Copy the index.html from the root of your application to the directory on your website where you are publishing the app; in our case, it'll be the vr directory, so the file is alongside the two .js files. Modify index.html for the following lines (note the change to ./index.vr): <html> <head> <title>WalkInAMaze</title> <style>body { margin: 0; }</style> <meta name="viewport" content="width=device-width, initial-scale=1, user-scalable=no"> </head> <body> <!-- When you're ready to deploy your app, update this line to point to your compiled client.bundle.js --> <script src="./client.bundle?platform=vr"></script> <script> // Initialize the React VR application ReactVR.init( // When you're ready to deploy your app, update this line to point to // your compiled index.bundle.js './index.vr.bundle?platform=vr&dev=false', // Attach it to the body tag document.body ); </script> </body> </html> Note for a production release, which means if you're pointing to a prebuilt bundle on a static web server and not the React Native bundler, the dev and platform flags actually won't do anything, so there's no difference between dev=true, dev=false, or even dev=foobar. Obtaining releases and attribution If you used any assets from anywhere on the web, ensure that you have the proper release. For example, many Daz3D or Poser models do not include the rights to publish the geometry information; including these on your website as an OBJ or glTF file may be a violation of that agreement. Someone could easily download the model, or nearly all the geometry fairly easily, and then use it for something else. I am not a lawyer; you should check with wherever you get your models to ensure that you have permission, and if necessary, attribute properly. Attribution licenses are a little difficult with a VR world, unless you embed the attribution into a graphic somewhere; as we've seen, adding text can sometimes be distracting, and you will always have scale issues. If you embed a VR world in a page with <iframe>, you can always give proper attribution on the HTML side. However, this isn't really VR. Checking image sizes and using content delivery sites Some of the images you use, especially the ones in a <pano> statement, can be quite large. You may need to optimize these for proper web speed and responsiveness. This is a fairly general topic, but one thing that can help is a content delivery network (CDN), especially if your world will be a high-volume one. Adding a CDN to your web server is easy. You host your asset files from a separate location, and you pass the root directory as the assetRoot at the ReactVR.init() call. For example, if your files were hosted at https://cdn.example.com/vr_assets/, you would change the method call in index.html to include the following third argument: ReactVR.init( './index.bundle.js?platform=vr&dev=false', document.body, { assetRoot: 'https://cdn.example.com/vr_assets/' } ); Optimizing your models If you were watching the web console, you may have noted this model being loaded over and over. It is not necessarily the most efficient way. 
Consider other techniques, such as passing a model to the various child components as a prop. Polygon decimation is another technique that is very valuable in optimizing models for the web and VR. With the glTF file format, you can use "normal maps" and still make a low-polygon model look like a high-resolution one. Techniques for doing this are well documented in the game development field, and they really do work. You should also optimize models so they don't include unseen geometry. If you are showing a car model with blacked-out windows, for example, there is no need to load engine and interior detail (unless the windows are transparent). This sounds obvious, although I found the lamp that I used to illustrate the lighting examples had almost triple the number of polygons needed; the glass lamp shade had inner and outer polygons hidden inside the model. We learned to do version upgrades and, if need be, rip and replace upgrades. We further discussed when to do an upgrade and how to publish the app on the web. If you are interested in learning how to include existing high-performance web code in a VR app, you may refer to the book Getting Started with React VR.

Build a Virtual Reality Solar System in Unity for Google Cardboard
Understanding the hype behind Magic Leap's New Augmented Reality Headsets
Leap Motion open sources its $100 augmented reality headset, North Star
Building VR objects in React VR 2.0: Getting started with polygons in Blender

Sunith Shetty
05 Jun 2018
11 min read
A polygon is an n-sided object composed of vertices (points), edges, and faces. A face can face in or out or be double-sided. For most real-time VR, we use single–sided polygons; we noticed this when we first placed a plane in the world, depending on the orientation, you may not see it. In today’s tutorial, we will understand why Polygons are the best way to present real-time graphics. To really show how this all works, I'm going to show the internal format of an OBJ file. Normally, you won't hand edit these — we are beyond the days of VR constructed with a few thousand polygons (my first VR world had a train that represented downloads, and it had six polygons, each point lovingly crafted by hand), so hand editing things isn't necessary, but you may need to edit the OBJ files to include the proper paths or make changes your modeler may not do natively–so let's dive in! This article is an excerpt from a book written by John Gwinner titled Getting Started with React VR. In this book, you'll gain a deeper understanding of Virtual Reality and a full-fledged  VR app to add to your profile. Polygons are constructed by creating points in 3D space, and connecting them with faces. You can consider that vertices are connected by lines (most modelers work this way), but in the native WebGL that React VR is based on, it's really just faces. The points don't really exist by themselves, but more or less "anchor" the corners of the polygon. For example, here is a simple triangle, modeled in Blender: In this case, I have constructed a triangle with three vertices and one face (with just a flat color, in this case green). The edges, shown in yellow or lighter shade, are there for the convenience of the modeler and won't be explicitly rendered. Here is what the triangle looks like inside our gallery: If you look closely in the Blender photograph, you'll notice that the object is not centered in the world. When it exports, it will export with the translations that you have applied in Blender. This is why the triangle is slightly off center on the pedestal. The good news is that we are in outer space, floating in orbit, and therefore do not have to worry about gravity. (React VR does not have a physics engine, although it is straightforward to add one.) The second thing you may notice is that the yellow lines (lighter gray lines in print) around the triangle in Blender do not persist in the VR world. This is because the file is exported as one face, which connects three vertices. The plural of vertex is vertices, not vertexes. If someone asks you about vertexes, you can laugh at them almost as much as when someone pronouncing Bézier curve as "bez ee er." Ok, to be fair, I did that once, now I always say Beh zee a. Okay, all levity aside, now let's make it look more interesting than a flat green triangle. This is done through something usually called as texture mapping. Honestly, the phrase "textures" and "materials" often get swapped around interchangeably, although lately they have sort of settled down to materials meaning anything about an object's physical appearance except its shape; a material could be how shiny it is, how transparent it is, and so on. A texture is usually just the colors of the object — tile is red, skin may have freckles — and is therefore usually called a texture map which is represented with a JPG, TGA, or other image format. There is no real cross software file format for materials or shaders (which are usually computer code that represents the material). 
When it comes time to render, there are some shader languages that are standard, although these are not always used in CAD programs. You will need to learn what your CAD program uses, and become proficient in how it handles materials (and texture maps). This is far beyond the scope of this book. The OBJ file format (which is what React VR usually uses) allows the use of several different texture maps to properly construct the material. It also can indicate the material itself via parameters coded in the file. First, let's take a look at what the triangle consists of. We imported OBJ files via the Model keyword: <Model source={{ obj: asset('OneTri.obj'), mtl: asset('OneTri.mtl'), }} style={{ transform: [ { translate: [ -0, -1, -5. ] }, { scale: .1 }, ] }} /> First, let's open the MTL (material) file (as the .obj file uses the .mtl file). The OBJ file format was developed by Wavefront: # Blender MTL File: 'OneTri.blend' # Material Count: 1 newmtl BaseMat Ns 96.078431 Ka 1.000000 1.000000 1.000000 Kd 0.040445 0.300599 0.066583 Ks 0.500000 0.500000 0.500000 Ke 0.000000 0.000000 0.000000 Ni 1.000000 d 1.000000 illum 2 A lot of this is housekeeping, but the important things are the following parameters: Ka : Ambient color, in RGB format Kd : Diffuse color, in RGB format Ks : Specular color, in RGB format Ns : Specular exponent, from 0 to 1,000 d : Transparency (d meant dissolved). Note that WebGL cannot normally show refractive materials, or display real volumetric materials and raytracing, so d is simply the percentage of how much light is blocked. 1 (the default) is fully opaque. Note that d in the .obj specification works for illum mode 2. Tr : Alternate representation of transparency; 0 is fully opaque. illum <#> (a number from 0 to 10). Not all illumination models are supported by WebGL. The current list is: Color on and Ambient off. Color on and Ambient on. Highlight on (and colors) <= this is the normal setting. There are other illumination modes, but are currently not used by WebGL. This of course, could change. Ni is optical density. This is important for CAD systems, but the chances of it being supported in VR without a lot of tricks are pretty low.  Computers and video cards get faster and faster all the time though, so maybe optical density and real time raytracing will be supported in VR eventually, thanks to Moore's law (statistically, computing power roughly doubles every two years or so). Very important: Make sure you include the "lit" keyword with all of your model declarations, otherwise the loader will assume you have only an emissive (glowing) object and will ignore most of the parameters in the material file! YOU HAVE BEEN WARNED. It'll look very weird and you'll be completely confused. Don't ask me why I know! The OBJ file itself has a description of the geometry. These are not usually something you can hand edit, but it's useful to see the overall structure. For the simple object, shown before, it's quite manageable: # Blender v2.79 (sub 0) OBJ File: 'OneTri.blend' # www.blender.org mtllib OneTri.mtl o Triangle v -7.615456 0.218278 -1.874056 v -4.384528 15.177612 -6.276536 v 4.801097 2.745610 3.762014 vn -0.445200 0.339900 0.828400 usemtl BaseMat s off f 3//1 2//1 1//1 First, you see a comment (marked with #) that tells you what software made it, and the name of the original file. This can vary. The mtllib is a call out to a particular material file, that we already looked at. 
The o lines (and g line is if there a group) define the name of the object and group; although React VR doesn't  really  use these (currently), in most modeling packages this will be listed in the hierarchy of objects. The v and vn keywords are where it gets interesting, although these are still not something visible. The v keyword creates a vertex in x, y, z space. The vertices built will later be connected into polygons. The vn establishes the normal for those objects, and vt will create the texture coordinates of the same points. More on texture coordinates in a bit. The usemtl BaseMat establishes what material, specified in your .mtl file, that will be used for the following faces. The s off means smoothing is turned off. Smoothing and vertex normals can make objects look smooth, even if they are made with very few polygons. For example, take a look at these two teapots; the first is without smoothing. Looks pretty computer graphics like, right? Now, have a look at the same teapot with the "s 1" parameter specified throughout, and normals included in the file.  This is pretty normal (pun intended), what I mean is most CAD software will compute normals for you. You can make normals; smooth, sharp, and add edges where needed. This adds detail without excess polygons and is fast to render. The smooth teapot looks much more real, right? Well, we haven't seen anything yet! Let's discuss texture. I didn't used to like Sushi because of the texture. We're not talking about that kind of texture. Texture mapping is a lot like taking a piece of Christmas wrapping paper and putting it around an odd shaped object. Just like when you get that weird looking present at Christmas and don't know quite what to do, sometimes doing the wrapping doesn't have a clear right way to do it. Boxes are easy, but most interesting objects aren't always a box. I found this picture online with the caption "I hope it's an X-Box." The "wrapping" is done via U, V coordinates in the CAD system. Let's take a look at a triangle, with proper UV coordinates. We then go get our wrapping paper, that is to say, we take an image file we are going to use as the texture, like this: We then wrap that in our CAD program by specifying this as a texture map. We'll then export the triangle, and put it in our world. You would probably have expected to see "left and bottom" on the texture map. Taking a closer look in our modeling package (Blender still) we see that the default UV mapping (using Blender's standard tools) tries to use as much of the texture map as possible, but from an artistic standpoint, may not be what we want. This is not to show that Blender is "yer doin' it wrong" but to make the point that you've got to check the texture mapping before you export. Also, if you are attempting to import objects without U,V coordinates, double-check them! If you are hand editing an .mtl file, and your textures are not showing up, double–check your .obj file and make sure you have vt lines; if you don't, the texture will not show up. This means the U,V coordinates for the texture mapping were not set. Texture mapping is non-trivial; there is quite an art about it and even entire books written about texturing and lighting. After having said that, you can get pretty far with Blender and any OBJ file if you've downloaded something from the internet and want to make it look a little better. We'll show you how to fix it. The end goal is to get a UV map that is more usable and efficient. 
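For reference, a UV-mapped version of our triangle would add vt lines, and each face entry would then index them in v/vt/vn form. The following is a hand-written sketch reusing the earlier vertex data (the UV values are made up for illustration; this is not an export from the book's files):

v -7.615456 0.218278 -1.874056
v -4.384528 15.177612 -6.276536
v 4.801097 2.745610 3.762014
vt 0.000000 0.000000
vt 1.000000 0.000000
vt 0.500000 1.000000
vn -0.445200 0.339900 0.828400
usemtl BaseMat
f 3/3/1 2/2/1 1/1/1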
Not all OBJ file exporters export proper texture maps, and .obj files you find online may or may not have UVs set. You can use Blender to fix the unwrapping of your model. We have several good Blender books to give you a head start on it. You can also use your favorite CAD modeling program, such as Max, Maya, Lightwave, Houdini, and so on. This is important, so I'll mention it again in an info box. If you already use a different polygon modeler or CAD package, you don't have to learn Blender; your program will undoubtedly work fine. You can skim this section. If you don't want to learn Blender, you can download all of the files that we construct from the GitHub link; you'll need some of the image files if you work through the examples. Files for this article are at: http://bit.ly/VR_Chap7. To summarize, we learned the basics of polygon modeling with Blender, got to know the importance of polygon budgets, how to export those models, and the details of the OBJ/MTL file formats. To know more about how to make virtual worlds look real, do check out the book Getting Started with React VR.

Top 7 modern Virtual Reality hardware systems
Types of Augmented Reality targets
Unity plugins for augmented reality application development
Building RESTful web services with Kotlin

Natasha Mathur
01 Jun 2018
9 min read
Kotlin has been eating up the Java world. It has already become a hit in the Android Ecosystem which was dominated by Java and is welcomed with open arms. Kotlin is not limited to Android development and can be used to develop server-side and client-side web applications as well. Kotlin is 100% compatible with the JVM so you can use any existing frameworks such as Spring Boot, Vert.x, or JSF for writing Java applications. In this tutorial, we will learn how to implement RESTful web services using Kotlin. This article is an excerpt from the book 'Kotlin Programming Cookbook', written by, Aanand Shekhar Roy and Rashi Karanpuria. Setting up dependencies for building RESTful services In this recipe, we will lay the foundation for developing the RESTful service. We will see how to set up dependencies and run our first SpringBoot web application. SpringBoot provides great support for Kotlin, which makes it easy to work with Kotlin. So let's get started. We will be using IntelliJ IDEA and Gradle build system. If you don't have that, you can get it from https://www.jetbrains.com/idea/. How to do it… Let's follow the given steps to set up the dependencies for building RESTful services: First, we will create a new project in IntelliJ IDE. We will be using the Gradle build system for maintaining dependency, so create a Gradle project: When you have created the project, just add the following lines to your build.gradle file. These lines of code contain spring-boot dependencies that we will need to develop the web app: buildscript { ext.kotlin_version = '1.1.60' // Required for Kotlin integration ext.spring_boot_version = '1.5.4.RELEASE' repositories { jcenter() } dependencies { classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version" // Required for Kotlin integration classpath "org.jetbrains.kotlin:kotlin-allopen:$kotlin_version" // See https://kotlinlang.org/docs/reference/compiler-plugins.html#kotlin-spring-compiler-plugin classpath "org.springframework.boot:spring-boot-gradle-plugin:$spring_boot_version" } } apply plugin: 'kotlin' // Required for Kotlin integration apply plugin: "kotlin-spring" // See https://kotlinlang.org/docs/reference/compiler-plugins.html#kotlin-spring-compiler-plugin apply plugin: 'org.springframework.boot' jar { baseName = 'gs-rest-service' version = '0.1.0' } sourceSets { main.java.srcDirs += 'src/main/kotlin' } repositories { jcenter() } dependencies { compile "org.jetbrains.kotlin:kotlin-stdlib:$kotlin_version" // Required for Kotlin integration compile 'org.springframework.boot:spring-boot-starter-web' testCompile('org.springframework.boot:spring-boot-starter-test') } Let's now create an App.kt file in the following directory hierarchy: It is important to keep the App.kt file in a package (we've used the college package). Otherwise, you will get an error that says the following: ** WARNING ** : Your ApplicationContext is unlikely to start due to a `@ComponentScan` of the default package. The reason for this error is that if you don't include a package declaration, it considers it a "default package," which is discouraged and avoided. Now, let's try to run the App.kt class. 
We will put the following code to test if it's running: @SpringBootApplication open class App { } fun main(args: Array<String>) { SpringApplication.run(App::class.java, *args) } Now run the project; if everything goes well, you will see output with the following line at the end: Started AppKt in 5.875 seconds (JVM running for 6.445) We now have our application running on our embedded Tomcat server. If you go to http://localhost:8080, you will see an error as follows: The preceding error is 404 error and the reason for that is we haven't told our application to do anything when a user is on the / path. Creating a REST controller In the previous recipe, we learned how to set up dependencies for creating RESTful services. Finally, we launched our backend on the http://localhost:8080 endpoint but got 404 error as our application wasn't configured to handle requests at that path (/). We will start from that point and learn how to create a REST controller. Let's get started! We will be using IntelliJ IDE for coding purposes. For setting up of the environment, refer to the previous recipe. You can also find the source in the repository at https://gitlab.com/aanandshekharroy/kotlin-webservices. How to do it… In this recipe, we will create a REST controller that will fetch us information about students in a college. We will be using an in-memory database using a list to keep things simple: Let's first create a Student class having a name and roll number properties: package college class Student() { lateinit var roll_number: String lateinit var name: String constructor( roll_number: String, name: String): this() { this.roll_number = roll_number this.name = name } } Next, we will create the StudentDatabase endpoint, which will act as a database for the application: @Component class StudentDatabase { private val students = mutableListOf<Student>() } Note that we have annotated the StudentDatabase class with @Component, which means its lifecycle will be controlled by Spring (because we want it to act as a database for our application). We also need a @PostConstruct annotation, because it's an in-memory database that is destroyed when the application closes. So we would like to have a filled database whenever the application launches. So we will create an init method, which will add a few items into the "database" at startup time: @PostConstruct private fun init() { students.add(Student("2013001","Aanand Shekhar Roy")) students.add(Student("2013165","Rashi Karanpuria")) } Now, we will create a few other methods that will help us deal with our database: getStudent: Gets the list of students present in our database: fun getStudents()=students addStudent: This method will add a student to our database: fun addStudent(student: Student): Boolean { students.add(student) return true } Now let's put this database to use. We will be creating a REST controller that will handle the request. We will create a StudentController and annotate it with @RestController. Using @RestController is simple, and it's the preferred method for creating MVC RESTful web services. Once created, we need to provide our database using Spring dependency injection, for which we will need the @Autowired annotation. Here's how our StudentController looks: @RestController class StudentController { @Autowired private lateinit var database: StudentDatabase } Now we will set our response to the / path. We will show the list of students in our database. For that, we will simply create a method that lists out students. 
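One database method the controller will rely on shortly, getStudentWithRollNumber, never appears in this excerpt. A minimal sketch consistent with the in-memory list above might be the following (the nullable return type is an assumption on our part):

// Returns the first student matching the roll number, or null if none exists
fun getStudentWithRollNumber(roll_number: String): Student? =
    students.firstOrNull { it.roll_number == roll_number }

With that helper in place, back to the method that lists out students.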
Now we will set our response for the / path: we will show the list of students in our database. For that, we simply create a method that lists the students, annotate it with @RequestMapping, and provide parameters such as the path and the request method (GET, POST, and so on):

```kotlin
@RequestMapping("", method = arrayOf(RequestMethod.GET))
fun students() = database.getStudents()
```

This is what our controller looks like now. It is a simple REST controller:

```kotlin
package college

import org.springframework.beans.factory.annotation.Autowired
import org.springframework.web.bind.annotation.RequestMapping
import org.springframework.web.bind.annotation.RequestMethod
import org.springframework.web.bind.annotation.RestController

@RestController
class StudentController {
    @Autowired
    private lateinit var database: StudentDatabase

    @RequestMapping("", method = arrayOf(RequestMethod.GET))
    fun students() = database.getStudents()
}
```

Now when you restart the server and go to http://localhost:8080, you will see the list of students returned as a JSON array. As you can see, Spring is intelligent enough to provide the response in the JSON format, which makes it easy to design APIs.

Now let's create another endpoint that fetches a student's details from a roll number:

```kotlin
@GetMapping("/student/{roll_number}")
fun studentWithRollNumber(
    @PathVariable("roll_number") roll_number: String) =
    database.getStudentWithRollNumber(roll_number)
```

Now, if you try the http://localhost:8080/student/2013001 endpoint, you will see the given output:

```
{"roll_number":"2013001","name":"Aanand Shekhar Roy"}
```

Next, we will add a student to the database. We will be doing it via the POST method:

```kotlin
@RequestMapping("/add", method = arrayOf(RequestMethod.POST))
fun addStudent(@RequestBody student: Student) =
    if (database.addStudent(student)) student
    else throw Exception("Something went wrong")
```

There's more…

So far, our server has been dependent on the IDE. We would definitely want to make it independent of an IDE. Thanks to Gradle, it is very easy to create a runnable JAR, just with the following:

```
./gradlew clean bootRepackage
```

The preceding command is platform independent and uses the Gradle build system to build the application. Now, you just need to type the following command to run it:

```
java -jar build/libs/gs-rest-service-0.1.0.jar
```

You can then see the following output as before:

```
Started AppKt in 4.858 seconds (JVM running for 5.548)
```

This means your server is running successfully.
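With the standalone JAR running, you can exercise the endpoints from the command line. The following curl calls are a quick usage sketch (the roll number and name in the POST payload are made-up sample data, shaped after the Student class above):

```
# List all students
curl http://localhost:8080/

# Fetch a single student by roll number
curl http://localhost:8080/student/2013001

# Add a new student (hypothetical sample data)
curl -X POST -H "Content-Type: application/json" \
     -d '{"roll_number":"2013999","name":"New Student"}' \
     http://localhost:8080/add
```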
Creating the Application class for Spring Boot

The SpringApplication class is used to bootstrap our application. We've used it in the previous recipes; in this recipe, we will see how to create the Application class for Spring Boot. We will be using IntelliJ IDEA for coding purposes. To set up the environment, read the previous recipes, especially the Setting up dependencies for building RESTful services recipe.

How to do it…

If you've used Spring Boot before, you must be familiar with using @Configuration, @EnableAutoConfiguration, and @ComponentScan in your main class. These were used so frequently that Spring Boot provides a convenient @SpringBootApplication alternative. Spring Boot looks for the public static main method, and we will use a top-level function outside the Application class. As you may have noted while setting up the dependencies, we used the kotlin-spring plugin, hence we don't need to make the Application class open.

Here's an example of the Spring Boot application:

```kotlin
package college

import org.springframework.boot.SpringApplication
import org.springframework.boot.autoconfigure.SpringBootApplication

@SpringBootApplication
class Application

fun main(args: Array<String>) {
    SpringApplication.run(Application::class.java, *args)
}
```

The Spring Boot application executes the static run() method, which takes two parameters and starts an autoconfigured Tomcat web server when the Spring application is started. When everything is set, you can start the application by executing the following command:

```
./gradlew bootRun
```

If everything goes well, you will see console output ending with the familiar message—Started AppKt in xxx seconds. This means that your application is up and running. In order to run it as an independent server, you need to create a JAR first:

```
./gradlew clean bootRepackage
```

Now, to run it, you just need to type the following command:

```
java -jar build/libs/gs-rest-service-0.1.0.jar
```
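By default, the embedded Tomcat server listens on port 8080. If that port is already in use on your machine, a standard Spring Boot option (not part of the recipe itself, just common configuration) is to override it in src/main/resources/application.properties:

```
# Hypothetical override; pick any free port
server.port=8081
```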
We learned how to set up the dependencies for building RESTful services, how to create a REST controller, and how to create the Application class for Spring Boot. If you are interested in learning more about Kotlin, then be sure to check out the 'Kotlin Programming Cookbook'.