
How-To Tutorials - Augmented Reality / Virtual Reality

15 Articles

Gaming in the Metaverse

Irena Cronin, Robert Scoble
24 Oct 2024
10 min read
This article is an excerpt from the book, The Immersive Metaverse Playbook for Business Leaders, by Irena Cronin and Robert Scoble. This book explains what the metaverse is and why it is of utmost value to business decision-makers. The chapters help you get a solid understanding of the concepts and roles that augmented reality and virtual reality play, along with providing information on metaverse technologies, as well as thought-provoking consumer and enterprise use cases.

Introduction

In the Metaverse’s expansive gaming landscape, several compelling use cases emerge. Gamers become creators and modifiers, democratizing game development, with quality control as a challenge. Cross-platform gaming integration fosters an inclusive gaming community, while blockchain-backed virtual merchandise and collectibles introduce new opportunities with authenticity and copyright concerns. Virtual esports tournaments become global events, requiring stringent security measures. In-game advertising and product placement offer marketing potential, but striking a balance with player experience is vital. These use cases exemplify the diverse facets of gaming in the Metaverse, highlighting innovation and challenges in the pursuit of immersive digital gaming experiences. Let’s take a closer look at some use cases.

Use case 1 – game creation and modification

This use case exemplifies how the Metaverse empowers gamers to become active contributors to the gaming industry, shaping its future through their creativity and innovation. It highlights the democratization of game development and the dynamic synergy between technology, interactivity, and the challenges that come with it in this evolving digital realm.

The setup

Within the expansive and thriving Metaverse gaming landscape, a remarkable facet emerges where 3D and 2D virtual gamers are not just players but empowered creators and modifiers of games themselves. The Metaverse offers a vast canvas, brimming with opportunities for individuals and teams to craft unique gaming experiences that cater to a global audience.

Interactivity

In this immersive gaming domain, players transition into creators as they engage with innovative game creation and modification tools, which include the use of generative AI. These tools empower users to design levels, characters, and gameplay mechanics, breathing life into their imaginative concepts. Collaborative platforms within the Metaverse foster teamwork, allowing multiple creators to combine their skills and ideas seamlessly.

Technical innovation

The Metaverse’s technical innovation shines through in the form of user-friendly game development platforms that bridge the gap between novice creators and experienced developers. These platforms offer intuitive interfaces, drag-and-drop functionality, and pre-built assets, making game design accessible to a wide range of enthusiasts. AI-driven game design assistance provides suggestions and optimizations, reducing the learning curve for newcomers. And with generative AI, whole 3D, as well as 2D, games could soon be fully developed.

Challenges

While the Metaverse fuels creativity and democratizes game development, several challenges emerge on this vibrant frontier. Balancing the influx of user-generated content with quality control becomes pivotal. Moderation systems must ensure that games meet basic quality standards and are free from malicious or inappropriate content.
Additionally, striking a harmonious balance between open creativity and maintaining fair play in modified games poses an ongoing challenge. Ensuring that user-created content doesn’t disrupt the gaming experience for others is a priority. Continuous development and refinement of moderation and quality control mechanisms are essential to maintain a thriving and enjoyable gaming ecosystem within the Metaverse.

Use case 2 – cross-platform gaming integration

This use case illustrates how the Metaverse transcends the limitations of individual gaming platforms, fostering a more inclusive and interconnected gaming community. Cross-platform gaming integration enhances the social and competitive aspects of gaming, enabling players to unite in a shared virtual gaming universe. As the Metaverse continues to evolve, it reshapes the way we perceive and engage in gaming, offering a glimpse into the future of interactive entertainment.

The setup

Within the expansive Metaverse gaming landscape, cross-platform gaming integration becomes a prominent feature. This innovation allows players from various gaming platforms and devices to seamlessly interact and play together, breaking down traditional gaming silos.

Interactivity

In this interconnected Metaverse, players can engage in cross-platform gaming experiences with friends and gamers from around the world. Whether you’re on a PC, console, VR headset, or mobile device, you can join the same virtual gaming universe. Gamers can form diverse teams and alliances, fostering a sense of community that transcends hardware preferences. This integration offers unprecedented opportunities for collaboration and competition.

Technical innovation

The technical innovation driving this use case is the development of cross-platform compatibility protocols and infrastructure. These innovations bridge the gaps between different gaming ecosystems, allowing for cross-device gameplay. Advanced matchmaking algorithms ensure that players of similar skill levels can enjoy fair and balanced gaming experiences, regardless of their chosen platform. This technical integration transforms the Metaverse into a truly inclusive gaming space.

Challenges

While cross-platform gaming integration is a remarkable achievement, it comes with its own set of challenges. Ensuring a level playing field for all players, regardless of their platform, requires ongoing fine-tuning of matchmaking algorithms. Addressing potential disparities in hardware capabilities, such as graphics processing power, can be complex. Additionally, maintaining a secure gaming environment across diverse platforms is essential to prevent cheating, unauthorized access, and other security concerns.

Use case 3 – game-related merchandise and collectibles

This use case showcases how the Metaverse transforms the concept of gaming merchandise and collectibles, offering a virtual marketplace where gamers can not only enhance their in-game experiences but also indulge in their passion for collecting virtual treasures. The integration of blockchain technology adds a layer of trust and scarcity to these digital possessions, creating a virtual economy that mirrors the real-world collectibles market.

The setup

Within the Metaverse, a vibrant and bustling marketplace dedicated to gaming-related merchandise and collectibles emerges. This dynamic digital marketplace transforms the concept of gaming memorabilia, offering a diverse range of 3D and 2D virtual goods that hold significant value for gamers and collectors alike.
It’s a virtual bazaar where gamers can immerse themselves in the culture of their favorite games beyond the confines of traditional gameplay.

Interactivity

In this immersive Metaverse marketplace, players gain the opportunity to personalize their avatars with a rich array of virtual gaming apparel and accessories. Gamers can browse an extensive catalog of virtual merchandise, including iconic character costumes, in-game items, and exclusive skins. This personalized customization allows players to showcase their gaming identity and immerse themselves even deeper into their favorite game worlds.

Technical innovation

At the heart of this use case lies the groundbreaking implementation of blockchain technology. This innovation plays a pivotal role in securing virtual collectibles, offering gamers a sense of rarity and ownership verification akin to physical collectibles. Each virtual item is tokenized on the blockchain, ensuring its uniqueness and provenance. Gamers can confidently buy, sell, and trade virtual merchandise, knowing that their digital possessions are genuine and scarce. For the companies that offer game-related merchandise and collectibles, generative AI provides an inexpensive, fast, and easy way to create assets.

Challenges

While this Metaverse marketplace promises exciting opportunities, it also presents unique challenges. Ensuring the authenticity of virtual merchandise is paramount. The presence of counterfeit or unauthorized virtual items could undermine the trust and value within the marketplace. Additionally, addressing potential copyright issues related to virtual merchandise is a central concern. Striking a balance between allowing creative expression and protecting intellectual property rights is essential to maintaining a thriving and ethical marketplace.

Negative implications of gaming in the Metaverse

Gaming in the Metaverse, while promising incredible innovation and immersive experiences, also carries negative implications that span technological, social, and ethical dimensions. These potential drawbacks must be considered alongside the benefits to ensure a balanced perspective on this digital frontier.

Technological implications

- Dependency on technology: As gaming in the Metaverse becomes increasingly sophisticated, there is a risk of individuals becoming overly dependent on technology for their entertainment and social interactions. This dependence may lead to issues related to screen time, addiction, and reduced physical activity.
- Technical glitches: The reliance on advanced technology for immersive gaming experiences introduces the possibility of technical glitches, server outages, or compatibility issues. These disruptions can frustrate players and disrupt their gaming experiences.
- Privacy concerns: The collection and utilization of user data within the Metaverse for targeted advertising and analytics can raise privacy concerns. Users may feel uncomfortable with the extent to which their online activities are monitored and analyzed.

Social implications

- Social isolation: Immersive gaming experiences in the Metaverse could lead to social isolation as individuals spend more time in virtual environments and less time in physical social interactions. Loneliness and a lack of real-world social skills can result from excessive immersion.
- Economic disparities: Access to the Metaverse and its premium gaming experiences may be limited by socioeconomic factors. Those with greater financial resources may enjoy a significant advantage, potentially creating digital divides and exclusivity.
- Loss of physical interaction: The allure of the Metaverse may lead to a reduction in face-to-face social interactions, which are crucial for human well-being. The diminished importance of real-world connections could have adverse effects on mental health and relationships.

Ethical implications

- Exploitative monetization: In-game purchases and microtransactions within the Metaverse can sometimes exploit players, particularly younger individuals who may not fully understand the financial implications. This raises ethical questions about the gaming industry’s practices.
- Digital addiction: The highly immersive nature of gaming in the Metaverse may contribute to digital addiction, where individuals struggle to disengage from virtual experiences and prioritize them over real-world responsibilities.
- Content regulation: Balancing freedom of expression and maintaining a safe and inclusive gaming environment can be challenging. The Metaverse may struggle with regulating hate speech, inappropriate content, and cyberbullying.

Psychological implications

- Escapism: While gaming can be a form of entertainment, excessive escapism into the Metaverse may indicate underlying psychological issues or a desire to avoid real-world problems.
- Impact on mental health: Long hours spent in virtual gaming worlds may lead to mental health issues such as anxiety, depression, and a distorted sense of reality.
- Cognitive overload: The complexity of immersive gaming experiences within the Metaverse can lead to cognitive overload, especially in younger players, potentially impacting their academic performance and cognitive development.

Environmental implications

- Energy consumption: The infrastructure required to support the Metaverse’s immersive experiences and multiplayer environments can consume significant amounts of energy, contributing to environmental concerns.
- Electronic waste: As technology evolves rapidly, older gaming equipment and hardware can quickly become obsolete, leading to electronic waste disposal challenges.

Conclusion

In conclusion, the Metaverse is revolutionizing gaming with new opportunities for creativity, community, and commerce. It empowers gamers as creators, enables cross-platform play, introduces blockchain-backed collectibles, and hosts virtual esports tournaments. However, these advancements come with challenges like quality control, security, and balancing ads with player experience. Additionally, potential negative impacts such as technological dependency, social isolation, and ethical concerns must be addressed. By fostering innovation responsibly, the Metaverse can become a transformative and enriching space for gamers worldwide.

Author Bio

Irena Cronin is SVP of Product for DADOS Technology, which is making an Apple Vision Pro data analytics and visualization app. She is also the CEO of Infinite Retina, which helps companies develop and implement AI, AR, and other new technologies for their businesses. Before this, she worked as an equity research analyst and gained extensive experience in evaluating both public and private companies. Cronin has an MS with Distinction in Information Technology/Management and Systems from New York University, and a joint MBA/MA from the University of Southern California. She has a BA from the University of Pennsylvania with a major in Economics (summa cum laude).
Cronin speaks four languages, with a near-fluent proficiency in Mandarin.

Robert Scoble has coauthored four books on technology innovation – each a decade before said technology went completely mainstream. He has interviewed thousands of entrepreneurs in the tech industry and has long kept his social media audiences up to date on what is happening inside the world of tech, which is bringing us so many innovations. Robert currently tracks the AI industry and is the host of a new video show, Unaligned, where he interviews entrepreneurs from the thousands of AI companies he tracks as head of strategy for Infinite Retina.


Deploying AR Experiences onto Mobile Devices

Anna Braun, Raffael Rizzo
21 Oct 2024
5 min read
This article is an excerpt from the book, XR Development with Unity, by Anna Braun and Raffael Rizzo. This practical guide helps you create immersive VR, AR, and MR experiences using Unity 2021.3 or later versions. You’ll learn to add physics, animations, teleportation, sound, effects, and hand-tracking to XR scenes and deploy them on VR headsets, simulators, and mobile devices—all that you need to create interactive XR projects in Unity is here.

Introduction

In this article, you will learn to launch your AR experiences onto smartphones or tablets. Primarily, you have two paths to accomplish this: deployment onto an Android or an iOS device. For solo projects, where the AR application is meant for personal use, you may opt to deploy onto just Android or iOS, depending on your device’s operating system. However, for larger-scale projects that involve several users – be it academic, industrial, or any other group, irrespective of its size – it’s advisable to deploy and test the AR app on both Android and iOS platforms. This strategy has multiple benefits. First, if your application gains momentum or its usage expands, it would already be compatible with both major platforms, eliminating the need for time-consuming porting later on. Second, making your app accessible on both platforms from the outset can draw in more users, and possibly attract increased funding or support. Another key advantage is the cross-platform compatibility offered by Unity. This enables you to maintain a single code base for both platforms, simplifying the management and updating process for your application. Any modifications need to be made in only one location and can then be deployed across both platforms. In the next section, we’ll delve into the steps required to deploy your AR scene onto an Android device.

Deploying onto Android

This section outlines the procedure to deploy your AR scene onto an Android device. The initial part of the process involves enabling some settings on your phone to prepare it for testing. Here’s a step-by-step guide:

1. Confirm that your device is compatible with ARCore. ARCore is essential for AR Foundation to work correctly. You can find a list of supported devices at https://developers.google.com/ar/devices.
2. Install ARCore, which AR Foundation uses to enable AR capabilities on Android devices. ARCore can be downloaded from the Google Play Store at https://play.google.com/store/apps/details?id=com.google.ar.core.
3. Activate Developer Options. To do this, open Settings on your Android device, scroll down, and select About Phone. Find Build number and tap it seven times until a message appears stating You are now a developer!
4. Upon returning to the main Settings menu, you should now see an option called Developer Options. If it’s not present, perform an online search to find out how to enable developer mode for your specific device. Though the method described in the previous step is the most common, the variety of Android devices available might require slightly different steps.
5. With Developer Options enabled, turn on USB Debugging. This will allow you to transfer your AR scene to your Android device via a USB cable. Navigate to Settings | Developer options, scroll down to USB Debugging, and switch it on. Acknowledge any pop-up prompts.
6. Depending on your Android version, you might need to allow the installation of apps from unknown sources:
- For Android versions 7 (Nougat) and earlier: Navigate to Settings | Security Settings and then check the box next to Unknown Sources to allow the installation of apps from sources other than the Google Play Store.
- For Android versions 8 (Oreo) and above: Select Settings | Apps & Notifications | Special App Access | Install unknown apps and activate Unknown sources. You will see a list of apps that you can grant permission to install from unknown sources. This is where you select the File Manager app, as you’re using it to download the unknown app from Unity.
7. Link your Android device to your computer using a USB cable. You can typically use your device’s charging cable for this. A prompt will appear on your Android device asking for permission to allow USB debugging from your computer. Confirm it.

With your Android device properly prepared for testing AR scenes, you can now proceed to deploy your Unity AR scene. This involves adjusting several parameters in the Unity Editor’s Build Settings and Player Settings. Here’s a step-by-step guide on how to do this:

1. Select File | Build Settings | Android, then click the Switch Platform button. Now, your Build Settings should look something like what is illustrated in Figure 4.17.

Figure 4.17 – Unity’s Build Settings configuration for Android

2. Next, click on the Player Settings button. This will open a new window, also called Player Settings. Here, select the Android tab, scroll down, and set Minimum API Level to Android 7.0 Nougat (API level 24) or above. This is crucial, as ARCore requires at least Android 7.0 to function properly.
3. Remaining in the Android tab of Player Settings, enter a package name. Ensure it follows the pattern com.company_name.application_name. This pattern is a widely adopted convention for naming application packages in Android and is used to ensure unique identification for each application on the Google Play Store.
4. Return to File | Build Settings and click the Build and Run button. A new window will pop up, prompting you to create a new folder in your project’s directory. Name this folder Builds. Upon selecting this folder, Unity will construct the scene within the newly created Builds folder.

This is how you can set up your Android device for deploying AR scenes onto it. In the next section, you will learn how you can deploy your AR scene onto an iOS device, such as an iPhone or iPad.

Deploying onto iOS

Before we delve into the process of deploying an AR scene onto an iOS device, it’s important to discuss certain hardware prerequisites. Regrettably, if you’re using a Windows PC and an iOS device, it’s not as straightforward as deploying an AR scene made in Unity. The reason for this is that Apple, in its characteristic style, requires the use of Xcode, its proprietary development environment, as an intermediary step. This is only available on Mac devices, not Windows or Linux. If you don’t possess a Mac, there are still ways to deploy your AR scene onto an iOS device. Here are a few alternatives:

- Borrowing a Mac: The simplest solution to gain access to Xcode and deploy your app onto an iOS device is to borrow a Mac from a friend or coworker. It’s also worth checking whether local libraries, universities, or co-working spaces offer public access to Macs. For commercial or academic projects, it’s highly recommended to invest in a Mac for testing your AR app on iOS.
- Using a virtual machine: Another no-cost alternative is to establish a macOS environment on your non-Apple PC. However, Apple neither endorses nor advises this method due to potential legal issues and stability concerns. Therefore, we won’t elaborate further or recommend it.
- Employing a Unity plugin: Fortunately, a widely used Unity plugin enables deployment of an AR scene onto your iOS device with relatively less hassle. Navigate to Windows | Asset Store, click on Search Online, and Unity Asset Store will open in your default browser. Search for iOS Project Builder for Windows by Pierre-Marie Baty. Though this plugin costs $50, it is a much cheaper alternative than buying a Mac. After purchasing the plugin, import it into your AR scene and configure everything correctly by following the plugin’s documentation (https://www.pmbaty.com/iosbuildenv/documentation/unity.html).

In this article, we focus exclusively on deploying AR applications onto iOS devices using a Mac for running Unity and Xcode. This is due to potential inconsistencies and maintenance concerns with the other aforementioned methods. Before you initiate the deployment setup, ensure that your Mac and iOS devices have the necessary software and settings. The following steps detail this preparatory process:

1. Make sure the latest software versions are installed on your macOS and iOS devices. Check for updates by navigating to Settings | General | Software Update on each device and install any that are available.
2. Confirm that your iOS device supports ARKit, which is crucial for the correct functioning of AR Foundation. You can check compatibility at https://developer.apple.com/documentation/arkit/. Generally, any device running iPadOS 11 or iOS 11 and later versions is compatible.
3. You will need an Apple ID for the following steps. If you don’t have one, you can create it at https://appleid.apple.com/account.
4. Download the Xcode software onto your Mac from Apple’s developer website at https://developer.apple.com/xcode/.
5. Enable Developer Mode on your iOS device by going to Settings | Privacy & Security | Developer Mode, activate Developer Mode, and then restart your device. If you don’t find the Developer Mode option, connect your iOS device to a Mac using a cable. Open Xcode, then navigate to Window | Devices and Simulators. If your device isn’t listed in the left pane, ensure you trust the computer on your device by acknowledging the prompt that appears after you connect your device to the Mac. Subsequently, you can enable Developer Mode on your iOS device.

Having set up your Mac and iOS devices correctly, let’s now proceed with how to deploy your AR scene onto your iOS device. Each time you want to deploy your AR scene onto your iOS device, follow these steps:

1. Use a USB cable to connect your iOS device to your Mac.
2. Within your Unity project, navigate to File | Build Settings and select iOS from Platform options. Click the Switch Platform button.
3. Check the Development Build option in Build Settings for iOS. This enables you to deploy the app for testing purposes onto your iOS device. This step is crucial to avoid the annual subscription cost of an Apple Developer account.

Note: Deploying apps onto an iOS device with a free Apple Developer account has certain limitations. You can only deploy up to three apps onto your device at once, and they need to be redeployed every 7 days due to the expiration of the free provisioning profile.
For industrial or academic purposes, we recommend subscribing to a paid Developer account after thorough testing using the Development Build function.

4. Remain in the File | Build Settings | iOS tab, click on Player Settings, scroll down to Bundle Identifier, and input an identifier in the form of com.company_name.application_name.
5. Return to the File | Build Settings | iOS tab and click Build and Run. In the pop-up window, create a new folder in your project directory named Builds and select it.
6. Xcode will open with the build, displaying an error message due to the need for a signing certificate. To create this, click on the error message, navigate to the Signing and Capabilities tab, and select the checkbox. In the Team drop-down menu, select New Team, and create a new team consisting solely of yourself. Now, select this newly created team from the drop-down menu. Ensure that the information in the Bundle Identifier field matches your Unity project, found in Edit | Project Settings | Player.
7. While in Xcode, click on the Any iOS Device menu and select your specific iOS device as the output.
8. Click the Play button at the top left of Xcode and wait for a message indicating Build succeeded. Your AR application should now be on your iOS device. However, you won’t be able to open it until you trust the developer (in this case, yourself). Navigate to Settings | General | VPN & Device Management on your iOS device, tap the Developer App certificate under your Apple ID, and then tap Trust (Your Apple ID).
9. On your iOS device’s home screen, tap the icon of your AR app. Grant the necessary permissions, such as camera access.

Congratulations, you’ve successfully deployed your AR app onto your iOS device! You now know how to deploy your AR experiences onto both Android and iOS devices.

Conclusion

Deploying AR experiences onto mobile devices opens up a world of possibilities, enabling users to engage with your application in innovative ways. By following the steps outlined in this guide, you can ensure that your AR applications are compatible with both Android and iOS platforms, maximizing their reach and impact. Whether you’re developing for personal use or planning to distribute your app to a broader audience, having cross-platform compatibility from the start can save time and resources in the long run. With the tools and techniques provided here, you are well on your way to creating and deploying compelling AR experiences that captivate users on any mobile device.

Looking to dive into the world of virtual, augmented, and mixed reality? XR Development with Unity is the perfect guide for beginners and professionals alike! This book takes you step-by-step through creating immersive experiences using Unity, without needing expensive VR hardware. Learn how to build interactive XR apps, explore hand-tracking, gaze-tracking, multiplayer capabilities, and more. Whether you're a game developer, hobbyist, or industry professional, this resource is a must-have to master cutting-edge XR technologies. Don't miss out—start your XR journey today!

Author Bio

Anna Braun is a Unity expert who specializes in creating XR applications. At Deutsche Telekom, Anna has developed XR prototypes in Unity. One prototype enabled warehouse workers to find commodities more easily through the use of special location data and Augmented Reality. At Fraunhofer, Anna specialized in Hand-Tracking and worked on a VR education platform.
Her master's degree in Extended Reality has a special focus on Eye Tracking, Deep Learning, and Computer Graphics. She is a published author in the tech space and regularly speaks at conferences hosted by academia or non-profits like the Mozilla Foundation. Anna co-founded a company that offers XR consulting and development.

Raffael Rizzo is an XR developer and Unity expert. During his work at Deutsche Telekom, he consulted companies on the use of digital twins and implemented augmented reality wayfinding solutions. At Fraunhofer IGD, Raffael worked on a VR education platform. He developed a VR training program for a soccer academy to test the children's reaction times. For the same academy, Raffael created an application that uses computer vision and machine learning to automatically evaluate ball juggling. His master's degree in Extended Reality encompasses Rendering, Computer Vision, Machine Learning, and 3D Visualization. Raffael co-founded a company specializing in XR consulting and development.


Why should you use Unreal Engine 4 to build Augmented and Virtual Reality projects

Guest Contributor
20 Dec 2019
6 min read
This is an exciting time to be a game developer. New technologies like Virtual Reality (VR) and Augmented Reality (AR) are here and growing in popularity, and a whole new generation of game consoles is just around the corner. Right now everyone wants to jump onto these bandwagons and create successful games using AR, VR, and other technologies (for more detailed information see Chapter 15, Virtual Reality and Beyond, of my book, Learning C++ by Building Games with Unreal Engine 4 – Second Edition). But no one really wants to create everything from scratch (reinventing the wheel is just too much work). Fortunately, you don’t have to. Unreal Engine 4 (UE4) can help!

Not only does Epic Games use their engine to develop their own games (and keep it constantly updated for that purpose), but many other game companies, both AAA and indie, also use the engine, and Epic is constantly adding new features for them too. They can also update the engine themselves, and they can make some of those changes available to the general public as well. UE4 also has a robust system for add-ons and plugins that many other developers contribute to. Some may be free, and other, more advanced ones are available for a price. These can be extremely specialized, and the developer may release regular updates to adjust to changes in Unreal and add new features that could make your life even easier. So how does UE4 help with new technologies? Here are some examples:

Unreal Engine 4 for Virtual Reality

Virtual Reality (VR) is one of the most exciting technologies around, and many people are trying to get into that particular door. VR headsets from companies like Oculus, HTC, and Sony are becoming cheaper, more common, and more powerful. If you were creating a game yourself from scratch you would need an extremely powerful graphics engine. Fortunately, UE4 already has one with VR functionality.

If you already have a project you want to convert to VR, UE4 makes this easy for you. If you have an Oculus Rift or HTC Vive installed on your computer, viewing your game in VR is as easy as launching it in VR Preview mode and viewing it in your headset. While controls might take more work, UE4 has a Motion Controller component you can add to help you get started quickly. You can even edit your project in VR Mode, allowing you to see the editor view in your VR headset, which can help with positioning things in your game. If you’re starting a new project, UE4 now has VR-specific templates for new projects. You also have plenty of online documentation and a large community of other users working with VR in Unreal Engine 4 who can help you out.

Unreal Engine 4 for Augmented Reality

Augmented Reality (AR) is another new technology that’s extremely popular right now. Pokemon Go is extremely popular, and many companies are trying to do something similar. There are also AR headsets and possibly other new ways to view AR information. Every platform has its own way of handling Augmented Reality right now. On mobile devices, iOS has ARKit to support AR programming and Android has ARCore. Fortunately, the Unreal website has a whole section on AR and how to support these in UE4 to develop AR games at https://docs.unrealengine.com/en-US/Platforms/AR/index.html. It also has information on using Magic Leap, Microsoft HoloLens, and Microsoft HoloLens 2. So by using UE4, you get a big head start on this type of development.
Working with Other New Technologies

If you want to use a new technology, chances are UE4 supports it (and if not, just wait and it will). Whether you’re trying to do procedural programming or just use the latest AI techniques (for more information see chapters 11 and 12 of my book, Learning C++ by Building Games with Unreal Engine 4 – Second Edition), chances are you can find something to help you get a head start in that technology that already works in UE4. And with so many people using the engine, it is likely to continue to be a great way to get support for new technologies.

Support for New Platforms

UE4 already supports numerous platforms such as PC, Mac, mobile, web, Xbox One, PS4, Switch, and probably any other recent platform you can think of. With the next-gen consoles coming out in 2020, chances are they’re already working on support for them. For the consoles, you do generally need to be a registered developer with Microsoft, Sony, and/or Nintendo to have access to the tools to develop for those platforms (and you need expensive devkits). But as more indie games are showing up on these platforms, you don’t necessarily have to be working at a AAA studio to do this anymore.

What is amazing when you develop in UE4 is that publishing for another platform should basically just work. You may need to change the controls and the screen size. An AAA 3D title might be too slow to be playable if you try to just run it on a mobile device without any changes, but the basic game functionality will be there and you can make changes from that point.

The Future

It’s hard to tell what new technologies may come in the future, as new devices, game types, and methods of programming are developed. Regardless of what the future holds, there’s a strong chance that UE4 will support them. So learning UE4 now is a great investment of your time. If you’re interested in learning more, see my book, Learning C++ by Building Games with Unreal Engine 4 – Second Edition.

Author Bio

Sharan Volin has been programming games for more than a decade. She has worked on AAA titles for Behavior Interactive, Blind Squirrel Games, Sony Online Entertainment/Daybreak Games, Electronic Arts (Danger Close Games), 7 Studios (Activision), and more, as well as numerous smaller games. She has primarily been a UI Programmer but is also interested in Audio, AI, and other areas. She also taught Game Programming for a year at the Art Institute of California and is the author of Learning C++ by Building Games with Unreal Engine 4 – Second Edition.


Magic Leap's first augmented reality headset, powered by Nvidia Tegra X2, is coming this Summer

Richard Gall
16 Jul 2018
2 min read
The Magic Leap One - the first augmented reality headset produced by startup Magic Leap - is slated to be launched this summer. The news closely follows an announcement by AT&T that it will be the sole carrier for the headset. However, although Magic Leap revealed a lot about how the Magic Leap One works in a live stream on Friday (13 July), no official release date has been stated.

What we learned from the Magic Leap One livestream

The Magic Leap One livestream gave us a pretty neat insight into how the augmented reality headset is going to work. It showed us how gestures form the main part of the UX with a demo of a rock-throwing game called 'Dodge'. The visuals looked pretty exciting, and it seems like Magic Leap has done a good job of developing a headset that could be game-changing for the augmented reality industry. However, there were a few holes - one Twitter user noticed, for example, that the software at one point failed to recognize when a user's hand would have been blocking the augmented reality images.

https://twitter.com/ID_R_McGregor/status/1017119982906494983

For those with a particular interest in the engineering that has gone into the headset, we also found out that the Magic Leap One runs on an Nvidia Tegra X2 processor. This makes it considerably more powerful than the processor that helps power the current Nintendo Switch console, which uses the Tegra X1.


Building VR experiences with React VR 2.0: How to create a maze that's new every time you play

Sunith Shetty
12 Jun 2018
16 min read
In today’s tutorial, we will examine the functionality required to build a simple maze. There are a few ways we could build a maze. The most straightforward way would be to fire up our 3D modeler package (say, Blender) and create a labyrinth out of polygons. This would work fine and could be very detailed. However, it would also be very boring. Why? The first time we get through the maze will be exciting, but after a few tries, you'll know the way through. When we construct VR experiences, we usually want people to visit often and have fun every time.

This tutorial is an excerpt from a book written by John Gwinner titled Getting Started with React VR. In this book, you will learn how to create amazing 360 and virtual reality content that runs directly in your browsers.

A modeled labyrinth would be boring. Life is too short to do boring things. So, we want to generate a Maze randomly. This way, you can change the Maze every time so that it'll be fresh and different. The way to do that is through random numbers; to ensure that the Maze doesn't shift around us, though, we want to actually do it with pseudo-random numbers. To start doing that, we'll need a basic application created. Please go to your VR directory and create an application called 'WalkInAMaze':

    react-vr init WalkInAMaze

Almost random – pseudo-random number generators

To have replay value, or to be able to compare scores between people, we really need a pseudo-random number generator. The basic JavaScript Math.random() does not let you supply a seed; it gives you a different sequence of numbers every time. We need a pseudo-random number generator that takes a seed value. If you give the same seed to the random number generator, it will generate the same sequence of random numbers. (They aren't completely random but are very close.) Random number generators are a complex topic; for example, they are used in cryptography, and if your random number generator isn't completely random, someone could break your code. We aren't so worried about that, we just want repeatability.

Although the UI for this may be a bit beyond the scope of this book, creating the Maze in a way that clicking on Refresh won't generate a totally different Maze is really a good thing and will avoid frustration on the part of the user. This will also allow two users to compare scores; we could persist a board number for the Maze and show this. This may be out of scope for our book; however, having a predictable Maze will help immensely during development. If it wasn't for this, you might get lost while working on your world. (Well, probably not, but it makes testing easier.)

Including library code from other projects

Up to this point, I've shown you how to create components in React VR (or React). JavaScript, interestingly, has a historical issue with include. With C++, Java, or C#, you can include a file in another file or make a reference to a file in a project. After doing that, everything in those other files, such as functions, classes, and global properties (variables), are then usable from the file that you've issued the include statement in. With a browser, the concept of "including" JavaScript is a little different. With Node.js, we use package.json to indicate what packages we need.
To bring those packages into our code, we will use the following syntax in your .js files:

    var MersenneTwister = require('mersenne-twister');

Then, instead of using Math.random(), we will create a new random number generator and pass a seed, as follows:

    var rng = new MersenneTwister(this.props.Seed);

From this point on, you just call rng.random() instead of Math.random(). We can just use npm install <package> and the require statement for properly formatted packages. Much of this can be done for you by executing the npm command:

    npm install mersenne-twister --save

Remember, the --save flag updates our manifest (package.json) in the project. While we are at it, we can install another package we'll need later:

    npm install react-vr-gaze-button --save

Now that we have a good random number generator, let's use it to complicate our world.

The Maze render()

How do we build a Maze? I wanted to develop some code that dynamically generates the Maze; anyone could model it in a package, but a VR world should be living. Having code that can dynamically build a Maze of any size (to a point) will allow repeat playing of your world. There are a number of JavaScript packages out there for printing mazes. I took one that seemed to be everywhere, in the public domain, on GitHub and modified it for HTML. This app consists of two parts: Maze.html and makeMaze.JS. Neither is React, but it is JavaScript. It works fairly well, although the numbers don't really represent exactly how wide it is.

First, I made sure that only one x was displaying, both vertically and horizontally. This will not print well (lines are usually taller than wide), but we are building a virtually real Maze, not a paper Maze. The Maze that we generate with the files at Maze.html (localhost:8081/vr/maze.html) and the JavaScript file—makeMaze.js—will now look like this:

    x1xxxxxxx x x x xxx x x x x x x x x xxxxx x x x x x x x x x x x x 2 xxxxxxxxx

It is a little hard to read, but you can count the squares vs. xs. Don't worry, it's going to look a lot fancier. Now that we have the HTML version of a Maze working, we'll start building the hedges. This is a slightly larger piece of code than I expected, so I broke it into pieces and loaded the Maze object onto GitHub rather than pasting the entire code here, as it's long. You can find a link for the source at: http://bit.ly/VR_Chap11

Adding the floors and type checking

One of the things that look odd with a 360 Pano background, as we've talked about before, is that you can seem to "float" against the ground. One fix, other than fixing the original image, is to simply add a floor. This is what we did with the Space Gallery, and it looks pretty good as we were assuming we were floating in space anyway. For this version, let's import a ground square. We could use a large square that would encompass the entire Maze; we'd then have to resize it if the size of the Maze changes. I decided to use a smaller cube and alter it so that it's "underneath" every cell of the Maze. This would allow us some leeway in the future to rotate the squares for worn paths, water traps, or whatever. To make the floor, we will use a simple cube object that I altered slightly and is UV mapped. I used Blender for this. We also import a Hedge model, and a Gem, which will represent where we can teleport to.
Inside 'Maze.js' we added the following code:

    import Hedge from './Hedge.js';
    import Floor from './Hedge.js';
    import Gem from './Gem.js';

Then, inside the Maze.js we could instantiate our floor with the code:

    <Floor X={-2} Y={-4}/>

Notice that we don't use 'vr/components/Hedge.js' when we do the import; we're inside Maze.js. However, in index.vr.js, to include the Maze, we do need:

    import Maze from './vr/components/Maze.js';

It's slightly more complicated though. In our code, the Maze builds the data structures when props have changed; when moving, if the maze needs rendering again, it simply loops through the data structure and builds a collection (mazeHedges) with all of the floors, teleport targets, and hedges in it. Given this, to create the floors, the line in Maze.js is actually:

    mazeHedges.push(<Floor {...cellLoc} />);

Here is where I ran into two big problems, and I'll show you what happened so that you can avoid these issues. Initially, I was bashing my head against the wall trying to figure out why my floors looked like hedges. This one is pretty easy—we imported Floor from the Hedge.js file. The floors will look like hedges (did you notice this in my preceding code? If so, I did this on purpose as a learning experience. Honest). This is an easy fix. Make sure that you code import Floor from './floor.js'; note that Floor is not type-checked. (It is, after all, JavaScript.) I thought this was odd, as the Hedge.js file exports a Hedge object, not a Floor object, but be aware you can rename the objects as you import them.

The second problem I had was more of a simple goof that is easy to make if you aren't really thinking in React. You may run into this. JavaScript is a lovely language, but sometimes I miss a strongly typed language. Here is what I did:

    <Maze SizeX='4' SizeZ='4' CellSpacing='2.1' Seed='7' />

Inside the maze.js file, I had code like this:

    for (var j = 0; j < this.props.SizeX + 2; j++) {

After some debugging, I found out that the value of j was going from 0 to 42. Why did it get 42 instead of 6? The reason was simple. We need to fully understand JavaScript to program complex apps. The mistake was in initializing SizeX to be '4'; this makes it a string variable. When calculating the loop limit, JavaScript takes the integer 2, concatenates it onto the string '4', and gets the string '42'; the comparison then treats this as the number 42, so j runs far past 6. When this is done, very weird things happen.

When we were building the Space Gallery, we could easily use the '5.1' values for the input to the box:

    <Pedestal MyX='0.0' MyZ='-5.1'/>

Then, later use the transform statement below inside the class:

    transform: [ { translate: [ this.props.MyX, -1.7, this.props.MyZ] } ]

React/JavaScript will put the string values into this.props.MyX, then realize it needs a number, and then quietly do the conversion. However, when you get more complicated objects, such as our Maze generation, you won't get away with this. Remember that your code isn't "really" JavaScript. It's processed. At the heart, this processing is fairly simple, but the implications can be a killer. Pay attention to what you code. With a loosely typed language such as JavaScript, with React on top, any mistakes you make will be quietly converted to something you didn't intend. You are the programmer. Program correctly.

So, back to the Maze. The Hedge and Floor are straightforward copies of the initial Gem code.
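Before looking at that Gem code, here is a minimal, standalone sketch of the string-versus-number pitfall just described. It is hypothetical (not from the book's source), but you can paste it into any JavaScript console and watch the loop run 42 times instead of 6:

    // Simulating the props as the buggy JSX passed them: SizeX='4' makes a string.
    var propsFromJsx = { SizeX: '4' };

    // '4' + 2 concatenates instead of adding, producing the string '42'.
    var limit = propsFromJsx.SizeX + 2;
    console.log(limit, typeof limit); // 42 string

    // The < comparison coerces '42' to the number 42, so the body runs 42 times.
    var count = 0;
    for (var j = 0; j < propsFromJsx.SizeX + 2; j++) {
      count++;
    }
    console.log(count); // 42

    // Passing the prop as a number (SizeX={4} in JSX) gives the intended 6 iterations.
    var propsFixed = { SizeX: 4 };
    count = 0;
    for (var k = 0; k < propsFixed.SizeX + 2; k++) {
      count++;
    }
    console.log(count); // 6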
Let's take a look at our starting Gem, although note it gets a lot more complicated later (and in your source files):

    import React, { Component } from 'react';
    import { asset, Box, Model, Text, View } from 'react-vr';

    export default class Gem extends Component {
      constructor() {
        super();
        this.state = {
          Height: -3
        };
      }
      render() {
        return (
          <Model
            source={{
              gltf2: asset('TeleportGem.gltf'),
            }}
            style={{
              transform: [{ translate: [this.props.X, this.state.Height, this.props.Z] }]
            }}
          />
        );
      }
    }

The Hedge and Floor are essentially the same thing. (We could have made a prop be the file loaded, but we want a different behavior for the Gem, so we will edit this file extensively.)

To run this sample, first, we should have created a directory, as you have before, called WalkInAMaze. Once you do this, download the files from the Git source for this part of the article (http://bit.ly/VR_Chap11). Once you've created the app, copied the files, and fired it up (go to the WalkInAMaze directory and type npm start), you should see something like this once you look around - except, there is a bug. This is what the maze should look like (if you use the file 'MazeHedges2DoubleSided.gltf' in Hedge.js, in the <Model> statement).

Now, how did we get those neat-looking hedges in the game? (OK, they are pretty low poly, but it is still pushing it.) One of the nice things about the pace of improvement on web standards is their new features. Instead of just the .obj file format, React VR now has the capability to load glTF files.

Using the glTF file format for models

glTF files are a new file format that works pretty naturally with WebGL. There are exporters for many different CAD packages. The reason I like glTF files is that getting a proper export is fairly straightforward. Lightwave OBJ files are an industry standard, but in the case of React, not all of the options are imported. One major one is transparency. The OBJ file format allows that, but as of the time of writing this book, it wasn't an option. Many other graphics shaders that modern hardware can handle can't be described with the OBJ file format. This is why glTF files are the next best alternative for WebVR. It is a modern and evolving format, and work is being done to enhance the capabilities and make a fairly good match between what WebGL can display and what glTF can export.

This book is, however, about interacting with the world, so I'll give a brief mention of how to export glTF files and provide the objects, especially the Hedge, as glTF models. The nice thing with glTF from the modeling side is that if you use their material specifications, for example, for Blender, then you don't have to worry that the export won't be quite right. Today's Physically Based Rendering (PBR) tends to use the metallic/roughness model, and these import better than trying to figure out how to convert PBR materials into the OBJ file's specular lighting model. Here is the metallic-looking Gem that I'm using as the gaze point:

Using the glTF Metallic Roughness model, we can assign the texture maps that programs, such as Substance Designer, calculate and import easily. The resulting figures look metallic where they are supposed to be metallic and dull where the paint still holds on. I didn't use Ambient Occlusion here, as this is a very convex model; something with more surface depressions would look fantastic with Ambient Occlusion. It would also look great with architectural models, for example, furniture.
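Since the full Hedge and Floor files live in the book's GitHub repository, here is only a minimal sketch of what such a sibling component could look like, following the same Model/glTF pattern as the Gem above. The MazeHedges2DoubleSided.gltf file name comes from the text; the fixed height and the X/Z prop names are assumptions for illustration, not the book's actual code:

    import React, { Component } from 'react';
    import { asset, Model } from 'react-vr';

    // A minimal, hypothetical Hedge: like the Gem, but with no state,
    // since hedges never move or animate in this maze.
    export default class Hedge extends Component {
      render() {
        return (
          <Model
            source={{
              // glTF file mentioned in the article; swap in your own export.
              gltf2: asset('MazeHedges2DoubleSided.gltf'),
            }}
            style={{
              // -3 mirrors the Gem's default Height; adjust to sit on your floor.
              transform: [{ translate: [this.props.X, -3, this.props.Z] }]
            }}
          />
        );
      }
    }

A Floor written this way would be identical apart from the model file it loads, which is exactly why the accidental import Floor from './Hedge.js' went unnoticed at first.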
To convert your models, there is user documentation at http://bit.ly/glTFExporting. You will need to download and install the Blender glTF exporter. Or, you can just download the files I have already converted. If you do the export, in brief, you do the following steps:

1. Download the files from http://bit.ly/gLTFFiles. You will need the gltf2_Principled.blend file, assuming that you are on a newer version of Blender.
2. In Blender, open your file, then link to the new materials. Go to File->Link, then choose the gltf2_Principled.blend file. Once you do that, drill into "NodeTree" and choose either glTF Metallic Roughness (for metal), or glTF specular glossiness for other materials.
3. Choose the object you are going to export; make sure that you choose the Cycles renderer.
4. Open the Node Editor in a window. Scroll down to the bottom of the Node Editor window, and make sure that the box Use Nodes is checked.
5. Add the node via the nodal menu, Add->Group->glTF Specular Glossiness or Metallic Roughness.
6. Once the node is added, go to Add->Texture->Image texture. Add as many image textures as you have image maps, then wire them up. You should end up with something similar to this diagram.

To export the models, I recommend that you disable camera export and combine the buffers unless you think you will be exporting several models that share geometry or materials. The Export options I used are as follows:

Now, to include the exported glTF object, use the <Model> component as you would with an OBJ file, except you have no MTL file. The materials are all described inside the .glTF file. To include the exported glTF object, you just put the filename as a gltf2 prop in the <Model> component:

    <Model source={{ gltf2: asset('TeleportGem2.gltf'),}} ...

To find out more about these options and processes, you can go to the glTF export web site. This site also includes tutorials on major CAD packages and the all-important glTF shaders (for example, the Blender model I showed earlier). I have loaded several .OBJ files and .glTF files so you can experiment with different combinations of low poly and transparency.

When glTF support was added in React VR version 2.0.0, I was very excited, as transparency maps are very important for a lot of VR models, especially vegetation; just like our hedges. However, it turns out there is a bug in WebGL or three.js that does not render the transparency properly. As a result, I have gone with a low polygon version in the files on the GitHub site; the pictures, above, were with the file MazeHedges2DoubleSided.gltf in the Hedges.js file (in vr/components).

If you get 404 errors, check the paths in the glTF file. It depends on which exporter you use—if you are working with Blender, the gltf2 exporter from the Khronos group calculates the path correctly, but the one from Kupoman has options, and you could export the wrong paths.

We discussed important mechanics of props, state, and events. We also discussed how to create a maze using pseudo-random number generators to make sure that our props and state didn't change chaotically. To know more about how to create, move around in, and make worlds react to us in a Virtual Reality world, including basic teleport mechanics, do check out this book Getting Started with React VR.

Read More:

- Google Daydream powered Lenovo Mirage solo hits the market
- Google open sources Seurat to bring high precision graphics to Mobile VR
- Oculus Go, the first standalone VR headset arrives!


Upgrading, packaging, and publishing your React VR app

Sunith Shetty
08 Jun 2018
19 min read
It is fun to develop and experience virtual worlds at home. Eventually, though, you want the world to see your creation. To do that, we need to package and publish our app. In the course of development, upgrades to React may come along; before publishing, you will need to decide whether you need to "code freeze" and ship with a stable version, or upgrade to a new version. This is a design decision. In today’s tutorial, we will learn to upgrade React VR and bundle the code in order to publish on the web.

This article is an excerpt from a book written by John Gwinner titled Getting Started with React VR. This book will get you well-versed with Virtual Reality (VR) and React VR components to create your own VR apps.

One of the neat things, although it can be frustrating, is that web projects are frequently updated. There are a couple of different ways to do an upgrade:

- You can install/create a new app with the same name. You will then go to your old app and copy everything over. This is a facelift upgrade, or Rip and Replace.
- Do an update. Mostly, this is an update to package.json, and then delete node_modules and rebuild it. This is an upgrade in place.

It is up to you which method you use, but the major difference is that an upgrade in place is somewhat easier—no source code to modify and copy—but it may or may not work. A facelift upgrade also relies on you using the correct react-vr-cli. There is a notice that runs whenever you run React VR from the Command Prompt that will tell you whether it's old. The error or warning that comes up about an upgrade when you run React VR from a Command Prompt may fly by quickly. It takes a while to run, so you may go away for a cup of coffee. Pay attention to red lines, seriously.

To do an upgrade in place, you will typically get an update notification from Git if you have subscribed to the project. If you haven't, you should go to http://bit.ly/ReactVR, create an account (if you don't have one already), and click on the eyeball icon to join the watch list. Then, you will get an email every time there is an upgrade. We will cover the most straightforward way to do an upgrade—upgrade in place—first.

Upgrading in place

How do you know what version of React you have installed? From a Node.js prompt, type this:

    npm list react-vr

Also, check the version of react-vr-web:

    npm list react-vr-web

Check the version of react-vr-cli (the command-line interface, really only for creating the hello world app):

    npm list react-vr-cli

Check the version of ovrui (open VR's user interface):

    npm list ovrui

You can check these against the versions in the documentation. If you've subscribed to React VR on GitHub (and you should!), then you will get an email telling you that there is an upgrade. Note that the CLI will also tell you if it is out of date, although this only applies when you are creating a new application (folder/website).

The release notes are at http://bit.ly/VRReleases. There, you will find instructions to upgrade. The upgrade instructions usually have you do the following:

1. Delete your node_modules directory.
2. Open your package.json file.
3. Update react-vr, react-vr-web, and ovrui to the new version number, for example, 2.0.0.
4. Update react to "a.b.c".
5. Update react-native to "~d.e.f".
6. Update three to "^g.h.k".
7. Run npm install or yarn.

Note the ~ and ^ symbols; ~version means approximately equivalent to version and ^version means compatible with version.
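If you want to double-check what a given range accepts, here is a small, hypothetical Node.js snippet using the standalone semver package (not part of a React VR project; install it with npm install semver) that shows how the two prefixes behave:

    // Hypothetical version numbers, purely to illustrate ~ and ^ ranges.
    const semver = require('semver');

    console.log(semver.satisfies('2.0.9', '~2.0.0')); // true  - patch updates allowed
    console.log(semver.satisfies('2.1.0', '~2.0.0')); // false - minor bump rejected
    console.log(semver.satisfies('2.4.1', '^2.0.0')); // true  - any 2.x.x accepted
    console.log(semver.satisfies('3.0.0', '^2.0.0')); // false - major bump rejected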
These range specifiers help, as you may have other packages that want particular versions of react-native and three, specifically. To get the values of {a...k}, refer to the release notes.

I have also found that you may need to include these modules in the devDependencies section of package.json:

"react-devtools": "^2.5.2",
"react-test-renderer": "16.0.0",

You may see this error:

module.js:529
throw err;
^
Error: Cannot find module './node_modules/react-native/packager/blacklist'

If you do, make the following change in the rn-cli.config.js file in your project's root folder. Replace the line:

var blacklist = require('./node_modules/react-native/packager/blacklist');

with:

var blacklist = require('./node_modules/metro-bundler/src/blacklist');

Third-party dependencies

If you have been experimenting and adding modules with npm install <something>, you may find, after an upgrade, that things do not work. The package.json file also needs to know about all the additional packages you installed during experimentation; this is the project way (the npm way) to ensure that Node.js knows we need a particular piece of software. If you have this issue, you'll need to either repeat the install with the --save parameter, or edit the dependencies section of your package.json file:

{
  "name": "WalkInAMaze",
  "version": "0.0.1",
  "private": true,
  "scripts": {
    "start": "node -e \"console.log('open browser at http://localhost:8081/vr/\\n\\n');\" && node node_modules/react-native/local-cli/cli.js start",
    "bundle": "node node_modules/react-vr/scripts/bundle.js",
    "open": "node -e \"require('xopen')('http://localhost:8081/vr/')\"",
    "devtools": "react-devtools",
    "test": "jest"
  },
  "dependencies": {
    "ovrui": "~2.0.0",
    "react": "16.0.0",
    "react-native": "~0.48.0",
    "three": "^0.87.0",
    "react-vr": "~2.0.0",
    "react-vr-web": "~2.0.0",
    "mersenne-twister": "^1.1.0"
  },
  "devDependencies": {
    "babel-jest": "^19.0.0",
    "babel-preset-react-native": "^1.9.1",
    "jest": "^19.0.2",
    "react-devtools": "^2.5.2",
    "react-test-renderer": "16.0.0",
    "xopen": "1.0.0"
  },
  "jest": {
    "preset": "react-vr"
  }
}

Again, this is the manual way; a better way is to use npm install <package> --save. The --save qualifier records the new package you've installed in package.json. The manual edits can be handy to ensure that you've got the right versions if you get a version mismatch.

If you mess around with installing and removing enough packages, you will eventually mess up your modules. If you get errors even after removing node_modules, issue these commands:

npm cache clean --force
npm start -- --reset-cache

The cache clean won't do it by itself; you need the reset-cache as well, otherwise the problem packages will still be cached, even if they don't physically exist!

Really broken upgrades – rip and replace

If, after all that work, your upgrade still does not work, all is not lost. We can do a rip and replace upgrade. Note that this is sort of a last resort, but it does work fairly well. Follow these steps:

1. Ensure that your react-vr-cli package is up to date, globally:

[F:\ReactVR]npm install react-vr-cli -g
C:\Users\John\AppData\Roaming\npm\react-vr -> C:\Users\John\AppData\Roaming\npm\node_modules\react-vr-cli\index.js
+ react-vr-cli@0.3.6
updated 8 packages in 2.83s

This is important, as when there is a new version of React, you may not have the most up-to-date react-vr-cli. It will tell you when you use it that there is a newer version out, but that line frequently scrolls by; if you miss it, you can spend a lot of time trying to install an updated version, to no avail.
npm generates a lot of verbiage, but it is important to read what it says, especially red formatted lines.

2. Ensure that all CLI (DOS) windows, editing sessions, Node.js CLIs, and so on, are closed. (You shouldn't need to reboot, however; just close everything using the old directory.)
3. Rename the old code to MyAppName140 (add a version number to the end of the old react-vr directory).
4. Create the application, using react-vr init MyAppName, in other words, the original app name.
5. The next step is easiest using a diff program (refer to http://bit.ly/WinDiff). I use Beyond Compare, but there are others too. Choose one and install it, if needed.
6. Compare the two directories, .\MyAppName (new) and .\MyAppName140, and see what files have changed.
7. Move over any new files from your old app, including assets (you can probably copy over the entire static_assets folder).
8. Merge any files that have changed, except package.json. Generally, you will need to merge these files:
   index.vr.js
   client.js (if you changed it)
9. For package.json, see what lines have been added, and install those packages in the new app via npm install <missed package> --save, or start the app and see what is missing.
10. Remove any files seeded by the hello world app, such as chess-world.jpg (unless you are using that background, of course).
11. Usually, you don't change the rn-cli.config.js file (unless you modified the seeded version).

Most code will move directly over. Ensure that you change the application name if you changed the directory name, but with the preceding directions, you won't have to. The preceding upgrade steps may be slightly easier if there are massive changes to React VR, although they will require some picking through source files. The source is pretty straightforward, so this should be easy in practice. I found that these techniques work best if the automatic upgrade did not work. As mentioned earlier, the time to do a major upgrade is probably not right before publishing the app, unless there is some new feature you need; you want to adequately test your app to ensure that there aren't any bugs. I'm including the upgrade steps here anyway, but not because you should do them right before publishing.

Getting your code ready to publish

Honestly, you should never put off organizing your clothes until... oh wait, we're talking about code. You should never put off organizing your code until the night you want to ship it. Even the code you think is throwaway may end up in production. Learn good coding habits and style from the beginning.

Good code organization

Good code, from the very start, is very important for many reasons:

- If your code uses sloppy indentation, it's more difficult to read. Many code editors, such as Visual Studio Code, Atom, and WebStorm, will format code for you, but don't rely on these tools.
- Poor naming conventions can hide problems. So can an improper case on variables, such as using this.State instead of this.state (a short sketch follows this list).
- Most of the time spent coding, as much as 80%, is in maintenance. If you can't read the code, you can't maintain it. When you're a starting-out programmer, you frequently think you'll always be able to read your own code, but when you pick up a piece years later, say "Who wrote this junk?", and then realize it was you, you will quit using variable names like a, b, c, and d.
- Most software at some point is maintained, read, copied, or used by someone other than the author.
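To make the casing point concrete, here is a minimal, illustrative sketch; the component and property names are hypothetical and not from the book's WalkInAMaze project:

import React from 'react';
import { Text } from 'react-vr';

class ScoreBoard extends React.Component {
  constructor(props) {
    super(props);
    this.state = { score: 0 }; // lowercase "state" is the property React actually reads
  }

  render() {
    // BUG (left commented out): this.State is undefined, so reading .score throws at runtime.
    // return <Text>{this.State.score}</Text>;
    return <Text>{this.state.score}</Text>;
  }
}

export default ScoreBoard;

A linter, covered next, will usually flag exactly this kind of mistake before you ever run the app.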
Most programmers think code standards are for "the other guy," yet complain when they have to read badly written code. Who then writes the good code? Most programmers will immediately ask for the code documentation and roll their eyes when they don't find it; I usually ask to see the documentation they wrote for their last project. Every programmer I've hired usually gives me a deer-in-the-headlights look. This is why I usually require good comments in the code.

A good comment is not something like this:

//count from 99 to 1
for (i=99; i>0; i--) ...

A good comment is this:

//we are counting bottles of beer
for (i=99; i>0; i--) ...

Cleaning the lint trap (checking code standards)

When you wash clothes, the lint builds up and will eventually clog your washing machine or dryer, or cause a fire. In the PC world, old code, poorly typed names, and the like also build up. Refactoring is one way to clean up the code. I highly recommend that you use some form of version control, such as Git or Bitbucket, to check in your code; while refactoring, it's quite possible to totally mess up your code, and if you don't use version control, you may lose a lot of work.

A great way to do a code review of your work, before you publish, is to use a linter. Linters go through your code and point out problems (crud), improper syntax, and things that may work differently than you intend, and they generally try to pick up your room after you, like your mom does. While you might not like it when your mom does that, these tools are invaluable. Computers are, after all, very picky, so why not use the machines against each other?

One of the most common ways to let software check your JavaScript is a program called ESLint. You can read about it at http://bit.ly/JSLinter. To install ESLint, use npm like most packages:

npm install eslint --save-dev

The --save-dev option puts the requirement in your project only while you are developing; once you've published your app, you won't need to ship the ESLint information with your project. There are a number of other things you need to do to get ESLint to work properly; read the configuration pages and go through the tutorials. A lot depends on what IDE you use. You can use ESLint with Visual Studio, for example.

Once you've installed ESLint, you need to create a local configuration file. Do this with:

eslint --init

The --init command displays a prompt that asks how to configure the rules ESLint will follow. It asks a series of questions, including what style to use. Airbnb's is fairly common, although you can use others; there's no wrong choice. If you are working for a company, they may already have standards, so check with management. One of the prompts will ask whether you use React.

React VR coding style

Coding style can be nearly religious, but in the JavaScript and React world, some standards are very common. Airbnb has a good, fairly well-regarded style guide at http://bit.ly/JStyle. For React VR, some style options to consider are as follows:

- Use lowercase for the first letter of a variable name, in other words, this.props.currentX, not this.props.CurrentX, and don't use underscores (this is called camelCase).
- Use PascalCase only when naming constructors or classes. As you're using PascalCase for files, make the filename match the class, so import MyClass from './MyClass'.
- Be careful about 0 vs {0}. In general, learn JavaScript and React.
- Always use const or let to declare variables, to avoid polluting the global namespace.
- Avoid using ++ and --. This one was hard for me, being a C++ programmer. Hopefully, by the time you've read this, I've fixed it in the source examples; if not, do as I say, not as I do!
- Learn the difference between == and ===, and use them properly, another thing that is new for C++ and C# programmers.

In general, I highly recommend that you pore over these coding styles and use a linter when you write your code. A short sketch of several of these conventions follows.
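The following is a minimal, illustrative sketch of those conventions in a React VR component; the class name, filename, and props here are hypothetical and not part of the book's project:

import React from 'react';
import { Text, View } from 'react-vr';

// PascalCase for the class; the file would be named MazeScore.js to match.
class MazeScore extends React.Component {
  render() {
    const currentScore = this.props.currentScore; // camelCase, not CurrentScore
    let message = 'Keep going!';                  // const/let, never an undeclared global
    if (currentScore === 100) {                   // strict equality, not ==
      message = 'You win!';
    }
    return (
      <View>
        <Text>{message}</Text>
      </View>
    );
  }
}

export default MazeScore;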
Third-party dependencies

For your published website/application to work reliably, we also need to update package.json; this is sort of the "project" way to ensure that Node.js knows we need a particular piece of software. We edit the dependencies section to add the last line shown here (the emphasis is in the book; bold won't show up in a text editor, obviously). The full file was listed earlier; the relevant section now looks like this:

"dependencies": {
  "ovrui": "~2.0.0",
  "react": "16.0.0",
  "react-native": "~0.48.0",
  "three": "^0.87.0",
  "react-vr": "~2.0.0",
  "react-vr-web": "~2.0.0",
  "mersenne-twister": "^1.1.0"
},

As before, this is the manual way; a better way is to use npm install <package> --save, which records the new package you've installed in package.json. The manual edits can be handy to ensure that you've got the right versions if you get a version mismatch. And, as noted earlier, if you mess up your modules by installing and removing enough packages and still get errors after removing node_modules, issue these commands:

npm start -- --reset-cache
npm cache clean --force

The cache clean won't do it by itself; you need the reset-cache as well, otherwise the problem packages will still be cached, even if they don't physically exist!

Bundling for publishing on the web

Assuming that you have your project dependencies set up correctly, to get your project to run from a web server (typically through an ISP or service provider) you need to "bundle" it. React VR has a script that will package up everything into just a few files. Note, of course, that your desktop machine counts as a "web server," although I wouldn't recommend that you expose your development machine to the web. The better way to let other people experience your new virtual reality is to bundle it and put it on a commercial web service.

Packaging React VR for release on a website

The basic process is easy with the script React VR provides:

1. Go to the VR directory where you normally run npm start, and run the npm run bundle command.
2. Go to your website the same way you normally upload files, and create a directory called vr.
3. In your project directory, in our case f:\ReactVR\WalkInAMaze, find the following files in the .\vr\build output folder:
   client.bundle.js
   index.bundle.js
4. Copy those to your website.
5. Make a directory called static_assets.
6. Copy all of the files that your app uses from AppName\static_assets to the new static_assets folder.
7. Ensure that you have MIME mapping set up for all of your content; in particular, .obj, .mtl, and .gltf files may need new mappings.
Check with your web server documentation:

- For .gltf files, use model/gltf-binary.
- Any .bin files used by glTF should be application/octet-stream.
- For .obj files, I've used application/octet-stream.
- The official list is at http://bit.ly/MimeTypes.
- Very generally, application/octet-stream will send the files "exactly" as they are on the server, so it is sort of a general-purpose catch-all.

8. Copy the index.html from the root of your application to the directory on your website where you are publishing the app; in our case, it'll be the vr directory, so the file sits alongside the two .js files.
9. Modify index.html as shown in the following listing (note the change to the ./index.vr.bundle line):

<html>
  <head>
    <title>WalkInAMaze</title>
    <style>body { margin: 0; }</style>
    <meta name="viewport" content="width=device-width, initial-scale=1, user-scalable=no">
  </head>
  <body>
    <!-- When you're ready to deploy your app, update this line to point to your compiled client.bundle.js -->
    <script src="./client.bundle?platform=vr"></script>
    <script>
      // Initialize the React VR application
      ReactVR.init(
        // When you're ready to deploy your app, update this line to point to
        // your compiled index.bundle.js
        './index.vr.bundle?platform=vr&dev=false',
        // Attach it to the body tag
        document.body
      );
    </script>
  </body>
</html>

Note that for a production release, which means you're pointing to a prebuilt bundle on a static web server and not the React Native bundler, the dev and platform flags actually won't do anything, so there's no difference between dev=true, dev=false, or even dev=foobar.

Obtaining releases and attribution

If you used any assets from anywhere on the web, ensure that you have the proper release. For example, many Daz3D or Poser models do not include the rights to publish the geometry information; including these on your website as an OBJ or glTF file may be a violation of that agreement. Someone could download the model, or nearly all of the geometry, fairly easily and then use it for something else. I am not a lawyer; you should check with wherever you get your models to ensure that you have permission and, if necessary, attribute properly. Attribution licenses are a little difficult in a VR world unless you embed the attribution into a graphic somewhere; as we've seen, adding text can sometimes be distracting, and you will always have scale issues. If you embed a VR world in a page with <iframe>, you can always give proper attribution on the HTML side. However, this isn't really VR.

Checking image sizes and using content delivery sites

Some of the images you use, especially the ones in a <Pano> statement, can be quite large. You may need to optimize these for proper web speed and responsiveness. This is a fairly general topic, but one thing that can help is a content delivery network (CDN), especially if your world will be a high-volume one. Adding a CDN to your web server is easy. You host your asset files from a separate location, and you pass the root directory as the assetRoot in the ReactVR.init() call. For example, if your files were hosted at https://cdn.example.com/vr_assets/, you would change the method call in index.html to include the following third argument:

ReactVR.init(
  './index.bundle.js?platform=vr&dev=false',
  document.body,
  { assetRoot: 'https://cdn.example.com/vr_assets/' }
);

Optimizing your models

If you were watching the web console, you may have noticed the same model being loaded over and over. That is not necessarily the most efficient approach.
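One way to avoid requesting and configuring the same asset in several places is to resolve the model source once and pass it down to child components as a prop. The following is a minimal sketch of that idea; the component and file names are hypothetical, not from the book's project:

import React from 'react';
import { View, Model, asset } from 'react-vr';

// A child that simply renders whatever model source it is handed.
const Pedestal = (props) => (
  <Model
    source={props.modelSource}
    style={{ transform: [{ translate: props.position }, { scale: 0.1 }] }}
  />
);

// The parent resolves the asset once and reuses the same source object everywhere.
const Gallery = () => {
  const statueSource = { obj: asset('statue.obj'), mtl: asset('statue.mtl') };
  return (
    <View>
      <Pedestal modelSource={statueSource} position={[-1, 0, -5]} />
      <Pedestal modelSource={statueSource} position={[1, 0, -5]} />
    </View>
  );
};

export default Gallery;

Whether this actually reduces network fetches depends on how the runtime caches assets, so treat it as a code-organization pattern rather than a guaranteed optimization.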
Passing a model to the various child components as a prop, as sketched above, is one such technique. Polygon decimation is another technique that is very valuable in optimizing models for the web and VR. With the glTF file format, you can use normal maps and still make a low-polygon model look like a high-resolution one; techniques to do this are well documented in the game development field, and they really do work well.

You should also optimize models so they do not include unseen geometry. If you are showing a car model with blacked-out windows, for example, there is no need to have engine and interior detail loaded (unless the windows are transparent). This sounds obvious, although I found that the lamp I used to illustrate the lighting examples had almost triple the number of polygons needed; the glass lamp shade had inner and outer polygons that were hidden inside the model.

We learned how to do version upgrades and, if need be, rip and replace upgrades. We further discussed when to do an upgrade and how to publish the app on the web. If you are interested in how to include existing high-performance web code in a VR app, you may refer to the book Getting Started with React VR.

Build a Virtual Reality Solar System in Unity for Google Cardboard
Understanding the hype behind Magic Leap's New Augmented Reality Headsets
Leap Motion open sources its $100 augmented reality headset, North Star

Building VR objects in React VR 2.0: Getting started with polygons in Blender

Sunith Shetty
05 Jun 2018
11 min read
A polygon is an n-sided object composed of vertices (points), edges, and faces. A face can point in or out, or be double-sided. For most real-time VR, we use single-sided polygons; we noticed this when we first placed a plane in the world: depending on the orientation, you may not see it. In today's tutorial, we will understand why polygons are the best way to present real-time graphics. To really show how this all works, I'm going to show the internal format of an OBJ file. Normally, you won't hand edit these; we are beyond the days of VR constructed with a few thousand polygons (my first VR world had a train that represented downloads, and it had six polygons, each point lovingly crafted by hand), so hand editing things isn't necessary, but you may need to edit OBJ files to include the proper paths or make changes your modeler may not do natively. So let's dive in!

This article is an excerpt from a book written by John Gwinner titled Getting Started with React VR. In this book, you'll gain a deeper understanding of Virtual Reality and a full-fledged VR app to add to your profile.

Polygons are constructed by creating points in 3D space and connecting them with faces. You can consider that vertices are connected by lines (most modelers work this way), but in the native WebGL that React VR is based on, it's really just faces; the points don't really exist by themselves, but more or less "anchor" the corners of the polygon. For example, here is a simple triangle, modeled in Blender. In this case, I have constructed a triangle with three vertices and one face (with just a flat color, in this case green). The edges, shown in yellow or a lighter shade, are there for the convenience of the modeler and won't be explicitly rendered. Here is what the triangle looks like inside our gallery.

If you look closely at the Blender screenshot, you'll notice that the object is not centered in the world. When it exports, it will export with the translations that you have applied in Blender. This is why the triangle is slightly off center on the pedestal. The good news is that we are in outer space, floating in orbit, and therefore do not have to worry about gravity. (React VR does not have a physics engine, although it is straightforward to add one.) The second thing you may notice is that the yellow lines (lighter gray lines in print) around the triangle in Blender do not persist in the VR world. This is because the file is exported as one face, which connects three vertices.

The plural of vertex is vertices, not vertexes. If someone asks you about vertexes, you can laugh at them almost as much as when someone pronounces Bézier curve as "bez ee er." OK, to be fair, I did that once; now I always say "beh zee ay." All levity aside, let's now make the triangle look more interesting than a flat green shape. This is done through something usually called texture mapping.

Honestly, the terms "textures" and "materials" often get swapped around interchangeably, although lately they have sort of settled down: a material means anything about an object's physical appearance except its shape (how shiny it is, how transparent it is, and so on), while a texture is usually just the colors of the object (tile is red, skin may have freckles) and is therefore usually called a texture map, represented with a JPG, TGA, or other image format. There is no real cross-software file format for materials or shaders (which are usually computer code that represents the material).
When it comes time to render, there are some shader languages that are standard, although these are not always used in CAD programs. You will need to learn what your CAD program uses and become proficient in how it handles materials (and texture maps); that is far beyond the scope of this book. The OBJ file format (which is what React VR usually uses) allows the use of several different texture maps to properly construct the material. It can also indicate the material itself via parameters coded in the file.

First, let's take a look at what the triangle consists of. We imported OBJ files via the Model keyword:

<Model
  source={{
    obj: asset('OneTri.obj'),
    mtl: asset('OneTri.mtl'),
  }}
  style={{
    transform: [
      { translate: [0, -1, -5] },
      { scale: 0.1 },
    ]
  }}
/>

First, let's open the MTL (material) file, as the .obj file refers to the .mtl file. The OBJ file format was developed by Wavefront:

# Blender MTL File: 'OneTri.blend'
# Material Count: 1

newmtl BaseMat
Ns 96.078431
Ka 1.000000 1.000000 1.000000
Kd 0.040445 0.300599 0.066583
Ks 0.500000 0.500000 0.500000
Ke 0.000000 0.000000 0.000000
Ni 1.000000
d 1.000000
illum 2

A lot of this is housekeeping, but the important parameters are the following:

- Ka: ambient color, in RGB format.
- Kd: diffuse color, in RGB format.
- Ks: specular color, in RGB format.
- Ns: specular exponent, from 0 to 1,000.
- d: transparency (d originally meant "dissolved"). Note that WebGL cannot normally show refractive materials, or display real volumetric materials and ray tracing, so d is simply the percentage of how much light is blocked. 1 (the default) is fully opaque. Note that d in the .obj specification works for illum mode 2.
- Tr: alternate representation of transparency; 0 is fully opaque.
- illum <#>: a number from 0 to 10. Not all illumination models are supported by WebGL. The current list is: 0, color on and ambient off; 1, color on and ambient on; 2, highlight on (and colors) <= this is the normal setting. There are other illumination modes, but they are currently not used by WebGL. This, of course, could change.
- Ni: optical density. This is important for CAD systems, but the chances of it being supported in VR without a lot of tricks are pretty low. Computers and video cards get faster and faster all the time, though, so maybe optical density and real-time ray tracing will be supported in VR eventually, thanks to Moore's law (statistically, computing power roughly doubles every two years or so).

Very important: make sure you include the lit keyword with all of your model declarations, otherwise the loader will assume you have only an emissive (glowing) object and will ignore most of the parameters in the material file! YOU HAVE BEEN WARNED. It'll look very weird and you'll be completely confused. Don't ask me why I know!
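As a minimal, illustrative sketch of that warning, using the same OneTri asset as above, the lit prop is just a flag on the Model component; treat this as a sketch rather than the book's exact listing:

import React from 'react';
import { Model, asset } from 'react-vr';

// With lit set, the model is shaded using the material values from OneTri.mtl;
// without it, the loader treats the mesh as emissive and ignores most of them.
const LitTriangle = () => (
  <Model
    lit
    source={{ obj: asset('OneTri.obj'), mtl: asset('OneTri.mtl') }}
    style={{ transform: [{ translate: [0, -1, -5] }, { scale: 0.1 }] }}
  />
);

export default LitTriangle;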
The OBJ file itself has a description of the geometry. These are not usually something you can hand edit, but it's useful to see the overall structure. For the simple object shown before, it's quite manageable:

# Blender v2.79 (sub 0) OBJ File: 'OneTri.blend'
# www.blender.org
mtllib OneTri.mtl
o Triangle
v -7.615456 0.218278 -1.874056
v -4.384528 15.177612 -6.276536
v 4.801097 2.745610 3.762014
vn -0.445200 0.339900 0.828400
usemtl BaseMat
s off
f 3//1 2//1 1//1

First, you see a comment (marked with #) that tells you what software made it and the name of the original file; this can vary. The mtllib line is a callout to a particular material file, which we already looked at. The o line (and a g line, if there is a group) defines the name of the object and group; although React VR doesn't really use these (currently), in most modeling packages they will be listed in the hierarchy of objects.

The v and vn keywords are where it gets interesting, although these are still not something visible. The v keyword creates a vertex in x, y, z space; the vertices built will later be connected into polygons. The vn keyword establishes the normals for those vertices, and vt would create the texture coordinates of the same points. More on texture coordinates in a bit. The usemtl BaseMat line establishes which material, specified in your .mtl file, will be used for the following faces. The s off line means smoothing is turned off.

Smoothing and vertex normals can make objects look smooth, even if they are made with very few polygons. For example, take a look at these two teapots; the first is without smoothing. It looks pretty "computer graphics," right? Now have a look at the same teapot with the "s 1" parameter specified throughout, and normals included in the file. This is pretty normal (pun intended); what I mean is that most CAD software will compute normals for you. You can make normals smooth or sharp, and add edges where needed. This adds detail without excess polygons and is fast to render. The smooth teapot looks much more real, right? Well, we haven't seen anything yet! Let's discuss texture.

I didn't use to like sushi because of the texture. We're not talking about that kind of texture. Texture mapping is a lot like taking a piece of Christmas wrapping paper and putting it around an odd-shaped object. Just like when you get that weird-looking present at Christmas and don't know quite what to do, sometimes there isn't a clear right way to do the wrapping. Boxes are easy, but most interesting objects aren't always a box. I found this picture online with the caption "I hope it's an X-Box."

The "wrapping" is done via U, V coordinates in the CAD system. Let's take a look at a triangle with proper UV coordinates. We then go get our wrapping paper, that is to say, we take an image file we are going to use as the texture. We then wrap that in our CAD program by specifying it as a texture map. We'll then export the triangle and put it in our world.

You would probably have expected to see "left and bottom" on the texture map. Taking a closer look in our modeling package (still Blender), we see that the default UV mapping (using Blender's standard tools) tries to use as much of the texture map as possible, but from an artistic standpoint, that may not be what we want. This is not to show that Blender is "yer doin' it wrong," but to make the point that you've got to check the texture mapping before you export. Also, if you are attempting to import objects without U, V coordinates, double-check them!

If you are hand editing an .mtl file and your textures are not showing up, double-check your .obj file and make sure you have vt lines; if you don't, the texture will not show up, because the U, V coordinates for the texture mapping were not set. Texture mapping is non-trivial; there is quite an art to it, and entire books have been written about texturing and lighting. Having said that, you can get pretty far with Blender and any OBJ file if you've downloaded something from the internet and want to make it look a little better. We'll show you how to fix it. The end goal is to get a UV map that is more usable and efficient.
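If you are inspecting a downloaded .obj by hand, a few lines of Node.js can tell you whether texture coordinates are present at all. This is only an illustrative sketch; the filename is the OneTri example from this article, and you would substitute your own:

const fs = require('fs');

// Report whether an OBJ file contains any vt (texture coordinate) lines.
const objText = fs.readFileSync('OneTri.obj', 'utf8');
const hasUVs = objText.split('\n').some(line => line.startsWith('vt '));
console.log(hasUVs
  ? 'vt lines found: UV coordinates are set'
  : 'no vt lines: textures will not show up');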
Not all OBJ file exporters export proper texture maps, and .obj files you find online may or may not have UVs set. You can use Blender to fix the unwrapping of your model; we have several good Blender books to give you a head start. You can also use your favorite CAD modeling program, such as Max, Maya, LightWave, Houdini, and so on. This is important, so I'll mention it again in an info box: if you already use a different polygon modeler or CAD package, you don't have to learn Blender; your program will undoubtedly work fine, and you can skim this section. If you don't want to learn Blender at all, you can download all of the files that we construct from the GitHub link; you'll need some of the image files if you do work through the examples. Files for this article are at http://bit.ly/VR_Chap7.

To summarize, we learned the basics of polygon modeling with Blender, got to know the importance of polygon budgets, and covered how to export those models and the details of the OBJ/MTL file formats. To know more about how to make virtual worlds look real, do check out the book Getting Started with React VR.

Top 7 modern Virtual Reality hardware systems
Types of Augmented Reality targets
Unity plugins for augmented reality application development

Build an ARCore app with Unity from scratch

Sugandha Lahoti
21 May 2018
11 min read
In this tutorial, we will learn to install, build, and deploy Unity ARCore apps for Android. Unity is a leading cross-platform game engine that is exceptionally easy to use for building game and graphic applications quickly. Unity has developed something of a bad reputation in recent years due to its overuse in poor-quality games. It isn't because Unity can't produce high-quality games; it most certainly can. However, the ability to create games quickly often gets abused by developers seeking to release cheap games for profit.

This article is an excerpt from the book Learn ARCore - Fundamentals of Google ARCore, written by Micheal Lanham. The following is a summary of the topics we will cover in this article:

- Installing Unity and ARCore
- Building and deploying to Android
- Remote debugging
- Exploring the code

Installing Unity and ARCore

Installing the Unity editor is relatively straightforward. However, the version of Unity we will be using may still be in beta, so it is important that you pay special attention to the following instructions when installing Unity:

1. Navigate a web browser to https://unity3d.com/unity/beta. At the time of writing, we will use the most recent beta version of Unity, since ARCore is also still in beta preview. Be sure to note the version you are downloading and installing; this will help in the event you have issues working with ARCore.
2. Click on the Download installer button. This will download UnityDownloadAssistant.
3. Launch UnityDownloadAssistant.
4. Click on Next and then agree to the Terms of Service. Click on Next again.
5. Select the components to install.
6. Install Unity in a folder that identifies the version.
7. Click on Next to download and install Unity. This can take a while, so get up, move around, and grab a beverage.
8. Click on the Finish button and ensure that Unity is set to launch automatically. Let Unity launch and leave the window open; we will get back to it shortly.

Once Unity is installed, we want to download the ARCore SDK for Unity. This will be easy now that we have Git installed. Follow the given instructions to install the SDK:

1. Open a shell or Command Prompt.
2. Navigate to your Android folder. On Windows, use this:
   cd C:\Android
3. Type and execute the following:
   git clone https://github.com/google-ar/arcore-unity-sdk.git
4. After the git command completes, you will see a new folder called arcore-unity-sdk.

If this is your first time using Unity, you will need to go online to https://unity3d.com/ and create a Unity user account. The Unity editor will require that you log in on first use and from time to time.

Now that we have Unity and ARCore installed, it's time to open the sample project by implementing the following steps:

1. If you closed the Unity window, launch the Unity editor. The path on Windows will be C:\Unity 2017.3.0b8\Editor\Unity.exe. Feel free to create a shortcut with the version number in order to make it easier to launch this specific Unity version later.
2. Switch to the Unity project window and click on the Open button.
3. Select the Android/arcore-unity-sdk folder. This is the folder we installed the SDK into with the git command earlier.
4. Click on the Select Folder button. This will launch the editor and load the project.
5. Open the Assets/GoogleARCore/HelloARExample/Scenes folder in the Project window.
6. Double-click on the HelloAR scene in the Project window. This will load our AR scene into Unity.
At any point, if you see red console or error messages in the bottom status bar, this likely means you have a version conflict, and you will likely need to install a different version of Unity. Now that we have Unity and ARCore installed, we will build the project and deploy the app to an Android device in the next section.

Building and deploying to Android

With most Unity development, we could just run our scene in the editor for testing. Unfortunately, when developing ARCore applications, we need to deploy the app to a device for testing. Fortunately, the project we are opening should already be configured for the most part. So, let's get started by following the steps in the next exercise:

1. Open up the Unity editor to the sample ARCore project and open the HelloAR scene. If you left Unity open from the last exercise, just ignore this step.
2. Connect your device via USB.
3. From the menu, select File | Build Settings. Confirm that the settings match the Android build configuration.
4. Confirm that the HelloAR scene is added to the build. If the scene is missing, click on the Add Open Scenes button to add it.
5. Click on Build and Run. Be patient; first-time builds can take a while.
6. After the app gets pushed to the device, feel free to test it, as you did with the Android version.

Great! Now we have a Unity version of the sample ARCore project running. In the next section, we will look at remotely debugging our app.

Remote debugging

Having to connect a USB cable all the time to push an app is inconvenient. Not to mention that, if we wanted to do any debugging, we would need to maintain a physical USB connection to our development machine at all times. Fortunately, there is a way to connect our Android device to our development machine via Wi-Fi. Use the following steps to establish a Wi-Fi connection:

1. Ensure that a device is connected via USB.
2. Open Command Prompt or a shell. On Windows, we will add C:\Android\sdk\platform-tools to the path just for the prompt we are working on. It is recommended that you add this path to your environment variables; Google it if you are unsure of what this means.
3. Enter the following commands:
   //WINDOWS ONLY
   path C:\Android\sdk\platform-tools
   //FOR ALL
   adb devices
   adb tcpip 5555
   If it worked, you will see "restarting in TCP mode port: 5555". If you encounter an error, disconnect and reconnect the device.
4. Disconnect your device.
5. Locate the IP address of your device as follows: open your phone, go to Settings and then About phone, tap on Status, and note down the IP address.
6. Go back to your shell or Command Prompt and enter the following:
   adb connect [IP Address]
   Ensure that you use the IP address you wrote down from your device. You should see "connected to [IP Address]:5555". If you encounter a problem, just run through the steps again.

Testing the connection

Now that we have a remote connection to our device, we should test it to ensure that it works. Let's test our connection by doing the following:

1. Open up Unity to the sample AR project.
2. Expand the Canvas object in the Hierarchy window until you see the SearchingText object and select it (Hierarchy window showing the selected SearchingText object).
3. Direct your attention to the Inspector window, on the right-hand side by default. Scroll down in the window until you see the text "Searching for surfaces…".
4. Modify the text to read "Searching for ARCore surfaces…", just as we did in the last chapter for Android.
5. From the menu, select File | Build and Run.
6. Open your device and test your app.
Remotely debugging a running app

Building and pushing an app to your device this way will take longer, but it is far more convenient. Next, let's look at how we can debug a running app remotely by performing the following steps:

1. Go back to your shell or Command Prompt.
2. Enter the following command:
   adb logcat
3. You will see a stream of logs covering the screen, which is not very useful. Enter Ctrl + C (Command + C on Mac) to kill the process.
4. Enter the following command:
   //ON WINDOWS
   C:\Android\sdk\tools\monitor.bat
   //ON LINUX/MAC
   cd android-sdk/tools/
   monitor
5. This will open Android Device Monitor. You should see your device in the list on the left; ensure that you select it. You will see the log output start streaming in the LogCat window.
6. Drag the LogCat window so that it is a tab in the main window (Android Device Monitor showing the LogCat window).
7. Leave the Android Device Monitor window open and running. We will come back to it later.

Now we can build, deploy, and debug remotely. This will give us plenty of flexibility later when we want to become more mobile. Of course, the remote connection we put in place with adb will also work with Android Studio. Yet, we still are not actually tracking any log output; we will output some log messages in the next section.

Exploring the code

Unlike Android, we were able to easily modify our Unity app right in the editor without writing code. In fact, given the right Unity extensions, you can make a working game in Unity without any code. However, we want to get into the nitty-gritty details of ARCore, and that will require writing some code. Jump back to the Unity editor, and let's look at how we can modify some code by implementing the following exercise:

1. From the Hierarchy window, select the ExampleController object. This will pull up the object in the Inspector window.
2. Select the gear icon beside Hello AR Controller (Script) and, from the context menu, select Edit Script.
3. This will open your script editor and load the script; by default, that is MonoDevelop. Unity supports a number of Integrated Development Environments (IDEs) for writing C# scripts. Some popular options are Visual Studio 2015-2017 (Windows), VS Code (all), JetBrains Rider (Mac), and even Notepad++ (all). Do yourself a favor and try one of the options listed for your OS.
4. Scroll down in the script until you see the following block of code:
   public void Update()
   {
       _QuitOnConnectionErrors();
5. After the _QuitOnConnectionErrors(); line of code, add the following code:
   Debug.Log("Unity Update Method");
6. Save the file and then go back to Unity. Unity will automatically recompile the file. If you made any errors, you will see red error messages in the status bar or console.
7. From the menu, select File | Build and Run. As long as your device is still connected via TCP/IP, this will work. If your connection broke, just go back to the previous section and reset it.
8. Run the app on the device.
9. Direct your attention to Android Device Monitor and see whether you can spot those log messages.

Unity Update method

The Unity Update method is a special method that runs before/during a frame update or render. For a typical game running at 60 frames per second, this means that the Update method will be called 60 times per second as well, so you should be seeing lots of messages tagged as Unity. You can filter these messages by doing the following:

1. Jump to the Android Device Monitor window.
2. Click on the green plus button in the Saved Filters panel (Adding a new tag filter).
3. Create a new filter by entering a Filter Name (use Unity) and a Log Tag (use Unity).
4. Click on OK to add the filter.
5. Select the new Unity filter. You will now see a list of filtered messages specific to the Unity platform while the app is running on the device. If you are not seeing any messages, check your connection and try to rebuild. Ensure that you saved your edited code file in MonoDevelop as well.

Good job. We now have a working Unity setup with remote build and debug support. In this post, we installed Unity and the ARCore SDK for Unity. We then took a slight diversion by setting up a remote build and debug connection to our device using TCP/IP over Wi-Fi. Next, we tested our ability to modify the C# script in Unity by adding some debug log output. Finally, we tested our code changes using the Android Device Monitor tool to filter and track log messages from the Unity app deployed to the device.

To learn how to set up web development with JavaScript in ARCore and look through the various sample ARCore templates, check out the book Learn ARCore - Fundamentals of Google ARCore.

Getting started with building an ARCore application for Android
Unity plugins for augmented reality application development
Types of Augmented Reality targets

Getting started with building an ARCore application for Android

Sugandha Lahoti
24 Apr 2018
9 min read
Google developed ARCore to be accessible from multiple development platforms (Android [Java], Web [JavaScript], Unreal [C++], and Unity [C#]), thus giving developers plenty of flexibility and options to build applications on various platforms. While each platform has its strengths and weaknesses, all the platforms essentially extend from the native Android SDK that was originally built as Tango. This means that, regardless of your choice of platform, you will need to install and be somewhat comfortable working with the Android development tools.

In this article, we will focus on setting up the Android development tools and building an ARCore application for Android. The following is a summary of the major topics we will cover in this post:

- Installing Android Studio
- Installing ARCore
- Build and deploy
- Exploring the code

Installing Android Studio

Android Studio is a development environment for coding and deploying Android applications. As such, it contains the core set of tools we will need for building and deploying our applications to an Android device. After all, ARCore needs to be installed on a physical device in order to test. Follow the given instructions to install Android Studio for your development environment:

1. Open a browser on your development computer to https://developer.android.com/studio.
2. Click on the green DOWNLOAD ANDROID STUDIO button.
3. Agree to the Terms and Conditions and follow the instructions to download.
4. After the file has finished downloading, run the installer for your system.
5. Follow the instructions on the installation dialog to proceed. If you are installing on Windows, ensure that you set a memorable installation path that you can easily find later.
6. Click through the remaining dialogs to complete the installation.
7. When the installation is complete, you will have the option to launch the program. Ensure that the option to launch Android Studio is selected and click on Finish.

Android Studio comes with OpenJDK embedded. This means we can omit the steps for installing Java, on Windows at least. If you are doing any serious Android development, again on Windows, then you should go through the steps on your own to install the full Java JDK 1.7 and/or 1.8, especially if you plan to work with older versions of Android. On Windows, we will install everything to C:\Android; that way, we can have all the Android tools in one place. If you are using another OS, use a similar well-known path.

Now that we have Android Studio installed, we are not quite done. We still need to install the SDK tools that will be essential for building and deployment. Follow the instructions in the next exercise to complete the installation:

1. If you have not installed the Android SDK before, you will be prompted to install the SDK when Android Studio first launches.
2. Select the SDK components and ensure that you set the installation path to a well-known location.
3. Leave the Welcome to Android Studio dialog open for now. We will come back to it in a later exercise.

That completes the installation of Android Studio. In the next section, we will get into installing ARCore.

Installing ARCore

Of course, in order to work with or build any ARCore applications, we will need to install the SDK for our chosen platform. Follow the given instructions to install the ARCore SDK:

1. We will use Git to pull down the code we need directly from the source.
You can learn more about Git and how to install it on your platform at https://git-scm.com/book/en/v2/Getting-Started-Installing-Git, or use Google to search for "getting started installing Git". Ensure that, when you install on Windows, you select the defaults and let the installer set the PATH environment variables.

2. Open Command Prompt or a Windows shell and navigate to the Android installation folder (C:\Android on Windows).
3. Enter the following command:
   git clone https://github.com/google-ar/arcore-android-sdk.git
4. This will download and install the ARCore SDK into a new folder called arcore-android-sdk.
5. Ensure that you leave the command window open. We will be using it again later.

Installing the ARCore service on a device

With the ARCore SDK installed in our development environment, we can proceed with installing the ARCore service on our test device. NOTE: this step is only required when working with the Preview SDK of ARCore; when Google ARCore 1.0 is released, you will not need to perform it. Use the following steps to install the ARCore service on your device:

1. Grab your mobile device and enable the developer and debugging options by doing the following:
   - Opening the Settings app
   - Selecting System
   - Scrolling to the bottom and selecting About phone
   - Scrolling again to the bottom and tapping on Build number seven times
   - Going back to the previous screen and selecting Developer options near the bottom
   - Selecting USB debugging
2. Download the ARCore service APK from https://github.com/google-ar/arcore-android-sdk/releases/download/sdk-preview/arcore-preview.apk to the Android installation folder (C:\Android). Note that this URL will likely change in the future.
3. Connect your mobile device with a USB cable. If this is your first time connecting, you may have to wait several minutes for drivers to install. You will then be prompted on the device to allow the connection; select Allow to enable it.
4. Go back to your Command Prompt or Windows shell and run the following command:
   adb install -r -d arcore-preview.apk
   //ON WINDOWS USE:
   sdk\platform-tools\adb install -r -d arcore-preview.apk
5. After the command runs, you will see the word Success.

This completes the installation of ARCore for the Android platform. In the next section, we will build our first sample ARCore application.

Build and deploy

Now that we have all the tedious installation stuff out of the way, it's time to build and deploy a sample app to your Android device. Let's begin by jumping back to Android Studio and following the given steps:

1. Select the Open an existing Android Studio project option from the Welcome to Android Studio window. If you accidentally closed Android Studio, just launch it again.
2. Navigate to and select the Android\arcore-android-sdk\samples\java_arcore_hello_ar folder.
3. Click on OK. If this is your first time running this project, you will encounter some dependency errors. In order to resolve them, just click on the link at the bottom of each error message. This will open a dialog, and you will be prompted to accept and then download the required dependencies. Keep clicking on the links until you see no more errors.
4. Ensure that your mobile device is connected and then, from the menu, choose Run - Run. This should start the app on your device, but you may still need to resolve some dependency errors; just remember to click on the links to resolve them.
5. This will open a small dialog. Select the app option.
6. If you do not see the app option, select Build - Make Project from the menu. Again, resolve any dependency errors by clicking on the links. "Your patience will be rewarded." - Alton Brown
7. Select your device from the next dialog and click on OK. This will launch the app on your device. Ensure that you allow the app to access the device's camera.

Great, we have built and deployed our first Android ARCore app together. In the next section, we will take a quick look at the Java source code.

Exploring the code

Now, let's take a closer look at the main pieces of the app by digging into the source code. Follow the given steps to open the app's code in Android Studio:

1. From the Project window, find and double-click on HelloArActivity.
2. After the source is loaded, scroll through the code to the following section:

private void showLoadingMessage() {
    runOnUiThread(new Runnable() {
        @Override
        public void run() {
            mLoadingMessageSnackbar = Snackbar.make(
                HelloArActivity.this.findViewById(android.R.id.content),
                "Searching for surfaces...",
                Snackbar.LENGTH_INDEFINITE);
            mLoadingMessageSnackbar.getView().setBackgroundColor(0xbf323232);
            mLoadingMessageSnackbar.show();
        }
    });
}

3. Note the text "Searching for surfaces...". Select this text and change it to "Searching for ARCore surfaces...". The showLoadingMessage function is a helper for displaying the loading message. Internally, it calls runOnUiThread with a new instance of Runnable that wraps an internal run function; we do this to avoid blocking the UI thread, a major no-no. Inside the run function is where the message is set and the message Snackbar is displayed.
4. From the menu, select Run - Run 'app' to start the app on your device. Of course, ensure that your device is connected by USB.
5. Run the app on your device and confirm that the message has changed.

Great, now we have a working app with some of our own code. This certainly isn't a leap, but it's helpful to walk before we run. In this article, we started exploring ARCore by building and deploying an AR app for the Android platform. We did this by first installing Android Studio. Then, we installed the ARCore SDK and the ARCore service onto our test mobile device. Next, we loaded up the sample ARCore app and patiently installed the various required build and deploy dependencies. After a successful build, we deployed the app to our device and tested it. Finally, we made a minor code change and then deployed another version of the app.

You read an excerpt from the book Learn ARCore - Fundamentals of Google ARCore, written by Micheal Lanham. This book will help you create next-generation Augmented Reality and Mixed Reality apps with the latest version of Google ARCore.

Read More
Google ARCore is pushing immersive computing forward
Types of Augmented Reality targets

Introduction to HoloLens

Packt
11 Jul 2017
10 min read
In this article by Abhijit Jana, Manish Sharma, and Mallikarjuna Rao, the authors of the book HoloLens Blueprints, we will cover the following points to introduce you to using HoloLens for exploratory data analysis:

- Digital Reality - Under the Hood
- Holograms in reality
- Sketching the scenarios
- 3D Modeling workflow
- Adding Air Tap on speaker
- Real-time visualization through HoloLens

(For more resources related to this topic, see here.)

Digital Reality - Under the Hood

Welcome to the world of Digital Reality. The purpose of Digital Reality is to bring immersive experiences, such as taking or transporting you to different worlds or places, letting you interact within those immersive experiences, mixing digital experiences with reality, and ultimately opening new horizons to make you more productive. Applications of Digital Reality are advancing day by day; some of them are in the fields of gaming, education, defense, tourism, aerospace, corporate productivity, enterprise applications, and so on. The spectrum and scenarios of Digital Reality are huge. In order to understand them better, they are broken down into three different categories:

- Virtual Reality (VR): you are disconnected from the real world and experience a virtual world. Devices available on the market for VR include Oculus Rift, Google VR, and so on.
- Augmented Reality (AR): digital data is overlaid over the real world. Pokémon GO, one of the most famous games, is a global example of this. A device available on the market that falls under this category is Google Glass.
- Mixed Reality (MR): it spreads across the boundary of the real environment and VR. Using MR, you can have a seamless and immersive integration of the virtual and the real world.

This topic is mainly focused on developing MR applications using Microsoft HoloLens devices. Although these technologies look similar in the way they are used, and sometimes the difference is confusing to understand, there is a very clear boundary that distinguishes them from each other. As you can see in the following diagram, there is a very clear distinction between AR and VR; however, MR has a spectrum that overlaps the boundaries of the real world, AR, and VR (Digital Reality Spectrum).

The following table describes the differences between the three:

Virtual Reality: a complete virtual world; the user is completely isolated from the real world; device examples: Oculus Rift and Google VR.
Augmented Reality: overlays data over the real world; often used on mobile devices; device example: Google Glass; application example: Pokémon GO.
Mixed Reality: seamless integration of the real and virtual world; the virtual world interacts with the real world; natural interactions; device examples: HoloLens and Meta.

Holograms in reality

Till now, we have mentioned holograms several times. It is evident that these are crucial for HoloLens and holographic apps, but what is a hologram? Holograms are virtual objects made up of light and sound that blend with the real world to give us an immersive MR experience spanning both the real and virtual worlds. In other words, a hologram is an object like any other real-world object; the only difference is that it is made up of light rather than matter. The technology behind making holograms is known as holography.
The following figure represents two holographic objects placed on top of a real-size table, giving the experience of placing a real object on a real surface:
Hologram objects in a real environment
Interacting with holograms
There are basically five ways that you can interact with holograms and HoloLens: using your gaze, gestures, and voice, and through spatial audio and spatial mapping. Spatial mapping provides a detailed illustration of the real-world surfaces in the environment around HoloLens. This allows developers to understand the digitized real environment and mix holograms into the world around you. Gaze is the most common one, and we start the interaction with it. At any time, HoloLens knows what you are looking at using gaze. Based on that, the device can decide which gesture and voice commands should be targeted. Spatial audio is the sound coming out of HoloLens, and we use it to extend the MR experience beyond the visual.
HoloLens Interaction Model
Sketching the scenarios
The next step after elaborating the scenario details is to come up with sketches for the scenario. Sketching has a twofold purpose: first, it is input to the next phase of asset development for the 3D artist, and second, it helps validate requirements with the customer, so there are no surprises at the time of delivery. For sketching, the designer can either take it up on their own and build the sketches, or take help from the 3D artist. Let's start with the sketch for the primary view of the scenario, where the user is viewing the HoloLens's hologram:
- Roam around the hologram to view it from different angles
- Gaze at different interactive components
Sketch for user viewing the hologram for the HoloLens
Sketching - interaction with speakers
While viewing the hologram, a user can gaze at different interactive components. One such component, identified earlier, is the speaker. When gazing at the speaker, it should be highlighted, and the user can then Air Tap on it. The Air Tap action should expand the speaker hologram, and the user should be able to view the speaker component in detail.
Sketch for expanded speakers
After the speakers are expanded, the user should be able to visualize the speaker components in detail. Now, if the user Air Taps on the expanded speakers, the application should do the following:
- Open the textual detail component about the speakers; the user can read the content and learn about the speakers in detail
- Start voice narration, detailing the speaker details
- The user can also Air Tap on the expanded speaker component, and this action should close the expanded speaker
Textual and voice narration for speaker details
As you did the sketching for the speakers, apply a similar approach and do the sketching for other components, such as lenses, buttons, and so on.
3D Modeling workflow
Before jumping into 3D modeling, let's understand the 3D modeling workflow across the different tools that we are going to use during the course of this topic. The following diagram explains the flow of the 3D modeling workflow:
Flow of 3D Modeling workflow
Adding Air Tap on speaker
In this project, we will be using the left-side speaker for applying the Air Tap. However, you can apply the same to the right-side speaker as well. Similar to the lenses, we have two objects here which we need to identify from the object explorer.
1. Navigate to Left_speaker_geo and left_speaker_details_geo in the Object Hierarchy window.
2. Tag them as LeftSpeaker and speakerDetails respectively.
By default, when you are just viewing the holograms, we hide the speaker details section. This section only becomes visible when we Air Tap, and it is hidden again when we Air Tap once more:
Speaker with Box Collider
3. Add a new script inside the Scripts folder, and name it ShowHideBehaviour. This script will handle the show and hide behaviour of the speakerDetails game object.
4. Use the following script inside the ShowHideBehaviour.cs file. We can reuse this script for any other object we want to show or hide.

using UnityEngine;

public class ShowHideBehaviour : MonoBehaviour
{
    public GameObject showHideObject;   // the object whose visibility we control
    public bool showhide = false;       // initial visibility, set from the Inspector

    private void Start()
    {
        try
        {
            // Find the renderer on the object (or its children) and apply
            // the initial visibility.
            MeshRenderer render = showHideObject.GetComponentInChildren<MeshRenderer>();
            if (render != null)
            {
                render.enabled = showhide;
            }
        }
        catch (System.Exception)
        {
            // Ignore missing references; the object simply stays as it is.
        }
    }
}

The script finds the MeshRenderer component on the game object and enables or disables it based on the showhide property. In this script, the showhide property is exposed as public so that you can provide the reference to the object from the Unity scene itself.
5. Attach ShowHideBehaviour.cs as a component of the object tagged speakerDetails. Then drag and drop the object onto the showHideObject property. This just takes the reference to the current speaker details object and hides it in the first instance.
Attach show-hide script to the object
By default, the checkbox is unchecked, showhide is set to false, and the object is hidden from view. At this point, you must keep left_speaker_details_geo checked (active), as we are now handling its visibility using code. Now, in the Air Tapped event handler, we can enable the renderer to make it visible.
6. Add a new script from the context menu, Create | C# Script, and name it SpeakerGestureHandler. Open the script file in Visual Studio.
7. Similar to ShowHideBehaviour, the SpeakerGestureHandler class inherits from MonoBehaviour by default. In the next step, implement the IInputClickHandler interface in the SpeakerGestureHandler class. This interface defines the OnInputClicked() method, which is invoked on click input. So, whenever you perform an Air Tap gesture, this method is invoked:

RaycastHit hit;
bool isTapped = false;

public void OnInputClicked(InputEventData eventData)
{
    // Use the gaze hit to confirm that the tap landed on an object.
    hit = GazeManager.Instance.HitInfo;
    if (hit.transform.gameObject != null)
    {
        isTapped = !isTapped;
        var lftSpeaker = GameObject.FindWithTag("LeftSpeaker");
        var lftSpeakerDetails = GameObject.FindWithTag("speakerDetails");
        MeshRenderer render = lftSpeakerDetails.GetComponentInChildren<MeshRenderer>();
        if (isTapped)
        {
            // Slide the speaker out and show the details.
            lftSpeaker.transform.Translate(0.0f, -1.0f * Time.deltaTime, 0.0f);
            render.enabled = true;
        }
        else
        {
            // Slide the speaker back and hide the details.
            lftSpeaker.transform.Translate(0.0f, 1.0f * Time.deltaTime, 0.0f);
            render.enabled = false;
        }
    }
}

When the speaker is gazed at and tapped, we find the game objects for both LeftSpeaker and speakerDetails by their tag names. For the LeftSpeaker object, we apply a translation depending on whether it is tapped or not, just as we did for the lenses. For the speaker details object, we also take the reference to its MeshRenderer and toggle its visibility based on the Air Tap.
8. Attach the SpeakerGestureHandler class to the LeftSpeaker game object.
Air Tap in speaker – see it in action
The Air Tap action for the speaker is now done. Save the scene, then build and run the solution in the emulator once again.
When you can see the cursor on the speaker, perform an Air Tap.
Default View and Air Tapped View
Real-time visualization through HoloLens
We have learned about the data ingress flow, where devices connect to the IoT Hub and Stream Analytics processes the stream of data and pushes it to storage. Now, in this section, let's discuss how this stored data will be consumed for data visualization within the holographic application.
Solution to consume data through services
Summary
In this article, we introduced HoloLens by covering Digital Reality - Under the Hood, holograms in reality, sketching the scenarios, the 3D Modeling workflow, adding Air Tap on a speaker, and real-time visualization through HoloLens.
Resources for Article:
Further resources on this subject:
- Creating Controllers with Blueprints [article]
- Raspberry Pi LED Blueprints [article]
- Exploring and Interacting with Materials using Blueprints [article]

Welcome to the New World

Packt
23 Jan 2017
8 min read
We live in very exciting times. Technology is changing at a pace so rapid that it is becoming near impossible to keep up with these new frontiers as they arrive. And they seem to arrive on a daily basis now. Moore's Law continues to stand, meaning that technology is getting smaller and more powerful at a constant rate. As I said, very exciting.
In this article by Jason Odom, the author of the book HoloLens Beginner's Guide, we will discuss one of these new emerging technologies that is finally reaching a place more material than science fiction stories: Augmented, or Mixed, Reality. Imagine a world where our communication and entertainment devices are worn, and the digital tools we use, as well as the games we play, are holographic projections in the world around us. These holograms know how to interact with our world and change to fit our needs. Microsoft has led the charge by releasing such a device... the HoloLens.
(For more resources related to this topic, see here.)
The Microsoft HoloLens changes the paradigm of what we know as personal computing. We can now have our Word window up on the wall (this is how I am typing right now), we can have research material floating around it, and we can have our communication tools, such as Gmail and Skype, in the area as well. We are finally no longer trapped at a virtual desktop, on a screen, sitting on a physical desktop. We aren't even trapped by the confines of a room anymore.
What exactly is the HoloLens?
The HoloLens is a first-of-its-kind, head-worn standalone computer with a sensor array that includes microphones and multiple types of cameras, a spatial sound speaker array, a light projector, and an optical waveguide. The HoloLens is not only a wearable computer; it is also a complete replacement for the standard two-dimensional display. HoloLens can use holographic projection to create multiple screens throughout an environment, as well as fully 3D-rendered objects. With the HoloLens sensor array, these holograms can fully interact with the environment you are in. The sensor array allows the HoloLens to see the world around it, to see input from the user's hands, and to hear voice commands. While Microsoft has been very quiet about what the entire sensor array includes, we have a good general idea about the components used in it, so let's have a look at them:
- One IMU: The Inertial Measurement Unit (IMU) is a sensor package that includes an accelerometer, a gyroscope, and a magnetometer. This unit handles head orientation tracking and compensates for drift that comes from the gyroscope's eventual lack of precision.
- Four environment understanding sensors: Together, these form the spatial mapping that the HoloLens uses to create a mesh of the world around the user.
- One depth camera: Also known as a structured-light 3D scanner, this device measures the three-dimensional shape of an object using projected light patterns and a camera system. Microsoft first used this type of camera inside the Kinect for the Xbox 360 and Xbox One.
- One ambient light sensor: Ambient light sensors, or photosensors, are used for ambient light sensing as well as proximity detection.
- 2 MP photo/HD video camera: For taking pictures and video.
- Four-microphone array: These do a great job of listening to the user and not the sounds around them. Voice is one of the primary input types with HoloLens; a minimal voice-command sketch follows this list.
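The snippet below is a minimal Unity C# sketch, not taken from HoloLens Beginner's Guide, showing how a single spoken keyword can be handled with Unity's KeywordRecognizer on Windows; the keyword "expand", the class name, and the log message are placeholder choices.

using UnityEngine;
using UnityEngine.Windows.Speech;

// Minimal voice-input sketch: listens for one keyword and reacts to it.
public class VoiceCommandSketch : MonoBehaviour
{
    private KeywordRecognizer recognizer;

    void Start()
    {
        // "expand" is just an example keyword for this sketch.
        recognizer = new KeywordRecognizer(new[] { "expand" });
        recognizer.OnPhraseRecognized += OnPhraseRecognized;
        recognizer.Start();
    }

    private void OnPhraseRecognized(PhraseRecognizedEventArgs args)
    {
        // Replace this with whatever the hologram should do when spoken to.
        Debug.Log("Heard keyword: " + args.text);
    }

    void OnDestroy()
    {
        if (recognizer == null) return;
        if (recognizer.IsRunning)
        {
            recognizer.Stop();
        }
        recognizer.Dispose();
    }
}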
Putting all of these elements together forms a holographic computer that allows the user to see, hear, and interact with the world around them in new and unique ways.
What you need to develop for the HoloLens
The HoloLens development environment breaks down into two primary tools: Unity and Visual Studio. Unity is the 3D environment in which we will do most of our work. This includes adding holograms, creating user interface elements, adding sound, particle systems, and other things that bring a 3D program to life. If Unity is the meat on the bone, Visual Studio is the skeleton. Here we write scripts, or machine code, to make our 3D creations come to life and add a level of control and immersion that Unity cannot produce on its own.
Unity
Unity is a software framework designed to speed up the creation of games and 3D-based software. Generally speaking, Unity is known as a game engine, but as the holographic world becomes more prominent, we will increasingly use such a development environment for many different kinds of applications. Unity is an application that allows us to take 3D models, 2D graphics, particle systems, and sound, and make them interact with each other and with our user. Many elements are drag and drop, plug and play, what you see is what you get. This can simplify the iteration and testing process. As developers, we most likely do not want to build and compile for every little change we make during development. Unity allows us to see the changes in context to make sure they work; then, once we have accumulated a group of changes, we can test them on the HoloLens ourselves. This does not work for every aspect of HoloLens-Unity development, but it does work for a good 80% - 90%.
Visual Studio Community
Microsoft Visual Studio Community is a great free Integrated Development Environment (IDE). Here we use programming languages such as C# or JavaScript to change the behavior of objects and, generally, make things happen inside our programs.
HoloToolkit - Unity
The HoloToolkit-Unity is a repository of samples, scripts, and components to help speed up the process of development. It covers a large selection of areas in HoloLens development, such as:
- Input: Gaze, gesture, and voice are the primary ways in which we interact with the HoloLens.
- Sharing: The sharing repository helps users share holographic spaces and connect to each other via the network.
- Spatial Mapping: This is how the HoloLens sees our world. A large 3D mesh of our space is generated, giving our holograms something to interact with or bounce off of.
- Spatial Sound: The speaker array inside the HoloLens does an amazing job of giving the illusion of space. Objects behind us sound like they are behind us.
HoloLens emulator
The HoloLens emulator is an extension to Visual Studio that simulates how a program will run on the HoloLens. This is great for those who want to get started with HoloLens development but do not have an actual HoloLens yet. This software does require Microsoft Hyper-V, a feature only available in the Windows 10 Pro operating system. Hyper-V is a virtualization environment that allows the creation of a virtual machine. This virtual machine emulates the specific hardware so that one can test without the actual hardware.
Visual Studio tools for Unity
This collection of tools adds IntelliSense and debugging features to Visual Studio. If you use Visual Studio and Unity, this is a must-have:
- IntelliSense: An intelligent code completion tool for Microsoft Visual Studio. It is designed to speed up many processes when writing code; the version that comes with Visual Studio Tools for Unity has Unity-specific updates.
- Debugging: Before this extension existed, debugging Unity apps proved to be a little tedious. With this tool, we can now debug Unity applications inside Visual Studio, speeding up the bug-squashing process considerably.
Other useful tools
The following are some other useful tools that you will need:
- Image editor: Photoshop and GIMP are both good examples of programs that allow us to create 2D UI elements and textures for objects in our apps.
- 3D modeling software: 3D Studio Max, Maya, and Blender are all programs that allow us to make 3D objects that can be imported into Unity.
- Sound editing software: There are a few resources for free sounds on the web; with that in mind, Sound Forge is a great tool for editing those sounds and layering sounds together to create new ones.
Summary
In this article, we have gotten to know a little bit about the HoloLens, so we can begin our journey into this new world. Here the only limitations are our imaginations.
Resources for Article:
Further resources on this subject:
- Creating a Supercomputer [article]
- Building Voice Technology on IoT Projects [article]
- C++, SFML, Visual Studio, and Starting the first game [article]


Getting Started with VR Programming

Jake Rheude
04 Jul 2016
8 min read
This guide will go through some simple programming for VR apps using the Google VR SDK (software development kit) and the Unity3D game engine. This guide will assume that you already have a mobile device capable of running Google VR apps with a Google Cardboard, as well as a computer able to run Unity3D.
Getting Started
First and foremost, download the latest version of Unity3D from their website. Out of the four options, select "Personal" since it costs nothing to the user. Then download and run the installer. The installation process is straightforward. However, you must make sure that you select the "Android Build Support" component if you are planning on using an Android device, or "iOS Build Support" for an iOS device. If you are unsure at this point, just select both, as neither of them requires a lot of space.
Now that you have Unity3D installed, the next step is to set it up for the Google VR SDK, which can be found here. After agreeing to the terms and conditions, you will be given a link to download the repository directly. After downloading and extracting the ZIP file, you will notice that it contains a Unity Package file. Double-click on the file, and Unity will automatically load up. You will then see a window similar to the pop up below on your screen. Click the "NEW" button on the top right corner to begin your first Google VR project. Give it any project name other than the default "New Unity Project" name. For this guide, I have chosen "VR Programming Tutorial" as the project name.
As soon as your new project loads up, so will the Google VR SDK Unity Package. The relevant files should all be selected by default, so simply click the "Import" button on the bottom right corner to include the SDK in your project.
In your project's "Assets" folder, there should be a folder named "GoogleVR". This is where all the necessary components are located in order to begin working with the SDK.
From the "Assets" folder, go into "GoogleVR"->"DemoScenes"->"HeadSetDemo". Double-click on the Unity icon that is named "DemoScene". You should see something similar to this upon opening the scene file. This is where you can preview the scene before playing it to get an idea of how the game objects will be laid out in the environment. So let's try that by clicking on the "Play" button. The scene will start out from the user's perspective, which would be the main camera.
There is a slight difference in how the left eye and right eye cameras display the environment. This is called distortion correction, which is intentionally designed that way in order to adapt the display to the Google Cardboard eye lenses. You may be wondering why you are unable to look around with your mouse. This design is also intentional, to allow the developer to hover the mouse pointer in and out of the game window without disrupting the scene while it is playing. In order to look around in the environment, hold down the Ctrl key, and then the Alt key, to enable head movement. Make sure to press the keys in this order, otherwise you will only be rotating the display along the Z-axis.
You might also be wondering where the interactive menu on the floor canvas has gone. The menu is still there; it just does not appear in VR mode. Notice that the dot in the center of the display will turn into a halo when you move it over the hovering cube. This happens whenever the dot is placed over a game object in the environment that is interactive.
So even if the menu is not visible, you are still able to select the menu items. If you happen to click on the "VR Mode" button, the left eye and right eye cameras will simply go away, and the main camera will be the only camera that displays the world space. VR Mode can be enabled or disabled by clicking on the "VR Mode Enabled" checkbox in the project's inspector. Simply select "GvrMain" in the DemoScene hierarchy to have the inspector display its information.
How the scene is displayed when VR mode is disabled
Note that, as of the current implementation of Google VR, it is impossible to add UI components into the world space. This is due to the stereoscopic functionality of Google VR and the mathematics involved in calculating the distance of the game objects from the left eye and right eye cameras relative to the world environment. However, it is possible to add non-interactive UI elements (for example, a player HUD) as a child 3D element with the main camera as its parent. If you wish to create interactive UI components, they must be done strictly as game objects in the world space. This also implies that the interactive UI components must be selected by the user from a fixed position in the world space, as they would find it difficult to make their selections otherwise.
Now that we have gone over the basics of the Google VR SDK, let's move on to some programming.
Applying Attributes to Game Objects
When creating an interactive medium of any kind (in this case a VR app), some of the most basic functions can end up being more complicated than they initially seem. We will demonstrate that by incorporating what seems to be simple user movement. In the same DemoScene scene, we will add four more cubes to the environment. For the sake of cleanliness, we will first remove the existing cube, as it will be an obstruction for our new cubes.
1. To delete a game object from a scene, simply right-click it in the hierarchy and select "Delete".
2. Now that we have removed the existing cube, add a new one by clicking "Create" in the hierarchy, then selecting "3D Object" and "Cube".
3. Move the cube about 4-5 units along the X or Z axis away from the origin. You can do so by clicking and dragging the red or blue arrow.
4. Now that we have added our cube, the next step is to add a script to the player's perspective object. For this project, we can use the "GvrMain" game object to incorporate the player's movement. In the inspector tab, click on the "Add Component" button, select "New Script", and create a new script titled "MoveToCube".
5. Once the script has been created, click on the cogwheel icon and select "Edit Script". Copy and paste the MoveToCube code into MoveToCube.cs (a sketch of this script follows these steps).
6. Next, add an Event Trigger component to your cube.
7. Create a new script titled "CubeSelect". Then select the cogwheel icon and select "Edit Script" to open the script in the script editor. Copy and paste the CubeSelect code into your CubeSelect.cs script (also sketched after these steps).
8. Click on the "Add New Event Type" button. Select "PointerClick". Click the + icon to add a method to refer to. In the left box, select the "Cube" game object. For the method, select "CubeSelect" and then click on "GetCubePosition". Finally, select "GvrMain" as the target game object for the method.
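The original MoveToCube.cs and CubeSelect.cs listings appear as screenshots in the source article and are not reproduced in the text, so the two scripts below are only a minimal reconstruction consistent with the steps above. Apart from the names MoveToCube, CubeSelect, GetCubePosition, and GvrMain, every field name and implementation detail is an assumption.

// MoveToCube.cs
using UnityEngine;

// Attached to GvrMain: glides the player's viewpoint toward a target position.
// The speed value and the smoothing approach are assumptions for this sketch.
public class MoveToCube : MonoBehaviour
{
    public float speed = 2.0f;          // units per second
    private Vector3 targetPosition;
    private bool hasTarget = false;

    // Called by CubeSelect when the user selects a cube.
    public void SetTargetPosition(Vector3 position)
    {
        // Keep the camera at its current height so we only slide horizontally.
        targetPosition = new Vector3(position.x, transform.position.y, position.z);
        hasTarget = true;
    }

    void Update()
    {
        if (!hasTarget)
        {
            return;
        }
        transform.position = Vector3.MoveTowards(
            transform.position, targetPosition, speed * Time.deltaTime);
        if (transform.position == targetPosition)
        {
            hasTarget = false;
        }
    }
}

// CubeSelect.cs
using UnityEngine;

// Attached to each cube and wired to the Event Trigger's PointerClick entry.
// GetCubePosition receives the GvrMain object chosen in the inspector.
public class CubeSelect : MonoBehaviour
{
    public void GetCubePosition(GameObject player)
    {
        MoveToCube mover = player.GetComponent<MoveToCube>();
        if (mover != null)
        {
            mover.SetTargetPosition(transform.position);
        }
    }
}

With this split, MoveToCube lives on GvrMain and CubeSelect lives on each cube, exactly as the steps above describe.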
When you are finished adding the necessary components, copy and paste the cube in the project hierarchy tab three times in order to get four cubes. They will seem as if they have not appeared in the scene, only because they are overlapping each other. Change the positions of each cube so that they are separated from each other along the X and Z axes. Once completed, the scene should look something similar to this:
Now you can run the VR app and see for yourself that we have now incorporated player movement in this basic implementation.
Tips and General Advice
Many developers recommend that you do not apply any acceleration and/or deceleration to the main camera. Doing so will cause nausea for users and thus give them a negative experience with your VR application.
Keep your VR app relatively simple! The user only has two modes of input: head tracking and the Cardboard trigger button. Trying to force functionality with multiple gestures (for example, looking straight down and/or up) will not be intuitive to the user and will more than likely cause frustration.
About the Author
Jake Rheude is the Director of Business Development for Red Stag Fulfillment, a US-based e-commerce fulfillment provider focused primarily on serving e-commerce businesses shipping heavy, large, or valuable products to customers all around the world. Red Stag is so confident in its fulfillment software combined with its warehouse operations that, for any error, inaccuracy, or late shipment, not only will they reimburse you for that order, but they'll write you a check for $50.


Augmented Reality

Packt
22 Nov 2013
6 min read
(For more resources related to this topic, see here.)
A quick overview of AR concepts
As AR has become increasingly popular in the media over the last few years, unfortunately, several distorted notions of Augmented Reality have evolved. Anything that is somehow related to the real world and involves some computing, such as standing in front of a shop and watching 3D models wear the latest fashions, has become "AR". Augmented Reality emerged from research labs a few decades ago, and different definitions of AR have been produced. As more and more research fields (for example, computer vision, computer graphics, human-computer interaction, medicine, humanities, and art) have investigated AR as a technology, application, or concept, multiple overlapping definitions now exist for AR. Rather than providing you with an exhaustive list of definitions, we will present some major concepts present in any AR application.
Sensory augmentation
The term Augmented Reality itself contains the notion of reality. Augmenting generally refers to influencing one of your human sensory systems, such as vision or hearing, with additional information. This information is generally defined as digital or virtual and is produced by a computer. The technology currently uses displays to overlay and merge the physical information with the digital information. To augment your hearing, modified headphones or earphones equipped with microphones are able to mix sound from your surroundings in real time with sound generated by your computer.
Displays
The TV screen at home is the ideal device to perceive virtual content, streamed from broadcasts or played from your DVD. Unfortunately, most common TV screens are not able to capture the real world and augment it. An Augmented Reality display needs to show the real and virtual worlds simultaneously. One of the first display technologies for AR was produced by Ivan Sutherland in 1968 (named "The Sword of Damocles"). The system was rigidly mounted on the ceiling and used CRT screens and a transparent display to create the sensation of visually merging the real and virtual. Since then, we have seen different trends in AR displays, going from static to wearable and handheld displays.
One of the major trends is the use of optical see-through (OST) technology. The idea is to still see the real world through a semitransparent screen and project some virtual content onto that screen. The merging of the real and virtual worlds does not happen on the computer screen, but directly on the retina of your eye, as depicted in the following figure:
The other major trend in AR displays is what we call video see-through (VST) technology. You can imagine perceiving the world not directly, but through a video on a monitor. The video image is mixed with some virtual content (as you will see in a movie) and sent back to a standard display, such as your desktop screen, your mobile phone, or the upcoming generation of head-mounted displays, as shown in the following figure:
In this book, we will work on Android-driven mobile phones and, therefore, discuss only VST systems; the video camera used will be the one on the back of your phone.
Registration in 3D
With a display (OST or VST) in your hands, you are already able to superimpose things on your view of the real world, as you will see in TV advertisements with text banners at the bottom of the screen. However, any virtual content (such as text or images) will remain fixed in its position on the screen.
Since the superposition is completely static, your AR display will act as a head-up display (HUD), but it won't really be AR, as shown in the following figure:
Google Glass is an example of a HUD. While it uses a semitransparent screen like an OST display, the digital content remains in a static position. AR needs to know more about the real and virtual content. It needs to know where things are in space (registration) and follow where they are moving (tracking). Registration is basically the idea of aligning virtual and real content in the same space. If you are into movies or sports, you will notice that 2D or 3D graphics are superimposed onto scenes of the physical world quite often. In ice hockey, the puck is often highlighted with a colored trail. In movies such as Walt Disney's TRON (1982 version), the real and virtual elements are seamlessly blended. However, AR differs from those effects, as it is based on all of the following aspects (proposed by Ronald T. Azuma in 1997):
- It's in 3D: In the olden days, some movies were edited manually to merge virtual visual effects with real content. A well-known example is Star Wars, where all the lightsaber effects were painted by hand, frame by frame, by hundreds of artists. Nowadays, more complex techniques support merging digital 3D content (such as characters or cars) with the video image (a technique called match moving). AR inherently always does this in a 3D space.
- The registration happens in real time: In a movie, everything is prerecorded and generated in a studio; you just play the media. In AR, everything is in real time, so your application needs to merge reality and virtuality at every instant.
- It's interactive: In a movie, you only look passively at the scene from where it has been shot. In AR, you can actively move around, forward, and backward, and turn your AR display; you will still see an alignment between both worlds.
Interaction with the environment
Building a rich AR application needs interaction with the environment; otherwise, you end up with pretty 3D graphics that can turn boring quite fast. AR interaction refers to selecting and manipulating digital and physical objects and navigating in the augmented scene. Rich AR applications allow you to use objects on your table to move some virtual characters, use your hands to select floating virtual objects while walking on the street, or speak to a virtual agent appearing on your watch to arrange a meeting later in the day. We will look at how some of the standard mobile interaction techniques can also be applied to AR. We will also dig into specific techniques involving the manipulation of the real world.
Summary
Thus, we have learned about the main AR concepts through this article.
Resources for Article:
Further resources on this subject:
- Marker-based Augmented Reality on iPhone or iPad [Article]
- Creating Dynamic UI with Android Fragments [Article]
- Introducing an Android platform [Article]

AR experience using Vuforia and features definition

Packt
01 Oct 2013
4 min read
(For more resources related to this topic, see here.)
What decides the trackable score?
Trackables are the foundation of the AR experience using Vuforia. It is paramount to understand and create a suitable trackable for the experience to be robust and useful. The score attributed to the trackable in the Target Manager is our indication of how robustly the target image is going to perform, but what decides that score? The best way to understand this is to understand how Vuforia tracks images. The idea is simple: it looks for the positions of contrasting edges, in clusters, all around the image. Those edges are tracked, and based on the map of positions stored in the dataset, Vuforia can tell the relative position of the trackable in the real world and accordingly render the 3D content on top of it. This means that tracking the image is not a function of its color or of what is really in it, as much as of how many contrasting edges there are in the image and how well they are distributed across it.
To better understand this, we can look at the edges that are currently recognizable in the image we have just uploaded. To do that, simply click on the Show Features link at the top left of the webpage. The following image shows the features in the Stones image target:
Once the Show Features link has been clicked, the Target Manager layers over the target image an overlay of where it detects a recognizable edge that it can track in a Vuforia image target. Notice that it is only tracking the dark edges between the stones and nothing else in the image. It is even tracking only the high-contrast edges between the stones, while ignoring some of the lighter ones. Also notice that the number of edges found in the image is large, and they are evenly distributed all around the image. This is a great factor in what makes this image great for tracking.
To contrast this image's result, let's try an image that will yield a 1-star score when tried on the Target Manager. The following image shows a landscape image added as a target image:
Before adding this image, we might intuitively think that it is suitable for tracking. It certainly has a lot of detail of a wide-angle landscape. But this image yielded a shocking 1-star result when added to the Target Manager. The main reason for the low score is the fact that the entire image is a shade of green. This greatly diminishes the contrasting edges in the image. If we click on the Show Features link at the top, we will be able to see what the Target Manager detected in the image. The following image shows the features in the mountain landscape image:
Immediately, we notice the considerably lower number of features detected in the image compared to the Stones one. It only detected the edges created by the shadows of the objects in the image, which is clearly not enough to award it any score above 1 star.
Features definition
To help us get a higher score, we must understand what features the Target Manager is looking for. We now know that the main thing the Target Manager looks for in an image is edges, but what kind of edges specifically? To understand that, we need the definition of a feature. A feature is a sharp and spiked detail in the image, like the corner of an edge. Features must be highly contrasting to be found, and they have to be distributed evenly across the image, in a random manner.
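Vuforia's scoring algorithm itself is not public, so the following C# snippet is only a toy illustration of the idea, not Vuforia's method: it scans a grayscale image for corner-like, high-contrast points and reports how evenly they are spread over a coarse grid. The thresholds, grid size, and scoring formula are all assumptions invented for this sketch.

using System;

// Toy heuristic (not Vuforia's algorithm): counts corner-like points in a
// grayscale image and checks how evenly they are spread over a 4x4 grid.
public static class TrackabilityHeuristic
{
    public static double Score(byte[,] gray, int contrastThreshold = 40)
    {
        int height = gray.GetLength(0);
        int width = gray.GetLength(1);
        const int cells = 4;
        int[,] hits = new int[cells, cells];
        int total = 0;

        for (int y = 1; y < height - 1; y++)
        {
            for (int x = 1; x < width - 1; x++)
            {
                // A pixel is "corner-like" if it contrasts strongly with its
                // neighbours both horizontally and vertically.
                int dx = Math.Abs(gray[y, x + 1] - gray[y, x - 1]);
                int dy = Math.Abs(gray[y + 1, x] - gray[y - 1, x]);
                if (dx > contrastThreshold && dy > contrastThreshold)
                {
                    hits[y * cells / height, x * cells / width]++;
                    total++;
                }
            }
        }

        if (total == 0)
        {
            return 0.0; // no contrasting detail at all, like the flat green landscape
        }

        // Evenness: the fraction of grid cells that contain at least a few features.
        int populatedCells = 0;
        foreach (int count in hits)
        {
            if (count >= 5) populatedCells++;
        }
        double evenness = populatedCells / (double)(cells * cells);

        // Many features spread over the whole image give a higher score (0..1).
        return Math.Min(1.0, total / 500.0) * evenness;
    }
}

The two factors multiplied at the end mirror the two observations above: the raw number of contrasting, corner-like details, and how evenly they are spread across the image.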
The following image shows some simple shapes and the features recognized in them:
In the shapes illustrated above, the yellow crosses represent the features recognizable in each shape:
- Shape 1: It is a perfect circle without any corners at all, and as such, no features are recognizable in it.
- Shape 2: It has an edge to the left with two recognizable corners. That yields two recognizable features in the shape.
- Shape 3: It is a square with four edges and four corners. This yields four recognizable features in the shape.
This means that any curved object yields few to no features at all. In particular, humans and animals make very poor trackables due to their curved nature.
Summary
Thus, in this article, we learned how an image is tracked and which features are recognizable in an image.
Resources for Article:
Further resources on this subject:
- Interface Designing for Games in iOS [Article]
- Unity Game Development: Welcome to the 3D world [Article]
- Unity Game Development: Interactions (Part 1) [Article]


Getting Started with Kinect

Packt
30 Aug 2013
8 min read
(For more resources related to this topic, see here.)
Before the birth of Microsoft Kinect, few people were familiar with the technology of motion sensing. Similar devices were originally invented and developed for monitoring aerial and undersea aggressors in wartime. In non-military settings, motion sensors are widely used in alarm systems, lighting systems, and so on; they can detect if someone or something disrupts the waves traveling through a room and then trigger predefined events. Although radar sensors and modern infrared motion sensors are commonly used in our lives, we seldom notice their existence, and we can hardly make use of these devices in our own applications.
But Kinect changed everything when it was launched in North America at the end of 2010. Unlike most other user input controllers, Kinect enables users to interact with programs without touching a mouse or a pad, purely through gestures. At a top level, a Kinect sensor is made up of an RGB camera, a depth sensor, an IR emitter, and a microphone array, which consists of several microphones for sound and voice recognition. A standard Kinect (for Windows) device is shown as follows:
The Kinect device
The Kinect drivers and software, which come either from Microsoft or from third-party companies, can even track and analyze advanced gestures and the skeletons of multiple players. All these features make it possible to design brilliant and exciting applications with hands-free user input. Kinect has already brought a lot of games and software to an entirely new level. It is believed to be the bridge between the physical world we exist in and the virtual reality we create, a completely new way of interacting with art, and a profitable business opportunity for individuals and companies.
In this article, we will try to make an interesting game using the popular Kinect technology for user input. As Kinect captures the camera and depth images as video streams, we can also merge this view of our real-world environment with virtual elements, which is called Augmented Reality (AR). This enables users to feel as if they appear and live in a nonexistent world, or as if something unbelievable exists in the physical world. In this article, we will first introduce the installation of the Kinect hardware and software on personal computers, and then consider a good enough idea composed of Kinect and augmented reality elements.
Before installing the Kinect device on your PC, you obviously need to buy the Kinect equipment first. In this article, we will depend on Kinect for Windows or Kinect for Xbox 360, which can be learned about and bought at:
- http://www.microsoft.com/en-us/kinectforwindows/
- http://www.xbox.com/en-US/kinect
Please note that you don't need to buy an Xbox 360 at all. Kinect will be connected to a PC so that we can make custom programs for it. An alternative choice is Kinect for Windows, which is located at:
- http://www.microsoft.com/en-us/kinectforwindows/purchase/
The uses and development of both will make no difference for our purposes.
Installation of Kinect
It is strongly suggested that you have a Windows 7 operating system or higher. It can be either 32-bit or 64-bit, with a dual-core or faster processor. Linux developers can also benefit from third-party drivers and SDKs to manipulate Kinect components.
Before we start to discuss the software installation, you can download both the Microsoft Kinect SDK and the Developer Toolkit from:
- http://www.microsoft.com/en-us/kinectforwindows/develop/developerdownloads.aspx
In this article, we prefer to develop Kinect-based applications using Kinect SDK Version 1.5 (or higher) and the C++ language. Later versions should be backward compatible, so the source code provided in this article doesn't need to be changed.
Setting up your Kinect software on PCs
After we have downloaded the SDK and the Developer Toolkit, it's time for us to install them on the PC and ensure that they can work with the Kinect hardware. Let's perform the following steps:
1. Run the setup executable with administrator permissions.
2. Select I agree to the license terms and conditions after reading the License Agreement.
The Kinect SDK setup dialog
3. Follow the steps until the SDK installation has finished. Then, install the toolkit following similar instructions.
4. The hardware installation is easy: plug the ends of the cable into the USB port and a power point, plug the USB into your PC, and wait for the drivers to be found automatically.
5. Now, start the Developer Toolkit Browser, choose Samples: C++ from the tabs, and find and run the sample named Skeletal Viewer. You should be able to see a new window demonstrating the depth/skeleton/color images of the current physical scene, similar to the following image:
The depth (left), skeleton (middle), and color (right) images read from Kinect
Why did I do that?
We chose to set up the SDK software first so that it installs the motor and camera drivers, the APIs, and the documentation, as well as the toolkit, including resources and samples, onto the PC. If the steps are reversed, that is, the hardware is connected before installing the SDK, your Windows OS may not be able to recognize the device. Just start the SDK setup at this point and the device should be identified again during the installation process.
Before actually using Kinect, you still have to ensure there is nothing between the device and you (the player). It's best to keep the play space at least 1.8 m wide and about 1.8 m to 3.6 m long from the sensor. If you have more than one Kinect device, don't keep them face-to-face, as there may be infrared interference between them.
If you have multiple Kinects to install on the same PC, please note that one USB root hub can have one and only one Kinect connected. This is because Kinect takes over 50 percent of the USB bandwidth and needs an individual USB controller to run, so plugging more than one device into the same USB hub means only one of them will work.
The depth image at the left in the preceding image shows a human (in fact, the author) standing in front of the camera. Some parts may be totally black if they are too near (often less than 80 cm) or too far (often more than 4 m). If you are using Kinect for Windows, you can turn on Near Mode to show objects that are near the camera; however, Kinect for Xbox 360 doesn't have such a feature. You can read more about the software and hardware setup at:
- http://www.microsoft.com/en-us/kinectforwindows/purchase/sensor_setup.aspx
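The samples in this article use C++, but the Kinect for Windows SDK 1.x also ships a managed API. Purely as an illustration of the same start-up flow, the C# sketch below finds a connected sensor, enables the color, depth, and skeleton streams, and counts tracked skeletons; it is a hedged sketch of that flow, not code from the article.

using System;
using Microsoft.Kinect;

// Sketch using the Kinect SDK 1.x managed API: find a connected sensor,
// enable its streams, and report how many skeletons are being tracked.
class KinectStartupSketch
{
    static void Main()
    {
        KinectSensor sensor = null;
        foreach (KinectSensor candidate in KinectSensor.KinectSensors)
        {
            if (candidate.Status == KinectStatus.Connected)
            {
                sensor = candidate;
                break;
            }
        }
        if (sensor == null)
        {
            Console.WriteLine("No connected Kinect found.");
            return;
        }

        sensor.ColorStream.Enable();
        sensor.DepthStream.Enable();
        sensor.SkeletonStream.Enable();

        sensor.SkeletonFrameReady += (s, e) =>
        {
            using (SkeletonFrame frame = e.OpenSkeletonFrame())
            {
                if (frame == null) return;
                Skeleton[] skeletons = new Skeleton[frame.SkeletonArrayLength];
                frame.CopySkeletonDataTo(skeletons);
                int tracked = 0;
                foreach (Skeleton skeleton in skeletons)
                {
                    if (skeleton.TrackingState == SkeletonTrackingState.Tracked) tracked++;
                }
                Console.WriteLine("Tracked players: " + tracked);
            }
        };

        sensor.Start();
        Console.WriteLine("Kinect running. Press Enter to stop.");
        Console.ReadLine();
        sensor.Stop();
    }
}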
The idea of the AR-based Fruit Ninja game
Now it's time for us to define the goal we are going to achieve in this article. As a quick but practical guide to Kinect and augmented reality, we should make use of the depth detection, video streaming, and motion tracking functionalities in our project. 3D graphics APIs are also important here, because virtual elements should be included and interacted with through irregular user inputs (not common mouse or keyboard inputs).
A fine example is the Fruit Ninja game, which is already very popular all over the world. Especially on mobile devices such as smartphones and pads, you can see people destroying different kinds of fruit by touching and swiping their fingers on the screen. With the help of Kinect, our arms can act as blades to cut the flying fruit, and our images can also be shown within the virtual environment so that we can determine the posture of our bodies and the position of our arms through the screen display.
Unfortunately, this idea is not all that fresh. There are already commercial products with similar purposes available in the market; for example:
- http://marketplace.xbox.com/en-US/Product/Fruit-Ninja-Kinect/66acd000-77fe-1000-9115-d80258410b79
But please note that we are not going to design a completely different product here, or even bring it to market after finishing this article. We will only learn how to develop Kinect-based applications, work in our own way from the very beginning, and benefit from the experience in our professional work or as amateurs. So it is okay to reinvent the wheel this time, and have fun in the process and the results.
Summary
Kinect, which is a portmanteau of the words "kinetic" and "connect", is a motion sensor developed and released by Microsoft. It provides a natural user interface (NUI) for tracking and manipulating hands-free user inputs such as gestures and skeleton motions. It can be considered one of the most successful consumer electronics devices of recent years, and we will be using this novel device to build the Fruit Ninja game in this article. We will focus on developing Kinect and AR-based applications on Windows 7 or higher, using the Microsoft Kinect SDK 1.5 (or higher) and the C++ programming language. Mainly, we have introduced how to install the Kinect for Windows SDK in this article.
Resources for Article:
Further resources on this subject:
- So, what is KineticJS? [Article]
- Mission Running in EVE Online [Article]
- Making Money with Your Game [Article]