
How-To Tutorials - Game Development

369 Articles

Gaming in the Metaverse

Irena Cronin, Robert Scoble
24 Oct 2024
10 min read
This article is an excerpt from the book The Immersive Metaverse Playbook for Business Leaders, by Irena Cronin and Robert Scoble. The book explains what the metaverse is and why it is of utmost value to business decision-makers. The chapters help you get a solid understanding of the concepts and the roles that augmented reality and virtual reality play, along with information on metaverse technologies and thought-provoking consumer and enterprise use cases.

Introduction

In the Metaverse's expansive gaming landscape, several compelling use cases emerge. Gamers become creators and modifiers, democratizing game development, with quality control as a challenge. Cross-platform gaming integration fosters an inclusive gaming community, while blockchain-backed virtual merchandise and collectibles introduce new opportunities with authenticity and copyright concerns. Virtual esports tournaments become global events, requiring stringent security measures. In-game advertising and product placement offer marketing potential, but striking a balance with player experience is vital. These use cases exemplify the diverse facets of gaming in the Metaverse, highlighting innovation and challenges in the pursuit of immersive digital gaming experiences. Let's take a closer look at some use cases.

Use case 1 – game creation and modification

This use case exemplifies how the Metaverse empowers gamers to become active contributors to the gaming industry, shaping its future through their creativity and innovation. It highlights the democratization of game development and the dynamic synergy between technology, interactivity, and the challenges that come with it in this evolving digital realm.

The setup
Within the expansive and thriving Metaverse gaming landscape, a remarkable facet emerges where 3D and 2D virtual gamers are not just players but empowered creators and modifiers of games themselves. The Metaverse offers a vast canvas, brimming with opportunities for individuals and teams to craft unique gaming experiences that cater to a global audience.

Interactivity
In this immersive gaming domain, players transition into creators as they engage with innovative game creation and modification tools, which include the use of generative AI. These tools empower users to design levels, characters, and gameplay mechanics, breathing life into their imaginative concepts. Collaborative platforms within the Metaverse foster teamwork, allowing multiple creators to combine their skills and ideas seamlessly.

Technical innovation
The Metaverse's technical innovation shines through in the form of user-friendly game development platforms that bridge the gap between novice creators and experienced developers. These platforms offer intuitive interfaces, drag-and-drop functionality, and pre-built assets, making game design accessible to a wide range of enthusiasts. AI-driven game design assistance provides suggestions and optimizations, reducing the learning curve for newcomers. And with generative AI, whole 3D and 2D games could soon be fully developed this way.

Challenges
While the Metaverse fuels creativity and democratizes game development, several challenges emerge on this vibrant frontier. Balancing the influx of user-generated content with quality control becomes pivotal. Moderation systems must ensure that games meet basic quality standards and are free from malicious or inappropriate content. Additionally, striking a harmonious balance between open creativity and maintaining fair play in modified games poses an ongoing challenge. Ensuring that user-created content doesn't disrupt the gaming experience for others is a priority. Continuous development and refinement of moderation and quality control mechanisms are essential to maintain a thriving and enjoyable gaming ecosystem within the Metaverse.

Use case 2 – cross-platform gaming integration

This use case illustrates how the Metaverse transcends the limitations of individual gaming platforms, fostering a more inclusive and interconnected gaming community. Cross-platform gaming integration enhances the social and competitive aspects of gaming, enabling players to unite in a shared virtual gaming universe. As the Metaverse continues to evolve, it reshapes the way we perceive and engage in gaming, offering a glimpse into the future of interactive entertainment.

The setup
Within the expansive Metaverse gaming landscape, cross-platform gaming integration becomes a prominent feature. This innovation allows players from various gaming platforms and devices to seamlessly interact and play together, breaking down traditional gaming silos.

Interactivity
In this interconnected Metaverse, players can engage in cross-platform gaming experiences with friends and gamers from around the world. Whether you're on a PC, console, VR headset, or mobile device, you can join the same virtual gaming universe. Gamers can form diverse teams and alliances, fostering a sense of community that transcends hardware preferences. This integration offers unprecedented opportunities for collaboration and competition.

Technical innovation
The technical innovation driving this use case is the development of cross-platform compatibility protocols and infrastructure. These innovations bridge the gaps between different gaming ecosystems, allowing for cross-device gameplay. Advanced matchmaking algorithms ensure that players of similar skill levels can enjoy fair and balanced gaming experiences, regardless of their chosen platform. This technical integration transforms the Metaverse into a truly inclusive gaming space.

Challenges
While cross-platform gaming integration is a remarkable achievement, it comes with its own set of challenges. Ensuring a level playing field for all players, regardless of their platform, requires ongoing fine-tuning of matchmaking algorithms. Addressing potential disparities in hardware capabilities, such as graphics processing power, can be complex. Additionally, maintaining a secure gaming environment across diverse platforms is essential to prevent cheating, unauthorized access, and other security concerns.

Use case 3 – game-related merchandise and collectibles

This use case showcases how the Metaverse transforms the concept of gaming merchandise and collectibles, offering a virtual marketplace where gamers can not only enhance their in-game experiences but also indulge in their passion for collecting virtual treasures. The integration of blockchain technology adds a layer of trust and scarcity to these digital possessions, creating a virtual economy that mirrors the real-world collectibles market.

The setup
Within the Metaverse, a vibrant and bustling marketplace dedicated to gaming-related merchandise and collectibles emerges. This dynamic digital marketplace transforms the concept of gaming memorabilia, offering a diverse range of 3D and 2D virtual goods that hold significant value for gamers and collectors alike. It's a virtual bazaar where gamers can immerse themselves in the culture of their favorite games beyond the confines of traditional gameplay.

Interactivity
In this immersive Metaverse marketplace, players gain the opportunity to personalize their avatars with a rich array of virtual gaming apparel and accessories. Gamers can browse an extensive catalog of virtual merchandise, including iconic character costumes, in-game items, and exclusive skins. This personalized customization allows players to showcase their gaming identity and immerse themselves even deeper in their favorite game worlds.

Technical innovation
At the heart of this use case lies the implementation of blockchain technology. This innovation plays a pivotal role in securing virtual collectibles, offering gamers a sense of rarity and ownership verification akin to physical collectibles. Each virtual item is tokenized on the blockchain, ensuring its uniqueness and provenance. Gamers can confidently buy, sell, and trade virtual merchandise, knowing that their digital possessions are genuine and scarce. For the companies that offer game-related merchandise and collectibles, generative AI provides an inexpensive, fast, and easy way to create assets.

Challenges
While this Metaverse marketplace promises exciting opportunities, it also presents unique challenges. Ensuring the authenticity of virtual merchandise is paramount. The presence of counterfeit or unauthorized virtual items could undermine the trust and value within the marketplace. Additionally, addressing potential copyright issues related to virtual merchandise is a central concern. Striking a balance between allowing creative expression and protecting intellectual property rights is essential to maintaining a thriving and ethical marketplace.

Negative implications of gaming in the Metaverse

Gaming in the Metaverse, while promising incredible innovation and immersive experiences, also carries negative implications that span technological, social, and ethical dimensions. These potential drawbacks must be considered alongside the benefits to ensure a balanced perspective on this digital frontier.

Technological implications
Dependency on technology: As gaming in the Metaverse becomes increasingly sophisticated, there is a risk of individuals becoming overly dependent on technology for their entertainment and social interactions. This dependence may lead to issues related to screen time, addiction, and reduced physical activity.
Technical glitches: The reliance on advanced technology for immersive gaming experiences introduces the possibility of technical glitches, server outages, or compatibility issues. These disruptions can frustrate players and disrupt their gaming experiences.
Privacy concerns: The collection and utilization of user data within the Metaverse for targeted advertising and analytics can raise privacy concerns. Users may feel uncomfortable with the extent to which their online activities are monitored and analyzed.

Social implications
Social isolation: Immersive gaming experiences in the Metaverse could lead to social isolation as individuals spend more time in virtual environments and less time in physical social interactions. Loneliness and a lack of real-world social skills can result from excessive immersion.
Economic disparities: Access to the Metaverse and its premium gaming experiences may be limited by socioeconomic factors. Those with greater financial resources may enjoy a significant advantage, potentially creating digital divides and exclusivity.
Loss of physical interaction: The allure of the Metaverse may lead to a reduction in face-to-face social interactions, which are crucial for human well-being. The diminished importance of real-world connections could have adverse effects on mental health and relationships.

Ethical implications
Exploitative monetization: In-game purchases and microtransactions within the Metaverse can sometimes exploit players, particularly younger individuals who may not fully understand the financial implications. This raises ethical questions about the gaming industry's practices.
Digital addiction: The highly immersive nature of gaming in the Metaverse may contribute to digital addiction, where individuals struggle to disengage from virtual experiences and prioritize them over real-world responsibilities.
Content regulation: Balancing freedom of expression with maintaining a safe and inclusive gaming environment can be challenging. The Metaverse may struggle with regulating hate speech, inappropriate content, and cyberbullying.

Psychological implications
Escapism: While gaming can be a form of entertainment, excessive escapism into the Metaverse may indicate underlying psychological issues or a desire to avoid real-world problems.
Impact on mental health: Long hours spent in virtual gaming worlds may lead to mental health issues such as anxiety, depression, and a distorted sense of reality.
Cognitive overload: The complexity of immersive gaming experiences within the Metaverse can lead to cognitive overload, especially in younger players, potentially impacting their academic performance and cognitive development.

Environmental implications
Energy consumption: The infrastructure required to support the Metaverse's immersive experiences and multiplayer environments can consume significant amounts of energy, contributing to environmental concerns.
Electronic waste: As technology evolves rapidly, older gaming equipment and hardware can quickly become obsolete, leading to electronic waste disposal challenges.

Conclusion

The Metaverse is revolutionizing gaming with new opportunities for creativity, community, and commerce. It empowers gamers as creators, enables cross-platform play, introduces blockchain-backed collectibles, and hosts virtual esports tournaments. However, these advancements come with challenges such as quality control, security, and balancing ads with player experience. Additionally, potential negative impacts such as technological dependency, social isolation, and ethical concerns must be addressed. By fostering innovation responsibly, the Metaverse can become a transformative and enriching space for gamers worldwide.

Author Bio

Irena Cronin is SVP of Product for DADOS Technology, which is making an Apple Vision Pro data analytics and visualization app. She is also the CEO of Infinite Retina, which helps companies develop and implement AI, AR, and other new technologies for their businesses. Before this, she worked as an equity research analyst and gained extensive experience in evaluating both public and private companies. Cronin has an MS with Distinction in Information Technology/Management and Systems from New York University, and a joint MBA/MA from the University of Southern California. She has a BA from the University of Pennsylvania with a major in Economics (summa cum laude). Cronin speaks four languages, with near-fluent proficiency in Mandarin.

Robert Scoble has coauthored four books on technology innovation, each a decade before the technology in question went completely mainstream. He has interviewed thousands of entrepreneurs in the tech industry and has long kept his social media audiences up to date on what is happening inside the world of tech, which is bringing us so many innovations. Robert currently tracks the AI industry and is the host of a new video show, Unaligned, where he interviews entrepreneurs from the thousands of AI companies he tracks as head of strategy for Infinite Retina.


Why should you use Unreal Engine 4 to build Augmented and Virtual Reality projects

Guest Contributor
20 Dec 2019
6 min read
This is an exciting time to be a game developer. New technologies like Virtual Reality (VR) and Augmented Reality (AR) are here and growing in popularity, and a whole new generation of game consoles is just around the corner. Right now everyone wants to jump onto these bandwagons and create successful games using AR, VR, and other technologies (for more detailed information, see Chapter 15, Virtual Reality and Beyond, of my book, Learning C++ by Building Games with Unreal Engine 4 - Second Edition). But no one really wants to create everything from scratch (reinventing the wheel is just too much work). Fortunately, you don't have to. Unreal Engine 4 (UE4) can help!

Not only does Epic Games use the engine to develop their own games (and keep it constantly updated for that purpose), but many other game companies, both AAA and indie, also use it, and Epic is constantly adding new features for them too. Licensees can also update the engine themselves and make some of those changes available to the general public. UE4 also has a robust system for add-ons and plugins that many other developers contribute to. Some are free, while other, more advanced ones are available for a price. These can be extremely specialized, and a developer may release regular updates to adjust to changes in Unreal and to add new features that could make your life even easier.

So how does UE4 help with new technologies? Here are some examples.

Unreal Engine 4 for Virtual Reality
Virtual Reality (VR) is one of the most exciting technologies around, and many people are trying to get into that particular door. VR headsets from companies like Oculus, HTC, and Sony are becoming cheaper, more common, and more powerful. If you were creating a game yourself from scratch, you would need an extremely powerful graphics engine. Fortunately, UE4 already has one with VR functionality. If you already have a project you want to convert to VR, UE4 makes this easy for you. If you have an Oculus Rift or HTC Vive installed on your computer, viewing your game in VR is as easy as launching it in VR Preview mode and viewing it in your headset. While controls might take more work, UE4 has a Motion Controller you can add to help you get started quickly. You can even edit your project in VR Mode, allowing you to see the editor view in your VR headset, which can help with positioning things in your game. If you're starting a new project, UE4 now has VR-specific templates for new projects. You also have plenty of online documentation and a large community of other users working with VR in Unreal Engine 4 who can help you out.

Unreal Engine 4 for Augmented Reality
Augmented Reality (AR) is another new technology that's extremely popular right now. Pokemon Go is extremely popular, and many companies are trying to do something similar. There are also AR headsets and possibly other new ways to view AR information. Every platform has its own way of handling Augmented Reality right now. On mobile devices, iOS has ARKit to support AR programming and Android has ARCore. Fortunately, the Unreal website has a whole section on AR and how to support these in UE4 to develop AR games at https://docs.unrealengine.com/en-US/Platforms/AR/index.html. It also has information on using Magic Leap, Microsoft HoloLens, and Microsoft HoloLens 2. So by using UE4, you get a big head start on this type of development.

Working with Other New Technologies
If you want to use a technology, chances are UE4 supports it (and if not, just wait and it will). Whether you're trying to do procedural programming or just use the latest AI techniques (for more information, see Chapters 11 and 12 of my book, Learning C++ by Building Games with Unreal Engine 4 - Second Edition), chances are you can find something that already works in UE4 to help you get a head start with that technology. And with so many people using the engine, it is likely to continue to be a great way to get support for new technologies.

Support for New Platforms
UE4 already supports numerous platforms such as PC, Mac, mobile, web, Xbox One, PS4, Switch, and probably any other recent platform you can think of. With the next-gen consoles coming out in 2020, chances are Epic is already working on support for them. For the consoles, you do generally need to be a registered developer with Microsoft, Sony, and/or Nintendo to have access to the tools to develop for those platforms (and you need expensive devkits). But as more indie games show up on these platforms, you don't necessarily have to be working at a AAA studio to do this anymore. What is amazing when you develop in UE4 is that publishing for another platform should basically just work. You may need to change the controls and the screen size. An AAA 3D title might be too slow to be playable if you try to just run it on a mobile device without any changes, but the basic game functionality will be there and you can make changes from that point.

The Future
It's hard to tell what new technologies may come in the future, as new devices, game types, and methods of programming are developed. Regardless of what the future holds, there's a strong chance that UE4 will support them. So learning UE4 now is a great investment of your time. If you're interested in learning more, see my book, Learning C++ by Building Games with Unreal Engine 4 - Second Edition.

Author Bio
Sharan Volin has been programming games for more than a decade. She has worked on AAA titles for Behavior Interactive, Blind Squirrel Games, Sony Online Entertainment/Daybreak Games, Electronic Arts (Danger Close Games), 7 Studios (Activision), and more, as well as numerous smaller games. She has primarily been a UI programmer but is also interested in audio, AI, and other areas. She also taught game programming for a year at the Art Institute of California and is the author of Learning C++ by Building Games with Unreal Engine 4 - Second Edition.


Harrison Ferrone explains why C# is the preferred programming language for building games in Unity

Sugandha Lahoti
16 Dec 2019
6 min read
C# is one of the most popular programming languages used to create games in the Unity game engine. Experiences (games, AR/VR apps, and so on) built with Unity have reached nearly 3 billion devices worldwide and were installed 24 billion times in the last 12 months. We spoke to Harrison Ferrone, software engineer, game developer, creative technologist, and author of the book Learning C# by Developing Games with Unity 2019. We talked about why C# is used for game design, the recent Unity 2019.2 release, and some tips and tricks for those developing games with Unity.

On C# and game development

Why is C# widely used to create games? How does it compare to C++? How is C# being used in other areas such as mobile and web development?

I think Unity chose to move forward with C# instead of JavaScript or Boo because of its learning curve and its history with Microsoft. [Boo was one of the three scripting languages for the Unity game engine until it was dropped in 2014.] In my experience, C# is easier to learn than languages like C++, and that accessibility is a huge draw for game designers and programmers in general. With Xamarin mobile development and ASP.NET web applications in the mix, there's really no stopping the C# language any time soon.

What are C# scripts? How are they useful for creating games with Unity?

C# scripts are the code files that store behaviors in Unity, powering everything the engine does. While there are a lot of new tools that will allow a developer to make a game without them, scripts are still the best way to create custom actions and interactions within a game space.

Editor's Tip: To get started with how to create a C# script in Unity, you can go through Chapter 1 of Harrison Ferrone's book Learning C# by Developing Games with Unity 2019.
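To make the "scripts store behaviors" point concrete, here is a minimal sketch of the kind of C# script the interview is talking about. It is not taken from the book; the class name and field are illustrative.

```csharp
using UnityEngine;

// A minimal behavior script: attach it to any GameObject and the
// object will rotate continuously while the game is running.
public class Spinner : MonoBehaviour
{
    // Exposed in the Inspector so designers can tune it without code changes.
    [SerializeField] private float degreesPerSecond = 90f;

    // Called by Unity once per frame for every enabled instance.
    private void Update()
    {
        transform.Rotate(0f, degreesPerSecond * Time.deltaTime, 0f);
    }
}
```

Saving a file like this in the project's Assets folder and dragging it onto a GameObject is all it takes for the engine to start calling its Update method every frame.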
On why Harrison wrote his book, Learning C# by Developing Games with Unity 2019

Tell us the motivation behind writing your book Learning C# by Developing Games with Unity 2019. Why is developing Unity games a good way to learn the C# programming language? Why do you prefer Unity over other game engines?

My main motivation for writing the book was two-fold. First, I always wanted to be a writer, so marrying my love for technology with a lifelong dream was a no-brainer. Second, I wanted to write a beginner's book that would stay true to a beginner audience, always keeping them in mind. In terms of choosing games as a medium for learning, I've found that making something interesting and novel while learning a new skill set leads to greater absorption of the material and more overall enjoyment. Unity has always been my go-to engine because its interface is highly intuitive and easy to get started with.

You have three years of experience building iOS applications in Swift. You also have a number of articles and tutorials on the same on the Ray Wenderlich website. Recently, you started branching out into C++ and Unreal Engine 4. How did you get into game design and Unity development? What made you interested in building games?

I actually got into game design and Unity development first, before all the iOS and Swift experience. It was my major in university, and even though I couldn't find a job in the game industry right after I graduated, I still held onto it as a passion.

On developing games

The latest release of Unity, Unity 2019.2, has a number of interesting features such as ProBuilder, Shader Graph and effects, 2D Animation, the Burst Compiler, and more. What are some of your favorite features in this release? What are your expectations from Unity 2019.3?

I'm really excited about ProBuilder in this release, as it's a huge time saver for someone as artistically challenged as I am. I think tools like this will level the playing field for independent developers who may not have access to environment or level builders.

What are some essential tips and tricks that a game developer must keep in mind when working in Unity? What are the do's and don'ts?

I'd say the biggest thing to keep in mind when working with Unity is the component architecture that it's built on. When you're writing your own scripts, think about how they can be separated into their individual functions and structure them like that - with purpose. There's nothing worse than having a huge, bloated C# script that does everything under the sun and attaching it to a single game object in your project, then realizing it really needs to be separated into its component parts.

What are the biggest challenges today in the field of game development? What is your advice for those developing games using C#?

Reaching the right audience is always challenge number one in any industry, and game development is no different. This is especially true for indie game developers, as they have to always be mindful of who they are making their game for and purposefully design and program their games accordingly. As far as advice goes, I always say the same thing - learn design patterns and agile development methodologies; they will open up new avenues for professional programming and project management.

Rust has been touted as one of the successors of the C family of languages. The present state of game development in Rust is also quite encouraging. What are your thoughts on Rust for game dev? Do you think major game engines like Unity and Unreal will support Rust for game development in the future?

I don't have any experience with Rust, but major engines like Unity and Unreal are unlikely to adopt a new language because of the huge cost associated with a changeover of that magnitude. However, that also leaves the possibility open for another engine to be developed around Rust in the future that targets games, mobile, and/or web development.

About the Author

Harrison Ferrone was born in Chicago, IL, and raised all over. Most days, you can find him creating instructional content for LinkedIn Learning and Pluralsight, or tech editing for the Ray Wenderlich website. After a few years as an iOS developer at small start-ups, and one Fortune 500 company, he fell into a teaching career and never looked back. Throughout all this, he's bought many books, acquired a few cats, worked abroad, and continually wondered why Neuromancer isn't on more course syllabi. You can follow him on LinkedIn and GitHub.


What is Unity’s new Data-Oriented Technology Stack (DOTS)

Guest Contributor
04 Dec 2019
7 min read
If we look at the evolution of computing and gaming over the last decade, we can see how different things are compared to ten years ago. One of the most significant changes was moving from a world where 90% of the code ran on a single thread on a single core to a world where we all carry hundreds of GPU cores in our pockets, and we must design efficient code that can run in parallel. Looking at this change, we can imagine why Unity feels the urge to adapt to this new paradigm. Unity's original design was born in a different era, and now it is time for it to adjust to the future. The Data-Oriented Technology Stack (DOTS) is the collective name for Unity's attempt at reshaping its internal architecture in a way that is faster, lighter, and, more importantly, optimized for the current massively multi-threaded world. In this article, we will take a look at the three main components of DOTS and how they can help you develop next-generation games.

Want to learn more optimization techniques in Unity? The Unity engine comes with a great set of features to help you build high-performance games. If you want to know the techniques for writing better game scripts and learn how to optimize a game using Unity technologies such as ECS and the Burst compiler, read the book Unity Game Optimization - Third Edition, written by Chris Dickinson and Dr. Davide Aversa. This book will help you get up to speed with a series of performance-enhancing coding techniques and methods that will help you improve the performance of your Unity applications.

The Data-Oriented Technology Stack

Three components compose the Data-Oriented Technology Stack:
The Entity Component System (ECS)
The C# Job System
The Burst compiler
Let's look at each one of them.

The Entity Component System (ECS)

If you know Unity, you know that two basic structures represent every part of a game: the GameObject and the MonoBehaviour. Every GameObject contains one or more MonoBehaviours, which in turn describe the data (what the object knows) and the behavior (what the object does) of each element in a scene.

GameObject and MonoBehaviour worked well during Unity's initial years; however, with the rise of multithreaded programming, many issues with the GameObject architecture started to become more evident. First of all, a GameObject is a fat, heavy data structure. In theory, it should only be a container of MonoBehaviour instances. In practice, it has a significant number of problems. To name a few:
Every GameObject has a name and an ID.
Every GameObject has a C# wrapper object pointing to the native C++ code.
Creating and deleting a GameObject requires locking and editing a global list (that is, these operations cannot run in parallel).
Moreover, both GameObject and MonoBehaviour are dynamic objects, and they are stored everywhere in memory. It would be much better if we could keep all the MonoBehaviours of a GameObject close to each other, so that finding and running them would be more efficient.

To solve all these issues, Unity introduced the Entity Component System (ECS), a new paradigm alternative to the traditional GameObject/MonoBehaviour one. As the name suggests, there are three elements in ECS:
Components: They are conceptually similar to a MonoBehaviour, but they contain only data. For instance, a Position component will contain only a 3D vector representing the entity's position in space; a LinearVelocity component would contain only the velocity of the object; and so on. They are just plain data.
Entities: They are just a "collection" of components. For example, if I have a particle in space, I can represent it with just a list of components, e.g., Position and LinearVelocity components.
Systems: A system is where the behavior is. Each system takes a list of components and executes a function over all the entities composed of the components of that archetype.

Note: To be technically correct, an entity is not a collection data structure. Instead, it is a pointer to a location in memory where the entity's components are stored. The actual storage, though, is handled by Unity.

With this system, we can store components in contiguous arrays, and an entity is just a pointer to the archetype instance. A single function for each system can define the behavior of thousands of similar entities. This is more efficient than running an Update on every MonoBehaviour in every GameObject. For this reason, with ECS we can use entities without any slowdown or system overhead where it was impossible with GameObject instances - for instance, having an entity for each particle of a particle system. For more technical information on ECS, there is a very detailed blog post on Unity's official website.
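To make the component/entity/system split concrete, here is a minimal sketch using Unity's Entities package roughly as it looked in this era (IComponentData plus a SystemBase with Entities.ForEach). The component and system names are illustrative, and the exact package API has changed across versions, so treat this as a shape rather than copy-paste code.

```csharp
using Unity.Entities;
using Unity.Mathematics;

// A component is pure data: no methods, no behavior.
public struct Position : IComponentData
{
    public float3 Value;
}

public struct LinearVelocity : IComponentData
{
    public float3 Value;
}

// A system runs one function over every entity that has the required components.
public class MoveSystem : SystemBase
{
    protected override void OnUpdate()
    {
        float dt = Time.DeltaTime;

        // Iterates over all entities that have both Position and LinearVelocity,
        // scheduling the work across worker threads.
        Entities.ForEach((ref Position position, in LinearVelocity velocity) =>
        {
            position.Value += velocity.Value * dt;
        }).ScheduleParallel();
    }
}
```

Because Position and LinearVelocity are plain structs stored in contiguous chunks, the loop body touches tightly packed memory, which is exactly the layout advantage described above.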
The C# Job System

If ECS is how we describe the scene, we need a way to run the systems efficiently. As we said in the introduction, the modern approach to efficiency is to exploit every core in our system, and this means running code in parallel using massively multithreaded systems.

Sadly, multi-threading is hard. Extremely hard. As any experienced developer can tell you, moving from single-threaded to multi-threaded programming introduces a large class of new issues and bugs, such as race conditions. Moreover, for true multi-threading, we should get as close as possible to the metal, avoiding all the dynamic allocations and deallocations of C# and the Garbage Collector, and write part of our game in C++.

Luckily for us, Unity introduced a component of the Data-Oriented Technology Stack with the specific purpose of simplifying multithreaded programming in Unity using only C#: the Job System. You can imagine a job as a piece of code that you want to run in parallel over as many cores as possible. The Unity C# Job System helps you design this code in a way that avoids all the common multi-threading pitfalls, using only C#. You can finally unleash all the power of your machine without writing a single line of C++ code.

The Burst Compiler

What if I told you that it is possible to obtain higher performance by writing C# code instead of C++? You would think I am crazy. However, I am not, and this is the goal of the last component of the Data-Oriented Technology Stack (DOTS): the Burst compiler. The Burst compiler is a specialized code generator that compiles a subset of C# (often called High-Performance C#, or HPC#) into machine code that is, most of the time, smaller and faster than what an equivalent piece of C++ code would produce. The Burst compiler is still in preview, but you can already try it by using Unity's Package Manager. Of course, you get the most from it when it is combined with the other two DOTS components. For more technical information on the Burst compiler, you can refer to Unity's blog post.
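As a rough illustration of how the Job System and Burst fit together, here is a minimal sketch of a Burst-compiled parallel job using Unity.Jobs, Unity.Collections, and the [BurstCompile] attribute. The job name and the data it processes are made up for the example.

```csharp
using Unity.Burst;
using Unity.Collections;
using Unity.Jobs;

// A Burst-compiled job that scales every element of an array in parallel.
[BurstCompile]
public struct ScaleJob : IJobParallelFor
{
    public NativeArray<float> Values;
    public float Factor;

    // Execute is called once per index, potentially on many worker threads.
    public void Execute(int index)
    {
        Values[index] = Values[index] * Factor;
    }
}

public static class ScaleJobRunner
{
    public static void Run()
    {
        var values = new NativeArray<float>(1024, Allocator.TempJob);

        var job = new ScaleJob { Values = values, Factor = 2f };

        // Schedule the work in batches of 64 indices and wait for completion.
        JobHandle handle = job.Schedule(values.Length, 64);
        handle.Complete();

        values.Dispose();
    }
}
```

The job itself is ordinary C#; marking the struct with [BurstCompile] is what lets Burst translate it into the tighter machine code described above.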
Learn More About Unity Optimization

In this article, we have only scratched the surface of the Data-Oriented Technology Stack (DOTS). If you want to learn more about how to use the DOTS technologies and other optimization techniques for Unity, you can read more in my book Unity Game Optimization - Third Edition. This Unity book is your guide to optimizing various aspects of your game development, from game characters and scripts right through to animations. You will also explore techniques for solving performance issues with your VR projects and learn best practices for project organization to save time through an improved workflow.

Author Bio

Dr. Davide Aversa holds a PhD in artificial intelligence and an MSc in artificial intelligence and robotics from the University of Rome La Sapienza in Italy. He has a strong interest in artificial intelligence for the development of interactive virtual agents and procedural content generation. He has served as a Program Committee member for video game-related conferences such as the IEEE Conference on Computational Intelligence and Games, and he regularly participates in game jam contests. He also writes a blog on game design and game development. You can find him on Twitter, GitHub, and LinkedIn.

Related reading:
Unity 2019.2 releases with updated ProBuilder, Shader Graph, 2D Animation, Burst Compiler and more
Japanese Anime studio Khara is switching its primary 3D CG tools to Blender
Following Epic Games, Ubisoft joins Blender Development fund; adopts Blender as its main DCC tool


Blizzard comes under fire after banning pro-player for expressing support for Hong Kong protests

Sugandha Lahoti
10 Oct 2019
6 min read
Update: This article has been updated to include Blizzard's press release about relaxing the ban on the pro player.

Blizzard has been under fire since last weekend after the game publisher issued a year-long ban to a Hearthstone player who expressed support for the Hong Kong protestors during a competition live stream. The incident occurred on Sunday when Ng "Blitzchung" Wai Chung voiced support for the protesters in Hong Kong in a post-game interview. Blitzchung said, "Liberate Hong Kong. Revolution of our age!"

The ban is effective from October 5th and forbids Blitzchung from participating in any tournaments for an entire year. Blizzard is also withholding any prize money he would have earned from competing in the tournament. Blizzard has also terminated its contract with the two casters who were interviewing the competitor.

Explaining the reason behind the ban, Blizzard issued a statement: "Per the competition rule, players aren't allowed to do anything that brings [them] into public disrepute, offends a portion or group of the public, or otherwise damages [Blizzard's] image. While we stand by one's right to express individual thoughts and opinions, players and other participants that elect to participate in our esports competitions must abide by the official competition rules."

Game players, US politicians, and Blizzard employees are outraged

After the ban of the Hearthstone pro, Blizzard faced a major backlash from video game players, US politicians, and its own employees. On Tuesday, a small group of Blizzard employees walked out of work to protest the company's actions. The demonstration featured about 12-30 employees from multiple departments, who gathered around the Orc warrior statue in the center of the company's main campus in Irvine, California.

The Daily Beast spoke with a few employees. "The action Blizzard took against the player was pretty appalling but not surprising," said a longtime Blizzard employee. "Blizzard makes a lot of money in China, but now the company is in this awkward position where we can't abide by our values."

"I'm disappointed," another current Blizzard employee said. "We want people all over the world to play our games, but no action like this can be made with political neutrality."

US Senators Marco Rubio and Ron Wyden also chastised Blizzard's actions on Twitter. "Blizzard shows it is willing to humiliate itself to please the Chinese Communist Party," Senator Wyden tweeted. "No American company should censor calls for freedom to make a quick buck."

"Recognize what's happening here," Senator Rubio said on Twitter. "People who don't live in #China must either self-censor or face dismissal & suspensions. China using access to the market as leverage to crush free speech globally. Implications of this will be felt long after everyone in U.S. politics today is gone."
https://twitter.com/marcorubio/status/1181556058659135488

Blizzard's own forums and subreddits were also bombarded with angry messages denouncing the ban. The r/Blizzard subreddit went down for a few hours on Tuesday after the board was flooded with posts calling for players to boycott Blizzard and its games like World of Warcraft, Overwatch, and Hearthstone. On its Hearthstone board, a redditor, Hinz97, said in a post, "I play [Hearthstone] everyday, I climbed to Legend several times. I spent more than $10k. As a [Hong Konger], I quit [Hearthstone] without consideration."

"I've been playing since beta. Good riddance," Redditor UltimaterializerX said. "Blizzard CLEARLY only cares about the Chinese market. The censorship of art was bad enough. The censorship of human life is indefensible. Finding videos of what's going on in Hong Kong is easy and I suggest everyone do so. It's Tiananmen Square all over again."
https://twitter.com/Espsilverfire2/status/1182001007976423424

Mark Kern, Team Lead for Vanilla World of Warcraft, tweeted, "This hurts. But until Blizzard reverses their decision on @blitzchungHS. I am giving up playing Classic WoW, which I helped make and helped convince Blizzard to relaunch. There will be no Mark of Kern guild after all."

Fortnite creator Epic Games released a statement saying that it will not ban players or content creators for political speech: "Epic supports everyone's right to express their views on politics and human rights. We wouldn't ban or punish a Fortnite player or content creator for speaking on these topics."
https://twitter.com/TimSweeneyEpic/status/1181933071760789504

Blizzard has not yet responded to this development or lifted the ban.

The Hong Kong protests began in June, and the tech industry has since been caught in the middle of the China-Hong Kong political tussle. In August, Chinese state-run media agencies were caught buying advertisements and promoted tweets on Twitter and Facebook to portray Hong Kong protestors and their pro-democracy demonstrations as violent. Following this revelation, Twitter banned 936 accounts managed by the Chinese state; Facebook removed seven Pages, three Groups, and five Facebook accounts involved in coordinated inauthentic behavior; and Google shut down 210 YouTube channels. Most recently, Apple, after pressure from the Chinese government, banned a protest safety app that helps people track the locations of the Hong Kong police, which angered many people. A day later, amid the protests, Apple brought it back to the iOS App Store. Yesterday, according to Quartz investigations editor John Keefe, Apple reportedly removed the Quartz application from the App Store at the request of the Chinese government. Quartz has been covering the Hong Kong protests in detail and has been blocked across all of mainland China.

Update as of Oct 11: After four days of mounting public pressure, Blizzard Entertainment published a press release partially relaxing the ban on the professional player who expressed support for the Hong Kong protestors during a competition live stream. The one-year ban on Ng "Blitzchung" has been changed to a six-month suspension. Additionally, the two Chinese broadcasters who had been fired are now on a six-month suspension from their jobs. Blizzard President J. Allen Brack also clarified that the company was not influenced by China. "The specific views expressed by blitzchung were NOT a factor in the decision we made," Brack wrote. "I want to be clear: our relationships in China had no influence on our decision."

Related reading:
Apple bans HKmap.live, a Hong Kong protest safety app, from the iOS Store as it makes people 'evade law enforcement'
Twitter and Facebook removed accounts of Chinese state-run media agencies aimed at undermining Hong Kong protests
Telegram faces massive DDoS attack; suspects link to the ongoing Hong Kong protests


Unreal Engine 4.23 releases with major new features like Chaos, Virtual Production, improvement in real-time ray tracing and more

Vincy Davis
09 Sep 2019
5 min read
Last week, Epic released the stable version of Unreal Engine 4.23 with a whopping 192 improvements. The major features, many of them in beta, include Chaos - Destruction, Multi-Bounce Reflection fallback in real-time ray tracing, Virtual Texturing, Unreal Insights, HoloLens 2 native support, Niagara improvements, and many more. Unreal Engine 4.23 will no longer support iOS 10, as iOS 11 is now the minimum required version.

What's new in Unreal Engine 4.23?

Chaos - Destruction
Labelled as "Unreal Engine's new high-performance physics and destruction system," Chaos is available in beta for users to attain cinematic-quality visuals in real-time scenes. It also supports high-level artist control over content creation and destruction.
https://youtu.be/fnuWG2I2QCY
Chaos supports many distinct features:
Geometry Collections: A new type of asset in Unreal for short-lived objects. Geometry Collection assets can be built using one or more Static Meshes, giving the artist flexibility in choosing what to simulate and how to organize and author the destruction.
Fracturing: A Geometry Collection can be broken into pieces either individually, or by applying one pattern across multiple pieces using the Fracturing tools.
Clustering: Sub-fracturing is used by artists to increase optimization. Every sub-fracture adds an extra level to the Geometry Collection. The Chaos system keeps track of the extra levels and stores the information in a Cluster, to be controlled by the artist.
Fields: Fields can be used to control simulation and other attributes of the Geometry Collection. They enable users to vary the mass, make something static, make a corner more breakable than the middle, and more.

Unreal Insights
Currently in beta, Unreal Insights enables developers to collect and analyze data about Unreal Engine's behavior in a consistent way. The Trace System API is one of its components and is used to collect information from runtime systems consistently. Another component of Unreal Insights, the Unreal Insights Tool, supplies interactive visualization of data through the Analysis API. For in-depth details about Unreal Insights and other features, you can also check out the first preview release of Unreal Engine 4.23.

Virtual Production Pipeline Improvements
Unreal Engine 4.23 advances the virtual production pipeline, making it easier to virtually scout environments and compose shots by connecting live broadcast elements with digital representations, and more.
In-Camera VFX: With improvements to in-camera VFX, users can achieve final shots live on set by combining real-world actors and props with Unreal Engine environment backgrounds.
VR Scouting for Filmmakers: The new VR scouting tools can be used by filmmakers to navigate and interact with the virtual world in VR. Controllers and settings can also be customized in Blueprints, rather than rebuilding the engine in C++.
Live Link Datatypes and UX Improvements: The Live Link Plugin can be used to drive character animation, cameras, lights, and basic 3D transforms dynamically from other applications and data sources in the production pipeline. Other improvements include saving and loading presets for Live Link setups, better status indicators to show the current Live Link sources, and more.
Remote Control over HTTP: Unreal Engine 4.23 users can send commands to Unreal Engine and Unreal Editor remotely over HTTP. This makes it possible for users to create customized web user interfaces to trigger changes in the project's content.

Read also: Epic releases Unreal Engine 4.22, focuses on adding "photorealism in real-time environments"

Real-Time Ray Tracing Improvements
Performance and stability:
Expanded DirectX 12 support
Improved denoiser quality
Increased Ray Traced Global Illumination (RTGI) quality
Additional geometry and material support:
Landscape Terrain
Hierarchical Instanced Static Meshes (HISM) and Instanced Static Meshes (ISM)
Procedural Meshes
Transmission with SubSurface Materials
World Position Offset (WPO) support for Landscape and Skeletal Mesh geometries
Multi-Bounce Reflection Fallback: Unreal Engine 4.23 provides improved support for multi-bounce Ray Traced Reflections (RTR) by using Reflection Captures. This will increase the performance of all types of intra-reflections.

Virtual Texturing
The beta version of Virtual Texturing in Unreal Engine 4.23 enables users to create and use large textures for a lower and more constant memory footprint at runtime.
Streaming Virtual Texturing: Streaming Virtual Texturing uses Virtual Texture assets to offer an option to stream textures from disk rather than using the existing mip-based streaming. It minimizes texture memory overhead and increases performance when using very large textures.
Runtime Virtual Texturing: Runtime Virtual Texturing provides a Runtime Virtual Texture asset. It can be used to supply shading data over large areas, making it suitable for Landscape shading.

Unreal Engine 4.23 also introduces new features like Skin Weight Profiles, Animation Streaming, Dynamic Animation Graphs, Open Sound Control, Sequencer Curve Editor improvements, and more. As expected, users love the new features in Unreal Engine 4.23, especially Chaos.
https://twitter.com/rista__m/status/1170608746692673537
https://twitter.com/jayakri59101140/status/1169553133518782464
https://twitter.com/NoisestormMusic/status/1169303013149806595
To learn about the full set of updates in Unreal Engine 4.23, head over to the Unreal Engine blog.

Other news in Game Development
Japanese Anime studio Khara is switching its primary 3D CG tools to Blender
Following Epic Games, Ubisoft joins Blender Development fund; adopts Blender as its main DCC tool
Epic Games grants Blender $1.2 million in cash to improve the quality of their software development projects

Bitbucket to no longer support Mercurial, users must migrate to Git by May 2020

Fatema Patrawala
21 Aug 2019
6 min read
Yesterday marked the end of an era for Mercurial users, as Bitbucket announced it will no longer support Mercurial repositories after May 2020.

Bitbucket, owned by Atlassian, is a web-based version control repository hosting service for source code and development projects. It has supported Mercurial since its beginning in 2008, and Git since October 2011. Now, almost ten years after starting its journey with Mercurial, the Bitbucket team has decided to remove Mercurial support from Bitbucket Cloud and its API. The official announcement reads, "Mercurial features and repositories will be officially removed from Bitbucket and its API on June 1, 2020."

The Bitbucket team also communicated the timeline for sunsetting the Mercurial functionality. After February 1, 2020, users will no longer be able to create new Mercurial repositories. After June 1, 2020, users will not be able to use Mercurial features in Bitbucket or via its API, and all Mercurial repositories will be removed. All current Mercurial functionality in Bitbucket will remain available through May 31, 2020.

The team said the decision was not an easy one for them and that Mercurial held a special place in their heart. But according to a Stack Overflow Developer Survey, almost 90% of developers use Git, while Mercurial is the least popular version control system with only about 3% developer adoption. On top of this, Mercurial usage on Bitbucket saw a steady decline, and the percentage of new Bitbucket users choosing Mercurial fell to less than 1%. Hence, they decided to remove the Mercurial repos.

How can users migrate and export their Mercurial repos?

The Bitbucket team recommends that users migrate their existing Mercurial repos to Git. They have also extended support for migration and kept the available options open for discussion in a dedicated Community thread. Users can discuss conversion tools, migration, and tips, and also offer troubleshooting help. If users prefer to continue using Mercurial, there are a number of free and paid Mercurial hosting services available to them. The Bitbucket team has also created a Git tutorial that covers everything from the basics of creating pull requests to rebasing and Git hooks.

Community shows anger and sadness over the decision to discontinue Mercurial support

There is an outrage among Mercurial users, who are extremely unhappy and saddened by this decision from Bitbucket. They have expressed their anger not just on one platform but across multiple forums and community discussions. Users feel that Bitbucket's decision to stop offering Mercurial support is bad, but the decision to also delete the repos is evil.

On Hacker News, users speculated that this decision was influenced by market potential rather than by technically superior architecture and ease of use. They feel GitHub has successfully marketed Git, and that is how the two have become synonymous in the developer community. One of them comments, "It's very sad to see bitbucket dropping mercurial support. Now only Facebook and volunteers are keeping mercurial alive. Sometimes technically better architecture and user interface lose to a non user friendly hard solutions due to inertia of mass adoption. So a lesson in Software development is similar to betamax and VHS, so marketing is still a winner over technically superior architecture and ease of use. GitHub successfully marketed git, so git and GitHub are synonymous for most developers. Now majority of open source projects are reliant on a single proprietary solution Github by Microsoft, for managing code and project. Can understand the difficulty of bitbucket, when Python language itself moved out of mercurial due to the same inertia. Hopefully gitlab can come out with mercurial support to migrate projects using it from bitbucket."

Another user comments that Mercurial support was the only reason for him to use Bitbucket, when GitHub is miles ahead of it, and that now that Mercurial support is ending, Bitbucket will end soon too. The comment reads, "Mercurial support was the one reason for me to still use Bitbucket: there is no other Bitbucket feature I can think of that Github doesn't already have, while Github's community is miles ahead since everyone and their dog is already there. More importantly, Bitbucket leaves the migration to you (if I read the article correctly). Once I download my repo and convert it to git, why would I stay with the company that just made me go through an annoying (and often painful) process, when I can migrate to Github with the exact same command? And why isn't there a "migrate this repo to git" button right there? I want to believe that Bitbucket has smart people and that this choice is a good one. But I'm with you there - to me, this definitely looks like Bitbucket will die."

On Reddit, programming folks see this as a big change from Bitbucket, as it is the major Mercurial hosting provider. They feel Bitbucket announced this at pretty short notice and that they need more time for migration.

Apart from the developer community forums, users have expressed their displeasure on the Atlassian community blog as well. A team of scientists commented, "Let's get this straight : Bitbucket (offering hosting support for Mercurial projects) was acquired by Atlassian in September 2010. Nine years later Atlassian decides to drop Mercurial support and delete all Mercurial repositories. Atlassian, I hate you :-) The image you have for me is that of a harmful predator. We are a team of scientists working in a university. We don't have computer scientists, we managed to use a version control simple as Mercurial, and it was a hard work to make all scientists in our team to use a version control system (even as simple as Mercurial). We don't have the time nor the energy to switch to another version control system. But we will, forced and obliged. I really don't want to check out Github or something else to migrate our projects there, but we will, forced and obliged."

Related reading:
Atlassian Bitbucket, GitHub, and GitLab take collective steps against the Git ransomware attack
Attackers wiped many GitHub, GitLab, and Bitbucket repos with 'compromised' valid credentials, leaving behind a ransom note
BitBucket goes down for over an hour


Are you looking at transitioning from being a developer to manager? Here are some leadership roles to consider

Packt Editorial Staff
04 Jul 2019
6 min read
What does the phrase "a manager" really mean anyway? It means different things to different people and is often applied loosely, sometimes to positions that are closer to an analyst-level profile. Although the term is common, it is worth defining what it really means, especially in the context of software development. This article is an excerpt from the book The Successful Software Manager written by an internationally experienced IT manager, Herman Fung. The book is a comprehensive and practical guide to managing software developers and software customers, and it explores the process of deciding what software needs to be built, not how to build it. In this article, we'll look into aspects you must be aware of before making the move to become a manager in the software industry. A simple distinction I once used to illustrate the difference between an analyst and a manager is that while an analyst identifies, collects, and analyzes information, a manager uses this analysis and makes decisions, or more accurately, is responsible and accountable for the decisions they make. The structure of software companies is now enormously diverse and varies a lot from one company to another, which has an obvious impact on how the manager's role and responsibilities are defined; these will be unique to each company. Even within the same company, the role is subject to change from time to time, as the company itself changes. Broadly speaking, a manager within software development can be classified into three categories, as we will now discuss:

Team Leader/Manager

This role is often a lead developer who also doubles up as the team spokesperson and single point of contact. They'll typically be the most senior and knowledgeable member of a small group of developers who work on the same project, product, and technology. There is often a direct link between each developer in the team and their code, which means the team manager has a direct responsibility to ensure the product as a whole works. Usually, the team manager is also asked to fulfill people management duties, such as performance reviews and appraisals, and day-to-day HR responsibilities.

Development/Delivery Manager

This person could be either a techie or a non-techie. They will have a good understanding of the requirements, design, code, and end product. They will manage running workshops and huddles to facilitate better overall team working and delivery. This role may include setting up visual aids, such as team/project charts or boards. In a matrix management model, where developers and other experts are temporarily asked to work in project teams, the development manager will not be responsible for HR and people management duties.

Project Manager

This person is most probably a non-techie, but there are exceptions, and being a techie could be a distinct advantage on certain projects. Most importantly, a project manager will be process-focused and output-driven and will focus on distributing tasks to individuals. They are not expected to jump in to solve technical problems, but they are responsible for ensuring that the proper resources are available, while managing expectations. Specifically, they take part in managing the project budget, timeline, and risks. They should also be aware of the political landscape and management agenda within the organization to be able to navigate through them. The project manager ensures the project follows the required methodology or process framework mandated by the Project Management Office (PMO).
They will not have people-management responsibilities for project team members.

Agile practitioner

As with all roles in today's world of tech, these categories will vary and overlap. They can even be held by the same person, which is becoming increasingly common. They are also constantly evolving, which exemplifies the need to learn and grow continually, regardless of your role or position. If you are a true Agile practitioner, you may have issues with choosing these generalized categories (Team Leader, Development Manager, and Project Manager), and you'd be right to do so! These categories are most applicable to an organization that practises the traditional Waterfall model. Without diving into the everlasting Waterfall vs Agile debate, let's just say that these are categories that transcend any methodology. Even if they're not referred to by these names, they are the roles that need to be performed, to varying degrees, at various times. For completeness, it is worth noting one role specific to Agile: the scrum master.

Scrum master

A scrum master is a role often compared - rightly or wrongly - with that of the project manager. The key difference is that their focus is on facilitation and coaching, instead of organizing and controlling. This difference is as much a mindset as it is a strict practice, and these are often referred to as attributes of Servant Leadership. I believe a good scrum master will show traits of a good project manager at various times, and vice versa. This is especially true in ensuring that there is clear communication at all times and that the team stays focused on delivering together. Yet, as we look back at all these roles, it's worth remembering that with the advent of new disciplines such as big data, blockchain, artificial intelligence, and machine learning, there are new categories and opportunities to move from a developer role into a management position, for example, as an algorithm manager or data manager. Transitioning, growing, progressing, or simply changing from a developer to a manager is a wonderfully rewarding journey that is unique to everyone. After clarifying what being a "modern manager" really means, and the broad categories applicable in software development (Team / Development / Project / Agile), the overarching and often key consideration for developers is whether the move means they will be managing people and writing less code. In this article, we looked into the different leadership roles that are available to developers as part of their career progression plan. Develop crucial skills to enhance your performance and advance your career with The Successful Software Manager written by Herman Fung.

"Developers don't belong on a pedestal, they're doing a job like everyone else" - April Wensel on toxic tech culture and Compassionate Coding [Interview]
Curl's lead developer announces Google's "plan to reimplement curl in Libcrurl"
'I code in my dreams too', say developers in Jetbrains State of Developer Ecosystem 2019 Survey

Microsoft’s Xbox team at E3 2019: Project Scarlett, AI-powered Flight Simulator, Keanu Reeves in Cyberpunk 2077, and more

Bhagyashree R
11 Jun 2019
6 min read
On Sunday at E3 2019, Microsoft made some really big announcements that had the audience screaming. These included the release date of Project Scarlett, the Xbox One successor; more than 60 game trailers; Keanu Reeves taking the stage to promote Cyberpunk 2077; and much more. E3, which stands for Electronic Entertainment Expo, is one of the biggest gaming events of the year. Its official dates are June 11-13; however, these dates are just for the shows happening at the Los Angeles Convention Center. The press conferences were held on June 8 and 9. Along with hosting the world premieres of several computer and video games, the event also showcases new hardware and software products that take the gaming experience to the next level. Here are some of the highlights from Microsoft's press conference:

Project Scarlett will arrive in fall 2020 with Halo Infinite

Rumors have been going around about the next generation of Xbox since December last year. Putting all these rumors to rest, Microsoft officially announced that Project Scarlett is planned for release during fall next year. The tech giant further shared that the next big space war game, Halo Infinite, will launch alongside Project Scarlett. According to Microsoft, we can expect the new device to be four times more powerful than the Xbox One X. It includes a custom-designed CPU based on AMD's Zen 2 and Radeon RDNA architecture. It supports 8K gaming, framerates of 120fps, and ray tracing. The device will also include a solid-state drive (SSD), enabling faster game loads than the older mechanical hard drives. https://youtu.be/-ktN4bycj9s

xCloud will open for public trials in October, one month ahead of Google's Stadia

After giving a brief live demonstration of its upcoming xCloud game streaming service in March, Microsoft announced that it will be available to the public in October this year. This announcement seems to be a direct response to Google's Stadia, which was revealed in March and will make its public debut in November. Along with sharing the release date, the tech giant also gave E3 attendees the first hands-on trial of the service. At the event, Xbox chief Phil Spencer said, "Two months ago we connected all Xbox developers to Project xCloud. Today, we invite those of you here at E3 for our first public hands-on of Project xCloud. To experience the freedom to play right here at the show." Microsoft built xCloud to provide gamers with a new way to play Xbox games, where the gamers decide how and when they want to play. With xCloud Console Streaming, you will be able to "turn your Xbox One into your own personal and free xCloud server." It will enable you to stream your entire Xbox One library, including games from Xbox Game Pass, to any device of your choice. https://twitter.com/Xbox/status/1137833126959280128

Xbox Elite 2 Wireless Controller to reach you on November 4th for $179.99

Microsoft announced the launch of the Xbox Elite Wireless Controller Series 2, which it says is a totally re-engineered version of the previous Elite controller. It is open for pre-orders now and will be available on November 4th in 24 countries, priced at $179.99. The controller's new adjustable-tension thumbsticks provide improved precision, and shorter hair trigger locks enable you to fire faster. The device includes USB-C support, Bluetooth, and a rechargeable battery that lasts for up to 40 hours per charge. Along with all these updates, it also allows for extensive customization through the Xbox Accessories app on Xbox One and Windows 10 PC.
https://youtu.be/SYVw0KqQiOI

Cyberpunk 2077 featuring Keanu Reeves to release on April 16th, 2020

Last year, CD Projekt Red, the creator of Cyberpunk 2077, said that E3 2019 would be its "most important E3" ever, and we cannot agree more. Keanu Reeves, aka John Wick himself, came to announce the release date of Cyberpunk 2077: April 16th, 2020. The trailer of the game ended with the biggest surprise for the audience: the appearance of Reeves' character, Johnny Silverhand. The crowd went wild as soon as Reeves took to the stage to promote Cyberpunk 2077. When the actor said that walking in the streets of Cyberpunk 2077 will be breathtaking, a guy from the crowd yelled, "you're breathtaking." To which Reeves kindly replied: https://twitter.com/Xbox/status/1137854943006605312 The guy from the crowd was YouTuber Peter Sark, who shared on Twitter that "Keanu Reeves just announced to the world that I'm breathtaking." https://twitter.com/petertheleader/status/1137846108305014784 CD Projekt Red is now giving him a free collector's edition copy of the game, which is amazing! For everyone else, don't be upset, as you can also pre-order Cyberpunk 2077's physical and collector's editions from the official website. Unlike xCloud, attendees will not be able to get a hands-on trial, but they will still be able to see a demo presentation. The demo is happening at the South Hall in the LA Convention Center, booth 1023, on June 11-13th.

The new Microsoft Flight Simulator is powered by Azure cloud AI

Microsoft showcased a new installment of its long-running Microsoft Flight Simulator series. Powered by Azure cloud artificial intelligence and satellite data, the updated simulator is capable of rendering amazingly realistic visuals. Though not many details have been shared, its trailer shows stunning real-time 4K footage of lifelike landscapes and aircraft. Have a look at it yourself! https://youtu.be/ReDDgFfWlS4 Though the simulator has been PC-only in the past, this new version is coming to Xbox One and will also be available via Xbox Game Pass. The specific release dates are unknown, but it is expected to be out next year.

Double Fine joins Xbox Game Studios

At the event, Tim Schafer, the founder of Double Fine, shared that his company has now joined Microsoft's ever-growing family of game studios. Double Fine Productions is the studio behind games like Psychonauts, Brutal Legend, and Broken Age. He jokingly said, "For the last 19 years, we've been independent. Then Microsoft came to us and said, 'What if we gave you a bunch of money.' And I said 'OK, yeah.'" Schafer posted another video on YouTube explaining what this means for the company's existing commitments. He shared that Psychonauts 2 will be provided to crowdfunders on the platforms they chose, but going forward the company will focus on "Xbox, Game Pass, and PC." https://youtu.be/uR9yKz2C3dY These were just a few key announcements from the event. To know more, you can watch the Microsoft keynote on YouTube: https://www.youtube.com/watch?v=zeYQ-kPF0iQ

12 Visual Studio Code extensions that Node.js developers will love [Sponsored by Microsoft]
5 reasons Node.js developers might actually love using Azure [Sponsored by Microsoft]
Microsoft introduces Service Mesh Interface (SMI) for interoperability across different service mesh technologies

Microsoft Build 2019: Microsoft showcases new updates to MS 365 platform with focus on AI and developer productivity

Sugandha Lahoti
07 May 2019
10 min read
At the ongoing Microsoft Build 2019 conference, Microsoft has announced a ton of new features and tool releases with a focus on innovation using AI and mixed reality with the intelligent cloud and the intelligent edge. In his opening keynote, Microsoft CEO Satya Nadella outlined the company's vision and developer opportunity across Microsoft Azure, Microsoft Dynamics 365 and the IoT Platform, Microsoft 365, and Microsoft Gaming. "As computing becomes embedded in every aspect of our lives, the choices developers make will define the world we live in," said Satya Nadella, CEO, Microsoft. "Microsoft is committed to providing developers with trusted tools and platforms spanning every layer of the modern technology stack to build magical experiences that create new opportunity for everyone." https://youtu.be/rIJRFHDr1QE

Increasing developer productivity in the Microsoft 365 platform

Microsoft Graph data connect

Microsoft Graph is now powered with data connectivity, a service that combines analytics data from the Microsoft Graph with customers' business data. Microsoft Graph data connect will provide Office 365 data and Microsoft Azure resources to users via a toolset. The migration pipelines are deployed and managed through Azure Data Factory. Microsoft Graph data connect can be used to create new apps shared within enterprises or externally in the Microsoft Azure Marketplace. It is generally available as a feature in Workplace Analytics and also as a standalone SKU for ISVs. More information here.

Microsoft Search

Microsoft Search works as a unified search experience across all Microsoft apps: Office, Outlook, SharePoint, OneDrive, Bing, and Windows. It applies AI technology from Bing and deep personalized insights surfaced by the Microsoft Graph to personalize searches. Other features included in Microsoft Search are:

Search box displacement
Zero query typing and key-phrase suggestion feature
Query history feature, and personal search query history
Administrator access to the history of popular searches for their organizations, but not to search history for individual users
Files/people/site/bookmark suggestions

Microsoft Search will begin publicly rolling out to all Microsoft 365 and Office 365 commercial subscriptions worldwide at the end of May. Read more on MS Search here.

Fluid Framework

As the name suggests, Microsoft's newly launched Fluid Framework allows seamless editing and collaboration between different applications. Essentially, it is a web-based platform and componentized document model that allows users to, for example, edit a document in an application like Word and then share a table from that document in Microsoft Teams (or even a third-party application) with real-time syncing. Microsoft says Fluid can translate text, fetch content, suggest edits, perform compliance checks, and more. The company will launch the software development kit and the first experiences powered by the Fluid Framework later this year in Microsoft Word, Teams, and Outlook. Read more about the Fluid Framework here.

Microsoft Edge new features

Microsoft Build 2019 paved the way for a bundle of new features in Microsoft's flagship web browser, Microsoft Edge. New features include:

Internet Explorer mode: This mode integrates Internet Explorer directly into the new Microsoft Edge via a new tab. This allows businesses to run legacy Internet Explorer-based apps in a modern browser.
Privacy Tools: Additional privacy controls that allow customers to choose from three levels of privacy in Microsoft Edge: Unrestricted, Balanced, and Strict. These options limit how third parties can track users across the web. "Unrestricted" allows all third-party trackers to work in the browser. "Balanced" prevents third-party trackers from sites the user has not visited before. And "Strict" blocks all third-party trackers.
Collections: Collections allows users to collect, organize, share, and export content more efficiently and with Office integration.

Microsoft is also migrating Edge as a whole over to Chromium. This will make Edge easier for third parties to develop for. For more details, visit Microsoft's developer blog.

New toolkit enhancements in the Microsoft 365 platform

Windows Terminal

Windows Terminal is Microsoft's new application for Windows command-line users. Top features include:

A user interface with emoji-rich fonts and graphics-processing-unit-accelerated text rendering
Multiple tab support and theming and customization features
A powerful command-line user experience for users of PowerShell, Cmd, Windows Subsystem for Linux (WSL), and all forms of command-line application

Windows Terminal will arrive in mid-June and will be delivered via the Microsoft Store in Windows 10. Read more here.

React Native for Windows

Microsoft announced a new open-source project for React Native developers at Microsoft Build 2019. Developers who prefer to use the React/web ecosystem to write user-experience components can now leverage those skills and components on Windows by using the "React Native for Windows" implementation. React Native for Windows is under the MIT License and will allow developers to target any Windows 10 device, including PCs, tablets, Xbox, mixed reality devices, and more. The project is being developed on GitHub and is available for developers to test. More mature releases will follow soon.

Windows Subsystem for Linux 2

Microsoft rolled out a new architecture for the Windows Subsystem for Linux, WSL 2, at Build 2019. Microsoft will also be shipping a fully open-source Linux kernel with Windows, specially tuned for WSL 2. New features include massive file system performance increases (twice the speed for file-system-heavy operations, such as Node Package Manager installs). WSL 2 also supports running Linux Docker containers. The next generation of WSL arrives for Insiders in mid-June. More information here.

New releases in multiple Developer Tools

.NET 5 arrives in 2020

.NET 5 is the next major version of the .NET platform and will be available in 2020. .NET 5 will have all .NET Core features as well as more additions:

One Base Class Library containing APIs for building any type of application
More choice on runtime experiences
Java interoperability will be available on all platforms
Objective-C and Swift interoperability will be supported on multiple operating systems

.NET 5 will provide both Just-in-Time (JIT) and Ahead-of-Time (AOT) compilation models to support multiple compute and device scenarios. .NET 5 will also offer one unified toolchain supported by new SDK project types, as well as a flexible deployment model (side-by-side and self-contained EXEs). Detailed information here.

ML.NET 1.0

ML.NET is Microsoft's open-source and cross-platform framework that runs on Windows, Linux, and macOS and makes machine learning accessible for .NET developers. Its new version, ML.NET 1.0, was released at the Microsoft Build Conference 2019 yesterday.
Some new features in this release are:

Automated Machine Learning Preview: Transforms input data by selecting the best-performing ML algorithm with the right settings. AutoML support in ML.NET is in preview and currently supports regression and classification ML tasks.
ML.NET Model Builder Preview: Model Builder is a simple UI tool for developers which uses AutoML to build ML models. It also generates model training and model consumption code for the best-performing model.
ML.NET CLI Preview: The ML.NET CLI is a dotnet tool which generates ML.NET models using AutoML and ML.NET. The ML.NET CLI quickly iterates through a dataset for a specific ML task and produces the best model.

Visual Studio IntelliCode, Microsoft's tool for AI-assisted coding

Visual Studio IntelliCode, Microsoft's AI-assisted coding tool, is now generally available. It is essentially an enhanced IntelliSense, Microsoft's extremely popular code completion tool. IntelliCode is trained using the code of thousands of open-source projects from GitHub that have at least 100 stars. It is available for C# and XAML in Visual Studio, and for Java, JavaScript, TypeScript, and Python in Visual Studio Code. IntelliCode is also included by default in Visual Studio 2019, starting in version 16.1 Preview 2. Additional capabilities, such as custom models, remain in public preview.

Visual Studio 2019 version 16.1 Preview 2

The Visual Studio 2019 version 16.1 Preview 2 release includes IntelliCode and the GitHub extensions by default. It also brings out of preview the Time Travel Debugging feature introduced with version 16.0, and includes multiple performance and productivity improvements for .NET and C++ developers.

Gaming and Mixed Reality

Minecraft AR game for mobile devices

At the end of Microsoft's Build 2019 keynote yesterday, Microsoft teased a new Minecraft game in augmented reality, running on a phone. The teaser notes that more information will be coming on May 17th, the 10-year anniversary of Minecraft. https://www.youtube.com/watch?v=UiX0dVXiGa8

HoloLens 2 Development Edition and Unreal Engine support

The HoloLens 2 Development Edition includes a HoloLens 2 device, $500 in Azure credits, and three-month free trials of Unity Pro and the Unity PiXYZ Plugin for CAD data, starting at $3,500 or as low as $99 per month. The HoloLens 2 Development Edition will be available for preorder soon and will ship later this year. Unreal Engine support for streaming and native platform integration will be available for HoloLens 2 by the end of May.

Intelligent Edge and IoT

Azure IoT Central new features

Microsoft Build 2019 also featured new additions to Azure IoT Central, an IoT software-as-a-service solution:

Better rules processing and custom rules with services like Azure Functions or Azure Stream Analytics
Multiple dashboards and data visualization options for different types of users
Inbound and outbound data connectors, so that operators can integrate with other systems
The ability to add custom branding and operator resources to an IoT Central application with new white labeling options

New Azure IoT Central features are available for customer trials.

IoT Plug and Play

IoT Plug and Play is a new, open modeling language to connect IoT devices to the cloud seamlessly without developers having to write a single line of embedded code. IoT Plug and Play also enables device manufacturers to build smarter IoT devices that just work with the cloud. Cloud developers will be able to find IoT Plug and Play enabled devices in Microsoft's Azure IoT Device Catalog.
The first device partners include Compal, Kyocera, and STMicroelectronics, among others.

Azure Maps Mobility Service

Azure Maps Mobility Service is a new API which provides real-time public transit information, including nearby stops, routes, and trip intelligence. This API will also provide transit services to help with city planning, logistics, and transportation. Azure Maps Mobility Service will be in public preview in June. Read more about Azure Maps Mobility Service here.

KEDA: Kubernetes-based event-driven autoscaling

Microsoft and Red Hat collaborated to create KEDA, an open-source project that supports the deployment of serverless, event-driven containers on Kubernetes. It can be used in any Kubernetes environment, in any public/private cloud or on-premises, such as Azure Kubernetes Service (AKS) and Red Hat OpenShift. KEDA has support for built-in triggers to respond to events happening in other services or components. This allows the container to consume events directly from the source, instead of routing through HTTP. KEDA also presents a new hosting option for Azure Functions that can be deployed as a container in Kubernetes clusters.

Securing elections and political campaigns

ElectionGuard SDK and Microsoft 365 for Campaigns

ElectionGuard is a free, open-source software development kit (SDK), released as an extension of Microsoft's Defending Democracy Program, to enable end-to-end verifiability and improved risk-limiting audit capabilities for elections in voting systems. Microsoft 365 for Campaigns provides the security capabilities of Microsoft 365 Business to political parties and individual candidates. More details here.

Microsoft Build is in its 6th year and will continue till 8th May. The conference hosts over 6,000 attendees, with nearly 500 student-age developers and over 2,600 customers and partners in attendance. Watch it live here!

Microsoft introduces Remote Development extensions to make remote development easier on VS Code
Docker announces a collaboration with Microsoft's .NET at DockerCon 2019
How Visual Studio Code can help bridge the gap between full-stack development and DevOps [Sponsored by Microsoft]

Google DeepMind’s AI AlphaStar beats StarCraft II pros TLO and MaNa; wins 10-1 against the gamers

Natasha Mathur
25 Jan 2019
5 min read
Two days ago, the Blizzard team announced a demo of the progress made by Google's DeepMind AI at StarCraft II, a real-time strategy video game. The demo was presented yesterday over a live stream, where it showed AlphaStar, DeepMind's StarCraft II AI program, beating the top two professional StarCraft II players, TLO and MaNa. The demo presented a series of five separate test matches that were held earlier, on 19 December, against Team Liquid's Grzegorz "MaNa" Komincz and Dario "TLO" Wünsch. AlphaStar beat the two professional players, managing to score 10-0 in total (5-0 against each). After the 10 straight wins, AlphaStar was finally beaten by MaNa in a live match streamed by Blizzard and DeepMind. https://twitter.com/LiquidTLO/status/1088524496246657030 https://twitter.com/Liquid_MaNa/status/1088534975044087808

How does AlphaStar learn?

AlphaStar learns by imitating the basic micro and macro-strategies used by players on the StarCraft ladder. A neural network was trained initially using supervised learning on anonymised human games released by Blizzard. This initial AI agent managed to defeat the "Elite" level AI in 95% of games. Once the agents are trained from human game replays, they are then trained against other competitors in the "AlphaStar league". This is where a multi-agent reinforcement learning process starts. New competitors are added to the league (branched from existing competitors). Each of these agents then learns from games against other competitors. This ensures that each competitor performs well against the strongest strategies, and does not forget how to defeat earlier ones. (A deliberately simplified sketch of this kind of league-based self-play loop is shown at the end of this section.) As the league continues to progress, new counter-strategies emerge that can defeat the earlier strategies. Also, each agent has its own learning objective, which gets adapted during training. One agent might have an objective to beat one specific competitor, while another might aim to beat a whole distribution of competitors. So, the neural network weights of each agent are updated using reinforcement learning from its games against competitors. This helps optimise their personal learning objective.

How does AlphaStar play the game?

TLO and MaNa, professional StarCraft players, can issue hundreds of actions per minute (APM) on average. AlphaStar had an average APM of around 280 in its games against TLO and MaNa, which is significantly lower than the professional players. This is because AlphaStar starts its learning using replays and thereby mimics the way humans play the game. Moreover, AlphaStar also showed a delay between observation and action of 350ms on average. AlphaStar might have had a slight advantage over the human players as it interacted with the StarCraft game engine directly via its raw interface. What this means is that it could observe the attributes of its own as well as its opponent's visible units on the map directly, essentially getting a zoomed-out view of the game. Human players, however, have to split their time and attention to decide where to focus the camera on the map. But the analysis of the games showed that the AI agents "switched context" about 30 times per minute, akin to MaNa or TLO. This shows that AlphaStar's success against MaNa and TLO was due to its superior macro and micro-strategic decision-making; it wasn't a superior click-rate, faster reaction times, or the raw interface that made the AI win.
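To make the league-based training described above a little more concrete, the following is a deliberately tiny, self-contained C++ toy. It is not DeepMind's code and is nowhere near its scale: each "agent" is reduced to a single skill number, matches are decided probabilistically, and snapshots of the learner are periodically frozen into the league so it keeps being tested against older strategies. All names and numbers here are made up purely for illustration.

// league_selfplay_toy.cpp - a schematic illustration of league-style self-play.
// Each "agent" is just a skill score here; in AlphaStar it is a full neural network.
#include <cmath>
#include <cstddef>
#include <cstdio>
#include <random>
#include <vector>

struct Agent {
    double skill = 0.0;
};

int main() {
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> uniform(0.0, 1.0);

    std::vector<Agent> league = { Agent{} };  // frozen past snapshots
    Agent learner;                            // the agent currently training
    const double learningRate = 0.01;

    for (int step = 0; step < 10000; ++step) {
        // Sample an opponent from the whole league, not just the newest snapshot,
        // so the learner cannot forget how to beat older strategies.
        std::uniform_int_distribution<std::size_t> pick(0, league.size() - 1);
        const Agent& opponent = league[pick(rng)];

        // Toy match: the higher-skilled agent wins more often (logistic model).
        double pWin = 1.0 / (1.0 + std::exp(opponent.skill - learner.skill));
        bool won = uniform(rng) < pWin;

        // Toy "reinforcement" update: nudge the learner toward winning play.
        learner.skill += learningRate * ((won ? 1.0 : 0.0) - pWin);

        // Periodically freeze a copy of the learner into the league.
        if (step % 1000 == 999) {
            league.push_back(learner);
        }
    }

    std::printf("final skill %.3f, league size %zu\n", learner.skill, league.size());
    return 0;
}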
MaNa managed to beat AlphaStar in one match

DeepMind also developed a second version of AlphaStar which played more like human players, meaning that it had to choose when and where to move the camera. Two new agents were trained against the AlphaStar league: one that used the raw interface and one that learned to control the camera. "The version of AlphaStar using the camera interface was almost as strong as the raw interface, exceeding 7000 MMR on our internal leaderboard", states the DeepMind team. But the team didn't get the chance to test the AI against a human pro prior to the live stream. In a live exhibition match, MaNa managed to defeat the new version of AlphaStar using the camera interface, which had been trained for only 7 days. "We hope to evaluate a fully trained instance of the camera interface in the near future", says the team. The DeepMind team states that AlphaStar's performance was initially tested against TLO, where it won the match. "I was surprised by how strong the agent was..(it) takes well-known strategies..I hadn't thought of before, which means there may still be new ways of playing the game that we haven't fully explored yet," said TLO. The agents were then trained for an extra week, after which they played against MaNa. AlphaStar again won the game. "I was impressed to see AlphaStar pull off advanced moves and different strategies across almost every game, using a very human style of gameplay I wouldn't have expected..this has put the game in a whole new light for me. We're all excited to see what comes next," said MaNa. Public reaction to the news is very positive, with people congratulating the DeepMind team for AlphaStar's win: https://twitter.com/SebastienBubeck/status/1088524371285557248 https://twitter.com/KaiLashArul/status/1088534443718045696 https://twitter.com/fhuszar/status/1088534423786668042 https://twitter.com/panicsw1tched/status/1088524675540549635 https://twitter.com/Denver_sc2/status/1088525423229759489 To learn more about the strategies developed by AlphaStar, check out the complete set of replays of AlphaStar's matches against TLO and MaNa on DeepMind's website.

Best game engines for Artificial Intelligence game development
Deepmind's AlphaZero shows unprecedented growth in AI, masters 3 different games
Deepmind's AlphaFold is successful in predicting the 3D structure of a protein making major inroads for AI use in healthcare

Minecraft Java team are open sourcing some of Minecraft's code as libraries

Sugandha Lahoti
08 Oct 2018
2 min read
Stockholm's Minecraft Java team is open sourcing some of Minecraft's code as libraries for game developers. Developers can now use them to improve their Minecraft mods, use them in their own projects, or help improve pieces of the Minecraft Java engine. The team will open up different libraries gradually. These libraries are open source and MIT licensed. For now, they have open sourced two libraries: Brigadier and DataFixerUpper.

Brigadier

The first library, Brigadier, takes random strings of text entered into Minecraft and turns them into an actual function that the game will perform. Basically, if you enter something like /give Dinnerbone sticks in the game, it goes internally into Brigadier, which breaks it down into pieces and then tries to figure out what the developer is trying to do with this piece of text (a small, hypothetical sketch of this kind of command parsing is shown after the links below). Nathan Adams, a Java developer, hopes that giving the Minecraft community access to Brigadier can make it "extremely user-friendly one day." Brigadier has been available for a week now. It has already seen improvements in the code and the readme doc.

DataFixerUpper

Another important library of the Minecraft game engine, DataFixerUpper, is also being open sourced. When a developer adds a new feature into Minecraft, they have to change the way level data and save files are stored. DataFixerUpper converts these data formats to the one the game should currently be using. Also in consideration for open sourcing is the Blaze3D library, which is a complete rewrite of the render engine for Minecraft 1.14. You can check out the announcement on the Minecraft website. You can also download Brigadier and DataFixerUpper.

Minecraft is serious about global warming, adds a new (spigot) plugin to allow changes in climate mechanics
Learning with Minecraft Mods
A Brief History of Minecraft Modding
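As promised above, here is a minimal sketch of the general idea of turning a command string into a function call. This is purely illustrative: Brigadier is a Java library with a much richer API (argument types, suggestions, permissions), and none of the names below come from it; they are invented for this example.

// command_dispatch_sketch.cpp - an illustration only; not Brigadier's real API.
#include <functional>
#include <iostream>
#include <map>
#include <sstream>
#include <string>
#include <vector>

int main() {
    // Map a command name to a handler that receives the remaining arguments.
    std::map<std::string,
             std::function<void(const std::vector<std::string>&)>> commands;
    commands["give"] = [](const std::vector<std::string>& args) {
        if (args.size() == 2)
            std::cout << "giving " << args[1] << " to " << args[0] << "\n";
        else
            std::cout << "usage: /give <player> <item>\n";
    };

    std::string input = "/give Dinnerbone sticks";

    // Break the raw text into pieces: the command name first, then its arguments.
    std::istringstream stream(input.substr(1));  // drop the leading '/'
    std::string name;
    stream >> name;
    std::vector<std::string> args;
    for (std::string token; stream >> token;) args.push_back(token);

    // Figure out what the command is asking for and run the matching handler.
    auto it = commands.find(name);
    if (it != commands.end()) it->second(args);
    else std::cout << "unknown command: " << name << "\n";
    return 0;
}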

Unity 2018.2: Unity release for this year 2nd time in a row!

Sugandha Lahoti
12 Jul 2018
4 min read
It has only been two months since the release of Unity 2018.1, and Unity is back with its second release of the year. Unity 2018.2 builds on the features of Unity 2018.1, such as the Scriptable Render Pipeline (SRP), Shader Graph, and the Entity Component System. It also adds support for managed code debugging on iOS and Android, along with the final release of 64-bit (ARM64) support for Android devices. Let us look at the features in detail.

Scriptable Render Pipeline improvements

As mentioned above, Unity 2018.2 builds on the Scriptable Render Pipeline introduced in 2018.1. The new version comes with two additional features:

The SRP batcher: A new Unity engine inner loop for speeding up CPU rendering without compromising GPU performance. It works with the High Definition Render Pipeline (HDRP) and Lightweight Render Pipeline (LWRP), with PC DirectX 11, Metal, and PlayStation 4 currently supported.
Scriptable shader variant stripping: This can manage the number of shader variants generated without affecting iteration time or maintenance complexity, leading to a dramatic reduction in player build time and data size.

Performance optimizations in the Lightweight Render Pipeline and High Definition Render Pipeline

Unity 2018.2 improves the performance of the Lightweight Render Pipeline (LWRP) with optimized tile utilization. This feature adjusts the number of load-and-store operations to tiles in order to optimize the memory use of mobile GPUs. It also shades lights in batches, which reduces overdraw and draw calls. Unity 2018.2 also brings better high-end visual quality to the High Definition Render Pipeline (HDRP). Improvements include volumetrics, glossy planar reflection, geometric specular AA, Proxy Screen Space Reflection & Refraction, mesh decals, and Shadow Mask.

Improvements in the C# Job System, Entity Component System, and Burst compiler

Unity 2018.2 introduces new reactive system samples in the Entity Component System (ECS) to let developers respond when there are changes to component state and emulate event-driven behavior. Burst compiling for ECS is now available on all editor platforms (Windows, Mac, Linux), and game developers will be able to build AOT for standalone players (Desktop, PS4, Xbox, iOS, and Android). The C# Job System allows developers to take full advantage of the multicore processors currently available and write parallel code without having to manage threads directly.

Updates to Shader Graph

Shader Graph, introduced as a preview package in Unity 2018.1, allows developers to build shaders visually. This release adds further improvements such as High Definition Render Pipeline (HDRP) support, manual modification of vertex position, editing of the Reference name for a property, editable paths for graphs, Texture 2D and 3D arrays, and more.

Texture Mipmap Streaming

Game developers can now stream texture mipmaps into memory on demand to reduce the texture memory requirements of a Unity application. This feature speeds up initial load time, gives developers more control, and is simple to enable and manage.

Particle System improvements

Unity 2018.2 brings seven major improvements to the Particle System:

Support for eight UVs, to use more custom data.
MinMaxCurve and MinMaxGradient types in custom scripts to match the style used by the Particle System UI.
Particle Systems now convert colors into linear space, when appropriate, before uploading them to the GPU.
Two new modes in the Shape module to emit from a sprite or SpriteRenderer component.
Two new APIs for baking the geometry of a Particle System into a mesh.
Show Only Selected (aka Solo Mode) alongside the Play/Restart/Stop controls.
Shaders that use separate alpha textures can now be used with particles, while using sprites in the Texture Sheet Animation module.

Unity Hub

Unity Hub (v1.0) is a new tool, to be released soon, designed to streamline the onboarding and setup process for all users. It is a centralized location to manage all Unity projects, simplifying how developers find, download, and manage Unity Editor licenses and add-on components. Hub 1.0 will ship with:

Project templates
Custom install location
Added Asset Store packages to new projects
Modified project build target
Editor: Added components post-installation

There are additional features like Vulkan support for the Editor on Windows and Linux, and improvements to the Progressive Lightmapper, 2D games, the SVG importer, and more. Unity 2018.2 will also support .java and .cpp source files as plugins in a Unity project, along with updates to Cinematics and the Unity core engine. In total, there are 183 improvements and 1426 fixes in the Unity 2018.2 release. Refer to the release notes to view the full list of new features, improvements, and fixes.

Put your game face on! Unity 2018.1 is now available
Unity plugins for augmented reality application development
Unity 2D & 3D game kits simplify Unity game development for beginners

Implementing Unity 2017 Game Audio [Tutorial]

Amarabha Banerjee
11 Jul 2018
11 min read
Background music and audio effects play a big role in determining any game's success or failure. Creating engaging game audio, importing audio from other sources, and working with and customizing audio FX clips to match the game flow are vital tasks for any game developer. In this article, we are going to discuss how to create, customize, and use third-party audio in Unity games. This article is a part of the book titled Unity 2017 2D Game Development Projects written by Lauren S. Ferro & Francesco Sapio.

Basics of audio and sound FX in Unity

Adding sound in Unity is simple enough, but you can implement it better if you understand how sound travels. While this is extremely important in 3D games because of the added third dimension, it is quite important in 2D games as well, just in a slightly different way. Before we discuss the differences, let's first learn how sound works with a quick physics lesson.

Listening to the physics behind sound

What we hear is not just music, sound effects (FX), and ambient background noise. Sound is a longitudinal, mechanical (vibrating) wave. These "waves" can pass through different mediums (for example, air, water, your desk) but not through a vacuum. Therefore, no one will hear your screams in space. Sound is a variation in pressure. A region of increased pressure on a sound wave is called a compression (or condensation). A region of decreased pressure on a sound wave is called a rarefaction (or dilation). You can see this concept illustrated in the following image: The density of certain materials, such as glass and plastic, allows a certain amount of light to pass through them and influences how that light behaves, such as bending/refracting (that is, the index of refraction). Various materials (for example, liquids, solids, gases) have the same kind of effect when it comes to allowing sound waves to pass. Some materials allow sound to pass easily, while others dampen it. This is why sound studios/booths are made of certain materials, to remove things such as echoes. In a similar way, screaming underwater that there is a shark won't be as loud as screaming from your kitchen to tell everyone dinner is ready. Another thing to consider is what is known as the Doppler Effect. The Doppler Effect results from an increase (or decrease) in the frequency of sound (and other things, such as light or ripples in water) as the source of the sound and the person/player move toward (or away from) each other. A simple example of this is when an emergency vehicle passes by you. You will notice that the sound of the siren is different before it reaches you, when it is near you, and once it has passed you. This is because there is a sudden change in pitch in the passing siren. This is visualized in the following image (the standard formula behind the effect is sketched in the short code snippet below): So, what is the point of knowing this when it comes to developing games? Well, it is particularly important, more so in 3D, in relation to how sounds are heard by players. For example, imagine that you're nearing a creek, but there are dense bushes, large pine trees, and rugged terrain. The sound that the creek makes from where a player is in the game world is going to sound very different than if it were a completely flat plane free from any vegetation.
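As mentioned above, the classic textbook relationship behind the Doppler Effect can be written as a tiny helper function. This is general physics rather than anything Unity-specific (Unity's audio engine applies its own Doppler calculation, controlled via the Audio Source's Doppler settings), and the function name and example numbers below are purely illustrative.

// doppler_sketch.cpp - the standard Doppler formula, shown for illustration only.
#include <cstdio>

// Speeds are measured toward the other party; use negative values when moving away.
double DopplerShiftedFrequency(double sourceFrequencyHz,
                               double observerSpeedTowardSource,
                               double sourceSpeedTowardObserver,
                               double speedOfSound = 343.0) {
    return sourceFrequencyHz * (speedOfSound + observerSpeedTowardSource) /
                               (speedOfSound - sourceSpeedTowardObserver);
}

int main() {
    // A 440 Hz siren approaching a stationary listener at 20 m/s is heard higher,
    // at roughly 467 Hz; once it is moving away, the perceived pitch drops.
    std::printf("approaching: %.1f Hz\n", DopplerShiftedFrequency(440.0, 0.0, 20.0));
    std::printf("receding:    %.1f Hz\n", DopplerShiftedFrequency(440.0, 0.0, -20.0));
    return 0;
}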
When it comes to 2D games, these propagation effects are not as critical because we are working without depth (the z-axis), but similar principles apply when players are navigating around a top-down environment and they are near a point of interest. You don't want that sound to be as loud when the player is far away as it would be if they were up close. Within the context of 2D and 3D sounds, Unity has a parameter for this exact thing called Spatial Blend. We will discuss this more in the Audio Source section. There are several ways that you can create audio within Unity, from importing your own/downloaded sounds to recording it live. Like images, Unity can import most standard audio file formats: AIFF, WAV, MP3, and Ogg, as well as tracker modules (for example, short instrument samples): .xm, .mod, .it, and .s3m.

Importing audio

Importing audio into Unity follows the same process as importing any other type of asset. We will cover the basics of what you need to know in the following sections.

Audio Listener

Have you heard the saying, "If a tree falls in a forest and no one is there to hear it, does it still make a sound?" Well, in Unity, if there is nothing to hear your audio, then the answer is no. This is because Unity has a component called an Audio Listener, which works like a microphone. To locate the Audio Listener, click the Main Camera, and then look over at the Inspector; it should be located near the bottom, like in the following image: If, for some reason, it isn't there, you can always add it by clicking the button titled Add Component, typing Audio Listener, and selecting it (clicking it) from the list, like in the following image: The important thing to remember is that the Audio Listener is the point at which sound is heard, so it makes sense why it is typically placed on the Main Camera, but it can also be placed on a Player. A single scene can only have one Audio Listener; therefore, it's best to experiment to find the placement that works best for your game. It is important to remember that an Audio Listener works with an Audio Source and must have one to work.

Audio Source

The Audio Source is where the sound comes from. This can be many different objects within a Scene, as well as background music and sound FX. The Audio Source has several parameters; later, we will briefly discuss the main ones. To see more information about all the parameters, you can check out the official Unity documentation by visiting the link or scanning the QR code: https://docs.unity3d.com/2017.2/Documentation/Manual/class-AudioSource.html You may be wondering why we have a slider for Spatial Blend instead of a checkbox. This is because we need to fade between 2D and 3D, and there is a good reason for this. Imagine that you're in a game and you're looking at a screen on a computer. In this case, your camera is going to be fixated on whatever is on the screen. This could be checking an inventory or even entering nuclear codes. In any case, you will want the sound that is being emitted from the screen to be the focal audio. Therefore, the slider in the Spatial Blend parameter is going to be closer to 2D. This is because you may still want ambient noises that are in the background incorporated into the experience. So, the closer you are to 2D, the more the sound will be the same in both speakers (or headphones). The closer you slide toward 3D, the more the volume will depend on the proximity of the Audio Listener to the Audio Source.
It will also allow things such as the Doppler Effect to be more noticeable, as it takes place in 3D space. There are also specific settings for these things.

Choosing sounds for background and FX

When it comes to picking the right kind of music for your game, just like the aesthetics, you need to think about what kind of "mood" you're trying to create. Is it a somber or uplifting kind of mood? Are you ironically contrasting the graphics (for example, happy) with gloomy music? There is really no right or wrong when it comes to your musical selection, as long as you can communicate to the player what they are supposed to feel, at least in general. For this game, I have provided you with some example "moods" that you can apply. Of course, you're welcome to choose sounds other than these that are more to your liking! All the sounds that we will use are from the Free Sound website: https://freesound.org. You will need to create an account to download them, but it's free and there are many great sounds that you can use when creating games. In saying this, if you're intending to create your games for commercial purposes, please make sure that you check the Terms and Conditions on Free Sound so that you're not violating any of them. Each track will have its own attribution license, including those for commercial use, so always check! For this project, we're going to stick with the "Happy" version, but I encourage you to experiment!

Happy
Collecting Angel Cakes: Chime sound (https://freesound.org/people/jgreer/sounds/333629/)
Being attacked by the enemy: Cat Purr/Twit4.wav (https://freesound.org/people/steffcaffrey/sounds/262309/)
Collecting health: correct (https://freesound.org/people/ertfelda/sounds/243701/)
Collecting bonuses: Signal-Ring 1 (https://freesound.org/people/Vendarro/sounds/399315/)
Background: Kirmes_Orgel_004_2_Rosamunde.mp3 (https://freesound.org/people/bilwiss/sounds/24720/)

Sad
Collecting Angel Cakes: Glass Tap (https://freesound.org/people/Unicornaphobist/sounds/262958/)
Being attacked by the enemy: musicbox1.wav (https://freesound.org/people/sandocho/sounds/17700/)
Collecting health: chime.wav (https://freesound.org/people/Psykoosiossi/sounds/398661/)
Collecting bonuses: short metallic hit (https://freesound.org/people/waveplay/sounds/366400/)
Background: improvised chill 8 (https://freesound.org/people/waveplay/sounds/238529/)

Retro
Collecting Angel Cakes: TF_Buzz.flac (https://freesound.org/people/copyc4t/sounds/235652/)
Being attacked by the enemy: Game Die (https://freesound.org/people/josepharaoh99/sounds/364929/)
Collecting health: galanghee.wav (https://freesound.org/people/metamorphmuses/sounds/91387/)
Collecting bonuses: SW05.WAV (https://freesound.org/people/mad-monkey/sounds/66684/)
Background: Angel-techno pop music loop (https://freesound.org/people/frankum/sounds/387410/)

Not everyone can hear well, or at all, so it pays to keep this in mind when you're developing games that may rely on audio to provide feedback to players. While subtitles can make dialogue more accessible, sound FX can be a little trickier. Therefore, when it comes to implementing audio, think about how you could complement it, even if the effect that you're trying to achieve with sound is subtle. For example, if you play a "bleep" for every item collected, perhaps you could associate it with a slight glow or flash of color. The choice is up to you, but it's something to keep in mind.
On the other end of the spectrum, those who can hear might also want to turn the sounds off. We've all played that game (or several) that really begins to become irritating, so make sure that you also check this while you're playtesting. You don't want an awesome game to suck because your audio is intolerable and there is no option to TURN THE SOUND OFF! You've been warned.

Integrating background music in our game

Once you choose which music better suits the kind of feel you want to create for your game, import both the sounds and the music into the project. If you want, you can create two folders for them, SoundFX and Music, respectively. Now, in our scene, we need to do the following:

Create an empty game object (by clicking GameObject | Create Empty) and rename it Background Music.
Attach an Audio Source component (in the Inspector, click Add Component | Audio | Audio Source).
Drag and drop the music we decided on/downloaded into the AudioClip variable and check the Loop option, so the background music will never stop. Also, check that Play on Awake is ticked, even though it should be by default, so the music will start playing as soon as the game starts.
Hit Play to start the game.
Lastly, adjust the volume, depending on the music you chose. This may require a bit of playtesting (remember to set the value after play mode, because the settings you adjust during play mode are not kept).

In the end, this is how the component should look (in the image, I chose the happy theme music and set a Volume of 0.1): Here in this article, we have shown you how to incorporate audio effects and background music in Unity games. If you liked this article, then check out the complete book Unity 2017 2D Game Development Projects.

AI for Unity game developers: How to emulate real-world senses in your NPC agent
Working with Unity Variables to script powerful Unity 2017 games
How to use arrays, lists, and dictionaries in Unity for 3D game development

Working with shaders in C++ to create 3D games

Amarabha Banerjee
15 Jun 2018
28 min read
A shader is a computer program that is used to do image processing such as special effects, color effects, lighting, and, well, shading. The position, brightness, contrast, hue, and other effects on all pixels, vertices, or textures used to produce the final image on the screen can be altered during runtime, using algorithms constructed in the shader program(s). These days, most shader programs are built to run directly on the Graphics Processing Unit (GPU). In this article, we are going to get acquainted with shaders and implement our own shader infrastructure for the example engine. Shader programs are executed in parallel. This means, for example, that a shader might be executed once per pixel, with each of the executions running simultaneously on different threads on the GPU. The number of simultaneous threads depends on the graphics card's specific GPU, with modern cards sporting processors in the thousands. This all means that shader programs can be very performant and provide developers with lots of creative flexibility. The following article is a part of the book Mastering C++ Game Development written by Mickey Macdonald. With this book, you can create advanced games with C++.

Shader languages

With advances in graphics card technology, more flexibility has been added to the rendering pipeline. Where at one time developers had little control over concepts such as fixed-function pipeline rendering, new advancements have allowed programmers to take deeper control of graphics hardware for rendering their creations. Originally, this deeper control was achieved by writing shaders in assembly language, which was a complex and cumbersome task. It wasn't long before developers yearned for a better solution. Enter the shader programming languages. Let's take a brief look at a few of the more common languages in use. C for graphics (Cg) is a shading language originally developed by the Nvidia graphics company. Cg is based on the C programming language and, although they share the same syntax, some features of C were modified and new data types were added to make Cg more suitable for programming GPUs. Cg compilers can output shader programs supported by both DirectX and OpenGL. While Cg was mostly deprecated, it has seen a resurgence in a new form with its use in the Unity game engine. High-Level Shading Language (HLSL) is a shading language developed by the Microsoft Corporation for use with the DirectX graphics API. HLSL is again modeled after the C programming language and shares many similarities with the Cg shading language. HLSL is still in development and continues to be the shading language of choice for DirectX. Since the release of DirectX 12, the HLSL language supports even lower-level hardware control and has seen dramatic performance improvements. OpenGL Shading Language (GLSL) is a shading language that is also based on the C programming language. It was created by the OpenGL Architecture Review Board (OpenGL ARB) to give developers more direct control of the graphics pipeline without having to use ARB assembly language or other hardware-specific languages. The language is still in open development and will be the language we focus on in our examples.

Building a shader program infrastructure

Most modern shader programs are composed of up to five different types of shader files: fragment or pixel shaders, vertex shaders, geometry shaders, compute shaders, and tessellation shaders.
When building a shader program, each of these shader files must be compiled and linked together for use, much like how a C++ program is compiled and linked. Next, we are going to walk you through how this process works and see how we can build an infrastructure to allow for easier interaction with our shader programs. To get started, let's look at how we compile a GLSL shader. The GLSL compiler is part of the OpenGL library itself, and our shaders can be compiled within an OpenGL program. We are going to build an architecture to support this internal compilation. The whole process of compiling a shader can be broken down into a few simple steps: first, we create a shader object, then we provide the source code to the shader object, and finally we ask for the shader object to be compiled. These steps can be represented in the following three basic calls to the OpenGL API.

First, we create the shader object:

GLuint vertexShader = glCreateShader(GL_VERTEX_SHADER);

We create the shader object using the glCreateShader() function. The argument we pass in is the type of shader we are trying to create. The types of shaders can be GL_VERTEX_SHADER, GL_FRAGMENT_SHADER, GL_GEOMETRY_SHADER, GL_TESS_EVALUATION_SHADER, GL_TESS_CONTROL_SHADER, or GL_COMPUTE_SHADER. In our example case, we are trying to compile a vertex shader, so we use the GL_VERTEX_SHADER type.

Next, we copy the shader source code into the shader object:

GLchar* shaderCode = LoadShader("shaders/simple.vert");
glShaderSource(vertexShader, 1, &shaderCode, NULL);

Here we are using the glShaderSource() function to load our shader source to memory. This function accepts an array of strings, so before we call glShaderSource(), we create a pointer to the start of the shaderCode array object using a still-to-be-created method (a possible sketch of such a helper is shown at the end of this walkthrough). The first argument to glShaderSource() is the handle to the shader object. The second is the number of source code strings that are contained in the array. The third argument is a pointer to an array of source code strings. The final argument is an array of GLint values that contains the length of each source code string in the previous argument.

Finally, we compile the shader:

glCompileShader(vertexShader);

The last step is to compile the shader. We do this by calling the OpenGL API method glCompileShader() and passing the handle to the shader that we want compiled. Of course, because we are using memory to store the shaders, we should know how to clean up when we are done. To delete a shader object, we can call the glDeleteShader() function.

Deleting a Shader Object: Shader objects can be deleted when no longer needed by calling glDeleteShader(). This frees the memory used by the shader object. It should be noted that if a shader object is already attached to a program object, as in linked to a shader program, it will not be immediately deleted, but rather flagged for deletion. If the object is flagged for deletion, it will be deleted when it is detached from the linked shader program object.

Once we have compiled our shaders, the next step we need to take before we can use them in our program is to link them together into a complete shader program. One of the core aspects of the linking step involves making the connections between the input variables of one shader and the output variables of another, and making the connections between the input/output variables of a shader and the appropriate locations in the OpenGL program itself. Linking is much like compiling the shader.
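Before we move on to linking, here is one possible shape for the file-loading helper referenced above. This is only a sketch under my own assumptions (the book's actual LoadShader() implementation is not shown here, and this version returns a std::string rather than a raw GLchar*), but it illustrates the idea of reading a shader file into memory before handing it to glShaderSource():

#include <fstream>
#include <sstream>
#include <stdexcept>
#include <string>

// Read an entire shader source file into a string; throws if the file is missing.
std::string LoadShaderSource(const std::string& filePath) {
    std::ifstream file(filePath);
    if (!file) {
        throw std::runtime_error("Failed to open shader file: " + filePath);
    }
    std::ostringstream contents;
    contents << file.rdbuf();   // pull the whole file in one go
    return contents.str();
}

// Usage sketch:
//   std::string source = LoadShaderSource("shaders/simple.vert");
//   const GLchar* codePtr = source.c_str();
//   glShaderSource(vertexShader, 1, &codePtr, nullptr);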
We create a new shader program and attach each shader object to it. We then tell the shader program object to link everything together. The steps to accomplish this in the OpenGL environment can be broken down into a few calls to the API, as follows.

First, we create the shader program object:

GLuint shaderProgram = glCreateProgram();

To start, we call the glCreateProgram() method to create an empty program object. This function returns a handle to the shader program object which, in this example, we are storing in a variable named shaderProgram.

Next, we attach the shaders to the program object:

glAttachShader(shaderProgram, vertexShader);
glAttachShader(shaderProgram, fragmentShader);

To load each of the shaders into the shader program, we use the glAttachShader() method. This method takes two arguments. The first argument is the handle to the shader program object, and the second is the handle to the shader object to be attached to the shader program.

Finally, we link the program:

glLinkProgram(shaderProgram);

When we are ready to link the shaders together, we call the glLinkProgram() method. This method has only one argument: the handle to the shader program we want to link.

It's important that we remember to clean up any shader programs that we are not using anymore. To remove a shader program from the OpenGL memory, we call the glDeleteProgram() method. The glDeleteProgram() method takes one argument: the handle to the shader program that is to be deleted. This method call invalidates the handle and frees the memory used by the shader program. It is important to note that if the shader program object is currently in use, it will not be immediately deleted, but rather flagged for deletion. This is similar to the deletion of shader objects. It is also important to note that the deletion of a shader program will detach any shader objects that were attached to it at linking time. This does not, however, mean the shader objects will be deleted immediately, unless those shader objects have already been flagged for deletion by a previous call to the glDeleteShader() method.

So those are the simplified OpenGL API calls required to create, compile, and link shader programs. Now we are going to move on to implementing some structure to make the whole process much easier to work with. To do this, we are going to create a new class called ShaderManager. This class will act as the interface for compiling, linking, and managing the cleanup of shader programs. To start with, let's look at the implementation of the CompileShaders() method in the ShaderManager.cpp file. I should note that I will be focusing on the important aspects of the code that pertain to the implementation of the architecture. The full source code for this chapter can be found in the Chapter07 folder in the GitHub repository.

void ShaderManager::CompileShaders(const std::string& vertexShaderFilePath, const std::string& fragmentShaderFilepath)
{
  m_programID = glCreateProgram();
  m_vertexShaderID = glCreateShader(GL_VERTEX_SHADER);
  if (m_vertexShaderID == 0){
    Exception("Vertex shader failed to be created!");
  }
  m_fragmentShaderID = glCreateShader(GL_FRAGMENT_SHADER);
  if (m_fragmentShaderID == 0){
    Exception("Fragment shader failed to be created!");
  }
  CompileShader(vertexShaderFilePath, m_vertexShaderID);
  CompileShader(fragmentShaderFilepath, m_fragmentShaderID);
}

To begin, for this example we are focusing on two of the shader types, so our ShaderManager::CompileShaders() method accepts two arguments.
The first argument is the file path location of the vertex shader file, and the second is the file path location of the fragment shader file. Both are strings. Inside the method body, we first create the shader program handle using the glCreateProgram() method and store it in the m_programID variable. Next, we create the handles for the vertex and fragment shaders using the glCreateShader() command. We check for any errors when creating the shader handles, and if we find any, we throw an exception with the name of the shader that failed. Once the handles have been created, we then call the CompileShader() method, which we will look at next. The CompileShader() function takes two arguments: the first is the path to the shader file, and the second is the handle in which the compiled shader will be stored.

The following is the full CompileShader() function. It handles the loading of the shader file from storage, as well as calling the OpenGL compile command on the shader file. We will break it down chunk by chunk:

void ShaderManager::CompileShader(const std::string& filePath, GLuint id)
{
  std::ifstream shaderFile(filePath);
  if (shaderFile.fail()){
    perror(filePath.c_str());
    Exception("Failed to open " + filePath);
  }
  //fileContents stores all the text in the file
  std::string fileContents = "";
  //line is used to grab each line of the file
  std::string line;
  //Get all the lines in the file and add them to the contents
  while (std::getline(shaderFile, line)){
    fileContents += line + "\n";
  }
  shaderFile.close();
  //get a pointer to our file contents c string
  const char* contentsPtr = fileContents.c_str();
  //tell OpenGL that we want to use fileContents as the contents of the shader file
  glShaderSource(id, 1, &contentsPtr, nullptr);
  //compile the shader
  glCompileShader(id);
  //check for errors
  GLint success = 0;
  glGetShaderiv(id, GL_COMPILE_STATUS, &success);
  if (success == GL_FALSE){
    GLint maxLength = 0;
    glGetShaderiv(id, GL_INFO_LOG_LENGTH, &maxLength);
    //The maxLength includes the NULL character
    std::vector<char> errorLog(maxLength);
    glGetShaderInfoLog(id, maxLength, &maxLength, &errorLog[0]);
    //Provide the info log in whatever manner you deem best.
    //Exit with failure.
    glDeleteShader(id); //Don't leak the shader.
    //Print the error log and quit
    std::printf("%s\n", &(errorLog[0]));
    Exception("Shader " + filePath + " failed to compile");
  }
}

To start the function, we first use an ifstream object to open the file containing the shader code. We also check to see whether there were any issues loading the file and, if there were, we throw an exception notifying us that the file failed to open:

std::ifstream shaderFile(filePath);
if (shaderFile.fail()) {
  perror(filePath.c_str());
  Exception("Failed to open " + filePath);
}

Next, we need to parse the shader. To do this, we create a string variable called fileContents that will hold the text in the shader file. We then create another string variable named line; this will be a temporary holder for each line of the shader file we are trying to parse. Next, we use a while loop to step through the shader file, parsing the contents line by line and appending each line to the fileContents string.
Once all the lines have been read into the holder variable, we call the close method on the shaderFile ifstream object to free up the memory used to read the file:

std::string fileContents = "";
std::string line;
while (std::getline(shaderFile, line)) {
  fileContents += line + "\n";
}
shaderFile.close();

You might remember from earlier in the article that I mentioned that when we are using the glShaderSource() function, we have to pass the shader file text as a pointer to the start of a character array. In order to meet this requirement, we are going to use a neat trick where we use the C string conversion method built into the string class to allow us to pass back a pointer to the start of our shader character array. This, in case you are unfamiliar, is essentially what a string is:

const char* contentsPtr = fileContents.c_str();

Now that we have a pointer to the shader text, we can call the glShaderSource() method to tell OpenGL that we want to use the contents of the file to compile our shader. Then, finally, we call the glCompileShader() method with the handle to the shader as the argument:

glShaderSource(id, 1, &contentsPtr, nullptr);
glCompileShader(id);

That handles the compilation, but it is a good idea to provide ourselves with some debug support. We implement this compilation debug support by closing out the CompileShader() function with a check to see whether there were any errors during the compilation process. We do this by requesting information from the shader compiler through the glGetShaderiv() function, which, among its arguments, takes an enumerated value that specifies what information we would like returned. In this call, we are requesting the compile status:

GLint success = 0;
glGetShaderiv(id, GL_COMPILE_STATUS, &success);

Next, we check to see if the returned value is GL_FALSE, and if it is, that means we have had an error and should ask the compiler for more information about the compile issues. We do this by first asking the compiler what the max length of the error log is. We use this max length value to then create a vector of character values called errorLog. Then we can request the shader compile log by using the glGetShaderInfoLog() method, passing in the handle to the shader, the number of characters we are pulling, and where we want to save the log:

if (success == GL_FALSE){
  GLint maxLength = 0;
  glGetShaderiv(id, GL_INFO_LOG_LENGTH, &maxLength);
  std::vector<char> errorLog(maxLength);
  glGetShaderInfoLog(id, maxLength, &maxLength, &errorLog[0]);

Once we have the log saved, we go ahead and delete the shader using the glDeleteShader() method. This ensures we don't have any memory leaks from our shader:

glDeleteShader(id);

Finally, we print the error log to the console window, which is great for runtime debugging, and we also throw an exception with the shader name/file path and the message that it failed to compile:

std::printf("%s\n", &(errorLog[0]));
Exception("Shader " + filePath + " failed to compile");
}
...

That really simplifies the process of compiling our shaders by providing a simple interface to the underlying API calls. Now, in our example program, to load and compile our shaders we use a single line of code similar to the following:

shaderManager.CompileShaders("Shaders/SimpleShader.vert", "Shaders/SimpleShader.frag");

Having now compiled the shaders, we are halfway to a usable shader program. We still need to add one more piece: linking.
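One quick aside before we move on to linking: the Exception() call that appears throughout these snippets is the example engine's own error-reporting helper, and its definition is not included in this excerpt. A minimal stand-in, assuming it simply forwards the message as a C++ exception, could look like the following; the engine's real version may log more context:

#include <stdexcept>
#include <string>

//Hypothetical stand-in for the engine's Exception() helper used in the snippets above;
//it simply wraps the message in a std::runtime_error and throws it to the caller.
inline void Exception(const std::string& message)
{
  throw std::runtime_error(message);
}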
To abstract away some of the process of linking the shaders and to provide us with some debugging capabilities, we are going to create the LinkShaders() method for our ShaderManager class. Let's take a look and then break it down:

void ShaderManager::LinkShaders()
{
  //Attach our shaders to our program
  glAttachShader(m_programID, m_vertexShaderID);
  glAttachShader(m_programID, m_fragmentShaderID);
  //Link our program
  glLinkProgram(m_programID);
  //Note the different functions here: glGetProgram* instead of glGetShader*.
  GLint isLinked = 0;
  glGetProgramiv(m_programID, GL_LINK_STATUS, (int *)&isLinked);
  if (isLinked == GL_FALSE){
    GLint maxLength = 0;
    glGetProgramiv(m_programID, GL_INFO_LOG_LENGTH, &maxLength);
    //The maxLength includes the NULL character
    std::vector<char> errorLog(maxLength);
    glGetProgramInfoLog(m_programID, maxLength, &maxLength, &errorLog[0]);
    //We don't need the program anymore.
    glDeleteProgram(m_programID);
    //Don't leak shaders either.
    glDeleteShader(m_vertexShaderID);
    glDeleteShader(m_fragmentShaderID);
    //Print the error log and quit
    std::printf("%s\n", &(errorLog[0]));
    Exception("Shaders failed to link!");
  }
  //Always detach shaders after a successful link.
  glDetachShader(m_programID, m_vertexShaderID);
  glDetachShader(m_programID, m_fragmentShaderID);
  glDeleteShader(m_vertexShaderID);
  glDeleteShader(m_fragmentShaderID);
}

To start our LinkShaders() function, we call the glAttachShader() method twice, using the handle to the previously created shader program object and the handle to each shader we wish to link, respectively:

glAttachShader(m_programID, m_vertexShaderID);
glAttachShader(m_programID, m_fragmentShaderID);

Next, we perform the actual linking of the shaders into a usable shader program by calling the glLinkProgram() method, using the handle to the program object as its argument:

glLinkProgram(m_programID);

We can then check to see whether the linking process has completed without any errors and provide ourselves with any debug information that we might need if there were errors. I am not going to go through this code chunk line by line since it is nearly identical to what we did in the CompileShader() function. Do note, however, that the function to return the information from the linker is slightly different and uses the glGetProgram* functions instead of the glGetShader* functions from before:

GLint isLinked = 0;
glGetProgramiv(m_programID, GL_LINK_STATUS, (int *)&isLinked);
if (isLinked == GL_FALSE){
  GLint maxLength = 0;
  glGetProgramiv(m_programID, GL_INFO_LOG_LENGTH, &maxLength);
  //The maxLength includes the NULL character
  std::vector<char> errorLog(maxLength);
  glGetProgramInfoLog(m_programID, maxLength, &maxLength, &errorLog[0]);
  //We don't need the program anymore.
  glDeleteProgram(m_programID);
  //Don't leak shaders either.
  glDeleteShader(m_vertexShaderID);
  glDeleteShader(m_fragmentShaderID);
  //Print the error log and quit
  std::printf("%s\n", &(errorLog[0]));
  Exception("Shaders failed to link!");
}

Lastly, if we are successful in the linking process, we need to clean up a bit. First, we detach the shaders from the linker using the glDetachShader() method. Next, since we have a completed shader program, we no longer need to keep the shaders in memory, so we delete each shader with a call to the glDeleteShader() method.
Again, this ensures we do not leak any memory in our shader program creation process:

glDetachShader(m_programID, m_vertexShaderID);
glDetachShader(m_programID, m_fragmentShaderID);
glDeleteShader(m_vertexShaderID);
glDeleteShader(m_fragmentShaderID);
}

We now have a simplified way of linking our shaders into a working shader program. We can call this interface to the underlying API calls by simply using one line of code, similar to the following one:

shaderManager.LinkShaders();

So that handles the process of compiling and linking our shaders, but there is another key aspect of working with shaders: passing data to and from the running program/game and the shader programs running on the GPU. We will look at this process and how we can abstract it into an easy-to-use interface for our engine next.

Working with shader data

One of the most important aspects of working with shaders is the ability to pass data to and from the shader programs running on the GPU. This can be a deep topic, and much like other topics in this book, it has had entire books dedicated to it. We are going to stay at a higher level when discussing this topic and again will focus on the two shader types needed for basic rendering: the vertex and fragment shaders.

To begin with, let's take a look at how we send data to a shader using vertex attributes and Vertex Buffer Objects (VBOs). A vertex shader has the job of processing the data that is connected to a vertex, doing any modifications, and then passing it to the next stage of the rendering pipeline. This occurs once per vertex. In order for the shader to do its thing, we need to be able to pass it data. To do this, we use what are called vertex attributes, and they usually work hand in hand with what are referred to as VBOs. For the vertex shader, all per-vertex input attributes are defined using the keyword in. So, for example, if we wanted to define a vec3 input attribute named VertexColour, we could write something like the following:

in vec3 VertexColour;

Now, the data for the VertexColour attribute has to be supplied by the program/game. This is where VBOs come in. In our main game or program, we make the connection between the input attribute and the vertex buffer object, and we also have to define how to parse or step through the data. That way, when we render, OpenGL can pull data for the attribute from the buffer for each call of the vertex shader. Let's take a look at a very simple vertex shader:

#version 410
in vec3 VertexPosition;
in vec3 VertexColour;
out vec3 Colour;
void main(){
  Colour = VertexColour;
  gl_Position = vec4(VertexPosition, 1.0);
}

In this example, there are just two input variables for this vertex shader, VertexPosition and VertexColour. Our main OpenGL program needs to supply the data for these two attributes for each vertex. We will do so by mapping our polygon/mesh data to these variables. We also have one output variable named Colour, which will be sent to the next stage of the rendering pipeline, the fragment shader. In this example, Colour is just an untouched copy of VertexColour. The VertexPosition attribute is simply expanded to a vec4 and passed along to the OpenGL output variable gl_Position for further processing.

Next, let's take a look at a very simple fragment shader:

#version 410
in vec3 Colour;
out vec4 FragColour;
void main(){
  FragColour = vec4(Colour, 1.0);
}

In this fragment shader example, there is only one input attribute, Colour.
This input corresponds to the output of the previous rendering stage, the vertex shader's Colour output. For simplicity's sake, we are just expanding Colour and outputting it as the variable FragColour for the next rendering stage.

That sums up the shader side of the connection, so how do we compose and send the data from inside our engine? We can accomplish this in basically four steps.

First, we create a Vertex Array Object (VAO) instance to hold our data:

GLuint vao;

Next, we create and populate the VBOs for each of the shader's input attributes. We do this by first creating a VBO variable, then, using the glGenBuffers() method, we generate the memory for the buffer objects. We then create handles to the different attributes we need buffers for, assigning them to elements in the VBO array. Finally, we populate the buffers for each attribute by first calling the glBindBuffer() method, specifying the type of object being stored. In this case, it is a GL_ARRAY_BUFFER for both attributes. Then we call the glBufferData() method, passing the type, size, and handle to bind. The last argument for the glBufferData() method is one that gives OpenGL a hint about how the data will be used so that it can determine how best to manage the buffer internally. For full details about this argument, take a look at the OpenGL documentation. (The positionData and colorData arrays themselves are not shown in this excerpt; a sketch of what they might contain appears after the fourth step below.)

GLuint vbo[2];
glGenBuffers(2, vbo);
GLuint positionBufferHandle = vbo[0];
GLuint colorBufferHandle = vbo[1];
glBindBuffer(GL_ARRAY_BUFFER, positionBufferHandle);
glBufferData(GL_ARRAY_BUFFER, 9 * sizeof(float), positionData, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, colorBufferHandle);
glBufferData(GL_ARRAY_BUFFER, 9 * sizeof(float), colorData, GL_STATIC_DRAW);

The third step is to create and define the VAO. This is how we define the relationship between the input attributes of the shader and the buffers we just created; the VAO contains this information about the connections. To create a VAO, we use the glGenVertexArrays() method. This gives us a handle to our new object, which we store in our previously created vao variable. Then, we enable the generic vertex attribute indexes 0 and 1 by calling the glEnableVertexAttribArray() method. By making the call to enable the attributes, we are specifying that they will be accessed and used for rendering. The last step makes the connection between the buffer objects we have created and the generic vertex attribute indexes they map to:

glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glBindBuffer(GL_ARRAY_BUFFER, positionBufferHandle);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, NULL);
glBindBuffer(GL_ARRAY_BUFFER, colorBufferHandle);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, NULL);

Finally, in our Draw() function call, we bind to the VAO and call glDrawArrays() to perform the actual render:

glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLES, 0, 3);
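As noted above, the positionData and colorData arrays referenced in the second step are not part of this excerpt. For a single triangle, nine floats per attribute (three vertices with three components each, matching the 9 * sizeof(float) upload size) is all that is required; the values in this sketch are purely illustrative and the book's actual sample data may differ:

//Hypothetical vertex data for a single triangle; values are illustrative only.
float positionData[] = {
  -0.8f, -0.8f, 0.0f,   //bottom-left vertex position
   0.8f, -0.8f, 0.0f,   //bottom-right vertex position
   0.0f,  0.8f, 0.0f    //top vertex position
};
float colorData[] = {
  1.0f, 0.0f, 0.0f,     //red for the first vertex
  0.0f, 1.0f, 0.0f,     //green for the second vertex
  0.0f, 0.0f, 1.0f      //blue for the third vertex
};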
Before we move on to another way to pass data to the shader, there is one more piece of this attribute connection structure we need to discuss. As mentioned, the input variables in a shader are linked to the generic vertex attributes we just saw at the time of linking. When we need to specify this relationship structure, we have a few different choices. We can use what are known as layout qualifiers within the shader code itself. The following is an example:

layout (location=0) in vec3 VertexPosition;

Another choice is to just let the linker create the mapping when linking and then query for it afterward. The third, and the one I personally prefer, is to specify the relationship prior to the linking process by making a call to the glBindAttribLocation() method. We will see how this is implemented shortly when we discuss how to abstract these processes.

We have described how we can pass data to a shader using attributes, but there is another option: uniform variables. Uniform variables are specifically used for data that changes infrequently. For example, matrices are great candidates for uniform variables. Within a shader, a uniform variable is read-only; that means its value can only be changed from outside the shader. Uniform variables can also appear in multiple shaders within the same shader program. They can be declared in one or more shaders within a program, but if a variable with a given name is declared in more than one shader, its type must be the same in all of them. This gives us insight into the fact that the uniform variables are actually held in a shared namespace for the whole of the shader program.

To use a uniform variable in your shader, you first have to declare it in the shader file using the uniform keyword. The following is what this might look like:

uniform mat4 ViewMatrix;

We then need to provide the data for the uniform variable from inside our game/program. We do this by first finding the location of the variable using the glGetUniformLocation() method. Then we assign a value to the found location using one of the glUniform() methods. The code for this process could look something like the following:

GLint location = glGetUniformLocation(programHandle, "ViewMatrix");
if (location >= 0) {
  glUniformMatrix4fv(location, 1, GL_FALSE, &viewMatrix[0][0]);
}

We then assign a value to the uniform variable's location using the glUniformMatrix4fv() method. The first argument is the uniform variable's location. The second argument is the number of matrices that are being assigned. The third is a GLboolean specifying whether or not the matrix should be transposed. Since we are using the GLM library for our matrices, a transpose is not required. If you were implementing the matrix using data that was in row-major order instead of column-major order, you might need to use GL_TRUE for this argument. The last argument is a pointer to the data for the uniform variable.

Uniform variables can be any GLSL type, and this includes complex types such as structures and arrays. The OpenGL API provides a glUniform() function with a different suffix matching each type. For example, to assign to a variable of type vec3, we would use the glUniform3f() or glUniform3fv() method (the v denotes multiple values in an array).

So, those are the concepts and techniques for passing data to and from our shader programs. However, as we did for the compiling and linking of our shaders, we can abstract these processes into functions housed in our ShaderManager class. We are going to focus on working with attributes and uniform variables. First, we will look at the abstraction of adding attribute bindings using the AddAttribute() function of the ShaderManager class. This function takes one argument: the attribute's name, as a string, to be bound.
We then call the glBindAttribLocation() function, passing the program's handle, the current index or number of attributes (which we increment with each call), and finally the C string conversion of the attributeName string, which provides a pointer to the first character in the string array. This function must be called after compilation, but before the linking of the shader program:

void ShaderManager::AddAttribute(const std::string& attributeName)
{
  glBindAttribLocation(m_programID, m_numAttributes++, attributeName.c_str());
}

For the uniform variables, we create a function that abstracts looking up the location of the uniform in the shader program: the GetUniformLocation() function. This function again takes only one argument, the uniform's name in the form of a string. We then create a temporary holder for the location and assign it the returned value of the glGetUniformLocation() method call. We check to make sure the location is valid, and if it is not, we throw an exception letting us know about the error. Finally, we return the valid location if found:

GLint ShaderManager::GetUniformLocation(const std::string& uniformName)
{
  GLint location = glGetUniformLocation(m_programID, uniformName.c_str());
  if (location == GL_INVALID_INDEX) {
    Exception("Uniform " + uniformName + " not found in shader!");
  }
  return location;
}

This gives us the abstraction for binding our data, but we still need to assign which shader should be used for a certain draw call, and to activate any attributes we need. To accomplish this, we create a function in the ShaderManager called Use(). This function first sets the current shader program as the active one using the glUseProgram() API method call. We then use a for loop to step through the list of attributes for the shader program, activating each one:

void ShaderManager::Use(){
  glUseProgram(m_programID);
  for (int i = 0; i < m_numAttributes; i++) {
    glEnableVertexAttribArray(i);
  }
}

Of course, since we have an abstracted way to enable the shader program, it only makes sense that we should have a function to disable it. This function is very similar to the Use() function, but in this case, we are setting the program in use to 0, effectively making it NULL, and we use the glDisableVertexAttribArray() method to disable the attributes in the for loop:

void ShaderManager::UnUse() {
  glUseProgram(0);
  for (int i = 0; i < m_numAttributes; i++) {
    glDisableVertexAttribArray(i);
  }
}

The net effect of this abstraction is that we can now set up our entire shader program structure with a few simple calls. Code similar to the following would create and compile the shaders, add the necessary attributes, link the shaders into a program, locate a uniform variable, and create the VAO and VBO for a mesh:

shaderManager.CompileShaders("Shaders/SimpleShader.vert", "Shaders/SimpleShader.frag");
shaderManager.AddAttribute("vertexPosition_modelspace");
shaderManager.AddAttribute("vertexColor");
shaderManager.LinkShaders();
MatrixID = shaderManager.GetUniformLocation("ModelViewProjection");
m_model.Init("Meshes/Dwarf_2_Low.obj", "Textures/dwarf_2_1K_color.png");

Then, in our Draw loop, if we want to use this shader program to draw, we can simply use the abstracted functions to activate and deactivate our shader, similar to the following code:

shaderManager.Use();
m_model.Draw();
shaderManager.UnUse();

This makes it much easier for us to work with and test out advanced rendering techniques using shaders.
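For reference, the member variables used throughout these methods (m_programID, m_vertexShaderID, m_fragmentShaderID, and m_numAttributes) live in the ShaderManager class declaration, which is not reproduced in this excerpt. A minimal declaration consistent with the methods we have walked through might look like the following sketch; the layout in the book's actual source may differ:

#include <string>
#include <GL/glew.h> //assumption: the example engine loads the OpenGL functions through GLEW

//Minimal sketch of the ShaderManager interface discussed in this article.
class ShaderManager
{
public:
  void CompileShaders(const std::string& vertexShaderFilePath, const std::string& fragmentShaderFilepath);
  void LinkShaders();
  void AddAttribute(const std::string& attributeName);
  GLint GetUniformLocation(const std::string& uniformName);
  void Use();
  void UnUse();

private:
  void CompileShader(const std::string& filePath, GLuint id);

  int m_numAttributes = 0;        //number of attributes bound with AddAttribute()
  GLuint m_programID = 0;         //handle to the linked shader program
  GLuint m_vertexShaderID = 0;    //handle to the compiled vertex shader
  GLuint m_fragmentShaderID = 0;  //handle to the compiled fragment shader
};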
In this article, we have looked at how shaders work and built a simple infrastructure for compiling, linking, and passing data to shader programs. Combined with advanced rendering techniques and hands-on knowledge of game physics and lighting, these tools can help you create advanced games with C++. If you liked this article, check out the complete book, Mastering C++ Game Development.