
How-To Tutorials - Game Development

370 Articles

Optimizing Graphics Pipelines with Meshlets: A Guide to Efficient Geometry Processing

Marco Castorina, Gabriel Sassone
09 Dec 2024
15 min read
This article is an excerpt from the book "Mastering Graphics Programming with Vulkan", by Marco Castorina and Gabriel Sassone. Mastering Graphics Programming with Vulkan starts by familiarizing you with the foundations of a modern rendering engine. The book guides you through GPU-driven rendering and shows you how to drive culling and rendering from the GPU to minimize CPU overhead. Finally, you'll explore advanced rendering techniques like temporal anti-aliasing and ray tracing.

Introduction

In modern graphics pipelines, optimizing the geometry stage can have a significant impact on overall rendering performance. This article delves into the concept of meshlets—an approach to breaking down large meshes into smaller, more manageable chunks for efficient GPU processing. By subdividing meshes into meshlets, we can enhance culling techniques, reduce unnecessary shading, and better handle complex geometry. Join us as we explore how meshlets work, their benefits, and practical steps to implement them.

Breaking down large meshes into meshlets

In this article, we are going to focus primarily on the geometry stage of the pipeline, the one before the shading stage. Adding some complexity to the geometry stage of the pipeline will pay dividends in later stages, as we'll reduce the number of pixels that need to be shaded.

Note
When we refer to the geometry stage of the graphics pipeline, we don't mean geometry shaders. The geometry stage of the pipeline refers to input assembly (IA), vertex processing, and primitive assembly (PA). Vertex processing can, in turn, run one or more of the following shaders: vertex, geometry, tessellation, task, and mesh shaders.

Content geometry comes in many shapes, sizes, and levels of complexity. A rendering engine must be able to deal with meshes ranging from small, detailed objects to large terrains. Large meshes (think terrain or buildings) are usually broken down by artists so that the rendering engine can pick the different levels of detail of these objects based on their distance from the camera.

Breaking down meshes into smaller chunks can help cull geometry that is not visible, but some of these meshes are still large enough that we need to process them in full, even if only a small portion is visible.

Meshlets have been developed to address these problems. Each mesh is subdivided into groups of vertices (usually 64) that can be more easily processed on the GPU. The following image illustrates how meshes can be broken down into meshlets:

Figure 6.1 – A meshlet subdivision example

These vertices can make up an arbitrary number of triangles, but we usually tune this value according to the hardware we are running on. In Vulkan, the recommended value is 126 (as explained in https://developer.nvidia.com/blog/introduction-turing-mesh-shaders/, this number leaves room to reserve some memory for writing the primitive count with each meshlet).

Note
At the time of writing, mesh and task shaders are only available on Nvidia hardware through its extension. While some of the APIs described in this chapter are specific to this extension, the concepts can be generally applied and implemented using generic compute shaders.
A more generic version of this extension is currently being worked on by the Khronos committee, so mesh and task shaders should soon be available from other vendors!

Now that we have a much smaller number of triangles, we can use them to get much finer-grained control by culling meshlets that are not visible or are being occluded by other objects. Together with the list of vertices and triangles, we also generate some additional data for each meshlet that will be very useful later on to perform back-face, frustum, and occlusion culling. One additional possibility (that will be added in the future) is to choose the level of detail (LOD) of a mesh and, thus, a different subset of meshlets based on any desired heuristic.

The first piece of this additional data is the bounding sphere of a meshlet, as shown in the following screenshot:

Figure 6.2 – A meshlet bounding spheres example; some of the larger spheres have been hidden for clarity

Some of you might ask: why not AABBs? AABBs require at least two vec3 of data: one for the center and one for the half-size vector. Another encoding could be to store the minimum and maximum corners. Instead, spheres can be encoded with a single vec4: a vec3 for the center plus the radius. Given that we might need to process millions of meshlets, each saved byte counts! Spheres can also be more easily tested for frustum and occlusion culling, as we will describe later in the chapter.

The next additional piece of data that we're going to use is the meshlet cone, as shown in the following screenshot:

Figure 6.3 – A meshlet cone example; not all cones are displayed for clarity

The cone indicates the direction a meshlet is facing and will be used for back-face culling. Now that we have a better understanding of why meshlets are useful and how we can use them to improve the culling of larger meshes, let's see how we generate them in code!

Generating meshlets

We are using an open source library called MeshOptimizer (https://github.com/zeux/meshoptimizer) to generate the meshlets. An alternative library is meshlete (https://github.com/JarkkoPFC/meshlete), and we encourage you to try both to find the one that best suits your needs.

After we have loaded the data (vertices and indices) for a given mesh, we are going to generate the list of meshlets. First, we determine the maximum number of meshlets that could be generated for our mesh and allocate memory for the vertices and indices arrays that will describe the meshlets:

const sizet max_meshlets = meshopt_buildMeshletsBound( indices_accessor.count, max_vertices, max_triangles );

Array<meshopt_Meshlet> local_meshlets;
local_meshlets.init( temp_allocator, max_meshlets, max_meshlets );

Array<u32> meshlet_vertex_indices;
meshlet_vertex_indices.init( temp_allocator, max_meshlets * max_vertices, max_meshlets * max_vertices );

Array<u8> meshlet_triangles;
meshlet_triangles.init( temp_allocator, max_meshlets * max_triangles * 3, max_meshlets * max_triangles * 3 );

Notice the types for the indices and triangles arrays. We are not modifying the original vertex or index buffer, but only generating a list of indices into the original buffers. Another interesting aspect is that we only need 1 byte to store the triangle indices. Again, saving memory is very important to keep meshlet processing efficient!

The next step is to generate our meshlets:

const sizet max_vertices = 64;
const sizet max_triangles = 124;
const f32 cone_weight = 0.0f;

sizet meshlet_count = meshopt_buildMeshlets( local_meshlets.data, meshlet_vertex_indices.data, meshlet_triangles.data, indices, indices_accessor.count, vertices, position_buffer_accessor.count, sizeof( vec3s ), max_vertices, max_triangles, cone_weight );

As mentioned in the preceding step, we need to tell the library the maximum number of vertices and triangles that a meshlet can contain. In our case, we are using the recommended values for the Vulkan API. The other parameters include the original vertex and index buffers, and the arrays we have just created that will contain the data for the meshlets.

Let's have a better look at the data structure of each meshlet:

struct meshopt_Meshlet {
    unsigned int vertex_offset;
    unsigned int triangle_offset;
    unsigned int vertex_count;
    unsigned int triangle_count;
};

Each meshlet is described by two offsets and two counts, one for the vertex indices and one for the indices of the triangles. Note that these offsets refer to meshlet_vertex_indices and meshlet_triangles, which are populated by the library, not to the original vertex and index buffers of the mesh.
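To make the double indirection concrete, the following is a minimal sketch (not from the book) that walks the triangles of a single meshlet and recovers indices into the original vertex buffer. std::vector and a locally defined struct stand in for the engine's Array type and the meshopt_Meshlet structure shown above, so the snippet stays self-contained:

#include <cstdint>
#include <vector>

// Local mirror of meshopt_Meshlet (see meshoptimizer.h), used here only to
// keep the sketch self-contained.
struct Meshlet {
    uint32_t vertex_offset;
    uint32_t triangle_offset;
    uint32_t vertex_count;
    uint32_t triangle_count;
};

// Resolve the triangles of one meshlet back into indices of the mesh's
// original vertex buffer. meshlet_vertex_indices and meshlet_triangles are
// the arrays filled by meshopt_buildMeshlets above.
std::vector<uint32_t> resolve_meshlet_triangles(
    const Meshlet& m,
    const std::vector<uint32_t>& meshlet_vertex_indices,
    const std::vector<uint8_t>& meshlet_triangles )
{
    std::vector<uint32_t> out;
    out.reserve( m.triangle_count * 3 );

    for ( uint32_t t = 0; t < m.triangle_count * 3; ++t ) {
        // Each byte is a local index (0..vertex_count-1) into this meshlet's
        // slice of meshlet_vertex_indices...
        const uint8_t local_index = meshlet_triangles[ m.triangle_offset + t ];
        // ...which in turn holds an index into the original vertex buffer.
        out.push_back( meshlet_vertex_indices[ m.vertex_offset + local_index ] );
    }
    return out;
}

This is the same lookup a mesh shader performs on the GPU when it reads a meshlet's vertices and emits its primitives.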
Now that we have the meshlet data, we need to upload it to the GPU. To keep the data size to a minimum, we store the positions at full resolution while we compress the normals to 1 byte per dimension and the UV coordinates to a half-float per dimension. In pseudocode, this is as follows:

meshlet_vertex_data.normal = ( normal + 1.0 ) * 127.0;
meshlet_vertex_data.uv_coords = quantize_half( uv_coords );

The next step is to extract the additional data (bounding sphere and cone) for each meshlet:

for ( u32 m = 0; m < meshlet_count; ++m ) {
    meshopt_Meshlet& local_meshlet = local_meshlets[ m ];

    meshopt_Bounds meshlet_bounds = meshopt_computeMeshletBounds( meshlet_vertex_indices.data + local_meshlet.vertex_offset, meshlet_triangles.data + local_meshlet.triangle_offset, local_meshlet.triangle_count, vertices, position_buffer_accessor.count, sizeof( vec3s ) );
    ...
}

We loop over all the meshlets and call the MeshOptimizer API that computes the bounds for each meshlet. Let's look in more detail at the structure of the data that is returned:

struct meshopt_Bounds {
    float center[3];
    float radius;

    float cone_apex[3];
    float cone_axis[3];
    float cone_cutoff;

    signed char cone_axis_s8[3];
    signed char cone_cutoff_s8;
};

The first four floats represent the bounding sphere. Next, we have the cone definition, which is comprised of the cone direction (cone_axis) and the cone angle (cone_cutoff). We are not using the cone_apex value, as it makes the back-face culling computation more expensive; however, it can lead to better results. Once again, notice that the quantized values (cone_axis_s8 and cone_cutoff_s8) help us reduce the size of the data required for each meshlet.

Finally, the meshlet data is copied into GPU buffers, and it will be used during the execution of task and mesh shaders. For each processed mesh, we will also save an offset and count of meshlets to add a coarse culling based on the parent mesh: if the mesh is visible, then its meshlets will be added.

In this article, we have described what meshlets are and why they are useful to improve the culling of geometry on the GPU.
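As a rough illustration of how this per-meshlet data can be consumed by the culling passes mentioned above, here is a CPU-style sketch of the two tests (the same logic is typically written inside a task or compute shader). It assumes the camera position and the six frustum planes are available; the cone test is the conservative, apex-free form described in the MeshOptimizer documentation, and the small vector types simply stand in for the math library used by the engine:

#include <cmath>

// Tiny stand-ins for the math types used by the engine (vec3s/vec4s).
struct vec3 { float x, y, z; };
struct vec4 { float x, y, z, w; };  // plane: xyz = normal, w = distance

static float dot( vec3 a, vec3 b ) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static vec3  sub( vec3 a, vec3 b ) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float length( vec3 a )      { return std::sqrt( dot( a, a ) ); }

// Frustum culling: the bounding sphere is outside if it lies fully behind any
// of the six planes (plane normals assumed to point towards the inside of the
// frustum).
static bool sphere_outside_frustum( vec3 center, float radius, const vec4 planes[ 6 ] ) {
    for ( int i = 0; i < 6; ++i ) {
        const float distance = planes[ i ].x * center.x + planes[ i ].y * center.y +
                               planes[ i ].z * center.z + planes[ i ].w;
        if ( distance < -radius ) {
            return true;
        }
    }
    return false;
}

// Cluster cone (back-face) culling without the apex: a meshlet can be rejected
// when every triangle in it faces away from the camera. cone_axis and
// cone_cutoff come from meshopt_Bounds; the bounding sphere makes the test
// conservative.
static bool meshlet_backfacing( vec3 center, float radius, vec3 cone_axis,
                                float cone_cutoff, vec3 camera_position ) {
    const vec3 view = sub( center, camera_position );
    return dot( view, cone_axis ) >= cone_cutoff * length( view ) + radius;
}

A meshlet is then emitted for rasterization only if it is neither outside the frustum nor completely back-facing (occlusion culling adds a third, depth-based test on top of this).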
Conclusion

Meshlets represent a powerful tool for optimizing the rendering of complex geometries. By subdividing meshes into small, efficient chunks and incorporating additional data like bounding spheres and cones, we can achieve finer-grained control over visibility and culling. Whether you're leveraging advanced shader technologies or applying these concepts with compute shaders, adopting meshlets can lead to significant performance improvements in your graphics pipeline. With libraries like MeshOptimizer at your disposal, implementing this technique has never been more accessible.

Author Bio

Marco Castorina first became familiar with Vulkan while working as a driver developer at Samsung. Later, he developed a 2D and 3D renderer in Vulkan from scratch for a leading media server company. He recently joined the games graphics performance team at AMD. In his spare time, he keeps up to date with the latest techniques in real-time graphics. He also likes cooking and playing guitar.

Gabriel Sassone is a rendering enthusiast currently working as a principal rendering engineer at The Multiplayer Group. He previously worked at Avalanche Studios, where he first encountered Vulkan and developed the Vulkan layer for the proprietary Apex Engine and its Google Stadia port. Before that, he worked at ReadyAtDawn, Codemasters, FrameStudios, and some other non-gaming tech companies. His spare time is filled with music, rendering, gaming, and outdoor activities.


Gaming in the Metaverse

Irena Cronin, Robert Scoble
24 Oct 2024
10 min read
This article is an excerpt from the book The Immersive Metaverse Playbook for Business Leaders, by Irena Cronin and Robert Scoble. This book explains what the metaverse is and why it is of utmost value to business decision-makers. The chapters help you get a solid understanding of the concepts and roles that augmented reality and virtual reality play, along with providing information on metaverse technologies, as well as thought-provoking consumer and enterprise use cases.

Introduction

In the Metaverse's expansive gaming landscape, several compelling use cases emerge. Gamers become creators and modifiers, democratizing game development, with quality control as a challenge. Cross-platform gaming integration fosters an inclusive gaming community, while blockchain-backed virtual merchandise and collectibles introduce new opportunities with authenticity and copyright concerns. Virtual esports tournaments become global events, requiring stringent security measures. In-game advertising and product placement offer marketing potential, but striking a balance with player experience is vital. These use cases exemplify the diverse facets of gaming in the Metaverse, highlighting innovation and challenges in the pursuit of immersive digital gaming experiences. Let's take a closer look at some use cases.

Use case 1 – game creation and modification

This use case exemplifies how the Metaverse empowers gamers to become active contributors to the gaming industry, shaping its future through their creativity and innovation. It highlights the democratization of game development and the dynamic synergy between technology, interactivity, and the challenges that come with it in this evolving digital realm.

The setup
Within the expansive and thriving Metaverse gaming landscape, a remarkable facet emerges where 3D and 2D virtual gamers are not just players but empowered creators and modifiers of games themselves. The Metaverse offers a vast canvas, brimming with opportunities for individuals and teams to craft unique gaming experiences that cater to a global audience.

Interactivity
In this immersive gaming domain, players transition into creators as they engage with innovative game creation and modification tools, which include the use of generative AI. These tools empower users to design levels, characters, and gameplay mechanics, breathing life into their imaginative concepts. Collaborative platforms within the Metaverse foster teamwork, allowing multiple creators to combine their skills and ideas seamlessly.

Technical innovation
The Metaverse's technical innovation shines through in the form of user-friendly game development platforms that bridge the gap between novice creators and experienced developers. These platforms offer intuitive interfaces, drag-and-drop functionality, and pre-built assets, making game design accessible to a wide range of enthusiasts. AI-driven game design assistance provides suggestions and optimizations, reducing the learning curve for newcomers. And with generative AI, soon whole 3D, as well as 2D, games could be fully developed.

Challenges
While the Metaverse fuels creativity and democratizes game development, several challenges emerge on this vibrant frontier. Balancing the influx of user-generated content with quality control becomes pivotal. Moderation systems must ensure that games meet basic quality standards and are free from malicious or inappropriate content.
Additionally, striking a harmonious balance between open creativity and maintaining fair play in modified games poses an ongoing challenge. Ensuring that user-created content doesn’t disrupt the gaming experience for others is a priority. Continuous development and refinement of moderation and quality control mechanisms are essential to maintain a thriving and enjoyable gaming ecosystem within the Metaverse. Use case 2 – cross-platform gaming integration This use case illustrates how the Metaverse transcends the limitations of individual gaming platforms, fostering a more inclusive and interconnected gaming community. Cross-platform gaming integration enhances the social and competitive aspects of gaming, enabling players to unite in a shared virtual gaming universe. As the Metaverse continues to evolve, it reshapes the way we perceive and engage in gaming, offering a glimpse into the future of interactive entertainment. The setup Within the expansive Metaverse gaming landscape, cross-platform gaming integration becomes a prominent feature. This innovation allows players from various gaming platforms and devices to seamlessly interact and play together, breaking down traditional gaming silos. Interactivity In this interconnected Metaverse, players can engage in cross-platform gaming experiences with friends and gamers from around the world. Whether you’re on a PC, console, VR headset, or mobile device, you can join the same virtual gaming universe. Gamers can form diverse teams and alliances, fostering a sense of community that transcends hardware preferences. This integration offers unprecedented opportunities for collaboration and competition. Technical innovation The technical innovation driving this use case is the development of cross-platform compatibility protocols and infrastructure. These innovations bridge the gaps between different gaming ecosystems, allowing for cross-device gameplay. Advanced matchmaking algorithms ensure that players of similar skill levels can enjoy fair and balanced gaming experiences, regardless of their chosen platform. This technical integration transforms the Metaverse into a truly inclusive gaming space. Challenges While cross-platform gaming integration is a remarkable achievement, it comes with its own set of challenges. Ensuring a level playing field for all players, regardless of their platform, requires ongoing fine-tuning of matchmaking algorithms. Addressing potential disparities in hardware capabilities, such as graphics processing power, can be complex. Additionally, maintaining a secure gaming environment across diverse platforms is essential to prevent cheating, unauthorized access, and other security concerns. Use case 3 – game-related merchandise and collectibles This use case showcases how the Metaverse transforms the concept of gaming merchandise and collectibles, offering a virtual marketplace where gamers can not only enhance their in-game experiences but also indulge in their passion for collecting virtual treasures. The integration of blockchain technology adds a layer of trust and scarcity to these digital possessions, creating a virtual economy that mirrors the real-world collectibles market. The setup Within the Metaverse, a vibrant and bustling marketplace dedicated to gaming-related merchandise and collectibles emerges. This dynamic digital marketplace transforms the concept of gaming memorabilia, offering a diverse range of 3D and 2D virtual goods that hold significant value for gamers and collectors alike. 
It’s a virtual bazaar where gamers can immerse themselves in the culture of their favorite games beyond the confines of traditional gameplay. Interactivity In this immersive Metaverse marketplace, players gain the opportunity to personalize their avatars with a rich array of virtual gaming apparel and accessories. Gamers can browse an extensive catalog of virtual merchandise, including iconic character costumes, in-game items, and exclusive skins. This personalized customization allows players to showcase their gaming identity and immerse themselves even deeper into their favorite game worlds. Technical innovation At the heart of this use case lies the groundbreaking implementation of blockchain technology. This innovation plays a pivotal role in securing virtual collectibles, offering gamers a sense of rarity and ownership verification akin to physical collectibles. Each virtual item is tokenized on the blockchain, ensuring its uniqueness and provenance. Gamers can confidently buy, sell, and trade virtual merchandise, knowing that their digital possessions are genuine and scarce. In terms of the companies that offer game-related merchandise and collectibles, generative AI provides an inexpensive, fast, and easy way to create assets. Challenges While this Metaverse marketplace promises exciting opportunities, it also presents unique challenges. Ensuring the authenticity of virtual merchandise is paramount. The presence of counterfeit or unauthorized virtual items could undermine the trust and value within the marketplace. Additionally, addressing potential copyright issues related to virtual merchandise is a central concern. Striking a balance between allowing creative expression and protecting intellectual property rights is essential to maintaining a thriving and ethical marketplace. Negative implications of gaming in the Metaverse Gaming in the Metaverse, while promising incredible innovation and immersive experiences, also carries negative implications that span technological, social, and ethical dimensions. These potential drawbacks must be considered alongside the benefits to ensure a balanced perspective on this digital frontier. Technological implications Dependency on technology: As gaming in the Metaverse becomes increasingly sophisticated, there is a risk of individuals becoming overly dependent on technology for their entertainment and social interactions. This dependence may lead to issues related to screen time, addiction, and reduced physical activity. Technical glitches: The reliance on advanced technology for immersive gaming experiences introduces the possibility of technical glitches, server outages, or compatibility issues. These disruptions can frustrate players and disrupt their gaming experiences. Privacy concerns: The collection and utilization of user data within the Metaverse for targeted advertising and analytics can raise privacy concerns. Users may feel uncomfortable with the extent to which their online activities are monitored and analyzed. Social implications Social isolation: Immersive gaming experiences in the Metaverse could lead to social isolation as individuals spend more time in virtual environments and less time in physical social interactions. Loneliness and a lack of real-world social skills can result from excessive immersion. Economic disparities: Access to the Metaverse and its premium gaming experiences may be limited by socioeconomic factors. 
Those with greater financial resources may enjoy a significant advantage, potentially creating digital divides and exclusivity.

Loss of physical interaction: The allure of the Metaverse may lead to a reduction in face-to-face social interactions, which are crucial for human well-being. The diminished importance of real-world connections could have adverse effects on mental health and relationships.

Ethical implications

Exploitative monetization: In-game purchases and microtransactions within the Metaverse can sometimes exploit players, particularly younger individuals who may not fully understand the financial implications. This raises ethical questions about the gaming industry's practices.

Digital addiction: The highly immersive nature of gaming in the Metaverse may contribute to digital addiction, where individuals struggle to disengage from virtual experiences and prioritize them over real-world responsibilities.

Content regulation: Balancing freedom of expression and maintaining a safe and inclusive gaming environment can be challenging. The Metaverse may struggle with regulating hate speech, inappropriate content, and cyberbullying.

Psychological implications

Escapism: While gaming can be a form of entertainment, excessive escapism into the Metaverse may indicate underlying psychological issues or a desire to avoid real-world problems.

Impact on mental health: Long hours spent in virtual gaming worlds may lead to mental health issues such as anxiety, depression, and a distorted sense of reality.

Cognitive overload: The complexity of immersive gaming experiences within the Metaverse can lead to cognitive overload, especially in younger players, potentially impacting their academic performance and cognitive development.

Environmental implications

Energy consumption: The infrastructure required to support the Metaverse's immersive experiences and multiplayer environments can consume significant amounts of energy, contributing to environmental concerns.

Electronic waste: As technology evolves rapidly, older gaming equipment and hardware can quickly become obsolete, leading to electronic waste disposal challenges.

Conclusion

In conclusion, the Metaverse is revolutionizing gaming with new opportunities for creativity, community, and commerce. It empowers gamers as creators, enables cross-platform play, introduces blockchain-backed collectibles, and hosts virtual esports tournaments. However, these advancements come with challenges like quality control, security, and balancing ads with player experience. Additionally, potential negative impacts such as technological dependency, social isolation, and ethical concerns must be addressed. By fostering innovation responsibly, the Metaverse can become a transformative and enriching space for gamers worldwide.

Author Bio

Irena Cronin is SVP of Product for DADOS Technology, which is making an Apple Vision Pro data analytics and visualization app. She is also the CEO of Infinite Retina, which helps companies develop and implement AI, AR, and other new technologies for their businesses. Before this, she worked as an equity research analyst and gained extensive experience in evaluating both public and private companies. Cronin has an MS with Distinction in Information Technology/Management and Systems from New York University, and a joint MBA/MA from the University of Southern California. She has a BA from the University of Pennsylvania with a major in Economics (summa cum laude).
Cronin speaks four languages, with a near-fluent proficiency in Mandarin.Robert Scoble has coauthored four books on technology innovation – each a decade before the said technology went completely mainstream. He has interviewed thousands of entrepreneurs in the tech industry and has long kept his social media audiences up to date on what is happening inside the world of tech, which is bringing us so many innovations. Robert currently tracks the AI industry and is the host of a new video show, Unaligned, where he interviews entrepreneurs from the thousands of AI companies he tracks as head of strategy for Infinite Retina.


Why should you use Unreal Engine 4 to build Augmented and Virtual Reality projects

Guest Contributor
20 Dec 2019
6 min read
This is an exciting time to be a game developer. New technologies like Virtual Reality (VR) and Augmented Reality (AR) are here and growing in popularity, and a whole new generation of game consoles is just around the corner. Right now everyone wants to jump onto these bandwagons and create successful games using AR, VR and other technologies (for more detailed information see Chapter 15, Virtual Reality and Beyond, of my book, Learning C++ by Building Games with Unreal Engine 4 – Second Edition). But no one really wants to create everything from scratch (reinventing the wheel is just too much work). Fortunately, you don't have to. Unreal Engine 4 (UE4) can help!

Not only does Epic Games use their engine to develop their own games (and keep it constantly updated for that purpose), but many other game companies, both AAA and indie, also use the engine, and Epic is constantly adding new features for them too. They can also update the engine themselves, and they can make some of those changes available to the general public as well. UE4 also has a robust system for addons and plugins that many other developers contribute to. Some may be free, while other, more advanced ones are available for a price. These can be extremely specialized, and the developer may release regular updates to adjust to changes in Unreal and add new features that could make your life even easier.

So how does UE4 help with new technologies? Here are some examples:

Unreal Engine 4 for Virtual Reality

Virtual Reality (VR) is one of the most exciting technologies around, and many people are trying to get through that particular door. VR headsets from companies like Oculus, HTC, and Sony are becoming cheaper, more common, and more powerful. If you were creating a game yourself from scratch, you would need an extremely powerful graphics engine. Fortunately, UE4 already has one with VR functionality.

If you already have a project you want to convert to VR, UE4 makes this easy for you. If you have an Oculus Rift or HTC Vive installed on your computer, viewing your game in VR is as easy as launching it in VR Preview mode and viewing it in your headset. While controls might take more work, UE4 has a Motion Controller you can add to your controller to help you get started quickly. You can even edit your project in VR Mode, allowing you to see the editor view in your VR headset, which can help with positioning things in your game. If you're starting a new project, UE4 now has VR-specific templates for new projects. You also have plenty of online documentation and a large community of other users working with VR in Unreal Engine 4 who can help you out.

Unreal Engine 4 for Augmented Reality

Augmented Reality (AR) is another new technology that's extremely popular right now. Pokemon Go is extremely popular, and many companies are trying to do something similar. There are also AR headsets and possibly other new ways to view AR information. Every platform has its own way of handling Augmented Reality right now. On mobile devices, iOS has ARKit to support AR programming and Android has ARCore. Fortunately, the Unreal website has a whole section on AR and how to support these in UE4 to develop AR games at https://docs.unrealengine.com/en-US/Platforms/AR/index.html. It also has information on using Magic Leap, Microsoft HoloLens, and Microsoft HoloLens 2. So by using UE4, you get a big head start on this type of development.
Working with Other New Technologies

If you want to use a new technology, chances are UE4 supports it (and if not, just wait and it will). Whether you're trying to do procedural programming or just use the latest AI techniques (for more information see chapters 11 and 12 of my book, Learning C++ by Building Games with Unreal Engine 4 – Second Edition), chances are you can find something to help you get a head start in that technology that already works in UE4. And with so many people using the engine, it is likely to continue to be a great way to get support for new technologies.

Support for New Platforms

UE4 already supports numerous platforms such as PC, Mac, mobile, web, Xbox One, PS4, Switch, and probably any other recent platform you can think of. With the next-gen consoles coming out in 2020, chances are they're already working on support for them. For the consoles, you do generally need to be a registered developer with Microsoft, Sony, and/or Nintendo to have access to the tools to develop for those platforms (and you need expensive devkits). But as more indie games are showing up on these platforms, you don't necessarily have to be working at a AAA studio to do this anymore.

What is amazing when you develop in UE4 is that publishing for another platform should basically just work. You may need to change the controls and the screen size. An AAA 3D title might be too slow to be playable if you try to just run it on a mobile device without any changes, but the basic game functionality will be there and you can make changes from that point.

The Future

It's hard to tell what new technologies may come in the future, as new devices, game types, and methods of programming are developed. Regardless of what the future holds, there's a strong chance that UE4 will support them. So learning UE4 now is a great investment of your time. If you're interested in learning more, see my book, Learning C++ by Building Games with Unreal Engine 4 – Second Edition.

Author Bio

Sharan Volin has been programming games for more than a decade. She has worked on AAA titles for Behavior Interactive, Blind Squirrel Games, Sony Online Entertainment/Daybreak Games, Electronic Arts (Danger Close Games), 7 Studios (Activision), and more, as well as numerous smaller games. She has primarily been a UI Programmer but is also interested in Audio, AI, and other areas. She also taught Game Programming for a year at the Art Institute of California and is the author of Learning C++ by Building Games with Unreal Engine 4 – Second Edition.


Harrison Ferrone explains why C# is the preferred programming language for building games in Unity

Sugandha Lahoti
16 Dec 2019
6 min read
C# is one of the most popular programming languages and is used to create games in the Unity game engine. Experiences (games, AR/VR apps, etc.) built with Unity have reached nearly 3 billion devices worldwide and were installed 24 billion times in the last 12 months. We spoke to Harrison Ferrone, software engineer, game developer, creative technologist, and author of the book "Learning C# by Developing Games with Unity 2019". We talked about why C# is used for game design, the recent Unity 2019.2 release, and some tips and tricks for those developing games with Unity.

On C# and game development

Why is C# widely used to create games? How does it compare to C++? How is C# being used in other areas such as mobile and web development?

I think Unity chose to move forward with C# instead of Javascript or Boo because of its learning curve and its history with Microsoft. [Boo was one of the three scripting languages for the Unity game engine until it was dropped in 2014]. In my experience, C# is easier to learn than languages like C++, and that accessibility is a huge draw for game designers and programmers in general. With Xamarin mobile development and ASP.NET web applications in the mix, there's really no stopping the C# language any time soon.

What are C# scripts? How are they useful for creating games with Unity?

C# scripts are the code files that store behaviors in Unity, powering everything the engine does. While there are a lot of new tools that will allow a developer to make a game without them, scripts are still the best way to create custom actions and interactions within a game space.

Editor's Tip: To get started with how to create a C# script in Unity, you can go through Chapter 1 of Harrison Ferrone's book Learning C# by Developing Games with Unity 2019.

On why Harrison wrote his book, Learning C# by Developing Games with Unity 2019

Tell us the motivation behind writing your book Learning C# by Developing Games with Unity 2019. Why is developing Unity games a good way to learn the C# programming language? Why do you prefer Unity over other game engines?

My main motivation for writing the book was two-fold. First, I always wanted to be a writer, so marrying my love for technology with a lifelong dream was a no-brainer. Second, I wanted to write a beginner's book that would stay true to a beginner audience, always keeping them in mind. In terms of choosing games as a medium for learning, I've found that making something interesting and novel while learning a new skill-set leads to greater absorption of the material and more overall enjoyment. Unity has always been my go-to engine because its interface is highly intuitive and easy to get started with.

You have 3 years of experience building iOS applications in Swift. You also have a number of articles and tutorials on the same on the Ray Wenderlich website. Recently, you started branching out into C++ and Unreal Engine 4. How did you get into game design and Unity development? What made you interested in building games?

I actually got into game design and Unity development first, before all the iOS and Swift experience. It was my major in university, and even though I couldn't find a job in the game industry right after I graduated, I still held onto it as a passion.

On developing games

The latest release of Unity, Unity 2019.2, has a number of interesting features such as ProBuilder, Shader Graph and effects, 2D Animation, the Burst Compiler, etc. What are some of your favorite features in this release?
What are your expectations from Unity 2019.3?  I’m really excited about ProBuilder in this release, as it’s a huge time saver for someone as artistically challenged as I am. I think tools like this will level the playing field for independent developers who may not have access to the environment or level builders. What are some essential tips and tricks that a game developer must keep in mind when working in Unity? What are the do’s and don’ts? I’d say the biggest thing to keep in mind when working with Unity is the component architecture that it’s built on. When you’re writing your own scripts, think about how they can be separated into their individual functions and structure them like that - with purpose. There’s nothing worse than having a huge, bloated C# script that does everything under the sun and attaching it to a single game object in your project, then realizing it really needs to be separated into its component parts. What are the biggest challenges today in the field of game development? What is your advice for those developing games using C#? Reaching the right audience is always challenge number one in any industry, and game development is no different. This is especially true for indie game developers as they have to always be mindful of who they are making their game for and purposefully design and program their games accordingly. As far as advice goes, I always say the same thing - learn design patterns and agile development methodologies, they will open up new avenues for professional programming and project management. Rust has been touted as one of the successors of the C family of languages. The present state of game development in Rust is also quite encouraging. What are your thoughts on Rust for game dev? Do you think major game engines like Unity and Unreal will support Rust for game development in the future? I don’t have any experience with Rust, but major engines like Unity and Unreal are unlikely to adopt a new language because of the huge cost associated with a changeover of that magnitude. However, that also leaves the possibility open for another engine to be developed around Rust in the future that targets games, mobile, and/or web development. About the Author Harrison Ferrone was born in Chicago, IL, and raised all over. Most days, you can find him creating instructional content for LinkedIn Learning and Pluralsight, or tech editing for the Ray Wenderlich website. After a few years as an iOS developer at small start-ups, and one Fortune 500 company, he fell into a teaching career and never looked back. Throughout all this, he's bought many books, acquired a few cats, worked abroad, and continually wondered why Neuromancer isn't on more course syllabi. You can follow him on Linkedin, and GitHub.


What is Unity’s new Data-Oriented Technology Stack (DOTS)

Guest Contributor
04 Dec 2019
7 min read
If we look at the evolution of computing and gaming over the last decade, we can see how different things are compared to ten years ago. However, one of the most significant changes was moving from a world where 90% of the code ran on a single thread on a single core, to a world where we all carry hundreds of GPU cores in our pockets and must design efficient code that can run in parallel. Looking at this change, we can imagine why Unity feels the urge to adapt to this new paradigm. Unity's original design was born in a different era, and now it is time for it to adjust to the future. The Data-Oriented Technology Stack (DOTS) is the collective name for Unity's attempt at reshaping its internal architecture in a way that is faster, lighter, and, more importantly, optimized for the current massively multi-threaded world. In this article, we will take a look at the three main components of DOTS and how they can help you develop next-generation games.

Want to learn more optimization techniques in Unity? The Unity engine comes with a great set of features to help you build high-performance games. If you want to know the techniques for writing better game scripts and learn how to optimize a game using Unity technologies such as ECS and the Burst compiler, read the book Unity Game Optimization - Third Edition written by Chris Dickinson and Dr. Davide Aversa. This book will help you get up to speed with a series of performance-enhancing coding techniques and methods that will help you improve the performance of your Unity applications.

The Data-Oriented Technology Stack

Three components compose the Data-Oriented Technology Stack:
- The Entity Component System (ECS)
- The C# Job System
- The Burst compiler
Let's look at each one of them.

The Entity Component System (ECS)

If you know Unity, you know that two basic structures represent every part of a game: the GameObject and the MonoBehavior. Every GameObject contains one or more MonoBehaviors, which in turn describe the data (what the object knows) and the behavior (what the object does) of each element in a scene.

GameObject and MonoBehavior worked well during Unity's initial years; however, with the rise of multithreaded programming, many issues with the GameObject architecture started to become more evident. First of all, a GameObject is a fat, heavy data structure. In theory, it should only be a container of MonoBehavior instances. In practice, instead, it has a significant number of problems. To name a few:
- Every GameObject has a name and an ID.
- Every GameObject has a C# wrapping object pointing to the native C++ code.
- Creating and deleting a GameObject requires locking and editing a global list (that is, these operations cannot run in parallel).

Moreover, both GameObject and MonoBehavior are dynamic objects, and they are stored everywhere in memory. It would be much better if we could keep all the MonoBehaviors of a GameObject close to each other, so that finding and running them would be more efficient.

To solve all these issues, Unity introduced the Entity Component System (ECS), a new paradigm alternative to the traditional GameObject/MonoBehavior one. As the name suggests, there are three elements in ECS:
- Components: They are conceptually similar to a MonoBehavior, but they contain only data. For instance, a Position component will contain only a 3D vector representing the entity position in space; a LinearVelocity component would contain only the velocity of the object, and so on. They are just plain data.
- Entities: They are just a "collection" of components. For example, if I have a particle in space, I can represent it with just a list of components, e.g., the Position and LinearVelocity components.
- System: A system is where the behavior is. Each system takes a list of components and executes a function over all the entities composed of the components of the archetype.

Note: To be technically correct, an entity is not a collection data structure. Instead, it is a pointer to a location in memory where the entity's components are stored. The actual storage, though, is handled by Unity.

With this system, we can store components in contiguous arrays, and an entity is just a pointer to the archetype instance. A single function for each system can define the behavior of thousands of similar entities. This is more efficient than running an Update on every MonoBehavior in every GameObject. For this reason, with ECS, we can use entities without any slowdown or system overhead where it was impossible with GameObject instances; for instance, we can have an entity for each particle of a particle system. For more technical info on ECS, there is a very detailed blog post on Unity's official website.
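To make the data layout concrete, here is a small illustrative sketch of the idea, written in plain C++ rather than Unity's C#, and not using Unity's actual API: components are plain structs packed into contiguous arrays, an entity is just an index into those arrays, and a "system" is a single function that loops over them.

#include <cstddef>
#include <vector>

// Components are plain data, nothing else.
struct Position       { float x, y, z; };
struct LinearVelocity { float x, y, z; };

struct World {
    // One contiguous array per component type; the same index in each array
    // belongs to the same entity.
    std::vector<Position>       positions;
    std::vector<LinearVelocity> velocities;
};

// A movement "system": one tight, cache-friendly loop over plain data instead
// of a virtual Update() call on every MonoBehavior of every GameObject.
void movement_system( World& world, float delta_time ) {
    for ( std::size_t i = 0; i < world.positions.size(); ++i ) {
        world.positions[ i ].x += world.velocities[ i ].x * delta_time;
        world.positions[ i ].y += world.velocities[ i ].y * delta_time;
        world.positions[ i ].z += world.velocities[ i ].z * delta_time;
    }
}

A tight loop over plain data like this is cache-friendly and easy to split across worker threads, which is exactly what the C# Job System and the Burst compiler, described next, are designed to exploit.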
The C# Job System

If ECS is how we describe the scene, we need a way to run the systems efficiently. As we said in the introduction, the modern approach to efficiency is to exploit every core in our system, and this means running code in parallel using massively multithreaded systems. Sadly, multi-threading is hard. Extremely hard. As any experienced developer can tell you, moving from single-thread to multi-thread programming introduces a large class of new issues and bugs, such as race conditions. Moreover, for true multi-threading, we should get as close as possible to the metal, avoiding the dynamic allocations and deallocations of C# and the Garbage Collector, and code part of our game in C++.

Luckily for us, Unity introduced a component in the Data-Oriented Technology Stack with the specific purpose of simplifying multithreaded programming in Unity using only C#: the Job System. You can imagine a Job as a piece of code that you want to run in parallel over as many cores as possible. The Unity C# Job System helps you design this code in a way that avoids all the common multi-threading pitfalls using only C#. You can finally unleash all the power of your machine without writing a single line of C++ code.

The Burst Compiler

What if I told you that it is possible to obtain higher performance by writing C# code instead of C++? You would think I am crazy. However, I am not, and this is the goal of the last component of the Data-Oriented Technology Stack (DOTS): the Burst compiler. The Burst compiler is a specialized code generator that compiles a subset of C# (often called High-Performance C# or HPC#) into machine code that is, most of the time, smaller and faster than the one generated by equivalent C++ code. The Burst compiler is still in preview, but you can already try it by using Unity's Package Manager. Of course, you get the most from it when it is combined with the other two DOTS components. For more technical info on the Burst compiler, you can refer to Unity's blog post.

Learn More About Unity Optimization

In this article, we only scratched the surface of the Data-Oriented Technology Stack (DOTS). If you want to learn more about how to use the DOTS technologies and other optimization techniques for Unity, you can read more in my book Unity Game Optimization - Third Edition. This Unity book is your guide to optimizing various aspects of your game development, from game characters and scripts right through to animations. You will also explore techniques for solving performance issues with your VR projects and learn best practices for project organization to save time through an improved workflow.

Author Bio

Dr. Davide Aversa holds a PhD in artificial intelligence and an MSc in artificial intelligence and robotics from the University of Rome La Sapienza in Italy. He has a strong interest in artificial intelligence for the development of interactive virtual agents and procedural content generation. He served as a Program Committee member of video game-related conferences such as the IEEE Conference on Computational Intelligence and Games, and he also regularly participates in game-jam contests. He also writes a blog on game design and game development. You can find him on Twitter, GitHub, and LinkedIn.

Unity 2019.2 releases with updated ProBuilder, Shader Graph, 2D Animation, Burst Compiler and more
Japanese Anime studio Khara is switching its primary 3D CG tools to Blender
Following Epic Games, Ubisoft joins Blender Development fund; adopts Blender as its main DCC tool


Blizzard comes under fire after banning pro-player for expressing support for Hong Kong protests

Sugandha Lahoti
10 Oct 2019
6 min read
Update: The article has now been updated to include Blizzard's press release about relaxing the ban on the pro-player.

Blizzard has been under fire since last weekend after the game publisher issued a year-long ban to a Hearthstone player who expressed support for the Hong Kong protestors during a competition live stream. The incident occurred on Sunday when Ng "Blitzchung" Wai Chung voiced support for the protesters in Hong Kong in a post-game interview. Blitzchung said, "Liberate Hong Kong. Revolution of our age!" The ban is effective from October 5th and forbids Blitzchung from participating in any tournaments for an entire year. Blizzard is also withholding any prize money he would have earned from competing in the tournament. Blizzard has also terminated its contract with the two casters who were interviewing the competitor.

Explaining the reason behind the ban, Blizzard issued a statement: "Per the competition rule, players aren't allowed to do anything that brings [them] into public disrepute, offends a portion or group of the public, or otherwise damages [Blizzard's] image. While we stand by one's right to express individual thoughts and opinions, players and other participants that elect to participate in our esports competitions must abide by the official competition rules."

Game players, US politicians, and Blizzard employees are outraged

After the ban of the Hearthstone pro, Blizzard was at the receiving end of a major backlash from video game players, US politicians, and Blizzard employees. On Tuesday, a small group of Blizzard employees walked out of work to protest the company's actions. The demonstration featured about 12-30 employees from multiple departments, who gathered around the Orc warrior statue in the center of the company's main campus in Irvine, California. The Daily Beast spoke with a few employees. "The action Blizzard took against the player was pretty appalling but not surprising," said a longtime Blizzard employee. "Blizzard makes a lot of money in China, but now the company is in this awkward position where we can't abide by our values." "I'm disappointed," another current Blizzard employee said. "We want people all over the world to play our games, but no action like this can be made with political neutrality."

US Senators Marco Rubio and Ron Wyden also chastised the actions of Blizzard on Twitter. "Blizzard shows it is willing to humiliate itself to please the Chinese Communist Party," Senator Wyden tweeted. "No American company should censor calls for freedom to make a quick buck." "Recognize what's happening here," Senator Rubio said on Twitter. "People who don't live in #China must either self-censor or face dismissal & suspensions. China using access to the market as leverage to crush free speech globally. Implications of this will be felt long after everyone in U.S. politics today is gone." https://twitter.com/marcorubio/status/1181556058659135488

Blizzard's own forums and subreddits were also bombarded with angry messages denouncing the ban. The r/Blizzard subreddit went down for a few hours on Tuesday after the board was drowned with posts calling for players to boycott Blizzard and its games like World of Warcraft, Overwatch, and Hearthstone. On its Hearthstone board, a redditor, Hinz97, said in a post, "I play [Hearthstone] every day, I climbed to Legend several times. I spent more than $10k. As a [Hong Konger], I quit [Hearthstone] without consideration." "I've been playing since beta. Good riddance," Redditor UltimaterializerX said.
"Blizzard CLEARLY only cares about the Chinese market. The censorship of art was bad enough. The censorship of human life is indefensible. Finding videos of what's going on in Hong Kong is easy and I suggest everyone do so. It's Tiananmen Square all over again." https://twitter.com/Espsilverfire2/status/1182001007976423424

Mark Kern, Team Lead for Vanilla World of Warcraft, tweeted, "This hurts. But until Blizzard reverses their decision on @blitzchungHS. I am giving up playing Classic WoW, which I helped make and helped convince Blizzard to relaunch. There will be no Mark of Kern guild after all."

Fortnite creator Epic Games released a statement saying that it will not ban players or content creators for political speech. "Epic supports everyone's right to express their views on politics and human rights. We wouldn't ban or punish a Fortnite player or content creator for speaking on these topics." https://twitter.com/TimSweeneyEpic/status/1181933071760789504

Blizzard has not yet responded to this development or lifted the ban. Hong Kong protests began in June, and the tech industry has now been caught in the middle of the China-Hong Kong political tussle. In August, Chinese state-run media agencies were caught buying advertisements and promoted tweets on Twitter and Facebook to portray Hong Kong protestors and their pro-democracy demonstrations as violent. Following this revelation, Twitter banned 936 accounts managed by the Chinese state; Facebook removed seven Pages, three Groups, and five Facebook accounts involved in coordinated inauthentic behavior; Google shut down 210 YouTube channels. Most recently, Apple, under pressure from the Chinese government, banned a protest safety app that helps people track the locations of the Hong Kong police, which angered many. A day later, amid the protests, Apple brought it back to the iOS Store. Yesterday, according to Quartz investigations editor John Keefe, Apple reportedly removed the Quartz application from the App Store at the request of the Chinese government. Quartz has been covering the Hong Kong protests in detail and has been blocked across all of mainland China.

Update as of Oct 11: After four days of mounting public pressure, Blizzard Entertainment published a press release partially relaxing the ban on the professional player who expressed support for the Hong Kong protestors during a competition live stream. The one-year ban on Ng "blitzchung" has since been changed to a six-month suspension. Additionally, the two Chinese broadcasters who had been fired have now been put on a six-month suspension from their jobs. Blizzard President J. Allen Brack also clarified that the decision was not influenced by China. "The specific views expressed by blitzchung were NOT a factor in the decision we made," Brack wrote. "I want to be clear: our relationships in China had no influence on our decision."

Apple bans HKmap.live, a Hong Kong protest safety app, from the iOS Store as it makes people 'evade law enforcement'.
Twitter and Facebook removed accounts of Chinese state-run media agencies aimed at undermining Hong Kong protests.
Telegram faces massive DDoS attack; suspects link to the ongoing Hong Kong protests

Unreal Engine 4.23 releases with major new features like Chaos, Virtual Production, improvement in real-time ray tracing and more

Vincy Davis
09 Sep 2019
5 min read
Last week, Epic released the stable version of Unreal Engine 4.23 with a whopping 192 improvements. The major features include beta offerings like Chaos - Destruction, Multi-Bounce Reflection fallback in Real-Time Ray Tracing, Virtual Texturing, Unreal Insights, HoloLens 2 native support, Niagara improvements, and many more. Unreal Engine 4.23 will no longer support iOS 10, as iOS 11 is now the minimum required version.

What's new in Unreal Engine 4.23?

Chaos - Destruction
Labelled as "Unreal Engine's new high-performance physics and destruction system", Chaos is available in beta for users to attain cinematic-quality visuals in real-time scenes. It also supports high-level artist control over content creation and destruction. https://youtu.be/fnuWG2I2QCY

Chaos supports many distinct characteristics, including:
- Geometry Collections: A new type of asset in Unreal for short-lived objects. Geometry Collection assets can be built using one or more Static Meshes. This offers the artist flexibility in choosing what to simulate and how to organize and author the destruction.
- Fracturing: A Geometry Collection can be broken into pieces either individually, or by applying one pattern across multiple pieces using the Fracturing tools.
- Clustering: Sub-fracturing is used by artists to increase optimization. Every sub-fracture is an extra level added to the Geometry Collection. The Chaos system keeps track of the extra levels and stores the information in a Cluster, to be controlled by the artist.
- Fields: Fields can be used to control simulation and other attributes of the Geometry Collection. They enable users to vary the mass, make something static, make the corners more breakable than the middle, and more.

Unreal Insights
Currently in beta, Unreal Insights enables developers to collect and analyze data about Unreal Engine's behavior in a consistent way. The Trace System API is one of its components and is used to collect information from runtime systems consistently. Another component of Unreal Insights is called the Unreal Insights Tool. It supplies interactive visualization of data through the Analysis API. For in-depth details about Unreal Insights and other features, you can also check out the first preview release of Unreal Engine 4.23.

Virtual Production Pipeline Improvements
Unreal Engine 4.23 advances the virtual production pipeline by making it possible to virtually scout environments, compose shots by connecting live broadcast elements with digital representations, and more.
- In-Camera VFX: With improvements to In-Camera VFX, users can achieve final shots live on set by combining real-world actors and props with Unreal Engine environment backgrounds.
- VR Scouting for Filmmakers: The new VR Scouting tools can be used by filmmakers to navigate and interact with the virtual world in VR. Controllers and settings can also be customized in Blueprints, rather than rebuilding the engine in C++.
- Live Link Datatypes and UX Improvements: The Live Link Plugin can be used to drive character animation, camera, lights, and basic 3D transforms dynamically from other applications and data sources in the production pipeline. Other improvements include save and load presets for Live Link setups, better status indicators to show the current Live Link sources, and more.
- Remote Control over HTTP: Unreal Engine 4.23 users can send commands to Unreal Engine and Unreal Editor remotely over HTTP. This makes it possible for users to create customized web user interfaces to trigger changes in the project's content.
Read Also: Epic releases Unreal Engine 4.22, focuses on adding "photorealism in real-time environments"

Real-Time Ray Tracing Improvements
Performance and Stability
Expanded DirectX 12 Support
Improved Denoiser quality
Increased Ray Traced Global Illumination (RTGI) quality
Additional Geometry and Material Support
Landscape Terrain
Hierarchical Instanced Static Meshes (HISM) and Instanced Static Meshes (ISM)
Procedural Meshes
Transmission with SubSurface Materials
World Position Offset (WPO) support for Landscape and Skeletal Mesh geometries

Multi-Bounce Reflection Fallback
Unreal Engine 4.23 provides improved support for multi-bounce Ray Traced Reflections (RTR) by using Reflection Captures. This will increase the performance of all types of intra-reflections.

Virtual Texturing
The beta version of Virtual Texturing in Unreal Engine 4.23 enables users to create and use large textures with a lower and more constant memory footprint at runtime.
Streaming Virtual Texturing: Streaming Virtual Texturing uses Virtual Texture assets to offer an option to stream textures from disk rather than the existing Mip-based streaming. It minimizes the texture memory overhead and increases performance when using very large textures.
Runtime Virtual Texturing: Runtime Virtual Texturing provides a Runtime Virtual Texture asset. It can be used to supply shading data over large areas, making it suitable for Landscape shading.

Unreal Engine 4.23 also presents new features like Skin Weight Profiles, Animation Streaming, Dynamic Animation Graphs, Open Sound Control, Sequencer Curve Editor Improvements, and more. As expected, users love the new features in Unreal Engine 4.23, especially Chaos.
https://twitter.com/rista__m/status/1170608746692673537
https://twitter.com/jayakri59101140/status/1169553133518782464
https://twitter.com/NoisestormMusic/status/1169303013149806595
To know about the full updates in Unreal Engine 4.23, users can head over to the Unreal Engine blog.

Other news in Game Development
Japanese Anime studio Khara is switching its primary 3D CG tools to Blender
Following Epic Games, Ubisoft joins Blender Development fund; adopts Blender as its main DCC tool
Epic Games grants Blender $1.2 million in cash to improve the quality of their software development projects


Bitbucket to no longer support Mercurial, users must migrate to Git by May 2020

Fatema Patrawala
21 Aug 2019
6 min read
Yesterday marked the end of an era for Mercurial users, as Bitbucket announced it will no longer support Mercurial repositories after May 2020. Bitbucket, owned by Atlassian, is a web-based version control repository hosting service for source code and development projects. It has supported Mercurial since its beginning in 2008, and Git since October 2011. Now, almost ten years into that shared journey, the Bitbucket team has decided to remove Mercurial support from Bitbucket Cloud and its API. The official announcement reads, “Mercurial features and repositories will be officially removed from Bitbucket and its API on June 1, 2020.”

The Bitbucket team also communicated the timeline for the sunsetting of the Mercurial functionality. After February 1, 2020, users will no longer be able to create new Mercurial repositories. After June 1, 2020, users will not be able to use Mercurial features in Bitbucket or via its API, and all Mercurial repositories will be removed. All current Mercurial functionality in Bitbucket will remain available through May 31, 2020.

The team said the decision was not an easy one for them and that Mercurial holds a special place in their heart. But according to a Stack Overflow Developer Survey, almost 90% of developers use Git, while Mercurial is the least popular version control system with only about 3% developer adoption. On top of that, Mercurial usage on Bitbucket has seen a steady decline, and the percentage of new Bitbucket users choosing Mercurial has fallen to less than 1%. Hence the decision to remove the Mercurial repos.

How can users migrate and export their Mercurial repos
The Bitbucket team recommends that users migrate their existing Mercurial repos to Git. They have also extended support for migration, and kept the available options open for discussion in their dedicated Community thread. Users can discuss conversion tools, migration, and tips, and also offer troubleshooting help. If users prefer to continue using Mercurial, there are a number of free and paid Mercurial hosting services for them. The Bitbucket team has also created a Git tutorial that covers everything from the basics of creating pull requests to rebasing and Git hooks.

Community shows anger and sadness over decision to discontinue Mercurial support
There is an outrage among Mercurial users, who are extremely unhappy and sad about this decision by Bitbucket. They have expressed anger not only on one platform but on multiple forums and community discussions. Users feel that Bitbucket's decision to stop offering Mercurial support is bad, but the decision to also delete the repos is evil. On Hacker News, users speculated that this decision was influenced by marketing potential rather than based on technically superior architecture and ease of use. They feel GitHub has successfully marketed Git, and that is how both have become synonymous with the developer community. One of them comments, “It's very sad to see bitbucket dropping mercurial support. Now only Facebook and volunteers are keeping mercurial alive. Sometimes technically better architecture and user interface lose to a non user friendly hard solutions due to inertia of mass adoption. So a lesson in Software development is similar to betamax and VHS, so marketing is still a winner over technically superior architecture and ease of use. GitHub successfully marketed git, so git and GitHub are synonymous for most developers.
Now majority of open source projects are reliant on a single proprietary solution Github by Microsoft, for managing code and project. Can understand the difficulty of bitbucket, when Python language itself moved out of mercurial due to the same inertia. Hopefully gitlab can come out with mercurial support to migrate projects using it from bitbucket.” Another user comments that Mercurial support was the only reason for him to use Bitbucket when GitHub is miles ahead of Bitbucket. Now when it stops supporting Mercurial too, Bitbucket will end soon. The comment reads, “Mercurial support was the one reason for me to still use Bitbucket: there is no other Bitbucket feature I can think of that Github doesn't already have, while Github's community is miles ahead since everyone and their dog is already there. More importantly, Bitbucket leaves the migration to you (if I read the article correctly). Once I download my repo and convert it to git, why would I stay with the company that just made me go through an annoying (and often painful) process, when I can migrate to Github with the exact same command? And why isn't there a "migrate this repo to git" button right there? I want to believe that Bitbucket has smart people and that this choice is a good one. But I'm with you there - to me, this definitely looks like Bitbucket will die.” On Reddit, programming folks see this as a big change from Bitbucket as they are the major mercurial hosting provider. And they feel Bitbucket announced this at a pretty short notice and they require more time for migration. Apart from the developer community forums, on Atlassian community blog as well users have expressed displeasure. A team of scientists commented, “Let's get this straight : Bitbucket (offering hosting support for Mercurial projects) was acquired by Atlassian in September 2010. Nine years later Atlassian decides to drop Mercurial support and delete all Mercurial repositories. Atlassian, I hate you :-) The image you have for me is that of a harmful predator. We are a team of scientists working in a university. We don't have computer scientists, we managed to use a version control simple as Mercurial, and it was a hard work to make all scientists in our team to use a version control system (even as simple as Mercurial). We don't have the time nor the energy to switch to another version control system. But we will, forced and obliged. I really don't want to check out Github or something else to migrate our projects there, but we will, forced and obliged.” Atlassian Bitbucket, GitHub, and GitLab take collective steps against the Git ransomware attack Attackers wiped many GitHub, GitLab, and Bitbucket repos with ‘compromised’ valid credentials leaving behind a ransom note BitBucket goes down for over an hour


Are you looking at transitioning from being a developer to manager? Here are some leadership roles to consider

Packt Editorial Staff
04 Jul 2019
6 min read
What does the phrase "a manager" really mean anyway? This phrase means different things to different people and is often overused for the position which nearly matches an analyst-level profile! This term, although common, is worth defining what it really means, especially in the context of software development. This article is an excerpt from the book The Successful Software Manager written by an internationally experienced IT manager, Herman Fung. This book is a comprehensive and practical guide to managing software developers, software customers, and explores the process of deciding what software needs to be built, not how to build it. In this article, we’ll look into aspects you must be aware of before making the move to become a manager in the software industry. A simple distinction I once used to illustrate the difference between an analyst and a manager is that while an analyst identifies, collects, and analyzes information, a manager uses this analysis and makes decisions, or more accurately, is responsible and accountable for the decisions they make. The structure of software companies is now enormously diverse and varies a lot from one to another, which has an obvious impact on how the manager’s role and their responsibilities are defined, which will be unique to each company. Even within the same company, it's subject to change from time to time, as the company itself changes. Broadly speaking, a manager within software development can be classified into three categories, as we will now discuss: Team Leader/Manager This role is often a lead developer who also doubles up as the team spokesperson and single point of contact. They'll typically be the most senior and knowledgeable member of a small group of developers, who work on the same project, product, and technology. There is often a direct link between each developer in the team and their code, which means the team manager has a direct responsibility to ensure the product as a whole works. Usually, the team manager is also asked to fulfill the people management duties, such as performance reviews and appraisals, and day-to-day HR responsibilities. Development/Delivery Manager This person could be either a techie or a non-techie. They will have a good understanding of the requirements, design, code, and end product. They will manage running workshops and huddles to facilitate better overall team working and delivery. This role may include setting up visual aids, such as team/project charts or boards. In a matrix management model, where developers and other experts are temporarily asked to work in project teams, the development manager will not be responsible for HR and people management duties. Project Manager This person is most probably a non-techie, but there are exceptions, and this could be a distinct advantage on certain projects. Most importantly, a project manager will be process-focused and output-driven and will focus on distributing tasks to individuals. They are not expected to jump in to solve technical problems, but they are responsible for ensuring that the proper resources are available, while managing expectations. Specifically, they take part in managing the project budget, timeline, and risks. They should also be aware of the political landscape and management agenda within the organization to be able to navigate through them. The project manager ensures the project follows the required methodology or process framework mandated by the Project Management Office (PMO). 
They will not have people-management responsibilities for project team members. Agile practitioner As with all roles in today's world of tech, these categories will vary and overlap. They can even be held by the same person, which is becoming an increasingly common trait. They are also constantly evolving, which exemplifies the need to learn and grow continually, regardless of your role or position. If you are a true Agile practitioner, you may have issues in choosing these generalized categories, (Team Leader, Development Manager and Project Manager)  and you'd be right to do so! These categories are most applicable to an organization that practises the traditional Waterfall model. Without diving into the everlasting Waterfall vs Agile debate, let's just say that these are the categories that transcend any methodologies. Even if they're not referred to by these names, they are the roles that need to be performed, to varying degrees, at various times. For completeness, it is worth noting one role specific to Agile, that is being a scrum master. Scrum master A scrum master is a role often compared – rightly or wrongly – with that of the project manager. The key difference is that their focus is on facilitation and coaching, instead of organizing and control. This difference is as much of a mindset as it is a strict practice, and is often referred to as being attributes of Servant Leadership. I believe a good scrum master will show traits of a good project manager at various times, and vice versa. This is especially true in ensuring that there is clear communication at all times and the team stays focused on delivering together. Yet, as we look back at all these roles, it's worth remembering that with the advent of new disciplines such as big data, blockchain, artificial intelligence, and machine learning, there are new categories and opportunities to move from a developer role into a management position, for example, as an algorithm manager or data manager. Transitioning, growing, progressing, or simply changing from a developer to a manager is a wonderfully rewarding journey that is unique to everyone. After clarifying what being a “modern manager" really means, and the broad categories applicable in software development (Team / Development / Project / Agile), the overarching and often key consideration for developers is whether it means they will be managing people and writing less code. In this article, we looked into different leadership roles that are available for developers for their career progression plan. Develop crucial skills to enhance your performance and advance your career with The Successful Software Manager written by Herman Fung. “Developers don’t belong on a pedestal, they’re doing a job like everyone else” – April Wensel on toxic tech culture and Compassionate Coding [Interview] Curl’s lead developer announces Google’s “plan to reimplement curl in Libcrurl” ‘I code in my dreams too’, say developers in Jetbrains State of Developer Ecosystem 2019 Survey


Microsoft’s Xbox team at E3 2019: Project Scarlett, AI-powered Flight Simulator, Keanu Reeves in Cyberpunk 2077, and more

Bhagyashree R
11 Jun 2019
6 min read
On Sunday at E3 2019, Microsoft made some really big announcements that had the audience screaming. These included release date of Project Scarlett, Xbox One successor, more than 60 game trailers, Keanu Reeves humbling the stage for promoting Cyberpunk 2077, and much more. E3, which stands for Electronic Entertainment Expo, is one of the biggest gaming events of the year. Its official dates are June 11-13, however, these dates are just for the shows happening at Los Angeles Convention Center. The press conferences were held on June 8 and 9. Along with hosting the world premiere of several computer and video games, this event also showcases new hardware and software products that take the gaming experience to the next level. Here are some of the highlights from Microsoft’s press conference: Project Scarlett will arrive in fall 2020 with Halo infinite Rumors have been going around about the next-generation of Xbox since December last year. Putting all these rumors to rest, Microsoft officially announced that Project Scarlett is planned to release during fall next year. The tech giant further shared that the next big upcoming space war game, Halo Infinite will launch alongside Project Scarlett. According to Microsoft, we can expect this new device to be four times more powerful than Xbox One X. It includes a custom designed CPU based on AMD’s Zen 2 and Radeon RDNA architecture. It supports 8K gaming, framerates of 120fps, and ray-tracing. The device will also include a non-mechanical SSD hard drive enabling faster game loads than its older mechanical hard drives. https://youtu.be/-ktN4bycj9s xCloud will open for public trials in October, one month ahead of Google’s Stadia After giving a brief live demonstration of its upcoming xCloud game streaming service in March, Microsoft announced that it will be available to the public in October this year. This announcement seems to be a direct response to Google’s Stadia, which was revealed in March and will make its public debut in November. Along with sharing the release date, the tech giant also gave E3 attendees the first hands-on trial of the service. At the event, Xbox chief Phil Spencer said, “Two months ago we connected all Xbox developers to Project xCloud. Today, we invite those of you here at E3 for our first public hands-on of Project xCloud. To experience the freedom to play right here at the show.” Microsoft built xCloud to provide gamers with a new way to play Xbox games where the gamers decide how and when they want to play. With xCloud Console Streaming you will be able to “turn your Xbox One into your own personal and free xCloud server.” It will enable you to stream entire Xbox One library including games from Xbox Game Pass to any device of your choice. https://twitter.com/Xbox/status/1137833126959280128 Xbox Elite 2 Wireless Controller to reach you on November 4th for $179.99 Microsoft announced the launch of Xbox Elite Wireless Controller Series 2, which it says is the totally re-engineered version of the previous Elite controller. It is open for pre-orders now and will be available on November 4th in 24 countries, priced at $179.99. The controller’s new adjustable tension thumbsticks provide improved precision and shorter hair trigger locks enable you to fire faster. The device includes USB-C support, Bluetooth, and a rechargeable battery that lasts for up to 40 hours per charge. Along with all these updates, it also allows you to do limitless customizations with the Xbox Accessories app on Xbox One and Windows 10 PC. 
https://youtu.be/SYVw0KqQiOI Cyberpunk 2077 featuring Keanu Reeves to release on April 16th, 2020 Last year, CD Projekt Red, the creator of Cyberpunk 2077 said that E3 2019 will be its “most important E3” ever and we cannot agree more. Keanu Reeves aka John Wick himself came to announce the release date of Cyberpunk 2077, which is April 16th, 2020. The trailer of the game ended with the biggest surprise for the audience: the appearance of Reeves’ as a character apparently named “Mr. Fusion.” The crowd went wild as soon as Reeves took to the stage to promote Cyberpunk 2077. When the actor said that walking in the streets of Cyberpunk 2077 will be breathtaking, a guy from the crowd yelled, "you're breathtaking." To which Reeves kindly replied: https://twitter.com/Xbox/status/1137854943006605312 The guy from the crowd was YouTuber Peter Sark, who shared on Twitter that "Keanu Reeves just announced to the world that I'm breathtaking." https://twitter.com/petertheleader/status/1137846108305014784 CD Projekt Red is now giving him a free collector’s edition copy of the game, which is amazing! For everyone else, don’t be upset as you can also pre-order Cyberpunk 2077’s physical and collector's edition from their official website. Though as xCloud, attendees will not be able to get a hands-on trial now, they will still be able to see the demo presentation. The demo is happening at the South Hall in the LA Convention Center, booth 1023, on June 11-13th. The new Microsoft Flight Simulator is powered by Azure cloud AI Microsoft showcased a new installment of its long-running Microsoft Flight Simulator series. Powered by Azure cloud artificial intelligence and satellite data, this updated simulator is capable of rendering amazingly real visuals. Though not many details have been shared, its trailer shows a stunning real-time 4K footage of lifelike landscapes and aircraft. Have a look at it yourself! https://youtu.be/ReDDgFfWlS4 Though this simulator has been PC-only in the past, this newly updated simulator is coming to Xbox One and will also be available via Xbox Game Pass. The specific release dates are unknown but they're expected to be out next year. Double Fine joins Xbox Game Studios At the event, Tim Schafer, the founder of Double Fine, shared that his company has now joined Microsoft’s ever-growing gaming studio. Double Fine Productions is the studio behind games like Psychonauts, Brutal Legend, Broken Age. He jokingly said, "For the last 19 years, we've been independent. Then Microsoft came to us and said, 'What if we gave you a bunch of money.' And I said 'OK, yeah.'" Schafer posted another video on YouTube explaining what this means for the company’s existing commitments. He shared that Psychonauts 2 will be provided to crowdfunders on the platforms they chose, but going forward the company will focus on "Xbox, Game Pass, and PC.” https://youtu.be/uR9yKz2C3dY These were just a few key announcements from the event. To know more, you can watch Microsoft keynote on YouTube: https://www.youtube.com/watch?v=zeYQ-kPF0iQ 12 Visual Studio Code extensions that Node.js developers will love [Sponsored by Microsoft] 5 reasons Node.js developers might actually love using Azure [Sponsored by Microsoft] Microsoft introduces Service Mesh Interface (SMI) for interoperability across different service mesh technologies

Microsoft Build 2019: Microsoft showcases new updates to MS 365 platform with focus on AI and developer productivity

Sugandha Lahoti
07 May 2019
10 min read
At the ongoing Microsoft Build 2019 conference, Microsoft has announced a ton of new features and tool releases with a focus on innovation using AI and mixed reality with the intelligent cloud and the intelligent edge. In his opening keynote, Microsoft CEO Satya Nadella outlined the company’s vision and developer opportunity across Microsoft Azure, Microsoft Dynamics 365 and IoT Platform, Microsoft 365, and Microsoft Gaming. “As computing becomes embedded in every aspect of our lives, the choices developers make will define the world we live in,” said Satya Nadella, CEO, Microsoft. “Microsoft is committed to providing developers with trusted tools and platforms spanning every layer of the modern technology stack to build magical experiences that create new opportunity for everyone.” https://youtu.be/rIJRFHDr1QE Increasing developer productivity in Microsoft 365 platform Microsoft Graph data connect Microsoft Graphs are now powered with data connectivity, a service that combines analytics data from the Microsoft Graph with customers’ business data. Microsoft Graph data connect will provide Office 365 data and Microsoft Azure resources to users via a toolset. The migration pipelines are deployed and managed through Azure Data Factory. Microsoft Graph data connect can be used to create new apps shared within enterprises or externally in the Microsoft Azure Marketplace. It is generally available as a feature in Workplace Analytics and also as a standalone SKU for ISVs. More information here. Microsoft Search Microsoft Search works as a unified search experience across all Microsoft apps-  Office, Outlook, SharePoint, OneDrive, Bing and Windows. It applies AI technology from Bing and deep personalized insights surfaced by the Microsoft Graph to personalized searches. Other features included in Microsoft Search are: Search box displacement Zero query typing and key-phrase suggestion feature Query history feature, and personal search query history Administrator access to the history of popular searches for their organizations, but not to search history for individual users Files/people/site/bookmark suggestions Microsoft Search will begin publicly rolling out to all Microsoft 365 and Office 365 commercial subscriptions worldwide at the end of May. Read more on MS Search here. Fluid Framework As the name suggests Microsoft's newly launched Fluid framework allows seamless editing and collaboration between different applications. Essentially, it is a web-based platform and componentized document model that allows users to, for example, edit a document in an application like Word and then share a table from that document in Microsoft Teams (or even a third-party application) with real-time syncing. Microsoft says Fluid can translate text, fetch content, suggest edits, perform compliance checks, and more. The company will launch the software developer kit and the first experiences powered by the Fluid Framework later this year on Microsoft Word, Teams, and Outlook. Read more about Fluid framework here. Microsoft Edge new features Microsoft Build 2019 paved way for a bundle of new features to Microsoft’s flagship web browser, Microsoft Edge. New features include: Internet Explorer mode: This mode integrates Internet Explorer directly into the new Microsoft Edge via a new tab. This allows businesses to run legacy Internet Explorer-based apps in a modern browser. 
Privacy Tools: Additional privacy controls which allow customers to choose from 3 levels of privacy in Microsoft Edge—Unrestricted, Balanced, and Strict. These options limit third parties to track users across the web.  “Unrestricted” allows all third-party trackers to work on the browser. “Balanced” prevents third-party trackers from sites the user has not visited before. And “Strict” blocks all third-party trackers. Collections: Collections allows users to collect, organize, share and export content more efficiently and with Office integration. Microsoft is also migrating Edge as a whole over to Chromium. This will make Edge easier to develop for by third parties. For more details, visit Microsoft’s developer blog. New toolkit enhancements in Microsoft 365 Platform Windows Terminal Windows Terminal is Microsoft’s new application for Windows command-line users. Top features include: User interface with emoji-rich fonts and graphics-processing-unit-accelerated text rendering Multiple tab support and theming and customization features Powerful command-line user experience for users of PowerShell, Cmd, Windows Subsystem for Linux (WSL) and all forms of command-line application Windows Terminal will arrive in mid-June and will be delivered via the Microsoft Store in Windows 10. Read more here. React Native for Windows Microsoft announced a new open-source project for React Native developers at Microsoft Build 2019. Developers who prefer to use the React/web ecosystem to write user-experience components can now leverage those skills and components on Windows by using “React Native for Windows” implementation. React for Windows is under the MIT License and will allow developers to target any Windows 10 device, including PCs, tablets, Xbox, mixed reality devices and more. The project is being developed on GitHub and is available for developers to test. More mature releases will follow soon. Windows Subsystem for Linux 2 Microsoft rolled out a new architecture for Windows Subsystem for Linux: WSL 2 at the MSBuild 2019. Microsoft will also be shipping a fully open-source Linux kernel with Windows specially tuned for WSL 2. New features include massive file system performance increases (twice as much speed for file-system heavy operations, such as Node Package Manager install). WSL also supports running Linux Docker containers. The next generation of WSL arrives for Insiders in mid-June. More information here. New releases in multiple Developer Tools .NET 5 arrives in 2020 .NET 5 is the next major version of the .NET Platform which will be available in 2020. .NET 5 will have all .NET Core features as well as more additions: One Base Class Library containing APIs for building any type of application More choice on runtime experiences Java interoperability will be available on all platforms. Objective-C and Swift interoperability will be supported on multiple operating systems .NET 5 will provide both Just-in-Time (JIT) and Ahead-of-Time (AOT) compilation models to support multiple compute and device scenarios. .NET 5 also will offer one unified toolchain supported by new SDK project types as well as a flexible deployment model (side-by-side and self-contained EXEs) Detailed information here. ML.NET 1.0 ML.NET is Microsoft’s open-source and cross-platform framework that runs on Windows, Linux, and macOS and makes machine learning accessible for .NET developers. Its new version, ML.NET 1.0, was released at the Microsoft Build Conference 2019 yesterday. 
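To give a feel for the style of API this framework exposes, here is a minimal, self-contained sketch of training and consuming a regression model with ML.NET 1.0. The HouseData and PricePrediction classes and the tiny in-memory dataset are hypothetical examples introduced for illustration, not something from the announcement.

```csharp
using System;
using Microsoft.ML;
using Microsoft.ML.Data;

// Hypothetical input/output schema for a simple price-prediction task.
public class HouseData
{
    public float Size { get; set; }
    public float Price { get; set; }
}

public class PricePrediction
{
    [ColumnName("Score")]
    public float Price { get; set; }
}

public static class Program
{
    public static void Main()
    {
        var mlContext = new MLContext();

        // Tiny in-memory training set, purely for illustration.
        var data = new[]
        {
            new HouseData { Size = 1.1f, Price = 1.2f },
            new HouseData { Size = 1.9f, Price = 2.3f },
            new HouseData { Size = 2.8f, Price = 3.0f },
        };
        IDataView trainingData = mlContext.Data.LoadFromEnumerable(data);

        // Feature concatenation followed by a regression trainer.
        var pipeline = mlContext.Transforms
            .Concatenate("Features", nameof(HouseData.Size))
            .Append(mlContext.Regression.Trainers.Sdca(labelColumnName: nameof(HouseData.Price)));

        ITransformer model = pipeline.Fit(trainingData);

        // Single prediction through a prediction engine.
        var engine = mlContext.Model.CreatePredictionEngine<HouseData, PricePrediction>(model);
        var prediction = engine.Predict(new HouseData { Size = 2.5f });
        Console.WriteLine($"Predicted price: {prediction.Price}");
    }
}
```

The Model Builder and CLI previews mentioned below generate C# along broadly these same lines, built around the same MLContext entry point.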
Some new features in this release are: Automated Machine Learning Preview: Transforms input data by selecting the best performing ML algorithm with the right settings. AutoML support in ML.NET is in preview and currently supports Regression and Classification ML tasks. ML.NET Model Builder Preview: Model Builder is a simple UI tool for developers which uses AutoML to build ML models. It also generates model training and model consumption code for the best performing model. ML.NET CLI Preview: ML.NET CLI is a dotnet tool which generates ML.NET Models using AutoML and ML.NET. The ML.NET CLI quickly iterates through a dataset for a specific ML Task and produces the best model. Visual Studio IntelliCode, Microsoft’s tool for AI-assisted coding Visual Studio IntelliCode, Microsoft’s AI-assisted coding is now generally available. It is essentially an enhanced IntelliSense, Microsoft’s extremely popular code completion tool. Intellicode is trained by using the code of thousands of open-source projects from GitHub that have at least 100 stars. It is available for C# and XAML for Visual Studio and Java, JavaScript, TypeScript, and Python for Visual Studio Code. IntelliCode also is included by default in Visual Studio 2019, starting in version 16.1 Preview 2. Additional capabilities, such as custom models, remain in public preview. Visual Studio 2019 version 16.1 Preview 2 Visual Studio 2019 version 16.1 Preview 2 release includes IntelliCode and the GitHub extensions by default. It also brings out of preview the Time Travel Debugging feature introduced with version 16.0. Also includes multiple performances and productivity improvements for .NET and C++ developers. Gaming and Mixed Reality Minecraft AR game for mobile devices At the end of Microsoft’s Build 2019 keynote yesterday, Microsoft teased a new Minecraft game in augmented reality, running on a phone. The teaser notes that more information will be coming on May 17th, the 10-year anniversary of Minecraft. https://www.youtube.com/watch?v=UiX0dVXiGa8 HoloLens 2 Development Edition and unreal engine support The HoloLens 2 Development Edition includes a HoloLens 2 device, $500 in Azure credits and three-months free trials of Unity Pro and Unity PiXYZ Plugin for CAD data, starting at $3,500 or as low as $99 per month. The HoloLens 2 Development Edition will be available for preorder soon and will ship later this year. Unreal Engine support for streaming and native platform integration will be available for HoloLens 2 by the end of May. Intelligent Edge and IoT Azure IoT Central new features Microsoft Build 2019 also featured new additions to Azure IoT Central, an IoT software-as-a-service solution. Better rules processing and customs rules with services like Azure Functions or Azure Stream Analytics Multiple dashboards and data visualization options for different types of users Inbound and outbound data connectors, so that operators can integrate with   systems Ability to add custom branding and operator resources to an IoT Central application with new white labeling options New Azure IoT Central features are available for customer trials. IoT Plug and Play IoT Plug and Play is a new, open modeling language to connect IoT devices to the cloud seamlessly without developers having to write a single line of embedded code. IoT Plug and Play also enable device manufacturers to build smarter IoT devices that just work with the cloud. Cloud developers will be able to find IoT Plug and Play enabled devices in Microsoft’s Azure IoT Device Catalog. 
The first device partners include Compal, Kyocera, and STMicroelectronics, among others. Azure Maps Mobility Service Azure Maps Mobility Service is a new API which provides real-time public transit information, including nearby stops, routes and trip intelligence. This API also will provide transit services to help with city planning, logistics, and transportation. Azure Maps Mobility Service will be in public preview in June. Read more about Azure Maps Mobility Service here. KEDA: Kubernetes-based event-driven autoscaling Microsoft and Red Hat collaborated to create KEDA, which is an open-sourced project that supports the deployment of serverless, event-driven containers on Kubernetes. It can be used in any Kubernetes environment — in any public/private cloud or on-premises such as Azure Kubernetes Service (AKS) and Red Hat OpenShift. KEDA has support for built-in triggers to respond to events happening in other services or components. This allows the container to consume events directly from the source, instead of routing through HTTP. KEDA also presents a new hosting option for Azure Functions that can be deployed as a container in Kubernetes clusters. Securing elections and political campaigns ElectionGuard SDK and Microsoft 365 for Campaigns ElectionGuard, is a free open-source software development kit (SDK) as an extension of Microsoft’s Defending Democracy Program to enable end-to-end verifiability and improved risk-limiting audit capabilities for elections in voting systems. Microsoft365 for Campaigns provides security capabilities of Microsoft 365 Business to political parties and individual candidates. More details here. Microsoft Build is in its 6th year and will continue till 8th May. The conference hosts over 6,000 attendees with early 500 student-age developers and over 2,600 customers and partners in attendance. Watch it live here! Microsoft introduces Remote Development extensions to make remote development easier on VS Code Docker announces a collaboration with Microsoft’s .NET at DockerCon 2019 How Visual Studio Code can help bridge the gap between full-stack development and DevOps [Sponsered by Microsoft]


Google DeepMind’s AI AlphaStar beats StarCraft II pros TLO and MaNa; wins 10-1 against the gamers

Natasha Mathur
25 Jan 2019
5 min read
It was two days back when the Blizzard team announced an update about the demo of the progress made by Google’s DeepMind AI at StarCraft II, a real-time strategy video game. The demo was presented yesterday over a live stream where it showed, AlphaStar, DeepMind’s StarCraft II AI program, beating the top two professional StarCraft II players, TLO and MaNa. The demo presented a series of five separate test matches that were held earlier on 19 December, against Team Liquid’s Grzegorz "MaNa" Komincz, and Dario “TLO” Wünsch. AlphaStar beat the two professional players, managing to score 10-0 in total (5-0 against each). After the 10 straight wins, AlphaStar finally got beaten by MaNa in a live match streamed by Blizzard and DeepMind. https://twitter.com/LiquidTLO/status/1088524496246657030 https://twitter.com/Liquid_MaNa/status/1088534975044087808 How does AlphaStar learn? AlphaStar learns by imitating the basic micro and macro-strategies used by players on the StarCraft ladder. A neural network was trained initially using supervised learning from anonymised human games released by Blizzard. This initial AI agent managed to defeat the “Elite” level AI in 95% of games. Once the agents get trained from human game replays, they’re then trained against other competitors in the “AlphaStar league”. This is where a multi-agent reinforcement learning process starts. New competitors are added to the league (branched from existing competitors). Each of these agents then learns from games against other competitors. This ensures that each competitor performs well against the strongest strategies, and does not forget how to defeat earlier ones.                                          AlphaStar As the league continues to progress, new counter-strategies emerge, that can defeat the earlier strategies. Also, each agent has its own learning objective which gets adapted during the training. One agent might have an objective to beat one specific competitor, while another one might want to beat a whole distribution of competitors. So, the neural network weights of each agent get updated using reinforcement learning, from its games against competitors. This helps optimise their personal learning objective. How does AlphaStar play the game? TLO and MaNa, professional StarCraft players, can issue hundreds of actions per minute (APM) on average. AlphaStar had an average APM of around 280 in its games against TLO and MaNa, which is significantly lower than the professional players. This is because AlphaStar starts its learning using replays and thereby mimics the way humans play the game. Moreover, AlphaStar also showed the delay between observation and action of 350ms on average.                                                    AlphaStar AlphaStar might have had a slight advantage over the human players as it interacted with the StarCraft game engine directly via its raw interface. What this means is that it could observe the attributes of its own as well as its opponent’s visible units on the map directly, basically getting a zoomed out view of the game. Human players, however, have to split their time and attention to decide where to focus the camera on the map. But, the analysis results of the game showed that the AI agents “switched context” about 30 times per minute, akin to MaNa or TLO. This proves that AlphaStar’s success against MaNa and TLO is due to its superior macro and micro-strategic decision-making. It isn’t the superior click-rate, faster reaction times, or the raw interface, that made the AI win. 
MaNa managed to beat AlphaStar in one match DeepMind also developed a second version of AlphaStar, which played like human players, meaning that it had to choose when and where to move the camera. Two new agents were trained, one that used the raw interface and the other that learned to control the camera, against the AlphaStar league.                                                           AlphaStar “The version of AlphaStar using the camera interface was almost as strong as the raw interface, exceeding 7000 MMR on our internal leaderboard”, states the DeepMind team. But, the team didn’t get the chance to test the AI against a human pro prior to the live stream.   In a live exhibition match, MaNa managed to defeat the new version of AlphaStar using the camera interface, which was trained for only 7 days. “We hope to evaluate a fully trained instance of the camera interface in the near future”, says the team. DeepMind team states AlphaStar’s performance was initially tested against TLO, where it won the match. “I was surprised by how strong the agent was..(it) takes well-known strategies..I hadn’t thought of before, which means there may still be new ways of playing the game that we haven’t fully explored yet,” said TLO. The agents were then trained for an extra one week, after which they played against MaNa. AlphaStar again won the game. “I was impressed to see AlphaStar pull off advanced moves and different strategies across almost every game, using a very human style of gameplay I wouldn’t have expected..this has put the game in a whole new light for me. We’re all excited to see what comes next,” said MaNa. Public reaction to the news is very positive, with people congratulating the DeepMind team for AlphaStar’s win: https://twitter.com/SebastienBubeck/status/1088524371285557248 https://twitter.com/KaiLashArul/status/1088534443718045696 https://twitter.com/fhuszar/status/1088534423786668042 https://twitter.com/panicsw1tched/status/1088524675540549635 https://twitter.com/Denver_sc2/status/1088525423229759489 To learn about the strategies developed by AlphaStar, check out the complete set of replays of AlphaStar's matches against TLO and MaNa on DeepMind's website. Best game engines for Artificial Intelligence game development Deepmind’s AlphaZero shows unprecedented growth in AI, masters 3 different games Deepmind’s AlphaFold is successful in predicting the 3D structure of a protein making major inroads for AI use in healthcare


Minecraft Java team are open sourcing some of Minecraft's code as libraries

Sugandha Lahoti
08 Oct 2018
2 min read
Stockholm's Minecraft Java team are open sourcing some of Minecraft's code as libraries for game developers. Developers can now use them to improve their Minecraft mods, use them in their own projects, or help improve pieces of the Minecraft Java engine. The team will open up different libraries gradually. These libraries are open source and MIT licensed. For now, they have open sourced two libraries: Brigadier and DataFixerUpper.

Brigadier
The first library, Brigadier, takes arbitrary strings of text entered into Minecraft and turns them into an actual function that the game will perform. Basically, if you type something like /give Dinnerbone sticks into the game, it goes internally into Brigadier, which breaks it down into pieces and then tries to figure out what the player is asking the game to do with that piece of text (a toy C# illustration of this idea appears at the end of this article). Nathan Adams, a Java developer, hopes that giving the Minecraft community access to Brigadier can make it "extremely user-friendly one day." Brigadier has been available for a week now and has already seen improvements to its code and readme doc.

DataFixerUpper
Another important library of the Minecraft game engine, DataFixerUpper, is also being open sourced. When a developer adds a new feature to Minecraft, they have to change the way level data and save files are stored. DataFixerUpper converts these data formats into whatever format the game currently expects.

Also under consideration for open sourcing is the Blaze3D library, a complete rewrite of the render engine for Minecraft 1.14. You can check out the announcement on the Minecraft website. You can also download Brigadier and DataFixerUpper.

Minecraft is serious about global warming, adds a new (spigot) plugin to allow changes in climate mechanics.
Learning with Minecraft Mods
A Brief History of Minecraft Modding
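Brigadier itself is a Java library, but the idea it implements is easy to sketch. The following C# snippet is purely a conceptual illustration of turning a chat string such as /give Dinnerbone sticks into a dispatched action; it does not use Brigadier's actual API, and the ToyCommandDispatcher class and its single "give" handler are invented for this example.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Conceptual illustration only: a toy command dispatcher, not Brigadier's actual API.
public static class ToyCommandDispatcher
{
    // Registered commands: command name -> handler receiving its arguments.
    private static readonly Dictionary<string, Action<string[]>> Commands =
        new Dictionary<string, Action<string[]>>(StringComparer.OrdinalIgnoreCase)
        {
            ["give"] = args => Console.WriteLine($"give: {args[1]} -> {args[0]}"),
        };

    public static void Execute(string input)
    {
        // "/give Dinnerbone sticks" -> command "give", arguments ["Dinnerbone", "sticks"]
        var parts = input.TrimStart('/')
                         .Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);

        if (parts.Length == 0 || !Commands.TryGetValue(parts[0], out var handler))
        {
            Console.WriteLine($"Unknown or malformed command: {input}");
            return;
        }

        handler(parts.Skip(1).ToArray());
    }

    public static void Main() => Execute("/give Dinnerbone sticks");
}
```

Brigadier goes much further than this (typed argument parsing, suggestions, error reporting), but the basic job is the same: tokenize the string, match it against registered commands, and hand the arguments to the right piece of game code.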

Unity 2018.2: Unity release for this year 2nd time in a row!

Sugandha Lahoti
12 Jul 2018
4 min read
It has only been two months since the release of Unity 2018.1, and Unity is back again with its next release for this year. Unity 2018.2 builds on the features of Unity 2018.1 such as the Scriptable Render Pipeline (SRP), Shader Graph, and the Entity Component System. It also adds support for managed code debugging on iOS and Android, along with the final release of 64-bit (ARM64) support for Android devices. Let us look at the features in detail.

Scriptable Render Pipeline improvements
As mentioned above, Unity 2018.2 builds on the Scriptable Render Pipeline introduced in 2018.1. The new version comes with two additional features:
The SRP batcher: a new Unity engine inner loop for speeding up CPU rendering without compromising GPU performance. It works with the High Definition Render Pipeline (HDRP) and Lightweight Render Pipeline (LWRP), with PC DirectX 11, Metal and PlayStation 4 currently supported.
Scriptable shader variant stripping: manages the number of shader variants generated, without affecting iteration time or maintenance complexity. This leads to a dramatic reduction in player build time and data size.

Performance optimizations in Lightweight Render Pipeline and High Definition Render Pipeline
Unity 2018.2 improves the performance of the Lightweight Render Pipeline (LWRP) with optimized tile utilization. This feature adjusts the number of load-and-store operations to tiles in order to optimize the memory of mobile GPUs. It also shades lights in batches, which reduces overdraw and draw calls. Unity 2018.2 also brings better high-end visual quality to the High Definition Render Pipeline (HDRP). Improvements include volumetrics, glossy planar reflection, geometric specular AA, Proxy Screen Space Reflection & Refraction, mesh decals, and Shadow Mask.

Improvements in C# Job System, Entity Component System and Burst Compiler
Unity 2018.2 introduces new Reactive system samples in the Entity Component System (ECS) to let developers respond when there are changes to component state and emulate event-driven behavior. Burst compiling for ECS is now available on all editor platforms (Windows, Mac, Linux), and game developers will be able to build AOT for standalone players (Desktop, PS4, Xbox, iOS and Android). The C# Job System allows developers to take full advantage of today's multicore processors and write parallel code without having to manage threads themselves (a short example script is sketched at the end of this article).

Updates to Shader Graph
Shader Graph, introduced as a preview package in Unity 2018.1, allows developers to build shaders visually. It has added further improvements such as High Definition Render Pipeline (HDRP) support, manual modification of vertex position, editing of the Reference name for a property, editable paths for graphs, Texture 2D and 3D arrays, and more.

Texture Mipmap Streaming
Game developers can now stream texture mipmaps into memory on demand to reduce the texture memory requirements of a Unity application. This feature speeds up initial load time, gives developers more control, and is simple to enable and manage.

Particle System improvements
Unity 2018.2 has 7 major improvements to the Particle System:
Support for eight UVs, to use more custom data.
MinMaxCurve and MinMaxGradient types in custom scripts to match the style used by the Particle System UI.
Particle Systems now convert colors into linear space, when appropriate, before uploading them to the GPU.
Two new modes in the Shape module to emit from a sprite or SpriteRenderer component.
Two new APIs for baking the geometry of a Particle System into a mesh.
Show Only Selected (aka Solo Mode) with the Play/Restart/Stop controls.
Shaders that use separate alpha textures can now be used with particles, while using sprites in the Texture Sheet Animation module.

Unity Hub
Unity Hub (v1.0) is a new tool, to be released soon, designed to streamline onboarding and setup processes for all users. It is a centralized location to manage all Unity projects, simplifying how developers find, download, and manage Unity Editor licenses and add-on components. Hub 1.0 will ship with:
Project templates
Custom install location
Adding Asset Store packages to new projects
Modifying the project build target
Adding Editor components post-installation

There are additional features like Vulkan support for the Editor on Windows and Linux and improvements to the Progressive Lightmapper, 2D games, the SVG importer, and more. Unity 2018.2 will also support .java and .cpp source files as plugins in a Unity project, along with updates to Cinematics and the Unity core engine. In total, there are 183 improvements and 1426 fixes in the Unity 2018.2 release. Refer to the release notes to view the full list of new features, improvements and fixes.

Put your game face on! Unity 2018.1 is now available
Unity plugins for augmented reality application development
Unity 2D & 3D game kits simplify Unity game development for beginners
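As promised above, here is a minimal sketch of the C# Job System in use. It assumes the Unity.Jobs and Unity.Collections namespaces that ship with these Unity versions; the ScaleJob struct, the array size, and the scale factor are purely illustrative.

```csharp
using Unity.Collections;
using Unity.Jobs;
using UnityEngine;

public class JobSystemExample : MonoBehaviour
{
    // A small job that scales every element of a NativeArray on a worker thread.
    struct ScaleJob : IJob
    {
        public NativeArray<float> Values;
        public float Factor;

        public void Execute()
        {
            for (int i = 0; i < Values.Length; i++)
            {
                Values[i] *= Factor;
            }
        }
    }

    void Start()
    {
        var values = new NativeArray<float>(1024, Allocator.TempJob);
        for (int i = 0; i < values.Length; i++)
        {
            values[i] = i;
        }

        var job = new ScaleJob { Values = values, Factor = 2f };

        // Schedule the job for a worker thread, then wait for it to finish.
        JobHandle handle = job.Schedule();
        handle.Complete();

        Debug.Log($"values[10] after the job: {values[10]}"); // 20
        values.Dispose();
    }
}
```

In real code you would typically schedule many such jobs (or an IJobParallelFor) and only call Complete() when the results are actually needed, which is where the multicore gains come from.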


Implementing Unity 2017 Game Audio [Tutorial]

Amarabha Banerjee
11 Jul 2018
11 min read
Background music and audio effects play a big role in determining any game's success or failure. Creating engaging game audio, importing audio from other sources, and customizing audio FX clips to fit the game flow are vital tasks for any game developer. In this article, we are going to discuss how to create, customize and use third-party audio in Unity games. This article is a part of the book titled "Unity 2017 2D Game Development Projects" written by Lauren S. Ferro & Francesco Sapio.

Basics of audio and sound FX in Unity
Adding sound in Unity is simple enough, but you can implement it better if you understand how sound travels. While this is extremely important in 3D games because of the added third dimension, it is quite important in 2D games too, just in a slightly different way. Before we discuss the differences, let's first learn what sound is and how it works, with a quick physics lesson.

Listening to the physics behind sound
What we hear is not just music, sound effects (FX), and ambient background noise: sound is a longitudinal, mechanical (vibrating) wave. These "waves" can pass through different mediums (for example, air, water, your desk) but not through a vacuum. Therefore, no one will hear your screams in space. Sound is a variation in pressure. A region of increased pressure on a sound wave is called a compression (or condensation). A region of decreased pressure on a sound wave is called a rarefaction (or dilation). You can see this concept illustrated in the following image:

The density of certain materials, such as glass and plastic, allows a certain amount of light to pass through them, and also influences how that light behaves when it does, such as bending/refracting (that is, the index of refraction). Various materials (for example, liquids, solids, and gases) have the same kind of effect when it comes to allowing sound waves to pass. Some materials allow sound to pass easily, while others dampen it. Therefore, sound studios/booths are made of certain materials to remove things such as echoes. It has a similar effect to screaming underwater that there is a shark: it won't be as loud as if you scream from your kitchen to tell everyone dinner is ready.

Another thing to consider is what is known as the Doppler Effect. The Doppler Effect results from an increase (or decrease) in the frequency of sound (and other things, such as light or ripples in water) as the source of the sound and the person/player move toward (or away from) each other. A simple example of this is when an emergency vehicle passes by you. You will notice that the sound of the siren is different before it reaches you, when it is near you, and once it passes you. Considering this example, it is because there is a sudden change in pitch in the passing siren. This is visualized in the following image:

So, what is the point of knowing this when it comes to developing games? Well, this is particularly important when creating games, more so in 3D, in relation to how sounds are heard by players in many ways. For example, imagine that you're nearing a creek, but there are dense bushes, large pine trees, and rugged terrain. The sound that the creek makes from where the player is in the game world is going to be very different than if it were a completely flat plane free from any vegetation.
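As a quick aside that is not in the original excerpt but may help make the siren example concrete, the standard textbook form of the Doppler shift (for a stationary medium, with the source and listener moving along the line between them) is:

f_observed = f_source × (v ± v_listener) / (v ∓ v_source)

where v is the speed of sound in the medium, the upper signs apply while the source and listener approach each other, and the lower signs once they are moving apart, which is why the siren's pitch drops the moment it passes you.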
When it comes to 2D games, this is not necessarily as important because we are working without depth (the z-axis), but similar principles apply when players are navigating around a top-down environment and come near a point of interest. You don't want that sound to be as loud when the player is far away as it would be if they were up close. Within the context of 2D and 3D sounds, Unity has a parameter for exactly this, called Spatial Blend. We will discuss this more in the Audio Source section. There are several ways that you can create audio within Unity, from importing your own/downloaded sounds to recording it live. Like images, Unity can import most standard audio file formats: AIFF, WAV, MP3, and Ogg, and tracker modules (for example, short instrument samples): .xm, .mod, .it, and .s3m.

Importing audio
Importing audio into Unity follows the same process as importing any other type of asset. We will cover the basics of what you need to know in the following sections.

Audio Listener
Have you heard the saying, If a tree falls in a forest and no one is there to hear it, does it still make a sound? Well, in Unity, if there is nothing to hear your audio, then the answer is no. This is because Unity has a component called an Audio Listener, which works like a microphone. To locate the Audio Listener, click the Main Camera, and then look over at the Inspector; it should be located near the bottom, as in the following image: If, for some reason, it isn't there, you can always add it by clicking the Add Component button, typing Audio Listener, and selecting it from the list, as in the following image: The important thing to remember is that the Audio Listener is the point from which the sound is heard, so it makes sense that it is typically placed on the Main Camera, but it can also be placed on a Player. A single scene can only have one Audio Listener; therefore, it's best to experiment to find the placement that works best for your game. It is also important to remember that an Audio Listener works together with an Audio Source, and must have one to work.

Audio Source
The Audio Source is where the sound comes from. This can be many different objects within a Scene as well as background music and sound FX. The Audio Source has several parameters; later we will briefly discuss the main ones. To see more information about all the parameters, you can check out the official Unity documentation at the following link: https://docs.unity3d.com/2017.2/Documentation/Manual/class-AudioSource.html

You may be wondering why we have a slider for Spatial Blend, instead of a checkbox. This is because we need to fade between 2D and 3D, and there is a good reason for this. Imagine that you're in a game and you're looking at a screen on a computer. In this case, your camera is going to be fixated on whatever is on the screen. This could be checking an inventory or even entering nuclear codes. In any case, you will want the sound that is being emitted from the screen to be the focal audio. Therefore, the slider in the Spatial Blend parameter is going to be closer to 2D. This is because you may still want ambient noises that are in the background incorporated into the experience. So, if you are closer to 2D, the sound will be the same in both speakers (or headphones). The closer you slide toward 3D, the more the volume will depend on the proximity of the Audio Listener to the Audio Source. Sliding toward 3D also allows things such as the Doppler Effect to be more noticeable, as they take place in 3D space. There are also specific settings for these things.
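If you prefer to set the Audio Source options discussed above from code rather than the Inspector, a minimal sketch might look like the following. The component name, the default values, and the choice of Linear rolloff are assumptions made for illustration, not recommendations from the book.

```csharp
using UnityEngine;

// Attach to any GameObject that emits sound, for example a point of interest.
[RequireComponent(typeof(AudioSource))]
public class PointOfInterestAudio : MonoBehaviour
{
    [Range(0f, 1f)]
    public float spatialBlend = 0.9f;   // 0 = fully 2D, 1 = fully 3D

    public float maxAudibleDistance = 15f;

    void Start()
    {
        AudioSource source = GetComponent<AudioSource>();

        // The closer this is to 1, the more the volume depends on how near
        // the Audio Listener is to this Audio Source.
        source.spatialBlend = spatialBlend;

        // How volume falls off with distance; Linear is easy to reason about
        // in a top-down 2D game.
        source.rolloffMode = AudioRolloffMode.Linear;
        source.maxDistance = maxAudibleDistance;

        // Strength of the Doppler effect (0 disables it entirely).
        source.dopplerLevel = 1f;
    }
}
```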
Choosing sounds for background and FX
When it comes to picking the right kind of music for your game, just like the aesthetics, you need to think about what kind of "mood" you're trying to create. Is it a somber or uplifting kind of mood? Are you ironically contrasting the graphics (for example, happy) with gloomy music? There is really no right or wrong when it comes to your musical selection if you can communicate to the player what they are supposed to feel, at least in general. I have provided some example "moods" that you can apply to this game. Of course, you're welcome to choose sounds other than these that are more to your liking! All the sounds that we will use will be from the Free Sound website: https://freesound.org. You will need to create an account to download them, but it's free and there are many great sounds that you can use when creating games. In saying this, if you're intending to create your games for commercial purposes, please make sure that you check the Terms and Conditions on Free Sound to make sure that you're not violating any of them. Each track will have its own attribution license, including those for commercial use, so always check! For this project, we're going to stick with the "Happy" version. But I encourage you to experiment!

Happy
Collecting Angel Cakes: Chime sound (https://freesound.org/people/jgreer/sounds/333629/)
Being attacked by the enemy: Cat Purr/Twit4.wav (https://freesound.org/people/steffcaffrey/sounds/262309/)
Collecting health: correct (https://freesound.org/people/ertfelda/sounds/243701/)
Collecting bonuses: Signal-Ring 1 (https://freesound.org/people/Vendarro/sounds/399315/)
Background: Kirmes_Orgel_004_2_Rosamunde.mp3 (https://freesound.org/people/bilwiss/sounds/24720/)

Sad
Collecting Angel Cakes: Glass Tap (https://freesound.org/people/Unicornaphobist/sounds/262958/)
Being attacked by the enemy: musicbox1.wav (https://freesound.org/people/sandocho/sounds/17700/)
Collecting health: chime.wav (https://freesound.org/people/Psykoosiossi/sounds/398661/)
Collecting bonuses: short metallic hit (https://freesound.org/people/waveplay/sounds/366400/)
Background: improvised chill 8 (https://freesound.org/people/waveplay/sounds/238529/)

Retro
Collecting Angel Cakes: TF_Buzz.flac (https://freesound.org/people/copyc4t/sounds/235652/)
Being attacked by the enemy: Game Die (https://freesound.org/people/josepharaoh99/sounds/364929/)
Collecting health: galanghee.wav (https://freesound.org/people/metamorphmuses/sounds/91387/)
Collecting bonuses: SW05.WAV (https://freesound.org/people/mad-monkey/sounds/66684/)
Background: Angel-techno pop music loop (https://freesound.org/people/frankum/sounds/387410/)

Not everyone can hear well or at all, so it pays to keep this in mind when you're developing games that may rely on audio to provide feedback to players. While subtitles can make dialogue more accessible, sound FX can be a little trickier. Therefore, when it comes to implementing audio, think about how you could complement it, even if the effect that you're trying to achieve with sound is subtle. For example, if you play a "bleep" for every item collected, perhaps you could associate it with a slight glow or flash of color. The choice is up to you, but it's something to keep in mind.
On the other end of the spectrum, those who can hear might also want to turn the sounds off. We've all played that game (or several) that really begins to become irritating, so make sure that you also check this while you're playtesting. You don't want an awesome game to suck because your audio is intolerable and there is not an option to TURN THE SOUND OFF! You’ve been warned. Integrating background music in our game Once you choose which music better suits the kind of feel you want to create for your game, import both the sound and the music inside the project. If you want, you can create two folders for them, SoundFX and Music, respectively. Now, in our scene, we need to do the following: Create an empty game object (by clicking GameObject | Create empty), rename it Background Music. Attach an Audio Source component (in the Inspector, click Add Component | Audio | Audio Source). Next, we need to drag and drop the music we decided on/downloaded into the AudioClip variable and check the Loop option, so the background music will never stop. Also, check that Play on Awake is checked as well, even if it should be by default, so the music will start playing as soon as the game starts. Hit Play to start the game. Lastly, adjust the volume, depending on the music you chose. This may require a bit of playtesting (remember to set the value after the play mode, because the settings you adjust during play mode are not kept). In the end, this is how the component should look (in the image, I chose the happy theme music, and set a Volume of 0.1): Here in this article we have shown you how to incorporate game audio effects and background music in Unity games. If you liked this article, then check out the complete book Unity 2017 2D Game Development Projects. AI for Unity game developers: How to emulate real-world senses in your NPC agent Working with Unity Variables to script powerful Unity 2017 games How to use arrays, lists, and dictionaries in Unity for 3D game development
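For completeness, here is a script-only equivalent of the Background Music steps described earlier in this article, in case you would rather wire things up in code than in the Inspector. The class name and the DontDestroyOnLoad call are my additions; the Loop, Play on Awake, and 0.1 volume settings mirror the steps in the excerpt.

```csharp
using UnityEngine;

// A script-based version of the "Background Music" object described above.
public class BackgroundMusic : MonoBehaviour
{
    public AudioClip musicClip;      // drag your chosen track here in the Inspector

    [Range(0f, 1f)]
    public float volume = 0.1f;      // the excerpt settles on 0.1 for the happy theme

    void Awake()
    {
        AudioSource source = gameObject.AddComponent<AudioSource>();
        source.clip = musicClip;
        source.loop = true;          // same as ticking Loop in the Inspector
        source.playOnAwake = true;   // kept for parity with the editor steps
        source.volume = volume;
        source.Play();

        // Optional: keep the music playing across scene loads.
        DontDestroyOnLoad(gameObject);
    }
}
```

As with the editor-based setup, remember to playtest the volume value, and to tweak it outside play mode so the change is actually saved.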