
Tech News - Game Development

93 Articles
Savia Lobo
19 Nov 2019
3 min read

Valve announces Half-Life: Alyx, its first flagship VR game

Yesterday, Valve Corporation, the popular American video game developer, announced Half-Life: Alyx, the first new game in the popular Half-Life series in over a decade. The company tweeted that it will unveil the first look on Thursday, 21st November 2019, at 10 am Pacific Time.

https://twitter.com/valvesoftware/status/1196566870360387584

Half-Life: Alyx, a brand-new game in the Half-Life universe, is designed exclusively for PC virtual reality systems (Valve Index, Oculus Rift, HTC Vive, Windows Mixed Reality). Valve has created some of the most influential and critically acclaimed PC games ever made. However, “Valve has famously never finished either of its Half-Life supposed trilogies of games. After Half-Life and Half-Life 2, the company created Half-Life: Episode 1 and Half-Life: Episode 2, but no third game in the series,” the Verge reports.

Ars Technica reveals, “The game's name confirms what has been loudly rumored for months: that you will play this game from the perspective of Alyx Vance, a character introduced in 2004's Half-Life 2. Instead of stepping forward in time, HLA will rewind to the period between the first two mainline Half-Life games.”

“A data leak from Valve's Source 2 game engine, as uncovered in September by Valve News Network, pointed to a new control system labeled as the "Grabbity Gloves" in its codebase. Multiple sources have confirmed that this is indeed a major control system in HLA,” Ars Technica claims. The Grabbity Gloves can also be described as ‘magnet gloves’: they let players point at distant objects and attract them to their hands. Valve has already announced plans to support all major PC VR systems for its next VR game, and these gloves seem like the right control scheme to scale to whatever controllers come to VR.

Many gamers are excited to check out this Half-Life installment and are waiting to see whether the company delivers on its promises.
A user on Hacker News commented, “Wonder what Valve is doubling down with this title? It seems like the previous games were all ground-breaking narratives, but with most of the storytellers having left in the last few years, I'd be curious to see what makes this different than your standard VR games.”

Another user commented, “From the tech side it was the heavy, and smart, use of scripting that made HL1 stand out. With HL2 it was the added physics engine trough the change to Source, back then that used to be a big deal and whole gameplay mechanics revolve around that (gravity gun). In that context, I do not really consider it that surprising for the next HL project to focus on VR because even early demos of that combination looked already very promising 5 years ago”

We will update this space after Half-Life: Alyx is unveiled on Thursday. To know more about the announcement in detail, read Ars Technica’s complete coverage.

Valve reveals new Index VR Kit with detail specs and costs upto $999
Why does Oculus CTO John Carmack prefer 2D VR interfaces over 3D Virtual Reality interfaces?
Oculus Rift S: A new VR with inside-out tracking, improved resolution and more!

Vincy Davis
04 Nov 2019
5 min read

DeepMind AI’s AlphaStar achieves Grandmaster level in StarCraft II with 99.8% efficiency

Earlier this year in January, Google’s DeepMind AI AlphaStar defeated two professional players, TLO and MaNa, at StarCraft II, a real-time strategy game. Two days ago, DeepMind announced that AlphaStar has now achieved the highest possible online competitive ranking, called Grandmaster level, in StarCraft II. This makes AlphaStar the first AI to reach the top league of a widely popular game without any restrictions.

AlphaStar used multi-agent reinforcement learning and ranked above 99.8% of officially ranked human players. It achieved Grandmaster level for all three StarCraft II races - Protoss, Terran, and Zerg. The DeepMind researchers have published the details of AlphaStar in the paper titled ‘Grandmaster level in StarCraft II using multi-agent reinforcement learning’.

https://twitter.com/DeepMindAI/status/1189617587916689408

How did AlphaStar achieve the Grandmaster level in StarCraft II?

The DeepMind researchers developed a robust and flexible agent by understanding the potential and limitations of open-ended learning, which helped them make AlphaStar cope with complex real-world domains. “Games like StarCraft are an excellent training ground to advance these approaches, as players must use limited information to make dynamic and difficult decisions that have ramifications on multiple levels and timescales,” states the blog post.

StarCraft II requires players to balance high-level economic decisions with individual control of hundreds of units. Human players operate under physical constraints that limit their reaction time and rate of actions, so AlphaStar was subjected to the same constraints: it suffers delays due to network latency and computation time, and its actions per minute (APM) were capped, with peak statistics kept substantially lower than those of humans.
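The APM cap described above can be pictured as a sliding-window rate limiter. The sketch below is purely illustrative (the class name, window size, and bookkeeping are assumptions, not DeepMind's implementation):

```python
from collections import deque

class ActionRateLimiter:
    """Hypothetical sketch of an actions-per-minute cap: an action is
    allowed only if fewer than `max_apm` actions happened in the last
    `window` seconds. Not DeepMind's actual code."""

    def __init__(self, max_apm, window=60.0):
        self.max_apm = max_apm
        self.window = window
        self.timestamps = deque()  # times of recent allowed actions

    def try_act(self, now):
        """Return True if the agent may act at time `now` (in seconds)."""
        # Discard actions that have slid out of the window.
        while self.timestamps and now - self.timestamps[0] >= self.window:
            self.timestamps.popleft()
        if len(self.timestamps) < self.max_apm:
            self.timestamps.append(now)
            return True
        return False  # over the cap: the action must wait

# Toy cap of 3 actions per minute for demonstration.
limiter = ActionRateLimiter(max_apm=3)
results = [limiter.try_act(t) for t in [0.0, 1.0, 2.0, 3.0, 61.0]]
# The fourth action (t=3.0) is rejected; by t=61.0 the window has room again.
```

The same idea generalizes to AlphaStar's other timing constraints (reaction delays, limited clicks per interval) by layering additional windows.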
To align with standard human play, AlphaStar also saw only a limited portion of the map at a time, could register only a limited number of mouse clicks, and had only 22 non-duplicated actions to play every five seconds.

AlphaStar uses a combination of general-purpose techniques: neural network architectures, imitation learning, reinforcement learning, and multi-agent learning. It was initially trained on games sampled from a publicly available dataset of anonymized human replays, learning to predict the action of every player. These predictions were then used to procure a diverse set of strategies reflecting the different modes of human play.

Read More: DeepMind’s AlphaStar AI agent will soon anonymously play with European StarCraft II players

Dario “TLO” Wünsch, a professional StarCraft II player, says, “I’ve found AlphaStar’s gameplay incredibly impressive – the system is very skilled at assessing its strategic position, and knows exactly when to engage or disengage with its opponent. And while AlphaStar has excellent and precise control, it doesn’t feel superhuman – certainly not on a level that a human couldn’t theoretically achieve. Overall, it feels very fair – like it is playing a ‘real’ game of StarCraft.”

According to the paper, AlphaStar had about 10^26 possible actions available at each time step, so it had to make thousands of actions before learning whether it had won or lost a game. One of the key strategies behind AlphaStar’s performance was learning human strategies and ensuring that the agents kept exploring them throughout self-play. The researchers say, “To do this, we used imitation learning – combined with advanced neural network architectures and techniques used for language modeling – to create an initial policy which played the game better than 84% of active players.” AlphaStar also uses a latent variable to encode the distribution of opening moves from human games.
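The imitation-learning step quoted above — building an initial policy that predicts human actions from replays — can be illustrated with a drastically simplified, tabular form of behavior cloning (AlphaStar actually uses deep neural networks; the states, actions, and replay format below are invented for illustration):

```python
from collections import Counter, defaultdict

def clone_policy(replays):
    """Toy behavior cloning: count (state, action) pairs observed in
    human replays, then play the most frequent human action per state.
    A vastly simplified sketch of imitation learning, not AlphaStar's model."""
    counts = defaultdict(Counter)
    for state, action in replays:
        counts[state][action] += 1
    # Policy: state -> most common human action in that state.
    return {s: c.most_common(1)[0][0] for s, c in counts.items()}

# Hypothetical replay data: (game state, action the human took).
replays = [
    ("early_game", "build_worker"),
    ("early_game", "build_worker"),
    ("early_game", "scout"),
    ("army_ready", "attack"),
]
policy = clone_policy(replays)
```

In the real system this initial policy is only a starting point; reinforcement learning and League self-play then improve on it far beyond what cloning alone can reach.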
This helped AlphaStar preserve high-level strategies and represent many strategies within a single neural network. By combining advances in imitation learning, reinforcement learning, and the League, the researchers trained AlphaStar Final, the agent that reached Grandmaster level at the full game of StarCraft II without any modifications. AlphaStar used a camera interface, which gave it exactly the information a human player would receive. All the interfaces and restrictions imposed on AlphaStar were approved by a professional player.

Finally, the results indicated that general-purpose learning techniques can be used to scale AI systems to work in complex, dynamic environments involving multiple actors. AlphaStar’s feat has got many people excited about the future of AI.

https://twitter.com/mickdooit/status/1189604170489315334
https://twitter.com/KaiLashArul/status/1190236180501139461
https://twitter.com/JoshuaSpanier/status/1190265236571459584

Interested readers can read the research paper to check AlphaStar’s performance. Head over to DeepMind’s blog for more details.

Google AI introduces Snap, a microkernel approach to ‘Host Networking’
Are we entering the quantum computing era? Google’s Sycamore achieves ‘quantum supremacy’ while IBM refutes the claim
Smart Spies attack: Alexa and Google Assistant can eavesdrop or vish (voice phish) unsuspecting users, disclose researchers from SRLabs

Sugandha Lahoti
19 Aug 2019
4 min read

Japanese Anime studio Khara is switching its primary 3D CG tools to Blender

Popular Japanese animation studio Khara announced on Friday that it will be moving to the open source 3D software Blender as its primary 3D CG tool. Khara is a motion picture planning and production company currently working on “EVANGELION:3.0+1.0”, a film to be released in June 2020. The studio will initially use Blender only partially for ‘EVANGELION:3.0+1.0’, but will make the full switch once that project is finished. Khara is also supporting the Blender Foundation by joining the Development Fund as a corporate member.

Last month, Epic Games granted Blender $1.2 million in cash. Following Epic Games, Ubisoft also joined the Blender Development Fund and adopted Blender as its main DCC tool.

Why did Khara opt for Blender?

Khara had been using Autodesk’s 3ds Max as its primary 3D CG tool. However, its projects grew beyond the scale of what was possible with 3ds Max. 3ds Max is also quite expensive; according to Autodesk’s website, the annual fee for a single user is $2,396, and Khara also had to reach out to small and medium-sized businesses for its projects. Another complaint was that Autodesk took time to release improvements to its proprietary software, something that happens at a much faster rate in an open source environment. Khara had also considered Maya as an alternative, but dropped the idea because it would have duplicated work and resources. Finally, the studio switched to Blender, as it is open source and free.

Khara was also intrigued by the new Blender 2.8 release, which provided a 3D creation tool that works like “paper and pencil”. Blender’s Grease Pencil feature enables you to combine 2D and 3D worlds together right in the viewport. It comes with a new multi-frame edit mode with which you can change and edit several frames at the same time, and a Build modifier to animate drawings similar to the Build modifier for 3D objects.
“I feel the latest Blender 2.8 is intentionally ‘filling the gap’ with 3ds Max to make those users feel at home when coming to Blender. I think the learning curve should be no problem,” said Takumi Shigyo of the Project Studio Q Production Department. Khara founded Project Studio Q, Inc. in 2017, a company focusing mainly on movie production and the training of anime artists.

Providing more information on the studio’s use of Blender, Hiroyasu Kobayashi, General Manager of the Digital Department and Director of the Board at Khara, said in the announcement, “Preliminary testing has been done already. We are now at the stage to create some cuts actually with Blender as ‘on live testing’. However, not all the cuts can be done by Blender yet. But we think we can move out from our current stressful situation if we place Blender into our work flows. It has enough potential ‘to replace existing cuts’.”

While Blender will be used for the bulk of the work, Khara does have a backup plan for anything Blender struggles with. Kobayashi added, “There are currently some areas where Blender cannot take care of our needs, but we can solve it with the combination with Unity. Unity is usually enough to cover 3ds Max and Maya as well. Unity can be a bridge among environments.” Khara is also speaking with its partner companies about using Blender together.

Khara’s transition was well received.

https://twitter.com/docky/status/1162279830785646593
https://twitter.com/eoinoneillPDX/status/1154161101895950337
https://twitter.com/BesuBaru/status/1154015669110710273

Blender 2.80 released with a new UI interface, Eevee real-time renderer, grease pencil, and more
Following Epic Games, Ubisoft joins Blender Development fund; adopts Blender as its main DCC tool
Epic Games grants Blender $1.2 million in cash to improve the quality of their software development projects

Fatema Patrawala
31 Jul 2019
3 min read

Unity 2019.2 releases with updated ProBuilder, Shader Graph, 2D Animation, Burst Compiler and more

Yesterday, the Unity team announced the release of Unity 2019.2. This release adds more than 170 new features and enhancements for artists, designers, and programmers, including updates to ProBuilder, Shader Graph, 2D Animation, Burst Compiler, UI Elements, and more.

Major highlights of Unity 2019.2

ProBuilder 4.0 ships as verified with 2019.2. It is a unique hybrid of 3D modeling and level design tools, optimized for building simple geometry but capable of detailed editing and UV unwrapping as needed.

Polybrush is now available via Package Manager as a Preview package. This versatile tool lets you sculpt complex shapes from any 3D model, position detail meshes, paint in custom lighting or coloring, and blend textures across meshes directly in the Editor.

DSPGraph, the new audio rendering/mixing system built on top of Unity’s C# Job System, is now available as a Preview package.

The team has improved UI Elements, Unity’s new UI framework, which renders UI for graph-based tools such as Shader Graph, Visual Effect Graph, and Visual Scripting. To help you better organize complex graphs, Unity has added subgraphs to Visual Effect Graph: you can share, combine, and reuse subgraphs for blocks and operators, and also embed complete VFX within VFX. The integration between Visual Effect Graph and the High-Definition Render Pipeline (HDRP) has also improved, with HDRP pulling VFX Graph in by default and providing additional rendering features.

With Shader Graph you can now use Color Modes to highlight nodes on your graph based on various features, or select your own colors to improve readability. This is especially useful in large graphs.

The team has added swappable Sprites functionality to the 2D Animation tool. With this new feature, you can change a GameObject’s rendered Sprites while reusing the same skeleton rig and animation clips.
This lets you quickly create multiple characters using different Sprite Libraries, or customize parts of them with Sprite Resolvers.

With this release, Burst Compiler 1.1 includes several improvements to JIT compilation time and some C# improvements. Additionally, the Visual Studio Code and JetBrains Rider integrations are available as packages.

Mobile developers will benefit from improved OpenGL support: the team has added OpenGL multithreading support on iOS to improve performance on low-end iOS devices that don’t support Metal.

As with all releases, 2019.2 includes a large number of improvements and bug fixes. You can find the full list of features, improvements, and fixes in the Unity 2019.2 Release Notes.

How to use arrays, lists, and dictionaries in Unity for 3D game development
OpenWrt 18.06.4 released with updated Linux kernel, security fixes for Curl and the Linux kernel and much more!
How to manage complex applications using Kubernetes-based Helm tool [Tutorial]

Bhagyashree R
31 Jul 2019
3 min read

Blender 2.80 released with a new UI interface, Eevee real-time renderer, grease pencil, and more

After about three long years of development, the much-awaited Blender 2.80 finally shipped yesterday. This release comes with a redesigned user interface, workspaces, templates, the Eevee real-time renderer, Grease Pencil, and much more.

The user interface is revamped with a focus on usability and accessibility

Blender’s user interface has a fresh look and feel with a dark theme and a modern icon set. The icons change color based on the theme you select so that they remain readable against bright or dark backgrounds. Users can access the most used features via the default shortcut keys or map their own. You can fully use Blender with a one-button trackpad or pen input, as it now uses the left mouse button for selection by default. It provides a new right-click context menu for quick access to important commands in the given context, and a Quick Favorites popup menu where you can add your favorite commands.

Get started with templates and workspaces

You can now choose from multiple application templates when starting a new file. These include templates for 3D modeling, shading, animation, rendering, Grease Pencil based 2D drawing and animation, sculpting, VFX, video editing, and more. Workspaces give you a screen layout for specific tasks like modeling, sculpting, animating, or editing. Each template provides a default set of workspaces that can be customized; you can also create new workspaces or copy them from the templates.

Completely rewritten 3D viewport

Blender 2.8’s completely rewritten 3D viewport is optimized for modern graphics and offers several new features. The new Workbench render engine helps you get work done in the viewport for tasks like scene layout, modeling, and sculpting. Viewport overlays let you decide which utilities are visible on top of the render.
The new LookDev shading mode allows you to test multiple lighting conditions (HDRIs) without affecting the scene settings. The smoke and fire simulations have been overhauled to look as realistic as possible.

Eevee real-time renderer

Blender 2.80 has a new physically-based real-time renderer called Eevee. It performs two roles: a renderer for final frames, and the engine driving Blender's real-time viewport for creating assets. Its features include volumetrics, screen-space reflections and refractions, depth of field, camera motion blur, bloom, and much more. You can create Eevee materials using the same shader nodes as Cycles, which makes it easier to render existing scenes.

2D animation with Grease Pencil

Grease Pencil enables you to combine 2D and 3D worlds together right in the viewport. With this release, it has become a “full 2D drawing and animation system.” It comes with a new multi-frame edit mode with which you can change and edit several frames at the same time, and a Build modifier to animate drawings similar to the Build modifier for 3D objects. There are many other Grease Pencil additions; watch this video to get a glimpse of what you can create with it:

https://www.youtube.com/watch?v=JF3KM-Ye5_A

Check out more features of Blender 2.80 on its official website.

Blender celebrates its 25th birthday!
Following Epic Games, Ubisoft joins Blender Development fund; adopts Blender as its main DCC tool
Epic Games grants Blender $1.2 million in cash to improve the quality of their software development projects

Vincy Davis
23 Jul 2019
5 min read

Following Epic Games, Ubisoft joins Blender Development fund; adopts Blender as its main DCC tool

Yesterday, Ubisoft Animation Studio (UAS) announced that it will fund the development of Blender as a corporate Gold member through the Blender Foundation’s Development Fund. Ubisoft will also adopt the open-source animation software Blender as its main digital content creation (DCC) tool. The exact funding amount has not been disclosed.

Gold corporate members of the Blender Development Fund get a prominent logo on the blender.org dev fund page, are credited as Corporate Gold Members on blender.org and in official Blender Foundation communication, and have a strong voice in approving projects for Blender. Gold corporate members donate a minimum of EUR 30,000 for as long as they remain members.

Pierrot Jacquet, Head of Production at UAS, mentioned in the press release, “Blender was, for us, an obvious choice considering our big move: it is supported by a strong and engaged community, and is paired up with the vision carried by the Blender Foundation, making it one of the most rapidly evolving DCCs on the market.” He also believes that since Blender is an open source project, it will allow Ubisoft to share some of its own tools with the community. “We love the idea that this mutual exchange between the foundation, the community, and our studio will benefit everyone in the end”, he adds.

As part of its new workflow, Ubisoft is creating a development environment supported by open source and inner source solutions. Blender will replace Ubisoft’s in-house digital content creation tool and will be used to produce short content with the incubator. Later, Blender will also be used in Ubisoft’s upcoming shows in 2020. Per Jacquet, Blender 2.8 will be a “game-changer for the CGI industry”. The Blender 2.8 beta is already out, and its stable version is expected in the coming days.
Ubisoft was impressed with the growth of the internal Blender community as well as with the innovations expected in Blender 2.8, which brings a revamped UX, Grease Pencil, Eevee real-time rendering, and new 3D viewport and UV editor tools. Ubisoft was thus convinced that this is the “right time to bring support to our artists and productions that would like to add Blender to their toolkit.”

This news comes a week after Epic Games announced that it is awarding the Blender Foundation $1.2 million in cash spanning three years to accelerate the quality of its software development projects. With two big companies funding Blender, the future does look bright. Blender 2.8’s preview features likely prompted both companies to step forward, as both Epic and Ubisoft announced their funding just days before the stable release of Blender 2.8. In addition to Epic and Ubisoft, corporate members include animation studio Tangent, Valve, Intel, Google, and Canonical, maker of the Ubuntu Linux distribution.

Ton Roosendaal, founder and chairman of the Blender Foundation, is clearly pleased. “Good news keeps coming,” he says. He added, “it’s such a miracle to witness the industry jumping on board with us! I’ve always admired Ubisoft, as one of the leading games and media producers in the world. I look forward to working with them and help them find their ways as a contributor to our open source projects on blender.org.”

https://twitter.com/tonroosendaal/status/1153376866604113920

Users are very happy and feel that this is a big step forward for Blender.

https://twitter.com/nazzagnl/status/1153339812105064449
https://twitter.com/Nahuel_Belich/status/1153302101142978560
https://twitter.com/DJ_Link/status/1153300555986550785
https://twitter.com/cgmastersnet/status/1153438318547406849

Many also see this move as the industry’s way of sidelining Autodesk, whose DCC tools are widely used.
https://twitter.com/flarb/status/1153393732261072897

A Hacker News user comments, “Kudos to blender's marketing team. They get a bit of free money from this. But the true motive for Epic and Unisoft is likely an attempt to strong-arm Autodesk into providing better support and maintenance. Dissatisfaction with Autodesk, lack of care for their DCC tools has been growing for a very long time now, but studios also have a huge investment into these tools as part of their proprietary pipelines. Expect Autodesk to kowtow soon and make sure that none of these companies will make the switch. If it means that Autodesk actually delivers bug fixes for the version the customer has instead of one or two releases down the road, it is a good outcome for the studios.”

Visit the Ubisoft website for more details.

CraftAssist: An open-source framework to enable interactive bots in Minecraft by Facebook researchers
What to expect in Unreal Engine 4.23?
Pluribus, an AI bot built by Facebook and CMU researchers, has beaten professionals at six-player no-limit Texas Hold ’Em Poker
Vincy Davis
19 Jul 2019
5 min read

CraftAssist: An open-source framework to enable interactive bots in Minecraft by Facebook researchers

Two days ago, researchers from Facebook AI Research published a paper titled “CraftAssist: A Framework for Dialogue-enabled Interactive Agents”. The authors are Facebook AI research engineers Jonathan Gray and Kavya Srinet, research scientist C. Lawrence Zitnick, and Arthur Szlam, Yacine Jernite, Haonan Yu, Zhuoyuan Chen, Demi Guo, and Siddharth Goyal.

The paper describes the implementation of an assistant bot called CraftAssist, which appears and interacts like another player in the open sandbox game Minecraft. The framework enables players to interact with the bot via in-game chat through various implemented tools and platforms, and to record these interactions. The main aim of the bot is to be a useful and entertaining assistant for the tasks set and evaluated by human players.

Image Source: CraftAssist paper

To encourage the wider AI research community to use the CraftAssist platform in their own experiments, the Facebook researchers have open-sourced the framework, the baseline assistant, data, and models. The released data includes the functions used to build 2,586 houses in Minecraft, labeling data for the walls, roofs, etc. of those houses, human rephrasings of fixed commands, and conversions of natural language commands to bot-interpretable logical forms. The technology that records human-bot interaction on a Minecraft server has also been released, so researchers can collect data independently.

Why is the Minecraft protocol used?

Minecraft is a popular multiplayer volumetric pixel (voxel) 3D game based on building and crafting, which allows multiplayer servers and players to collaborate and build, survive, or compete with each other. It operates through a client and server architecture. The CraftAssist bot acts as a client and communicates with the Minecraft server using the Minecraft network protocol.
The Minecraft protocol allows the bot to connect to any Minecraft server without server-side mods, so the bot can easily join a multiplayer server alongside human players or other bots, or join an alternative server that implements the server-side component of the Minecraft network protocol. The CraftAssist bot uses the third-party open source Cuberite server, a fast and extensible game server for Minecraft.

Read More: Introducing Minecraft Earth, Minecraft’s AR-based game for Android and iOS users

How does CraftAssist function?

The block diagram below demonstrates how the bot handles incoming in-game chat and reaches the desired target.

Image Source: CraftAssist paper

First, the incoming text is transformed into a logical form called the action dictionary. The action dictionary is then translated by a dialogue object, which interacts with the memory module of the bot and produces an action or a chat response to the user. The bot’s memory uses a relational database structured to capture the relations between stored items of information. The major advantage of this type of memory is that the semantic parser’s output can be converted into fully specified tasks.

The bot responds to higher-level actions called Tasks. A Task is an interruptible process with a clear objective, carried out as step-by-step actions. It can tolerate long pauses between steps and can push other Tasks onto a stack, the way functions call other functions in a standard programming language. Move, Build, and Destroy are a few of the many basic Tasks assigned to the bot.

The Dialogue Manager first checks for illegal or profane words, then queries the semantic parser. The semantic parser takes the chat as input and produces an action dictionary. The action dictionary indicates that the text is a command given by a human and specifies the high-level action to be performed by the bot.
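The chat → action dictionary → Task pipeline described above can be sketched in a few lines. Everything below is illustrative: the parser rule, dictionary fields, and Task class are hypothetical stand-ins, not CraftAssist's actual API (which uses a neural semantic parser, not keyword matching):

```python
# Hypothetical sketch of the pipeline: chat -> action dictionary -> Task stack.

def parse_chat(chat):
    """Toy 'semantic parser': map a chat command to an action dictionary."""
    words = chat.lower().split()
    if len(words) == 3 and words[0] == "move":
        return {"action": "MOVE", "target": (int(words[1]), int(words[2]))}
    return {"action": "NOOP"}

class MoveTask:
    """Interruptible Move task: one low-level step toward the target per tick,
    mirroring the 'compare current location to target' behavior described above."""
    def __init__(self, bot, target):
        self.bot, self.target = bot, target

    def step(self):
        x, z = self.bot["pos"]
        tx, tz = self.target
        if (x, z) == (tx, tz):
            return True  # reached the target: task is done
        toward = lambda a, b: a + (1 if b > a else -1 if b < a else 0)
        self.bot["pos"] = (toward(x, tx), toward(z, tz))
        return False

bot = {"pos": (0, 0), "tasks": []}  # minimal stand-in for the bot's state
d = parse_chat("move 2 1")
if d["action"] == "MOVE":
    bot["tasks"].append(MoveTask(bot, d["target"]))

# Run the Task stack: the top task steps until done, then is popped,
# the way nested function calls unwind.
while bot["tasks"]:
    if bot["tasks"][-1].step():
        bot["tasks"].pop()
```

A Build or Destroy task would slot into the same stack, and a task could push a Move onto the stack first to walk to its work site.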
Once a Move task is created and pushed onto the Task stack, it compares the bot’s current location to the target location, and the bot undertakes a sequence of low-level step movements to reach the target.

The core of the bot’s natural language understanding is a neural semantic parser called the Text-to-Action-Dictionary (TTAD) model. This model receives the incoming command/chat and classifies it into an action dictionary, which is interpreted by the Dialogue Object.

The CraftAssist framework thus enables bots in Minecraft to interact and play with players by understanding human interactions using the implemented tools. The researchers hope that, with the CraftAssist dataset now open-sourced, more developers will contribute to the framework by assisting or training the bots, which might eventually lead to bots that learn from human dialogue interactions.

Developers have found the CraftAssist framework interesting.

https://twitter.com/zehavoc/status/1151944917859688448

A user on Hacker News comments, “Wow, this is some amazing stuff! Congratulations!”

Check out the paper CraftAssist: A Framework for Dialogue-enabled Interactive Agents for more details.

Epic Games grants Blender $1.2 million in cash to improve the quality of their software development projects
What to expect in Unreal Engine 4.23?
A study confirms that pre-bunk game reduces susceptibility to disinformation and increases resistance to fake news

Vincy Davis
16 Jul 2019
4 min read

Epic Games grants Blender $1.2 million in cash to improve the quality of their software development projects

Yesterday, Epic Games announced that it is awarding the Blender Foundation $1.2 million in cash, spread over three years, to accelerate the quality of Blender's software development projects. Blender is a free and open-source 3D creation suite that provides a full range of tools to empower artists to create 3D graphics, animation, special effects, or games.

Ton Roosendaal, founder and chairman of the Blender Foundation, thanked Epic Games in a statement: “Thanks to the grant we will make a significant investment in our project organization to improve on-boarding, coordination and best practices for code quality. As a result, we expect more contributors from the industry to join our projects.”

https://twitter.com/tonroosendaal/status/1150793424536313862

The $1.2 million grant is part of Epic's $100 million MegaGrants program, which was announced this March. Tim Sweeney, CEO of Epic Games, had announced that Epic would offer $100 million in grants to boost the growth of the gaming industry by supporting enterprise professionals, media and entertainment creators, students, educators, and tool developers doing excellent work with Unreal Engine or enhancing open-source capabilities for the 3D graphics community. Sweeney believes that open tools, libraries, and platforms are critical to the future of the digital content ecosystem. “Blender is an enduring resource within the artistic community, and we aim to ensure its advancement to the benefit of all creators”, he adds.

This is the biggest award Epic has announced so far. Blender has no obligation to use or promote Epic Games' storefront or engine; the grant is a purely generous offer with “no strings attached”. In April, Magic Leap revealed that it will provide 500 Magic Leap One Creator Edition spatial computing devices as giveaways as part of the Epic MegaGrants program.

Blender users are appreciative of the support and generosity of Epic Games.
https://twitter.com/JeannotLandry/status/1150812155412963328
https://twitter.com/DomAnt2/status/1150798726379839488

A Redditor comments, “There's a reason Epic as a company has an extremely positive reputation with people in the industry. They've been doing this kind of thing for years, and a huge amount of money they're making from Fortnite is planned to be turned into grants as well. Say what you want about them, they are without question the top company in gaming when it comes to actually using their profits to immediately reinvest/donate to the gaming industry itself. It doesn't hurt that every company who works with them consistently says that they're possibly the very best company in gaming to work with.”

A comment on Hacker News read, “Epic are doing a great job improving fairness in the gaming industry, and the economic conditions for developers. I'm looking forward to their Epic Store opening up to more (high quality) Indie games.”

In 2015, Epic launched Unreal Dev Grants, offering a pool of $5 million to independent developers with interesting Unreal Engine 4 projects to fund their development. In December 2018, Epic also launched the Epic Games store, where developers keep 88% of the earned revenue.

Epic's large donation to Blender holds even more value considering that the highly anticipated release of Blender 2.8 is around the corner. Though its release candidate is already out, users are quite excited for the stable release. Blender 2.8 will bring new 3D viewport and UV editor tools to enhance users' experience. With Blender aiming to raise the quality of its projects, such grants from major game publishers will only help it get bigger.

https://twitter.com/ddiakopoulos/status/1150826388229726209

A user on Hacker News comments, “Awesome. Blender is on the cusp of releasing a major UI overhaul (2.8) that will make it more accessible to newcomers (left-click is now the default!).
I'm excited to see it getting some major support from the gaming industry as well as the film industry.”

What to expect in Unreal Engine 4.23?
Epic releases Unreal Engine 4.22, focuses on adding “photorealism in real-time environments”
Blender celebrates its 25th birthday!

Vincy Davis
12 Jul 2019
3 min read

What to expect in Unreal Engine 4.23?

A few days ago, Epic released the first preview of Unreal Engine 4.23 for the developer community to check out its features and report back any issues before the final release. This version adds Skin Weight Profiles, VR Scouting tools, and new Pro Video Codecs, along with many updates to features such as XR, animation, core, virtual production, gameplay and scripting, and audio. The previous version, Unreal Engine 4.22, focused on adding photorealism in real-time environments.

Some updates in Unreal Engine 4.23

XR
HoloLens 2 Native Support: Unreal Engine 4.23 adds native platform support for Microsoft's HoloLens 2.
Stereo Panoramic Capture Tool Improvements: Updates to the Stereo Panoramic Capture tool make it much easier to capture high-quality stereoscopic stills and videos of the virtual world in industry-standard formats, and to view those captures in an Oculus or GearVR headset.

Animation
Skin Weight Profiles: The new Skin Weight Profile system enables users to override the original Skin Weights that are stored with a Skeletal Mesh.
Animation Streaming: Aimed at improving memory management for animation data.
Sub Animation Graphs: New Sub Anim Graphs allow dynamic switching of sub-sections of an Animation Graph, enabling multi-user collaboration and memory savings for vaulted or unavailable items.

Core
Unreal Insights Tool: Helps developers collect and analyze data about the Engine's behavior in a uniform fashion. The system has three components. The Trace System API gathers information from runtime systems in a consistent format and captures it for later processing; multiple live sessions can contribute data at the same time. The Analysis API processes data from the Trace System API and converts it into a form that the Unreal Insights tool can use.
The Unreal Insights tool provides an interactive visualization of data processed through the Analysis API, giving developers a unified interface for stats, logs, and metrics from their application.

Virtual Production
Remote Control over HTTP, an extended LiveLink Plugin, new VR Scouting tools, new Pro Video Codecs, nDisplay Warp and Blend for Curved Surfaces, and Virtual Camera improvements.

Gameplay & Scripting
UMG Widget Diffing: Expanded and improved Blueprint Diffing now supports Widget Blueprints as well as Actor and Animation Blueprints.

Audio
Open Sound Control: Enables a native implementation of the Open Sound Control (OSC) standard in an Unreal Engine plugin.
Wave Table Synthesis: The new monophonic wavetable synthesizer leverages UE4's built-in curve editor to author time-domain wavetables, enabling a wide range of sound design capabilities driven by gameplay parameters.

There are many more updates for the Editor, the Niagara editor, physics simulation, the rendering system, and the Sequencer multi-track editor in Unreal Engine 4.23. The Unreal Engine team has cautioned that the preview release is not fully quality tested and should be considered unstable until the final release.

Users are excited to try the latest version of Unreal Engine 4.23.

https://twitter.com/ClicketyThe/status/1149070536762372096
https://twitter.com/cinedatabase/status/1149077027565309952
https://twitter.com/mygryphon/status/1149334005524750337

Visit the Unreal Engine page for more details.

Unreal Engine 4.22 update: support added for Microsoft’s DirectX Raytracing (DXR)
Unreal Engine 4.20 released with focus on mobile and immersive (AR/VR/MR) devices
What’s new in Unreal Engine 4.19?
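The wavetable technique behind the synthesizer described above can be illustrated generically: a single-cycle table is scanned by a phase accumulator whose increment sets the pitch. This sketch is the underlying idea only, not Unreal's API.

```python
import math

# Generic wavetable synthesis sketch (not Unreal's API): a single-cycle
# table is scanned by a phase accumulator; the increment sets the pitch.
TABLE_SIZE = 256
table = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def render(freq_hz, sample_rate=48000, n_samples=4):
    phase, out = 0.0, []
    inc = freq_hz * TABLE_SIZE / sample_rate  # table indices per sample
    for _ in range(n_samples):
        out.append(table[int(phase) % TABLE_SIZE])
        phase += inc
    return out

samples = render(440.0)
```

In a game engine, gameplay parameters would modulate `freq_hz` or swap the table contents at runtime.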

Sugandha Lahoti
12 Jul 2019
5 min read

Pluribus, an AI bot built by Facebook and CMU researchers, has beaten professionals at six-player no-limit Texas Hold ’Em Poker

Researchers from Facebook and Carnegie Mellon University have developed an AI bot that has defeated human professionals in six-player no-limit Texas Hold’em poker. Pluribus defeated pro players in both a “five AIs + one human player” format and a “one AI + five human players” format: it was tested in 10,000 games against five human players, as well as in 10,000 rounds where five copies of the AI played against one professional. This is the first time an AI bot has beaten top human players in a complex game with more than two players or two teams.

Pluribus was developed by Noam Brown of Facebook AI Research and Tuomas Sandholm of Carnegie Mellon University. It builds on Libratus, their previous poker-playing AI, which defeated professionals at Heads-Up Texas Hold ’Em, a two-player game, in 2017.

Mastering six-player poker is difficult for an AI given the number of possible actions. First, because the game involves six players, it has many more variables, and the bot cannot compute a perfect strategy for every situation as it could in a two-player game. Second, poker involves hidden information: a player only has access to the cards they can see. The AI has to take into account how it would act with different cards so it isn't obvious when it has a good hand.

Brown wrote on a Hacker News thread, “So much of early AI research was focused on beating humans at chess and later Go. But those techniques don't directly carry over to an imperfect-information game like poker. The challenge of hidden information was kind of neglected by the AI community. This line of research really has its origins in the game theory community actually (which is why the notation is completely different from reinforcement learning). Fortunately, these techniques now work really really well for poker.”

What went behind Pluribus?
Initially, Pluribus engages in self-play, playing against copies of itself without any data from human or prior AI play used as input. The AI starts from scratch by playing randomly and gradually improves as it determines which actions, and which probability distribution over those actions, lead to better outcomes against earlier versions of its strategy. This self-play produces an offline strategy for the entire game, called the blueprint strategy. During actual play, Pluribus improves upon the blueprint by searching for a better strategy in real time for the situations it finds itself in; this online search algorithm can efficiently evaluate its options by looking just a few moves ahead rather than only to the end of the game.

Real-time search

The blueprint strategy in Pluribus was computed using a variant of counterfactual regret minimization (CFR). The researchers used Monte Carlo CFR (MCCFR), which samples actions in the game tree rather than traversing the entire game tree on each iteration. Pluribus plays according to this blueprint strategy only in the first betting round (of four), where the number of decision points is small enough that the blueprint can afford to forgo information abstraction and include many actions in the action abstraction. After the first round, Pluribus instead conducts a real-time search to determine a better, finer-grained strategy for its current situation.

https://youtu.be/BDF528wSKl8

What is astonishing is that Pluribus uses very little processing power and memory: less than $150 worth of cloud computing resources. The researchers trained the blueprint strategy in eight days on a 64-core server, requiring less than 512 GB of RAM. No GPUs were used. Stassa Patsantzis, a Ph.D. research student, appreciated Pluribus's resource-friendly compute. She commented on Hacker News, “That's the best part in all of this.
I'm hoping that there is going to be more of this kind of result, signaling a shift away from Big Data and huge compute and towards well-designed and efficient algorithms.” She also noted that this is significantly less than the compute used by ML algorithms at DeepMind and OpenAI: “In fact, I kind of expect it. The harder it gets to do the kind of machine learning that only large groups like DeepMind and OpenAI can do, the more smaller teams will push the other way and find ways to keep making progress cheaply and efficiently”, she added.

Real-life implications

AI bots such as Pluribus give a better understanding of how to build general AI that can cope with multi-agent environments, both with other AI agents and with humans. A six-player bot has greater real-world implications because two-player zero-sum interactions (in which one player wins and one player loses) are common in recreational games but very rare in real life. Such bots could be used for handling harmful content, dealing with cybersecurity challenges, or managing an online auction or navigating traffic, all of which involve multiple actors and/or hidden information.

Darren Elias, a four-time World Poker Tour title holder who helped test the program's skills, said Pluribus could spell the end of high-stakes online poker: "I don't think many people will play online poker for a lot of money when they know that this type of software might be out there and people could use it to play against them for money." Poker sites are actively working to detect and root out possible bots. Brown, Pluribus' developer, on the other hand, is optimistic. He says it's exciting that a bot could teach humans new strategies and ultimately improve the game. "I think those strategies are going to start penetrating the poker community and really change the way professional poker is played," he said.

For more information on Pluribus and its workings, read Facebook's blog.
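The regret-matching rule at the heart of CFR can be sketched on a toy game: each action is played with probability proportional to its positive cumulative regret, and the time-averaged strategy approaches equilibrium. The rock-paper-scissors self-play below is a minimal illustration of that update rule, not Pluribus's implementation.

```python
import random

# Toy regret-matching self-play on rock-paper-scissors: the update rule at
# the heart of CFR. This is a minimal illustration, not Pluribus's code.
ACTIONS = 3  # rock, paper, scissors
PAYOFF = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]  # row action vs column action

def strategy(regret):
    """Play each action with probability proportional to positive regret."""
    pos = [max(r, 0.0) for r in regret]
    total = sum(pos)
    return [p / total for p in pos] if total > 0 else [1.0 / ACTIONS] * ACTIONS

random.seed(0)
regret = [0.0] * ACTIONS
strat_sum = [0.0] * ACTIONS
for _ in range(20000):
    strat = strategy(regret)
    strat_sum = [s + p for s, p in zip(strat_sum, strat)]
    opp = random.choices(range(ACTIONS), weights=strat)[0]  # self-play opponent
    expected = sum(strat[b] * PAYOFF[b][opp] for b in range(ACTIONS))
    for a in range(ACTIONS):
        regret[a] += PAYOFF[a][opp] - expected

# The *average* strategy converges toward the uniform equilibrium.
avg = [s / sum(strat_sum) for s in strat_sum]
```

MCCFR applies the same idea but samples paths through a vastly larger game tree instead of a single decision point.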
DeepMind’s AlphaStar AI agent will soon anonymously play with European StarCraft II players
Google DeepMind’s AI AlphaStar beats StarCraft II pros TLO and MaNa
OpenAI Five bots destroyed human Dota 2 players this weekend
Sugandha Lahoti
11 Jul 2019
4 min read

DeepMind's Alphastar AI agent will soon anonymously play with European StarCraft II players

Earlier this year, DeepMind’s AI AlphaStar defeated two professional players at StarCraft II, a real-time strategy video game. Now, European StarCraft II players will get a chance to face off against experimental versions of AlphaStar as part of ongoing research into AI.

https://twitter.com/MaxBakerTV/status/1149067938131054593

AlphaStar learns by imitating the basic micro and macro strategies used by players on the StarCraft ladder. A neural network was initially trained using supervised learning on anonymised human games released by Blizzard. Once the agents are trained from human game replays, they are trained against other competitors in the “AlphaStar league”. This is where a multi-agent reinforcement learning process starts: new competitors, branched from existing competitors, are added to the league, and each agent then learns from games against the other competitors. This ensures that each competitor performs well against the strongest strategies and does not forget how to defeat earlier ones.

Anyone who wants to participate in this experiment will have to opt in to the chance to play against the StarCraft II program, via an option in the in-game pop-up window. Users can alter their opt-in selection at any time. To ensure anonymity, all games will be blind test matches: European players who opt in won't know whether they've been matched against AlphaStar. This helps ensure that all games are played under the same conditions, as players may react differently when they know they're facing an AI. A win or a loss against AlphaStar will affect a player’s MMR (Matchmaking Rating) like any other game played on the ladder.

"DeepMind is currently interested in assessing AlphaStar’s performance in matches where players use their usual mix of strategies," Blizzard said in its blog post.
"Having AlphaStar play anonymously helps ensure that it is a controlled test, so that the experimental versions of the agent experience gameplay as close to a normal 1v1 ladder match as possible. It also helps ensure all games are played under the same conditions from match to match."

Some people have appreciated the anonymous testing. A Hacker News user commented, “Of course the anonymous nature of the testing is interesting as well. Big contrast to OpenAI's public play test. I guess it will prevent people from learning to exploit the bot's weaknesses, as they won't know they are playing a bot at all. I hope they eventually do a public test without the anonymity so we can see how its strategies hold up under focused attack.” Others find it interesting to imagine what would happen if players knew they were playing against AlphaStar.

https://twitter.com/hardmaru/status/1149104231967842304

AlphaStar will play as StarCraft’s three in-universe races (Terran, Zerg, or Protoss). Pairings on the ladder will be decided according to normal matchmaking rules, which depend on how many players are online while AlphaStar is playing. It will not be learning from the games it plays on the ladder, having been trained from human replays and self-play. AlphaStar will also use a camera interface and more restricted APM. Per the blog post, “AlphaStar has built-in restrictions, which cap its effective actions per minute and per second. These caps, including the agents’ peak APM, are more restrictive than DeepMind’s demonstration matches back in January, and have been applied in consultation with pro players.”

https://twitter.com/Eric_Wallace_/status/1148999440121749504
https://twitter.com/Liquid_MaNa/status/1148992401157054464

DeepMind will benchmark the performance of a number of experimental versions of AlphaStar, enabling it to gather a broad set of results during the testing period.
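An actions-per-minute cap like the one described above can be modeled as a sliding-window rate limiter. The window length and limit below are invented for illustration; DeepMind has not published AlphaStar's actual throttling code.

```python
from collections import deque

# Sliding-window sketch of an actions-per-minute cap. The numbers are
# illustrative assumptions, not AlphaStar's published restrictions.
class ApmCap:
    def __init__(self, max_actions, window_seconds):
        self.max_actions = max_actions
        self.window = window_seconds
        self.times = deque()  # timestamps of recent accepted actions

    def try_act(self, now):
        # Drop timestamps that have fallen out of the window.
        while self.times and now - self.times[0] >= self.window:
            self.times.popleft()
        if len(self.times) < self.max_actions:
            self.times.append(now)
            return True
        return False  # action dropped: cap reached

cap = ApmCap(max_actions=3, window_seconds=1.0)
results = [cap.try_act(t) for t in (0.0, 0.1, 0.2, 0.3, 1.05)]
```

The fourth action is rejected because three actions already landed within the one-second window; by t = 1.05 the oldest has expired and actions are accepted again.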
DeepMind will use a player’s replays and game data (skill level, MMR, the map played, race played, time/date played, and game duration) to assess and describe the performance of the AlphaStar system. However, DeepMind will remove identifying details from the replays, including usernames, user IDs, and chat histories; other identifying details will be removed to the extent possible without compromising the research DeepMind is pursuing. For now, AlphaStar agents will play only in Europe. The research results will be released in a peer-reviewed scientific paper along with replays of AlphaStar’s matches.

Google DeepMind’s AI AlphaStar beats StarCraft II pros TLO and MaNa; wins 10-1 against the gamers
DeepMind’s AlphaZero shows unprecedented growth in AI, masters 3 different games
DeepMind’s AlphaFold is successful in predicting the 3D structure of a protein, making major inroads for AI use in healthcare
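The league-style training described in this article (new competitors branch from existing agents and train against the whole league) can be caricatured as follows; the skill model and all numbers are invented for illustration, not DeepMind's implementation.

```python
import random

# Caricature of league-style training: new competitors branch from existing
# agents and train against every member of the league. The "skill" model is
# invented for illustration, not DeepMind's implementation.
random.seed(1)

class Agent:
    def __init__(self, skill):
        self.skill = skill

def beats(a, b):
    """Noisy, skill-based outcome: does agent a beat agent b?"""
    return random.random() < a.skill / (a.skill + b.skill)

league = [Agent(skill=1.0)]  # seed agent from supervised imitation
for _generation in range(5):
    child = Agent(skill=league[-1].skill)  # branch from the newest competitor
    for opponent in league:                # "train" against the whole league
        if not beats(child, opponent):
            child.skill += 0.1             # improve after each loss
    league.append(child)
```

Playing every generation against the entire league, not just the latest agent, is what keeps newer competitors from forgetting how to beat earlier strategies.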

Sugandha Lahoti
27 Jun 2019
3 min read

Unity Learn Premium, a learning platform for professionals to master real-time 3D development

Unity has announced a new learning platform for professionals and hobbyists to advance their Unity knowledge and skills within their industry. Unity Learn Premium builds upon the launch of the free Unity Learn platform, which hosts hundreds of free projects and tutorials, including two new beginner projects. Users can search learning materials by topic, content type, and level of expertise. Tutorials come with how-to instructions, video clips, and code snippets, making it easier to switch between Unity Learn and the Unity Editor.

The Unity Learn Premium service allows creators to get immediate answers, feedback, and guidance directly from experts with Learn Live, biweekly interactive sessions with Unity-certified instructors. Learners can also track progress on guided learning paths, work through shared challenges with peers, and access an exclusive library of resources updated every month with the latest Unity releases. The premium version will offer live access to Unity experts and learning content across industries, including architecture, engineering, and construction; automotive, transportation, and manufacturing; media and entertainment; and gaming.

The Unity Learn Premium announcement comes on the heels of the launch of the Unity Academic Alliance, a membership program through which educators and institutions can incorporate Unity into their curriculum.

Jessica Lindl, VP and Global Head of Education, Unity Technologies, wrote to us in a statement, “Until now, there wasn’t a definitive learning resource for learning intermediate to advanced Unity skills, particularly for professionals in industries beyond gaming. The workplace of today and tomorrow is fast-paced and driven by innovation, meaning workers need to become lifelong learners, using new technologies to upskill and ultimately advance their careers.
We hope that Unity Learn Premium will be the perfect tool for professionals to continue on this learning path.”

She further wrote to us, "Through our work to enable the success of creators around the world, we discovered there is no definitive source for advancing from beginner to expert across all industries, which is why we're excited to launch the Unity Learn Platform. The workplace of today and tomorrow is fast-paced and driven by innovation, forcing professionals to constantly be reskilling and upskilling in order to succeed. We hope the Unity Learn Platform enables these professionals to excel in their respective industries."

Unity Learn Premium will be available at no additional cost for Plus and Pro subscribers and offered as a standalone subscription for $15/month. You can access more information here.

Related News
Developers can now incorporate Unity features into native iOS and Android apps
Unity Editor will now officially support Linux
Obstacle Tower Environment 2.0: Unity announces Round 2 of its ‘Obstacle Tower Challenge’ to test AI game players.

Fatema Patrawala
27 Jun 2019
7 min read

A study confirms that pre-bunk game reduces susceptibility to disinformation and increases resistance to fake news

On Tuesday, the University of Cambridge published research based on thousands of online game players. The study shows how an online game can work like a “vaccine” and increase skepticism towards fake news, by giving people a weak dose of the methods behind disinformation campaigns.

In February last year, University of Cambridge researchers helped launch the browser game Bad News. In the game, you take on the role of a fake-news monger: you drop all pretense of ethics and choose a path that builds your persona as an unscrupulous media magnate, all while keeping an eye on your ‘followers’ and ‘credibility’ meters. The task is to get as many followers as you can while slowly building up fake credibility as a news site, and you lose if you tell obvious lies or disappoint your supporters!

Jon Roozenbeek, study co-author from Cambridge University, and Dr Sander van der Linden, Director of the Cambridge Social Decision-Making Lab, worked with Dutch media collective DROG and design agency Gusmanson to develop Bad News. DROG develops programs and courses and conducts research aimed at recognizing disinformation online. The game is available primarily in English and in many other languages, including Czech, Dutch, German, Greek, Esperanto, Polish, Romanian, Serbian, Slovenian, and Swedish; there is also a special Junior version for children aged 8 to 11.

Roozenbeek said: “We are shifting the target from ideas to tactics. By doing this, we are hoping to create what you might call a general ‘vaccine’ against fake news, rather than trying to counter each specific conspiracy or falsehood.” He further added, “We want to develop a simple and engaging way to establish media literacy at a relatively early age, then look at how long the effects last.”
The study says that the game increased psychological resistance to fake news

After the game became available, thousands of people spent fifteen minutes completing it, and many allowed their data to be used for the research. According to the study of 15,000 participants, the game has been shown to increase “psychological resistance” to fake news. Players stoked anger and fear by manipulating news and social media within the simulation: they deployed Twitter bots, photoshopped evidence, and incited conspiracy theories to attract followers, all while maintaining a “credibility score” for persuasiveness.

“Research suggests that fake news spreads faster and deeper than the truth, so combating disinformation after-the-fact can be like fighting a losing battle,” said Dr Sander van der Linden. “We wanted to see if we could preemptively debunk, or ‘pre-bunk’, fake news by exposing people to a weak dose of the methods used to create and spread disinformation, so they have a better understanding of how they might be deceived. This is a version of what psychologists call ‘inoculation theory’, with our game working like a psychological vaccination.”

The study asked players to rate the reliability of content before and after gameplay

To gauge the effects of the game, players were asked to rate the reliability of a series of different headlines and tweets before and after gameplay; they were randomly allocated a mixture of real and fake news. There were six “badges” to earn in the game, each reflecting a common strategy used by creators of fake news: impersonation, conspiracy, polarisation, discrediting sources, trolling, and emotionally provocative content. In-game questions measured the effects of Bad News for four of its featured fake news badges.
For the disinformation tactic of “impersonation”, which involves mimicking trusted personalities on social media, the game reduced the perceived reliability of fake headlines and tweets by 24% from pre- to post-gameplay. It also reduced the perceived reliability of deliberately polarising headlines by about 10%, and of “discrediting sources” (attacking a legitimate source with accusations of bias) by 19%. For “conspiracy”, the spreading of false narratives blaming secretive groups for world events, perceived reliability was reduced by 20%. The researchers also found that those who registered as most susceptible to fake news headlines at the outset benefited most from the “inoculation”.

“We find that just fifteen minutes of gameplay has a moderate effect, but a practically meaningful one when scaled across thousands of people worldwide, if we think in terms of building societal resistance to fake news,” said van der Linden.

The sample for the study skewed towards younger males

The sample was self-selecting, made up of those who came across the game online and opted to play, and as such was skewed toward younger, male, liberal, and more educated demographics. The first set of results from Bad News therefore has its limitations, say the researchers. However, the study found the game to be almost equally effective across age, education, gender, and political persuasion. The researchers did not mention whether they plan a follow-up study addressing these limitations.

“Our platform offers early evidence of a way to start building blanket protection against deception, by training people to be more attuned to the techniques that underpin most fake news,” added Roozenbeek.

Community discussion revolves around fake news reporting techniques

This news has attracted much attention on Hacker News, where users have commented about the various news-reporting techniques journalists use to promote stories.
One user's comment reads, “The "best" fake news these days is the stuff that doesn't register even to people are read-in on the usual anti-patterns. Subtle framing, selective quotation, anonymous sources, "repeat the lie" techniques, and so on, are the ones that I see happening today that are hard to immunize yourself from. Ironically, the people who fall for these are more likely to self-identify as being aware and clued in on how to avoid fake news.”

Another user says, “Second best. The best is selective reporting. Even if every story is reported 100% accurately and objectively, by choosing which stories are promoted, and which buried, you can set any agenda you want.”

One commenter argued that the discussion dilutes the term fake news into general influence and propaganda: “This discussion is falling into a trap where "Fake News" is diluted to synonym for all influencing news and propaganda. Fake News is propaganda that consists of deliberate disinformation or hoaxes. Nothing mentioned here falls into a category of Fake News. Fake News creates cognitive dissonance and distrust. More subtler methods work differently. But mainstream media also does Fake News" arguments are whataboutism.”

To this, another user responds, “I've upvoted you because you make a good point, but I disagree. IMO, Fake News, in your restrictive definition, is to modern propaganda what Bootstrap is to modern frontend dev. It's an easy shortcut, widely known, and even talented operators are going to use it because it's the easiest way to control a (domestic or foreign) population. But resources are there, funding is there, to build much more subtle/complex systems if needed. Cut away Bootstrap, and you don't particularly dent the startup ecosystem. Cut away fake news, and you don't particularly dent the ability of troll farms to get work done.
We're in a new era, fake news or not.”

Game rivals, Microsoft and Sony, form a surprising cloud gaming and AI partnership
DeepMind’s AI uses reinforcement learning to defeat humans in multiplayer games
Introducing Minecraft Earth, Minecraft’s AR-based game for Android and iOS users
Bhagyashree R
17 Jun 2019
3 min read

Google announces early access of ‘Game Builder’, a platform for building 3D games with zero coding

Last week, a team within Area 120, Google's workshop for experimental products, introduced an experimental prototype of Game Builder: a “game building sandbox” that enables you to build and play 3D games in just a few minutes. It is currently in early access and is available on Steam.

https://twitter.com/artofsully/status/1139230946492682240

Here's how Game Builder makes “building a game feel like playing a game”:

Source: Google

Following are some of the features that Game Builder comes with:

Everything is multiplayer
Game Builder's always-on multiplayer feature allows multiple users to build and play games simultaneously. Your friends can even play the game while you are still working on it.

Thousands of 3D models from Google Poly
You can find thousands of free 3D models (such as rocket ships, synthesizers, and ice cream cones) for your games on Google Poly. You can also “remix” most of the models using Tilt Brush and Google Blocks application integration to make them fit your game. Once you find the right 3D model, you can easily and instantly use it in your game.

No code, no compilation required
The platform is designed for all skill levels, from players building their first game to game developers who want a faster way to realize their game ideas. Game Builder's card-based visual programming lets you bring your game to life with a bare minimum of programming knowledge: you just drag and drop cards to answer questions like “How do I move?” You can also create your own cards with Game Builder's extensive JavaScript API, which allows you to script almost everything in the game. Because the code is live, you just save your changes and you are ready to play the game without any compilation.

Apart from these features, you can also create levels with terrain blocks, edit the physics of objects, create lighting and particle effects, and more. Once the game is ready, you can share your creation on Steam Workshop.
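The card idea (small, composable behaviors attached to a question like "How do I move?") can be illustrated as follows. Game Builder's real scripting API is JavaScript; the card names, signatures, and state layout below are invented for illustration.

```python
# Illustration of the card-programming idea (invented names; Game Builder's
# real API is JavaScript): each card is a small behavior applied to state.
def move_card(speed):
    def apply(state):
        state["x"] += speed
        return state
    return apply

def jump_card(height):
    def apply(state):
        state["y"] = max(state["y"], height)
        return state
    return apply

# "Dragging cards onto a question" amounts to composing small behaviors.
how_do_i_move = [move_card(speed=2), jump_card(height=1)]
state = {"x": 0, "y": 0}
for card in how_do_i_move:
    state = card(state)
```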
Many people are commending this easy approach to game building, though others point out that it is nothing new; we have seen similar platforms in the past, such as GameMaker by YoYo Games. "I just had a play with it. It seems very well thought out. It has a very nice tutorial that introduces all the basic concepts. I am looking forward to trying out the multiplayer aspect, as that seems to be the most compelling thing about it," a Hacker News user commented.

You can read Google's official announcement for more details.

Google Research Football Environment: A Reinforcement Learning environment for AI agents to master football
Google Walkout organizer, Claire Stapleton resigns after facing retaliation from management
Ian Lance Taylor, Golang team member, adds another perspective to Go being Google's language
Google Research Football Environment: A Reinforcement Learning environment for AI agents to master football

Amrata Joshi
10 Jun 2019
4 min read
Last week, Google researchers announced the release of the Google Research Football Environment, a reinforcement learning environment in which agents can learn to master football. It features a physics-based 3D football simulation where agents control either one or all of the players on their team, learn to pass between them, and try to overcome the opponent's defense to score goals. The Football Environment offers a game engine, a set of research problems called the Football Benchmarks, the Football Academy, and more. The researchers have released a beta version of the open-source code on GitHub to facilitate research.

Let's take a brief look at each of the elements in the Google Research Football Environment.

Football Engine: The core of the Football Environment
Based on a modified version of Gameplay Football, the Football Engine simulates a full football match, including fouls, goals, corner and penalty kicks, and offsides. The engine is written in C++, which allows it to run both with and without GPU-based rendering enabled. It supports learning from different state representations that contain semantic information, such as player locations, as well as learning from raw pixels. The engine can run in either stochastic or deterministic mode for investigating the impact of randomness, and it is compatible with the OpenAI Gym API.

Read Also: Create your first OpenAI Gym environment [Tutorial]

Football Benchmarks: Learning from the actual field game
With the Football Benchmarks, the researchers propose a set of benchmark problems for RL research based on the Football Engine. These benchmarks center on goals such as playing a "standard" game of football against a fixed, rule-based opponent. Three versions are provided, the Football Easy Benchmark, the Football Medium Benchmark, and the Football Hard Benchmark, which differ only in the strength of the opponent.
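Because the engine is compatible with the OpenAI Gym API, training code drives it through the standard reset/step loop. The stub below illustrates that interface shape without requiring the real package; the details (a 115-dimensional flattened observation, a 19-action discrete set, and the `gfootball.env.create_environment` entry point mentioned in the comment) are assumptions based on the project's documentation, and the stub's rewards are fabricated for illustration.

```python
import random

# Minimal stub showing the OpenAI Gym interface (reset/step returning
# observation, reward, done, info) that the Football Engine exposes. In
# practice the environment would come from the gfootball package, e.g.:
#   env = gfootball.env.create_environment(env_name="academy_empty_goal_close")
# The interaction loop below is the same either way.

class StubFootballEnv:
    N_ACTIONS = 19                    # assumed size of the discrete action set

    def reset(self):
        self.steps = 0
        return [0.0] * 115            # e.g. a flattened "simple115" state vector

    def step(self, action):
        self.steps += 1
        obs = [0.0] * 115
        reward = 1.0 if self.steps == 10 else 0.0   # pretend a goal on step 10
        done = self.steps >= 10
        return obs, reward, done, {}

env = StubFootballEnv()
obs = env.reset()
total_reward, done = 0.0, False
while not done:                       # the standard Gym interaction loop
    action = random.randrange(env.N_ACTIONS)
    obs, reward, done, info = env.step(action)
    total_reward += reward
print(total_reward)                   # → 1.0
```

Gym compatibility is what lets off-the-shelf agents such as DQN or IMPALA implementations train against the engine with no environment-specific glue code.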
They also provide benchmark results for two state-of-the-art reinforcement learning algorithms, DQN and IMPALA, which can be run in multiple processes on a single machine or concurrently on many machines.

Image Source: Google's blog post

These results indicate that the Football Benchmarks are research problems of varying difficulty. According to the researchers, the Football Easy Benchmark is suitable for research on single-machine algorithms, while the Football Hard Benchmark is challenging even for massively distributed RL algorithms.

Football Academy: Learning from a set of difficult scenarios
The Football Academy is a diverse set of scenarios of varying difficulty that lets researchers explore new research ideas and test high-level concepts. It also provides a foundation for investigating curriculum learning, where agents learn progressively harder scenarios. The official blog post states, "Examples of the Football Academy scenarios include settings where agents have to learn how to score against the empty goal, where they have to learn how to quickly pass between players, and where they have to learn how to execute a counter-attack. Using a simple API, researchers can further define their own scenarios and train agents to solve them."

Users are giving mixed reactions to this news, as some find nothing new in the Google Research Football Environment. A user commented on Hacker News, "I guess I don't get it... What does this game have that SC2/Dota doesn't? As far as I can tell, the main goal for reinforcement learning is to make it so that it doesn't take 10k learning sessions to learn what a human can learn in a single session, and to make self-training without guiding scenarios feasible." Another user commented, "This doesn't seem that impressive: much more complex games run at that frame rate? FIFA games from the 90s don't look much worse and certainly achieved those frame rates on much older hardware." A few others, however, think there is a lot to learn from this environment; another comment reads, "In other words, you can perform different kinds of experiments and learn different things by studying this environment."

Here's a short YouTube video demonstrating Google Research Football: https://youtu.be/F8DcgFDT9sc

To know more about this news, check out Google's blog post.

Google researchers propose building service robots with reinforcement learning to help people with mobility impairment
Researchers propose a reinforcement learning method that can hack Google reCAPTCHA v3
Researchers input rabbit-duck illusion to Google Cloud Vision API and conclude it shows orientation-bias