
Tech News


NVTOP: An htop-like monitoring tool for NVIDIA GPUs on Linux

Prasad Ramesh
09 Oct 2018
2 min read
People started using htop when top alone just didn't provide enough information. Now there is NVTOP, a tool that looks similar to htop but displays the process information for your NVIDIA GPUs. It works on Linux systems and displays detailed information about processes and memory use per GPU, as well as total GPU and memory usage. The first version of this tool was released in July last year. The latest change made the process list and command options scrollable.

Some of the features of NVTOP are:

- Sorting by column
- Selecting or ignoring a specific GPU by ID
- Killing the selected process
- A monochrome option

It has multi-GPU support and can display the running processes from all of your GPUs. The information printed out looks similar to what htop would display.

Source: GitHub

There is also a manual page to give some guidance in using NVTOP. It can be accessed with this command:

```
man nvtop
```

OS-specific installation steps for Ubuntu/Debian, Fedora/RedHat/CentOS, OpenSUSE, and Arch Linux are available on GitHub.

Requirements

Two libraries are needed to build and run NVTOP:

- The NVIDIA Management Library (NVML), for querying GPU information (a small sketch of querying NVML from Python follows at the end of this article)
- The ncurses library, for the colorful user interface

Supported GPUs

NVTOP works only with NVIDIA GPUs and runs on Linux systems. One of its dependencies, the NVML library, does not support some queries on GPUs older than the Kepler microarchitecture, so anything before the GeForce 600 series, GeForce 700 series, or GeForce 800M will likely not work. For AMD users, there is a similar tool called radeontop.

The tool is provided under the GPLv3 license. For more details, head on to the NVTOP GitHub repository.

Read next:
- NVIDIA leads the AI hardware race. But which of its GPUs should you use for deep learning?
- NVIDIA announces pre-orders for the Jetson Xavier Developer Kit, an AI chip for autonomous machines, at $2,499
- NVIDIA open sources its material definition language, MDL SDK
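As mentioned above, NVTOP reads its data through NVML. For a rough idea of the kind of queries involved, here is a hedged sketch in Python using the pynvml bindings; NVTOP itself is a C program talking to NVML directly, so this is an analogy, not its actual code.

```python
# Sketch: query per-GPU utilization and memory via NVML (pip install pynvml).
# Assumes an NVIDIA driver is installed.
import pynvml

pynvml.nvmlInit()
for i in range(pynvml.nvmlDeviceGetCount()):
    handle = pynvml.nvmlDeviceGetHandleByIndex(i)
    name = pynvml.nvmlDeviceGetName(handle)
    if isinstance(name, bytes):  # older pynvml versions return bytes
        name = name.decode()
    mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
    util = pynvml.nvmlDeviceGetUtilizationRates(handle)
    print(f"GPU {i} ({name}): {util.gpu}% utilization, "
          f"{mem.used / 2**20:.0f}/{mem.total / 2**20:.0f} MiB")
pynvml.nvmlShutdown()
```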


Bootstrap 5 to replace jQuery with vanilla JavaScript

Bhagyashree R
13 Feb 2019
2 min read
The upcoming major version of Bootstrap, version 5, will no longer have jQuery as a dependency; it will be replaced with vanilla JavaScript. In 2017, the Bootstrap team opened a pull request with the aim of removing jQuery entirely from the Bootstrap source, and it is now nearing completion. Under this pull request, the team has removed jQuery from 11 plugins including Util, Alert, Button, Carousel, and more. Using 'Data' and 'EventHandler' in unit tests is no longer supported. Additionally, Internet Explorer will not be compatible with this version. Despite these updates, developers will be able to use this version both with and without jQuery. Since this will be a major release, users can expect a few breaking changes.

Bootstrap is not alone; many other projects have been thinking of decoupling from jQuery. Last year, for example, GitHub incrementally removed jQuery from its frontend, mainly because of the rapid evolution of web standards and jQuery losing its relevance over time.

This news triggered a discussion on Hacker News, and many users were happy about this development. One user commented, "I think the reason is that many of the problems jQuery was designed to solve (DOM manipulation, cross-browser compatibility issues, AJAX, cool effects) have now been implemented as standards, either in Javascript or CSS and many developers consider the 55k minified download not worth it." Another user added, "The general argument now is that 95%+ of jQuery is now native in browsers (with arguably the remaining 5% being odd overly backward compatible quirks worth ignoring), so adding a JS dependency for them is "silly" and/or a waste of bandwidth."

To read more in detail, check out Bootstrap's GitHub repository.

Read next:
- jQuery File Upload plugin exploited by hackers over 8 years, reports Akamai's SIRT researcher
- GitHub parts ways with JQuery, adopts Vanilla JS for its frontend
- Will putting limits on how much JavaScript is loaded by a website help prevent user resource abuse?


How to create non-player characters (NPCs) with Unity 2018

Amarabha Banerjee
26 Apr 2018
10 min read
Today, we will learn to create game characters, focusing mainly on non-player characters. Our Cucumber Beetles will serve as our game's non-player characters and will be the Cucumber Man's enemies. We will incorporate the Cucumber Beetles into our game through direct placement, review the beetles' 11 animations, and make changes to the non-player character's animation controller. In addition, we will write scripts to control the non-player characters. We will also add cucumber patches, cucumbers, and cherries to our game world.

Understanding the non-player characters

Non-player characters, commonly referred to as NPCs, are simply game characters that are not controlled by a human player. These characters are controlled through scripts, and their behaviors are usually responsive to in-game conditions. Our game's non-player characters are the Cucumber Beetles. These beetles, as depicted in the following screenshot, have six legs that they can walk on; under special circumstances, they can also walk on their hind legs.

Cucumber Beetles are real insects and are a threat to cucumbers. They cannot really walk on their hind legs, but they can in our game.

Importing the non-player characters into our game

You are now ready to import the asset package for our game's non-player character, the Cucumber Beetle. Go through the following steps to import the package:

1. Download the Cucumber_Beetle.unitypackage file from the publisher's companion website.
2. In Unity, with your game project open, select Assets | Import Package | Custom Package from the top menu.
3. Navigate to the location of the asset package you downloaded in step 1 and click the Open button.
4. When presented with the Import Asset Package dialog window, click the Import button.

As you will notice, the Cucumber_Beetle asset package contains several assets related to the Cucumber Beetles, including a controller, scripts, a prefab, animations, and other assets.

Now that the Cucumber_Beetle asset package has been imported into our game project, we should save our project. Use the File | Save Project menu option. Next, let's review what was imported.

In the Project panel, under Assets | Prefabs, you will see a new Beetle.Prefab. Also in the Project panel, under Assets, you will see a Beetle folder. It is important that you understand what each component in the folder is for. Please refer to the following screenshot for an overview of the assets you will be using in this chapter in regard to the Cucumber Beetle.

The assets in the previous screenshot that were not called out include a readme.txt file, the texture and materials for the Cucumber Beetle, and the source files. We will review the Cucumber Beetle's animations in the next section.

Animating our non-player characters

Several Cucumber Beetle animations have been prepared for use in our game. Here is a list of the animation names as they appear in our project, along with brief descriptions of how we will incorporate each animation into our game.
The animations are listed in alphabetical order by name:

| Animation Name | Usage Details |
|---|---|
| Attack_Ground | The beetle attacks the Cucumber Man's feet from the ground |
| Attack_Standing | The beetle attacks the Cucumber Man from a standing position |
| Die_Ground | The beetle dies from a starting position on the ground |
| Die_Standing | The beetle dies from a starting position of standing on its hind legs |
| Eat_Ground | The beetle eats cucumbers while on the ground |
| Idle_Ground | The beetle is not eating, walking, fighting, or standing |
| Idle_Standing | The beetle is standing, but not walking, running, or attacking |
| Run_Standing | The beetle runs on its hind legs |
| Stand | The beetle goes from an on-the-ground position to standing (it stands up) |
| Walk_Ground | The beetle walks using its six legs |
| Walk_Standing | The beetle walks on its hind legs |

You can preview these animations by clicking on an animation file, such as Eat_Ground.fbx, in the Project panel. Then, in the Inspector panel, click the play button to watch the animation. There are 11 animations for our Cucumber Beetle, and we will use scripting later to determine when each animation is played. In the next section, we will add the Cucumber Beetle to our game.

Incorporating the non-player characters into our game

First, let's simply drag the Beetle.Prefab from the Assets/Prefab folder in the Project panel into our game in Scene view. Place the beetle somewhere in front of the Cucumber Man so that the beetle can be seen as soon as you put the game into game mode. A suggested placement is illustrated in the following screenshot.

When you put the game into game mode, you will notice that the beetle cycles through its animations. If you double-click the Beetle.controller in the Assets | Beetle folder in the Project panel, you will see, as shown in the following screenshot, that we currently have several animations set to play successively and repeatedly. This initial setup is intended to give you a first, quick way of previewing the various animations. In the next section, we will modify the animation controller.

Working with the Animation Controller

We will use an Animation Controller to organize our NPCs' animations. The Animation Controller will also be used to manage the transitions between animations. Before we start making changes to our Animation Controller, we need to identify what states our beetle has and then determine what transitions each state can have in relation to other states. Here are the states the beetle can have, each tied to an animation:

- Idle on Ground
- Walking on Ground
- Eating on Ground
- Attacking on Ground
- Die on Ground
- Stand
- Standing Idle
- Standing Walk
- Standing Run
- Standing Attack
- Die Standing

With the preceding list of states, we can assign the following transitions:

From Idle on Ground to:
- Walking on Ground
- Running on Ground
- Eating on Ground
- Attacking on Ground
- Stand

From Stand to:
- Standing Idle
- Standing Walk
- Standing Run
- Standing Attack

Reviewing the transitions from Idle on Ground to Stand demonstrates the type of state-to-state transition decisions you need to make for your game.

Let's turn our attention back to the Animation Controller window. You will notice that there are two tabs in the left panel of that window: Layers and Parameters. The Layers tab shows a Base Layer; while we can create additional layers, we do not need to do this for our game. The Parameters tab is empty, and that is fine. We will make our changes using the layout area of the Animation Controller window (the area with the grid background).
Let's start by making the following changes. For all 11 New State buttons, do the following:

1. Left-click the state button.
2. Look in the Inspector panel to determine which animation is associated with the state button.
3. Rename the state in the Inspector panel to reflect the animation.
4. Click the return button.
5. Double-check the state button to ensure your change was made.

When you have completed the preceding five steps for all 11 states, your Animation Controller window should match the following screenshot.

If you were to put the game into game mode, you would see that nothing has changed; we only changed the state names so they make more sense to us. So, we have some more work to do with the Animation Controller.

Currently, the Attacking on Ground state is the default. That is not what we want; it makes more sense for the Idle on Ground state to be our default. To make that change, right-click the Idle on Ground state and select Set as Layer Default State.

Next, we need to make a series of changes to the state transitions. There are a lot of states, and there will be a lot of transitions. To make things easier, we will start by deleting all the default transitions. To accomplish this, left-click each white line with an arrow and press your keyboard's Delete key. Do not delete the orange line that goes from Entry to Idle on Ground. After all transitions have been deleted, you can drag your states around so you have more working room. You might temporarily reorganize them in a manner similar to what is shown in the following screenshot.

Our next task is to create all of our state transitions. Follow these steps for each state transition you want to add:

1. Right-click the originating state.
2. Select Create Transition.
3. Click on the destination state.

Once you have made all your transitions, you can reorganize your states to declutter the Animation Controller's layout area. A suggested final organization is provided in the following screenshot.

As you can see in our final arrangement, we have 11 states and over two dozen transitions. You will also note that the Die on Ground and Die Standing states do not have any transitions. In order for us to use these animations in our game, they must be placed into an Animation Controller.

Let's run a quick experiment:

1. Select the Beetle character in the Hierarchy panel.
2. In the Inspector panel, click the Add Component button.
3. Select Physics | Box Collider.
4. Click the Edit Collider button.
5. Modify the size and position of the box collider so that it encases the entire beetle body.
6. Click the Edit Collider button again to get out of edit mode.

Your box collider should look similar to what is depicted in the following screenshot.

Next, let's create a script that invokes the Die on Ground animation when the Cucumber Man character collides with the beetle. This will simulate the Cucumber Man stepping on the beetle. Follow these steps:

1. Select the Beetle character in the Hierarchy panel.
2. In the Inspector panel, click the Add Component button.
3. Select New Script.
4. Name the script BeetleNPC.
5. Click the Create and Add button.
6. In the Project view, select Favorites | All Scripts | BeetleNPC.
7. Double-click the BeetleNPC script file.
Edit the script so that it matches the following code block:

```csharp
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class BeetleNPC : MonoBehaviour
{
    Animator animator;

    // Use this for initialization
    void Start()
    {
        animator = GetComponent<Animator>();
    }

    // Collision Detection Test
    void OnCollisionEnter(Collision col)
    {
        if (col.gameObject.CompareTag("Player"))
        {
            animator.Play("Die on Ground");
        }
    }
}
```

This code detects a collision between the Cucumber Man and the beetle. If a collision is detected, the Die on Ground animation is played. As you can see in the following screenshot, the Cucumber Man defeated the Cucumber Beetle.

This short test demonstrated two important things that will help us further develop this game:

- Earlier in this section, you renamed all the states in the Animation Controller window. The names you gave the states are the ones you will reference in code.
- Since the animation we used did not have any transitions to other states, the Cucumber Beetle will remain in the final position of the animation unless we script it otherwise. So, if we had 100 beetles and defeated them all, all 100 would remain on their backs in the game world.

This was a simple and successful scripting test for our Cucumber Beetle. We will need to write several more scripts to manage the beetles in our game. First, there are some game world modifications we will make.

To summarize, we discussed how to create interesting character animations and bring them to life using the Unity 2018 platform. You read an extract from the book Getting Started with Unity 2018, written by Dr. Edward Lavieri. This book gives you a practical understanding of how to get started with Unity 2018.

Read next:
- Unity 2D & 3D game kits simplify Unity game development for beginners
- Build a Virtual Reality Solar System in Unity for Google Cardboard
- Unity plugins for augmented reality application development


PHP 8 and 7.4 to come with just-in-time (JIT) compilation to make most CPU-intensive workloads run significantly faster

Bhagyashree R
01 Apr 2019
3 min read
Last week, Joe Watkins, a PHP developer, shared that PHP 8 will support just-in-time (JIT) compilation. This decision was the result of a vote among the PHP core developers in favor of supporting JIT in PHP 8, and also in PHP 7.4 as an experimental feature.

If you don't know what JIT is, it is a compilation strategy in which a program is compiled on the fly into a form that's usually faster, typically the host CPU's native instruction set. To do this, the JIT compiler has access to dynamic runtime information, whereas a standard compiler doesn't.

How are PHP programs compiled?

PHP comes with a virtual machine named the Zend VM. Human-readable scripts are compiled into instructions called opcodes, which the virtual machine understands. Opcodes are low-level, and hence faster to translate to machine code than the original PHP code. This stage of execution is called compile time. The opcodes are then executed by the Zend VM in the runtime stage.

JIT is being implemented as an almost independent part of OPcache, an extension that caches the opcodes so that compilation happens only when it is required. In PHP, the JIT will treat the instructions generated for the Zend VM as an intermediate representation. It will then generate architecture-dependent machine code, so that the host of your code is no longer the Zend VM but the CPU directly. (A toy illustration of the opcode-caching idea appears after this article.)

Why is JIT being introduced in PHP?

PHP hits the brick wall. Many improvements have been made to PHP since version 7.0, including optimizations for HashTable, specializations in the Zend VM for certain opcodes, specializations in the compiler for certain sequences, and many more. After so many improvements, PHP has now reached the limit of how much further it can be improved this way.

PHP for non-web scenarios. Adding support for JIT will allow PHP to be used in scenarios for which it is not even considered today: non-web, CPU-intensive workloads, where the performance benefits will be very substantial.

Faster innovation and more secure implementations. With JIT support, the team will be able to develop built-in functions in PHP instead of C without a huge performance penalty. This will make PHP less susceptible to memory management problems, overflows, and other issues associated with C-based development.

We can expect the release of PHP 7.4 later this year, which will debut JIT in PHP. Though there is no official announcement about the release schedule of PHP 8, many are speculating a release in late 2021.

Read Joe Watkins' announcement on his blog.

Read next:
- PEAR's (PHP Extension and Application Repository) web server disabled due to a security breach
- Symfony leaves PHP-FIG, the framework interoperability group
- Google App Engine standard environment (beta) now includes PHP 7.2
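As promised above, here is a toy sketch of the opcode-caching idea, written in Python rather than PHP purely as an analogy (Python also compiles source to VM bytecode). An opcode cache such as OPcache keeps the compiled form around so later executions skip the compile step; a JIT goes one step further and translates that bytecode into native machine code.

```python
# Conceptual analogy only: cache compiled bytecode, reuse it on later runs.
_cache = {}  # source text -> compiled code object

def run(source: str) -> None:
    code = _cache.get(source)
    if code is None:
        # "Compile time": source is translated to VM bytecode once.
        code = compile(source, "<script>", "exec")
        _cache[source] = code
    # "Runtime": the virtual machine executes the (cached) bytecode.
    exec(code)

run("print(sum(range(10)))")  # compiles, then executes
run("print(sum(range(10)))")  # executes straight from the cache
```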


React Storybook UI: Logging user interactions with Actions add-on [Tutorial]

Packt Editorial Staff
17 Jul 2018
7 min read
Sometimes, you end up creating a whole new page, or a whole new app, just to see what your component can do on its own. This can be a painful process, and it is why Storybook exists for React. With Storybook, you get an automated sandboxed environment to work in. It also handles all the build steps, so you can write a story for your components and see the result. In this article we are going to use Storybook add-ons, which let you test almost any aspect of your component before worrying about integrating it into your application. Specifically, we are going to look at Actions, an add-on that is enabled in Storybook by default.

This React tutorial is an extract from the book React 16 Tooling, written by Adam Boduch. Adam Boduch has been involved with large-scale JavaScript development for nearly 10 years. He has practical experience with real-world software systems and the scaling challenges they pose.

Working with Actions in React Storybook

The Actions add-on is enabled in your Storybook by default. The idea behind Actions is that once you select a story, you can interact with the rendered page elements in the main pane. Actions provide you with a mechanism that logs user interactions in the Storybook UI. Additionally, Actions can serve as a general-purpose tool to help you monitor data as it flows through your components.

Let's start with a simple button component:

```javascript
import React from 'react';

const MyButton = ({ onClick }) => (
  <button onClick={onClick}>My Button</button>
);

export default MyButton;
```

The MyButton component renders a button element and assigns it an onClick event handler. The handler itself isn't defined by MyButton; it's passed in as a prop. So let's create a story for this component and pass it an onClick handler function:

```javascript
import React from 'react';
import { storiesOf } from '@storybook/react';
import { action } from '@storybook/addon-actions';
import MyButton from '../MyButton';

storiesOf('MyButton', module).add('clicks', () => (
  <MyButton onClick={action('my component clicked')} />
));
```

Do you see the action() function that's imported from @storybook/addon-actions? This is a higher-order function: a function that returns another function. When you call action('my component clicked'), you get a new function in return. The new function behaves kind of like console.log(), in that you can assign it a label and log arbitrary values. The difference is that the output of a function created by the Storybook action() add-on is rendered right in the actions pane of the Storybook UI.

As usual, the button element is rendered in the main pane. The content that you see in the actions pane is the result of clicking on the button three times. The output is exactly the same with every click, so it is all grouped under the "my component clicked" label that you assigned to the handler function.

In the preceding example, the event handler functions that action() creates are useful as substitutes for the actual event handler functions that you would pass to your components. Other times, you actually need the event handling behavior to run. For example, say you have a controlled form field that maintains its own state and you want to see what happens as the state changes. For cases like these, I find the simplest and most effective approach is to add event handler props, even if you're not using them for anything else.
Let's take a look at an example of this:

```javascript
import React, { Component } from 'react';

class MyRangeInput extends Component {
  static defaultProps = {
    onChange() {},
    onRender() {}
  };

  state = { value: 25 };

  onChange = ({ target: { value } }) => {
    this.setState({ value });
    this.props.onChange(value);
  };

  render() {
    const { value } = this.state;
    this.props.onRender(value);
    return (
      <input
        type="range"
        min="1"
        max="100"
        value={value}
        onChange={this.onChange}
      />
    );
  }
}

export default MyRangeInput;
```

Let's start by taking a look at the defaultProps of this component. By default, this component has two no-op handler functions for onChange and onRender, so that if they're not set, they can still be called and nothing will happen. As you might have guessed, we can now pass action() handlers to MyRangeInput components. Let's try this out. Here's what your stories/index.js looks like now:

```javascript
import React from 'react';
import { storiesOf } from '@storybook/react';
import { action } from '@storybook/addon-actions';
import MyButton from '../MyButton';
import MyRangeInput from '../MyRangeInput';

storiesOf('MyButton', module).add('clicks', () => (
  <MyButton onClick={action('my component clicked')} />
));

storiesOf('MyRangeInput', module).add('slides', () => (
  <MyRangeInput
    onChange={action('range input changed')}
    onRender={action('range input rendered')}
  />
));
```

Now when you view this story in the Storybook UI, you should see lots of actions logged as you slide the range input slider. As the slider handle moves, you can see that the two event handler functions you passed to the component log the value at different stages of the component rendering life cycle. The most recent action is logged at the top of the pane, unlike browser dev tools, which log the most recent value at the bottom.

Let's revisit the MyRangeInput code for a moment. The first function that's called when the slider handle moves is the change handler:

```javascript
onChange = ({ target: { value } }) => {
  this.setState({ value });
  this.props.onChange(value);
};
```

This onChange() method is internal to MyRangeInput. It's needed because the input element it renders uses the component state as the single source of truth; these are called controlled components in React terminology. First, it sets the value state using the target.value property from the event argument. Then, it calls this.props.onChange(), passing it the same value. This is how you can see the event value in the Storybook UI.

Note that this isn't the right place to log the updated state of the component. When you call setState(), you have to assume that you're done dealing with state in that function, because state doesn't always update synchronously. Calling setState() only schedules the state update and the subsequent re-render of your component.

Here's an example of how this can cause problems. Let's say that instead of logging the value from the event argument, you logged the value state after setting it. The onChange handler would log the old state while the onRender handler logs the updated state. This sort of logging output is super confusing if you're trying to trace an event value to rendered output; things don't line up! Never log state values after calling setState().

If the idea of calling noop functions makes you feel uncomfortable, then maybe this approach to displaying actions in Storybook isn't for you.
On the other hand, you might find it valuable to have a utility that can log essentially anything at any point in the life cycle of your component without the need to write a bunch of debugging code inside the component. For such cases, Actions are the way to go.

To summarize, we learned about the Storybook Actions add-on and saw how it provides a mechanism for logging user interactions in the Storybook UI. Grab the book React 16 Tooling today; it covers the most important tools, utilities, and libraries that every React developer needs to know, in detail.

Read next:
- What is React.js and how does it work?
- Is React Native really a native framework?
- React Native announces re-architecture of the framework for better performance


Go User Survey 2018 results: Golang goes from strength to strength, as more engineers than ever are using it at work

Amrata Joshi
29 Mar 2019
5 min read
Yesterday, the Go team announced the results of their user survey for the year 2018. 5,883 users from 103 different countries participated in the survey.

Key highlights from the Go User Survey 2018

According to the report, for the first time, half of the survey respondents said that they are currently using Go as part of their daily routine. This year proved to be even better for Go, with a significant increase in the number of respondents who develop their projects in Go as part of their jobs and who also use Go outside of their work responsibilities. A majority of survey respondents also said that Go is their most-preferred programming language.

Here are some other findings:

- API/RPC services and CLI tools are the most common uses of Go.
- VS Code and GoLand have become the most popular code editors among survey respondents.
- Most Go developers use more than one primary OS for development, with Linux and macOS the most popular.
- Automation tasks were the fastest-growing area for Go.
- Web development remains the most common domain, but DevOps has shown the highest year-over-year growth and is now the second most common domain.
- Survey respondents have been shifting from on-premise Go deployments to containers and serverless cloud deployments.

To simplify the survey report, the Go team broke the responses down into three groups:

- Those who use Go both in and outside of work
- Those who use Go professionally but not outside of work
- Those who only use Go outside of their job responsibilities

According to the survey, nearly half (46%) of respondents write Go code both professionally and during their free time, because the language appeals to developers who do not view software engineering only as a day job. 85% of respondents said they would prefer to use Go for their next project.

Would you recommend Go to a friend?

This year, the team added the question "How likely are you to recommend Go to a friend or colleague?" to calculate a Net Promoter Score. This score measures how many more "promoters" a product has than "detractors" and ranges from -100 to 100; a positive value suggests most people are likely to recommend a product, while a negative value suggests most people wouldn't. The 2018 score is 61 (68% promoters minus 7% detractors).

How satisfied are developers with Go?

The survey also asked many questions about developer satisfaction with Go. A majority of respondents indicated a high level of satisfaction, consistent with prior years' results. Around 89% of respondents said that they are happy with Go, and 66% felt that it is working well for their team. These metrics showed an increase in 2017 and have mostly remained stable this year.

The downside

About half of the survey respondents work on existing projects written in other languages, and a third work on a team or project that prefers a language other than Go. The reasons respondents highlighted for this were missing language features and libraries. With the help of machine learning tools, the team identified the biggest challenges developers face while using Go. The top three are:
- Package management. One response from the survey reads, "keeping up with vendoring, dependency / packet [sic] management / vendoring is not unified."
- Major differences from more familiar programming languages. One response reads, "Syntax close to C-languages with slightly different semantics makes me look up references somewhat more than I'd like"; another respondent says, "My coworkers who come from non-Go backgrounds are trying to use Go as a version of their previous language but with channels and Goroutines."
- Lack of generics. Another response reads, "Lack of generics makes it difficult to persuade people who have not tried Go that they would find it efficient. Hard to build richer abstractions (want generics)."

Go community

The Go blog, Reddit's r/golang, Twitter, and Hacker News remain the primary sources for Go news. This year, 55% of survey respondents said they are interested in contributing to the Go community, slightly lower than last year (59%). The standard library and official Go tools require interacting with the core Go team, which could be one reason for the dip. Another reason is the dip in the percentage of participants willing to take up Go project leadership: it was 30% last year and is 25% this year. Similarly, only 46% of respondents this year feel confident about taking on leadership of Go, down from 54% last year.

You can read the complete results of the survey on Golang's blog post.

Update: The title of this article was amended on 4.1.2019.

Read next:
- GitHub releases Vulcanizer, a new Golang Library for operating Elasticsearch
- Google Podcasts is transcribing full podcast episodes for improving search results
- State of Go February 2019 – Golang developments report for this month released

Symfony leaves PHP-FIG, the framework interoperability group

Amrata Joshi
21 Nov 2018
2 min read
Yesterday, Symfony, a community of 600,000 developers from more than 120 countries, announced that it will no longer be a member of PHP-FIG, the framework interoperability group. Prior to Symfony, other major members to leave this group include Laravel, Propel, Guzzle, and Doctrine. The main goal of the PHP-FIG group is for member projects to work together and maintain interoperability, discuss commonalities between projects, and work together to make them better.

Why Symfony is leaving PHP-FIG

PHP-FIG has been working on various PSRs (PHP Standard Recommendations). Kévin Dunglas, a core team member at Symfony, said, "It looks like it's not the goal anymore, 'cause most (but not all) new PSRs are things no major frameworks ask for, and that they can't implement without breaking their whole ecosystem."

https://twitter.com/fabpot/status/1064946913596895232

The fact that major contributors had already left the group could be another reason for Symfony to quit. But many seem disappointed by this move, as they aren't satisfied with the reasons given.

https://twitter.com/mickael_andrieu/status/1065001101160792064

A matter of concern for Symfony was that major projects were not being implemented as a combined effort.

https://twitter.com/dunglas/status/1065004250005204998
https://twitter.com/dunglas/status/1065002600402247680

Something similar happened while working towards PSR-7, where commonalities between existing projects were given little importance; instead, it was treated as a new, separate framework.

https://twitter.com/dunglas/status/1065007290217058304
https://twitter.com/titouangalopin/status/1064968608646864897

People are still arguing over why Symfony quit.

https://twitter.com/gmponos/status/1064985428300914688

Will the PSRs die?

With this latest move, questions have been raised about Symfony's next steps. Will it still support PSRs, or is this the end of the PSRs for Symfony? Kévin Dunglas answered this question in one of his tweets: "Regarding PSRs, I think we'll implement them if relevant (such as PSR-11) but not the ones not in the spirit of a broad interop (as PSR-7/14)."

To know more about this news, check out Fabien Potencier's Twitter thread.

Read next:
- Perform CRUD operations on MongoDB with PHP
- Introduction to Functional Programming in PHP
- Building a Web Application with PHP and MariaDB – Introduction to caching


MongoDB withdraws controversial Server Side Public License from the Open Source Initiative's approval process

Richard Gall
12 Mar 2019
4 min read
MongoDB's Server Side Public License was controversial when it was first announced back in October, but the team was confident back then that the new license met the Open Source Initiative's approval criteria. However, things seem to have changed. The news that Red Hat was dropping MongoDB over the SSPL in January was a critical blow and appears to have dented MongoDB's ambitions. Last Friday, co-founder and CTO Eliot Horowitz announced that MongoDB had withdrawn its submission to the Open Source Initiative.

Horowitz wrote on the OSI approval mailing list that "the community consensus required to support OSI approval does not currently appear to exist regarding the copyleft provision of SSPL." Put simply, the debate around MongoDB's SSPL appears to have led its leadership to reconsider its approach.

Update: this article was amended 19.03.2019 to clarify that the Server Side Public License only requires commercial users (i.e., X-as-a-Service products) to open source their modified code. Any other users can still modify and use MongoDB code for free.

What's the purpose of MongoDB's Server Side Public License?

The Server Side Public License was developed by MongoDB as a means of protecting the project from "large cloud vendors" who want to "capture all of the value but contribute nothing back to the community." Essentially, the license includes a key modification to section 13 of the standard GPL (General Public License) that governs most open source software available today. You can read the SSPL in full here, but this is the crucial sentence:

"If you make the functionality of the Program or a modified version available to third parties as a service, you must make the Service Source Code available via network download to everyone at no charge, under the terms of this License."

This means that users are free to review, modify, and distribute the software or redistribute modifications to the software. It's only if a user modifies or uses the source code as part of an as-a-service offering that the full service must be open sourced. So essentially, anyone is free to modify MongoDB; it's only when you offer MongoDB as a commercial service that the conditions of the SSPL state that you must open source the entire service.

What issues do people have with the Server Side Public License?

The logic behind the SSPL seems sound, and probably makes a lot of sense in the context of an open source landscape that's almost being bled dry. But it presents a challenge to the very concept of open source software, where the idea that software should be free to use and modify (and, indeed, to profit from) is absolutely central. Moreover, even if it makes sense as a way of defending open source projects from the power of multinational tech conglomerates, it could be argued that the consequences of the license could harm smaller tech companies. As one user on Hacker News explained back in October:

"Let [sic] say you are a young startup building a cool SaaS solution. E.g. A data analytics solution. If you make heavy use of MongoDB it is very possible that down the line the good folks at MongoDB come calling since 'the value of your SaaS derives primarily from MongoDB...' So at that point you have two options - buy a license from MongoDB or open source your work (which they can conveniently leverage at no cost)."

The Hacker News thread is very insightful on the reasons why the license has been so controversial. Another Hacker News user, for example, described the license as "either idiotic or malevolent."
Read next: We need to encourage the meta-conversation around open source, says Nadia Eghbal [Interview]

What next for the Server Side Public License?

The license might have been defeated, but Horowitz and MongoDB are still optimistic that they can find a solution: "We are big believers in the importance of open source and we intend to continue to work with these parties to either refine the SSPL or develop an alternative license that addresses this issue in a way that will be accepted by the broader FOSS community," he said.

Whatever happens next, it's clear that there are some significant challenges for the open source world that will require imagination, and maybe even some risk-taking, to properly solve.


GitHub introduces ‘Template repository’ for easy boilerplate code management and distribution

Bhagyashree R
10 Jun 2019
2 min read
Yesterday, GitHub introduced template repositories, which let you share boilerplate code and directory structure across projects easily. This is similar to the idea of Boilr and Cookiecutter.

https://twitter.com/github/status/1136671651540738048

How to create a GitHub template repository

As per its name, a template repository enables developers to mark a repository as a template, which they can later use to create new repositories containing all of the template repository's files and folders. You can create a new template repository, or mark an existing one as a template if you have admin permissions: just navigate to the Settings page and click the 'Template repository' checkbox.

Once the template repository is created, anyone who has access to it will be able to generate a new repository with the same directory structure and files via the 'Use this template' button.

Source: GitHub

All the templates that you own, have access to, or have used in a previous project will also be available to you when creating a new repository, through the 'Choose a template' drop-down. Every template repository also gets a new '/generate' URL endpoint that lets you distribute your template more efficiently; you just need to link your template users directly to this endpoint. (A sketch of scripting this follows at the end of this article.)

Source: GitHub

Templating is similar to cloning a repository, except that it does not retain the history of the repository, giving users a clean new project with a single initial commit. Though this feature is still pretty basic, GitHub will likely add more functionality in the future, and it will be useful for junior developers and beginners to help them get started. Here's what one Hacker News user believes we can do with this feature:

"This is a part of something which could become a very powerful pattern: community-wide templates which include many best practices in a single commit: - Pre-commit hooks for linting/formatting and unit tests. - Basic CI pipeline configuration with at least build, test and release/deploy phases. - Package installation configuration for the frameworks you want. - Container/VM configuration for the languages you want to enable cross-platform and future-proof development. - Documentation to get started with it all."

Read the official announcement by GitHub for more details.

Read next:
- Github Sponsors: Could corporate strategy eat FOSS culture for dinner?
- GitHub Satellite 2019 focuses on community, security, and enterprise
- Atlassian Bitbucket, GitHub, and GitLab take collective steps against the Git ransomware attack
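For anyone wanting to script repository creation from a template, GitHub also exposes this through its REST API. The sketch below, in Python with the requests library, is a hedged illustration: the endpoint and preview media type reflect GitHub's API documentation around this time, and the token, owner, and repository names are placeholders.

```python
# Hypothetical example: create a new repository from a template repository
# via GitHub's REST API. OWNER/TEMPLATE-REPO, YOUR-USER, and the token are
# placeholders; the "baptiste" media type was required while the endpoint
# was in API preview.
import requests

resp = requests.post(
    "https://api.github.com/repos/OWNER/TEMPLATE-REPO/generate",
    headers={
        "Authorization": "token YOUR_PERSONAL_ACCESS_TOKEN",
        "Accept": "application/vnd.github.baptiste-preview+json",
    },
    json={"owner": "YOUR-USER", "name": "new-project-from-template"},
)
resp.raise_for_status()
print(resp.json()["full_name"])  # e.g. "YOUR-USER/new-project-from-template"
```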


Google releases two new hardware products, Coral dev board and a USB accelerator built around its Edge TPU chip

Sugandha Lahoti
06 Mar 2019
2 min read
Google teased its new hardware products built around its Edge TPU at the Google Next conference last summer. Yesterday, it officially launched the Coral Dev Board, a Raspberry Pi look-alike designed to run machine learning algorithms 'at the edge', and a USB accelerator.

Coral Dev Board

The Coral Dev Board runs Linux on an NXP i.MX 8M SoC paired with an Edge TPU chip for accelerating TensorFlow Lite, and exposes a 40-pin header. The board also features:

- 8GB eMMC storage and 1GB LPDDR4 RAM
- Wi-Fi and Bluetooth 4.1
- USB 2.0/3.0 ports
- 3.5mm audio jack
- DSI display interface and MIPI-CSI camera interface
- HDMI 2.0a connector
- Two digital PDM microphones

Source: Google

The Coral Dev Board can be used as a single-board computer when you need accelerated ML processing in a small form factor. It can also be used as an evaluation kit for the SoM and for prototyping IoT devices and other embedded systems. The board is available for $149.00. Google has also announced a $25 MIPI-CSI 5-megapixel camera for the dev board.

USB Accelerator

The USB Accelerator is a plug-in USB 3.0 stick that adds machine learning capabilities to existing Linux machines. This 65 x 30 mm accelerator connects to Linux-based systems via a USB Type-C port, and can also work with a Raspberry Pi board at USB 2.0 speeds. The accelerator is built around a 32-bit, 32MHz Cortex-M0+ chip with 16KB of flash and 2KB of RAM.

Source: Google

The USB Accelerator is available for $75. Developers can build machine learning models for both devices in TensorFlow Lite; a minimal sketch of running a model on the Edge TPU follows below. More information is available on Google's Coral Beta website.

Coming soon is the PCI-E Accelerator, for integrating the Edge TPU into legacy systems using a PCI-E interface. Also coming is a fully integrated System-on-Module with CPU, GPU, Edge TPU, Wi-Fi, Bluetooth, and Secure Element in a 40mm x 40mm pluggable module.

Read next:
- Google expands its machine learning hardware portfolio with Cloud TPU Pods (alpha)
- Intel acquires eASIC, a custom chip (FPGA) maker for IoT, cloud and 5G environments
- Raspberry Pi launches its last board for the foreseeable future: the Raspberry Pi 3 Model A+, available now at $25
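To give a feel for how models run on these devices, here is a hedged sketch of invoking an Edge TPU-compiled model from Python using the tflite_runtime package and the Edge TPU delegate, following the pattern in Coral's documentation; the model file name is a placeholder and the input is dummy data.

```python
# Sketch: run an Edge TPU-compiled TensorFlow Lite model.
# "model_edgetpu.tflite" is a placeholder for a model compiled for the Edge TPU.
import numpy as np
import tflite_runtime.interpreter as tflite

interpreter = tflite.Interpreter(
    model_path="model_edgetpu.tflite",
    # The delegate routes supported ops to the Edge TPU.
    experimental_delegates=[tflite.load_delegate("libedgetpu.so.1")],
)
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Feed a dummy tensor of the right shape and dtype, then run inference.
interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=inp["dtype"]))
interpreter.invoke()
print(interpreter.get_tensor(out["index"]))
```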

Mozilla and Google Chrome refuse to support Gab’s Dissenter extension for violating acceptable use policy

Bhagyashree R
12 Apr 2019
5 min read
Earlier this year, Gab, the "free speech" social network and a popular forum for far-right viewpoint holders and other fringe groups, launched a browser extension named Dissenter that creates an alternative comment section for any website. The plug-in has now been removed from the extension stores of both Mozilla and Google for violating their acceptable use policies. This decision comes after the Columbia Journalism Review reported the extension to the tech giants.

https://twitter.com/nausjcaa/status/1116409587446484994

The Dissenter plug-in, which goes by the tagline "the comment section of the internet", allows users to discuss any topic in real time without fearing that their posted comments will be removed by a moderator. The plug-in failed to pass Mozilla's review process and is now disabled for Firefox users, though users who have already installed it can continue to use it. The Gab team took to Twitter complaining about Mozilla's Acceptable Use Policy.

https://twitter.com/getongab/status/1116036111296544768

When asked for more clarity on which policies Dissenter did not comply with, Mozilla said that it had received abuse reports for the extension. It further added that the platform is being used for promoting violence, hate speech, and discrimination, but it failed to show any examples to lend credibility to these claims.

https://twitter.com/getongab/status/1116088926559666181

The extension developers responded by saying that they do moderate illegal conduct and posts on their platform as and when they are brought to their attention. "We do not display content containing words from a list of the most offensive racial epithets in the English language," added the Gab developers.

Soon after this, Google also removed the extension from the Chrome Web Store, stating the same reason: the extension does not comply with its policies.

After getting deplatformed, the Dissenter team has come to the conclusion that the best way forward is to create their own browser. They are thinking of forking Chromium or the privacy-focused web browser Brave. "That's it. We are going to fork Chromium and create a browser with Dissenter, ad blocking, and other privacy tools built in along with the guarantee of free speech that Silicon Valley does not provide."

https://twitter.com/getongab/status/1116308126461046784

Gab does not moderate views posted by its users until they are flagged for violations, and says it "treats its users as adults". So, unless people complain, the platform will not take any action against the threats and hate speech posted in comments. Though it is known for its tolerance of fringe views and has drawn immense heat from the public, things took a turn for the worse after the recent Christchurch shooting. The far-right extremist who shot dead 20+ Muslims and left 30 others injured in two mosques in New Zealand had shared his extremist manifesto on social media sites like Gab and 8chan. He had also live-streamed the shooting on Facebook, YouTube, and other platforms.

This is not the first time Gab has been involved in a controversy. Back in October last year, PayPal banned Gab following the anti-Semitic mass shooting in Pittsburgh. It was reported that the shooter was an active poster on the Gab website and had hinted at his intentions shortly before the attack. In the same month, hosting provider Joyent also suspended its services for Gab.
The platform has also been warned by Stripe for violations of its policies. Torba, the co-founder of Gab, said, "Payments companies like Paypal, Stripe, Square, Cash App, Coinbase, and Bitpay have all booted us off. Microsoft Azure, Joyent, GoDaddy, Apple, Google's Android store, and other infrastructure providers, too, have denied us service, all because we refuse to censor user-generated content that is within the boundaries of the law."

Looking at this move by Mozilla, many users felt that it contradicts Mozilla's goal of making the web free and open for all.

https://twitter.com/VerGreeneyes/status/1116216415734960134
https://twitter.com/ChicalinaT/status/1116101257494761473

A Hacker News user added, "While Facebook, Reddit, Twitter and now Mozilla may think they're doing a good thing by blocking what they consider hateful speech, it's just helping these people double down on thinking they're in the right. We should not be afraid of ideas. Speech != violence. Violence is violence. With platforms banning more and more offensive content and increasing the label of what is bannable, we're seeing a huge split in our world. People who could once agree to disagree now don't even want to share the same space with one another. It's all call out culture and it's terrible."

Many people think this step is nothing but a move towards mass censorship. "I see it as an active endorsement of filter funneling comments sections online, given that despite the operators of Dissenter having tried to make efforts to comply with the terms of service Mozilla have imposed for being listed in their gallery, [they] were given an unclear rationale as to how having "broken" these terms, and no clue as to what they were supposed to do to have avoided doing so," adds a Reddit user.

Mozilla has not revoked the add-on's signature, so Dissenter can be distributed while guaranteeing that the add-on is safe and can be updated automatically. Manual installation of the extension from Dissenter.com/download is also possible.

Read next:
- Mozilla developers have built BugBug which uses machine learning to triage Firefox bugs
- Mozilla adds protection against fingerprinting and Cryptomining scripts in Firefox Nightly and Beta
- Mozilla is exploring ways to reduce notification permission prompt spam in Firefox


AI can now help speak your mind: UC researchers introduce a neural decoder that translates brain signals to natural-sounding speech

Bhagyashree R
29 Apr 2019
4 min read
In research published in the journal Nature on Monday, a team of neuroscientists from the University of California, San Francisco introduced a neural decoder that can synthesize natural-sounding speech from brain activity. The research was led by Gopala Anumanchipalli, a speech scientist, and Josh Chartier, a bioengineering graduate student in the Chang lab, and was developed in the laboratory of Edward Chang, a professor of neurological surgery at UCSF.

Why is this neural decoder being introduced?

Many people lose their voice because of stroke, traumatic brain injury, or neurodegenerative diseases such as Parkinson's disease, multiple sclerosis, and amyotrophic lateral sclerosis. Assistive devices that track very small eye or facial muscle movements already enable people with severe speech disabilities to express their thoughts by writing them letter by letter. However, generating text or synthesized speech with such devices is often time-consuming, laborious, and error-prone. Another limitation is that these devices permit a maximum of about 10 words per minute, compared to the 100 to 150 words per minute of natural speech.

This research shows that it is possible to generate a synthesized version of a person's voice that is controlled by their brain activity. The researchers believe that, in the future, this approach could enable individuals with severe speech disabilities to communicate fluently. It could even reproduce some of the "musicality" of the human voice that conveys the speaker's emotions and personality.

"For the first time, this study demonstrates that we can generate entire spoken sentences based on an individual's brain activity," said Chang. "This is an exhilarating proof of principle that with technology that is already within reach, we should be able to build a device that is clinically viable in patients with speech loss."

How does this system work?

This research builds on an earlier study by Josh Chartier and Gopala K. Anumanchipalli, which showed how the speech centers in our brain choreograph the movements of the lips, jaw, tongue, and other vocal tract components to produce fluent speech. In the new study, Anumanchipalli and Chartier asked five patients being treated at the UCSF Epilepsy Center to read several sentences aloud. These patients had electrodes implanted in their brains to map the source of their seizures in preparation for neurosurgery. At the same time, the researchers recorded activity from a brain region known to be involved in language production.

The researchers used the audio recordings of each volunteer's voice to work out the vocal tract movements needed to produce those sounds. With this detailed map from sound to anatomy in hand, the scientists created a realistic virtual vocal tract for each volunteer that could be controlled by their brain activity. The system comprises two neural networks:

- A decoder that transforms the brain activity patterns produced during speech into movements of the virtual vocal tract
- A synthesizer that converts these vocal tract movements into a synthetic approximation of the volunteer's voice

Here's a video depicting how the system works: https://www.youtube.com/watch?v=kbX9FLJ6WKw&feature=youtu.be

The researchers observed that the synthetic speech produced by this two-stage system was much better than synthetic speech decoded directly from the volunteer's brain activity.
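As a toy illustration of that two-stage pipeline only, the sketch below wires up two small recurrent networks in Python with Keras. The layer sizes, feature counts, and architecture here are assumptions for illustration, not the paper's actual models.

```python
# Illustrative two-stage pipeline: neural recordings -> articulatory
# features -> acoustic features. All shapes here are made up.
import numpy as np
import tensorflow as tf

# Stage 1 "decoder": per-timestep neural signals (e.g., 256 channels)
# to vocal tract movement features (e.g., 33 per frame).
decoder = tf.keras.Sequential([
    tf.keras.layers.Bidirectional(
        tf.keras.layers.LSTM(64, return_sequences=True),
        input_shape=(None, 256)),
    tf.keras.layers.Dense(33),
])

# Stage 2 "synthesizer": vocal tract features to acoustic features
# (e.g., 32 spectral coefficients per frame) that a vocoder could render.
synthesizer = tf.keras.Sequential([
    tf.keras.layers.Bidirectional(
        tf.keras.layers.LSTM(64, return_sequences=True),
        input_shape=(None, 33)),
    tf.keras.layers.Dense(32),
])

# Chained inference on a dummy 100-frame recording:
acoustics = synthesizer(decoder(np.zeros((1, 100, 256), dtype="float32")))
print(acoustics.shape)  # (1, 100, 32)
```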
The generated sentences were also understandable to hundreds of human listeners in crowdsourced transcription tests conducted on the Amazon Mechanical Turk platform.

The system is still in its early stages. Explaining its limitations, Chartier said, "We still have a ways to go to perfectly mimic spoken language. We're quite good at synthesizing slower speech sounds like 'sh' and 'z' as well as maintaining the rhythms and intonations of speech and the speaker's gender and identity, but some of the more abrupt sounds like 'b's and 'p's get a bit fuzzy. Still, the levels of accuracy we produced here would be an amazing improvement in real-time communication compared to what's currently available."

Read the full report on UCSF's official website.

Read next:
- OpenAI introduces MuseNet: A deep neural network for generating musical compositions
- Interpretation of Functional APIs in Deep Neural Networks by Rowel Atienza
- Google open-sources GPipe, a pipeline parallelism library to scale up Deep Neural Network training

Introducing Spleeter, a Tensorflow based python library that extracts voice and sound from any music track

Sugandha Lahoti
05 Nov 2019
2 min read
On Monday, Deezer, a French online music streaming service, released Spleeter, a music source separation engine. It comes in the form of a Python library based on TensorFlow. Explaining the reason behind Spleeter, the researchers state, “We release Spleeter to help the Music Information Retrieval (MIR) community leverage the power of source separation in various MIR tasks, such as vocal lyrics analysis from audio, music transcription, any type of multilabel classification or vocal melody extraction.”

Spleeter comes with pre-trained models for 2-, 4-, and 5-stem separation:

Vocals (singing voice) / accompaniment separation (2 stems)
Vocals / drums / bass / other separation (4 stems)
Vocals / drums / bass / piano / other separation (5 stems)

If you have a dataset of isolated sources, Spleeter can also train source separation models or fine-tune pre-trained ones with TensorFlow. Deezer benchmarked Spleeter against Open-Unmix, another recently released open-source model, and reported slightly better performance at increased speed: Spleeter can separate audio files into 4 stems 100x faster than real time when running on a GPU.

You can use Spleeter straight from the command line as well as directly in your own development pipeline as a Python library. It can be installed with Conda or pip, or used with Docker; a usage sketch follows below.
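To give a flavor of both interfaces, here is a minimal usage sketch based on the project’s documented entry points; the input file name and output directory are placeholders, and command-line flags may differ slightly between Spleeter versions.

```python
# Minimal sketch of Spleeter's Python API; 'song.mp3' and 'output/'
# are placeholder paths.
from spleeter.separator import Separator

# Load the pre-trained two-stem model (vocals / accompaniment).
separator = Separator('spleeter:2stems')

# Runs the separation and writes vocals.wav and accompaniment.wav
# into a subfolder of output/.
separator.separate_to_file('song.mp3', 'output/')
```

The equivalent command-line invocation at the time of release was along the lines of `spleeter separate -i song.mp3 -p spleeter:2stems -o output`.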
Spleeter’s creators mention a number of potential applications for a source separation engine, including remixes, upmixing, active listening, educational purposes, and pre-processing for other tasks such as transcription.

Spleeter received mostly positive feedback on Twitter as people experimented with separating vocals from music.

https://twitter.com/lokijota/status/1191580903518228480
https://twitter.com/bertboerland/status/1191110395370586113
https://twitter.com/CholericCleric/status/1190822694469734401

Wavy.org also ran several songs through the two-stem filter and evaluated them in a blog post. They tried a variety of soundtracks across multiple genres. The separated audio was much better than expected, though vocals sometimes sounded robotically autotuned. The amount of bleed was shockingly low relative to other solutions, surpassing any available free tool and rivaling commercial plugins and services.

https://twitter.com/waxpancake/status/1191435104788238336

Spleeter will be presented and live-demoed at the 2019 ISMIR conference in Delft. For more details, refer to the official announcement.

DeepMind AI’s AlphaStar achieves Grandmaster level in StarCraft II with 99.8% efficiency.
Google AI introduces Snap, a microkernel approach to ‘Host Networking’
Firefox 70 released with better security, CSS, and JavaScript improvements

No more free Java SE 8 updates for commercial use after January 2019

Prasad Ramesh
20 Aug 2018
2 min read
Oracle-owned Java will no longer receive free public updates of Java SE 8 for commercial use after January 2019. This move is part of Oracle’s long-term support (LTS) plan. For individual personal use, however, public updates for Oracle Java SE 8 will remain available at least until December 2020.

Borrowing ideas from Linux releases, Oracle Java releases now follow an LTS model. Releases arrive twice a year, and periodically one of them is designated LTS, for which Oracle provides long-term support (three years); the other releases stop receiving support as soon as the next version ships.

If you are using Java for personal, individual use, you will keep the same access to Oracle Java SE 8 updates until at least the end of 2020. In most cases, the Java-based applications running on your PC are licensed by a company other than Oracle — for example, games developed by gaming companies. You won’t have access to public updates beyond the date mentioned; beyond that, it depends on how the licensing company plans to provide application support.

Developers are recommended to view Oracle’s release roadmap for Java SE 8 and other versions, and to plan support for their applications accordingly. The next LTS version, Java 11, is set to roll out in September 2018.

Here is the Oracle Java SE Support Roadmap:

Release | GA Date | Premier Support Until | Extended Support Until | Sustaining Support
6 | December 2006 | December 2015 | December 2018 | Indefinite
7 | July 2011 | July 2019 | July 2022 | Indefinite
8 | March 2014 | March 2022 | March 2025 | Indefinite
9 (non-LTS) | September 2017 | March 2018 | Not Available | Indefinite
10 (18.3^) (non-LTS) | March 2018 | September 2018 | Not Available | Indefinite
11 (18.9^ LTS) | September 2018 | September 2023 | September 2026 | Indefinite
12 (19.3^ non-LTS) | March 2019 | September 2019 | Not Available | Indefinite

The roadmap for web deployment and JavaFX is different and is listed on Oracle’s website. This video explains the LTS release model for Java; for more information, visit the official update.

Mark Reinhold on the evolution of Java platform and OpenJDK
Build Java EE containers using Docker [Tutorial]
5 Things you need to know about Java 10

Paper in Two minutes: Attention Is All You Need

Sugandha Lahoti
05 Apr 2018
4 min read
A paper on a new simple network architecture, the Transformer, based solely on attention mechanisms

The NIPS 2017 accepted paper, Attention Is All You Need, introduces the Transformer, a model architecture relying entirely on an attention mechanism to draw global dependencies between input and output. The paper is authored by professionals from the Google research team: Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin.

The Transformer – Attention is all you need

What problem is the paper attempting to solve?

Recurrent neural networks (RNNs), long short-term memory networks (LSTMs), and gated RNNs are the popular approaches for sequence modeling tasks such as machine translation and language modeling. However, RNNs and CNNs handle sequences word-by-word in a sequential fashion. This sequentiality is an obstacle to parallelizing the process. Moreover, when sequences are too long, the model is prone to forgetting the content of distant positions in the sequence or mixing it with the content of following positions.

Recent works have achieved significant improvements in computational efficiency and model performance through factorization tricks and conditional computation. But they are not enough to eliminate the fundamental constraint of sequential computation.

Attention mechanisms are one solution to the problem of model forgetting, because they allow dependencies to be modeled without regard to their distance in the input or output sequences. Thanks to this property, they have become an integral part of sequence modeling and transduction models. In most cases, however, attention mechanisms are used in conjunction with a recurrent network.

Paper summary

The Transformer proposed in this paper is a model architecture that relies entirely on an attention mechanism to draw global dependencies between input and output. The Transformer allows for significantly more parallelization and tremendously improves translation quality after being trained for as little as twelve hours on eight P100 GPUs.

Neural sequence transduction models generally have an encoder-decoder structure. The encoder maps an input sequence of symbol representations to a sequence of continuous representations. The decoder then generates an output sequence of symbols, one element at a time. The Transformer follows this overall architecture, using stacked self-attention and point-wise, fully connected layers for both the encoder and decoder.

The authors were motivated to use self-attention by three criteria. One is the total computational complexity per layer. Another is the amount of computation that can be parallelized, as measured by the minimum number of sequential operations required. The third is the path length between long-range dependencies in the network.

The Transformer uses two different types of attention functions:

Scaled Dot-Product Attention computes the attention function on a set of queries simultaneously, packed together into a matrix.
Multi-head attention allows the model to jointly attend to information from different representation subspaces at different positions.

A self-attention layer connects all positions with a constant number of sequentially executed operations, whereas a recurrent layer requires O(n) sequential operations.
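As a concrete illustration, here is a minimal NumPy sketch of the scaled dot-product attention function described above, Attention(Q, K, V) = softmax(QKᵀ/√d_k)V; the matrix sizes in the toy example are arbitrary.

```python
# Minimal NumPy sketch of scaled dot-product attention:
# Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    # Similarity of every query with every key, scaled to keep the
    # softmax out of its low-gradient saturated regions.
    scores = Q @ K.T / np.sqrt(d_k)             # (n_queries, n_keys)
    # Row-wise softmax turns scores into attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output is a weighted average of the value vectors.
    return weights @ V                          # (n_queries, d_v)

# Toy example: 4 query positions attending over 6 key/value positions.
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 64))
K = rng.normal(size=(6, 64))
V = rng.normal(size=(6, 64))
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 64)
```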
In terms of computational complexity, self-attention layers are faster than recurrent layers when the sequence length is smaller than the representation dimensionality, which is often the case in machine translation.

Key Takeaways

This work introduces the Transformer, a novel sequence transduction model based entirely on attention mechanisms. It replaces the recurrent layers most commonly used in encoder-decoder architectures with multi-headed self-attention.
The Transformer can be trained significantly faster than architectures based on recurrent or convolutional layers for translation tasks.
On both the WMT 2014 English-to-German and WMT 2014 English-to-French translation tasks, the model achieves a new state of the art. In the former task, the model outperforms all previously reported ensembles.

Future Goals

The Transformer has only been applied to transduction tasks so far. In the near future, the authors plan to use it for problems involving input and output modalities other than text, and to apply attention mechanisms to efficiently handle large inputs and outputs such as images, audio, and video.

The Transformer architecture from this paper has gained major traction since its release because of major improvements in translation quality and other NLP tasks. Recently, the NLP research group at Harvard released a post presenting an annotated version of the paper in the form of a line-by-line implementation. It is accompanied by 400 lines of library code, written in PyTorch in the form of a notebook, accessible from GitHub or on Google Colab with free GPUs.