Tech News

3711 Articles

Introducing Spleeter, a TensorFlow-based Python library that extracts voice and sound from any music track

Sugandha Lahoti
05 Nov 2019
2 min read
On Monday, Deezer, a French online music streaming service, released Spleeter, a music source separation engine. It comes in the form of a Python library based on TensorFlow. Explaining the reasoning behind Spleeter, the researchers state, “We release Spleeter to help the Music Information Retrieval (MIR) community leverage the power of source separation in various MIR tasks, such as vocal lyrics analysis from audio, music transcription, any type of multilabel classification or vocal melody extraction.”

Spleeter comes with pre-trained models for 2-, 4- and 5-stem separation:

- Vocals (singing voice) / accompaniment separation (2 stems)
- Vocals / drums / bass / other separation (4 stems)
- Vocals / drums / bass / piano / other separation (5 stems)

If you have a dataset of isolated sources, Spleeter can also train source separation models or fine-tune pre-trained ones with TensorFlow. Deezer benchmarked Spleeter against Open-Unmix, another recently released open-source model, and reported slightly better performance at higher speed: running on a GPU, Spleeter can separate an audio file into 4 stems 100 times faster than real time.

You can use Spleeter straight from the command line as well as directly in your own development pipeline as a Python library. It can be installed with Conda, with pip, or used with Docker. The creators mention a number of potential applications of a source separation engine, including remixes, upmixing, active listening, educational purposes, and pre-processing for other tasks such as transcription.

Spleeter received mostly positive feedback on Twitter as people experimented with separating vocals from music.

https://twitter.com/lokijota/status/1191580903518228480
https://twitter.com/bertboerland/status/1191110395370586113
https://twitter.com/CholericCleric/status/1190822694469734401

Wavy.org also ran several songs through the two-stem filter and evaluated them in a blog post, trying a variety of soundtracks across multiple genres. The audio quality was much better than expected, though vocals sometimes felt robotically autotuned. The amount of bleed was strikingly low relative to other solutions, surpassing any available free tool and rivaling commercial plugins and services.

https://twitter.com/waxpancake/status/1191435104788238336

Spleeter will be presented and live-demoed at the 2019 ISMIR conference in Delft. For more details refer to the official announcement.
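As an illustration of the Python library usage mentioned above, here is a minimal sketch based on the project’s README at release; the input file name and output directory are placeholder assumptions.

    # Minimal sketch of Spleeter's Python API (per the project README at release).
    # "audio_example.mp3" and "output/" are placeholder paths, not fixed names.
    from spleeter.separator import Separator

    # Load the pre-trained 2-stem model (vocals / accompaniment).
    separator = Separator('spleeter:2stems')

    # Write one audio file per stem into the output directory.
    separator.separate_to_file('audio_example.mp3', 'output/')

The README’s equivalent command-line call was along the lines of spleeter separate -i audio_example.mp3 -p spleeter:2stems -o output.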


Google releases two new hardware products, Coral dev board and a USB accelerator built around its Edge TPU chip

Sugandha Lahoti
06 Mar 2019
2 min read
Google teased its new hardware products built around its Edge TPU at the Google Next conference last summer. Yesterday, it officially launched the Coral Dev Board, a Raspberry Pi look-alike designed to run machine learning algorithms ‘at the edge’, and a USB accelerator.

Coral Dev Board

The Coral Dev Board runs Linux on an i.MX8M SoC paired with an Edge TPU chip for accelerating TensorFlow Lite, and exposes a 40-pin header. The board also features 8GB eMMC storage, 1GB LPDDR4 RAM, Wi-Fi, and Bluetooth 4.1. It has USB 2.0/3.0 ports, a 3.5mm audio jack, a DSI display interface, a MIPI-CSI camera interface, an HDMI 2.0a connector, and two digital PDM microphones.

The Coral Dev Board can be used as a single-board computer when you need accelerated ML processing in a small form factor. It can also be used as an evaluation kit for the SOM and for prototyping IoT devices and other embedded systems. The board is available for $149.00. Google has also announced a $25 MIPI-CSI 5-megapixel camera for the dev board.

USB Accelerator

The USB Accelerator is a plug-in USB 3.0 stick that adds machine learning capabilities to existing Linux machines. This 65 x 30 mm accelerator connects to Linux-based systems via a USB Type-C port. It can also work with a Raspberry Pi board at USB 2.0 speeds. The accelerator is built around a 32-bit, 32MHz Cortex-M0+ chip with 16KB of flash and 2KB of RAM. The USB Accelerator is available for $75.

Developers can build machine learning models for both devices in TensorFlow Lite. More information is available on Google’s Coral Beta website. Coming soon is a PCI-E Accelerator for integrating the Edge TPU into legacy systems using a PCI-E interface, as well as a fully integrated System-on-Module with CPU, GPU, Edge TPU, Wi-Fi, Bluetooth, and a Secure Element in a 40mm x 40mm pluggable module.
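To give a flavor of the TensorFlow Lite workflow mentioned above, here is a rough sketch of running an Edge TPU-compiled model from Python, following the pattern in Coral’s documentation. The file names are placeholder assumptions, the delegate library name shown is the Linux one, and the exact API has shifted across Coral releases (the launch-era docs used a dedicated edgetpu module).

    # Sketch of Edge TPU inference via the TensorFlow Lite runtime, following
    # the pattern in Coral's documentation. Paths are placeholders.
    import numpy as np
    import tflite_runtime.interpreter as tflite

    interpreter = tflite.Interpreter(
        model_path='model_edgetpu.tflite',  # a model compiled for the Edge TPU
        experimental_delegates=[tflite.load_delegate('libedgetpu.so.1')])
    interpreter.allocate_tensors()

    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()

    # Feed a dummy uint8 tensor of the right shape and run inference.
    shape = input_details[0]['shape']
    interpreter.set_tensor(input_details[0]['index'],
                           np.zeros(shape, dtype=np.uint8))
    interpreter.invoke()
    print(interpreter.get_tensor(output_details[0]['index']))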


Blender celebrates its 25th birthday!

Natasha Mathur
03 Jan 2019
3 min read
Blender, the free and open source 3D computer graphics software, celebrated its 25th birthday yesterday. The Blender team marked the occasion by publishing a post that takes a trip down memory lane, recounting Blender’s journey from 1993 to 2018.

Blender’s journey (1994 - 2018)

The Blender team states that during Christmas 1993, Ton Roosendaal, the creator of Blender, started working on the software, making use of designs he had made during a 1993 course (the original design document dates from that year). The first Blender version came to life on January 2, 1994, with the subdivision-based windowing system working. This date has now been marked as Blender’s official birthday, and Roosendaal even has an old backup of this version on his SGI Indigo2 workstation.

Blender was first released publicly online on January 1, 1998, as SGI freeware. The Linux and Windows versions of Blender were released shortly after. In May 2002, Roosendaal started the non-profit Blender Foundation. Its first goal was to find a way to continue the development and promotion of Blender as a community-based open source project.

https://www.youtube.com/watch?time_continue=164&v=8A-LldprfiE

With the rise of the internet in the early 2000s, the source code for Blender became available under the GNU General Public License (GPL) on October 13, 2002. This day marked Blender’s transformation into the open source and free 3D creation software that we use to this day.

The Blender team started “Project Orange” in 2005, which resulted in the world’s first widely recognized Open Movie, “Elephants Dream”. The success of the open movie project led Roosendaal to establish the “Blender Institute” in the summer of 2007. The Blender Institute has since become the permanent office and studio where the team organizes the Blender Foundation’s goals and facilitates the Open Projects related to 3D movies, games, and visual effects.

In early 2008, Roosendaal started the Blender 2.5 project, a major overhaul of the UI, tool definitions, data access system, event handling, and animation system. The main goal of the project was to bring the core of Blender up to contemporary interface standards and input methods. The first alpha version of Blender 2.5 was presented at SIGGRAPH 2009, with the final 2.5 release published in 2011.

In 2012, the Blender team focused on further developing and exploring a visual effects creation pipeline, with features such as motion tracking, camera solving, masking, grading, and a good color pipeline. And coming back to 2018: just last week the Blender team released Blender 2.8 with a revamped user interface, a high-end viewport, and other great features.


Symfony leaves PHP-FIG, the framework interoperability group

Amrata Joshi
21 Nov 2018
2 min read
Yesterday, Symfony, a community of 600,000 developers from more than 120 countries, announced that it will no longer be a member of PHP-FIG, the framework interoperability group. Prior to Symfony, other major members to leave the group include Laravel, Propel, Guzzle, and Doctrine. The main goal of the PHP-FIG group is to work together to maintain interoperability, discuss commonalities between projects, and make them better.

Why Symfony is leaving PHP-FIG

PHP-FIG has been working on various PSRs (PHP Standard Recommendations). Kévin Dunglas, a core team member at Symfony, said, “It looks like it's not the goal anymore, 'cause most (but not all) new PSRs are things no major frameworks ask for, and that they can't implement without breaking their whole ecosystem.”

https://twitter.com/fabpot/status/1064946913596895232

The fact that major contributors have already left the group could be one reason for Symfony to quit. But many seem disappointed by Symfony’s move, as they aren’t satisfied with the reason given.

https://twitter.com/mickael_andrieu/status/1065001101160792064

The matter of concern for Symfony was that major projects were not being implemented as a combined effort.

https://twitter.com/dunglas/status/1065004250005204998
https://twitter.com/dunglas/status/1065002600402247680

Something similar happened while working towards PSR-7, where commonalities between the projects were given no importance; instead, it was treated as a new, separate framework.

https://twitter.com/dunglas/status/1065007290217058304
https://twitter.com/titouangalopin/status/1064968608646864897

People are still arguing over why Symfony quit.

https://twitter.com/gmponos/status/1064985428300914688

Will the PSRs die?

With this latest move by Symfony, questions have been raised about the project’s next steps. Will it still support PSRs, or is this the end of PSRs for Symfony? Kévin Dunglas answered this question in one of his tweets: “Regarding PSRs, I think we'll implement them if relevant (such as PSR-11) but not the ones not in the spirit of a broad interop (as PSR-7/14).”

To know more about this news, check out Fabien Potencier’s Twitter thread.


What's new in ECMAScript 2018 (ES9)?

Pravin Dhandre
20 Apr 2018
4 min read
ECMAScript 2018 - also known as ES9 - is now complete, with lots of new features. Since the major feature release of ECMAScript 2015, the language has matured through yearly update releases. After multiple draft submissions and completion of the five-stage process, the TC39 committee has finalized the set of features that will be rolled out in June. The full list of proposals submitted to the TC39 committee can be viewed in its GitHub repository.

What are the key features of ECMAScript 2018? Let’s have a quick look at the key ES9 features and the value they bring to web developers.

Lifting the template literal restriction

Template literals generally allow the embedding of languages such as DSLs. However, restrictions on escape sequences make this quite complicated, and simply removing the restriction raises the question of how to handle cooked template values containing illegal escape sequences. The proposal therefore redefines the cooked value for illegal escape sequences to “undefined”. This lifting of the restriction allows values like \xerxes and makes embedding of languages simpler. The detailed proposal, with templates, can be viewed on GitHub.

Asynchronous iterators

The new version provides syntactic support for asynchronous iteration with both the AsyncIterable and AsyncIterator protocols. The syntactic support helps in, for example, reading lines of text from an HTTP connection easily. In my opinion, this is one of the most important and useful features, as it makes the code look simpler. It introduces a new IterationStatement, for-await-of, and also adds syntax for creating async generator functions. The detailed proposal can be viewed on GitHub.

Promise.prototype.finally

As you are aware, promises make the execution of callback functions easy. Many promise libraries have a “finally” method through which you can run code no matter how the promise resolves. It works by registering a callback which gets invoked when a promise is fulfilled or rejected. Bluebird, Q, and when are some examples. The detailed proposal can be viewed on GitHub.

Unicode property escapes in regular expressions

ECMAScript 2018 adds Unicode property escapes `\p{…}` and `\P{…}` to regular expressions. These are a new and unique type of escape sequence available with the u flag set. With this feature, one can create Unicode-aware regular expressions with utmost ease. These escapes are easily readable and compact, and they are updated automatically by the ECMAScript engine. The detailed proposal can be viewed on GitHub.

RegExp lookbehind assertions

Assertions are regular expression constructs, such as anchors and lookarounds, that either succeed or fail based on whether a match is found. ECMAScript, which previously supported only lookahead assertions (looking in the forward direction), now adds lookbehind assertions, which look backward. These assertions are helpful in cases like validating a dollar amount without capturing the dollar sign, where a pattern is or is not preceded by another. The detailed proposal can be viewed on GitHub.

Object rest/spread properties

The earlier version of ECMAScript includes rest and spread properties for array literals. Likewise, the new version introduces rest and spread properties for object literals. Both operations for objects help in extracting the properties we want while leaving out the unwanted ones. The detailed proposal can be viewed on GitHub.

RegExp named capture groups

Named capture groups are another RegExp feature, similar to the so-called “named groups” in Java and Python. With this, you can write a RegExp that provides names, in the form (?<name>...), for different parts of the group. This allows you to use a name to grab whichever group you need in a straightforward way. The detailed proposal can be viewed on GitHub.

The s (‘dotAll’) flag for regular expressions

In regular expression patterns, the earlier version allows the dot (.) to match almost any character, but astral and line terminator characters like \n and \f complicated the creation of such regexes. The new version proposes the addition of an s flag with which the dot can match any character, including astral and line terminator characters. The detailed proposal can be viewed on GitHub.

When will ECMAScript 2018 be available?

All of the features above are expected to be implemented and available in browsers this year - it’s in the name, after all. But there are likely to be even more new features and capabilities in the 2019 release. Read this to get a clearer picture of what’s likely to feature in the 2019 release.


MongoDB withdraws controversial Server Side Public License from the Open Source Initiative's approval process

Richard Gall
12 Mar 2019
4 min read
MongoDB's Server Side Public License was controversial when it was first announced back in October. But the team was, back then, confident that the new license met the Open Source Initiative's approval criteria. However, things seem to have changed. The news that Red Hat was dropping MongoDB over the SSPL in January was a critical blow and appears to have dented MongoDB's ambitions. Last Friday, co-founder and CTO Eliot Horowitz announced that MongoDB had withdrawn its submission to the Open Source Initiative.

Horowitz wrote on the OSI approval mailing list that "the community consensus required to support OSI approval does not currently appear to exist regarding the copyleft provision of SSPL." Put simply, the debate around MongoDB's SSPL appears to have led its leadership to reconsider its approach.

Update: this article was amended 19.03.2019 to clarify that the Server Side Public License only requires commercial users (i.e. X-as-a-Service products) to open source their modified code. Any other users can still modify and use MongoDB code for free.

What's the purpose of MongoDB's Server Side Public License?

The Server Side Public License was developed by MongoDB as a means of protecting the project from "large cloud vendors" who want to "capture all of the value but contribute nothing back to the community." Essentially, the license includes a key modification to section 13 of the standard GPL (General Public License) that governs most open source software available today. You can read the SSPL in full here, but this is the crucial sentence:

"If you make the functionality of the Program or a modified version available to third parties as a service, you must make the Service Source Code available via network download to everyone at no charge, under the terms of this License."

This means that users are free to review, modify, and distribute the software or redistribute modifications to the software. It's only if a user modifies or uses the source code as part of an as-a-service offering that the full service must be open sourced. So essentially, anyone is free to modify MongoDB; it's only when you offer MongoDB as a commercial service that the conditions of the SSPL require you to open source the entire service.

What issues do people have with the Server Side Public License?

The logic behind the SSPL seems sound, and probably makes a lot of sense in the context of an open source landscape that's almost being bled dry. But it presents a challenge to the very concept of open source software, where the idea that software should be free to use and modify - and, indeed, to profit from - is absolutely central. Moreover, even if it makes sense as a way of defending open source projects from the power of multinational tech conglomerates, it could be argued that the consequences of the license could harm smaller tech companies. As one user on Hacker News explained back in October:

"Let [sic] say you are a young startup building a cool SaaS solution. E.g. A data analytics solution. If you make heavy use of MongoDB it is very possible that down the line the good folks at MongoDB come calling since 'the value of your SaaS derives primarily from MongoDB...' So at that point you have two options - buy a license from MongoDB or open source your work (which they can conveniently leverage at no cost)."

The Hacker News thread is very insightful on the reasons why the license has been so controversial. Another Hacker News user, for example, described the license as "either idiotic or malevolent."

What next for the Server Side Public License?

The license might have been defeated, but Horowitz and MongoDB are still optimistic that they can find a solution. "We are big believers in the importance of open source and we intend to continue to work with these parties to either refine the SSPL or develop an alternative license that addresses this issue in a way that will be accepted by the broader FOSS community," he said.

Whatever happens next, it's clear that there are some significant challenges for the open source world that will require imagination and maybe even some risk-taking to properly solve.

Paper in Two minutes: Attention Is All You Need

Sugandha Lahoti
05 Apr 2018
4 min read
A paper on a new simple network architecture, the Transformer, based solely on attention mechanisms.

The NIPS 2017 accepted paper, Attention Is All You Need, introduces the Transformer, a model architecture relying entirely on an attention mechanism to draw global dependencies between input and output. The paper is authored by professionals from the Google research team, including Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin.

What problem is the paper attempting to solve?

Recurrent neural networks (RNNs), long short-term memory networks (LSTMs), and gated RNNs are the popular approaches used for sequence modeling tasks such as machine translation and language modeling. However, these models handle sequences word by word, in a sequential fashion. This sequentiality is an obstacle to parallelizing the process. Moreover, when such sequences are too long, the model is prone to forgetting the content of distant positions in the sequence or mixing it with the content of following positions.

Recent works have achieved significant improvements in computational efficiency and model performance through factorization tricks and conditional computation. But they are not enough to eliminate the fundamental constraint of sequential computation. Attention mechanisms are one solution to the problem of model forgetting, because they allow dependencies to be modeled without considering their distance in the input or output sequences. Due to this feature, they have become an integral part of sequence modeling and transduction models. However, in most cases attention mechanisms are used in conjunction with a recurrent network.

Paper summary

The Transformer proposed in this paper is a model architecture that relies entirely on an attention mechanism to draw global dependencies between input and output. The Transformer allows for significantly more parallelization and tremendously improves translation quality after being trained for as little as twelve hours on eight P100 GPUs.

Neural sequence transduction models generally have an encoder-decoder structure: the encoder maps an input sequence of symbol representations to a sequence of continuous representations, and the decoder then generates an output sequence of symbols, one element at a time. The Transformer follows this overall architecture using stacked self-attention and point-wise, fully connected layers for both the encoder and decoder.

The authors are motivated to use self-attention by three criteria. One is the total computational complexity per layer. Another is the amount of computation that can be parallelized, as measured by the minimum number of sequential operations required. The third is the path length between long-range dependencies in the network.

The Transformer uses two different types of attention functions:

- Scaled dot-product attention computes the attention function on a set of queries simultaneously, packed together into a matrix.
- Multi-head attention allows the model to jointly attend to information from different representation subspaces at different positions.

A self-attention layer connects all positions with a constant number of sequentially executed operations, whereas a recurrent layer requires O(n) sequential operations. In terms of computational complexity, self-attention layers are faster than recurrent layers when the sequence length is smaller than the representation dimensionality, which is often the case in machine translation.

Key takeaways

This work introduces the Transformer, a novel sequence transduction model based entirely on attention mechanisms. It replaces the recurrent layers most commonly used in encoder-decoder architectures with multi-headed self-attention. The Transformer can be trained significantly faster than architectures based on recurrent or convolutional layers for translation tasks. On both the WMT 2014 English-to-German and WMT 2014 English-to-French translation tasks, the model achieves a new state of the art; in the former task, the model outperforms all previously reported ensembles.

Future goals

The Transformer has only been applied to transduction tasks as of yet. In the near future, the authors plan to use it for other problems involving input and output modalities other than text. They plan to apply attention mechanisms to efficiently handle large inputs and outputs such as images, audio, and video.

The Transformer architecture from this paper has gained major traction since its release because of major improvements in translation quality and other NLP tasks. Recently, the NLP research group at Harvard released a post which presents an annotated version of the paper in the form of a line-by-line implementation. It is accompanied by 400 lines of library code, written in PyTorch in the form of a notebook, accessible from GitHub or on Google Colab with free GPUs.
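In the same spirit, here is a tiny NumPy sketch of the paper’s scaled dot-product attention, softmax(QK^T / sqrt(d_k))V; the array shapes are illustrative assumptions.

    # NumPy sketch of the paper's scaled dot-product attention:
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))  # stabilized softmax
        return e / e.sum(axis=axis, keepdims=True)

    def scaled_dot_product_attention(Q, K, V):
        d_k = Q.shape[-1]
        scores = Q @ K.swapaxes(-2, -1) / np.sqrt(d_k)  # query-key similarities
        return softmax(scores) @ V                      # weighted sum of values

    # Illustrative shapes: 4 query positions, 6 key/value positions, depth 8.
    Q = np.random.randn(4, 8)
    K = np.random.randn(6, 8)
    V = np.random.randn(6, 8)
    print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)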


LLVM officially migrating to GitHub from Apache SVN

Prasad Ramesh
14 Jan 2019
2 min read
In October last year, it was reported that LLVM (Low-Level Virtual Machine) was moving from Apache Subversion (SVN) to GitHub. The transition, long under discussion, is now officially complete, and LLVM is available on GitHub.

LLVM is a toolkit for creating compilers, optimizers, and runtime environments. One motivation for the migration was that continuous integration in LLVM would sometimes break because the SVN server was down. The project moved to GitHub for services lacking in SVN, such as better 24/7 stability, disk space, code browsing, and forking; GitHub is also used by most of the LLVM community, and there were already unofficial mirrors on GitHub before this official migration.

Last week, James Y Knight from the LLVM team wrote to a mailing list: “The new official monorepo is published to LLVM's GitHub organization, at: https://github.com/llvm/llvm-project. At this point, the repository should be considered stable -- there won't be any more rewrites which invalidate commit hashes (barring some _REALLY_ good reason...)”

Along with LLVM, this monorepo also hosts Clang, LLD, Compiler-RT, and other LLVM sub-projects. Commits are being made to the LLVM GitHub repository even at the time of writing, and the repo currently has about 200 stars. Updated workflow documents and instructions for migrating in-progress user work are being drafted and will be available soon.

This move was initiated after positive responses from LLVM community members to the proposal to migrate to GitHub. If you want to stay up to date with more details, you can follow the LLVM mailing list.


Google researchers introduce JAX: A TensorFlow-like framework for generating high-performance code from Python and NumPy machine learning programs

Bhagyashree R
11 Dec 2018
2 min read
Google researchers have built a tool called JAX, a domain-specific tracing JIT compiler that generates high-performance accelerator code from pure Python and NumPy machine learning programs. It combines Autograd and XLA for high-performance machine learning research. At its core, it is an extensible system for transforming numerical functions.

Autograd is what lets JAX automatically differentiate native Python and NumPy code. It can handle a large subset of Python features such as loops, branches, recursion, and closures. It comes with support for reverse-mode (backpropagation) and forward-mode differentiation, and the two can be composed arbitrarily in any order.

XLA, or Accelerated Linear Algebra, is a linear algebra compiler used for optimizing TensorFlow computations. JAX uses XLA to run NumPy programs on GPUs and TPUs; the library calls are compiled and executed just-in-time. JAX also allows compiling your own Python functions just-in-time into XLA-optimized kernels using a one-function API, jit.

How does JAX work?

The basic function of JAX is specializing and translating high-level Python and NumPy functions into a representation that can be transformed and then lifted back into a Python function. It traces Python functions by monitoring all the basic operations applied to their inputs to produce outputs, and then records these operations and the data flow between them in a directed acyclic graph (DAG). To trace functions, it wraps primitive operations; when they are called, they add themselves to a list of operations performed along with their inputs and outputs. To keep track of the data flow between these primitive operations, the values being tracked are wrapped in Tracer class instances.

The team is working towards expanding this project with support for Cloud TPU, multi-GPU, and multi-TPU. In the future, it will come with full NumPy coverage, some SciPy coverage, and more. As this is still a research project, bugs can be expected, and it is not recommended for production use. To read more in detail and contribute to this project, head over to GitHub.
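For a flavor of the API described above, here is a minimal sketch using JAX’s grad and jit transforms; the toy quadratic loss is an illustrative assumption, not an example from the project.

    # Minimal sketch of JAX's core transforms: grad (automatic differentiation
    # via tracing) and jit (XLA compilation). The loss below is just a toy.
    import jax.numpy as jnp
    from jax import grad, jit

    def loss(w, x, y):
        pred = jnp.dot(x, w)
        return jnp.mean((pred - y) ** 2)

    grad_loss = jit(grad(loss))  # differentiate w.r.t. w, then compile via XLA

    w = jnp.ones(3)
    x = jnp.arange(6.0).reshape(2, 3)
    y = jnp.array([1.0, 2.0])
    print(grad_loss(w, x, y))  # gradient of the loss with respect to w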


Netflix adopts Spring Boot as its core Java framework

Amrata Joshi
19 Dec 2018
2 min read
This year, Netflix decided to make Spring Boot its core Java framework, while leveraging the community’s contributions via Spring Cloud Netflix. The team at Netflix started working towards fully operating in the cloud in 2007. Along the way, it built several cloud infrastructure libraries and systems, including Ribbon, an inter-process communication (IPC) library for load balancing; Eureka, an AWS service registry for service discovery; and Hystrix, a latency and fault tolerance library.

Spring Cloud Netflix provides Netflix OSS integrations for Spring Boot apps with the help of autoconfiguration and binding to the Spring Environment. It reached version 1.0 in 2015. The idea behind Spring Cloud was to bring in the Netflix OSS components using Spring Boot instead of Netflix’s internal solutions. It has now become the preferred way for the community to adopt Netflix’s open source software, and it features Eureka, Ribbon, and Hystrix.

Why did Netflix opt for the Spring Boot framework?

In the early 2010s, the requirements for Netflix’s cloud infrastructure were efficiency, reliability, scalability, and security. Since there were no suitable alternatives at the time, the team at Netflix created solutions in-house. By adopting the Spring Boot framework, Netflix has managed to meet all of these requirements, as it provides great experiences such as:

- Data access with spring-data
- Complex security management with spring-security
- Integration with cloud providers with spring-cloud-aws

The Spring framework also features proven and long-lasting abstractions and APIs, and the Spring team has provided quality implementations of them. This abstract-and-implement methodology also matches well with Netflix’s principle of being “highly aligned, loosely coupled”.

“We plan to leverage the strong abstractions within Spring to further modularize and evolve the Netflix infrastructure. Where there is existing strong community direction such as the upcoming Spring Cloud Load Balancer, we intend to leverage these to replace aging Netflix software,” said Netflix.

Read more about this news on the Netflix Tech Blog.

Patreon speaks out against the protests over its banning Sargon of Akkad for violating its rules on hate speech

Natasha Mathur
19 Dec 2018
3 min read
Patreon, the popular crowdfunding platform, published a post yesterday defending its removal last week of Sargon of Akkad (Carl Benjamin), an English YouTuber famous for his anti-feminist content, over concerns that he violated its policies on hate speech. Patreon has been receiving backlash ever since from users and patrons of the website, who are calling for a boycott.

“Patreon does not and will not condone hate speech in any of its forms. We stand by our policies against hate speech. We believe it’s essential for Patreon to have strong policies against hate speech to build a safe community for our creators and their patrons,” says the Patreon team.

Patreon mentioned that it reviews the creations posted by content creators on other platforms that are funded via Patreon. Since Benjamin is quite popular for his collaborations with other creators, Patreon’s community guidelines, which strictly prohibit hate speech, also apply to those collaborations. According to the guidelines, “Hate speech includes serious attacks, or even negative generalizations, of people based on their race [and] sexual orientation.” In one of his interviews on another YouTuber’s channel, Benjamin used racial slurs linked with “negative generalizations of behavior” - quite contrary to how people of those races actually act - to insult others. Apart from racial slurs, he also used slurs related to sexual orientation, which violates Patreon’s community guidelines.

However, a lot of people are not happy with Patreon’s decision. For instance, Sam Harris, a popular American author, podcast host, and neuroscientist, who had one of the top-grossing accounts on Patreon (with nearly 9,000 paying patrons at the end of November), deleted his account earlier this week, accusing the platform of “political bias”. He wrote, “the crowdfunding site Patreon has banned several prominent content creators from its platform. While the company insists that each was in violation of its terms of service, these recent expulsions seem more readily explained by political bias. I consider it no longer tenable to expose any part of my podcast funding to the whims of Patreon’s ‘Trust and Safety’ committee.”

https://twitter.com/SamHarrisOrg/status/1074504882210562048

Apart from banning Carl Benjamin, Patreon also banned Milo Yiannopoulos, a British public speaker and YouTuber with over 839,286 subscribers, earlier this month over his association with the Proud Boys, which Patreon has classified as a hate group.

https://twitter.com/Patreon/status/1070446085787668480

James Allsup, an alt-right political commentator and associate of Yiannopoulos’, was also banned from Patreon last month for his association with hate groups.

Amidst this controversy, some top Patreon creators, such as Jordan Peterson, a popular Canadian clinical psychologist whose YouTube channel has over 1.6M subscribers, and Dave Rubin, an American libertarian political commentator, announced plans earlier this week to start an alternative to Patreon. Peterson said that the new platform will work on a subscriber model similar to Patreon’s, only with a few additional features.

https://www.youtube.com/watch?v=GWz1RDVoqw4

“We understand some people don’t believe in the concept of hate speech and don’t agree with Patreon removing creators on the grounds of violating our Community Guidelines for using hate speech. We have a different view,” says the Patreon team.


AI can now help speak your mind: UC researchers introduce a neural decoder that translates brain signals to natural-sounding speech

Bhagyashree R
29 Apr 2019
4 min read
In research published in the journal Nature on Monday, a team of neuroscientists from the University of California, San Francisco, introduced a neural decoder that can synthesize natural-sounding speech based on brain activity. The research was led by Gopala Anumanchipalli, a speech scientist, and Josh Chartier, a bioengineering graduate student in the Chang lab, and was developed in the laboratory of Edward Chang, a professor of neurological surgery at the University of California.

Why is this neural decoder being introduced?

Many people lose their voice because of stroke, traumatic brain injury, or neurodegenerative diseases such as Parkinson’s disease, multiple sclerosis, and amyotrophic lateral sclerosis. Assistive devices that track very small eye or facial muscle movements, enabling people with severe speech disabilities to express their thoughts by writing them letter by letter, do exist. However, generating text or synthesized speech with such devices is often time-consuming, laborious, and error-prone. Another limitation of these devices is that they only permit generating a maximum of 10 words per minute, compared to the 100 to 150 words per minute of natural speech.

This research shows that it is possible to generate a synthesized version of a person’s voice that can be controlled by their brain activity. The researchers believe that, in the future, this device could enable individuals with severe speech disability to have fluent communication. It could even reproduce some of the “musicality” of the human voice that expresses the speaker’s emotions and personality. “For the first time, this study demonstrates that we can generate entire spoken sentences based on an individual’s brain activity,” said Chang. “This is an exhilarating proof of principle that with technology that is already within reach, we should be able to build a device that is clinically viable in patients with speech loss.”

How does this system work?

This research builds on an earlier study by Josh Chartier and Gopala K. Anumanchipalli, which shows how the speech centers in our brain choreograph the movements of the lips, jaw, tongue, and other vocal tract components to produce fluent speech. In this new study, Anumanchipalli and Chartier asked five patients being treated at the UCSF Epilepsy Center to read several sentences aloud. These patients had electrodes implanted into their brains to map the source of their seizures in preparation for neurosurgery. Simultaneously, the researchers recorded activity from a brain region known to be involved in language production.

The researchers used the audio recordings of the volunteers’ voices to work out the vocal tract movements needed to produce those sounds. With this detailed map of sound to anatomy in hand, the scientists created a realistic virtual vocal tract for each volunteer that could be controlled by their brain activity. The system comprises two neural networks:

- A decoder for transforming brain activity patterns produced during speech into movements of the virtual vocal tract.
- A synthesizer for converting these vocal tract movements into a synthetic approximation of the volunteer’s voice.

Here’s a video depicting the system at work:

https://www.youtube.com/watch?v=kbX9FLJ6WKw&feature=youtu.be

The researchers observed that the synthetic speech produced by this system was much better than synthetic speech decoded directly from the volunteers’ brain activity. The generated sentences were also understandable to hundreds of human listeners in crowdsourced transcription tests conducted on the Amazon Mechanical Turk platform.

The system is still in its early stages. Explaining its limitations, Chartier said, “We still have a ways to go to perfectly mimic spoken language. We’re quite good at synthesizing slower speech sounds like ‘sh’ and ‘z’ as well as maintaining the rhythms and intonations of speech and the speaker’s gender and identity, but some of the more abrupt sounds like ‘b’s and ‘p’s get a bit fuzzy. Still, the levels of accuracy we produced here would be an amazing improvement in real-time communication compared to what’s currently available.”

Read the full report on UCSF’s official website.


Get 35% off the print copy of your favorite eBook

Packt Publishing
13 Aug 2015
2 min read
Here at Packt we've always aimed to be ahead of the curve when it comes to new formats for learning; that's why we offer a whole range of eBooks, videos, and even our Mapt subscription service, so you can get the best experience no matter what type of learner you are. Sometimes you just want an old-fashioned book though - after all, who doesn't like to actually hold a physical page, or have a handy print book to pass around the office when problems start springing up?

So if you've bought an eBook directly from us and it wasn't a free download, we're offering an Upgrade to Print, which lets you save 35% off the print version. No more worries about sharing your new eReader with your colleagues.

All you have to do is log into the account you used to purchase the eBook and go to that eBook's product page, where you will see a 35% discount applied to the 'Print + eBook' option. You'll save 35% off the list price, and a brand new print copy will be on its way to you! Plus, when you upgrade to print we'll throw in another copy of the original eBook for you to share with someone else - all you have to do is go back into My eBooks and click the Share button, no hassle!

Introducing remove.bg, a deep learning-based tool that automatically removes the background of any person-based image within 5 seconds

Amrata Joshi
18 Dec 2018
3 min read
Yesterday, Benjamin Groessing, a web consultant and developer at byteq, released remove.bg, a tool built with Python, Ruby, and deep learning. This tool automatically removes the background of any image within 5 seconds, using various custom algorithms to process the image.

https://twitter.com/hammer_flo_/status/1074914463726350336

It is a free service, and users don’t have to manually select the background and foreground layers to separate them. One can simply select an image and instantly download the resulting image with the background removed.

Features of remove.bg

- Personal and professional use: remove.bg can be used by graphic designers, photographers, or selfie lovers to remove backgrounds.
- Saves time and money: it is automated and free of cost.
- 100% automatic: apart from the image file, it doesn’t require inputs such as selecting pixels or marking persons.

How does remove.bg work?

https://twitter.com/begroe/status/1074645152487129088

Remove.bg uses AI technology to detect foreground layers and separate them from the background, with additional algorithms for improving fine details and preventing color contamination. The AI detects persons as foreground and everything else as background, so it only works if there is at least one person in the image. Users can upload images of any resolution, but for performance reasons the output image is limited to 500 x 500 pixels.

Privacy in remove.bg

User images are uploaded through a secure SSL/TLS-encrypted connection. These images are processed, and the result is stored temporarily until the user can download it; approximately an hour later, the image files are deleted. The privacy message on the official website states, “We do not share your images or use them for any other purpose than removing the background and letting you download the result.”

What can be expected from the next release?

The next set of releases might support other kinds of images, such as product images. The team at remove.bg might also release an easy-to-use API.

Users are very excited about this release and the technology behind it. Many users are comparing it with the portrait mode on the iPhone X; though it is not as fast, users still like it.

https://twitter.com/Baconbrix/status/1074805036264316928
https://twitter.com/hammer_flo_/status/1074914463726350336

But how strong remove.bg is with regard to privacy is a bigger question. The website does give a privacy note, but it will take more than that to win users’ trust: the images uploaded to remove.bg’s cloud might be at risk. How strong is the security, and what preventive measures have been taken? These are a few of the questions that might bother many. To have a look at the ongoing discussion on remove.bg, check out Benjamin Groessing’s AMA Twitter thread.


After backlash for rejecting a uBlock Origin update from the Chrome Web Store, Google accepts ad-blocking extension

Bhagyashree R
15 Oct 2019
6 min read
Last week, Raymond Hill, the developer behind uBlock Origin, shared that the extension’s dev build 1.22.5rc1 had been rejected by Google's Chrome Web Store (CWS). uBlock Origin is a free and open-source browser extension widely used for content filtering and ad blocking.

Google stated that the extension did not comply with its extension standards because it bundles different purposes into a single extension. An email to Hill from Google reads, “Do not create an extension that requires users to accept bundles of unrelated functionality, such as an email notifier and a news headline aggregator.”

Hill said on a GitHub issue that this is basically “stonewalling” and that, in the future, users may have to switch to another browser to use uBlock Origin. He does plan to upload the stable version: “I will upload stable to the Chrome Web Store, but given 1.22.5rc2 is rejected, logic dictates that 1.23.0 will be rejected. Actually, logic dictates that 1.22.5rc0 should also be rejected and yet it's still available in the Chrome Web Store.”

Users’ reaction to Google rejecting the uBlock Origin dev build

This news sparked a discussion on Hacker News and Reddit. Users speculated that this outcome is probably the result of the “crippling” update Google has introduced in Chrome (currently in the beta and dev versions): deprecating the blocking ability of the webRequest API. The webRequest API permits extensions to intercept requests and modify, redirect, or block them. The basic flow of handling a request using this API is: Chrome receives the request, asks the extension, and then gets the result. In Manifest V3, the use of this API will be limited in its blocking form, while the non-blocking form of the API, which permits extensions to observe network requests, will still be allowed.

In place of the blocking webRequest API, Google has introduced the declarativeNetRequest API. This API allows adding up to 30,000 rules, 5,000 dynamic rules, and 100 pages. Due to its limiting nature, many ad blocker developers and maintainers have expressed that this API will impact the capabilities of modern content-blocking extensions. Google’s reasoning for introducing this change is that the new API is much more performant and provides better privacy guarantees; however, many developers think otherwise. Hill had previously shared his thoughts on deprecating the blocking ability of the webRequest API:

“Web pages load slow because of bloat, not because of the blocking ability of the webRequest API -- at least for well-crafted extensions. Furthermore, if performance concerns due to the blocking nature of the webRequest API was their real motive, they would just adopt Firefox's approach and give the ability to return a Promise on just the three methods which can be used in a blocking manner.”

Many users also mentioned that Chrome is using its dominance in the browser market to dictate what types of extensions are developed and used. A user commented, “As Chrome is a dominant platform, our work is prevented from reaching users if it does not align with the business goals of Google, and extensions that users want on their devices are effectively censored out of existence.” Others expressed that it is better to avoid all the drama by simply switching to another browser, mainly Firefox: “Or you could cease contributing to the Blink monopoly on the web and join us of Firefox. Microsoft is no longer challenging Google in this space,” a user added.
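To make the technical dispute concrete: under Manifest V3, a content blocker declares static rules in JSON rather than inspecting each request in code. Below is a rough sketch of the general shape of a single blocking rule per Chrome’s declarativeNetRequest documentation; the filter pattern and resource types are illustrative assumptions, and the rule file would be referenced from the extension’s manifest.

    [
      {
        "id": 1,
        "priority": 1,
        "action": { "type": "block" },
        "condition": {
          "urlFilter": "||ads.example.com^",
          "resourceTypes": ["script", "image"]
        }
      }
    ]

Because the browser, not the extension, evaluates these declarations, an ad blocker is bound by the rule limits mentioned above instead of applying its own arbitrary logic per request.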
Some others, meanwhile, were in support of Google, saying that Hill could have moved some of the functionality into a separate extension: “It's an older rule. It does technically apply here, but it's not a great look that they're only enforcing it now. If Gorhill needed to, some of that extra functionality could be moved out into a separate extension. uBlock has done this before with uBlock Origin Extra. Most of the extra features (eg. remote font blocking) aren't a huge deal, in my opinion.”

How Google reacted to the public outcry

Simeon Vincent, a developer advocate for Chrome extensions, commented on a Reddit discussion that the updated extension had been approved and published on the Chrome Web Store: “This morning I heard from the review team; they've approved the current draft so next publish should go through. Unfortunately it's the weekend, so most folks are out, but I'm planning to follow up with u/gorhill4 with more details once I have them. EDIT: uBlock Origin development build was just successfully published. The latest version on the web store is 1.22.5.102.”

He further said that this whole confusion was caused by a “clunkier” developer communication process. When users asked him about the Manifest V3 change, he shared, “We've made progress on better supporting ad blockers and content blockers in general in Manifest V3. We've added rule modification at runtime, bumped the rule limits, added redirect support, header modification, etc. And more improvements are on the way.”

He added, “But Gorhill's core objection is to removing the blocking version of webRequest. We're trying to move the extension platform in a direction that's more respectful of end-user privacy, more secure, and less likely to accidentally expose data – things webRequest simply wasn't designed to do.”

Chrome ignores the autocomplete=off property

In other Chrome-related news, it was reported that Chrome continues to autofill forms even if you disable it using the autocomplete=off property. A user commented, “I've had to write enhancements for Web apps several times this year with fields which are intended to be filled by the user with information *about other users*. Not respecting autocomplete="off" is a major oversight which has caused a lot of headache for those enhancements.”

Chrome decides which field should be filled with what data based on a combination of form and field signatures; if these do not match, the browser falls back to checking only the field signatures. A developer from the Google Chrome team shared, “This causes some problems, e.g. in input type="text" name="name", the "name" can refer to different concepts (a person's name or the name of a spare part).” To solve this problem, the team is working on an experimental feature that gives users the choice to “(permanently) hide the autofill suggestions.”

Check out the reported issue to know more in detail.