
Tech News

3711 Articles
Sugandha Lahoti
05 Apr 2018
4 min read

Paper in Two minutes: Attention Is All You Need

A paper on a new simple network architecture, the Transformer, based solely on attention mechanisms.

The NIPS 2017 accepted paper, Attention Is All You Need, introduces the Transformer, a model architecture relying entirely on an attention mechanism to draw global dependencies between input and output. The paper is authored by members of the Google research team: Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin.

What problem is the paper attempting to solve?

Recurrent neural networks (RNNs), long short-term memory networks (LSTMs), and gated RNNs are the popular approaches for sequence modelling tasks such as machine translation and language modeling. However, these models handle sequences word by word, in a sequential fashion. This sequentiality is an obstacle to parallelizing the process. Moreover, when sequences are too long, the model is prone to forgetting the content of distant positions in the sequence or mixing it up with the content of following positions.

Recent works have achieved significant improvements in computational efficiency and model performance through factorization tricks and conditional computation, but these are not enough to eliminate the fundamental constraint of sequential computation. Attention mechanisms are one solution to the problem of model forgetting, because they allow dependencies to be modelled without regard to their distance in the input or output sequences. Due to this feature, they have become an integral part of sequence modeling and transduction models. However, in most cases attention mechanisms are used in conjunction with a recurrent network.

Paper summary

The Transformer proposed in this paper is a model architecture which relies entirely on an attention mechanism to draw global dependencies between input and output.
The Transformer allows for significantly more parallelization and tremendously improves translation quality after being trained for as little as twelve hours on eight P100 GPUs.

Neural sequence transduction models generally have an encoder-decoder structure. The encoder maps an input sequence of symbol representations to a sequence of continuous representations; the decoder then generates an output sequence of symbols, one element at a time. The Transformer follows this overall architecture, using stacked self-attention and point-wise, fully connected layers for both the encoder and decoder.

The authors are motivated to use self-attention by three criteria. One is the total computational complexity per layer. Another is the amount of computation that can be parallelized, as measured by the minimum number of sequential operations required. The third is the path length between long-range dependencies in the network.

The Transformer uses two different types of attention functions. Scaled Dot-Product Attention computes the attention function on a set of queries simultaneously, packed together into a matrix. Multi-head attention allows the model to jointly attend to information from different representation subspaces at different positions.

A self-attention layer connects all positions with a constant number of sequentially executed operations, whereas a recurrent layer requires O(n) sequential operations. In terms of computational complexity, self-attention layers are faster than recurrent layers when the sequence length is smaller than the representation dimensionality, which is often the case in machine translation.

Key Takeaways

This work introduces the Transformer, a novel sequence transduction model based entirely on attention mechanisms. It replaces the recurrent layers most commonly used in encoder-decoder architectures with multi-headed self-attention.
The Transformer can be trained significantly faster than architectures based on recurrent or convolutional layers for translation tasks. On both the WMT 2014 English-to-German and WMT 2014 English-to-French translation tasks, the model achieves a new state of the art. In the former task, the model outperforms all previously reported ensembles.

Future Goals

The Transformer has so far only been applied to transduction tasks. In the near future, the authors plan to use it for other problems involving input and output modalities other than text, and to apply attention mechanisms to efficiently handle large inputs and outputs such as images, audio, and video.

The Transformer architecture from this paper has gained major traction since its release because of major improvements in translation quality and other NLP tasks. Recently, the NLP research group at Harvard released a post which presents an annotated version of the paper in the form of a line-by-line implementation. It is accompanied by 400 lines of library code, written in PyTorch in the form of a notebook, accessible from GitHub or on Google Colab with free GPUs.
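The scaled dot-product attention function described above can be sketched in plain Python (a minimal illustration of the formula from the paper, not the authors' implementation, which uses batched tensor operations):

```python
import math

def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def softmax(row):
    """Numerically stable softmax over one list of scores."""
    m = max(row)
    exps = [math.exp(x - m) for x in row]
    total = sum(exps)
    return [e / total for e in exps]

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = len(K[0])
    K_t = [list(col) for col in zip(*K)]        # transpose of K
    scores = matmul(Q, K_t)                     # similarity of each query to each key
    scaled = [[s / math.sqrt(d_k) for s in row] for row in scores]
    weights = [softmax(row) for row in scaled]  # attention weights sum to 1 per query
    return matmul(weights, V)                   # weighted sum of value vectors
```

Multi-head attention runs several such attention functions in parallel over learned linear projections of Q, K, and V, then concatenates the results.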
Natasha Mathur
19 Dec 2018
3 min read

Patreon speaks out against the protests over its banning Sargon of Akkad for violating its rules on hate speech

Patreon, a popular crowdfunding platform, published a post yesterday in defense of its removal last week of Sargon of Akkad, aka Carl Benjamin, an English YouTuber famous for his anti-feminist content, over concerns that he violated its policies on hate speech. Patreon has been receiving backlash ever since from users and patrons of the website, who are calling for a boycott.

“Patreon does not and will not condone hate speech in any of its forms. We stand by our policies against hate speech. We believe it’s essential for Patreon to have strong policies against hate speech to build a safe community for our creators and their patrons”, says the Patreon team.

Patreon mentioned that it reviews the creations posted by content creators on other platforms that are funded via Patreon. Since Benjamin is quite popular for his collaborations with other creators, Patreon’s community guidelines, which strictly prohibit hate speech, also apply to those collaborations. According to the guidelines, “Hate speech includes serious attacks, or even negative generalizations, of people based on their race [and] sexual orientation.” In an interview on another YouTuber’s channel, Benjamin used racial slurs linked with “negative generalizations of behavior”, quite contrary to how people of those races actually act, to insult others. Apart from racial slurs, he also used slurs related to sexual orientation, which violates Patreon’s community guidelines.

However, a lot of people are not happy with Patreon’s decision. For instance, Sam Harris, a popular American author, podcast host, and neuroscientist, who had one of the top-grossing accounts on Patreon (with nearly 9,000 paying patrons at the end of November), deleted his account earlier this week, accusing the platform of “political bias”. He wrote, “the crowdfunding site Patreon has banned several prominent content creators from its platform.
While the company insists that each was in violation of its terms of service, these recent expulsions seem more readily explained by political bias. I consider it no longer tenable to expose any part of my podcast funding to the whims of Patreon’s ‘Trust and Safety’ committee.”

https://twitter.com/SamHarrisOrg/status/1074504882210562048

Apart from banning Carl Benjamin, Patreon also banned Milo Yiannopoulos, a British public speaker and YouTuber with over 839,286 subscribers, earlier this month over his association with the Proud Boys, which Patreon has classified as a hate group.

https://twitter.com/Patreon/status/1070446085787668480

James Allsup, an alt-right political commentator and associate of Yiannopoulos, was also banned from Patreon last month for his association with hate groups.

Amidst this controversy, some top Patreon creators, such as Jordan Peterson, a popular Canadian clinical psychologist whose YouTube channel has over 1.6M subscribers, and Dave Rubin, an American libertarian political commentator, announced plans earlier this week to start an alternative to Patreon. Peterson said that the new platform will work on a subscriber model similar to Patreon’s, only with a few additional features.

https://www.youtube.com/watch?v=GWz1RDVoqw4

“We understand some people don’t believe in the concept of hate speech and don’t agree with Patreon removing creators on the grounds of violating our Community Guidelines for using hate speech. We have a different view,” says the Patreon team.

Emmanuel Macron teams up with Facebook in a bid to fight hate speech on social media
Twitter takes action towards dehumanizing speech with its new policy
How IRA hacked American democracy using social media and meme warfare to promote disinformation and polarization: A new report to Senate Intelligence Committee
Packt Publishing
13 Aug 2015
2 min read

Get 35% off the print copy of your favorite eBook

Here at Packt we've always aimed to be ahead of the curve when it comes to new formats for learning; that's why we offer a whole range of eBooks, videos, and even our Mapt subscription service, so you can get the best experience no matter what type of learner you are. Sometimes you just want an old-fashioned book, though; after all, who doesn't like to actually hold a physical page, or have a handy print book to pass around the office when problems start springing up?

So if you've bought an eBook directly from us and it wasn't a free download, we’re offering an Upgrade to Print, which lets you save 35% off the print version. No more worries about sharing your new eReader with your colleagues.

All you have to do is log into the account from which you purchased the eBook and go to that eBook's product page, where you will see a 35% discount applied to the 'Print + eBook' option. You’ll save 35% off the list price, and a brand new print copy will be on its way to you! Plus, when you upgrade to print we'll throw in another copy of the original eBook for you to share with someone else – all you have to do is go back into My eBooks and click the Share button, no hassle!
Prasad Ramesh
14 Jan 2019
2 min read

LLVM officially migrating to GitHub from Apache SVN

In October last year, it was reported that LLVM (Low-Level Virtual Machine) was moving from Apache Subversion (SVN) to GitHub. Now the migration is complete and LLVM is available on GitHub. This transition was long under discussion and is now officially complete. LLVM is a toolkit for creating compilers, optimizers, and runtime environments.

One motivation for the migration was that continuous integration in LLVM would sometimes break because the SVN server was down. The project moved to GitHub for services SVN lacked, such as better 24/7 stability, more disk space, code browsing, and forking. GitHub is also used by most of the LLVM community, and there were already unofficial mirrors on GitHub before this official migration.

Last week, James Y Knight from the LLVM team wrote to a mailing list: “The new official monorepo is published to LLVM's GitHub organization, at: https://github.com/llvm/llvm-project. At this point, the repository should be considered stable -- there won't be any more rewrites which invalidate commit hashes (barring some _REALLY_ good reason...)”

Along with LLVM, this monorepo also hosts Clang, LLD, Compiler-RT, and other LLVM sub-projects. Commits are being made to the LLVM GitHub repository even at the time of writing, and the repo currently has about 200 stars. Updated workflow documents and instructions for migrating in-progress user work are being drafted and will be available soon.

This move was initiated after positive responses from LLVM community members to the proposal to migrate to GitHub. If you want to stay up to date with more details, you can follow the LLVM mailing list.

LLVM will be relicensing under Apache 2.0 start of next year
A libre GPU effort based on RISC-V, Rust, LLVM and Vulkan by the developer of an earth-friendly computer
LLVM 7.0.0 released with improved optimization and new tools for monitoring
Amrata Joshi
19 Dec 2018
2 min read

Netflix adopts Spring Boot as its core Java framework

This year, Netflix decided to make Spring Boot its core Java framework, while leveraging the community’s contributions via Spring Cloud Netflix. The team at Netflix started working towards fully operating in the cloud in 2007. Along the way it built several cloud infrastructure libraries and systems, including Ribbon, an Inter Process Communication (IPC) library for load balancing; Eureka, an AWS service registry for service discovery; and Hystrix, a latency and fault tolerance library.

Spring Cloud Netflix provides Netflix OSS integrations for Spring Boot apps with the help of autoconfiguration and binding to the Spring Environment. It reached version 1.0 in 2015. The idea behind Spring Cloud was to bring the Netflix OSS components to Spring Boot instead of Netflix internal solutions. It has now become the preferred way for the community to adopt Netflix’s open source software, and it features Eureka, Ribbon, and Hystrix.

Why did Netflix opt for the Spring Boot framework?

In the early 2010s, the requirements for Netflix cloud infrastructure were efficiency, reliability, scalability, and security. Since there were no suitable alternatives at the time, the team at Netflix created solutions in-house. By adopting the Spring Boot framework, Netflix has managed to meet all of these requirements, as it provides capabilities such as data access with spring-data, complex security management with spring-security, and integration with cloud providers with spring-cloud-aws.

The Spring framework also features proven and long-lasting abstractions and APIs, and the Spring team has provided quality implementations of them. This abstract-and-implement methodology also matches well with Netflix’s principle of being “highly aligned, loosely coupled”.

“We plan to leverage the strong abstractions within Spring to further modularize and evolve the Netflix infrastructure.
Where there is existing strong community direction, such as the upcoming Spring Cloud Load Balancer, we intend to leverage these to replace aging Netflix software.” - Netflix

Read more about this news on the Netflix Tech Blog.

Netflix’s culture is too transparent to be functional, reports the WSJ
Tech News Today: Facebook’s SUMO challenge; Netflix AVA; inmates code; Japan’s AI, blockchain uses
How Netflix migrated from a monolithic to a microservice architecture [Video]
Pravin Dhandre
20 Apr 2018
4 min read

What's new in ECMAScript 2018 (ES9)?

ECMAScript 2018 - also known as ES9 - is now complete, with lots of features. Since the major features released with ECMAScript 2015, the language has matured with yearly update releases. After multiple draft submissions and completion of the 5-stage process, the TC39 committee has finalized the set of features that will be rolled out in June. The full list of proposals that were submitted to the TC39 committee can be viewed in its GitHub repository.

What are the key features of ECMAScript 2018?

Let’s have a quick look at the key ES9 features and how they add value for web developers.

Lifting the template literal restriction

Template literals generally allow the embedding of languages such as DSLs. However, restrictions on escape sequences make this quite complicated, and removing the restriction outright would create difficulty in handling cooked template values containing illegal escape sequences. The proposed feature therefore redefines the cooked value for illegal escape sequences to undefined. This lifting of the restriction allows otherwise-illegal values like \xerxes and makes embedding languages simpler. The detailed proposal with templates can be viewed on GitHub.

Asynchronous iterators

The newer version will provide syntactic support for asynchronous iteration with both AsyncIterable and AsyncIterator protocols. The syntactic support will help in reading lines of text from an HTTP connection easily. In my opinion, this is one of the most important and useful features, as it makes the code look simpler. It introduces a new IterationStatement, for-await-of, and also adds syntax for creating async generator functions. The detailed proposal can be viewed on GitHub.

Promise.prototype.finally

Promises make the execution of callback functions easy. Many promise libraries have a "finally" method through which you can run code no matter how the promise resolves. It works by registering a callback which gets invoked when a promise is fulfilled or rejected.
Bluebird, Q, and when are some examples. The detailed proposal can be viewed on GitHub.

Unicode property escapes in regular expressions

The ECMAScript 2018 version adds Unicode property escapes `\p{…}` and `\P{…}` to regular expressions. These are a new and unique type of escape sequence, available with the u flag set. With this feature, one can create Unicode-aware regular expressions with ease. These escapes are easily readable and compact, and they get updated automatically by the ECMAScript engine. The detailed proposal can be viewed on GitHub.

RegExp lookbehind assertions

Assertions are regular expression constructs, such as anchors and lookarounds, that either succeed or fail depending on whether a match is found. ECMAScript has extended its lookahead assertions, which look in the forward direction, with lookbehind assertions that look backward. This will be helpful in instances like validating a dollar amount without capturing the dollar sign, where a pattern is or is not preceded by another. The detailed proposal can be viewed on GitHub.

Object rest/spread properties

The earlier version of ECMAScript includes rest and spread properties for array literals. Likewise, the newer version introduces rest and spread properties for object literals. Both of these operations for objects help in extracting the properties we want while dropping unwanted ones. The detailed proposal can be viewed on GitHub.

RegExp named capture groups

Named capture groups are another RegExp feature, similar to named groups in Java and Python. With this, you can write a RegExp that provides names, in the format (?<name>...), for different parts of the group. This allows you to use that name to grab whichever group you need in a straightforward way. The detailed proposal can be viewed on GitHub.

The s (‘dotAll’) flag for regular expressions

In regular expression patterns, the earlier version allows the dot (.) to match any character except astral and line terminator characters like \n and \f, which made creating some regexes complicated. The newer version proposes the addition of a new s flag under which the dot matches any character, including astral and line terminators. The detailed proposal can be viewed on GitHub.

When will ECMAScript 2018 be available?

All of the features above are expected to be implemented and available in browsers this year. It's in the name, after all. But there are likely to be even more new features and capabilities in the 2019 release. Read this to get a clearer picture of what’s likely to feature in the 2019 release.
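ES2018's (?<name>...) syntax mirrors named capture groups already available elsewhere; as a point of comparison, here is the equivalent idea in Python's re module, which uses the (?P<name>...) spelling:

```python
import re

# Parse an ISO date using named capture groups (Python's (?P<name>...) syntax;
# ES2018 introduces the analogous (?<name>...) form for JavaScript).
pattern = re.compile(r"(?P<year>\d{4})-(?P<month>\d{2})-(?P<day>\d{2})")
match = pattern.match("2018-04-20")

year = match.group("year")    # grab a group by name instead of by index
month = match.group("month")
day = match.group("day")
```

Referring to groups by name rather than positional index keeps the pattern readable even after groups are added or reordered.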
Bhagyashree R
11 Dec 2018
2 min read

Google researchers introduce JAX: A TensorFlow-like framework for generating high-performance code from Python and NumPy machine learning programs

Google researchers have built a tool called JAX, a domain-specific tracing JIT compiler, which generates high-performance accelerator code from pure Python and NumPy machine learning programs. It combines Autograd and XLA for high-performance machine learning research. At its core, it is an extensible system for transforming numerical functions.

Autograd helps JAX automatically differentiate native Python and NumPy code. It can handle a large subset of Python features such as loops, branches, recursion, and closures. It comes with support for reverse-mode (backpropagation) and forward-mode differentiation, and the two can be composed arbitrarily in any order.

XLA, or Accelerated Linear Algebra, is a linear algebra compiler used for optimizing TensorFlow computations. To run NumPy programs on GPUs and TPUs, JAX uses XLA. The library calls are compiled and executed just-in-time. JAX also allows compiling your own Python functions just-in-time into XLA-optimized kernels using a one-function API, jit.

How does JAX work?

The basic function of JAX is specializing and translating high-level Python and NumPy functions into a representation that can be transformed and then lifted back into a Python function. It traces Python functions by monitoring all the basic operations applied to their inputs, then records these operations and the data flow between them in a directed acyclic graph (DAG). To trace a function, JAX wraps its primitive operations; when they’re called, they add themselves to a list of operations performed, along with their inputs and outputs. To keep track of the data flow between these primitive operations, the values being tracked are wrapped in Tracer class instances.

The team is working towards expanding this project to provide support for cloud TPU, multi-GPU, and multi-TPU. In the future, it will come with full NumPy coverage, some SciPy coverage, and more.
As this is still a research project, bugs can be expected, and it is not recommended for use in production. To read more detail or to contribute, head over to the project on GitHub.

Google AdaNet, a TensorFlow-based AutoML framework
Graph Nets – DeepMind’s library for graph networks in Tensorflow and Sonnet
Dopamine: A Tensorflow-based framework for flexible and reproducible Reinforcement Learning research by Google
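The tracing mechanism described above, wrapping primitive operations so that each call records itself along with its inputs and outputs, can be illustrated with a toy pure-Python sketch. The class name Tracer matches the article's description, but everything else here is a simplified assumption for illustration, not JAX's actual code:

```python
class Tracer:
    """Wraps a value and records every primitive operation applied to it."""
    def __init__(self, value, tape):
        self.value = value
        self.tape = tape  # shared list standing in for the recorded operation graph

    def __add__(self, other):
        other_val = other.value if isinstance(other, Tracer) else other
        out = Tracer(self.value + other_val, self.tape)
        self.tape.append(("add", self.value, other_val, out.value))  # record the op
        return out

    def __mul__(self, other):
        other_val = other.value if isinstance(other, Tracer) else other
        out = Tracer(self.value * other_val, self.tape)
        self.tape.append(("mul", self.value, other_val, out.value))  # record the op
        return out

def trace(fn, x):
    """Run fn on a traced input; return the result and the recorded operations."""
    tape = []
    result = fn(Tracer(x, tape))
    return result.value, tape

# A plain Python function; tracing it records its primitive operations in order.
value, ops = trace(lambda x: x * 3 + 1, 2)
```

Running the example yields both the computed value and a linear record of the operations performed, which a system like JAX can then transform (differentiate, compile) before lifting it back into a callable function.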
Amrata Joshi
18 Dec 2018
3 min read

Introducing remove.bg, a deep learning-based tool that automatically removes the background of any person-based image within 5 seconds

Yesterday, Benjamin Groessing, a web consultant and developer at byteq, released remove.bg, a tool built on Python, Ruby, and deep learning. The tool automatically removes the background of any image within 5 seconds, using various custom algorithms to process the image.

https://twitter.com/hammer_flo_/status/1074914463726350336

It is a free service, and users don’t have to manually select the background/foreground layers to separate them. One can simply select an image and instantly download the result with the background removed.

Features of remove.bg

Personal and professional use: remove.bg can be used by graphic designers, photographers, or selfie lovers for removing backgrounds.

Saves time and money: it is automated and free of cost.

100% automatic: apart from the image file, this release doesn’t require inputs such as selecting pixels, marking persons, etc.

How does remove.bg work?

https://twitter.com/begroe/status/1074645152487129088

Remove.bg uses AI technology to detect foreground layers and separate them from the background, with additional algorithms for improving fine details and preventing color contamination. The AI detects persons as foreground and everything else as background, so it only works if there is at least one person in the image. Users can upload images of any resolution, but for performance reasons the output image is limited to 500 × 500 pixels.

Privacy in remove.bg

User images are uploaded through a secure SSL/TLS-encrypted connection. The images are processed and the result is stored temporarily until the user downloads it; approximately an hour later, the image files are deleted. The privacy message on the official website of remove.bg states, “We do not share your images or use them for any other purpose than removing the background and letting you download the result.”

What can be expected from the next release?
The next set of releases might support other kinds of images, such as product images. The team at remove.bg might also release an easy-to-use API.

Users are very excited about this release and the technology behind it. Many users are comparing it with the portrait mode on iPhone X; though it is not as fast, users still like it.

https://twitter.com/Baconbrix/status/1074805036264316928
https://twitter.com/hammer_flo_/status/1074914463726350336

But how strong remove.bg is with regard to privacy is a bigger question. Though the website gives a privacy note at the end, it will take more to win users’ trust. The images uploaded to remove.bg’s cloud might be at risk. How strong is the security, and what preventive measures have they taken? These are a few of the questions that might bother many. To have a look at the ongoing discussion on remove.bg, check out Benjamin Groessing’s AMA Twitter thread.

Facebook open-sources PyText, a PyTorch based NLP modeling framework
Deep Learning Indaba presents the state of Natural Language Processing in 2018
NYU and AWS introduce Deep Graph Library (DGL), a python package to build neural network graphs
Sugandha Lahoti
17 Nov 2017
3 min read

Microsoft showcases its edgy AI toolkit at Connect(); 2017

At the ongoing Microsoft Connect(); 2017, Microsoft has unveiled its latest innovations in AI development platforms. The Connect(); conference this year is all about developing new tools and cloud services that help developers seize the growing opportunity around artificial intelligence and machine learning. Microsoft has made two major announcements to capture the AI market.

Visual Studio Tools for AI

Microsoft has announced new tools for its Visual Studio IDE specifically for building AI applications. Visual Studio Tools for AI is currently in beta and is an extension to Visual Studio 2017. It allows developers, data scientists, and machine learning engineers to embed deep learning models into applications, and it has built-in support for popular machine learning frameworks such as Microsoft Cognitive Toolkit (CNTK), Google TensorFlow, Caffe2, and MXNet. It also comes packed with features such as custom metrics, history tracking, enterprise-ready collaboration, and data science reproducibility and auditing.

Visual Studio Tools for AI allows interactive debugging of deep learning applications, with built-in features like syntax highlighting, IntelliSense, and text auto-formatting. Training AI models in the cloud is also possible through integration with Azure Machine Learning, which likewise allows deploying a model into production. Visualization and monitoring of AI models is available using TensorBoard, an integrated open tool that can run both locally and in remote VMs.

Azure IoT Edge

Microsoft sees IoT as a mission-critical business asset, and with this in mind it has developed a product for IoT solutions. Termed Azure IoT Edge, it enables developers to run cloud intelligence on the edge of IoT devices. Azure IoT Edge can operate on Windows and Linux, as well as on multiple hardware architectures (x64 and ARM). Developers can work in languages such as C#, C, and Python to deploy models on Azure IoT Edge.
Azure IoT Edge bundles multiple components. With the AI Toolkit, developers can start building AI applications. With Azure Machine Learning, AI applications can be created, deployed, and managed on any framework; Azure Machine Learning also includes a set of pre-built AI models for common tasks. In addition, using the Azure IoT Hub, developers can deploy Edge modules on multiple IoT Edge devices. Using a combination of Azure Machine Learning, Azure Stream Analytics, Azure Functions, and any third-party code, a complex data pipeline can be created to build and test container-based workloads; this pipeline can be managed using the Azure IoT Hub.

Customer reviews of Azure IoT Edge have been positive so far. Here’s what Matt Boujonnier, Analytics Application Architect at Schneider Electric, says: "Azure IoT Edge provided an easy way to package and deploy our Machine Learning applications. Traditionally, machine learning is something that has only run in the cloud, but for many IoT scenarios that isn’t good enough, because you want to run your application as close as possible to any events. Now we have the flexibility to run it in the cloud or at the edge—wherever we need it to be."

With the launch of these two new tools, Microsoft is catching up quickly with the likes of Google and IBM to capture the AI market, providing developers with an intelligent edge.
Bhagyashree R
15 Oct 2019
6 min read

After backlash for rejecting a uBlock Origin update from the Chrome Web Store, Google accepts ad-blocking extension

Last week, Raymond Hill, the developer behind uBlock Origin, shared that the extension’s dev build 1.22.5rc1 was rejected by Google's Chrome Web Store (CWS). uBlock Origin is a free and open-source browser extension widely used for content filtering and ad blocking. Google stated that the extension did not comply with its extension standards, as it bundles different purposes into a single extension. An email to Hill from Google reads, “Do not create an extension that requires users to accept bundles of unrelated functionality, such as an email notifier and a news headline aggregator.”

Hill mentioned in a GitHub issue that this is basically “stonewalling” and that in the future users may have to switch to another browser to use uBlock Origin. He does plan to upload the stable version. He commented, “I will upload stable to the Chrome Web Store, but given 1.22.5rc2 is rejected, logic dictates that 1.23.0 will be rejected. Actually, logic dictates that 1.22.5rc0 should also be rejected and yet it's still available in the Chrome Web Store.”

Users’ reaction to Google rejecting the uBlock Origin dev build

This news sparked a discussion on Hacker News and Reddit. Users speculated that this outcome is probably the result of the “crippling” update Google has introduced in Chrome (currently in the beta and dev versions): deprecating the blocking ability of the webRequest API. The webRequest API permits extensions to intercept requests in order to modify, redirect, or block them. The basic flow of handling a request using this API is: Chrome receives the request, asks the extension, and then gets the result. In Manifest V3, the use of this API will be limited in its blocking form, while the non-blocking form, which permits extensions to observe network requests, will still be allowed.

In place of the blocking webRequest API, Google has introduced the declarativeNetRequest API. This API allows adding up to 30,000 rules, 5000 dynamic rules, and 100 pages.
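With declarativeNetRequest, an extension declares its rules up front as static data rather than inspecting each request in code. A single blocking rule might look roughly like this (a sketch based on Chrome's documented rule format; the filter value is a made-up example):

```json
{
  "id": 1,
  "priority": 1,
  "action": { "type": "block" },
  "condition": {
    "urlFilter": "||ads.example.com",
    "resourceTypes": ["script", "image"]
  }
}
```

Because the browser, not the extension, evaluates these declarations, the extension never sees the request contents, which is the privacy and performance argument Google makes for the change.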
Due to its limiting nature, many ad blocker developers and maintainers have expressed concern that this API will impact the capabilities of modern content-blocking extensions. Google's reasoning for introducing this change is that the new API is much more performant and provides better privacy guarantees. However, many developers think otherwise. Hill had previously shared his thoughts on deprecating the blocking ability of the webRequest API: "Web pages load slow because of bloat, not because of the blocking ability of the webRequest API -- at least for well-crafted extensions. Furthermore, if performance concerns due to the blocking nature of the webRequest API was their real motive, they would just adopt Firefox's approach and give the ability to return a Promise on just the three methods which can be used in a blocking manner."

Many users also mentioned that Chrome is using its dominance in the browser market to dictate what types of extensions are developed and used. A user commented, "As Chrome is a dominant platform, our work is prevented from reaching users if it does not align with the business goals of Google, and extensions that users want on their devices are effectively censored out of existence." Others expressed that it is better to avoid all the drama by simply switching to another browser, mainly Firefox. "Or you could cease contributing to the Blink monopoly on the web and join us of Firefox. Microsoft is no longer challenging Google in this space," a user added.

Some others were in support of Google, saying that Hill could have moved some of the functionality into a separate extension: "It's an older rule. It does technically apply here, but it's not a great look that they're only enforcing it now. If Gorhill needed to, some of that extra functionality could be moved out into a separate extension. uBlock has done this before with uBlock Origin Extra. Most of the extra features (eg. remote font blocking) aren't a huge deal, in my opinion."

How Google reacted to the public outcry

Simeon Vincent, a developer advocate for Chrome extensions, commented on a Reddit discussion that the updated extension was approved and published on the Chrome Web Store: "This morning I heard from the review team; they've approved the current draft so next publish should go through. Unfortunately it's the weekend, so most folks are out, but I'm planning to follow up with u/gorhill4 with more details once I have them. EDIT: uBlock Origin development build was just successfully published. The latest version on the web store is 1.22.5.102."

He also said that this whole confusion was the result of a "clunkier" developer communication process. When users asked him about the Manifest V3 change, he shared, "We've made progress on better supporting ad blockers and content blockers in general in Manifest V3. We've added rule modification at runtime, bumped the rule limits, added redirect support, header modification, etc. And more improvements are on the way." He further added, "But Gorhill's core objection is to removing the blocking version of webRequest. We're trying to move the extension platform in a direction that's more respectful of end-user privacy, more secure, and less likely to accidentally expose data – things webRequest simply wasn't designed to do."

Chrome ignores the autocomplete=off property

In other Chrome-related news, it was reported that Chrome continues to autofill forms even if you disable autofill using the autocomplete=off property. A user commented, "I've had to write enhancements for Web apps several times this year with fields which are intended to be filled by the user with information *about other users*. Not respecting autocomplete="off" is a major oversight which has caused a lot of headache for those enhancements." Chrome decides which field should be filled with what data based on a combination of form and field signatures.
If these do not match, the browser will resort to only checking the field signatures. A developer from the Google Chrome team shared, "This causes some problems, e.g. in input type="text" name="name", the "name" can refer to different concepts (a person's name or the name of a spare part)." To solve this problem, the team is working on an experimental feature that gives users the choice to "(permanently) hide the autofill suggestions." Check out the reported issue to know more in detail.

Google Chrome developers "clarify" the speculations around Manifest V3 after a study nullifies their performance hit argument
Is it time to ditch Chrome? Ad blocking extensions will now only be for enterprise users
Chromium developers propose an alternative to webRequest API that could result in existing ad blockers' end
GitHub updates to Rails 6.0 with an incremental approach
React DevTools 4.0 releases with support for Hooks, experimental Suspense API, and more!

NumPy drops Python 2 support. Now you need Python 3.5 or later.

Prasad Ramesh
17 Dec 2018
2 min read
In a GitHub pull request last week, the NumPy community decided to remove support for Python 2.7. Python 3.4 support will also be dropped with this pull request. So, to use NumPy 1.17 and newer versions, you will need Python 3.5 or later. NumPy has been supporting both Python versions since 2010. This move doesn't come as a surprise, with the Python core team itself dropping support for Python 2 in 2020. The NumPy team mentioned that the move comes because "Python 2 is an increasing burden on our limited resources". The discussion to drop Python 2 support in NumPy started almost a year ago.

Running pip install numpy on Python 2 will still install the last working version, but from here on, it may not contain the latest features released for Python 3.5 or higher. However, NumPy on Python 2 will still be supported until December 31, 2019. After January 1, 2020, it may not contain the newest bug fixes.

The Twitter audience sees this as a welcome move:
https://twitter.com/TarasNovak/status/1073262599750459392
https://twitter.com/esc___/status/1073193736178462720

A comment on Hacker News reads: "Let's hope this move helps with the transitioning to Python 3. I'm not a Python programmer myself, but I'm tired of things getting hairy on Linux dependencies written in Python. It almost seems like I always got to have a Python 2 and a Python 3 version of some packages so my system doesn't break." Another one reads: "I've said it before, I'll say it again. I don't care for everything-is-unicode-by-default. You can take my Python 2 when you pry it from my cold dead hands."

Some researchers who use NumPy and SciPy stick to Python 2, so this move from the NumPy team will help get everyone working on a single version. A single supported version will surely help with the fragmentation. Often, Python developers find themselves in a situation where they have one version installed while a specific module is only available, or only works properly, in another version.
Some also argue about stability, saying that Python 2 is more stable or offers this or that feature. But the general sentiment is more supportive of adopting Python 3.

Introducing numpywren, a system for linear algebra built on a serverless architecture
NumPy 1.15.0 release is out!
Implementing matrix operations using SciPy and NumPy


GCC 9.1 releases with improved diagnostics, simpler C++ errors and much more

Amrata Joshi
11 Mar 2019
2 min read
Just two months ago, the team behind GCC (GNU Compiler Collection) made certain changes to GCC 9.1. And last week, the team released GCC 9.1 with improved diagnostics, improved source locations, and simpler C++ errors.

What's new in GCC 9.1?

Changes to diagnostics

The team added a left-hand margin that shows line numbers, giving GCC 9.1's diagnostics a new look. Diagnostics can now label regions of the source code to show relevant information; for example, the left-hand and right-hand sides of a "+" operator in a type-mismatch error are labeled, and GCC highlights them inline. The team has also added a JSON output format, so GCC 9.1 now has a machine-readable output format for diagnostics.

C++ errors

While dealing with C++, the compiler usually has to consider several functions at a given call site and reject all of them for different reasons. g++'s error messages have to report each of these candidates along with the specific reason it was rejected, which makes even simple cases difficult to read. This release comes with special-casing to simplify g++ errors for common cases.

Improved C++ syntax in GCC 9.1

A major issue within GCC's internal representation is that not every node in the syntax tree has a source location. For GCC 9.1, the team has worked to solve this problem, so most places in the C++ syntax tree now retain location information for longer.

Users can now emit optimization information

GCC 9.1 can automatically vectorize loops, reorganizing them to work on multiple iterations at once. Users now have an option, -fopt-info, that helps in emitting optimization information.

Improved runtime library in GCC 9.1

This release comes with improved experimental support for C++17, including <memory_resource>. There is also support for opening file streams with wide character paths on Windows.

Arm specific

Support for the deprecated Armv2 and Armv3 architectures and their variants has been removed.
Support for the Armv5 and Armv5E architectures has also been removed. To know more about this news, check out Red Hat's blog post.

DragonFly BSD 5.4.1 released with new system compiler in GCC 8 and more
The D language front-end support finally merged into GCC 9
GCC 8.1 Standards released!


GNOME 3.32 released with fractional scaling, improvements to desktop, web and much more

Amrata Joshi
14 Mar 2019
3 min read
Yesterday, the team at GNOME released the latest version of GNOME 3, GNOME 3.32, a free and open-source desktop environment for Unix-like operating systems. This release comes with improvements to the desktop, the web browser, and much more.

What's new in GNOME 3.32?

Fractional Scaling

Fractional scaling is available as an experimental option that includes several fractional values with good visual quality on any given monitor. This feature is a major enhancement for the GNOME desktop. It requires manually adding scale-monitor-framebuffer to the settings key org.gnome.mutter.experimental-features.

Improved data structures in GNOME desktop

This release improves foundational data structures in the GNOME desktop, giving a faster, snappier feel to the animations, icons, and top shell panel. The search database has been improved, which makes searching faster. The on-screen keyboard has also been improved; it now supports an emoji chooser.

New automation mode in GNOME Web

GNOME Web now comes with a new automation mode which allows the application to be controlled by WebDriver. The reader mode has been enhanced and now features a set of customizable preferences and an improved style. With this release, touchpad users can take advantage of more gestures while browsing; for example, swipe left or right to go back or forward through the browsing history.

New settings for permissions

Settings comes with a new "Application Permissions" panel that shows resources and permissions for various applications, including installed Flatpak applications. Users can now grant permissions to certain resources when requested by the application. The Sound settings have been reworked to support a vertical layout and a more intuitive placement of options. With this release, the night light color temperature can be adjusted for a warmer or cooler setting.

GNOME Boxes

GNOME Boxes tries to enable 3D acceleration for virtual machines if both the guest and host support it.
This leads to better performance of graphics-intensive guest applications such as games and video editors.

Application Management from multiple sources

This release can handle apps available from multiple sources, such as Flatpak and distribution repositories. Flatpak app entries now list the permissions required on the details page, giving users a comprehensive understanding of what data the software will need access to. Browsing application details is also faster thanks to the new XML parsing library used in this release. To know more about this release, check out the official announcement.

GNOME team adds Fractional Scaling support in the upcoming GNOME 3.32
GNOME 3.32 says goodbye to application menus
Fedora 29 beta brings Modularity, GNOME 3.30 support and other changes

Unity switches to WebAssembly as the output format for the Unity WebGL build target

Sugandha Lahoti
16 Aug 2018
2 min read
With the launch of the Unity 2018.2 release last month, Unity is finally making the switch to WebAssembly as the output format for the Unity WebGL build target. WebAssembly support was first teased in Unity 5.6 as an experimental feature. Unity 2018.1 marked the removal of the experimental label, and finally, in 2018.2, WebAssembly replaces asm.js as the default linker target.

Source: Unity Blog

WebAssembly replaced asm.js because it is faster, smaller, and more memory-efficient, which are all pain points of the Unity WebGL export. A WebAssembly file is a binary file (a more compact way to deliver code), as opposed to asm.js, which is text. In addition, code modules that have already been compiled can be stored in an IndexedDB cache, resulting in a really fast startup when reloading the same content. With WebAssembly, the code size for an empty project is ~12% smaller, or ~18% if 3D physics is included.

Source: Unity Blog

WebAssembly also has its own instruction set. In development builds, it adds more precise error detection in arithmetic operations. In non-development builds, this kind of detection of arithmetic errors is masked, so the user experience is not affected.

asm.js imposed a restriction on the size of the Unity heap: its size had to be specified at build time and could never change. WebAssembly enables the Unity heap size to grow at runtime, which lets Unity content exceed the initial heap size in memory usage. Unity is now working on multi-threading support, which will initially be released as an experimental feature and will be limited to internal native threads (no C# threads yet). Debugging hasn't gotten any better: while browsers have begun to provide WebAssembly debugging in their devtools suites, these debuggers do not yet scale well to Unity3D-sized content.
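The heap-growth difference is visible directly in the standard WebAssembly JavaScript API. The snippet below is a generic demonstration of a WebAssembly.Memory growing at runtime, not Unity's actual code; the page counts are arbitrary and chosen only for illustration.

```javascript
// WebAssembly memory is allocated in 64 KiB pages. Unlike an asm.js heap,
// whose size was fixed at build time, this memory can grow while running.
const PAGE = 64 * 1024;
const memory = new WebAssembly.Memory({ initial: 16, maximum: 256 });

console.log(memory.buffer.byteLength / PAGE); // 16 pages to start

// grow() adds pages at runtime and returns the previous size in pages.
// This is the mechanism that lets content exceed its initial heap size.
const previousPages = memory.grow(16);
console.log(previousPages);                   // 16
console.log(memory.buffer.byteLength / PAGE); // now 32 pages
```

Note that calling grow() detaches the old ArrayBuffer, so any views into the memory have to be recreated afterwards — one reason runtime heap growth is non-trivial for an engine to adopt.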
What's next to come

Unity is still working on new features and optimizations to improve startup times and performance:

- Asynchronous instantiation
- Structured cloning, which allows compiled WebAssembly to be cached in the browser
- Baseline and tiered compilation, to speed up instantiation
- Streaming instantiation, to compile WebAssembly code while downloading it
- Multi-threading

You can read the full details on the Unity Blog.

Unity 2018.2: Unity release for this year second time in a row!
GitHub for Unity 1.0 is here with Git LFS and file locking support
What you should know about Unity 2018 Interface


Spotify has "one of the most intricate uses of JavaScript in the world," says former engineer

Richard Gall
19 Jul 2018
3 min read
A former Spotify engineer, Mattias Petter Johansson, has outlined how the music streaming platform uses JavaScript in its desktop application. It's complicated and, according to Reddit, "kind of insane". Responding to a question on Quora, Johansson says it could be "among the top 25 most intricate uses of JavaScript in the world." What's particularly interesting is how this intricate JavaScript has influenced the Spotify architecture and the way the development teams are organized.

How JavaScript is used on the Spotify desktop app

JavaScript is used across the Spotify desktop client. Wherever UI is concerned, it uses JavaScript. C++ is used for the functionality beneath the UI, with JavaScript sitting on top of it. The two languages are connected by an interface aptly called a "bridge."

Spotify's squads and spotlets

The Spotify team is made up of small squads of anywhere from 3 to 12 people. Johansson explains that "a feature is generally owned by a single squad, and during normal conditions the squad has all it needs to develop and maintain its feature." Each team has as many backend, frontend, and mobile developers as necessary for the particular feature it owns. These features are known as "spotlets." Each spotlet is essentially a web app, and together they power the desktop app's UI. Johansson explains how they work: "They all run inside Chromium Embedded Framework, each app living within their own little iframe, which gives squads the ability to work with whatever frameworks they need, without the need to coordinate tooling and dependencies with other squads."

The advantage of this is that it makes technical decision-making much easier. As Johansson explains, "introducing a library is a discussion between a few people instead of decision that involves ~100 people and their various needs."
Shared functionality across the Spotify development team

Although spotlets and squads create a somewhat fragmented picture of the development team, some things are unified. "The latest versions of all Spotlets are zipped and bundled with the desktop client binary on every release, assets and all," says Johansson. Individual spotlets are also sometimes released on their own when an emergency fix is needed. And although tooling decisions are left up to individual squads, a couple of tools are used across the team. This includes GLUE, a CSS framework that allows some coordination and alignment in terms of design. The team also relies heavily on npm, as you might expect: "We have our own internal npm repository where we publish internal modules, and we package the code together using a Browserify-like tool."
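The "bridge" between the JavaScript UI layer and the C++ core is not public, but the general request/response pattern it describes is a common one. The sketch below is a toy illustration of that pattern only — the Bridge class, the "player.play" message name, and the payload shapes are all invented for this example and do not reflect Spotify's real interface.

```javascript
// A toy message-passing bridge: UI code sends named requests across an
// async boundary; a lower layer (C++ in Spotify's case) registers handlers.
class Bridge {
  constructor() {
    this.handlers = new Map();
  }

  // The "native" side registers a handler for a named request.
  register(name, handler) {
    this.handlers.set(name, handler);
  }

  // The UI side sends a request and gets a Promise back, mirroring how an
  // asynchronous boundary between JavaScript and native code typically behaves.
  request(name, payload) {
    const handler = this.handlers.get(name);
    if (!handler) return Promise.reject(new Error(`no handler for ${name}`));
    return Promise.resolve(handler(payload));
  }
}

const bridge = new Bridge();
bridge.register("player.play", ({ trackId }) => ({ status: "playing", trackId }));

bridge
  .request("player.play", { trackId: "some-track-id" })
  .then((result) => console.log(result.status)); // "playing"
```

The payoff of this kind of indirection is the one Johansson describes: each iframe-hosted spotlet only needs to agree on the message contract, not on frameworks or tooling.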