
Tech News - Data

1209 Articles

Microsoft’s Visual Studio IntelliCode gets improved features: Whole-line code completions, AI-assisted refactoring, and more!

Savia Lobo
06 Nov 2019
3 min read
At Ignite 2019, Microsoft shared a few improvements to Visual Studio IntelliCode, Microsoft’s tool for AI-assisted coding that offers intelligent suggestions to improve code quality and productivity. Amanda Silver, a director of Microsoft’s developer division, writes in her official blog post, “At Microsoft Ignite, we showed a vision of how AI can be applied to developer tools. After talking with thousands of developers over the last couple years, we found that the most highly effective assistance can only come from one source: the collective knowledge of the open source, GitHub community.”

Latest improvements in Microsoft’s IntelliCode

Whole-line code completions and AI-assisted suggestions

IntelliCode now provides whole-line code completion suggestions. To do so, it extends the GPT-2 transformer language model to learn about programming languages and coding patterns. OpenAI’s GPT model architecture can generate conditional synthetic text examples without needing domain-specific training datasets. For the initial language-specific base models, the team adopted an unsupervised learning approach that learns from over 3,000 top GitHub repositories. The base model extracts statistical coding patterns and learns the intricacies of programming languages from these repositories to assist developers in their coding. Based on the code context, as the user types, IntelliCode uses semantic information and sourced patterns to predict the most likely in-line completion for the user’s code. IntelliCode has also extended its machine-learning model training capabilities beyond the initial base model, enabling teams to train their own team completions.

AI-assisted refactoring detection

IntelliCode suggests code changes in the IDE and also locally synthesizes, on demand, edit scripts from any set of repetitive pattern changes. This saves developers a lot of time thanks to an AI technology called program synthesis, or programming-by-examples (PBE). PBE has been developed at Microsoft by the PROSE team and has been applied to various products including Flash Fill in Excel and webpage table extraction in Power BI. “IntelliCode advances the state-of-the-art in PBE by allowing patterns to be learned from noisy traces as opposed to explicitly provided examples, without any additional steps on your part,” Silver writes.

Talking about security, Silver says, “our PROSE-based models work entirely locally, so your code never leaves your machine.” She also said that over the past few months, the team has used unsupervised machine learning techniques to create a model that is predictive for Python. Silver also told VentureBeat, “So the result is that as you’re coding Python, it actually feels more like the editing experience that you might get from a statically typed programming language — without actually having to make Python statically typed. And so as you type, you get statement completion for APIs and you can get argument completion that’s based on the context of the code that you’ve written thus far.”

Many users are impressed with the improvements in IntelliCode. A user tweeted, “Training ML against repos is super clever.”

https://twitter.com/nathaniel_avery/status/1191760019479519232
https://twitter.com/raschneiderman/status/1191704366035734530

To learn more about the improvements in IntelliCode, read Microsoft’s official blog post.
Microsoft releases TypeScript 3.7 with much-awaited features like Optional Chaining, Assertion functions and more
Mapbox introduces MARTINI, a client-side terrain mesh generation code
DeepMind AI’s AlphaStar achieves Grandmaster level in StarCraft II with 99.8% efficiency

Introducing Spleeter, a TensorFlow-based Python library that extracts voice and sound from any music track

Sugandha Lahoti
05 Nov 2019
2 min read
On Monday, Deezer, a French online music streaming service, released Spleeter, a music separation engine. It comes in the form of a Python library based on TensorFlow. Stating the reason behind Spleeter, the researchers write, “We release Spleeter to help the Music Information Retrieval (MIR) community leverage the power of source separation in various MIR tasks, such as vocal lyrics analysis from audio, music transcription, any type of multilabel classification or vocal melody extraction.”

Spleeter comes with pre-trained models for 2-, 4- and 5-stem separation:

Vocals (singing voice) / accompaniment separation (2 stems)
Vocals / drums / bass / other separation (4 stems)
Vocals / drums / bass / piano / other separation (5 stems)

It can also train source separation models or fine-tune pre-trained ones with TensorFlow if you have a dataset of isolated sources. Deezer benchmarked Spleeter against Open-Unmix, another recently released open-source model, and reported slightly better performance at higher speed: Spleeter can separate audio files into 4 stems 100x faster than real time when running on a GPU.

You can use Spleeter straight from the command line as well as directly in your own development pipeline as a Python library. It can be installed with Conda, with pip, or used with Docker. The Spleeter creators mention a number of potential applications of a source separation engine, including remixes, upmixing, active listening, educational purposes, and pre-processing for other tasks such as transcription.

Spleeter received mostly positive feedback on Twitter, as people experimented with separating vocals from music.

https://twitter.com/lokijota/status/1191580903518228480
https://twitter.com/bertboerland/status/1191110395370586113
https://twitter.com/CholericCleric/status/1190822694469734401

Wavy.org also ran several songs through the two-stem filter and evaluated them in a blog post. They tried a variety of soundtracks across multiple genres. The audio quality was much better than expected, although vocals sometimes felt robotically autotuned. The amount of bleed was shockingly low relative to other solutions and surpassed any available free tool as well as rival commercial plugins and services.

https://twitter.com/waxpancake/status/1191435104788238336

Spleeter will be presented and live-demoed at the 2019 ISMIR conference in Delft. For more details, refer to the official announcement.

DeepMind AI’s AlphaStar achieves Grandmaster level in StarCraft II with 99.8% efficiency
Google AI introduces Snap, a microkernel approach to ‘Host Networking’
Firefox 70 released with better security, CSS, and JavaScript improvements
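For readers who want to try the two usage modes mentioned in the article, here is a minimal sketch; the audio file name and output directory are placeholders, and the exact output layout may vary by Spleeter version.

```python
# Install with `pip install spleeter` (Conda and Docker options are also
# described in the announcement).
#
# From the command line, 2-stem (vocals + accompaniment) separation:
#   spleeter separate -i song.mp3 -p spleeter:2stems -o output/
#
# As a Python library:
from spleeter.separator import Separator

separator = Separator("spleeter:2stems")            # also: spleeter:4stems, spleeter:5stems
separator.separate_to_file("song.mp3", "output/")   # writes vocals and accompaniment files under output/
```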

Introducing PostgREST, a REST API for any PostgreSQL database written in Haskell

Bhagyashree R
04 Nov 2019
3 min read
Written in Haskell, PostgREST is a standalone web server that turns your existing PostgreSQL database into a RESTful API. It offers a much “cleaner, more standards-compliant, faster API than you are likely to write from scratch.” The PostgREST documentation describes it as an “alternative to manual CRUD programming.” Explaining the motivation behind the tool, the documentation reads, “Writing business logic often duplicates, ignores or hobbles database structure. Object-relational mapping is a leaky abstraction leading to slow imperative code. The PostgREST philosophy establishes a single declarative source of truth: the data itself.”

Performant by design

In terms of performance, PostgREST shows subsecond response times for up to 2000 requests/sec on the Heroku free tier. The main contributor to this impressive performance is its Haskell implementation using the Warp HTTP server. To maintain fast response times, it delegates most of the computation to the database, including serializing JSON responses directly in SQL, data validation, and more. Along with that, it uses the Hasql library to make efficient use of the database.

A single declarative source of truth for security

PostgREST handles authentication via JSON Web Tokens (JWT). You can also build other forms of authentication on top of the JWT primitive. It delegates authorization to the role information defined in the database to ensure there is a single declarative source of truth for security.

Data integrity

PostgREST does not rely on an Object-Relational Mapper (ORM) or custom imperative coding. Instead, developers put declarative constraints directly into their database, preventing any kind of data corruption.

In a Hacker News discussion, many users praised the tool. “I think PostgREST is the first big tool written in Haskell that I’ve used in production. From my experience, it’s flawless. Kudos to the team,” a user commented. Some others also expressed that using this tool for systems in production can further complicate things. A user added, “Somebody in our team put this on production. I guess this solution has some merits if you need something quick, but in the long run it turned out to be painful. It's basically SQL over REST. Additionally, your DB schema becomes your API schema and that either means you force one for the purposes of the other or you build DB views to fix that.”

You can read about PostgREST on its official website. Also, check out its GitHub repository.

After PostgreSQL, DigitalOcean now adds MySQL and Redis to its managed databases’ offering
Amazon Aurora makes PostgreSQL Serverless generally available
PostgreSQL 12 progress update
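To make the idea concrete, the sketch below shows what calling a PostgREST-backed API from Python could look like. The server address (PostgREST’s default port 3000), the “films” table and the token are all hypothetical; the filter syntax (gte., eq., select=) and Bearer-token auth follow PostgREST’s documented query conventions.

```python
import requests

BASE_URL = "http://localhost:3000"
JWT = "<JWT issued for a database role with read access>"  # placeholder token

# GET /films?year=gte.2000&select=title,year translates into a SQL query
# executed by PostgreSQL itself, with authorization enforced by the role
# encoded in the JWT.
resp = requests.get(
    f"{BASE_URL}/films",
    params={"year": "gte.2000", "select": "title,year"},
    headers={"Authorization": f"Bearer {JWT}"},
    timeout=5,
)
resp.raise_for_status()
for row in resp.json():
    print(row["title"], row["year"])
```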

DeepMind AI’s AlphaStar achieves Grandmaster level in StarCraft II with 99.8% efficiency

Vincy Davis
04 Nov 2019
5 min read
Earlier this year in January, Google’s DeepMind AI AlphaStar defeated two professional players, TLO and MaNa, at StarCraft II, a real-time strategy game. Two days ago, DeepMind announced that AlphaStar has now achieved the highest possible online competitive ranking, called Grandmaster level, in StarCraft II. This makes AlphaStar the first AI to reach the top league of a widely popular game without any restrictions.

AlphaStar used multi-agent reinforcement learning and rated above 99.8% of officially ranked human players. It achieved the Grandmaster level for all three StarCraft II races: Protoss, Terran, and Zerg. The DeepMind researchers have published the details of AlphaStar in a paper titled ‘Grandmaster level in StarCraft II using multi-agent reinforcement learning’.

https://twitter.com/DeepMindAI/status/1189617587916689408

How did AlphaStar achieve the Grandmaster level in StarCraft II?

The DeepMind researchers were able to develop a robust and flexible agent by understanding the potential and limitations of open-ended learning. This helped the researchers make AlphaStar cope with complex real-world domains. “Games like StarCraft are an excellent training ground to advance these approaches, as players must use limited information to make dynamic and difficult decisions that have ramifications on multiple levels and timescales,” states the blog post.

The StarCraft II video game requires players to balance high-level economic decisions with individual control of hundreds of units. When playing this game, humans are under physical constraints that limit their reaction time and rate of actions. Accordingly, AlphaStar was subjected to the same constraints, making it suffer from delays due to network latency and computation time. To limit its actions per minute (APM), AlphaStar’s peak statistics were kept substantially lower than those of humans. To align with standard human movement, it also had a limited view of the map; AlphaStar could register only a limited number of mouse clicks and had only 22 non-duplicated actions to play every five seconds.

AlphaStar uses a combination of general-purpose techniques such as neural network architectures, imitation learning, reinforcement learning, and multi-agent learning. Games were sampled from a publicly available dataset of anonymized human replays, on which the agent was trained to predict the action of every player. These predictions were then used to procure a diverse set of strategies reflecting the different modes of human play.

Read More: DeepMind’s Alphastar AI agent will soon anonymously play with European StarCraft II players

Dario “TLO” Wünsch, a professional StarCraft II player, says, “I’ve found AlphaStar’s gameplay incredibly impressive – the system is very skilled at assessing its strategic position, and knows exactly when to engage or disengage with its opponent. And while AlphaStar has excellent and precise control, it doesn’t feel superhuman – certainly not on a level that a human couldn’t theoretically achieve. Overall, it feels very fair – like it is playing a ‘real’ game of StarCraft.”

According to the paper, AlphaStar had about 10^26 possible actions available at each time step, so it had to make thousands of actions before learning whether it had won or lost the game. One of the key strategies behind AlphaStar’s performance was learning human strategies. This was necessary to ensure that the agents kept exploring those strategies throughout self-play.
The researchers say, “To do this, we used imitation learning – combined with advanced neural network architectures and techniques used for language modeling – to create an initial policy which played the game better than 84% of active players.” AlphaStar also uses a latent variable to encode the distribution of opening moves from human games. This helped AlphaStar preserve high-level strategies and enabled it to represent many strategies within a single neural network.

By combining advances in imitation learning, reinforcement learning, and the League, the researchers were able to train AlphaStar Final, the agent that reached the Grandmaster level at the full game of StarCraft II without any modifications. AlphaStar used a camera interface, which gave it exactly the information that a human player would receive. All the interfaces and restrictions faced by AlphaStar were approved by a professional player. Finally, the results indicated that general-purpose learning techniques can be used to scale AI systems to work in complex and dynamic environments involving multiple actors.

AlphaStar’s great feat has got many people excited about the future of AI.

https://twitter.com/mickdooit/status/1189604170489315334
https://twitter.com/KaiLashArul/status/1190236180501139461
https://twitter.com/JoshuaSpanier/status/1190265236571459584

Interested readers can read the research paper to check AlphaStar’s performance. Head over to DeepMind’s blog for more details.

Google AI introduces Snap, a microkernel approach to ‘Host Networking’
Are we entering the quantum computing era? Google’s Sycamore achieves ‘quantum supremacy’ while IBM refutes the claim
Smart Spies attack: Alexa and Google Assistant can eavesdrop or vish (voice phish) unsuspecting users, disclose researchers from SRLabs

Netflix open sources Polynote, an IDE-like polyglot notebook with Scala support, Apache Spark integration, multi-language interoperability, and more

Vincy Davis
31 Oct 2019
4 min read
Last week, Netflix announced the open source launch of Polynote, a polyglot notebook. It comes with full-scale Scala support, Apache Spark integration, multi-language interoperability (including Scala, Python, and SQL), and IDE-like features such as interactive autocomplete and a rich text editor with LaTeX support. Polynote provides a seamless integration of Netflix’s Scala-heavy, JVM-based ML platform with Python’s machine learning and visualization libraries. It is currently used by Netflix’s personalization and recommendation teams and is also being integrated with the rest of the Netflix research platform.

The Netflix team says, “Polynote originated from a frustration with the shortcomings of existing notebook tools, especially with respect to their support of Scala.” Also, “we found that our users were also frustrated with the code editing experience within notebooks, especially those accustomed to using IntelliJ IDEA or Eclipse.”

Key features supported by Polynote

Reproducibility

A traditional notebook generally relies on a read-eval-print loop (REPL) environment to build an interactive environment with other users. According to Netflix, the expressions and results of a REPL evaluation are quite rigid. Thus, Netflix built Polynote’s code interpretation from scratch instead of relying on a REPL. This allows Polynote to keep track of the variables defined in each cell by constructing the input state for a given cell based on the cells that have run above it. By making the position of a cell important in its execution semantics, Polynote lets users read the notebook from top to bottom. This ensures reproducibility by increasing the chances that running the notebook sequentially will work.

Editing Improvements

Polynote provides editing enhancements such as:

It integrates code editing with the Monaco editor for interactive auto-complete.
It highlights errors internally to help users rectify them quickly.
A rich text editor for text cells allows users to easily insert LaTeX equations.

Visibility

One of the major guiding principles of Polynote is visibility. It enables a live view of what the kernel is doing at any given time, without requiring logs. A single glance at the user interface conveys a lot of information, such as:

The notebook view and task list display the currently running cell, as well as the queue of cells to be run.
The exact statement currently running is highlighted in colour.
Job- and stage-level Spark progress information is shown in the task list.
The kernel status area provides information about the execution status of the kernel.

Polyglot

Currently, Polynote supports Scala, Python, and SQL cell types and enables users to seamlessly move from one language to another within the same notebook. When a cell runs, the kernel hands the typed input values over to the cell’s language interpreter, and the interpreter provides the resulting typed output values back to the kernel. This enables each cell in a Polynote notebook to run, irrespective of language, with the same context and the same shared state.

Dependency and Configuration Management

To ease reproducibility, Polynote stores configuration and dependency setup within the notebook itself. It also provides a user-friendly Configuration section where users can set dependencies for each notebook. This allows Polynote to fetch the dependencies locally and load the Scala dependencies into an isolated ClassLoader.
This reduces the chances of a class conflict between Polynote and the Spark libraries. When Polynote is used in Spark mode, it creates a Spark Session for the notebook, and the Python and Scala dependencies are automatically added to the Spark Session.

Data Visualization

One of the most important use cases of a notebook is its ability to explore and visualize data. Polynote integrates with two open source visualization libraries, Vega and Matplotlib. It also has native support for data exploration, including a data schema view, a table inspector, and a plot constructor. This helps users learn about their data without cluttering their notebooks.

Users have appreciated Netflix’s decision to open source Polynote and have liked its features.

https://twitter.com/SpirosMargaris/status/1187164558382845952
https://twitter.com/suzatweet/status/1187531789763399682
https://twitter.com/julianharris/status/1188013908587626497

Visit the Netflix Techblog for more information on Polynote. You can also check out the Polynote website for more details.

Netflix security engineers report several TCP networking vulnerabilities in FreeBSD and Linux kernels
Netflix adopts Spring Boot as its core Java framework
Netflix’s culture is too transparent to be functional, reports the WSJ
Linux foundation introduces strict telemetry data collection and usage policy for all its projects
Fedora 31 releases with performance improvements, dropping support for 32 bit and Docker package

Linux Foundation introduces strict telemetry data collection and usage policy for all its projects

Fatema Patrawala
31 Oct 2019
3 min read
Last week, the Linux Foundation introduced a new policy around the collection and usage of telemetry data. As per this new policy, all Linux Foundation projects will have to obtain permission from the Linux Foundation before using any telemetry data collection mechanism, and the proposed mechanism will undergo a detailed review.

The Linux Foundation’s announcement follows closely after GitLab’s telemetry data collection plan came to a halt. Last week, GitLab announced that it would begin collecting new data by inserting JavaScript snippets that interact with both GitLab and a third-party SaaS telemetry service. However, after receiving severe backlash from users, the company reversed its decision.

The official statement from the Linux Foundation reads as follows, “Any Linux Foundation project is required to obtain permission from the Linux Foundation before using a mechanism to collect Telemetry Data from an open source project. In reviewing a proposal to collect Telemetry Data, the Linux Foundation will review a number of factors and considerations.”

The Linux Foundation also notes that software sometimes includes functionality to collect telemetry data. The data is collected through a “phone home” mechanism built into the software, and the end user deploying the software is typically presented with an option to opt in to sharing this data with the developers. In doing so, certain personal and sensitive information of the users might also get shared without them realizing it. Hence, to address such data breaches and to adhere to recent data privacy legislation like the GDPR, the Linux Foundation has introduced this stringent telemetry data policy. Dan Lopez, a representative of the Linux Foundation, states, “by default, projects of the Linux Foundation should not collect Telemetry Data from users of open source software that is distributed on behalf of the project.”

New policy for telemetry data

As per the new policy, if a project community wants to collect telemetry data, it must first coordinate with members of the Linux Foundation’s legal team to undergo a detailed review of the proposed telemetry data and collection mechanism. The review will include an analysis of the following:

the specific data proposed to be collected
demonstrating that the data is fully anonymized, and does not contain any sensitive or confidential information of users
the manner in which users of the software are (1) notified of all relevant details of the telemetry data collection, use and distribution; and (2) required to consent prior to any telemetry data collection being initiated
the manner in which the collected telemetry data is stored and used by the project community
the security mechanisms that are used to ensure that collection of telemetry data will not result in (1) unintentional collection of data; or (2) security vulnerabilities resulting from the “phone home” functionality

The Linux Foundation has also emphasized that telemetry data should not be collected unless and until the legal team approves the proposed collection. Additionally, any telemetry data collection approved by the Linux Foundation must be fully documented, must make the collected data available to all participants in the project community, and must at all times comply with the Linux Foundation’s Privacy Policy.
A recap of the Linux Plumbers Conference 2019
IBM open-sources Power ISA and other chips; brings OpenPOWER foundation under the Linux Foundation
Introducing kdevops, a modern DevOps framework for Linux kernel development
GitLab retracts its privacy invasion policy after backlash from community

Are we entering the quantum computing era? Google’s Sycamore achieves ‘quantum supremacy’ while IBM refutes the claim

Vincy Davis
25 Oct 2019
6 min read
Two days ago, Google astonished many people around the world with the claim that it has achieved a major milestone in quantum computing, one that has largely been an unattainable feat until now. In a paper titled “Quantum supremacy using a programmable superconducting processor”, Google explains how its 53-qubit quantum computer, named ‘Sycamore’, took only 200 seconds to perform a sensitive computation that would otherwise take the world's fastest supercomputer 10,000 years. This, Google claims, is its attainment of ‘quantum supremacy’. If confirmed, this would be the first major milestone in harnessing the principles of quantum mechanics to solve computational problems.

Google’s AI Quantum team and John Martinis, a physicist at the University of California, are the prime contributors to this achievement. NASA Ames Research Center, Oak Ridge National Laboratory and Forschungszentrum Jülich have also helped Google in implementing this experiment. In quantum computing, quantum supremacy is the potential ability of any device to solve problems that classical computers practically cannot. According to Sundar Pichai, Google’s CEO, “For those of us working in science and technology, it’s the “hello world” moment we’ve been waiting for—the most meaningful milestone to date in the quest to make quantum computing a reality.”

This announcement from Google comes exactly one month after the same paper was leaked online. However, following Google’s announcement, IBM is arguing that “an ideal simulation of the same task can be performed on a classical system in 2.5 days with far greater fidelity.” According to IBM, as proposed by John Preskill in 2012, the original meaning of the term “quantum supremacy” is the point where quantum computers can do things that classical computers can’t. Since Google has not yet achieved this threshold, IBM argues that its claims are wrong.

IBM says that in the published paper, Google assumed that the RAM storage requirements on a traditional computer would be massive. However, if a different approach of using both RAM and hard drive space to store and manipulate the state vector is employed, the 10,000 years specified by Google drops considerably. Thus, IBM refutes Google’s claims and states that, in its strictest definition, the quantum supremacy standard has not been met by anybody until now. IBM believes that “fundamentally, quantum computers will never reign “supreme” over classical computers, but will rather work in concert with them, since each have their unique strengths.” IBM further added that the term ‘supremacy’ is currently being misunderstood and urged everybody in the community to treat Google’s claims “with a large dose of skepticism.”

Read More: Has IBM edged past Google in the battle for Quantum Supremacy?

Though Google has not directly responded to IBM’s accusations, in a statement to Forbes, a Google spokesperson said, “We welcome ideas from the research community on new applications that work only on NISQ-era processors like Sycamore and its successors. We’ve published our circuits so the whole community can explore this new space. We’re excited by what’s possible now that we have this unique new resource.”

Although IBM is skeptical of Google’s claims, the news of Google’s accomplishment is making waves all around the world.
https://twitter.com/rhhackett/status/1186949190695313409
https://twitter.com/nasirdaniya/status/1187152799055929346

Google’s experiment with the Sycamore processor

To achieve this milestone, Google researchers developed Sycamore, a high-fidelity quantum logic gate processor consisting of a two-dimensional array of 54 transmon qubits. Sycamore consists of a two-dimensional grid where each qubit is connected to four other qubits. Each qubit in the processor is tunably coupled to its four nearest neighbors in a rectangular lattice, and the couplings are forward-compatible with error correction. As a result, the chip has enough connectivity to let the qubit states quickly interact throughout the entire processor. This feature of Sycamore is what makes it distinct from a classical computer.

Image Source: Google blog

In the Sycamore quantum processor, “Each run of a random quantum circuit on a quantum computer produces a bitstring, for example 0000101. Owing to quantum interference, some bit strings are much more likely to occur than others when we repeat the experiment many times. However, finding the most likely bit strings for a random quantum circuit on a classical computer becomes exponentially more difficult as the number of qubits (width) and number of gate cycles (depth) grow,” state John Martinis, Chief Scientist Quantum Hardware, and Sergio Boixo, Chief Scientist Quantum Computing Theory, Google AI Quantum.

Image Source: Google blog

For the experiment, the Google researchers ran a random simplified circuit from 12 up to 53 qubits with the circuit depth kept constant. Next, they checked the performance of the quantum computer using classical simulations and compared it with a theoretical model. After verifying that the quantum system was working, the researchers ran a random hard circuit with 53 qubits, this time allowing the circuit depth to expand until the point where classical simulation became infeasible. In the end, it was found that this quantum computation cannot be emulated on a classical computer, which opens “a new realm of computing to be explored,” says Google.

The Google team is now working on quantum supremacy applications such as quantum physics simulation, quantum chemistry, generative machine learning, and more. After procuring “certifiable quantum randomness”, Google is now testing this algorithm to develop a prototype that can provide certifiable random numbers.

Leaving IBM’s accusation aside, many people are excited about Google’s achievement.

https://twitter.com/christina_dills/status/1187074109550800897
https://twitter.com/Inzilya777/status/1187102111429021696

A few people, however, believe that Google is making a hullabaloo out of a not-so-successful experiment. A user on Hacker News comments, “Summary: - Google overhyped their results against a weak baseline. This seems to be commonplace in academic publishing, especially new-ish fields where benchmarks aren't well-established. There was a similar backlash against OpenAI's robot hand, where they used simulation for the physical robotic movements and used a well-known algorithm for the actual Rubik's Cube solving. I still think it's an impressive step forward for the field.”

Check out the video of Google’s demonstration of quantum supremacy below.
https://youtu.be/-ZNEzzDcllU

Smart Spies attack: Alexa and Google Assistant can eavesdrop or vish (voice phish) unsuspecting users, disclose researchers from SRLabs
Made by Google 2019: Google’s hardware event unveils Pixel 4 and announces the launch date of Google Stadia
After backlash for rejecting a uBlock Origin update from the Chrome Web Store, Google accepts ad-blocking extension
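The exponential blow-up that makes classical simulation of random circuit sampling so hard can be illustrated with a toy brute-force state-vector simulator. This is not Google's Sycamore circuit, just a small sketch: it samples bitstrings from a random circuit and shows how the number of stored amplitudes grows as 2^n with the number of qubits.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_single_qubit_gate():
    # Haar-random 2x2 unitary via QR decomposition of a complex Gaussian matrix
    z = (rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

def apply_gate(state, gate_tensor, axes):
    # Contract the gate's input indices with the chosen qubit axes,
    # then move the output indices back into place.
    k = len(axes)
    state = np.tensordot(gate_tensor, state, axes=(list(range(k, 2 * k)), axes))
    return np.moveaxis(state, list(range(k)), axes)

def sample_random_circuit(n_qubits, depth, shots=3):
    state = np.zeros((2,) * n_qubits, dtype=complex)
    state[(0,) * n_qubits] = 1.0                      # start in |00...0>
    cz = np.diag([1, 1, 1, -1]).reshape(2, 2, 2, 2)   # entangling CZ gate
    for _ in range(depth):
        for q in range(n_qubits):
            state = apply_gate(state, random_single_qubit_gate(), [q])
        for q in range(0, n_qubits - 1, 2):
            state = apply_gate(state, cz, [q, q + 1])
    probs = np.abs(state.reshape(-1)) ** 2
    probs /= probs.sum()
    samples = rng.choice(2 ** n_qubits, size=shots, p=probs)
    return [format(s, f"0{n_qubits}b") for s in samples], state.size

for n in (8, 16, 20):
    bitstrings, dim = sample_random_circuit(n, depth=4)
    print(f"{n} qubits -> {dim:>10,} amplitudes to store; samples: {bitstrings}")
```

Every extra qubit doubles the memory and work needed by this brute-force approach, which is why 53 qubits already pushes dedicated supercomputers to their limits.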

PostGIS 3.0.0 releases with raster support as a separate extension

Fatema Patrawala
24 Oct 2019
3 min read
Last week, the PostGIS development team released PostGIS 3.0.0. This release works with PostgreSQL 9.5-12 and GEOS >= 3.6. Developers using the postgis_sfcgal extension need to compile against SFCGAL 1.3.1 or higher. The major change in PostGIS 3.0.0 is that the raster functionality has been broken out into a separate extension. Take a look at the other breaking changes in this release below:

Breaking changes in PostGIS 3.0.0

Raster support now lives in a separate extension, postgis_raster.
Extension library files no longer include the minor version. If developers need the old behavior, they can use the new configure switch --with-library-minor-version. This change is intended to smooth future pg_upgrade runs, since lib file names will not change between the 3.0, 3.1, 3.* releases.
ND box operators (overlaps, contains, within, equals) will not look at dimensions that aren’t present in both operands. Developers will need to REINDEX their ND indexes after upgrade.
Includes a 32-bit hash fix (requires reindexing hash(geometry) indexes).
Sorting now uses the Hilbert curve and Postgres Abbreviated Compare.

New features in PostGIS 3.0.0

PostGIS used to expose a SQL function named geosnoop(geometry) to test the cost of deserializing and re-serializing from the PostgreSQL backend. This release brings that function back as postgis_geos_noop(geometry), with an SFCGAL counterpart.
Added ST_AsMVT support for Feature ID. ST_AsMVT transforms a set of rows corresponding to a layer into the coordinate space of a Mapbox Vector Tile. It makes a best effort to keep and even correct validity, and might collapse geometry into a lower dimension in the process.
Added SP-GiST and GiST support for the ND box operators overlaps, contains, within, equals. SP-GiST in PostGIS has been designed to support K-dimensional trees and other spatial partitioning indexes.
Added ST_3DLineInterpolatePoint. ST_Line_Interpolate_Point returns a point interpolated along a line.
Introduced Wagyu to validate MVT polygons. Wagyu can be chosen at configure time to clip and validate MVT polygons. This library is faster and produces more correct results than the GEOS default, but it might drop small polygons. It requires a C++11 compiler and uses CXXFLAGS (not CFLAGS).
With PostGIS 3.0, it is now possible to generate GeoJSON features directly without any intermediate code, using the new ST_AsGeoJSON(record) function. The GeoJSON format is a common transport format between servers and web clients, and even between components of processing chains.
Added the ST_ConstrainedDelaunayTriangles SFCGAL function. This function returns a constrained Delaunay triangulation around the vertices of the input geometry. It needs the SFCGAL backend, supports 3D, and will not drop the Z-index.

Additionally, the team has made other enhancements in this release. To know more, check out the official blog post by the PostGIS team.

PostgreSQL 12 Beta 1 released
Writing PostGIS functions in Python language [Tutorial]
Top 7 libraries for geospatial analysis
Percona announces Percona Distribution for PostgreSQL to support open source databases
After PostgreSQL, DigitalOcean now adds MySQL and Redis to its managed databases’ offering
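As a small illustration of the new ST_AsGeoJSON(record) feature mentioned above, the sketch below queries a PostGIS 3.0 database from Python with psycopg2. The connection parameters and the “parks” table are hypothetical; the key point is passing the whole row rather than just the geometry column.

```python
import json
import psycopg2

# Assumes a database where CREATE EXTENSION postgis (3.0+) has been run and
# a hypothetical "parks" table with a geometry column exists.
conn = psycopg2.connect("dbname=gisdb user=gis_user")
with conn, conn.cursor() as cur:
    # Passing the whole row (p.*) makes PostGIS 3.0 emit a complete GeoJSON
    # Feature, with the non-geometry columns placed under "properties".
    cur.execute("SELECT ST_AsGeoJSON(p.*) FROM parks AS p LIMIT 5;")
    for (feature_json,) in cur.fetchall():
        feature = json.loads(feature_json)
        print(feature["type"], list(feature.get("properties", {}).keys()))
conn.close()
```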

OpenAI’s AI robot hand learns to solve a Rubik’s Cube using Reinforcement learning and Automatic Domain Randomization (ADR)

Savia Lobo
16 Oct 2019
5 min read
A team of OpenAI researchers shared their work on training neural networks to solve a Rubik’s Cube with a human-like robot hand. The researchers trained the neural networks only in simulation, using the same reinforcement learning code as OpenAI Five paired with a new technique called Automatic Domain Randomization (ADR). In their research paper, the team demonstrates how a system trained only in simulation can handle situations it never saw during training.

“Solving a Rubik’s Cube one-handed is a challenging task even for humans, and it takes children several years to gain the dexterity required to master it. Our robot still hasn’t perfected its technique though, as it solves the Rubik’s Cube 60% of the time (and only 20% of the time for a maximally difficult scramble),” the researchers mention on their official blog. The neural networks were also trained with Kociemba’s algorithm, along with the RL algorithms, for picking the solution steps.

Read Also: DeepCube: A new deep reinforcement learning approach solves the Rubik’s cube with no human help

What is Automatic Domain Randomization (ADR)?

Domain randomization enables networks trained solely in simulation to transfer to a real robot. However, it was a challenge for the researchers to recreate real-world physics in the simulation environment. The team realized that it was difficult to measure factors like friction, elasticity, and dynamics for complex objects like Rubik’s Cubes or robotic hands, and domain randomization alone was not enough.

To overcome this, the OpenAI researchers developed a new method called Automatic Domain Randomization (ADR), which endlessly generates progressively more difficult environments in simulation. In ADR, the neural network first learns to solve the cube in a single, nonrandomized environment. As the neural network gets better at the task and reaches a performance threshold, the amount of domain randomization is increased automatically. This makes the task harder, since the neural network must now learn to generalize to more randomized environments. The network keeps learning until it again exceeds the performance threshold, when more randomization kicks in, and the process is repeated. “The hypothesis behind ADR is that a memory-augmented network combined with a sufficiently randomized environment leads to emergent meta-learning, where the network implements a learning algorithm that allows itself to rapidly adapt its behavior to the environment it is deployed in,” the researchers state.

Source: OpenAI.com

OpenAI's AI hand and the Giiker Cube

The researchers used the Shadow Dexterous E Series Hand (E3M5R) as a humanoid robot hand and the PhaseSpace motion capture system to track the Cartesian coordinates of all five fingertips. They also used RGB Basler cameras for vision-based pose estimation. Sensing the state of a Rubik’s Cube from vision alone is a challenging task. The team therefore used a “smart” Rubik’s Cube with built-in sensors and a Bluetooth module as a stepping stone. They also used a Giiker cube for some of the experiments, to test the control policy without compounding errors made by the vision model’s face angle predictions. The hardware is based on the Xiaomi Giiker cube. This cube is equipped with a Bluetooth module and allows one to sense the state of the Rubik’s Cube. However, it is limited to a face angle resolution of 90°, which is not sufficient for state tracking purposes on the robot setup.
The team therefore replaced some of the components of the original Giiker cube with custom ones in order to achieve a tracking accuracy of approximately 5 degrees.

A few challenges faced

OpenAI’s method currently solves the Rubik’s Cube 20% of the time when applying a maximally difficult scramble that requires 26 face rotations. For simpler scrambles that require 15 rotations to undo, the success rate is 60%. The researchers consider an attempt to have failed when the Rubik’s Cube is dropped or a timeout is reached. However, their network is capable of solving the Rubik’s Cube from any initial condition, so if the cube is dropped, it is possible to put it back into the hand and continue solving.

The neural network is much more likely to fail during the first few face rotations and flips. The team says this happens because the neural network needs to balance solving the Rubik’s Cube with adapting to the physical world during those early rotations and flips.

The team also applied a few perturbations while training the AI robot hand, including:

Resetting the hidden state: During a trial, the hidden state of the policy was reset. This leaves the environment dynamics unchanged but requires the policy to re-learn them, since its memory has been wiped.
Re-sampling environment dynamics: This corresponds to an abrupt change of environment dynamics by resampling the parameters of all randomizations while leaving the simulation state and hidden state intact.
Breaking a random joint: This corresponds to disabling a randomly sampled joint of the robot hand by preventing it from moving. This is a more nuanced experiment, since the overall environment dynamics are the same but the way in which the robot can interact with the environment has changed.

https://twitter.com/OpenAI/status/1184145789754335232

Here’s the complete video of how the AI robot hand swiftly solved the Rubik’s Cube single-handedly!

https://www.youtube.com/watch?time_continue=84&v=x4O8pojMF0w

To know more about this research in detail, you can read the research paper.

Open AI researchers advance multi-agent competition by training AI agents in a simple hide and seek environment
Introducing Open AI’s Reptile: The latest scalable meta-learning Algorithm on the block
Build your first Reinforcement learning agent in Keras [Tutorial]
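The ADR loop described above can be boiled down to a simple control structure: widen each simulator parameter's randomization range whenever the policy's recent success rate clears a threshold. The sketch below is illustrative only (not OpenAI's implementation); the parameter names, thresholds, and the dummy episode function are made up for the example.

```python
import random

class RandomizedParam:
    """One simulator parameter whose randomization range starts collapsed."""
    def __init__(self, nominal, step):
        self.low = self.high = nominal   # no randomization to begin with
        self.step = step

    def sample(self):
        return random.uniform(self.low, self.high)

    def widen(self):
        # Expand the range symmetrically, making the task harder.
        self.low -= self.step
        self.high += self.step

def adr_training_loop(run_episode, params, threshold=0.8, window=50, episodes=5000):
    recent = []
    for _ in range(episodes):
        env_config = {name: p.sample() for name, p in params.items()}
        recent.append(run_episode(env_config))        # 1 for success, 0 for failure
        recent = recent[-window:]
        if len(recent) == window and sum(recent) / window >= threshold:
            for p in params.values():
                p.widen()                              # increase randomization
            recent.clear()
    return params

if __name__ == "__main__":
    # Hypothetical parameters and a stand-in policy that succeeds 90% of the time.
    params = {"friction": RandomizedParam(1.0, 0.05),
              "cube_size": RandomizedParam(0.057, 0.001)}
    dummy_episode = lambda cfg: int(random.random() < 0.9)
    trained = adr_training_loop(dummy_episode, params)
    print({k: (round(p.low, 3), round(p.high, 3)) for k, p in trained.items()})
```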

California bans the distribution of political deepfakes ahead of 2020 Presidential election

Fatema Patrawala
09 Oct 2019
5 min read
Yesterday, the California government passed a law that makes it illegal to distribute deepfakes or deceptively edited videos and audio clips intended to damage a politician’s reputation or deceive someone into voting for or against a candidate. Last week, the Governor of California, Gavin Newsom, signed law AB 730, which states that it is a crime to distribute audio or video that gives a false, damaging impression of a politician’s words or actions.

The law applies to any candidate within 60 days of an election, but also includes exceptions. For example, the news media will be exempted from the requirement, and videos made for satire or parody will also be exempted. Potentially deceptive video or audio will also be allowed if it includes a disclaimer noting that it’s fake. The law will sunset in 2023.

Marc Berman, a Democratic member of the California state assembly and chair of the Elections and Redistricting Committee, explained that he was motivated to introduce AB 730 ahead of the 2020 election due to concerns about voter manipulation: “Deepfakes are a powerful and dangerous new technology that can be weaponized to sow misinformation and discord among an already hyper-partisan electorate,” he said in a statement. “Deepfakes distort the truth, making it extremely challenging to distinguish real events and actions from fiction and fantasy.”

https://twitter.com/AsmMarcBerman/status/1181689932693168129

Challenges likely in enforcing the ban

Challenges are likely to arise in the enforcement of this legislation, given the extremely realistic nature of deepfake content. The legislation could also face legal challenges from groups citing the First Amendment right to free political expression; the American Civil Liberties Union and the Electronic Frontier Foundation have criticized the law for potentially harming political speech.

At the same time, Newsom also signed another bill, AB 602, that will allow victims of deepfake pornography to seek legal compensation if their image is manipulated for sexually explicit purposes without their consent. This law comes in connection with a recent report by cybersecurity firm Deeptrace – which offers deepfake detection tools – estimating that 96% of deepfakes are pornographic, with 99% of them featuring women from the entertainment industry. “When deepfake technology is abused to create sexually explicit material without someone’s permission, it can cause irreparable harm to a victim’s reputation, job prospects, personal relationships and mental health,” Berman said. “Women are disproportionately being harassed and humiliated when their photos are scraped from the internet and seamlessly grafted into pornographic content,” he added.

On Hacker News, users are discussing that such state laws will not be able to change anything at a grassroots level and that propaganda will exist in one form or another. One of them commented, “Propaganda will always exist in one form or another. State law is not going to change that or even put a dent in it. The only decent option to fight propaganda is through the education system. The incoming generations should be armed with sharp critical thinking skills, common sense, and empathy (this one is especially important).
There needs to be more demonstrative sessions in classrooms where students actively participate in distinguishing fake content from real ones (and specifically how they can deem it to be fake). My kid's public school does an ok job at teaching the above skills on a surface level, but it comes off as an afterthought as opposed to a primary lesson. I wish they would take it to a more granular level and make it a primary aspect of education.”

Update on 11th Oct, 2019

California Governor Gavin Newsom has signed into law the gig worker protections bill AB-5. This comes shortly after AB-5 passed in the California State Assembly and Senate.

https://twitter.com/ssmith_calabor/status/1182482321695395840

“Today, we are disrupting the status quo and taking a bold step forward to rebuild our middle class and reshape the future of workers as we know it,” bill author and Assemblyperson Lorena Gonzalez said in a statement. “As one of the strongest economies in the world, California is now setting the global standard for worker protections for other states and countries to follow.”

AB-5 will help to ensure gig economy workers are entitled to minimum wage, workers’ compensation and other benefits by requiring employers to apply the ABC test. The bill, first introduced in December 2018, aims to codify the ruling established in Dynamex Operations West, Inc. v Superior Court of Los Angeles. In that case, the court applied the ABC test and decided Dynamex wrongfully classified its workers as independent contractors.

How hackers are using Deepfakes to trick people
Deepfakes House Committee Hearing: Risks, Vulnerabilities and Recommendations
Lawmakers introduce new Consumer privacy bill and Malicious Deep Fake Prohibition Act to support consumer privacy and battle deepfakes
Facebook research suggests chatbots and conversational AI are on the verge of empathizing with humans
How to handle categorical data for machine learning algorithms

Adobe bans accounts of all Venezuelan users in compliance with the US sanctions

Savia Lobo
08 Oct 2019
3 min read
The U.S.-based software company Adobe Inc. announced yesterday that it will be canceling all subscriptions and deactivating all accounts for Venezuelan users. This move is intended to make Adobe compliant with the U.S. government's Executive Order 13884, issued on August 5, 2019. In a support document released yesterday, Adobe explains the decision and informs Venezuelans that they have until October 28 to download any files stored in their Adobe accounts, after which their accounts will be deactivated.

https://twitter.com/AdobeCare/status/1181289777397735424

The ban will affect users of both free and paid Adobe services. Users will not be able to pay for new services, nor will they get any refunds, as cited in Executive Order 13884. President Trump's executive order, backed by the US Department of the Treasury, bans US companies from having any business relations with Venezuelan entities, private companies, government organizations, non-profits, or individual citizens. According to PCMag, “Trump administration issued the sanction order against the government of Venezuelan President Nicolas Maduro for allegedly usurping the presidency and perpetrating human rights abuses against the country's citizens.”

https://twitter.com/AenderLara/status/1181291242531020800

“Under Executive Order 13884, U.S. companies are severely restricted in the business it carries out within Venezuela. As a result, we are ceasing all activity with entities and individuals in Venezuela as well as those who otherwise meet the criteria of Executive Order 13884 or other U.S. sanctions regulations,” the support document mentions. "We apologize for the inconvenience," adds Adobe.

The U.S. has imposed similar bans on other countries, including Iran, North Korea, and Syria. “Not all US companies follow these bans, but the bigger tech giants do follow US Treasury sanctions to the letter of the law,” ZDNet reports. The ban has sparked a lot of complaints from Adobe customers in Venezuela.

https://twitter.com/faintenkiu/status/1181296289155293191
https://twitter.com/GRamsey_LatAm/status/1181288171562135552

A user on Hacker News commented, “What makes this even worse is that this is only a huge issue because Adobe moved to the whole 'Creative Cloud' thing rather than the old 'buy each product outright' model. With the old model, it wouldn't hurt these creators all that much if their accounts got deactivated since the software would just not get updates. Now on the other hand... they're screwed. It's a 'brilliant' example of how these 'cloud' based services are a bad deal for the user because it puts them at the risk of getting locked out their own purchases due to legal hassles like this.”

Similarly, in July, Microsoft started enforcing the US Treasury ban/sanctions list on GitHub, a service it bought last year.

FCC can’t block states from passing their own net neutrality laws, states a U.S. court
“621 U.S. government, schools, and healthcare entities are impacted by ransomware attacks since January’19”, highlights Emisoft report
Facebook must face privacy class action lawsuit, loses facial recognition appeal, U.S. Court of Appeals rules

Google’s DNS over HTTPS encryption plan faces scrutiny from ISPs and the Congress

Savia Lobo
04 Oct 2019
4 min read
On September 29, the House Judiciary Committee scrutinized Google’s plans for using DNS over HTTPS (DoH) “because of concerns that it could give the company a competitive advantage by making it harder for others to access consumer data,” The Wall Street Journal reported. Congress is investigating Google’s move to encrypt DNS requests over claims that the switchover could stifle competition, the WSJ further mentions. In a September 13 letter, the Judiciary Committee asked Google for details about its “decision regarding whether to adopt or promote the adoption” of the protocol.

Further, in a letter written to Congress on September 19, Big Cable and other telecom industry groups claimed that DNS over HTTPS “could interfere on a mass scale with critical Internet functions, as well as raise data-competition issues.” In an email to Ars Technica, Google wrote, “Google has no plans to centralize or change people's DNS providers to Google by default. Any claim that we are trying to become the centralized encrypted DNS provider is inaccurate.”

Google laid out this DNS-over-HTTPS upgrade experiment in a blog post on September 10. Starting with version 78, Chrome will begin experimenting with the new DoH feature. Under the experiment, Chrome will “check if the user's current DNS provider is among a list of DoH-compatible providers, and upgrade to the equivalent DoH service from the same provider,” Google wrote. “If the DNS provider isn't in the list, Chrome will continue to operate as it does today.”

According to the WSJ, “The new standard would encrypt internet traffic to improve security, which could help prevent hackers from snooping on websites, and from spoofing—faking an internet website to obtain a consumer’s credit card information or other data.” However, it could also alter the internet’s competitive landscape, cable and wireless companies said. “They fear being shut out from much of user data if browser users move wholesale to this new standard, which many internet service providers don’t currently support. Service providers also worry that Google may compel its Chrome browser users to switch to Google services that support the protocol, something Google said it has no intention of doing,” the WSJ reports.

Mozilla plans a more aggressive DoH rollout for its users

Mozilla is planning a more aggressive rollout of the technology by gradually shifting all of its users to DoH, whether or not their existing DNS provider supports it. The shift will make Cloudflare the default DNS provider for many Firefox users, regardless of the DNS settings of the underlying OS. In July, Mozilla said that it “wouldn't enable DoH by default in the UK, where ISPs are planning to use DNS to implement legally mandated porn filtering,” Ars Technica reports.

Mozilla sees the antitrust concerns raised about Google as “fundamentally misleading,” according to Marshall Erwin, Mozilla’s senior director of trust and safety. Service providers are raising these concerns to undermine the new standard and ensure that they have continued access to DNS data, he said.

Also Read: ISPA nominated Mozilla in the “Internet Villain” category for DNS over HTTPs push, withdrew nominations and category after community backlash

The adoption of DoH would limit ISPs’ ability to both monitor and modify customer queries. However, for customers using the ISP’s own DNS servers, ISPs will still be able to monitor them.
“If customers switched to third-party DNS servers—either from Google or one of its various competitors—then ISPs would no longer have an easy way to tell which sites customers were accessing. ISPs could still see which IP addresses a customer had accessed, which would give them some information—this can be an effective way to detect malware infections,” according to Ars Technica.

The Sept. 19 letter to lawmakers said, “Because the majority of world-wide internet traffic…runs through the Chrome browser or the Android operating system, Google could become the overwhelmingly predominant DNS lookup provider.” “Google would acquire greater control over user data across networks and devices around the world. This could inhibit competitors and possibly foreclose competition in advertising and other industries.”

The ISPs urged lawmakers to call on Google not to impose the new standard as a default in Chrome and Android. A few stakeholders also said that the new system could harm security by bypassing parental controls and filters that have been developed under the current, unencrypted system, the WSJ said. To know more about this news in detail, read The Wall Street Journal’s exclusive coverage.

The major DNS blunder at Microsoft Azure affects Office 365, One Drive, Microsoft Teams, Xbox Live, and many more services
Moscow’s blockchain-based internet voting system uses an encryption scheme that can be easily broken
“Five Eyes” call for backdoor access to end-to-end encryption to tackle ‘emerging threats’ despite warnings from cybersecurity and civil rights communities
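To see what the protocol looks like in practice, here is a small sketch of a DoH lookup from Python using the JSON API that public resolvers such as Google (dns.google) and Cloudflare (cloudflare-dns.com/dns-query) expose; the domain queried is just an example.

```python
import requests

def doh_lookup(name, record_type="A", endpoint="https://dns.google/resolve"):
    # The DNS question and answer travel over ordinary HTTPS, so an on-path
    # observer (such as an ISP) sees only an encrypted connection to the
    # resolver, not the query itself.
    resp = requests.get(
        endpoint,
        params={"name": name, "type": record_type},
        headers={"Accept": "application/dns-json"},
        timeout=5,
    )
    resp.raise_for_status()
    return [answer["data"] for answer in resp.json().get("Answer", [])]

print(doh_lookup("example.com"))
```

This is also why the choice of default resolver matters in the debate above: whoever operates the DoH endpoint is the party that ultimately sees the queries.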

Introducing spaCy v2.2!

Savia Lobo
03 Oct 2019
4 min read
Yesterday, the team at Explosion announced a new version of the natural language processing library, spaCy v2.2, highlighting that this version is much leaner, cleaner and even more user-friendly. spaCy v2.2 includes new model packages and features for training, evaluation, and serialization. This version also includes many bug fixes, improved debugging and error handling, and a greatly reduced size of the library on disk.

What’s new in spaCy v2.2

Added more languages and improvements in existing pretrained models

This spaCy version introduces pretrained models for two additional languages: Norwegian and Lithuanian. The accuracy of both these languages is likely to improve in subsequent releases, as the current models make use of neither pretrained word vectors nor the spacy pretrain command. The team looks forward to adding more languages soon.

The pretrained Dutch NER model now includes a new dataset, making it much more useful. The new dataset provides OntoNotes 5 annotations over the LaSSy corpus. This allows the researchers to replace the semi-automatic Wikipedia NER model with one trained on gold-standard entities of 20 categories.

Source: explosion.ai

New CLI features for training

spaCy v2.2 includes various usability improvements to the training and data development workflow, especially for text categorization. The developers have improved the error messages, updated the documentation, and made the evaluation metrics more detailed – for example, the evaluation now provides per-entity-type and per-text-category accuracy statistics by default.

To make training even easier, the developers have also introduced a new debug-data command to validate user training and development data, get useful stats, and find problems like invalid entity annotations, cyclic dependencies, low data labels and more.

Reduced disk footprint and improvements in language resource handling

As spaCy has supported more languages, the disk footprint has crept steadily upwards, especially when support was added for lookup-based lemmatization tables. These tables were stored as Python files, and in some cases became quite large. The spaCy team has switched these lookup tables over to gzipped JSON and moved them out to a separate package, spacy-lookups-data, that can be installed alongside spaCy if needed. Depending on the system, your spaCy installation should now be 5-10× smaller.

Also, large language resources are now powered by a consistent Lookups API that you can also take advantage of when writing custom components. Custom components often need lookup tables that are available to the Doc, Token or Span objects. The natural place for this is in the shared Vocab. Now custom components can place data there too, using the new lookups API.

New DocBin class to efficiently serialize Doc collections

The new DocBin class makes it easy to serialize and deserialize a collection of Doc objects together and is much more efficient than calling Doc.to_bytes on each individual Doc object. You can also control what data gets saved, and you can merge pallets together for easy map/reduce-style processing.
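To make the serialization workflow above more concrete, here is a minimal sketch of the DocBin API as it is documented for spaCy v2.2; the en_core_web_sm model and the chosen attribute list are assumptions made for the example, not details from the announcement.

```python
# Minimal sketch of the new DocBin workflow in spaCy v2.2. The model name
# (en_core_web_sm) and the attribute list are assumptions chosen for
# illustration, not details taken from the announcement.
import spacy
from spacy.tokens import DocBin

nlp = spacy.load("en_core_web_sm")  # assumes this pretrained model is installed

texts = [
    "Apple is looking at buying a U.K. startup.",
    "San Francisco considers banning sidewalk delivery robots.",
]

# Choose which token attributes get stored alongside the texts.
doc_bin = DocBin(attrs=["LEMMA", "ENT_IOB", "ENT_TYPE"], store_user_data=True)
for doc in nlp.pipe(texts):
    doc_bin.add(doc)

# Serialize the whole collection in one go, rather than calling
# Doc.to_bytes() on every individual Doc.
data = doc_bin.to_bytes()

# Later, or in another process: restore the Docs against a shared Vocab.
restored_docs = list(DocBin().from_bytes(data).get_docs(nlp.vocab))
print([ent.text for doc in restored_docs for ent in doc.ents])
```

Because the whole collection is written and read in a single pass, batches serialized this way lend themselves to the map/reduce-style processing mentioned above.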
Up to 10 times faster phrase matching

spaCy’s previous PhraseMatcher algorithm could easily scale to large query sets. However, it wasn't necessarily that fast when fewer queries were used, making its performance characteristics a bit unintuitive. spaCy v2.2 replaces the PhraseMatcher with a more straightforward trie-based algorithm. Because the search is performed over tokens instead of characters, matching is very fast – even before the implementation was optimized using Cython data structures. When a few queries are used, the new implementation is almost 20× faster – and it's still almost 5× faster when 10,000 queries are used.

Benchmarks for searching over 10,000 Wikipedia articles (Source: explosion.ai)

A few bug fixes in spaCy v2.2

Reduced package size on disk by moving and compressing large dictionaries.
Updated lemma and vector information after splitting a token.
This version automatically skips duplicates in Doc.retokenize.
Allows customizing the entity HTML template in displaCy.
Ensures training doesn't crash with empty batches.

To know about the other bug fixes in detail, read the release notes on GitHub.

Many are excited to try the new version of spaCy. A user on Hacker News commented, “Nice! I'm excited! I've been working on a heavy NLP project with Spanish and been having some issues and so this will be nice to test out and see if it helps!”

To know more about spaCy v2.2 in detail, read the official post.

Dr Joshua Eckroth on performing Sentiment Analysis on social media platforms using CoreNLP
Generating automated image captions using NLP and computer vision [Tutorial]
Facebook open-sources PyText, a PyTorch based NLP modeling framework

Apple bans HKmap.live, a Hong Kong protest safety app from the iOS Store as it makes people ‘evade law enforcement’

Sugandha Lahoti
03 Oct 2019
4 min read
Update: A day after banning HKmap.live, Apple brought it back to the iOS Store after backlash from the general public. Apple told the creators of HKmap, "Congratulations! We're pleased to let you know that your app, HKmap, has been approved for the App Store. Once your app has been released, it can take up to 24 hours before your app becomes available on the App Store." In response, the creators of HKmap tweeted, "Thanks everyone, Apple finally made the right decision."

Amid the escalating tensions in Hong Kong, Apple has banned a protest safety app that helps people track the locations of police and protesters in Hong Kong. HKmap.live is a crowdsourced map that integrates with Telegram and uses emojis to help people track and avoid areas where protesters, police, and traffic are present. It also highlights areas where there is tear gas, mass arrests, and so on.

According to a tweet, Apple told HKmap.live, “Your app contains content - or facilitates, enables, and encourages an activity - that is not legal ... Specifically, the app allowed users to evade law enforcement."

Hong Kong is currently experiencing dangerous clashes between the police and pro-democracy demonstrators, and the police are becoming more violent, attacking not only protesters but also families, elderly people, and innocent bystanders. The sole purpose of HKmap.live is to track police activity on the streets of Hong Kong and ‘not to help people navigate to other locations’. The application is widely used by Hong Kong residents who wish to avoid inadvertently wandering into violent situations.

The creators of HKmap wrote, “Apple assumes our users are lawbreakers and therefore evading law enforcement, which is clearly not the case.” They argue that, by the same logic, other apps such as the driving app Waze, which also lets users flag police locations, should be banned as well. The ban may also simply be Apple’s way of avoiding China’s anger. HKmap.live wrote on Twitter, “This is getting way more feedback than I expected. To make it clear, I still believe this is more a bureaucratic f up than censorship. Everything can be used for illegal purposes on the wrong hand. Our App is for info, and we do not encourage illegal activity.”

The ban has angered many people. Pinboard tweeted, “To deny the people of Hong Kong one of the few tools that defend them against police aggression is such a craven act that I can't even put it into words. Is Apple going to side with "law enforcement" in every dictatorship on the planet? Is coddling China worth that much to them?”

The tweet further adds, “On behalf of tech people in America, I would like to apologize to the people of Hong Kong for this humiliating display by our biggest tech company. These are not the fundamental American values you have in mind when you wave our flag at your protests, and we must do better”

A user wrote on Hacker News, “Really hard to believe that Apple is the "privacy-oriented company we can trust", that the company constantly touts in their advertising as a reason to buy their products mind you when at the same time, you have news like this constantly coming out.”

Previous to this, Chinese state-run media agencies have also been buying advertisements and promoted tweets on Twitter and Facebook to portray Hong Kong protesters and their pro-democracy demonstrations as violent.
These ads, reported by Pinboard’s Twitter account, were circulated by the state-run news agency Xinhua, describing the protesters as “escalating violence” and calling for “order to be restored.” In reality, the Hong Kong protests have largely been peaceful marches. Pinboard warned and criticized Twitter about these ads and asked for their takedown.

For now, HKmap.live is also available as a web app, so it can still be used.

Twitter and Facebook removed accounts of Chinese state-run media agencies aimed at undermining Hong Kong protests
Telegram faces massive DDoS attack; suspects link to the ongoing Hong Kong protests
As Kickstarter reels in the aftermath of its alleged union-busting move, is the tech industry at a tipping point?

‘Dropbox Paper’ leaks out email addresses and names on sharing document publicly

Amrata Joshi
27 Sep 2019
3 min read
This week, Koen Rouwhorst, a security engineer at Framer, reported that Dropbox Paper, a document collaboration tool, leaks “the full name and email address of _any_ Dropbox user whoever opened that document, which seems problematic.”

https://twitter.com/koenrh/status/1176523837866946561
https://twitter.com/koenrh/status/1176794225075204097

Dropbox Support responded that privacy considerations were built into how the features were designed. According to the support team, displaying this information is required to enable collaboration and security features for users, and admins and users have additional controls over who can view a Paper doc.

According to The Register, “if someone gets to know the link, because in your enthusiasm you posted it on social media, or sent to your contact and they posted it, they may click the link and visit the page. On arrival, if they are logged into Dropbox, a warning displays, though in faint type, that says - when you open a doc, your name, email, avatar photo and viewer and visit information is always visible to other people in it.”

Though Dropbox differentiates between active and inactive viewers, this information will remain with Dropbox even after the user has left the page. Anyone who has logged into the document will be able to see the names and email addresses of the other viewers. However, when a user clicks the link without being logged into Dropbox, they will be shown to other users as a guest, and won’t be able to comment on or edit the document.

Users may be logged into Dropbox by default, so they might see a warning and, if they proceed, they would end up sharing their name and email address. This works well within a team where people know each other.

As per Dropbox’s permissions page, a user can create a private document that’s not inside a folder, and they should be the only person editing it. When sharing the doc with others, the user can choose who can open the doc and who can comment or edit. If a user creates a doc within a folder, then all the members of that folder can open, search for, and edit the doc.

Users on Hacker News seem sceptical about this feature. One user commented on the thread, “Not only that, but Dropbox lets you pick any publicly visible document that's been viewed by a large number of peopl and easily spam them simply by writing @doc. I may have just pissed off a lot of people with my experiment. I realized immediately afterwards how reckless that was, but Dropbox - WTF? Why is this even allowed?”

A few others complained about not being notified with a warning: “I just created a Paper document on my Dropbox account and then viewed it on another account. As best I can tell, Dropbox saying there is a notification is a lie. I did not get a visible notification when creating it although there may have been one buried under some links or button. Paper documents are publicly editable by default if you have the url.”

Other interesting news in data

Can a modified MIT ‘Hippocratic License’ to restrict misuse of open source software prompt a wave of ethical innovation in tech?
ImageNet Roulette: New viral app trained using ImageNet exposes racial biases in artificial intelligent system
GitLab 12.3 releases with web application firewall, keyboard shortcuts, productivity analytics, system hooks and more