
Tech News - Data

GitHub along with Weights & Biases introduced CodeSearchNet challenge evaluation and CodeSearchNet Corpus

Amrata Joshi
27 Sep 2019
3 min read
Yesterday, the team at GitHub, along with its partners from Weights & Biases, introduced the CodeSearchNet challenge evaluation environment and leaderboard. The team is also releasing a large dataset to help data scientists build models for this task, along with several baseline models that represent the current state of the art.

Semantic code search involves retrieving relevant code for a given natural language query. Like other information retrieval tasks, it requires bridging the gap between the language used in code and natural language. Standard information retrieval methods also don't work effectively in the code search domain because there is usually little shared vocabulary between search terms and results. Evaluating methods for this task is difficult, as there are no substantial datasets built for it. Considering these issues, and to evaluate progress on code search, the team is releasing the CodeSearchNet Corpus and presenting the CodeSearchNet Challenge. The CodeSearchNet Challenge consists of 99 natural language queries and around 4,000 expert relevance annotations.

The CodeSearchNet Corpus

The CodeSearchNet Corpus contains around 6 million functions from open-source code spanning six programming languages: Go, Java, Python, JavaScript, PHP, and Ruby. To collect such a large dataset of functions, the team used the TreeSitter infrastructure, a parser generator tool and incremental parsing library. The team is also releasing its data preprocessing pipeline for others to use as a starting point for applying machine learning to code. This data is not directly related to code search, but when paired with the associated natural language descriptions it can help in training models.

The corpus contains automatically generated, query-like natural language for around 2 million functions. It also includes metadata indicating the original location where each function was found.

CodeSearchNet Corpus collection

The team collects the corpus from publicly available, non-fork open-source GitHub repositories and uses libraries.io to identify all projects that are used by at least one other project. They then sort these projects by 'popularity', as measured by the number of stars and forks, and remove projects that do not have a license or whose license does not allow redistribution of parts of the project. The team has tokenized all functions in Go, JavaScript, Python, Java, PHP, and Ruby with the help of TreeSitter. For generating the training data for the CodeSearchNet Challenge, the team considers those functions in the corpus that have documentation associated with them.

The CodeSearchNet Challenge

The team collected an initial set of code search queries for evaluating code search models. They started with common search queries that had high click-through rates on Bing and combined these with queries from StaQC. The team manually filtered out queries that were clearly 'technical keywords' to obtain a set of 99 natural language queries. They then used a standard Elasticsearch installation and baseline models to obtain 10 results per query from the CodeSearchNet Corpus, and asked data scientists, programmers, and machine learning researchers to annotate the results for relevance to the query.

For evaluating the CodeSearchNet Challenge, a method should return a set of results from the CodeSearchNet Corpus for each of the 99 pre-defined natural language queries.

Other interesting news in data

Can a modified MIT 'Hippocratic License' to restrict misuse of open source software prompt a wave of ethical innovation in tech?

ImageNet Roulette: New viral app trained using ImageNet exposes racial biases in artificial intelligent system

GitLab 12.3 releases with web application firewall, keyboard shortcuts, productivity analytics, system hooks and more
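To make the documentation-filtering step described above concrete, here is a minimal, hypothetical sketch of how documented functions could be extracted from a corpus shard for training. It assumes the data is shipped as gzipped JSON lines with 'code' and 'docstring' fields; the file name and field names are illustrative, not a description of the official CodeSearchNet format.

```python
import gzip
import json

def documented_functions(path):
    """Yield (code, docstring) pairs for functions that carry documentation.

    Assumes a gzipped JSON-lines file where each record has hypothetical
    'code' and 'docstring' fields, mirroring the corpus description above.
    """
    with gzip.open(path, "rt", encoding="utf-8") as handle:
        for line in handle:
            record = json.loads(line)
            if record.get("docstring", "").strip():
                yield record["code"], record["docstring"]

# Illustrative usage: count documented functions in one (hypothetical) shard.
if __name__ == "__main__":
    pairs = list(documented_functions("python_train_0.jsonl.gz"))
    print(f"{len(pairs)} (code, docstring) training pairs")
```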

Facebook will no longer involve third-party fact-checkers to review the political content on their platform

Amrata Joshi
26 Sep 2019
5 min read
On Tuesday, at The Atlantic Festival in Washington DC, Nick Clegg, VP of communications at Facebook, outlined the measures Facebook is taking to keep third-party fact-checkers away from politicians' speech ahead of elections.

Facebook relies on third-party fact-checkers to reduce the spread of fake news and misinformation. The company has now decided to exempt politicians' speech from its third-party fact-checking program, as it does not want to intervene in political speech coming from politicians. Clegg said in his speech, “At Facebook, our role is to make sure there is a level playing field, not to be a political participant ourselves.” The company will no longer send organic content or ads coming from politicians to its third-party fact-checking partners for review. If a politician shares previously debunked content, Facebook will demote that content.

How do these third-party fact-checkers handle fake news?

The third-party fact-checkers review posts and stories and identify fake news by looking into feedback coming from Facebook users. After checking the facts, they rate the accuracy of the content. If a fact-checker rates content as false, the content is demoted and appears lower in News Feed, which also reduces the number of people who see it. Pages and websites that repeatedly share false news have their distribution on the platform reduced and are restricted from monetizing, advertising, and registering as a news Page.

As per the newsworthiness exemption, Facebook does not ban content even if it violates its guidelines

Since 2016, Facebook has had a newsworthiness exemption, which means that if a user makes a statement or shares a post that goes against community standards, Facebook will still allow it on the platform as long as it doesn't increase the risk of harm. Clegg announced that, from now on, speech from politicians will be considered newsworthy content. Clegg's speech reads, “Today, I announced that from now on we will treat speech from politicians as newsworthy content that should, as a general rule, be seen and heard. However, in keeping with the principle that we apply different standards to content for which we receive payment, this will not apply to ads – if someone chooses to post an ad on Facebook, they must still fall within our Community Standards and our advertising policies.”

How is newsworthiness determined?

When determining newsworthiness, the public interest value of the speech is weighed against the risk of harm. While balancing these interests, the company takes a number of factors into consideration, including country-specific circumstances such as an election or a war. Clegg further added, “In evaluating the risk of harm, we will consider the severity of the harm. Content that has the potential to incite violence, for example, may pose a safety risk that outweighs the public interest value. Each of these evaluations will be holistic and comprehensive in nature, and will account for international human rights standards."

Facebook makes it clear that it is just a platform for content

While explaining Facebook's stand, Clegg said, “At Facebook, our role is to make sure there is a level playing field, not to be a political participant ourselves. To use tennis as an analogy, our job is to make sure the court is ready – the surface is flat, the lines painted, the net at the correct height. But we don't pick up a racket and start playing. How the players play the game is up to them, not us.” So, if politicians post nasty comments on the platform, the platform won't be responsible for it; it can only curb such content to a limited extent by demoting it if it is inappropriate. Politicians have been given a platform, and it is up to them how they use it. According to Clegg, it is not possible for the platform to become a referee each time a politician posts a nasty comment or a fake claim. Clegg added, “Would it be acceptable to society at large to have a private company in effect become a self-appointed referee for everything that politicians say? I don't believe it would be. In open democracies, voters rightly believe that, as a general rule, they should be able to judge what politicians say themselves.” Facebook clarifies that it is not its job to intervene when politicians speak, and it won't allow third-party fact-checkers to review them.

How are YouTube and Twitter planning to deal with political speech?

Yesterday, Susan Wojcicki, CEO of YouTube, said that YouTube won't ban politicians even if their content goes against the company's guidelines, Politico reports. Wojcicki talked about how YouTube approaches political figures and suggested that whatever they post is important for people to be aware of. Wojcicki said at The Atlantic Festival, “When you have a political officer that is making information this is really important for their constituents to see, or for other global leaders to see, that is content that we would leave up because we think it's important for other people to see.”

In contrast, Twitter won't take down content from politicians or political figures because it is of public interest, but content that goes against the platform's guidelines will be labeled as 'rule-breaking'. The platform will de-prioritize labeled tweets in its algorithms and search bar so that they aren't visible to a larger audience.

To know more about this news, check out Facebook's post.

Other interesting news in data

Can a modified MIT 'Hippocratic License' to restrict misuse of open source software prompt a wave of ethical innovation in tech?

ImageNet Roulette: New viral app trained using ImageNet exposes racial biases in artificial intelligent system

GitLab 12.3 releases with web application firewall, keyboard shortcuts, productivity analytics, system hooks and more

ImageNet Roulette: New viral app trained using ImageNet exposes racial biases in artificial intelligent system

Sugandha Lahoti
24 Sep 2019
4 min read
A new facial recognition app is going viral on Twitter under the hashtag #ImageNetRoulette, for all the wrong reasons. The app, 'ImageNet Roulette', uses artificial intelligence to analyze each face and describe what it sees. However, the kind of tags this AI returns speaks volumes about the spread of biased artificial intelligence systems. Some people are tagged as “orphan” or “nonsmoker”, while Black and ethnic minority people were being tagged with labels such as “negroid” or “black person”.

https://twitter.com/imy/status/1173868441599709185
https://twitter.com/lostblackboy/status/1174112872638689281

The idea behind ImageNet Roulette was to make people aware of biased AI

The designers of the app are American artist Trevor Paglen and Kate Crawford, Microsoft researcher and co-founder and Director of Research at the AI Now Institute. ImageNet Roulette was trained using the popular image recognition database ImageNet. It uses a neural network trained on the “Person” categories from the ImageNet dataset, which has over 2,500 labels used to classify images of people. The idea behind the app, Paglen said, was to expose racist and sexist flaws in artificial intelligence systems and to suggest that similar biases can be present in other facial recognition systems used by big companies. The app's website notes in bold, “ImageNet Roulette regularly returns racist, misogynistic and cruel results.” Paglen and Crawford explicitly state that the project is a "provocation designed to help us see into the ways that humans are classified in machine learning systems."

“We object deeply to stereotypical classifications, yet we think it is important that they are seen, rather than ignored and tacitly accepted. Our hope was that we could spark in others the same sense of shock and dismay that we felt as we studied ImageNet and other benchmark datasets over the last two years.” “Our project,” they add, “highlights why classifying people in this way is unscientific at best, and deeply harmful at worst.”

ImageNet removes 600,000 images

The ImageNet team has been working since the beginning of this year to address bias in AI systems and submitted a paper on these efforts in August. As the app went viral, ImageNet posted an update on 17th September stating, "Over the past year, we have been conducting a research project to systematically identify and remedy fairness issues that resulted from the data collection process in the people subtree of ImageNet." Among the 2,382 people subcategories, the researchers have decided to remove 1,593 that have been deemed 'unsafe' and 'sensitive'. A total of 600,000 images will be removed from the database.

Crawford and Paglen applauded the ImageNet team for taking this first step. However, they feel this “technical debiasing” of training data will not resolve the deep issues of facial recognition bias. The researchers state, “There needs to be a substantial reassessment of the ethics of how AI is trained, who it harms, and the inbuilt politics of these 'ways of seeing.'”

ImageNet Roulette will be removed from the internet on Friday, September 27th, 2019. It will, however, remain in circulation as a physical art installation, currently on view at the Fondazione Prada Osservatorio in Milan until February 2020.

In recent months, a number of biases have been found in facial recognition services offered by companies like Amazon, Microsoft, and IBM. Researchers like those behind the ImageNet Roulette app call on big tech giants to check and evaluate how opinion, bias, and offensive points of view can drive the creation of artificial intelligence.

Other interesting news in Tech

Facebook suspends tens of thousands of apps amid an ongoing investigation into how apps use personal data

Twitter announces to test 'Hide Replies' feature in the US and Japan, after testing it in Canada

Media manipulation by Deepfakes and cheap fakes require both AI and social fixes, finds a Data & Society report

GitLab 12.3 releases with web application firewall, keyboard shortcuts, productivity analytics, system hooks and more

Amrata Joshi
23 Sep 2019
3 min read
Yesterday, the team at GitLab released GitLab 12.3, a DevOps lifecycle tool that provides a Git repository manager. This release comes with a Web Application Firewall, Productivity Analytics, a new Environments section, and much more.

What's new in GitLab 12.3?

Web Application Firewall
In GitLab 12.3, the team has shipped the first iteration of the Web Application Firewall, built into the GitLab SDLC platform. The Web Application Firewall focuses on monitoring and reporting security concerns related to Kubernetes clusters.

Productivity Analytics
Starting with GitLab 12.3, the team is releasing Productivity Analytics, which will help teams and their leaders discover best practices for better productivity. It lets them drill into the data and draw insights for future improvements. The group-level analytics workspace can be used to provide insight into performance, productivity, and visibility across multiple projects.

Environments section
This release adds an “Environments” section to the cluster page that gives an overview of all the projects that are making use of the Kubernetes cluster.

License compliance
The License Compliance feature can be used to disallow a merge when a blacklisted license is found in a merge request.

Keyboard shortcuts
This release comes with new 'n' and 'p' keyboard shortcuts that can be used to move to the next and previous unresolved discussions in merge requests.

System hooks
System hooks allow automation by triggering requests whenever a variety of events take place in GitLab (a minimal receiver sketch appears after this article).

Multiple IP subnets
This release introduces the ability to specify multiple IP subnets, so instead of specifying a single range, large organizations can now restrict incoming traffic to their specific needs.

GitLab Runner 12.3
Yesterday, the team also released GitLab Runner 12.3, an open-source project that is used for running CI/CD jobs and sending the results back to GitLab.

Audit logs
In this release, audit logs for push events are disabled by default to prevent performance degradation on GitLab instances.

A few GitLab users are unhappy that some features of this release, including Productivity Analytics, are available to Premium or Ultimate users only.

https://twitter.com/gav_taylor/status/1175798696769916932

To know more about this news, check out the official page.

Other interesting news in cloud and networking

Kubernetes 1.16 releases with Endpoint Slices, general availability of Custom Resources, and other enhancements

DevOps platform for coding, GitLab reached more than double valuation of $2.75 billion than its last funding and way ahead of its IPO in 2020

Istio 1.3 releases with traffic management, improved security, and more!
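To illustrate the system hooks mentioned above: GitLab delivers instance-level events as JSON POST requests, so a receiver can be a very small web service. The sketch below uses Flask and assumes the payload carries an 'event_name' field; the endpoint path, port, and field handling are assumptions to adapt to your instance rather than GitLab-documented specifics.

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/gitlab/system-hook", methods=["POST"])
def system_hook():
    # Parse the JSON payload GitLab posts for instance-level events
    # (e.g. project_create, user_create); the field name is an assumption here.
    payload = request.get_json(silent=True) or {}
    event = payload.get("event_name", "unknown")
    app.logger.info("Received GitLab system hook event: %s", event)
    return "", 204  # acknowledge quickly so GitLab does not retry

if __name__ == "__main__":
    app.run(port=8000)
```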

Facebook suspends tens of thousands of apps amid an ongoing investigation into how apps use personal data

Sugandha Lahoti
23 Sep 2019
4 min read
In a blog post on Friday, Facebook revealed that it has suspended tens of thousands of apps as part of its ongoing App Developer Investigation. Facebook's app suspensions began in March 2018, in response to the Cambridge Analytica scandal. According to the investigation, these apps mishandled users' personal data. Facebook says it now also identifies apps based on signals associated with an app's potential to abuse its policies. The suspended apps come from just 400 developers.

“The review is ongoing,” said Facebook, “and comes from hundreds of contributors, including attorneys, external investigators, data scientists, engineers, policy specialists, and teams within Facebook”. However, the company failed to provide details about what the apps had done wrong or their names, instead stating they were targeted for a “variety of reasons.” “App developers remain a vital part of the Facebook ecosystem,” said the company in a blog post. “They help to make our world more social and more engaging. But people need to know we're protecting their privacy. And across the board, we're making progress.”

Facebook has also banned the app myPersonality, which shared information with researchers and companies with only limited protections in place and refused to participate in an audit. It has also taken legal action against Rankwave, a South Korean data analytics company, and filed an action against LionMobi and JediMobi, two companies that used their apps to infect users' phones with malware in a profit-generating scheme. Facebook says this is part of an ongoing investigation and is just a progress report.

Facebook was fined a record $5bn in July 2019 for data breaches and revelations of illegal data sharing. Facebook's new agreement with the FTC will bring its own set of requirements for bringing oversight to app developers. It requires developers to annually certify compliance with Facebook's policies; any developer that doesn't go along with these requirements will be held accountable. Facebook has also developed new rules to more strictly control a developer's access to user data, including suspending or revoking a developer's access to any API that has not been used in the past 90 days.

Facebook's app suspension sheds light on broader privacy issues

The extent of how many apps Facebook had suspended was revealed later on Friday in new court documents from Massachusetts' attorney general, which has been probing Facebook's data-collection practices for months. Per these documents, Facebook had suspended 69,000 apps. They also “identified approximately 10,000 applications that may also have misappropriated and/or misused consumers' personal data.” The court filings say 6,000 apps had a “large number of installing users,” and 2,000 exhibited behaviors that “may suggest data misuse.”

Experts still believe that the social-networking giant has escaped tough consequences for its past privacy abuses. Per NYT, Facebook's announcement was "a tacit admission that the scale of its data privacy issues was far larger than it had previously acknowledged." Ron Wyden, U.S. Senator from Oregon, tweeted on Facebook's app suspension, “This wasn't some accident. Facebook put up a neon sign that said “Free Private Data,” and let app developers have their fill of Americans' personal info. The FTC needs to hold Mark Zuckerberg personally responsible.”

David Heinemeier Hansson, creator of Ruby on Rails, also talked about Facebook's app suspension. “Another day, another Facebook privacy scandal. Tens of thousands of apps had improper access to data ala Cambridge Analytica. FB has previously claimed only hundreds did. If you still use FB or IG, ask yourself, is any scandal enough to make you quit?”, he tweeted.

The company's lack of information about the said disclosures is also likely to reignite calls for heightened data regulation of Facebook. It also shows that the company's privacy practices remain a work in progress.

Other news in Tech

France and Germany reaffirm blocking Facebook's Libra cryptocurrency

The House Judiciary Antitrust Subcommittee asks Facebook, Apple for details including private emails in the wake of antitrust investigations

Media manipulation by Deepfakes and cheap fakes require both AI and social fixes, finds a Data & Society report

Introducing Microsoft’s AirSim, an open-source simulator for autonomous vehicles built on Unreal Engine

Bhagyashree R
19 Sep 2019
4 min read
Back in 2017, the Microsoft Research team developed and open-sourced Aerial Informatics and Robotics Simulation (AirSim). On Monday, the team shared how AirSim can be used to solve current challenges in the development of autonomous systems.

Microsoft AirSim and its features

Microsoft AirSim is an open-source, cross-platform simulation platform for autonomous systems, including autonomous cars, wheeled robots, aerial drones, and even static IoT devices. It works as a plugin for Epic Games' Unreal Engine, and there is also an experimental release for the Unity game engine. Here is an example of drone simulation in AirSim:

https://www.youtube.com/watch?v=-WfTr1-OBGQ&feature=youtu.be

AirSim was built to address two main problems developers face during the development of autonomous systems: first, the requirement of large datasets for training and testing the systems, and second, the ability to debug in a simulator. With AirSim, the team aims to equip developers with a platform that provides varied training experiences so that autonomous systems can be exposed to different scenarios before they are deployed in the real world. “Our goal is to develop AirSim as a platform for AI research to experiment with deep learning, computer vision and reinforcement learning algorithms for autonomous vehicles. For this purpose, AirSim also exposes APIs to retrieve data and control vehicles in a platform-independent way,” the team writes.

AirSim provides physically and visually realistic simulations by supporting hardware-in-the-loop simulation with popular flight controllers such as PX4, an open-source autopilot system. It can easily be extended to accommodate new types of autonomous vehicles, hardware platforms, and software protocols. Its extensible architecture also allows developers to quickly add custom autonomous system models and new sensors to the simulator.

AirSim for tackling the common challenges in autonomous systems development

In April, the Microsoft Research team collaborated with Carnegie Mellon University and Oregon State University, collectively called Team Explorer, to take on the DARPA Subterranean (SubT) Challenge. The challenge was to build robots that can autonomously map, navigate, and search underground environments during time-sensitive combat operations or disaster response scenarios. On Monday, Microsoft's Senior Research Manager, Ashish Kapoor, shared how they used AirSim to tackle this challenge.

Team Explorer and Microsoft used AirSim to create an “intricate maze” of man-made tunnels in a virtual world. To create this maze, the team used reference material from real-world mines to modularly generate a network of interconnected tunnels. This was a high-definition simulation of man-made tunnels that also included robotic vehicles and a suite of sensors. AirSim also provided a rich platform that Team Explorer could use to test their methods and to generate training experiences for creating various decision-making components for autonomous agents. Microsoft believes that AirSim can also help accelerate the creation of a real dataset for underground environments. “Microsoft's ability to create near-realistic autonomy pipelines in AirSim means that we can rapidly generate labeled training data for a subterranean environment,” Kapoor wrote.

Kapoor also talked about another collaboration, with Air Shepherd and USC, to help counter wildlife poaching using AirSim. In this collaboration, they developed unmanned aerial vehicles (UAVs) equipped with thermal infrared cameras that can fly through national parks to search for poachers and animals. AirSim was used to create a simulation of this use case, in which virtual UAVs flew over virtual environments at an altitude of 200 to 400 feet above ground level. “The simulation took on the difficult task of detecting poachers and wildlife, both during the day and at night, and ultimately ended up increasing the precision in detection through imaging by 35.2%,” the post reads.

These were some of the recent use cases where AirSim was used. To explore more and to contribute, you can check out its GitHub repository.

Other news in Data

4 important business intelligence considerations for the rest of 2019

How artificial intelligence and machine learning can help us tackle the climate change emergency

France and Germany reaffirm blocking Facebook's Libra cryptocurrency
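For a flavour of the platform-independent APIs the team mentions, here is a minimal sketch using the airsim Python client to fly a short trajectory and grab a camera frame. It assumes the airsim package is installed and a simulation is already running; exact call names and arguments can differ between AirSim releases, so treat this as illustrative rather than definitive.

```python
import airsim  # assumes `pip install airsim` and a running AirSim simulation

# Connect to the simulator and take API control of the drone.
client = airsim.MultirotorClient()
client.confirmConnection()
client.enableApiControl(True)
client.armDisarm(True)

# Fly a short, simple trajectory.
client.takeoffAsync().join()
client.moveToPositionAsync(10, 0, -5, 3).join()  # x, y, z (NED frame), velocity

# Grab a compressed scene image from the front camera, e.g. for training data.
responses = client.simGetImages(
    [airsim.ImageRequest("0", airsim.ImageType.Scene, False, True)]
)
with open("frame.png", "wb") as f:
    f.write(responses[0].image_data_uint8)

# Land and release control.
client.landAsync().join()
client.armDisarm(False)
client.enableApiControl(False)
```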

Media manipulation by Deepfakes and cheap fakes require both AI and social fixes, finds a Data & Society report

Sugandha Lahoti
19 Sep 2019
3 min read
A new report from Data & Society, published by researchers Britt Paris and Joan Donovan, argues that the violence of audiovisual manipulation – namely deepfakes and cheap fakes – cannot be addressed by artificial intelligence alone. It requires a combination of technical and social solutions.

What are deepfakes and cheap fakes?

One form of audiovisual manipulation, executed using experimental machine learning, is the deepfake. Most recently, a terrifyingly realistic deepfake video of Bill Hader transforming into Tom Cruise went viral on YouTube. Facebook creator Mark Zuckerberg also became the target of the world's first high-profile white hat deepfake operation: a video created by artists Bill Posters and Daniel Howe in partnership with advertising company Canny, in which Zuckerberg appears to give a threatening speech about the power of Facebook.

Read Also
Now there is a Deepfake that can animate your face with just your voice and a picture.
Worried about Deepfakes? Check out the new algorithm that manipulates talking-head videos by altering the transcripts.

However, fake videos can also be produced with Photoshop, lookalikes, re-contextualized footage, or simple speeding and slowing. This form of AV manipulation is the cheap fake. The researchers coined the term because such fakes rely on cheap, accessible software, or no software at all.

Deepfakes can't be fixed with artificial intelligence alone

The researchers argue that deepfakes, while new, are part of a long history of media manipulation, one that requires both a social and a technical fix. They determine that responses to deepfakes need to address structural inequality, and that the groups most vulnerable to that violence should be able to influence public media systems. The authors say, “Those without the power to negotiate truth–including people of color, women, and the LGBTQA+ community–will be left vulnerable to increased harms.”

The researchers worry that AI-driven content filters and other technical fixes could cause real harm. “They make things better for some but could make things worse for others. Designing new technical models creates openings for companies to capture all sorts of images and create a repository of online life.” “It's a massive project, but we need to find solutions that are social as well as political so people without power aren't left out of the equation.” This technical fix, the researchers say, must work alongside the legal system to prosecute bad actors and stop the spread of faked videos. “We need to talk about mitigation and limiting harm, not solving this issue. Deepfakes aren't going to disappear.”

The report states, “There should be “social” policy solutions that penalize individuals for harmful behavior. More encompassing solutions should also be formed to enact federal measures on corporations to encourage them to more meaningfully address the fallout from their massive gains.” It concludes, “Limiting the harm of AV manipulation will require an understanding of the history of evidence, and the social processes that produce truth, in order to avoid new consolidations of power for those who can claim exclusive expertise.”

Other interesting news in tech

$100 million 'Grant for the Web' to promote innovation in web monetization jointly launched by Mozilla, Coil and Creative Commons

The House Judiciary Antitrust Subcommittee asks Amazon, Facebook, Alphabet, and Apple for details including private emails in the wake of antitrust investigations

UK's NCSC report reveals significant ransomware, phishing, and supply chain threats to businesses

Percona announces Percona Distribution for PostgreSQL to support open source databases 

Amrata Joshi
18 Sep 2019
3 min read
Yesterday, the team at Percona, an open-source database software and services provider, announced Percona Distribution for PostgreSQL to offer expanded support for open source databases. It provides organizations with a fully supported distribution of the database and management tools so that applications built on PostgreSQL can deliver higher performance. Based on v11.5 of PostgreSQL, Percona Distribution for PostgreSQL provides database support for cloud or on-premises deployments. The new distribution will be unveiled at Percona Live Europe in Amsterdam (30th September to 2nd October).

Percona Distribution for PostgreSQL includes the following open-source tools to manage database instances and ensure that data is available, secure, and backed up for recovery:

pg_repack, a third-party extension that rebuilds PostgreSQL database objects without requiring a table lock.

pgaudit, a third-party extension that provides in-depth session and/or object audit logging via the standard logging facility in PostgreSQL. This helps PostgreSQL users produce detailed audit logs for compliance and certification purposes (a short configuration sketch appears after this article).

pgBackRest, a backup tool that replaces the built-in PostgreSQL backup offering. pgBackRest can scale to handle large database workloads and can help companies minimize storage requirements by using streaming compression. It uses delta restores to reduce the amount of time required to complete a restore.

Patroni, a high-availability solution for PostgreSQL that can be used in production deployments.

The list also includes additional extensions supported by the PostgreSQL Global Development Group. The new distribution will provide users with enterprise support, services, and consulting for their open-source database instances across multiple distributions, both on-premises and in the cloud. The team further announced that Percona Monitoring and Management will now support PostgreSQL.

Peter Zaitsev, co-founder and CEO of Percona, said, “Companies are creating more data than ever, and they have to store and manage this data effectively.” Zaitsev further added, “Open source databases are becoming the platforms of choice for many organizations, and Percona provides the consultancy and support services that these companies rely on to be successful. Adding a distribution of PostgreSQL alongside our current options for MySQL and MongoDB helps our customers leverage the best of open source for their applications as well as get reliable and efficient support.”

To know more about Percona Distribution for PostgreSQL, check out the official page.

Other interesting news in data

Open AI researchers advance multi-agent competition by training AI agents in a simple hide and seek environment

The House Judiciary Antitrust Subcommittee asks Amazon, Facebook, Alphabet, and Apple for details including private emails in the wake of antitrust investigations

$100 million 'Grant for the Web' to promote innovation in web monetization jointly launched by Mozilla, Coil and Creative Commons
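As an example of how one of the bundled tools is typically wired up, the sketch below enables pgaudit-style logging for a database using psycopg2. It assumes the pgaudit extension is installed and preloaded on the server, and that the database name and connection settings shown ('appdb', superuser access) are placeholders; check the pgaudit documentation for the setting names supported by your version.

```python
import psycopg2  # assumes pgaudit is installed and preloaded on the server

# Hypothetical connection parameters; adjust for your deployment.
conn = psycopg2.connect("dbname=appdb user=postgres")
conn.autocommit = True

with conn.cursor() as cur:
    # pgaudit must already be listed in shared_preload_libraries.
    cur.execute("CREATE EXTENSION IF NOT EXISTS pgaudit;")
    # Audit DDL and write statements for this database (superuser required).
    cur.execute("ALTER DATABASE appdb SET pgaudit.log = 'ddl, write';")

conn.close()
```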

Keras 2.3.0, the first release of multi-backend Keras with TensorFlow 2.0 support is now out

Bhagyashree R
18 Sep 2019
4 min read
Yesterday, the Keras team announced the release of Keras 2.3.0, the first release of multi-backend Keras with TensorFlow 2.0 support. It is also the last major release of multi-backend Keras, and it remains backward-compatible with TensorFlow 1.14 and 1.13, Theano, and CNTK.

Keras to focus mainly on tf.keras while continuing support for Theano/CNTK

This release comes with a lot of API changes to bring the multi-backend Keras API “in sync” with tf.keras, TensorFlow's high-level API. However, some TensorFlow 2.0 features are not supported, which is why the team recommends that developers switch their Keras code to tf.keras in TensorFlow 2.0.

Read also: TensorFlow 2.0 beta releases with distribution strategy, API freeze, easy model building with Keras and more

Moving to tf.keras gives developers access to features like eager execution, TPU training, and much better integration between low-level TensorFlow and high-level concepts like Layer and Model. Following this release, the team plans to focus mainly on the further development of tf.keras. “Development will focus on tf.keras going forward. We will keep maintaining multi-backend Keras over the next 6 months, but we will only be merging bug fixes. API changes will not be ported,” the team writes. To make it easier for the community to contribute to the development of Keras, the team will be developing tf.keras in its own standalone GitHub repository at keras-team/keras.

François Chollet, the creator of Keras, further explained on Twitter why they are moving away from multi-backend Keras:

https://twitter.com/fchollet/status/1174019142774452224

API updates in Keras 2.3.0

Here are some of the API updates in Keras 2.3.0:

The add_metric method is added to Layer/Model; it is similar to the add_loss method but for metrics.

Keras 2.3.0 introduces several class-based losses, including MeanSquaredError, MeanAbsoluteError, BinaryCrossentropy, Hinge, and more. With this update, losses can be parameterized via constructor arguments.

Many class-based metrics are added, including Accuracy, MeanSquaredError, Hinge, FalsePositives, BinaryAccuracy, and more. This update enables metrics to be stateful and parameterized via constructor arguments.

The train_on_batch and test_on_batch methods now have a new argument called reset_metrics. You can set this argument to False to maintain metric state across different batches when writing lower-level training or evaluation loops.

The model.reset_metrics() method is added to Model to clear metric state at the start of an epoch when writing lower-level training or evaluation loops.

Breaking changes in Keras 2.3.0

Along with the API changes, Keras 2.3.0 includes a few breaking changes. In this release, batch_size, write_grads, embeddings_freq, and embeddings_layer_names are deprecated and hence are ignored when used with TensorFlow 2.0. Metrics and losses are now reported under the exact name specified by the user. Also, the default recurrent activation has changed from hard_sigmoid to sigmoid in all RNN layers.

Read also: Build your first Reinforcement learning agent in Keras [Tutorial]

The release started a discussion on Hacker News where developers appreciated that Keras will mainly focus on the development of tf.keras. A user commented, “Good move. I'd much rather it worked well for one backend then sucked mightily on all of them. Eager mode means that for the first time ever you can _easily_ debug programs using the TensorFlow backend. That will be music to the ears of anyone who's ever tried to debug a complex TF-backed model.” Some also raised the question of whether Google might absorb Keras entirely in the future, considering TensorFlow has already included Keras in its codebase and its creator, François Chollet, works as an AI researcher at Google.

Check out the official announcement to know what more has landed in Keras 2.3.0.

Other news in Data

The CAP Theorem in practice: The consistency vs. availability trade-off in distributed databases

InfluxData launches new serverless time series cloud database platform, InfluxDB Cloud 2.0

Different types of NoSQL databases and when to use them
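A minimal sketch of the new class-based losses and metrics together with the reset_metrics machinery might look like the following; it is illustrative only, the toy model and data are made up, and the exact behaviour should be verified against the Keras 2.3.0 release notes.

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.losses import BinaryCrossentropy
from keras.metrics import BinaryAccuracy

# A tiny binary classifier using the class-based loss and metric objects.
model = Sequential([
    Dense(8, activation="relu", input_shape=(4,)),
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam",
              loss=BinaryCrossentropy(),
              metrics=[BinaryAccuracy()])

x = np.random.rand(32, 4).astype("float32")
y = np.random.randint(0, 2, size=(32, 1))

# Keep metric state across batches in a hand-written training loop...
for _ in range(10):
    model.train_on_batch(x, y, reset_metrics=False)

# ...and clear it explicitly at the start of the next epoch.
model.reset_metrics()
```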

Open AI researchers advance multi-agent competition by training AI agents in a simple hide and seek environment

Sugandha Lahoti
18 Sep 2019
5 min read
Open AI researchers have built a simple hide-and-seek game environment for multi-agent competition, in which they observed that AI agents can learn complex strategies and skills on their own as the game progresses. In fact, these AI agents developed six distinct strategies and counterstrategies, some of which were not even supported by the training environment. The researchers concluded that such multi-agent co-adaptation may one day produce extremely complex and intelligent behavior.

The hide-and-seek training environment

AI agents play a team-based hide-and-seek game in a physics-based environment. Hiders (blue) try to avoid the seekers' line of sight, while seekers try to keep the hiders in view. The environment contains various objects (walls, ramps, blocks) that agents can grab and also lock in place, as well as randomly generated immovable rooms and walls that the agents must learn to navigate. Before the game, hiders are given preparation time to run away or change their environment while the seekers are immobilized.

Agents are given a team-based reward: hiders receive +1 if all hiders are hidden and -1 if any hider is seen by a seeker, and seekers receive the opposite, -1 if all hiders are hidden and +1 otherwise. There are no explicit incentives for agents to interact with objects in the environment, and they are penalized if they go too far outside the play area.

Agent characteristics

The agents can move by setting a force on themselves in the x and y directions as well as rotate along the z-axis.
The agents can see objects in their line of sight and within a frontal cone.
The agents can sense the distance to objects, walls, and other agents around them using a lidar-like sensor.
The agents can grab and move objects in front of them.
The agents can lock objects in place. Only the team that locked an object can unlock it.

AI agents develop six distinct strategies

Agents are trained using self-play, and agent policies are composed of two separate networks with different parameters: a policy network that produces an action distribution, and a critic network that predicts the discounted future returns. Policies are optimized using Proximal Policy Optimization (PPO) and Generalized Advantage Estimation (GAE); training is performed using OpenAI's Rapid, its general-purpose RL training system.

The researchers noticed that as agents train against each other in hide-and-seek, six distinct strategies emerge. Initially, hiders and seekers learn to crudely run away and chase. After approximately 25 million episodes of hide-and-seek, the hiders learn to use the tools at their disposal and intentionally modify their environment. After another 75 million episodes, the seekers also learn rudimentary tool use; they learn to move and use ramps to jump over obstacles, and so on. 10 million episodes later, the hiders learn to defend against this strategy; they learn to bring the ramps to the edge of the play area and lock them in place, seemingly removing the only tool the seekers have at their disposal. After 380 million total episodes of training, the seekers learn to bring a box to the edge of the play area where the hiders have locked the ramps; the seekers then jump on top of the box and surf it to the hiders' shelter. In response, the hiders learn to lock all of the boxes in place before building their shelter.

https://youtu.be/kopoLzvh5jY

The researchers also found some surprising behaviors by these AI agents:

Box surfing: Since agents move by applying forces to themselves, they can grab a box while on top of it and “surf” it to the hiders' location.
Endless running: Without explicit negative rewards for agents leaving the play area, in rare cases hiders will learn to take a box and endlessly run with it.
Ramp exploitation (hiders): Hiders abuse contact physics and remove ramps from the play area.
Ramp exploitation (seekers): Seekers learn that if they run at a wall with a ramp at the right angle, they can launch themselves upward.

The researchers concluded that complex, human-relevant strategies and skills can emerge from multi-agent competition and standard reinforcement learning algorithms at scale. They state, “our results with hide-and-seek should be viewed as a proof of concept showing that multi-agent auto-curricula can lead to physically grounded and human-relevant behavior.”

This research was well appreciated by readers. Many people took to Hacker News to congratulate the researchers. Here are a few comments:

“Amazing. Very cool to see this sort of multi-agent emergent behavior. Along with the videos, I can't help but get a very 'Portal' vibe from it all. "Thank you for helping us help you help us all."

“This is incredible. The various emergent behaviors are fascinating. It seems that OpenAI has a great little game simulated for their agents to play in. The next step to make this even cooler would be to use physical, robotic agents learning to overcome challenges in real meatspace!”

“I'm completely amazed by that. The hint of a simulated world seems so matrix-like as well, imagine some intelligent thing evolving out of that. Wow.”

Read the research paper for a deeper analysis. The code is available on GitHub.

More news in Artificial Intelligence

Google researchers present Weight Agnostic Neural Networks (WANNs) that perform tasks without learning weight parameters

DeepMind introduces OpenSpiel, a reinforcement learning-based framework for video games

Google open sources an on-device, real-time hand gesture recognition algorithm built with MediaPipe
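The team-based reward described in the article is simple to state; the toy function below restates it in code purely as an illustration. It is not OpenAI's implementation, and the out-of-bounds penalty value is a made-up placeholder.

```python
def team_rewards(any_hider_seen, teams_outside_area=(), penalty=10.0):
    """Illustrative hide-and-seek reward, following the article's description.

    Hiders get +1 if all hiders are hidden and -1 if any hider is seen;
    seekers get the opposite. Teams that stray too far outside the play
    area are penalized (the penalty magnitude here is a placeholder).
    """
    hider_reward = -1.0 if any_hider_seen else 1.0
    rewards = {"hiders": hider_reward, "seekers": -hider_reward}
    for team in teams_outside_area:  # e.g. ("hiders",) if a hider left the area
        rewards[team] -= penalty
    return rewards

# Example: no hider has been seen and everyone is inside the play area.
print(team_rewards(any_hider_seen=False))  # {'hiders': 1.0, 'seekers': -1.0}
```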

The House Judiciary Antitrust Subcommittee asks Amazon, Facebook, Alphabet, and Apple for details including private emails in the wake of antitrust investigations

Bhagyashree R
17 Sep 2019
4 min read
On Friday last week, the House Judiciary Antitrust Subcommittee sent four separate requests for information to Amazon, Facebook, Alphabet, and Apple as part of its antitrust investigation into the tech giants. The companies are expected to respond by October 14th. The antitrust investigation was launched earlier this year to determine whether big tech is abusing its market dominance and violating antitrust law. Stating the reason behind the investigation, Judiciary Chairman Jerrold Nadler said in a statement, “The open internet has delivered enormous benefits to Americans, including a surge of economic opportunity, massive investment, and new pathways for education online. But there is growing evidence that a handful of gatekeepers have come to capture control over key arteries of online commerce, content, and communications.”

The House Judiciary Antitrust Subcommittee asks big tech for a broad range of documents

The letters issued by the antitrust subcommittee ask the companies to share organization charts, financial reports, and records they produced for earlier antitrust investigations by the FTC or the Department of Justice. Along with these details, the letters also ask a wide range of questions specific to the individual companies.

The letter to Amazon demands details about any provisions in its contracts with suppliers or merchants that guarantee its prices are the best. As there has been speculation that Amazon tweaks its search algorithm in favor of its own products, the letter asks detailed questions about its ranking and search algorithms.

https://twitter.com/superglaze/status/1173861273014022144

The letter also contains questions about the promotion and marketing services Amazon provides to suppliers or merchants and whether it treats its own products differently from third-party products. Congress has also asked about Amazon's acquisitions across medicine, home security, and grocery stores.

The letter to Facebook asks for details about its Onavo app, which was reported to have been used to monitor users' mobile activity. It asks Facebook to present details of all the product decisions and acquisitions Facebook made based on the data collected by Onavo. The letter also focuses on how Facebook plans to keep the promises it made when acquiring WhatsApp in 2014, such as “We are absolutely not going to change plans around WhatsApp and the way it uses user data.”

In the letter addressed to Alphabet, the antitrust subcommittee has asked detailed questions about the algorithm behind Google Search. The committee has also demanded executive emails discussing Google's acquisitions, including DoubleClick, YouTube, and Android. There are also several questions touching upon the Google Maps Platform, Google AdSense and AdX, the Play Store, YouTube's ad inventory, and much more.

In the letter to Apple, the antitrust subcommittee has asked whether Apple restricts its users from using web browsers other than Safari. It has asked for emails about Apple's crackdown on screen-tracking and parental control apps, and there are questions regarding Apple's restrictions on third-party repairs. The letter reads, “Isn't this just a way for Apple to elbow out the competition and extend its monopoly into the market for repairs?”

Read also: Is Apple's 'Independent Repair Provider Program' a bid to avoid the 'Right To Repair' bill?

Rep. David N. Cicilline, chairman of the panel's antitrust subcommittee, believes that the requests for information mark an “important milestone in this investigation.” In a statement, he said, “We expect stakeholders to use this opportunity to provide information to the Committee to ensure that the Internet is an engine for opportunity for everyone, not just a select few gatekeepers.”

This step by the antitrust subcommittee adds to the antitrust pressure on Silicon Valley. Last week, more than 40 state attorneys general launched an antitrust investigation targeting Google and its advertising practices. Meanwhile, Facebook is also facing a multistate investigation for possible antitrust violations.

Other news in data

Google is circumventing GDPR, reveals Brave's investigation for the Authorized Buyers ad business case

'Hire by Google', the next product killed by Google; services to end in 2020

Apple accidentally unpatches a fixed bug in iOS 12.4 that enables its jailbreaking

$100 million ‘Grant for the Web’ to promote innovation in web monetization jointly launched by Mozilla, Coil and Creative Commons

Sugandha Lahoti
17 Sep 2019
3 min read
Coil, Mozilla, and Creative Commons are launching a major $100 million 'Grant for the Web' to award people who help develop best practices for web monetization. The grant will distribute roughly $20 million per year for five years to content sites, open-source infrastructure developers, and independent creators that contribute to a 'privacy-centric, open, and accessible web monetization ecosystem'. It is a notable initiative to move the workings of the internet from an ad-focused business model to a new privacy-focused one.

Grant for the Web is primarily funded by Coil, a content-monetization company, with Mozilla and Creative Commons as founding collaborators. Coil is known for developing Interledger and Web Monetization as the first comprehensive set of open standards for monetizing content on the web. Web Monetization allows users to reward creators on the web without having to rely on one particular company, currency, or payment platform.

Read Also
Mozilla announces a subscription-based service for providing ad-free content to users
Apple announces 'WebKit Tracking Prevention Policy' that considers web tracking as a security vulnerability

Coil cited a number of issues in the internet domain, such as privacy abuses related to ads, demonetization to appease advertisers, unethical sponsored content, and large platforms abusing their market power. “All of these issues can be traced back to one simple problem,” says Coil, “browsers don't pay”. This forces sites to raise funds through workarounds like ads, data trafficking, sponsored content, and site-by-site subscriptions. To discourage these practices, Coil will now grant money to people interested in experimenting with Web Monetization as a more user-friendly, privacy-preserving way to make money.

Award amounts will vary from small to large ($1,000-$100,000), depending on the scope of the project. The majority of the grant money (at least 50%) will go to openly licensed software and content. Special focus will be given to people who promote diversity and inclusion on the internet, and to communities and individuals that have historically been marginalized, disadvantaged, or without access. Awardees will be approved by an Advisory Council initially made up of representatives from Coil, Mozilla, and Creative Commons.

“The business models of the web are broken and toxic, and we need to identify new ways to support creators and to reward creativity,” says Ryan Merkley, CEO of Creative Commons, in a statement. “Creative Commons is unlikely to invent these solutions on its own, but we can partner with good community actors who want to build things that are in line with our values.”

Mark Surman, Mozilla's executive director, said, “In the current web ecosystem, big platforms and invasive, targeted advertising make the rules and the profit. Consumers lose out, too — they unwittingly relinquish reams of personal data when browsing content. That's the whole idea behind 'surveillance capitalism.' Our goal in joining Grant for the Web is to support a new vision of the future. One where creators and consumers can thrive.”

Coil CEO Stefan Thomas is aware of the hurdles. "The grant is structured to run over five years because we think that's enough time to get to a tipping point where this either becomes a viable ecosystem or not," he said. "If it does happen, one of the nice things about this ecosystem is that it tends to attract more momentum."

Check out grantfortheweb.org and join the Community Forum to ask questions and learn more.

Next up in Privacy

Google open sources their differential privacy library to help protect user's private data

Microsoft contractors also listen to Skype and Cortana audio recordings, joining Amazon, Google and Apple in privacy violation scandals

How Data Privacy awareness is changing how companies do business

France and Germany reaffirm blocking Facebook’s Libra cryptocurrency

Sugandha Lahoti
16 Sep 2019
4 min read
Update, Oct 14: Following PayPal, Visa, Mastercard, eBay, Stripe, and Mercado Pago have also withdrawn from Facebook's Libra Association. These withdrawals leave Libra with no major US payment processor, denting a big hole in Facebook's plans for a distributed, global cryptocurrency. David Marcus, Libra chief, called this "no great news in the short term".

https://twitter.com/davidmarcus/status/1182775730427572224

Update, Oct 4: After the pushback from governments, PayPal, a corporate backer, is also backing away from Facebook's Libra Association, the company announced on October 4. “PayPal has made the decision to forgo further participation in the Libra Association at this time and to continue to focus on advancing our existing mission and business priorities as we strive to democratize access to financial services for underserved populations,” PayPal said in a statement.

In a joint statement released last week on Friday, France and Germany agreed to block Facebook's Libra in Europe. France had been debating banning Libra for quite some time. On Thursday, at the OECD Conference 2019 on virtual currencies, French Finance Minister Bruno Le Maire told attendees that he would do everything in his power to stop Libra. He said, “I want to be absolutely clear: in these conditions, we cannot authorize the development of Libra on European soil.” Le Maire was also in favor of the Eurozone issuing its own digital currency, commonly dubbed 'EuroCoin' in the press.

In the joint statement released Friday, the two governments of France and Germany wrote, “As already expressed during the meeting of G7 Finance Ministers and Central Bank's Governers in Chantilly in July, France and Germany consider that the Libra project, as set out in Facebook's blueprint, fails to convince that risks will be properly addressed. We believe that no private entity can claim monetary power, which is inherent to the sovereignty of Nations”.

In June, Facebook announced its ambitious plans to launch its own cryptocurrency, Libra, in a move to disrupt the digital ecosystem. Libra's launch alarmed certain experts who foresee a shift of control over the economy from governments and their central banks to privately held tech giants. The co-founder of Chainspace, Facebook's blockchain acquisition, said that he was “concerned about Libra's model for decentralization”. He added, “My concern is that Libra could end up creating a financial system that is *less* censorship-resistant than our current traditional financial system. You see, our current banking system is somewhat decentralized on a global scale, as money travels through a network of banks.”

The US administration is also worried about a non-governmental currency in the hands of big tech companies. In early July, the US Congress asked Facebook to suspend the implementation of Libra until the ramifications were investigated. In an interview with Bloomberg, Mu Changchun, deputy director of the People's Bank of China's payments department, said that as a convertible crypto asset or a type of stablecoin, Libra can flow freely across borders, and it “won't be sustainable without the support and supervision of central banks.”

People enthusiastically shared this development on Twitter.

“Europe is leading the way to become the blockchain hub”
https://twitter.com/AltcoinSara/status/1172582618971422720

“I always thought China would be first off the blocks on regulating Libra.”
https://twitter.com/Frances_Coppola/status/1148420964264370179

“France blocks libra and says not tax for crypto to crypto exchanges. America still clinging on and stifling innovation hurting investors and developers”
https://twitter.com/cryptoMD45/status/1172228992532983808

For now, a working group has been tasked by the G7 Finance Ministers to analyze the challenges posed by cryptocurrencies. Its final report will be presented in October.

More interesting Tech News

Google is circumventing GDPR, reveals Brave's investigation for the Authorized Buyers ad business case

Margrethe Vestager, EU's Competition Commissioner gets another term and expanded power to make "Europe fit for the digital age"

Hundreds of millions of Facebook users' phone numbers found online, thanks to an exposed server

Margrethe Vestager, EU’s Competition Commissioner gets another term and expanded power to make “Europe fit for the digital age”

Bhagyashree R
12 Sep 2019
4 min read
Danish politician Margrethe Vestager, who has been behind several tough enforcement decisions in the EU against the tech behemoths, was reappointed on Tuesday for a second five-year term as European Competition Commissioner. With this unprecedented second-term appointment, Margrethe Vestager will also take up the role of "Executive Vice-President for a Europe fit for the Digital Age". In this role, she will be responsible for overseeing the EU's digital innovation and leadership efforts, including artificial intelligence.

Margrethe Vestager's appointment was announced by the incoming European Commission president, Ursula von der Leyen, as she revealed her new team of commissioners. She said in a press conference, "Margrethe Vestager will coordinate the whole agenda and be the commissioner for competition. She will work together with the internal market, innovation and youth, transport, health, and justice."

Margrethe Vestager has been a driving force behind several major steps taken by the EU against the tech industry's abuse of market power, underpayment of corporate taxes, and violations of user privacy. She was instrumental in making Google pay fines totaling €8.25 billion ($9.1 billion) for anti-competitive practices in markets it dominates, across antitrust cases concerning its online shopping service, its Android software and its AdSense ad service. She ordered Apple to pay back up to €13 billion ($15 billion) in taxes to Ireland, saying, "Tax rulings that artificially reduce a company's tax burden are not in line with EU state aid rules." In July this year, the EU fined US chipmaker Qualcomm $271 million for selling its 3G baseband chipsets below the cost of production to force the startup Icera out of the market almost a decade ago. She has also opened a formal investigation against Amazon to find out whether it uses data from independent retailers to gain an unfair advantage over third-party merchants.

Read also: EU Commission opens an antitrust case against Amazon on grounds of violating EU competition rules

Margrethe Vestager's efforts may also have inspired US authorities, who recently opened several antitrust investigations against tech giants. On Tuesday, Texas attorney general Ken Paxton, together with a group of state attorneys general, said they are opening an antitrust investigation into Google that will focus on its advertising practices.

Margrethe Vestager's responsibilities in the new role

A number of priorities are listed in the President's mission letter to Margrethe Vestager. She will be responsible for formulating a new long-term strategy for Europe's industrial future and for ensuring that "cross-fertilisation between civil, defence and space industries" is improved. The President has also asked her to coordinate work on a European approach to AI within the first 100 days of her appointment.

The priorities set for Margrethe Vestager's Competition Commissioner mandate are quite broad. Her tasks will include strengthening competition enforcement in all sectors, developing tools and policies to better tackle market abuse by big companies, and sharing relevant market knowledge within the Commission, especially regarding the digital sector.

In a statement, the Computer and Communications Industry Association (CCIA), an international non-profit advocacy organization whose members include Google, Facebook and Amazon, responded, "We encourage the new Commissioners to assess the impact of all the recent EU tech regulation to ensure that future legislation will be evidence-based, proportionate and beneficial."

The 27 commissioners that Ursula von der Leyen has appointed include 13 women and 14 men drawn from every EU member state except the UK. They will take up their mandates on 1 November after approval by the EU Parliament.

Other news in data
Google is circumventing GDPR, reveals Brave's investigation for the Authorized Buyers ad business case
Google faces multiple scrutiny from the Irish DPC, FTC, and an antitrust probe by US state attorneys over its data collection and advertising practices
Google, Facebook and Twitter submit reports to EU Commission on progress to fight disinformation


InfluxData launches new serverless time series cloud database platform, InfluxDB Cloud 2.0

Sugandha Lahoti
12 Sep 2019
2 min read
Two days ago, InfluxData launched InfluxDB Cloud 2.0, its new serverless time series cloud database platform. The new product includes a free rate-limited tier, transparent usage-based pricing, and advanced analytics capabilities that let customers convert data into actionable information. It is also the first specialized time series database offered as a serverless cloud service.

The design direction of the 2.0 platform is to make it more visual and, in some cases, codeless. The company's goal is to unify the entire stack behind a single, common set of APIs and a common query language.

"Time series data is becoming increasingly important across a range of applications, notably operational and IoT analytics. Cloud and web developers today expect convenient access to specialist data engines," said James Governor, analyst and co-founder at RedMonk. "InfluxDB Cloud 2.0 is designed for developer experience, to make time-series data easier to work with."

Features of InfluxDB Cloud 2.0

It collects, stores, queries, processes and visualizes raw, high-precision, time-stamped data, and can outperform non-specialized time series solutions by up to 100x. It gives customers real-time observability into their systems and supports a wide range of customer applications.

InfluxDB Cloud 2.0 also features Flux, a new data scripting and query language. Flux can extract more complex and valuable insights from data, better detect anomalies, and enable real-time action through alerts and notifications.

The new user interface includes native client library collections and pre-built dashboards and scripts for common monitoring projects, such as Docker, Kubernetes, Nginx, Redis and more. InfluxDB Cloud 2.0 will also be available as an integrated solution on the Google Cloud Platform later this year.

You can get started with InfluxDB Cloud here; a minimal sketch of writing and querying data with the Python client is shown below.

Next up in Data
FaunaDB brings its serverless database to Netlify to help developers create apps
Different types of NoSQL databases and when to use them
Google open sources their differential privacy library to help protect user's private data
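To give a feel for the developer workflow the announcement describes, here is a minimal sketch of writing a point and running a Flux query against InfluxDB Cloud 2.0 using the official influxdb-client Python package. The URL, token, org, bucket, and measurement names are placeholder assumptions, not values from InfluxData's announcement, and the snippet is illustrative rather than a definitive integration guide.

```python
# Minimal sketch: write one point and run a Flux query against InfluxDB Cloud 2.0.
# Assumes the official Python client (`pip install influxdb-client`); the URL,
# token, org, and bucket below are placeholders for your own account settings.
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

url = "https://us-west-2-1.aws.cloud2.influxdata.com"  # placeholder region URL
token = "my-api-token"                                 # placeholder API token
org = "my-org"
bucket = "my-bucket"

client = InfluxDBClient(url=url, token=token, org=org)

# Write a single time-stamped point: measurement "cpu", one tag, one field.
write_api = client.write_api(write_options=SYNCHRONOUS)
point = Point("cpu").tag("host", "server01").field("usage_percent", 64.2)
write_api.write(bucket=bucket, record=point)

# Query the last hour of that data back with Flux and print field/value pairs.
flux_query = '''
from(bucket: "my-bucket")
  |> range(start: -1h)
  |> filter(fn: (r) => r._measurement == "cpu")
  |> mean()
'''
for table in client.query_api().query(flux_query):
    for record in table.records:
        print(record.get_field(), record.get_value())

client.close()
```

The same write/query pattern applies to the free rate-limited tier mentioned above; only the account-specific URL, token, org, and bucket change.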