
Tech News - Artificial Intelligence

61 Articles

2018 prediction: Was reinforcement learning applied to many real-world situations?

Prasad Ramesh
27 Feb 2019
4 min read
Back in 2017, we predicted that reinforcement learning would be an important subplot in the growth of artificial intelligence. After all, a machine learning agent that adapts and 'learns' according to environmental changes has all the makings of an incredibly powerful strain of artificial intelligence. Surely, then, the world was going to see new and more real-world uses for reinforcement learning. But did that really happen? You can bet it did. However, with all things intelligent subsumed into the sexy, catch-all term artificial intelligence, you might have missed where reinforcement learning was used.

Let's go all the way back to 2017 to begin. This was the year that marked a genesis in reinforcement learning. The biggest and most memorable event was perhaps Google's AlphaGo defeating the world's best Go player. Ultimately, this victory could be attributed to reinforcement learning; AlphaGo 'played' against itself multiple times, each time becoming 'better' at the game, developing an algorithmic understanding of how it could best defeat an opponent. However, reinforcement learning went well beyond board games in 2018.

Reinforcement learning in cancer treatment

MIT researchers used reinforcement learning to improve brain cancer treatment. Essentially, the reinforcement learning system is trained on a set of data on established treatment regimens for patients, and then 'learns' to find the most effective strategy for administering cancer treatment drugs. The important point is that artificial intelligence here can help to find the right balance between administering and withholding the drugs.

Reinforcement learning in self-driving cars

In 2018, UK self-driving car startup Wayve trained a car to drive using its 'imagination'. Real-world data was collected offline to train the model, which was then used to observe and predict the 'motion' of items in a scene and drive on the road. Even though the data was collected in sunny conditions, the system can also drive in rainy situations, adjusting itself to reflections from puddles and the like. Because the data is collected from the real world, there aren't any major differences between simulation and real application.

Reinforcement learning in SQL query optimization

UC Berkeley researchers also developed a deep reinforcement learning method to optimize SQL joins. The join ordering problem is formulated as a Markov Decision Process (MDP), and a method called Q-learning is applied to solve it. The deep reinforcement learning optimizer, called DQ, offers solutions that are close to optimal across all cost models, and it does so without any previous information about the index structures.

Robot prosthetics

OpenAI researchers created a robot hand called Dactyl in 2018. Dactyl has human-like dexterity for performing complex in-hand manipulations, achieved through the use of reinforcement learning.

Finally, it's back to Go. Well, not just Go - chess, and a game called Shogi too. This time, DeepMind's AlphaZero was the star. Whereas AlphaGo managed to master Go, AlphaZero mastered all three. This is significant because it indicates that reinforcement learning could help develop a more generalized intelligence than currently exists - an intelligence able to adapt to new contexts and situations, to almost literally understand the rules of very different games. But there was something else impressive about AlphaZero: it was only introduced to a set of basic rules for each game. Without any domain knowledge or examples, the newer program outperformed the current state-of-the-art programs in all three games with only a few hours of self-training.

Reinforcement learning: making an impact IRL

These were just some of the applications of reinforcement learning to real-world situations to come out of 2018. We're sure we'll see more as 2019 develops - the only real question is just how extensive its impact will be.

This AI generated animation can dress like humans using deep reinforcement learning
Deep reinforcement learning – trick or treat?
DeepMind open sources TRFL, a new library of reinforcement learning building blocks
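The Berkeley work mentioned above treats join ordering as an MDP and solves it with Q-learning. As a rough illustration of that general technique (a minimal tabular Q-learning loop on a toy chain MDP, not the DQ optimizer itself; the environment and hyperparameters are invented for the example), the core update rule looks like this:

```python
import numpy as np

# Toy chain MDP: 5 states in a row, actions are "move left" (0) / "move right" (1),
# reward 1.0 for reaching the rightmost state.
n_states, n_actions = 5, 2
q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.1   # learning rate, discount, exploration rate

def step(state, action):
    next_state = max(0, min(n_states - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if next_state == n_states - 1 else 0.0
    done = next_state == n_states - 1
    return next_state, reward, done

for episode in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy action selection
        if np.random.rand() < epsilon:
            action = np.random.randint(n_actions)
        else:
            action = int(np.argmax(q[state]))
        next_state, reward, done = step(state, action)
        # Q-learning update: move Q(s, a) toward reward + gamma * max_a' Q(s', a')
        q[state, action] += alpha * (reward + gamma * np.max(q[next_state]) - q[state, action])
        state = next_state

print(q)   # the learned table prefers "move right" in every state
```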


YouTube promises to reduce recommendations of 'conspiracy theory' videos. Ex-Googler explains why this is a 'historic victory'

Sugandha Lahoti
12 Feb 2019
4 min read
Talk of AI algorithms causing harms - addiction, radicalization, political abuse and conspiracies, disgusting kids' videos and the danger of AI propaganda - is everywhere. Last month, YouTube announced an update to its recommendations aiming to reduce the recommendation of videos that promote misinformation (e.g. conspiracy videos, false claims about historical events, flat earth videos, etc.). In a historic move, YouTube changed its artificial intelligence algorithm rather than opting for another solution that might have cost less in resources, time, and money.

Last Friday, an ex-Googler who helped build the YouTube algorithm, Guillaume Chaslot, appreciated this change, calling it "a great victory" that will help keep thousands of viewers from falling down the rabbit hole of misinformation and false conspiracy theories. In a Twitter thread, he presented his views as someone who has had experience working on YouTube's AI.

Recently, there has been a trend of YouTube promoting conspiracy videos such as 'Flat Earth theories'. In a blog post, Guillaume Chaslot explains, "Flat Earth is not a 'small bug'. It reveals that there is a structural problem in Google's AIs and they exploit weaknesses of the most vulnerable people, to make them believe the darnedest things."

YouTube realized this problem and has made amends to its algorithm. "It's just another step in an ongoing process, but it reflects our commitment and sense of responsibility to improve the recommendations experience on YouTube. To be clear, this will only affect recommendations of what videos to watch, not whether a video is available on YouTube. As always, people can still access all videos that comply with our Community Guidelines", states the YouTube team in a blog post. Chaslot appreciated this fact in his Twitter thread, saying that although YouTube had the option to 'make people spend more time on round earth videos', it chose the hard way by tweaking its AI algorithm.

AI algorithms also often get biased by tiny groups of hyperactive users. As Chaslot notes, people who spend their lives on YouTube affect recommendations more. The content they watch gets more views, which leads to YouTubers noticing and creating more of it, making people spend even more time on that content. This is because YouTube optimizes for things you might watch, not things you might like. As a Hacker News user observed, "The problem was that pathological/excessive users were overly skewing the recommendations algorithms. These users tend to watch things that might be unhealthy in various ways, which then tend to get over-promoted and lead to the creation of more content in that vein. Not a good cycle to encourage."

The new change in YouTube's AI uses machine learning along with human evaluators and experts from all over the United States to train the systems responsible for generating recommendations. Evaluators are trained using public guidelines and offer their input on the quality of a video. Currently, the change is applied only to a small set of videos in the US, as the machine learning systems are not yet very accurate. The update will roll out to other countries once the systems become more efficient.

However, there is another problem lurking that is probably even bigger than conspiracy videos: the addiction to spending more and more time online. AI engines used in major social platforms - including but not limited to YouTube, Netflix, and Facebook - all want people to spend as much time as possible. A Hacker News user commented, "This is just addiction peddling. Nothing more. I think we have no idea how much damage this is doing to us. It's as if someone invented cocaine for the first time and we have no social norms or legal framework to confront it."

Nevertheless, YouTube updating its AI engine was generally taken positively by netizens. As Chaslot concluded on his Twitter thread, "YouTube's announcement is a great victory which will save thousands. It's only the beginning of a more humane technology. Technology that empowers all of us, instead of deceiving the most vulnerable." Now it is up to YouTube how it will strike a balance between maintaining a platform for free speech and living up to its responsibility to users.

Is the YouTube algorithm's promoting of #AlternativeFacts like Flat Earth having a real-world impact?
YouTube to reduce recommendations of 'conspiracy theory' videos that misinform users in the US
YouTube bans dangerous pranks and challenges
Is YouTube's AI Algorithm evil?


Amazon admits that facial recognition technology needs to be regulated

Richard Gall
08 Feb 2019
4 min read
The need to regulate facial recognition technology has been a matter of debate for the last year. Since news emerged that Amazon had sold its facial recognition product Rekognition to a number of law enforcement agencies in the U.S. in the first half of 2018, criticism of the technology has been constant. It has arguably become the focal point for the ongoing discussion about the relationship between tech and government. Despite months of criticism and scrutiny - from inside and outside the company - Amazon's leadership has now said it, too, believes that facial recognition technology needs to be regulated.

In a blog post published yesterday, Michael Punke, VP of Public Policy at AWS (and author of The Revenant, trivia fans), clarified Amazon's position on the use and abuse of Rekognition. He also offered some guidelines that he argued should be followed when using facial recognition technologies to protect against misuse.

Michael Punke defends Rekognition

Punke initially takes issue with some of the tests done by the likes of the ACLU, which found that the tool matched 28 members of Congress with mugshots. Tests like this are misleading, Punke claims, because "the service was not used properly... When we've re-created their tests using the service correctly, we've shown that facial recognition is actually a very valuable tool for improving accuracy and removing bias when compared to manual, human processes." Punke also highlights that where Rekognition has been used by law enforcement agencies, Amazon has not "received a single report of misuse."

Nevertheless, he goes on to emphasise that Amazon does indeed accept the need for regulation. This suggests that in spite of its apparent success, there has been an ongoing conversation on the topic inside AWS. Managing public perception was likely an important factor here. "We've talked to customers, researchers, academics, policymakers, and others to understand how to best balance the benefits of facial recognition with the potential risks," he writes. Out of these conversations, Punke explains, Amazon has developed its own set of guidelines for how Rekognition should be used.

Amazon's proposed guidelines for facial recognition technology

Punke - and by extension Amazon - argues that, first and foremost, facial recognition technology must be used in accordance with the law. He stresses that this includes any civil rights legislation designed to protect vulnerable and minority groups. "Our customers are responsible for following the law in how they use the technology," he writes. He also points out that Amazon already has a policy forbidding the illegal use of its products - the AWS Acceptable Use Policy. This does, of course, only go so far. Punke seems well aware of this, however, writing that Amazon "have and will continue to offer our support to policymakers and legislators in identifying areas to develop guidance or legislation to clarify the proper application of those laws."

Human checks and transparency

Beyond this basic point, there are a number of other guidelines specified by Punke, mainly to do with human checks and transparency. Punke writes that when facial recognition technology is used by law enforcement agencies, human oversight is required to act as a check on the algorithm. This is particularly important when the use of facial recognition technology could violate an individual's civil liberties. Put simply, the deployment of any facial recognition technology requires human judgement at every stage. Punke adds a further safeguard, saying that a 99% confidence threshold should be met in cases where facial recognition could violate someone's civil liberties, and he stresses that the technology should only ever be one component within a given investigation - it shouldn't be the "sole determinant".

Finally, Punke stresses the importance of transparency. This means two things: law enforcement agencies being transparent in how they actually use facial recognition technology, and physical public notices when facial recognition technology could be used in a surveillance context.

What does it all mean?

In truth, Punke's blog post doesn't really mean that much. The bulk of it is, after all, about actions Amazon is already taking, and conversations it claims are ongoing. But it does tell us that Amazon can see trouble is brewing and that it wants to control the narrative when it comes to facial recognition technology. "New technology should not be banned or condemned because of its potential misuse," Punke argues - a point which sounds reasonable but fails to properly engage with the reality that potential misuse outweighs usefulness, especially in the hands of government and law enforcement.
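To make the 99% threshold concrete, here is a minimal sketch (Python with boto3) of how a confidence floor is applied when searching a face collection with Rekognition. The collection name and image file are hypothetical, and this illustrates where the parameter sits in the API rather than any agency's actual workflow:

```python
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

with open("suspect.jpg", "rb") as f:
    image_bytes = f.read()

# Hypothetical collection; matches below the threshold are simply not returned.
response = rekognition.search_faces_by_image(
    CollectionId="example-face-collection",
    Image={"Bytes": image_bytes},
    FaceMatchThreshold=99,   # the 99% confidence floor Punke recommends
    MaxFaces=5,
)

for match in response["FaceMatches"]:
    print(match["Face"]["FaceId"], match["Similarity"])
```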


AI chipmaking startup ‘Graphcore’ raises $200m from BMW, Microsoft, Bosch, Dell

Melisha Dsouza
18 Dec 2018
2 min read
Today, Graphcore, a UK-based chipmaking startup, raised $200m in a Series D funding round from investors including Microsoft and BMW, valuing the company at $1.7bn. This new funding brings the total capital raised by Graphcore to date to more than $300m.

The funding round was led by the venture capital firms Atomico and Sofina, with participation from some of the biggest names in the AI and machine learning industry, including Merian Global Investors, BMW i Ventures, Microsoft, Amadeus Capital Partners, Robert Bosch Venture Capital and Dell Technologies Capital, amongst many others. The company intends to use the funds to execute on its product roadmap, accelerate scaling and expand its global presence.

Graphcore, which designs chips purpose-built for artificial intelligence, is attempting to create a new class of chips that are better able to deal with the huge amounts of data needed to make AI computers. The company is ramping up production to meet customer demand for its Intelligence Processing Unit (IPU) PCIe processor cards, the first to be designed specifically for machine intelligence training and inference.

Nigel Toon, CEO and co-founder of Graphcore, said that Graphcore's processing units can be used for both the training and deployment of machine learning systems, and that they are "much more efficient". Tobias Jahn, principal at BMW i Ventures, stated that Graphcore's technology "is well-suited for a wide variety of applications from intelligent voice assistants to self-driving vehicles."

Last year the company raised $50 million from investors including Demis Hassabis, co-founder of DeepMind; Zoubin Ghahramani of Cambridge University and chief scientist at Uber; Pieter Abbeel from UC Berkeley; and Greg Brockman, Scott Gray and Ilya Sutskever from OpenAI. Head over to Graphcore's official blog for more insights on this news.

Microsoft Azure reportedly chooses Xilinx chips over Intel Altera for AI co-processors, says Bloomberg report
NVIDIA makes its new "brain for autonomous AI machines", Jetson AGX Xavier Module, available for purchase
NVIDIA demos a style-based generative adversarial network that can generate extremely realistic images; has ML community enthralled


Apache Spark 2.4.0 released

Amrata Joshi
09 Nov 2018
2 min read
Last week, Apache Spark released its latest version, Apache Spark 2.4.0. It is the fifth release in the 2.x line. This release comes with Barrier Execution Mode for better integration with deep learning frameworks, brings 30+ built-in and higher-order functions for dealing with complex data types, adds Scala 2.12 support, and improves the Kubernetes (K8s) integration. The release also focuses on usability, stability, and polish while resolving around 1100 tickets.

What's new in Apache Spark 2.4.0?

Built-in Avro data source
Image data source
Flexible streaming sinks
Elimination of the 2GB block size limitation during transfer
Pandas UDF improvements

Major changes

Apache Spark 2.4.0 supports Barrier Execution Mode in the scheduler, for better integration with deep learning frameworks. One can now build Spark with Scala 2.12 and write Spark applications in Scala 2.12. Apache Spark 2.4.0 supports the spark-avro package with logical type support for better performance and usability. Some users are SQL experts but aren't familiar with Scala, Python or R; for them, this version of Apache Spark adds support for PIVOT in SQL. Apache Spark 2.4.0 has also added a Structured Streaming ForeachWriter for Python, letting users write ForeachWriter code in Python and use the partitionId and the version/batchId/epochId to conditionally process rows. The release additionally introduces a Spark data source for the image format, so users can now load images through the Spark source reader interface.

Bug fixes

The LookupFunctions rule used to check the same function name over and over; the updated rule avoids repeating the check for each invocation. A PageRank change in Apache Spark 2.3 introduced a bug in the ParallelPersonalizedPageRank implementation that prevented serialization of a Map which needs to be broadcast to all workers. This issue has been resolved with the release of Apache Spark 2.4.0.

Read more about Apache Spark 2.4.0 on the official website of Apache Spark.

Building Recommendation System with Scala and Apache Spark [Tutorial]
Apache Spark 2.3 now has native Kubernetes support!
Implementing Apache Spark K-Means Clustering method on digital breath test data for road safety
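As a quick, hedged illustration of two of the additions above (the built-in image data source and the new higher-order functions for array columns), a minimal PySpark sketch might look like the following; the image directory path is a placeholder:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import expr

spark = SparkSession.builder.appName("spark-2.4-demo").getOrCreate()

# New built-in image data source: each row carries an "image" struct column.
images = spark.read.format("image").load("/data/images")   # placeholder path
images.select("image.origin", "image.width", "image.height").show()

# One of the new higher-order functions for complex types: transform an array in place.
df = spark.createDataFrame([(1, [1, 2, 3])], ["id", "values"])
df.select(expr("transform(values, x -> x * 2)").alias("doubled")).show()
```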


Filestack Workflows comes with machine learning capabilities to help businesses manage their digital images

Sugandha Lahoti
25 Oct 2018
3 min read
Filestack has come up with Filestack Workflows, a machine learning powered solution to help businesses detect, analyze, moderate and curate content in scalable and automated ways. Filestack has traditionally provided tools for companies to handle content as it is uploaded - checking for NSFW content, cropping photos, performing copyright detection on Word docs, and so on. However, handling content at scale using tools built in-house was proving difficult, relying heavily on developers to implement the code or set up a chain of events. This prompted Filestack to develop a new interface that allows businesses to upload, moderate, transform and understand content at scale, freeing them to innovate more and manage less.

The Filestack Workflows platform is built on a logic-driven intelligence functionality which uses machine learning to provide quick analysis of images and return actionable insights. This includes object recognition and detection, explicit content detection, optical character recognition, and copyright detection. Filestack Workflows provides flexibility for integration either from Filestack's own API or from a simple user interface.

Workflows also has several new features that extend far beyond simple image transformation:

Optical Character Recognition (OCR) allows users to extract text from any given image. Images of everything from tax documents to street signs can be uploaded through the system, returning a raw text format of all characters in that image.
Not Safe for Work (NSFW) Detection filters out content that is not appropriate for the workplace. The image tagging feature can automate content moderation by assigning a "safe for work" and a "not safe for work" score.
Copyright Detection determines if a file is an original work. A single API call will display the copyright status of single or multiple images.

Filestack has also released a quick demo to highlight the features of Filestack Workflows. The demo creates a Workflow that takes uploaded content (images or documents), determines the filetype and then curates 'safe for work' images, using the following logic (see the sketch below):

If it is an 'Image':
Determine if the image is 'Safe for Work'.
If it is 'Safe', store it to a specific storage source.
If it is 'Not Safe', pixelate the image, then store it to a specific storage source for modified images.
If it is a 'Document', store it to a specific storage source for documents.

Read more about the news on Filestack's blog.

Facebook introduces Rosetta, a scalable OCR system that understands text on images using Faster-RCNN and CNN
How Netflix uses AVA, an Image Discovery tool to find the perfect title image for each of its shows
Datasets and deep learning methodologies to extend image-based applications to videos
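The branching described above can be sketched in plain Python. Everything here is a hypothetical stand-in (helper names, the NSFW score threshold, the bucket names) meant only to show the decision flow, not Filestack's actual API:

```python
# Purely illustrative sketch of the Workflow branching described above.
def nsfw_score(upload):
    # Stand-in for Workflows' explicit-content detection (returns 0.0-1.0).
    return upload.get("nsfw_score", 0.0)

def pixelate(upload):
    # Stand-in for an image pixelation transform.
    return {**upload, "pixelated": True}

def store(upload, bucket):
    print(f"storing {upload['name']} in {bucket}")

def route_upload(upload):
    if upload["filetype"] == "image":
        if nsfw_score(upload) < 0.5:                 # 'safe for work'
            store(upload, bucket="images")
        else:                                        # 'not safe for work'
            store(pixelate(upload), bucket="moderated-images")
    elif upload["filetype"] == "document":
        store(upload, bucket="documents")

route_upload({"name": "photo.jpg", "filetype": "image", "nsfw_score": 0.8})
```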

Amazon tried to sell its facial recognition technology to ICE in June, emails reveal

Richard Gall
24 Oct 2018
3 min read
It has emerged that Amazon representatives met with Immigration and Customs Enforcement (ICE) this summer in a bid to sell its facial recognition tool Rekognition. Emails obtained by The Daily Beast show that officials from Amazon met with ICE on June 12 in Redwood City. In that meeting, Amazon outlined some of AWS's capabilities, stating that "we are ready and willing to help support the vital HSI [Homeland Security Investigations] mission." The emails (which you can see for yourself here) also show that Amazon was keen to set up a "workshop" with U.S. Homeland Security, and "a meeting to review the process in more depth and help assess your target list of 'Challenges [capitalization intended]'." What these 'Challenges' refer to exactly is unclear.

The controversy around Amazon's Rekognition tool

These emails will only serve to increase the controversy around Rekognition and Amazon's broader involvement with security services. Earlier this year the ACLU (American Civil Liberties Union) revealed that a small number of law enforcement agencies were using Rekognition for various purposes. Later, in July, the ACLU published the results of its own experiment with Rekognition, in which the tool incorrectly matched mugshots with 28 members of Congress. Amazon responded to this research with a rebuttal on the AWS blog, in which Dr. Matt Wood stated that "machine learning is a very valuable tool to help law enforcement agencies, and while being concerned it's applied correctly, we should not throw away the oven because the temperature could be set wrong and burn the pizza." This post was referenced in the email correspondence between Amazon and ICE. Clearly, accuracy was a live issue in the company's discussions with security officials.

The controversy continued this month after an employee published an anonymous letter on Medium, urging the company not to sell Rekognition to police. They wrote: "When a company puts new technologies into the world, it has a responsibility to think about the consequences."

Amazon claims Rekognition isn't a surveillance service

We covered this story on the Packt Hub last week. Following publication, an Amazon PR representative contacted us, stating that "Amazon Rekognition is NOT a surveillance service" [emphasis the writer's, not mine]. The representative also cited the post mentioned above by Dr. Matt Wood, keen to tackle some of the challenges presented by the ACLU research. Although Amazon's position is clear, it will be difficult for the organization to maintain that line given these emails. Separating the technology from its deployment is all well and good until it's clear that you're courting the kind of deployment for which you are being criticised.

Note 10.30.2018 - An Amazon spokesperson responded with a comment, wishing to clarify the events described from its perspective: "We participated with a number of other technology companies in technology "boot camps" sponsored by McKinsey Company, where a number of technologies were discussed, including Rekognition. As we usually do, we followed up with customers who were interested in learning more about how to use our services (Immigration and Customs Enforcement was one of those organizations where there was follow-up discussion)."


Graph Nets – DeepMind's library for graph networks in Tensorflow and Sonnet

Sunith Shetty
19 Oct 2018
3 min read
Graph Nets is DeepMind's new library for building graph networks in TensorFlow and Sonnet. Last week a paper, Relational inductive biases, deep learning, and graph networks, was published on arXiv by researchers from DeepMind, Google Brain, MIT and the University of Edinburgh. The paper introduces a new machine learning framework called graph networks, which is expected to bring new innovations to the artificial general intelligence realm.

What are graph networks?

Graph networks can generalize and extend various types of neural networks to perform calculations on graphs. They can implement relational inductive bias, a technique used for reasoning about inter-object relations. The graph networks framework is based on graph-to-graph modules. Each graph's features are represented by three characteristics:

Nodes
Edges: relations between the nodes
Global attributes: system-level properties

The graph network takes a graph as input, performs the required operations and calculations on the edges, the nodes and the global attributes, and then returns a new graph as output. The research paper argues that graph networks can support two critical human-like capabilities:

Relational reasoning: drawing logical conclusions about how different objects and things relate to one another
Combinatorial generalization: constructing new inferences, behaviors, and predictions from known building blocks

To understand and learn more about graph networks you can refer to the official research paper.

Graph Nets

The Graph Nets library can be installed from pip. To install the library, run the following command:

$ pip install graph_nets

The installation is compatible with Linux/Mac OS X, and Python versions 2.7 and 3.4+. The library includes Jupyter notebook demos which allow you to create, manipulate, and train graph networks to perform operations such as a shortest path-finding task, a sorting task, and a prediction task. Each demo uses the same graph network architecture, which shows the flexibility of the approach. You can try out the various demos in your browser using Colaboratory; in other words, you don't need to install anything locally when running the demos in the browser (or on a phone) via the cloud Colaboratory backend. You can also run the demos on your local machine by installing the necessary dependencies. A minimal sketch of building a graph network with the library follows this article.

What's ahead?

The concept was released with ideas based not only in artificial intelligence research but also in the computer and cognitive sciences. Graph networks are still an early-stage research theory which does not yet offer any convincing experimental results. But it will be very interesting to see how well graph networks live up to the hype as they mature. To try out the open source library, you can visit the official GitHub page. To provide any comments or suggestions, you can contact graph-nets@google.com.

Read more

2018 is the year of graph databases. Here's why.
Why Neo4j is the most popular graph database
Pytorch.org revamps for Pytorch 1.0 with design changes and added Static graph support
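As a rough sketch of the library's core workflow (based on the project's public examples; the toy graph, feature sizes and MLP widths here are arbitrary assumptions, and the exact call pattern may differ between library and TensorFlow versions), a graph goes in as a dict of nodes, edges and globals and comes out as a new graph:

```python
import numpy as np
import sonnet as snt
from graph_nets import modules, utils_tf

# A toy 3-node, 2-edge graph: senders/receivers index into the node list.
data_dict = {
    "globals": np.array([0.0], dtype=np.float32),
    "nodes": np.array([[0.1], [0.2], [0.3]], dtype=np.float32),
    "edges": np.array([[1.0], [2.0]], dtype=np.float32),
    "senders": np.array([0, 1]),
    "receivers": np.array([1, 2]),
}
input_graphs = utils_tf.data_dicts_to_graphs_tuple([data_dict])

# A full graph network block: edge, node and global update functions.
graph_net = modules.GraphNetwork(
    edge_model_fn=lambda: snt.nets.MLP([16, 16]),
    node_model_fn=lambda: snt.nets.MLP([16, 16]),
    global_model_fn=lambda: snt.nets.MLP([16, 16]))

# The output is another graph with updated edge, node and global features.
output_graphs = graph_net(input_graphs)
```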


Bitcoin Core escapes a collapse from a Denial-of-Service vulnerability

Savia Lobo
21 Sep 2018
2 min read
A few days back, Bitcoin Core developers discovered a vulnerability in the Bitcoin Core software that would have allowed a miner to insert a 'poisoned block' into its blockchain, crashing the nodes running the Bitcoin software around the world. The software patch notes state, "A denial-of-service vulnerability (CVE-2018-17144) exploitable by miners has been discovered in Bitcoin Core versions 0.14.0 up to 0.16.2." The developers recommend that users upgrade any of the vulnerable versions to 0.16.3 as soon as possible.

CVE-2018-17144: the denial-of-service vulnerability

The vulnerability was introduced in Bitcoin Core version 0.14.0, first released in March 2017, but the issue wasn't found until just two days ago, prompting contributors to the codebase to take action and ultimately release a tested fix within 24 hours. A report by The Next Web explains, "The bug relates to its consensus code. It meant that some miners had the option to send transaction data twice, causing the Bitcoin network to crash when attempting to validate them. As such invalid blocks need to be mined anyway, only those willing to disregard block reward of 12.5BTC ($80,000) could actually do any real damage."

The bug was not only in the Bitcoin protocol but also in its most popular software implementation, and some cryptocurrencies built using Bitcoin Core's code were also affected. For example, Litecoin patched the same vulnerability on Tuesday. However, Bitcoin is far too decentralized to be brought down by any single entity. TNW also states, "While never convenient, responding appropriately to such potential dangers is crucial to maintaining the integrity of blockchain tech – especially when reversing transactions is not an option." The discovery of this vulnerability was, nevertheless, a narrow escape from a potential Bitcoin collapse.

To read about this news in detail, head over to The Next Web's full coverage.

A Guide to safe cryptocurrency trading
Apple changes app store guidelines on cryptocurrency mining
Crypto-ML, a machine learning powered cryptocurrency platform


Baidu releases EZDL - a platform that lets you build AI and machine learning models without any coding knowledge

Melisha Dsouza
03 Sep 2018
3 min read
Chinese internet giant Baidu released 'EZDL' on September 1. EZDL allows businesses to create and deploy AI and machine learning models without any prior coding skills. With a simple drag-and-drop interface, it takes only four steps to train a deep learning model built specifically for a business' needs. This is particularly good news for small and medium-sized businesses, for whom leveraging artificial intelligence might ordinarily prove challenging. Youping Yu, general manager of Baidu's AI ecosystem division, claims that EZDL will allow everyone to access AI "in the most convenient and equitable way".

How does EZDL work?

EZDL focuses on three important machine learning tasks: image classification, sound classification, and object detection. One of the most notable features of EZDL is the small size of the training data sets required to create models. For image classification and object detection, it requires just 20 to 100 images per label; for sound classification, it needs only 50 audio files at most. Training can be completed in just 15 minutes in some cases, or a maximum of one hour for more complex models. After a model has been trained, the algorithm can be downloaded as an SDK or uploaded to a public or private cloud platform. The algorithms created support a range of operating systems, including Android and iOS. Baidu also claims an accuracy of more than 90 percent in two-thirds of the models it creates.

How EZDL is already being used by businesses

Baidu has demonstrated many use cases for EZDL. For example:

A home decorating website called 'Idcool' uses EZDL to train systems that automatically identify the design and style of a room with 90 percent accuracy.
An unnamed medical institution is using EZDL to develop a detection model for blood testing.
A security monitoring firm used it to make a sound-detecting algorithm that can recognize "abnormal" audio patterns that might signal a break-in.

Baidu is clearly making its mark in the AI race. This latest release follows the launch of its Baidu Brain platform for enterprises two years ago; Baidu Brain is already used by more than 600,000 developers. Another AI service launched by the company is its conversational DuerOS digital assistant, which is installed on more than 100 million devices. As if all that weren't enough, Baidu has also been developing hardware for artificial intelligence systems in the form of its Kunlun chip, designed for edge computing and data center processing - it's slated for launch later this year.

Baidu will demo EZDL at TechCrunch Disrupt SF, September 5th to 7th at Moscone West, 800 Howard St., San Francisco. For more on EZDL, visit Baidu's website for the project.

Read next

Baidu Apollo autonomous driving vehicles gets machine learning based auto-calibration system
Baidu announces ClariNet, a neural network for text-to-speech synthesis

Amazon is supporting research into conversational AI with Alexa fellowships

Sugandha Lahoti
03 Sep 2018
3 min read
Amazon has chosen recipients from all over the world to be awarded the Alexa Fellowships. The Alexa Fellowships program is open to PhD and post-doctoral students specializing in conversational AI at select universities. The program was launched last year, when four researchers won awards.

Amazon's Alexa Graduate Fellowship

The Alexa Graduate Fellowship supports conversational AI research by providing funds and mentorship to PhD and postdoctoral students. Faculty Advisors and Alexa Graduate Fellows will also teach conversational AI to undergraduate and graduate students using the Alexa Skills Kit (ASK) and Alexa Voice Services (AVS). The graduate fellowship recipients are selected based on their research interests, planned coursework and existing conversational AI curriculum. This year the institutions include six in the United States, two in the United Kingdom, one in Canada and one in India. The 10 universities are:

Carnegie Mellon University, Pittsburgh, PA
International Institute of Information Technology, Hyderabad, India
Johns Hopkins University, Baltimore, MD
MIT App Inventor, Boston, MA
University of Cambridge, Cambridge, United Kingdom
University of Sheffield, Sheffield, United Kingdom
University of Southern California, Los Angeles, CA
University of Texas at Austin, Austin, TX
University of Washington, Seattle, WA
University of Waterloo, Waterloo, Ontario, Canada

Amazon's Alexa Innovation Fellowship

The Alexa Innovation Fellowship is dedicated to innovations in conversational AI. The program was introduced this year and Amazon has partnered with university entrepreneurship centers to help student-led startups build their innovative conversational interfaces. The fellowship also provides resources to faculty members. This year ten leading entrepreneurship center faculty members were selected as the inaugural class of Alexa Innovation Fellows. They are invited to learn from the Alexa team and network with successful Alexa Fund entrepreneurs. Instructors will receive funding, Alexa devices, hardware kits and regular training, as well as introductions to successful Alexa Fund-backed entrepreneurs. The 10 universities selected to receive the 2018-2019 Alexa Innovation Fellowship are:

Arizona State University, Tempe, AZ
California State University, Northridge, CA
Carnegie Mellon University, Pittsburgh, PA
Dartmouth College, Hanover, NH
Emerson College, Boston, MA
Texas A&M University, College Station, TX
University of California, Berkeley, CA
University of Illinois, Urbana-Champaign, IL
University of Michigan, Ann Arbor, MI
University of Southern California, Los Angeles, CA

"We want to make it easier and more accessible for smart people outside of the company to get involved with conversational AI. That's why we launched the Alexa Skills Kit (ASK) and Alexa Voice Services (AVS) and allocated $200 million to promising startups innovating with voice via the Alexa Fund," wrote Kevin Crews, Senior Product Manager for the Amazon Alexa Fellowship, in a blog post.

Read more about the 2018-2019 Alexa Fellowship class on the Amazon blog.

Read next

Cortana and Alexa become best friends: Microsoft and Amazon release a preview of this integration
Voice, natural language, and conversations: Are they the next web UI?


DeepMind Artificial Intelligence can spot over 50 sight-threatening eye diseases with expert accuracy

Sugandha Lahoti
14 Aug 2018
3 min read
DeepMind Health has achieved a major milestone by developing an artificial intelligence system that can detect over 50 sight-threatening eye diseases with the accuracy of an expert doctor. The system can quickly interpret eye scans and correctly recommend how patients should be referred for treatment. It is the result of a collaboration with Moorfields Eye Hospital; the partnership was announced in 2016 to jointly tackle some pressing eye conditions.

How artificial intelligence beats current OCT scanners

Currently, eyecare doctors use optical coherence tomography (OCT) scans to help diagnose eye conditions. OCT scans are often hard to read and require time to be interpreted by experts. The time required can cause long delays between scan and treatment, which can be troublesome if someone needs urgent care. DeepMind's AI system can automatically detect the features of eye diseases within seconds. It can also prioritize patients by recommending whether they should be referred for treatment urgently.

System architecture

The system uses an easily interpretable representation sandwiched between two different neural networks. The first neural network, known as the segmentation network, analyses the OCT scan and provides a map of the different types of eye tissue and the features of disease it observes. The second network, known as the classification network, analyses the map to present eyecare professionals with diagnoses and a referral recommendation. The system expresses the referral recommendation as a percentage, allowing clinicians to assess the system's confidence.

An AI-ready dataset

DeepMind has also developed one of the best AI-ready databases for eye research in the world. The original dataset held by Moorfields was suitable for clinical use, but not for machine learning research. The improved database is a non-commercial public asset owned by Moorfields and is currently being used by hospital researchers for nine separate studies into a wide range of conditions.

DeepMind's initial research has yet to be turned into a usable product and then undergo rigorous clinical trials and regulatory approval before being used in practice. Once validated for general use, the system would be used for free across all 30 of Moorfields' UK hospitals and community clinics for an initial period of five years.

You can read more about the announcement on the DeepMind Health blog. You can also read the paper in Nature Medicine.

Reinforcement learning optimizes brain cancer treatment to improve patient quality of life
AI beats Chinese doctors in a tumor diagnosis competition
23andMe shares 5mn client genetic data with GSK for drug target discovery
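As a purely illustrative sketch of the two-stage design described above - an interpretable tissue map produced by one network and consumed by another - the following Keras snippet chains two small models. The shapes, layer sizes and class counts are invented for the example and are not DeepMind's actual architecture:

```python
import tensorflow as tf

NUM_TISSUE_TYPES = 15   # classes in the segmentation map (assumption)
NUM_DIAGNOSES = 50      # disease labels (the "over 50" in the article)
NUM_REFERRALS = 4       # e.g. urgent / semi-urgent / routine / observation (assumption)

# Stage 1: segmentation network - OCT slice in, per-pixel tissue map out.
scan = tf.keras.Input(shape=(128, 128, 1), name="oct_slice")
x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(scan)
x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(x)
tissue_map = tf.keras.layers.Conv2D(NUM_TISSUE_TYPES, 1, activation="softmax",
                                    name="tissue_map")(x)
segmentation_net = tf.keras.Model(scan, tissue_map)

# Stage 2: classification network - tissue map in, diagnoses plus referral out.
seg_in = tf.keras.Input(shape=(128, 128, NUM_TISSUE_TYPES))
y = tf.keras.layers.Conv2D(64, 3, strides=2, activation="relu")(seg_in)
y = tf.keras.layers.GlobalAveragePooling2D()(y)
diagnoses = tf.keras.layers.Dense(NUM_DIAGNOSES, activation="sigmoid", name="diagnoses")(y)
referral = tf.keras.layers.Dense(NUM_REFERRALS, activation="softmax", name="referral")(y)
classification_net = tf.keras.Model(seg_in, [diagnoses, referral])

# The softmax referral output is what gets read as a percentage confidence.
classification_net.summary()
```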


Google Cloud Next: Fei-Fei Li reveals new AI tools for developers

Richard Gall
25 Jul 2018
3 min read
AI was always going to be a central theme of this year's Google Cloud Next, and the company hasn't disappointed. In a blog post, Fei-Fei Li, Chief Scientist at Google AI, has revealed a number of new products that will make AI more accessible for developers.

Expanding Cloud AutoML

[Image: Fei-Fei Li at AI for Good in 2017 (via commons.wikimedia.org)]

In her blog post, Li notes that there is a "significant gap" in the machine learning world. On the one hand, data scientists build solutions from the ground up; on the other, pre-trained solutions can deliver immediate results with little work from engineers. With Cloud AutoML, Google has made a pitch to the middle ground: those that require more sophistication than pre-built models, but don't have the resources to build a system from scratch. Li provides detail on a number of new developments within the Cloud AutoML project that are being launched as part of Google Cloud Next. This includes AutoML Vision, which "extends the Cloud Vision API to recognize entirely new categories of images." It also includes two completely new language-related machine learning tools: AutoML Natural Language and AutoML Translation. AutoML Natural Language will allow users to perform natural language processing - this could, for example, help organizations manage content at scale. AutoML Translation, meanwhile, could be particularly useful for organizations looking to go global with content distribution and marketing.

Improvements to Google machine learning APIs

Li also revealed that Google is launching a number of key updates to its APIs. The Google Cloud Vision API "now recognizes handwriting, supports additional file types (PDF and TIFF) and product search, and can identify where an object is located within an image," according to Li. The Cloud Text-to-Speech and Cloud Speech-to-Text APIs also have updates that build in greater sophistication in areas such as translation.

Bringing AI to customer service with Contact Center AI

The final important announcement by Li centers on conversational UI using AI. Part of this was an update to Dialogflow Enterprise Edition, a Google-owned tool that makes building conversational UI easier. Text-to-speech capabilities have been added to the tool alongside its speech-to-text capability, which came with its launch in November 2017. But the big reveal is Contact Center AI. This builds on Dialogflow and is essentially a complete customer service AI solution. Contact Center AI bridges the gap between virtual assistant and human customer service representative, supporting the entire journey from customer query to resolution. It has the potential to be a game changer when it comes to customer support.

Read next:

Decoding the reasons behind Alphabet's record high earnings in Q2 2018
Google Cloud Launches Blockchain Toolkit to help developers build apps easily
Google's Daydream VR SDK finally adds support for two controllers
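As a quick, hedged illustration of the Vision API capabilities Li mentions (handwriting-friendly text detection and locating objects within an image), a sketch using the google-cloud-vision Python client might look like the following; the file name is a placeholder and the exact import path can vary between client library versions:

```python
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("handwritten_note.jpg", "rb") as f:       # placeholder file
    image = vision.Image(content=f.read())

# Dense/handwritten text detection
doc = client.document_text_detection(image=image)
print(doc.full_text_annotation.text)

# Object localization: what is in the image and where it sits
objects = client.object_localization(image=image).localized_object_annotations
for obj in objects:
    print(obj.name, obj.score, [(v.x, v.y) for v in obj.bounding_poly.normalized_vertices])
```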

Tensorflow 1.10 RC0 released

Amey Varangaonkar
24 Jul 2018
2 min read
Continuing the recent trend of rapid updates introducing significant fixes and new features, Google has released the first release candidate for TensorFlow 1.10. TensorFlow 1.10 RC0 brings some improvements in model training and evaluation, and also in how TensorFlow runs in a local environment. This is TensorFlow's fifth update release in just over a month, including two major version updates, the previous one being TensorFlow 1.9.

What's new in TensorFlow 1.10 RC0?

The tf.contrib.distributions module will be deprecated in this version. This module is primarily used to work with statistical distributions.
An upgrade to NCCL 2.2 will be mandatory in order to perform GPU computing with this version of TensorFlow, for added performance and efficiency.
Model training speed can now be optimized by improving the communication between the model and the TensorFlow resources; for this, the RunConfig function has been updated in this version.
The TensorFlow development team also announced support for Bazel - a popular build and testing automation tool - and deprecated support for CMake starting with TensorFlow 1.11.
This version also incorporates some bug fixes and performance improvements to tf.data, tf.estimator and other related modules.

To get full details on the feature list of this release candidate, you can check out TensorFlow's official release page on GitHub.

No news on TensorFlow 2.0 yet

Many developers were expecting the next major release of TensorFlow, TensorFlow 2.0, to be released in late July or August. However, the announcement of this release candidate and the mention of the next version update (1.11) means they will have to wait some more time before they learn more about the next breakthrough release.

Read more

Why Twitter (finally!) migrated to Tensorflow
Python, Tensorflow, Excel and more – Data professionals reveal their top tools
Can a production ready Pytorch 1.0 give TensorFlow a tough time?
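The RunConfig mentioned above is the object through which tf.estimator models pick up their run-time settings. As a minimal, generic sketch of where it plugs in (this shows the standard TensorFlow 1.x Estimator API rather than the specific communication change in 1.10; the directory, feature shape and layer sizes are arbitrary):

```python
import tensorflow as tf

# Run-time settings for an Estimator: where to checkpoint, how often, etc.
run_config = tf.estimator.RunConfig(
    model_dir="/tmp/demo_model",      # placeholder path
    save_checkpoints_steps=500,
    keep_checkpoint_max=3)

feature_columns = [tf.feature_column.numeric_column("x", shape=[4])]

# The config is handed to the Estimator, which uses it for training/evaluation plumbing.
estimator = tf.estimator.DNNClassifier(
    feature_columns=feature_columns,
    hidden_units=[16, 16],
    n_classes=3,
    config=run_config)
```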


Nvidia and AI researchers create AI agent Noise2Noise that can denoise images

Richard Gall
10 Jul 2018
2 min read
Nvidia has created an AI agent that can clean 'noisy' images - without ever having seen a 'clean' one. Working alongside AI researchers from MIT and Aalto University, it has created something called 'Noise2Noise'. The team's findings could, they claim, "lead to new capabilities in learned signal recovery using deep neural networks." This could have a big impact on a number of areas, including healthcare.

How researchers trained the Noise2Noise AI agent

The team took 50,000 images from the ImageNet database, which were then manipulated to look 'noisy'. Noise2Noise then ran on these images and was able to 'denoise' them - without knowing what a clean image looked like. This is the most significant part of the research: the AI agent wasn't learning from clean data, but was instead simply learning the denoising process. This is an emerging and exciting area in data analysis and machine learning. In the introduction to their recently published journal article, which coincides with a presentation at the International Conference on Machine Learning in Stockholm this week, the research team explain: "Signal reconstruction from corrupted or incomplete measurements is an important subfield of statistical data analysis. Recent advances in deep neural networks have sparked significant interest in avoiding the traditional, explicit a priori statistical modeling of signal corruptions, and instead learning to map corrupted observations to the unobserved clean versions."

The impact and potential applications of Noise2Noise

Because the Noise2Noise AI agent doesn't require 'clean data' - or the 'a priori statistical modeling of signal corruptions' - it could be applied in a number of very exciting ways. It "points the way [to] significant benefits in many applications by removing the need for potentially strenuous collection of clean data", the team argue. One of the most interesting potential applications of the research is in the field of MRI scans. Essentially, an agent like Noise2Noise could give a much more accurate MRI scan than those produced by traditional reconstruction methods, which use something called the Fast Fourier Transform. This could subsequently lead to a greater level of detail in MRI scans, which will massively support medical professionals in making quicker diagnoses.

Read next:

Nvidia GPUs offer Kubernetes for accelerated deployments of Artificial Intelligence workloads
Nvidia's Volta Tensor Core GPU hits performance milestones. But is it the best?
How to Denoise Images with Neural Networks
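To make the training idea concrete, here is a minimal, hedged sketch of a Noise2Noise-style setup in Keras: clean images are used only to synthesise two independently corrupted copies, and the network is trained to map one noisy copy to the other, never seeing a clean target. MNIST, the tiny network and the noise level are arbitrary choices for illustration, not the paper's setup:

```python
import numpy as np
import tensorflow as tf

# Clean data is only used to generate two independently corrupted copies.
(clean, _), _ = tf.keras.datasets.mnist.load_data()
clean = clean[..., None].astype("float32") / 255.0

def add_noise(x, sigma=0.3):
    return np.clip(x + np.random.normal(0.0, sigma, x.shape), 0.0, 1.0)

noisy_inputs = add_noise(clean)    # first corrupted copy (network input)
noisy_targets = add_noise(clean)   # second, independent corrupted copy (training target)

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu",
                           input_shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(1, 3, padding="same", activation="sigmoid"),
])
model.compile(optimizer="adam", loss="mse")

# With zero-mean noise and an L2 loss, the expected-loss minimiser is the clean image,
# which is why training noisy-to-noisy still teaches the network to denoise.
model.fit(noisy_inputs, noisy_targets, batch_size=128, epochs=2)
```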