How-To Tutorials - Artificial Intelligence

84 Articles

So, you want to learn artificial intelligence. Here's how you do it.

Richard Gall
27 Feb 2019
8 min read
If you want to learn how to build artificial intelligence systems, the first step is simple: forget all about artificial intelligence. Instead, focus your attention on machine learning. That way, you can be sure you're in the domain of the practical rather than the domain of hype.

Okay, this position might sound a little too dramatic. But there are a number of jokes doing the rounds on Twitter along these lines. Mat Velloso, an adviser to Satya Nadella at Microsoft, wrote late last year that "if it's written in Python, it's machine learning. If it's written in PowerPoint, it's probably AI." https://twitter.com/matvelloso/status/1065778379612282885

There are similar jokes about which of the two words you use depending on whether you're talking to investors or colleagues. Either way, it's clear that if you're starting to explore artificial intelligence and machine learning, understanding what's important and what you can ignore will help you get a better view of where you need to go as your learning journey unfolds. So, once you understand that artificial intelligence is merely the word describing the end goal we're trying to achieve, and machine learning is a means of achieving that goal, you can begin trying to develop intelligent systems yourself. Clearly, a question will keep cropping up: where next? Well, this post should go some way to helping you.

Do you want to learn artificial intelligence? Read Packt's extensive Learning Path Python: Beginner's Guide to Artificial Intelligence. For a more advanced guide, check out Python: Advanced Guide to Artificial Intelligence.

The basics of machine learning

If you want to build artificial intelligence, you need to start by learning the basics of machine learning. Follow these steps:

- Get to grips with the basics of Python and core programming principles - if you're reading this, you probably know enough to begin, but if you don't, there are plenty of resources to get you started. (We suggest you start with Learning Python.)
- Make sure you understand basic statistical principles - machine learning is really just statistics, automated by code.

Venturing further into machine learning and artificial intelligence

The next step builds on those foundations. This is where you begin thinking about the sorts of problems you want to solve and the types of questions you want to ask. It's actually a creative step where you set the focus for your project - whatever kind of pattern or relationship you want to understand, this is where you do just that.

One of the difficulties, however, is making sure you have access to the data you need to actually do what you want. Sometimes you might need to do some serious web scraping or data mining to get hold of the data you want - that's beyond the scope of this piece, but there are plenty of resources out there to help you do just that. There are also plenty of ready-made data sets available for you to use in your machine learning project in whichever way you wish. You can find 50 data sets for machine learning here, all for a range of different uses. (If you're trying machine learning for the first time, we'd suggest using one of these data sets and playing around, to save you collecting data.)

Getting to grips with data modelling

Although machine learning modelling is the next step in the learning journey, arguably it should happen at the same time as you're thinking about both the questions you're asking and the different data sources you might require.
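If you want to see what "playing around" with one of those ready-made data sets looks like in practice, here is a minimal sketch of a first supervised model. It uses scikit-learn and its built-in iris data set purely as an illustrative assumption - any of the data sets mentioned above would do:

```python
# A first machine learning experiment: a ready-made data set plus a simple model.
# Assumes scikit-learn is installed (pip install scikit-learn).
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Load a small, well-known data set so you can focus on the workflow, not data collection
X, y = load_iris(return_X_y=True)

# Hold some data back so you can check how well the model generalises
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

# Fit a model and evaluate it on the held-out data
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
print("Test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

Swapping in a different data set or a different algorithm is a one-line change, which is exactly why this kind of experimentation is such a good way to learn.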
The reason modelling and problem framing belong together is that the model - or models - you decide to employ in your project will follow directly from the problems you're trying to tackle and, indeed, the nature and size of the data sets you eventually use. It's important to note that no model is perfect. There's a rule in the data science and machine learning world called the 'no free lunch' rule - basically, there's no model that offers a shortcut. There will always be trade-offs between different algorithms in how they perform on various factors. To manage this issue you need to understand what's important to you - maybe you're not worried about speed, for example? Or perhaps accuracy isn't crucial, and you just want to build something that runs quickly. Broadly, the models you use will fall into two categories: supervised or unsupervised.

Supervised machine learning algorithms

Supervised learning is where you have an input and an output and you use an algorithm to better understand the relationship between the two. Ultimately, you want to get to a point where your machine learning system understands the relationship in such a way that you could predict an output. Supervised learning can also be broken down into regression or classification. Regression is where the output is a number or value, while classification is a specific category, or descriptor. Some algorithms, such as random forest, can be used for both regression and classification problems, while others can be used for one or the other. For example, support vector machines can be used for classification problems, while linear regression algorithms can, as the name indicates, be used for regression problems.

Unsupervised machine learning algorithms

Unsupervised machine learning contrasts with supervised machine learning in that there are no outputs on which the algorithm works. If supervised learning 'tells' the algorithm the answers, from which it then needs to understand how those answers were generated, unsupervised learning aims to understand the underlying structure within a given set of data. There aren't any answers to guide the machine learning algorithm. As above, there are a couple of different approaches to unsupervised machine learning: clustering and association. Clustering helps you understand different groups within a set of data, while association is simply a way of understanding relationships or rules: if this happens, then this will happen too.

Okay, so what about artificial intelligence?

By now you will have a solid foundation of knowledge in machine learning. However, this is only the tip of the iceberg - machine learning at its most basic provides a very limited form of artificial intelligence. Advances in artificial intelligence are possible through ever more powerful algorithms - artificial or deep neural networks - that have additional layers of complexity (quite literally additional neurons). These are the algorithms that are used to power sophisticated applications and tools. From image recognition to image identification, through to speech to text and machine translation, the applications of these algorithms are radically transforming our relationship with technology. But you probably already knew that. The important question is how you actually go about doing it. Well, luckily, in many ways, if you know the core components of machine learning, the more advanced elements of deep learning and artificial neural networks shouldn't actually be as complex as you might at first think.
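To give a feel for how small that step is, here is a minimal sketch of a first neural network using Keras (part of TensorFlow, one of the frameworks discussed below). The synthetic data and layer sizes are arbitrary assumptions chosen purely for illustration:

```python
# A minimal feed-forward neural network with Keras.
# Assumes TensorFlow 2.x is installed (pip install tensorflow).
import numpy as np
from tensorflow import keras

# Synthetic data standing in for a real classification problem:
# 1,000 samples, 20 features, binary labels.
X = np.random.rand(1000, 20)
y = (X.sum(axis=1) > 10).astype(int)

# A small network: two hidden layers and a sigmoid output for binary classification
model = keras.Sequential([
    keras.Input(shape=(20,)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Train briefly and report loss and accuracy (enough for a first look)
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))
```

The concepts are the same ones from basic machine learning - inputs, outputs, and a loss to minimise - the frameworks discussed below simply give you more expressive models and more knobs to turn.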
There are, however, a couple of considerations that become more important as you move deeper into deep learning.

Hardware considerations for deep learning

One of the most important considerations for any deep learning project you want to try is the hardware you're using. For a basic machine learning problem, this shouldn't be an issue. But as the computations on which your deep learning system is working become more extensive, the hardware you use to run it will become a challenge you need to resolve. This is too big an issue to explore here, but you can look in detail at our comparison of different processors here.

Getting started with deep learning frameworks

One of the reasons the whole world is talking about artificial intelligence is that it's easier to do. And this is thanks, in part, to the growth of new deep learning frameworks that make it relatively straightforward to build complex deep learning models. The likes of TensorFlow, Keras, and PyTorch are all helping engineers and data scientists build deep learning models of considerable sophistication. Although they each have their own advantages, and it's well worth spending some time comparing them, there's certainly a lot to be said for simply getting started with them yourself.

What about the cloud's impact on machine learning and artificial intelligence?

An interesting development in the machine learning space is the impact of cloud-based solutions. The likes of Azure, AWS, and Google Cloud Platform all offer a number of different services and tools within their overarching cloud products that make performing machine learning and deep learning tasks much easier. While this is undoubtedly going to be an important development, and, indeed, one you may have encountered already, there is no substitute for simply getting your hands dirty with the data and seeing how the core principles behind machine learning and artificial intelligence actually work.

Conclusion: don't be scared, take machine learning and artificial intelligence one step at a time

Clearly, with so much hype around artificial intelligence it's easy to get stuck before you begin. However, by focusing on the core principles and practical application of machine learning you will be well on your way to helping drive the future of artificial intelligence.

Learn artificial intelligence from scratch with Python: Beginner's Guide to Artificial Intelligence. Dive deeper into deep learning and artificial intelligence with Python: Advanced Guide to Artificial Intelligence.

Google engineers work towards large scale federated learning

Prasad Ramesh
22 Feb 2019
4 min read
In a paper published on February 4, Google engineers set out plans to take federated learning to scale. The paper showcases the high-level plans, challenges, solutions, and applications. Federated learning was first introduced in 2017 by Google. The idea is to use data from a number of computing devices like smartphones instead of a centralized data source.

Federated learning can help with privacy

Federated learning can be beneficial as it addresses the privacy concern. Android phones are used for the system; the data is used on the device but never uploaded to any server. A deep neural network is trained using TensorFlow on the data stored on the Android phone. The Federated Averaging algorithm by Brendan McMahan uses an approach similar to synchronous training. The weights of the neural network are combined in the cloud using Federated Averaging. This creates a global model which is then pushed back to the phones as results/desirable actions. To enhance privacy, approaches like differential privacy and Secure Aggregation are used.

The paper addresses challenges like time zone differences, connectivity issues, and interrupted execution. Their work is mature enough to deploy the system in production for tens of millions of devices, and they are now working towards supporting billions of devices.

The training protocol

The system involves devices and the Federated Learning server communicating availability, with the server selecting devices to run a task. A subset of the available devices is selected for a task. The Federated Learning server instructs the devices what computing task to run via a plan; a plan consists of a TensorFlow graph and instructions to execute it. There are three phases for a training round to take place:

- Selection of the devices that meet eligibility criteria
- Configuration of the server with simple or Secure Aggregation
- Reporting from the devices, where reaching a certain number of reports gets the training round started

(Source: Towards Federated Learning at Scale: System Design)

The devices are supposed to maintain a repository of the collected data, and the applications are responsible for providing data to the Federated Learning runtime as an example store. The Federated Learning server is designed to operate across many orders of magnitude. Each round can mean updates from devices in the range of KBs to tens of MBs reaching the server.

Data collection

To avoid harming the phone's battery life and performance, various analytics are collected in the cloud. The logs don't contain any personally identifiable information.

Secure aggregation

Secure Aggregation uses encryption to make individual device updates uninspectable. They plan to use it for protection against threats in data centers. Secure Aggregation would ensure data stays encrypted even when it is in memory.

Challenges of federated learning

Compared to a centralized dataset, federated learning poses a number of challenges. The training data is not inspectable, so tooling is required to work with proxy data. Models cannot be run interactively and must be compiled to be deployed in the Federated Learning server. Model resource consumption and runtime compatibility also come into the picture when working with many devices in real time.

Applications of federated learning

It is best for cases where the data on devices is more relevant than data on servers - for example, ranking items for better navigation, suggestions for the on-device keyboard, and next-word prediction. This has already been implemented on the Google Pixel and Gboard.
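As a rough, illustrative sketch of the Federated Averaging step described above - not the paper's actual implementation, which runs on Android devices with TensorFlow - the following toy example simulates a handful of devices, lets each one update a simple model on its own private data, and has the "server" combine the updates by weighted averaging:

```python
# Toy sketch of Federated Averaging (FedAvg) on a linear model.
# Illustrative only: the devices, data, and model here are all simulated.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Run a few epochs of gradient descent on one device's private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])

# Simulate private data sets held on ten devices; this data never leaves the "device"
devices = []
for _ in range(10):
    X = rng.normal(size=(50, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    devices.append((X, y))

global_w = np.zeros(3)
for round_num in range(20):
    # Selection phase: sample a subset of the available devices for this round
    selected = rng.choice(len(devices), size=5, replace=False)
    updates, sizes = [], []
    for i in selected:
        X, y = devices[i]
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    # Reporting/aggregation phase: weighted average of the returned models
    global_w = np.average(updates, axis=0, weights=sizes)

print("Learned weights:", np.round(global_w, 2))  # should approach true_w
```

The production system adds the pieces the article describes - device eligibility checks, Secure Aggregation, and reporting thresholds - but the averaging at its core looks much like this.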
Future work includes eliminating bias caused by restrictions in device selection, algorithms to support better parallelism (more devices in one round), avoiding retraining already-trained tasks on devices, and compression to save bandwidth.

Federated computation, not federated learning

The authors do not mention machine learning explicitly anywhere in the paper. They believe that the applications of such a model are not limited to machine learning; Federated Computation is the term they want to use for this concept.

Federated computation and edge computing

Federated learning and edge computing are very similar, but there are subtle differences in the purpose of the two. Federated learning is used to solve problems with specific tasks assigned to endpoint smartphones, while edge computing is for predefined tasks to be processed at end nodes, for example IoT cameras. Federated learning decentralizes the data used, while edge computing decentralizes the task computation to various devices.

For more details on the architecture and how it works, you can check out the research paper.

Technical and hidden debts in machine learning - Google engineers give their perspective
Researchers introduce a machine learning model where the learning cannot be proved
What if AIs could collaborate using human-like values? DeepMind researchers propose a Hanabi platform.

5 blog posts that could make you a better Python programmer

Sam Wood
11 Feb 2019
2 min read
Python is one of the most important languages to master. It's top rated, fast growing, and in demand by businesses around the globe. There's a host of excellent insight across the web about how to become a better programmer with Python. Here are five blogs we think you need to read to upgrade your skills and knowledge.

1. A Brief History of Python

Did you know Python is actually older than Java, R, and JavaScript? If you want to be a better Python programmer, it pays to know your history. This quick blog post takes you through the language's journey from Christmas hobby project to its modern ascendancy with version 3.

2. Do you write Python Code or Pythonic Code?

Are you writing code in Python, or code for Python? When people talk about Pythonic code they mean code that uses Python idioms well - code that is natural and displays fluency in the language. Are you writing code like you would write Java or C++? This 4-minute blog post gives quick tips on how to make your code Pythonic.

3. The Singleton Python Design Pattern in Depth

The singleton pattern is a powerful design pattern that ensures only one instance of a class is ever created. You'd generally use it for things like the logging class and its subclasses, managing a connection to a database, or read-only singletons that store some global state. This in-depth blog post takes you through the three principal ways to implement singletons, for better Python code (there's also a short sketch of one approach at the end of this post).

4. Why is Python so good for artificial intelligence and machine learning? 5 Experts Explain.

Python is the breakout language of data, zooming ahead of rival R to be dominant in the field of artificial intelligence and machine learning. But what is it about the programming language that makes it so well suited to this fast-growing field? In this blog post, five artificial intelligence experts weigh in on what they think makes Python perfect for AI and machine learning.

5. Top 7 Python Programming Books You Need To Read

That's right - we put a list in our list. But if you really want to become a better Python programmer, you'll want to get to grips with this stack of amazing Python books. Whether you're a complete beginner or more experienced, these seven Python titles are the perfect way to upgrade your knowledge.
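As a quick taste of item 3, here is one common way to implement a singleton in Python: overriding __new__ so that repeated construction returns the same object. This sketch is drawn from general Python practice rather than the linked post, and AppConfig is just a made-up example class:

```python
# A minimal singleton: every call to AppConfig() returns the same instance.
class AppConfig:
    _instance = None

    def __new__(cls, *args, **kwargs):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance.settings = {}  # shared state, initialised exactly once
        return cls._instance

config_a = AppConfig()
config_b = AppConfig()
config_a.settings["debug"] = True

print(config_a is config_b)        # True - both names point at the same object
print(config_b.settings["debug"])  # True - state is shared across "instances"
```

This is only one of the three approaches the linked post walks through; the trade-offs between them are exactly what it covers in depth.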

What the US-China tech and AI arms race means for the world - Frederick Kempe at Davos 2019

Sugandha Lahoti
24 Jan 2019
6 min read
Atlantic Council CEO Frederick Kempe spoke at the World Economic Forum (WEF) in Davos, Switzerland. In his presentation, Future Frontiers of Technology Control, he talked about the cold war between the US and China and why the two countries need to cooperate, not compete, in the tech arms race. He began his presentation by posing a question set forth by former US National Security Advisor Stephen Hadley: "Can the incumbent US and insurgent China become strategic collaborators and strategic competitors in this tech space at the same time?"

Read also: The New AI Cold War Between China and the USA

Kempe's three framing arguments

Geopolitical competition

The fusion of tech breakthroughs blurring the lines between the physical, digital, and biological spheres is reaching an inflection point, and it is already clear that they will usher in a revolution that will determine the shape of the global economy. It will also determine which nations and political constructs may assume the commanding heights of global politics in the coming decade.

Technological superiority

Over the course of history, societies that dominated economic innovation and progress have dominated in international relations - from military superiority to societal progress and prosperity. On balance, technological progress has contributed to higher standards of living in most parts of the world; however, the disproportionate benefit goes to first movers.

Commanding heights

The technological arms race for supremacy in the fourth industrial revolution has essentially become a two-horse contest between the United States and China. We are in the early stages of this race, but how it unfolds and is conducted will do much to shape global human relations. The shift in 2018 in US-China relations from a period of strategic engagement to greater strategic competition has also significantly accelerated the tech arms race.

China vs the US: why China has the edge

It was Vladimir Putin, President of the Russian Federation, who said that "the one who becomes the leader in artificial intelligence will rule the world." In 2017, DeepMind's AlphaGo defeated a Chinese master in Go, a traditional Chinese game. Following this defeat, China launched an ambitious roadmap, called the Next Generation AI Plan, with the goal of becoming the global leader in AI by 2030 in theory, technology, and application. On current trajectories, in the four primary areas of AI over the next five years, China will emerge the winner of this new technology race.

Kempe also quotes Kai-fu Lee, author of the book AI Superpowers, who argues that harnessing the power of AI today - the electricity of the 21st century - requires abundant data, hungry entrepreneurs, AI scientists, and an AI-friendly policy. He believes that China has the edge in all of these. AI has moved from out-of-the-box research, where the US has the expertise, to actual implementation, where China has the edge. Per Kai-fu Lee, China already has the edge in entrepreneurship, data, and government support, and is rapidly catching up to the U.S. in expertise. The world has moved from the age of world-leading expertise, where the US leads, to the age of data, where China wins hands down. Economists call China the Saudi Arabia of data, and with that as the fuel for AI, it has an enormous advantage. The Chinese government, without privacy restrictions, can gather and use data in a manner that is out of reach of any democracy.
Kempe concludes that the nature of this technological arms contest may favor insurgent China rather than the incumbent US.

What are the societal implications of this tech cold war?

He also touched upon the societal implications of AI and the cold war between the US and China. A number of jobs will be lost by 2030. Quoting from Kai-fu Lee's book, Kempe says that job displacement caused by artificial intelligence and advanced robotics could affect up to 54 million US workers, around 30% of the US labor force, and up to 100 million Chinese workers, around 12% of the Chinese labor force. What is the way forward given the huge societal implications of the bilateral race underway? Kempe sees three possibilities.

A sloppy status quo

A status quo where China and the US continue to cooperate but increasingly view each other with suspicion. They will manage their rising differences and distrust imperfectly, never bridging them entirely, but also not burning bridges, whether between researchers, corporations, or others.

Techno cold war

China and the US turn the global tech contest into more of a zero-sum battle for global domination. They organize themselves in a manner that separates their tech sectors from each other and ultimately divides up the world.

Collaborative future - the one we hope for

Nicholas Thompson and Ian Bremmer argued in a Wired interview that despite the two countries' societal differences, the US should wrap China in a tech embrace. The two countries should work together to establish international standards to ensure that the algorithms governing people's lives and livelihoods are transparent and accountable. They should recognize that while the geopolitics of technological change is significant, even more important will be the challenges AI poses to all societies across the world in terms of job automation and the social disruptions that may come with it. It may sound utopian to expect the US and China to cooperate in this manner, but this is what we should hope for. To do otherwise would be self-defeating and at the cost of others in the global community, which needs our best thinking to navigate the challenges of the fourth industrial revolution.

Kempe concludes his presentation with a quote from Henry Kissinger, former US Secretary of State and National Security Advisor: "We're in a position in which the peace and prosperity of the world depend on whether China and the US can find a method to work together, not always in agreement, but to handle our disagreements... This is the key problem of our time."

Note: All images in this article are taken from Frederick Kempe's presentation.

We must change how we think about AI, urge AI founding fathers
Does AI deserve to be so overhyped?
Alarming ways governments are using surveillance tech to watch you

Conversational AI in 2018: An arms race of new products, acquisitions, and more

Bhagyashree R
21 Jan 2019
5 min read
Conversational AI is one of the most interesting applications of artificial intelligence in recent years. While the trend isn't yet ubiquitous in the way that recommendation systems are (perhaps unsurprisingly), it has been successfully productized by a number of tech giants, in the form of Google Home and Amazon Echo (which is 'powered by' Alexa).

The conversational AI arms race

Arguably, 2018 saw a bit of an arms race in conversational AI. As well as Google and Amazon, the likes of IBM, Microsoft, and Apple have wanted a piece of the action. Here are some of the new conversational AI tools and products these companies introduced this year.

Google

Google worked towards enhancing its conversational interface development platform, Dialogflow. In July, at the Google Cloud Next event, it announced several improvements and new capabilities for Dialogflow, including Text to Speech via DeepMind's WaveNet and Dialogflow Phone Gateway for telephony integration. It also launched a new product called Contact Center AI, which comes with Dialogflow Enterprise Edition and additional capabilities to assist live agents and perform analytics.

Google Assistant became better at having a back-and-forth conversation with the help of Continued Conversation, which was unveiled at the Google I/O conference. The Assistant became multilingual in August, which means users can speak to it in more than one language at a time, without having to adjust their language settings. Users can enable this multilingual functionality by selecting two of the supported languages. Following in the footsteps of Amazon, Google also launched its own smart display, named Google Home Hub, at the 'Made by Google' event held in October.

Microsoft

Microsoft in 2018 introduced and improved various bot-building tools for developers. In May, at the Build conference, Microsoft announced major updates to its conversational AI tools: Azure Bot Service, Microsoft Cognitive Services Language Understanding, and QnAMaker. To enable intelligent bots to learn from example interactions and handle common small talk, it launched new experimental projects named Conversation Learner and Personality Chat. At Microsoft Ignite, Bot Framework SDK V4.0 was made generally available. Later, in November, Microsoft announced the general availability of the Bot Framework Emulator V4 and the Web Chat control.

In May, to drive more research and development in its conversational AI products, Microsoft acquired Semantic Machines and established a conversational AI center of excellence in Berkeley. In November, the organization's acquisition of Austin-based bot startup XOXCO was a clear indication that it wants to get serious about using artificial intelligence for conversational bots. Producing guidelines on developing 'responsible' conversational AI further confirmed Microsoft wants to play a big part in the future evolution of the area. Microsoft was also chosen as the tech partner by UK-based conversational AI startup ICS.ai. The team at ICS are using Azure and LUIS from Microsoft in their public sector AI chatbots, aimed at higher education, healthcare trusts, and county councils.

Amazon

Amazon, with the aim of improving Alexa's capabilities, released the Alexa Skills Kit (ASK), which consists of APIs, tools, documentation, and code samples that developers can use to build new skills for Alexa. In September, it announced a preview of a new design language named Alexa Presentation Language (APL).
With APL, developers can build visual skills that include graphics, images, slideshows, and video, and customize them for different device types. Amazon's smart speaker, the Echo Dot, saw amazing success, becoming the best seller in the smart speaker category on Amazon. At its 2018 hardware event in Seattle, Amazon announced the launch of a redesigned Echo Dot and a new addition to its Alexa-powered A/V devices, the Echo Plus.

As well as the continuing success of Alexa and the Amazon Echo, Amazon's decision to launch the Alexa Fellowship at a number of leading academic institutions also highlights that, for the biggest companies, conversational AI is as much about research and exploration as it is about products. Like Microsoft, it appears that Amazon is well aware that conversational AI is an area only in its infancy, still in development - as much as great products, it requires clear thinking and cutting-edge insight to ensure that it develops in a way that is both safe and impactful.

What's next?

This huge array of products is a result of advances in deep learning research. Conversational AI is no longer limited to small tasks like setting an alarm or searching for the best restaurant; we can now have a back-and-forth conversation with a conversational agent. But, needless to say, it still needs more work. Conversational agents are yet to meet user expectations when it comes to sensing and responding with emotion. In the coming years, we will see these systems get better at understanding and generating natural language. They will be able to have reasonably natural conversations with humans in certain domains, grounded in context. Also, the continuous development of IoT will provide AI systems with more context.

Apple has introduced Shortcuts for iOS 12 to automate your everyday tasks
Microsoft amplifies focus on conversational AI: Acquires XOXCO; shares guide to developing responsible bots
Amazon is supporting research into conversational AI with Alexa fellowships

“All of my engineering teams have a machine learning feature on their roadmap” - Will Ballard talks artificial intelligence in 2019 [Interview]

Packt Editorial Staff
02 Jan 2019
3 min read
The huge advancements of deep learning and artificial intelligence were perhaps the biggest story in tech in 2018. But we wanted to know what the future might hold - luckily, we were able to speak to Packt author Will Ballard about what he sees in store for artificial intelligence in 2019 and beyond.

Will Ballard is the chief technology officer at GLG, responsible for engineering and IT. He was also responsible for the design and operation of large data centers that helped run site services for customers including Gannett, Hearst Magazines, NFL, NPR, The Washington Post, and Whole Foods. He has held leadership roles in software development at NetSolve (now Cisco), NetSpend, and Works (now Bank of America). Explore Will Ballard's Packt titles here.

Packt: What do you think the biggest development in deep learning / AI was in 2018?

Will Ballard: I think attention models beginning to take the place of recurrent networks is a pretty impressive breakout on the algorithm side.

In Packt's 2018 Skill Up survey, developers across disciplines and job roles identified machine learning as the thing they were most likely to be learning in the coming year. What do you think of that result? Do you think machine learning is becoming a mandatory multidiscipline skill, and why?

Almost all of my engineering teams have an active, or a planned, machine learning feature on their roadmap. We've been able to get all kinds of engineers with different backgrounds to use machine learning -- it really is just another way to make functions -- probabilistic functions -- but functions.

What do you think the most important new deep learning/AI technique to learn in 2019 will be, and why?

In 2019 -- I think it is going to be all about PyTorch and TensorFlow 2.0, and learning how to host these on cloud PaaS.

The benefits of automated machine learning and metalearning

How important do you think automated machine learning and metalearning will be to the practice of developing AI/machine learning in 2019? What benefits do you think they will bring?

Even 'simple' automation techniques like grid search and running multiple different algorithms on the same data are big wins when mastered. There is almost no telling which model is 'right' till you try it, so why not let a cloud of computers iterate through scores of algorithms and models to give you the best available answer?

Artificial intelligence and ethics

Do you think ethical considerations will become more relevant to developing AI/machine learning algorithms going forward? If yes, how do you think this will be implemented?

I think the ethical issues are important on outcomes, and on how models are used, but aren't the place of algorithms themselves.

If a developer was looking to start working with machine learning/AI, what tools and software would you suggest they learn in 2019?

Python and PyTorch.
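Ballard's point about 'simple' automation is easy to try for yourself. The sketch below (using scikit-learn purely as an illustration - it is not something from the interview) grid-searches one model's hyperparameters and then compares several algorithms on the same data:

```python
# Letting the machine iterate through hyperparameters and algorithms for you.
# Assumes scikit-learn is installed; the data set and parameter grid are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

# Grid search: exhaustively try hyperparameter combinations for one model
grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [50, 100, 200], "max_depth": [None, 5, 10]},
    cv=5,
)
grid.fit(X, y)
print("Best random forest:", grid.best_params_, round(grid.best_score_, 3))

# Running multiple different algorithms on the same data and comparing them
candidates = {
    "logistic regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "support vector machine": make_pipeline(StandardScaler(), SVC()),
    "random forest": grid.best_estimator_,
}
for name, model in candidates.items():
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {score:.3f}")
```

None of this tells you which model is 'right' in advance - which is exactly Ballard's point - but it does let the computers do the iterating for you.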

NeurIPS 2018: How machine learning experts can work with policymakers to make good tech decisions [Invited Talk]

Bhagyashree R
18 Dec 2018
6 min read
At the 32nd annual NeurIPS conference held earlier this month, Edward William Felten, a professor of computer science and public affairs at Princeton University, spoke about how decision makers and tech experts can work together to make better policies. The talk was aimed at answering questions such as why public policy should matter to AI researchers, what role researchers can play in policy debates, and how researchers can help bridge divides between the research and policy communities.

While AI and machine learning are being used in high-impact areas and have seen heavy adoption in every field, in recent years they have also gained a lot of attention from policymakers. Technology has become a huge topic of discussion among policymakers mainly because of its cases of failure and how it is being used or misused. They have now started formulating laws and regulations, and holding discussions about how society will govern the development of these technologies. Prof. Felten explained how constructive engagement with policymakers will lead to better outcomes for technology, government, and society.

Why should tech be regulated?

Regulating tech is important, and for that researchers, data scientists, and other people in tech fields have to close the gap between their research labs and cubicles and society. Prof. Felten emphasizes that it is up to the tech people to bridge this gap, as we not only have the opportunity but also a duty to be more active and productive in participating in public life.

Many people are coming to the conclusion that tech should be regulated before it is too late. In a piece published by the Wall Street Journal, three experts debated whether the government should regulate AI. One of them, Ryan Calo, explains, "One of the ironies of artificial intelligence is that proponents often make two contradictory claims. They say AI is going to change everything, but there should be no changes to the law or legal institutions in response."

Prof. Felten points out that laws and policies are meant to change in order to adapt to current conditions. They are not written once and for all for the cases of today and the future; rather, law is a living system that adapts to what is going on in society. And if we believe that technology is going to change everything, we can expect that law will change.

Prof. Felten also said that not only tech researchers and policymakers but society too should have some say in how the technology is developed: "After all the people who are affected by the change that we are going to cause deserve some say in how that change happens, how it is used. If we believe in a society which is fundamentally democratic in which everyone has a stake and everyone has a voice then it is only fair that those lives we are going to change have some say in how that change come about and what kind of changes are going to happen and which are not."

How experts can work with decision makers to make good tech decisions

There are three key approaches researchers can take when engaging with policymakers on decisions about technology.

Engage in a two-way dialogue with policymakers

As researchers, we might think that we are tech experts or scientists who do not need to get involved in politics - we just need to share the facts we know and our job is done. But if researchers really want to maximize their impact in policy debates, they need to combine the knowledge and preferences of policymakers with their own knowledge and preferences.
This means taking into account what policymakers might already have heard about a particular subject and the issues or approaches that resonate with them. Prof. Felten explains that this type of understanding and exchange of ideas happens in two stages. Researchers need to ask policymakers several questions - not as a one-time thing, but as a multi-round protocol. They have to go back and forth with the person and build engagement and mutual trust over time. Then they need to put themselves in the shoes of a decision maker and understand how to structure the decision space for them.

Be present in the room when the decisions are being made

To have influence on the decisions that get made, researchers need to have "boots on the ground." Though not everyone has to engage in this deep and long-term process of decision making, we need some people from the community to engage on behalf of the community. Researchers need to be present in the room when the decisions are being made. This means taking posts as advisers or civil servants. We already have a range of such posts at both local and national government levels, alongside a range of opportunities to engage less formally in policy development and consultations.

Creating a career path and rewarding policy engagement

To drive this engagement, we need to create a career path that rewards policy engagement. We should have a way for researchers to move between policy and research careers. Prof. Felten pointed to a range of US-based initiatives that seek to bring those with technical expertise into policy-oriented roles, such as the US Digital Service. He adds that if we do not create these career paths, and if this becomes something that people can do only after sacrificing their careers, then very few people will do it. This needs to be an activity that we learn to respect when people in the community do it well. We need to build incentives - whether career incentives in academia, or an understanding that working in government or on policy issues is a valuable part of one kind of academic career rather than a detour or a stop.

To watch the full talk, check out the NeurIPS Facebook page.

NeurIPS 2018: Rethinking transparency and accountability in machine learning
NeurIPS 2018: Developments in machine learning through the lens of Counterfactual Inference [Tutorial]
Accountability and algorithmic bias: Why diversity and inclusion matters [NeurIPS Invited Talk]

NeurIPS 2018: Rethinking transparency and accountability in machine learning

Bhagyashree R
16 Dec 2018
8 min read
Key takeaways from the discussion:

- To solve problems with machine learning, you must first understand them.
- Different people or groups of people are going to define a problem in different ways, so we shouldn't assume that the way we want to frame the problem computationally is the right way.
- If we allow that our systems include people and society, it is clear that we have to help negotiate values, not simply define them.

Last week, at the 32nd annual NeurIPS conference, Nitin Kohli, Joshua Kroll, and Deirdre Mulligan presented the common pitfalls we see when studying the human side of machine learning. Machine learning is being used in high-impact areas like medicine, criminal justice, employment, and education for making decisions. In recent years, we have seen that this use of machine learning and algorithmic decision making has resulted in unintended discrimination. It's becoming clear that even models developed with the best of intentions may exhibit discriminatory biases and perpetuate inequality.

Although researchers have been analyzing how to put concepts like fairness, accountability, transparency, explanation, and interpretability into practice in machine learning, properly defining these things can prove a challenge. Attempts have been made to define them mathematically, but this can bring new problems. This is because applying mathematical logic to human concepts that have unique and contested political and social dimensions necessarily has blind spots - every point of contestation can't be integrated into a single formula. In turn, this can cause friction with other disciplines as well as the public. Based on their research on what various terms mean in different contexts, Kohli, Kroll, and Mulligan drew out some of the most common misconceptions machine learning researchers and practitioners hold.

Sociotechnical problems

To find a solution to a particular problem, data scientists need precise definitions. But how can we verify that these definitions are correct? Indeed, many definitions will be contested, depending on who you are and what you want them to mean. "A definition that is fair to you will not necessarily be fair to me," remarks Kroll.

Kroll explained that while definitions can be unhelpful, they are nevertheless essential from a mathematical perspective. This means there appears to be an unresolved conflict between concepts and mathematical rigor. But there might be a way forward. Perhaps it's wrong to think in this dichotomy of logical rigor versus the messy reality of human concepts, and one of the ways out of this impasse is to get beyond it. Although it's tempting to put the technical and mathematical dimension on one side and the social and political aspect on the other, we should instead see them as intricately related. They are, Kroll suggests, socio-technical problems. Kroll goes on to say that we cannot ignore the social consequences of machine learning: "Technologies don't live in a vacuum and if we pretend that they do we kind of have put our blinders on and decided to ignore any human problems."

Fairness in machine learning

In the real world, fairness is a concept directly linked to processes. Think, for example, of the voting system. Citizens cast votes for their preferred candidates and the candidate who receives the most support is elected. Here, we can say that even if the winning candidate was not the one a citizen voted for, that citizen at least got the chance to participate in the process.
This type of fairness is called procedural fairness. In the technical world, however, fairness is often viewed in a subtly different way: when you place it in a mathematical context, fairness centers on outcome rather than process.

Kohli highlighted that trade-offs between these different concepts can't be avoided; they're inevitable. A mathematical definition of fairness places a constraint on the behavior of a system, and this constraint narrows down the class of models that can satisfy the conditions. So, if we decide to add too many fairness constraints to the system, some of them will be self-contradictory.

One more important point machine learning practitioners should keep in mind is that when we talk about the fairness of a system, that system isn't a self-contained and coherent thing. It is not a logical construct - it's a social one. This means there is a whole host of values, ideas, and histories that have an impact on its reality. In practice, this ultimately means that the complexity of the real world from which we draw and analyze data can have an impact on how a model works. Kohli explained this by saying, "it doesn't really matter... whether you are building a fair system if the context in which it is developed and deployed in is fundamentally unfair."

Accountability in machine learning

Accountability is ultimately about trust. It's about the extent to which you can be sure you know what is 'true' about a system - that you know how it works and why it does things in certain ways. In more practical terms, it's all about invariance and reliability. To ensure accountability inside machine learning models, we need to follow a layered model. The bottom layer is an accounting or recording layer that keeps track of what a given system is doing and the ways in which it might have been changed. The next layer is a more analytical layer: this is where those records from the bottom layer are analyzed, with decisions made about performance - whether anything needs to be changed and how it should be changed. The final and top-most layer is about responsibility. It's where the proverbial buck stops - with those outside of the algorithm, those involved in its construction. "Algorithms are not responsible, somebody is responsible for the algorithm," explains Kroll.

Transparency

Transparency is a concept heavily tied up with accountability; arguably you have no accountability without transparency. The layered approach discussed above should help with transparency, but it's also important to remember that transparency is about much more than simply making data and code available. It demands that the decisions made in the development of the system are made available and clear too. Kroll emphasizes, "to the person at the ground-level for whom the decisions are being taken by some sort of model, these technical disclosures aren't really useful or understandable."

Explainability

In his paper Explanation in Artificial Intelligence: Insights from the Social Sciences, Tim Miller describes what explainable artificial intelligence is. According to Miller, explanation takes many forms, such as causal, contrastive, selective, and social. A causal explanation gives the reasons why something happened, for example, while contrastive explanations can provide answers to questions like "Why P rather than not-P?". But the most important point here is that explanations are selective.
An explanation cannot include all the reasons why something happened; explanations are always context-specific, a response to a particular need or situation. Think of it this way: if someone asks you why the toaster isn't working, you could just say that it's broken. That might be satisfactory in some situations, but you could, of course, offer a more substantial explanation, outlining what was technically wrong with the toaster, how that technical fault came to be there, how the manufacturing process allowed that to happen, how the business would allow that manufacturing process to make that mistake... you could, of course, go on and on.

Data is not the truth

Today, there is a huge range of datasets available to help you develop different machine learning models. These models can be useful, but it's essential to remember that they are models. A model isn't the truth - it's an abstraction, a representation of the world in a very specific way. One way of taking this fact into account is the concept of 'construct validity'. This sounds complicated, but all it really refers to is the extent to which a test - say, a machine learning algorithm - actually measures what it says it's trying to measure. The concept is widely used in disciplines like psychology, but in machine learning it simply refers to the way we validate a model based on its historical predictive accuracy. In a nutshell, it's important to remember that just as data is an abstraction of the world, models are also an abstraction of the data. There's no way of changing this, but having an awareness that we're dealing in abstractions ensures that we do not lapse into the mistake of thinking we are in the realm of 'truth'.

Building fair(er) systems will ultimately require an interdisciplinary approach, involving domain experts working in a variety of fields. If machine learning and artificial intelligence are to make a valuable and positive impact in fields such as justice, education, and medicine, it's vital that those working in those fields work closely with those with expertise in algorithms. This won't fix everything, but it will be a more robust foundation from which we can begin to move forward.

You can watch the full talk on the Facebook page of NeurIPS.

Researchers unveil a new algorithm that allows analyzing high-dimensional data sets more effectively, at NeurIPS conference
Accountability and algorithmic bias: Why diversity and inclusion matters [NeurIPS Invited Talk]
NeurIPS 2018: A quick look at data visualization for Machine learning by Google PAIR researchers [Tutorial]

Deep Learning Indaba presents the state of Natural Language Processing in 2018

Sugandha Lahoti
12 Dec 2018
5 min read
The 'Strengthening African Machine Learning' conference, organized by the Deep Learning Indaba at Stellenbosch, South Africa, is ongoing right now. This six-day conference will celebrate and strengthen machine learning in Africa through state-of-the-art teaching, networking, policy debate, and support programmes. Yesterday, three conference organizers - Sebastian Ruder, Herman Kamper, and Stephan Gouws - asked tech experts for their view on the state of Natural Language Processing, and more specifically these four questions:

- What do you think are the three biggest open problems in Natural Language Processing at the moment?
- What would you say is the most influential work in Natural Language Processing in the last decade, if you had to pick just one?
- What, if anything, has led the field in the wrong direction?
- What advice would you give a postgraduate student in Natural Language Processing starting their project now?

The tech experts interviewed included the likes of Yoshua Bengio, Hal Daumé III, Barbara Plank, Miguel Ballesteros, Anders Søgaard, Lea Frermann, Michael Roth, Annie Louise, Chris Dyer, Felix Hill, Kevin Knight, and more.

https://twitter.com/seb_ruder/status/1072431709243744256

Biggest open problems in Natural Language Processing at the moment

Although each expert talked about a variety of open issues in Natural Language Processing, the following common themes recurred.

No 'real' natural language understanding

Many experts argued that natural language understanding is central, and also important for natural language generation. They agreed that most of our current Natural Language Processing models do not have a "real" understanding. What is needed is to build models that incorporate common sense, and to work out what (biases, structure) should be built explicitly into these models. Dialogue systems and chatbots were mentioned in several responses. Maletšabisa Molapo, a Research Scientist at IBM Research and one of the experts, answered, "Perhaps this may be achieved by general NLP Models, as per the recent announcement from Salesforce Research, that there is a need for NLP architectures that can perform well across different NLP tasks (machine translation, summarization, question answering, text classification, etc.)"

NLP for low-resource scenarios

Another open problem is using NLP in low-resource scenarios. This includes generalization beyond the training data, learning from small amounts of data, and techniques such as domain transfer, transfer learning, and multi-task learning. It also covers a range of supervision regimes: semi-supervised, weakly-supervised, "Wiki-ly" supervised, distantly-supervised, lightly-supervised, minimally-supervised, and unsupervised learning. Per Karen Livescu, Associate Professor at the Toyota Technological Institute at Chicago, "Dealing with low-data settings (low-resource languages, dialects (including social media text "dialects"), domains, etc.). This is not a completely "open" problem in that there are already a lot of promising ideas out there; but we still don't have a universal solution to this universal problem."

Reasoning about large or multiple contexts

Experts believed that NLP struggles with reasoning over large contexts. These large contexts can be either text or spoken documents, and current models also lack the incorporation of common sense.
According to Isabelle Augenstein, a tenure-track assistant professor at the University of Copenhagen, "Our current models are mostly based on recurrent neural networks, which cannot represent longer contexts well. One recent encouraging work in this direction I like is the NarrativeQA dataset for answering questions about books. The stream of work on graph-inspired RNNs is potentially promising, though has only seen modest improvements and has not been widely adopted due to them being much less straight-forward to train than a vanilla RNN."

Defining problems, building diverse datasets, and evaluation procedures

"Perhaps the biggest problem is to properly define the problems themselves. And by properly defining a problem, I mean building datasets and evaluation procedures that are appropriate to measure our progress towards concrete goals. Things would be easier if we could reduce everything to Kaggle style competitions!" - Mikel Artetxe.

Experts believe that current NLP datasets need to be re-evaluated. A new generation of evaluation datasets and tasks is required that shows whether NLP techniques generalize across the true variability of human language. More diverse datasets are also needed. "Datasets and models for deep learning innovation for African Languages are needed for many NLP tasks beyond just translation to and from English," said Molapo.

Advice to a postgraduate student in NLP starting their project

Do not limit yourself to reading NLP papers. Read a lot of machine learning, deep learning, reinforcement learning papers. A PhD is a great time in one's life to go for a big goal, and even small steps towards that will be valued. — Yoshua Bengio

Learn how to tune your models, learn how to make strong baselines, and learn how to build baselines that test particular hypotheses. Don't take any single paper too seriously, wait for its conclusions to show up more than once. — George Dahl

I believe scientific pursuit is meant to be full of failures. If every idea works out, it's either because you're not ambitious enough, you're subconsciously cheating yourself, or you're a genius, the last of which I heard happens only once every century or so. So, don't despair! — Kyunghyun Cho

Understand psychology and the core problems of semantic cognition. Understand machine learning. Go to NeurIPS. Don't worry about ACL. Submit something terrible (or even good, if possible) to a workshop as soon as you can. You can't learn how to do these things without going through the process. — Felix Hill

Make sure to go through the complete list of all expert responses for better insights.

Google open sources BERT, an NLP pre-training technique
Use TensorFlow and NLP to detect duplicate Quora questions [Tutorial]
Intel AI Lab introduces NLP Architect Library

Accountability and algorithmic bias: Why diversity and inclusion matters [NeurIPS Invited Talk]

Sugandha Lahoti
08 Dec 2018
4 min read
One of the most awaited machine learning conferences, NeurIPS 2018, is happening throughout this week in Montreal, Canada. It will feature a series of tutorials, invited talks, product releases, demonstrations, presentations, and announcements related to machine learning research. For the first time, NeurIPS invited a diversity and inclusion (D&I) speaker, Laura Gomez, to talk about the lack of diversity in the tech industry, which leads to biased algorithms, faulty products, and unethical tech.

Laura Gomez is the CEO of Atipica, which helps tech companies find and hire diverse candidates. Being a Latina woman herself, she had to face oppression when seeking capital and funds for her startup while trying to establish herself in Silicon Valley. This experience led to her realization that there is a strong need to talk about why diversity and inclusion matter. Her efforts were not in vain: she recently raised $2M in seed funding led by True Ventures. "At Atipica, we think of Inclusive AI in terms of data science, algorithms, and their ethical implications. This way you can rest assure our models are not replicating the biases of humans that hinder diversity while getting patent-pending aggregate demographic insights of your talent pool," reads the website.

She talked about her journey as a Latina woman in the tech industry. She reminisced about how she was the only one like her who got an internship with Hewlett Packard, and the fact that she hated it. Nevertheless, she decided to stay, determined not to let the industry turn her into a victim. She believes she made the right choice going forward with tech; now, years later, diversity is dominating the conversation in the industry. After HP, she also worked at Twitter and YouTube, helping them translate and localize their applications for a global audience. She is also a founding advisor of Project Include, a non-profit organization run by women that uses data and advocacy to accelerate diversity and inclusion solutions in the tech industry.

She opened her talk by agreeing with a quote from Safiya Noble, who wrote Algorithms of Oppression: "Artificial Intelligence will become a major human rights issue in the twenty-first century." She believes we need to talk about difficult questions, such as where AI is heading and where we should hold ourselves and each other accountable. She urges people to evaluate their role in AI, bias, and inclusion, to find the empathy and value in difficult conversations, and to go beyond their immediate surroundings to consider the broader consequences. It is important to build accountable AI in a way that allows humanity to triumph.

She touched upon discriminatory moves by tech giants like Amazon and Google. Amazon recently killed off its AI recruitment tool because it couldn't stop discriminating against women. She also criticized Facebook's Myanmar operation, where Facebook data scientists were building algorithms to handle hate speech; they didn't understand the importance of localization and language, or actually internationalize their own algorithms to be inclusive of all countries. She also talked about algorithmic bias in library discovery systems, as well as how even 'black robots' are being impacted by racism. She also condemned the work of Palmer Luckey, who is helping U.S. immigration agents at the border wall identify Latin American refugees.
Finally, she urged people to take three major steps to progress towards being inclusive:

- Be an ally
- Think of inclusion as an approach, not a feature
- Work towards an ethical AI

Head over to the NeurIPS Facebook page for the entire talk and other sessions happening at the conference this week.

NeurIPS 2018: Deep learning experts discuss how to build adversarially robust machine learning models
NeurIPS 2018 paper: DeepMind researchers explore autoregressive discrete autoencoders (ADAs) to model music in raw audio at scale
NeurIPS 2018: A quick look at data visualization for Machine learning by Google PAIR researchers [Tutorial]

How NeurIPS 2018 is taking on its diversity and inclusion challenges

Sugandha Lahoti
06 Dec 2018
3 min read
This year the Neural Information Processing Systems Conference is asking serious questions about how to improve diversity, equity, and inclusion at NeurIPS. "Our goal is to make the conference as welcoming as possible to all," said the new diversity and inclusion chairs introduced this year.

https://twitter.com/InclusionInML/status/1069987079285809152

The Diversity and Inclusion chairs are Hal Daume III, a professor at the University of Maryland and a researcher on machine learning and fairness at Microsoft Research, and Katherine Heller, an assistant professor at Duke University and research scientist at Google Brain. They opened the talk by acknowledging the privilege they hold as a white man and a white woman, and the fact that they do not reflect the diversity of experience in the conference room, much less the world.

They laid out three major goals with respect to inclusion at NeurIPS:

- Learn about the challenges that their colleagues have faced.
- Support those doing the hard work of amplifying the voices of those who have been historically excluded.
- Begin structural changes that will positively impact the community over the coming years.

They urged attendees to start building an environment where everyone can do their best work. They want people to:

- see other perspectives
- remember the feeling of being an outsider
- listen, do research, and learn
- make an effort and speak up

Concrete actions taken by the NeurIPS diversity and inclusion chairs

This year they have assembled an advisory board and run a demographics and inclusion survey. They have also conducted events such as WIML (Women in Machine Learning), Black in AI, LatinX in AI, and Queer in AI. They have established childcare subsidies and other activities in collaboration with Google and DeepMind to support families attending NeurIPS, offering a stipend of up to $100 USD per day. They have revised the Code of Conduct to provide an experience for all participants that is free from harassment, bullying, discrimination, and retaliation. They have been sharing inclusion tips on Twitter, offering advice related to D&I efforts. The conference also offers pronoun stickers (they/them only), first-time attendee stickers, and information for participant needs.

They have also made significant infrastructure improvements for visa handling. They held discussions with people handling visas on location, sent out early invitation letters for visas, and are choosing future locations with visa processing in mind.

In the future, they are also looking to establish a legal team for details around the Code of Conduct. Further, they are looking to make institutional structural changes that support the community and to improve coordination around affinity groups and workshops.

For the first time, NeurIPS also invited a diversity and inclusion (D&I) speaker, Laura Gomez, to talk about the lack of diversity in the tech industry, which leads to biased algorithms, faulty products, and unethical tech.

Head over to the NeurIPS website for interesting tutorials, invited talks, product releases, demonstrations, presentations, and announcements.

NeurIPS 2018: Deep learning experts discuss how to build adversarially robust machine learning models
NeurIPS 2018 paper: DeepMind researchers explore autoregressive discrete autoencoders (ADAs) to model music in raw audio at scale
NeurIPS 2018: A quick look at data visualization for Machine learning by Google PAIR researchers [Tutorial]

Technical and hidden debts in machine learning - Google engineers give their perspective

Prasad Ramesh
06 Nov 2018
6 min read
In a paper, Google engineers have pointed out the various costs of maintaining a machine learning system. The paper, Hidden Technical Debt in Machine Learning Systems, talks about technical debt and other ML-specific debts that are hard to detect or hidden. They found that it is common to incur massive maintenance costs in real-world machine learning systems. They looked at several ML-specific risk factors to account for in system design. These factors include boundary erosion, entanglement, hidden feedback loops, undeclared consumers, data dependencies, configuration issues, changes in the external world, and a number of system-level anti-patterns.

Boundary erosion in complex models

In traditional software engineering, setting strict abstraction boundaries helps enforce logical consistency among the inputs and outputs of a given component. It is difficult to set these boundaries in machine learning systems. Yet machine learning is needed in areas where the desired behavior cannot be effectively expressed with traditional software logic without depending on data. This results in boundary erosion in a couple of areas.

Entanglement

Machine learning systems mix signals together and entangle them, making isolated improvements impossible. A change to one input may change the influence of all the other inputs, so no improvement can be made in isolation. This is referred to as the CACE principle: Change Anything Changes Everything. There are two possible ways to mitigate this:

- Isolate models and serve ensembles. This is useful in situations where the sub-problems decompose naturally, and in many cases ensembles work well because the errors in the component models are not correlated. However, relying on this combination can itself create a strong entanglement, and improving an individual model may make the system less accurate.
- Focus on detecting changes in prediction behavior as they occur.

Correction cascades

There are cases where a problem is only slightly different from another that already has a solution. It can be tempting to reuse the existing model and learn a small correction as a fast way to solve the new problem. This correction model, however, creates a new system dependency on the original model, which makes it significantly more expensive to analyze improvements to either model in the future. The cost increases when correction models are cascaded, and a correction cascade can create an improvement deadlock.

Visibility debt caused by undeclared consumers

Often a model is made widely accessible and is later consumed by other systems. Without access controls, these consumers may be undeclared, silently using the output of a given model as an input to another system. These issues are referred to as visibility debt. Undeclared consumers may also create hidden feedback loops.

Data dependencies cost more than code dependencies

Data dependencies can build debt in much the same way as code dependency debt, only they are more difficult to detect. Without proper tooling to identify them, data dependencies can form large chains that are difficult to untangle. They come in two types.

Unstable data dependencies

To move quickly, it is often convenient to consume signals from other systems as input to your own. But some input signals are unstable: they can qualitatively or quantitatively change behavior over time, either because the other system updates over time or because the change is made explicitly. A mitigation strategy is to create versioned copies of these signals, as sketched below.
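As a rough illustration of that mitigation, here is a minimal Python sketch, assuming a hypothetical feature registry; the signal names and version labels are illustrative, not from the paper. The idea is that a consuming model pins a frozen, versioned copy of an upstream feature transform instead of reading the live, changing one.

```python
from dataclasses import dataclass
from typing import Callable, Dict

# Hypothetical registry of feature transforms, keyed by an explicit version label.
# Pinning a version keeps the consuming model stable even if the upstream team
# later changes how the "user_engagement" signal is computed.
FEATURE_VERSIONS: Dict[str, Callable[[dict], float]] = {
    "user_engagement@v1": lambda raw: raw["clicks"] / max(raw["impressions"], 1),
    "user_engagement@v2": lambda raw: raw["clicks"] / max(raw["sessions"], 1),
}

@dataclass
class PinnedFeature:
    name: str
    version: str

    def compute(self, raw: dict) -> float:
        # Always resolve against the frozen version, never "latest".
        return FEATURE_VERSIONS[f"{self.name}@{self.version}"](raw)

# The downstream model declares exactly which version it was trained against.
engagement = PinnedFeature(name="user_engagement", version="v1")
print(engagement.compute({"clicks": 30, "impressions": 1000, "sessions": 45}))
```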
Underutilized data dependencies

Underutilized data dependencies are input signals that provide little incremental modeling benefit. They make an ML system vulnerable to change where that vulnerability is not necessary. Underutilized data dependencies can enter a model in several ways: via legacy, bundled, epsilon, or correlated features.

Feedback loops

Live ML systems often end up influencing their own behavior as they are updated over time. This leads to analysis debt: it becomes difficult to predict the behavior of a given model before it is released. These feedback loops are hard to detect and address, especially when they build gradually over time, as may be the case when the model is not updated frequently. In a direct feedback loop, a model directly influences the selection of its own future training data. In a hidden feedback loop, two systems influence each other indirectly.

Machine learning system anti-patterns

It is common for systems that incorporate machine learning methods to end up with high-debt design patterns:

- Glue code: Using generic packages results in a glue-code system design pattern, in which a massive amount of supporting code is written to get data into and out of general-purpose packages.
- Pipeline jungles: Pipeline jungles often appear in data preparation as a special case of glue code. They can evolve organically as new sources are added, and the result can become a jungle of scrapes, joins, and sampling steps.
- Dead experimental codepaths: Glue code is often attractive in the short term because none of the surrounding structures need to be reworked. Over time, these accumulated codepaths create a growing debt due to the increasing difficulty of maintaining backward compatibility.
- Abstraction debt: There is a lack of support for strong abstractions in ML systems.
- Common smells: A smell may indicate an underlying problem in a component or system. These can be data smells, multiple-language smells, or prototype smells.

Configuration debt

Debt can also accumulate when configuring a machine learning system. A large system has a wide range of configuration options covering features, data selection, verification methods, and so on. It is common for configuration to be treated as an afterthought. In a mature system, the number of config lines can exceed the number of code lines, and each configuration line has the potential for mistakes.

Dealing with external world changes

ML systems interact directly with the external world, and the external world is rarely stable. Some measures that can be taken to deal with this instability are:

Fixing thresholds in dynamic systems

It is often necessary to pick a decision threshold for a given model to perform some action: to predict true or false, to mark an email as spam or not spam, to show or not show a given advertisement. Manually fixed thresholds can become stale as the world changes, so the paper suggests learning such thresholds from evaluation on held-out validation data.

Monitoring and testing

Unit testing and end-to-end testing cannot ensure the complete, proper functioning of an ML system. For long-term system reliability, comprehensive live monitoring and automated response are critical. The question then is what to monitor; the authors of the paper point out three starting points: prediction bias, limits for actions, and upstream producers.

Other related areas in ML debt

In addition to the areas mentioned above, an ML system may also face debt from other areas, including data testing debt, reproducibility debt, process management debt, and cultural debt.

Conclusion

Moving quickly often introduces technical debt.
The most important insight from this paper, according to the authors, is that technical debt is an issue that both engineers and researchers need to be aware of. Paying down machine learning related technical debt requires commitment, which can often only be achieved by a shift in team culture. Prioritizing, rewarding, and recognizing this effort is important for the long-term health of successful machine learning teams. For more details, you can read the paper on the NIPS website.

Uses of Machine Learning in Gaming
Julia for machine learning. Will the new language pick up pace?
Machine learning APIs for Google Cloud Platform

The ethical dilemmas developers working on Artificial Intelligence products must consider

Amey Varangaonkar
29 Sep 2018
10 min read
Facebook has recently come under the scanner for sharing the data of millions of users without their consent. Its use of Artificial Intelligence to predict customers' behavior, and then to sell this information to advertisers, has come under heavy criticism and has raised concerns over the privacy of users' data. A lot of this, inadvertently, has to do with the 'smart use' of data by companies like Facebook. As Artificial Intelligence continues to revolutionize the industry, and as the applications of AI continue to grow rapidly across a spectrum of real-world domains, the need for regulated, responsible use of AI has become more important than ever. Several ethical questions are being asked of the way the technology is being used and how it is impacting our lives, Facebook being just one of many examples right now. In this article, we look at some of these ethical concerns surrounding the use of AI.

Infringement of users' data privacy

Probably the biggest ethical concern in the use of Artificial Intelligence and smart algorithms is the way companies are using them to gain customer insights without getting the consent of those customers in the first place. Tracking customers' online activity, or using the customer information available on various social media and e-commerce websites to tailor marketing campaigns or advertisements targeted at the customer, is a clear breach of their privacy and sometimes even amounts to 'targeted harassment'. In the case of Facebook, for example, there have been many high-profile instances of misuse and abuse of user data, such as:

- The recent Cambridge Analytica scandal, where Facebook's user data was misused
- Boston-based data analytics firm Crimson Hexagon misusing Facebook user data
- Facebook's involvement in the 2016 election meddling
- Accusations that Facebook, along with Twitter and Google, has a bias against conservative views
- Accusations of discrimination with targeted job ads on the basis of gender and age

How far will tech giants such as Facebook go to fix what they have broken, namely the trust of many of their users? The European Union General Data Protection Regulation (GDPR) is a positive step to curb this malpractice. However, such a regulation needs to be implemented worldwide, which has not been the case yet. There needs to be a universal agreement on the use of public data in the modern connected world. Individual businesses and developers must be accountable and hold themselves ethically responsible when strategizing or designing these AI products, keeping users' privacy in mind.

Risk of automation in the workplace

The most fundamental ethical issue that comes up when we talk about automation, or the introduction of Artificial Intelligence in the workplace, is how it affects the role of human workers. 'Does the AI replace them completely?' is a common question asked by many. Also, if human effort is not going to be replaced by AI and automation, in what way will the worker's role in the organization be affected? The World Economic Forum (WEF) recently released its Future of Jobs report, in which it highlights the impact of technological advancements on the current workforce. The report states that machines will be able to do half of the current job tasks within the next five years.
A few important takeaways from this report with regard to automation and its impact on skilled human workers are:

- Existing jobs will be augmented through technology to create new tasks and resulting job roles altogether, from piloting drones to remotely monitoring patients.
- The inclusion of AI and smart algorithms is going to reduce the number of workers required for certain work tasks.
- The layoffs in certain job roles will also involve difficult transitions for many workers and investment in reskilling and training.

As we enter the age of machine-augmented human productivity, commonly referred to as collaborative automation, employees will be trained to work alongside AI tools and systems, empowering them to work more quickly and efficiently. This will come with an additional cost of training, which the organization will have to bear.

Artificial stupidity - how do we eliminate machine-made mistakes?

It goes without saying that learning happens over time, and it is no different for AI. AI systems are fed lots and lots of training data and real-world scenarios. Once a system is fully trained, it is made to predict outcomes on real-world test data, and the accuracy of the model is then determined and improved. It is only natural, however, that the training model cannot be fed every possible scenario there is, and there may be unusual scenarios or test cases that the AI is unprepared for or can be fooled by. Images whose patterns a deep neural network is unable to identify are one example of this. Another example is the presence of random dots in an image that leads the AI to think there is a pattern where there really isn't any. Deceptive perceptions like this may lead to unwanted errors, which aren't really the AI's fault; it is just the way the systems are trained. These errors, however, can prove costly to a business and can lead to potential losses. What is the way to eliminate these possibilities? How do we identify and weed out the training errors or inadequacies that go a long way in determining whether an AI system can work with near 100% accuracy? These are the questions that need answering. It also leads us to the next problem: who takes accountability for the AI's failure?

If the AI fails or misbehaves, who takes the blame?

When an AI system designed to do a particular task fails to perform it correctly for some reason, who is responsible? This aspect needs careful consideration and planning before any AI system can be adopted, especially at enterprise scale. When a business adopts an AI system, it does so assuming the system is fail-safe. However, the AI system may not have been designed or trained effectively because either:

- It was not trained properly using relevant datasets, or
- It was not used in a relevant context and, as a result, gave inaccurate predictions

Any failure like this could lead to potentially millions in losses and could adversely affect the business, not to mention have adverse unintended effects on society. Who is accountable in such cases? Is it the AI developer who designed the algorithm or the model? Or is it the end user or the data scientist who is using the tool as a customer? Clear expectations and accountabilities need to be defined at the very outset, and counter-measures need to be put in place to avoid such failures, so that losses are minimal and the business is not impacted severely.
Bias in Artificial Intelligence - A key problem that needs addressing

One of the key questions in adopting Artificial Intelligence systems is whether they can be trusted to be impartial, fair, and neutral. In her NIPS 2017 keynote, Kate Crawford, Principal Researcher at Microsoft and Co-Founder and Director of Research at the AI Now Institute, argued that bias in AI cannot just be treated as a technical problem; the underlying social implications need to be considered as well. For example, machine learning software to detect potential criminals that tends to be biased against a particular race raises a lot of questions about its ethical credibility. Or when a camera refuses to detect a particular kind of face because it does not fit the standard template of a human face in its training dataset, it naturally raises the racism debate. Although AI algorithms are designed by humans, it is important that the data used to train these algorithms is as diverse as possible and factors in all possible kinds of variation in order to avoid these biases. AI is meant to give out fair, impartial predictions without any preset predispositions or bias, and this is one of the key challenges that researchers and AI developers have not yet overcome.

The problem of Artificial Intelligence in cybersecurity

As AI revolutionizes the security landscape, it is also raising the bar for attackers. With time it is getting more difficult to breach security systems. To tackle this, attackers are resorting to state-of-the-art machine learning and other AI techniques to breach systems, while security professionals adopt their own AI mechanisms to prevent and protect systems from these attacks. The cybersecurity firm Darktrace reported an attack in 2017 that used machine learning to observe and learn user behavior within a network. This is a classic case of the disastrous consequences that follow when technology falls into the wrong hands and the necessary steps cannot be taken to prevent the unethical use of AI, in this case a cyber attack. The threats posed by a vulnerable AI system with no security measures in place, one that can be easily hacked into and misused, need no introduction. This is not a desirable situation for any organization to be in, especially when it has invested thousands or even millions of dollars in the technology. When an AI is developed, strict measures should be taken to ensure it is accessible only to a specific set of people and can be altered or changed only by its developers or by authorized personnel.

Just because you can build an AI, should you?

The more potent AI becomes, the more potentially devastating its applications can be. Whether it is replacing human soldiers with AI drones or developing autonomous weapons, the unmitigated use of AI for warfare can have consequences far beyond imagination. Earlier this year, we saw Google employees quit the company over its ties with the Pentagon, protesting against the use of AI for military purposes. The employees were strongly of the opinion that the technology they developed has no place on a battlefield, and should ideally be used for the benefit of mankind, to make human lives better. Google isn't an isolated case of a tech giant lost in these murky waters.
Microsoft employees likewise protested the company's collaboration with US Immigration and Customs Enforcement (ICE) over building facial recognition systems for the agency, especially after revelations that ICE had confined immigrant children in cages and inhumanely separated asylum-seeking families at the US-Mexico border. Amazon is also one of the key tech vendors of facial recognition software to ICE, though its employees did not openly pressure the company to drop the project. While these companies have assured their employees of no direct involvement, it is quite clear that the major tech giants are supplying key AI technology to the government for defensive (or offensive, who knows) military measures. The secure and ethical use of Artificial Intelligence for non-destructive purposes remains one of the biggest challenges in its adoption today.

There are many risks and caveats associated with implementing an AI system today. Given the tools and techniques currently at our disposal, it is far-fetched to think of implementing a flawless Artificial Intelligence within a given infrastructure. While we consider all the risks involved, it is also important to reiterate one important fact: when we look at the bigger picture, all technological advancements effectively translate to better lives for everyone. AI has tremendous potential; whether its implementation is responsible is completely down to us, humans.

Read more

Sex robots, artificial intelligence, and ethics: How desire shapes and is shaped by algorithms
New cybersecurity threats posed by artificial intelligence
Google's prototype Chinese search engine 'Dragonfly' reportedly links searches to phone numbers

What is a convolutional neural network (CNN)? [Video]

Richard Gall
25 Sep 2018
5 min read
What is a convolutional neural network, exactly? Well, let's start with the basics: a convolutional neural network (CNN) is a type of neural network that is most often applied to image processing problems. You've probably seen them in action anywhere a computer is identifying objects in an image. But you can use convolutional neural networks in natural language processing projects, too. The fact that they are useful for these fast-growing areas is one of the main reasons they're so important in deep learning and artificial intelligence today.

What makes a convolutional neural network unique?

Once you understand how a convolutional neural network works and what makes it unique from other neural networks, you can see why they're so effective for processing and classifying images. But let's first take a regular neural network. A regular neural network has an input layer, hidden layers, and an output layer. The input layer accepts inputs in different forms, the hidden layers perform calculations on these inputs, and the output layer delivers the outcome of the calculations and extractions. Each of these layers contains neurons that are connected to neurons in the previous layer, and each connection has its own weight. This means you aren't making any assumptions about the data being fed into the network: great usually, but not if you're working with images or language.

Convolutional neural networks work differently, as they treat data as spatial. Instead of each neuron being connected to every neuron in the previous layer, neurons are only connected to neurons close to them, and they share the same weights. This simplification in the connections means the network preserves the spatial aspect of the data set; it means your network doesn't think an eye is all over the image. The word 'convolutional' refers to the filtering process that happens in this type of network. Think of it this way: an image is complex, and a convolutional neural network simplifies it so it can be better processed and 'understood'.

What's inside a convolutional neural network?

Like a normal neural network, a convolutional neural network is made up of multiple layers. There are a couple of layers that make it unique: the convolutional layer and the pooling layer. However, like other neural networks, it will also have a ReLU (rectified linear unit) layer and a fully connected layer. The ReLU layer acts as an activation function, ensuring non-linearity as the data moves through each layer in the network; without it, the stacked layers would collapse into a single linear transformation and lose the expressive power we want to maintain. The fully connected layer, meanwhile, allows you to perform classification on your dataset.

The convolutional layer

The convolutional layer is the most important, so let's start there. It works by sliding a filter over an array of image pixels, which creates what's called a convolved feature map. It's a bit like looking at an image through a window, which allows you to identify specific features you might not otherwise be able to see.

The pooling layer

Next we have the pooling layer, which downsamples or reduces the sample size of a particular feature map. This also makes processing much faster, as it reduces the number of parameters the network needs to process. The output of this is a pooled feature map. There are two ways of doing this: max pooling, which takes the maximum input of a particular convolved feature, or average pooling, which simply takes the average. A small sketch of these two layer types follows below.
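To make the convolutional and pooling layers more concrete, here is a minimal sketch using TensorFlow's Keras API; the filter count, kernel size, and input dimensions are illustrative choices, not values prescribed by the article.

```python
import tensorflow as tf

# A single convolutional layer followed by max pooling, applied to a batch of
# 28x28 grayscale images (MNIST-sized inputs).
inputs = tf.random.normal([1, 28, 28, 1])          # batch, height, width, channels

conv = tf.keras.layers.Conv2D(filters=8, kernel_size=3, activation="relu")
pool = tf.keras.layers.MaxPooling2D(pool_size=2)

feature_map = conv(inputs)                         # convolved feature map
pooled_map = pool(feature_map)                     # pooled (downsampled) feature map

print(feature_map.shape)   # (1, 26, 26, 8): 8 filters, spatial size shrinks slightly
print(pooled_map.shape)    # (1, 13, 13, 8): max pooling halves height and width
```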
These steps amount to feature extraction, whereby the network builds up a picture of the image data according to its own mathematical rules. If you want to perform classification, you'll need to move into the fully connected layer. To do this, you'll need to flatten things out; remember, a fully connected layer can only process one-dimensional (flattened) data.

How to train a convolutional neural network

There are a number of ways you can train a convolutional neural network. If you're working with unlabelled data, you can use unsupervised learning methods. One popular way of doing this is using autoencoders; this allows you to squeeze data into a space with low dimensions, performing the calculations in the first part of the convolutional neural network. Once this is done, you'll then need to reconstruct with additional layers that upsample the data you have. Another option is to use generative adversarial networks, or GANs. With a GAN, you train two networks. The first gives you artificial data samples that should resemble data in the training set, while the second is a 'discriminative network' that should distinguish between the artificial and the 'true' data.

What's the difference between a convolutional neural network and a recurrent neural network?

Although there's a lot of confusion about the difference between a convolutional neural network and a recurrent neural network, it's actually simpler than many people realise. Whereas a convolutional neural network is a feedforward network that filters spatial data, a recurrent neural network, as the name implies, feeds data back into itself. From this perspective, recurrent neural networks are better suited to sequential data. Think of it like this: a convolutional network is able to perceive patterns across space, while a recurrent neural network can see them over time.

How to get started with convolutional neural networks

If you want to get started with convolutional neural networks, Python and TensorFlow are great tools to begin with. It's worth exploring the MNIST dataset too. This is a database of handwritten digits that you can use to get started with building your first convolutional neural network; a minimal end-to-end sketch follows below. To learn more about convolutional neural networks, artificial intelligence, and deep learning, visit Packt's store for eBooks and videos.
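Here is that minimal end-to-end sketch using TensorFlow's Keras API on MNIST; the architecture and hyperparameters are illustrative assumptions rather than a recommended recipe.

```python
import tensorflow as tf

# Load the MNIST handwritten digits and scale pixel values to [0, 1].
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0   # add a channel dimension: (60000, 28, 28, 1)
x_test = x_test[..., None] / 255.0

# A small CNN: convolution + pooling for feature extraction,
# then flatten + dense layers for classification.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(2),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),   # one output per digit class
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=1, batch_size=128, validation_split=0.1)
print(model.evaluate(x_test, y_test))
```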

How Facebook is advancing artificial intelligence [Video]

Richard Gall
14 Sep 2018
4 min read
Facebook is playing a huge role in artificial intelligence research. AI is not only a core part of the Facebook platform, it's central to how the organization works. The company launched its AI research lab, FAIR, back in 2013. Today, led by some of the best minds in the field, it's not only helping Facebook leverage artificial intelligence, it's also making it more accessible to researchers and engineers around the world. Let's take a look at some of the tools built by Facebook that are doing just that.

PyTorch: Facebook's leading artificial intelligence tool

PyTorch is a hugely popular deep learning framework (rivalling Google's TensorFlow) that, by combining flexibility and dynamism with stability, bridges the gap between research and production. Using a tape-based auto-differentiation system, PyTorch models can be modified and changed by engineers without losing speed. That's good news for everyone. Although PyTorch steals the headlines, there is a range of supporting tools that are making artificial intelligence and deep learning more accessible and achievable for other engineers.

Read next: Is PyTorch better than Google's TensorFlow?

Find PyTorch eBooks and videos on the Packt website.

Facebook's computer vision tools

Another field Facebook has revolutionized is computer vision and image processing. Detectron, Facebook's state-of-the-art object detection software system, has powered many research projects, including Mask R-CNN, a simple and flexible way of developing convolutional neural networks for image processing. Mask R-CNN has also helped to power DensePose, a tool that maps all human pixels of an RGB image to a 3D surface-based representation of the human body. Facebook has also contributed heavily to research in detecting and recognizing human-object interactions. Its contribution to the field of generative modeling is equally important, with tasks such as minimizing variations in the quality of images, JPEG compression, and image quantization now becoming easier and more accessible.

Facebook, language and artificial intelligence

We share updates, we send messages: language is a cornerstone of Facebook. This is why it's such an important area for Facebook's AI researchers. There is a whole host of libraries and tools built for language problems. FastText is a library for text representation and classification, while ParlAI is a platform pushing the boundaries of dialog research. The platform is focused on tackling five key AI tasks: question answering, sentence completion, goal-oriented dialog, chit-chat dialog, and visual dialog. The ultimate aim for ParlAI is to develop a general dialog AI. There are a few more language tools in Facebook's AI toolkit: Fairseq and Translate help with translation and text generation, while Wav2Letter is an automatic speech recognition system that can be used for transcription tasks.

Rational artificial intelligence for gaming and smart decision making

Although Facebook isn't known for gaming, its interest in developing artificial intelligence that can reason could have an impact on the way games are built in the future. ELF is a tool developed by Facebook that allows game developers to train and test AI algorithms in a gaming environment. ELF was used by Facebook researchers to recreate DeepMind's AlphaGo Zero, the AI bot that has defeated Go champions. Running on a single GPU, the ELF OpenGo bot defeated four professional Go players 14-0. Impressive, right?
There are other tools built by Facebook that aim to build AI into game reasoning. TorchCraft is probably the most notable example: it's a library that makes AI research on StarCraft, a strategy game, accessible to game developers and AI specialists alike.

Facebook is defining the future of artificial intelligence

As you can see, Facebook is doing a lot to push the boundaries of artificial intelligence. However, it's not just keeping these tools for itself: all of these tools are open source, which means they can be used by anyone. The small PyTorch sketch below shows how easy it is to try one of them out.
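As a closing illustration of the tape-based auto-differentiation mentioned above, here is a minimal, hedged PyTorch sketch; the tensor values are arbitrary examples, not drawn from any Facebook project.

```python
import torch

# PyTorch records operations on tensors that require gradients (the "tape"),
# then replays them backwards to compute derivatives automatically.
x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
w = torch.tensor([0.5, -1.0, 2.0], requires_grad=True)

y = (w * x).sum()     # a simple scalar function of x and w
y.backward()          # walk the tape backwards to compute gradients

print(x.grad)   # dy/dx = w, i.e. tensor([ 0.5000, -1.0000,  2.0000])
print(w.grad)   # dy/dw = x, i.e. tensor([1., 2., 3.])
```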